Create UV Templates directly from Poser



  • @anomalaus said in Create UV Templates directly from Poser:
    I dread the necessary evil of paid upgrades to Poser, as I am addicted to eating regular meals :-/

    WTF is wrong with you, man? Just render yourself a pizza, for goodness sake! If you stare at it long enough, you won't feel hungry anymore! ;)



  • @fverbaas said in Create UV Templates directly from Poser:

    This has a promise of being very useful indeed. It is a key step in developing functionality like that in Marvelous Designer. If the UVs are flat and true-size, and the seams, indeed, are known, and the 'material' making up the seams can be made to shrink to zero in simulation, then in principle the cloth room could do the refits I now do in MD, and the Poser geometry tools could be used to size the panels.

    Not sure if this: [URK, not enough privileges to upload an MP4 movie, I'll link it below]
    0_1506916358555_AndyUVTemplate.jpg

    0_1506916374479_AndyUVTemplateWrap.jpg

    UVs are flat, but what defines "true-size"? Texture vertex spacing matching object vertex spacing? There are vanishingly small numbers of figures that would ever conform to that requirement due to different level-of-detail requirements for different body parts like faces and hands, when compared to limbs and torsos. I guess clothing figures would differ. There will always be distortion when mapping curved shapes from 3D to 2D, by geometric definition. There is no such thing as a distortion-free planar mapping unless an object has only planar facets and adjacent facets are not constrained to share edges when unwrapping, though that makes for ugly and unpaintable UV maps.

    AndyUVTemplateWrap.mp4

    I haven't bothered to preview the UV templates in this script version (commented out), just saved the images to files and applied them to the UVtemplate object.


  • Poser Ambassadors

    @englishbob said in Create UV Templates directly from Poser:

    @anomalaus said in Create UV Templates directly from Poser:

    It's right at the top of my Things To Procrastinate About list

    I plan to steal that phrase and use it often. Just as soon as I get around to it. :)

    Lol thank you, made me smile!


  • Poser Ambassadors

    @anomalaus Really useful! Thank you!



  • @anomalaus
    Yes, there will always be distortion. Yet, when you wear a cotton shirt with a blue checkerboard pattern, it is built from flat panels of cotton fabric with a blue checkerboard pattern. The distortion is what the cotton fabric permits (not much) and yet the shirt fits you.
    If you take the shirt apart at the seams you can lay the panels out before you on the table. You can arrange the panels so that the blue checkerboard pattern matches. If the table is, say, 1.50 m square, then what you see would be your true-size UV map.
    If you use the panels as a template to cut new panels from red-striped fabric and sew them together, you have a red-striped shirt. The digital equivalent of this is of course to apply a tileable texture. The UV is therefore VERY workable, and provided you choose materials wisely it is as easy as changing the referenced tile.
    Clothing items are mainly made by dividing the very non-developable shape of the body into sections that are more or less developable.

    See below a bodice block I quickly traced from Genesis 8 yesterday with MD7. (True-scale UVs; the grid in the background is 10 cm in size.)
    0_1506962227001_Knipsel.JPG
    The figure is pretty busty, yet the panels fit with moderate strain (strain-map legend: green = 0% strain, yellow = 10%, red = 20%):
    0_1506962349412_Knipsel2.JPG
    with a tad too much space at the points of the breasts (we really need soft-body simulation),
    0_1506962877368_Knipsel3.JPG
    which is solved with a touch of the steam brush (but hey, I am digressing)
    0_1506963033791_Knipsel4.JPG

    What I meant to say is that the 2D → 3D (and vice versa) conversion opens up a lot more possibilities.



  • @F_Verbaas I imagine that on a per-group basis, using a metric of mean facet edge length (assuming predominantly quadrilateral facets), and applying that mean length to the texture facets of corresponding groups will produce "True Size" UV templates, perhaps? Of course this is a distinction which is only relevantly applied to clothing, since human and animal figures tend to be predominantly unique (non-tiled) in their per-group texturing, apart from external bilateral symmetry, though that is frequently broken in Poser UV mapping schemes (see V4's arms).
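    The mean-edge-length metric above can be sketched outside Poser. This is a hypothetical helper, not part of the script; `mean_edge_length`, `true_size_scale` and their arguments are names invented for illustration:

```python
import math

def mean_edge_length(verts, edges):
    """Mean length of the given edges; verts is a sequence of (x, y) or
    (x, y, z) coordinate tuples, edges a sequence of (i, j) index pairs."""
    total = 0.0
    for a, b in edges:
        total += math.dist(verts[a], verts[b])
    return total / len(edges)

def true_size_scale(obj_verts, obj_edges, uv_verts, uv_edges):
    """Factor by which a group's UV island would be scaled so its mean
    texture-edge length matches the group's mean 3D edge length (the
    'true size' heuristic suggested above)."""
    return mean_edge_length(obj_verts, obj_edges) / mean_edge_length(uv_verts, uv_edges)
```

    Applying that per group would at least give clothing panels UV islands whose relative sizes match their 3D counterparts, even if individual facets still distort.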

    Can you explain the process you imagine being possible in Poser with appropriate development of this script? I do not envisage turning this into a full UV remapper at this stage, given that the ability to define and pin seams and select unwrapping schemes is freely available in Blender on multiple platforms, but I can imagine simply being able to rescale and reposition each group's UV map individually, before creating an object which could be procedurally textured and wrap with a single morph into the original object's shape for use in the Poser Fitting Room or Cloth Room. [When I last used them, I continually wished for a tool to apply tension to cloth along selected axes and boundaries]

    I am eternally wary of the baying of the ravening, slavering pack of rabid paralegal hounds certain content vendors threaten to unleash on any means to breach their EULA. When you refer to tracing Genesis 8, I know, of course, that you're referring to the 8th chapter of the first book of the Torah ;-)



  • @anomalaus
    I mainly wanted to illustrate the 'true size' UV representation which, I understood, you questioned.
    Of course, if the garment has undistorted UVs there would be no need to remap.
    The panels in 2D representation would normally be more simplified, like tailor's blocks, and would exist in that form in the garment definition, not necessarily as the result of a UV mapping process.
    0_1507054690932_Knipsel5.JPG

    The key feature of your script is that it generates objects representing the material zones, i.e. the panels of garments mapped that way, as can be made by, sorry to mention, Marvelous Designer. Let me call them 2Din3D objects.
    In that sense it does the inverse of the 'assembly' in Marvelous Designer: it makes a '2D' representation of the 3D model.
    The script does know the seam information, that is, which edges in the different material zones connect (that is what it produces the border colouring from).
    This is a fundamentally new development and I was just pondering what else this functionality could be useful for.

    One way that comes to mind to exploit this further is that one could morph these 2Din3D objects and push the result back into the UV map, probably on the fly. These 2Din3D objects can, I assume, be deformed in Poser using, for example, magnets or the morph brush (something you would strictly avoid if the aim is to make a UV map). Let us for now assume these deformations are done in the plane of the object only. No need to go into a UV mapping program: texture mapping fix-ups by morph brush.

    A wild idea is to bring these deformations of the UV map into the geometry. For clothing this would be the equivalent of a subset of the operations the MD user does in the 2D space: change the size of the patterns to change the fit of the garment. Say we want the Genesis 8 garment to fit a figure with a more average bust size. In the fashion world this would be done by making the shape of the front-side panels less pronounced. In the Poser world this would be done using a magnet.
    0_1507052933897_InkedInkedKnipsel_LI.jpg
    This would leave the mesh topology including the seams intact.
    The UV map could be updated to the new XY positions so textures would still look OK and undistorted.
    Then, if there were some method of carrying the deformation of the 2Din3D panels into the 3D garment definition, the garment could be realistically resized. It would be great if the deformations could be introduced as (inverse) strain in the Cloth Room (a reduction of edge length in the deformed 2Din3D shape to 0.9 times the original leads to a strain of 1/0.9 in the 3D model). That needs a more extensive API for the cloth room, I know.
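    The strain arithmetic in that parenthesis can be written out as a tiny sketch, independent of any Cloth Room API; `edge_strains` is an invented name:

```python
import math

def edge_strains(rest_verts, deformed_verts, edges):
    """Per-edge strain factors between a rest (flat panel) shape and a
    deformed one. A value of 1/0.9 means the deformed edge was shrunk to
    0.9x its rest length, so the 3D garment must stretch by that factor
    to close back over the figure."""
    strains = {}
    for a, b in edges:
        rest = math.dist(rest_verts[a], rest_verts[b])
        deformed = math.dist(deformed_verts[a], deformed_verts[b])
        strains[(a, b)] = rest / deformed
    return strains
```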

    Sorry. I am just letting the dove out. See if she will return with a freshly plucked olive leaf.



  • @anomalaus I have a question for you. Are these UV maps you create in Poser actual objects?



  • @eclark1849 yes, indeed they are, though that was, as I explained to @matb , an initial instance of the script prior to my determination of the appropriate PIL (Python Imaging Library) methods to actually create an image file, which was my original goal, i.e. create UV templates with seam guides from a figure or prop within the current Poser scene, or a Wavefront OBJ file chosen by the user. One click, optional file selection, and voilà, done.

    In the image file version of the script, UV templates with seam guide image files are the primary output of the script.

    In the UV object version of the script, a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and with a 'Wrap' morph which transforms the flat object into the original object's 3D shape, is the primary output of the script. Secondary outputs are the UV template plus seam guide image files. This can be very useful by animating the wrap morph, to provide extra detail on how the UV map deforms and joins seams, in the case where the seam guides are confusing due to the number of colours. I note that manually produced seam guides have a sparse set of seam matching facets and frequently need text overlays, which would be very difficult to automatically produce and remain legible, to explain which body part the seam fuses with. The wrap morph lets you physically explore the seam closure within Poser, obviating thousands of words of explanation.
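    For the curious, the wrap morph described above reduces to simple per-vertex deltas. A minimal sketch, assuming the flat object places each texture vertex at (u, v, 0); the function names are invented for illustration, not the script's actual API:

```python
def wrap_morph_deltas(uv_verts, obj_verts):
    """Morph deltas taking the flat UV-layout object to the original 3D shape.
    uv_verts: (u, v) texture vertices, laid out flat at z = 0.
    obj_verts: the corresponding (x, y, z) positions in the original mesh."""
    return [(x - u, y - v, z)
            for (u, v), (x, y, z) in zip(uv_verts, obj_verts)]

def apply_morph(uv_verts, deltas, value):
    """Evaluate the wrap morph at a dial value between 0 (flat) and 1 (wrapped)."""
    return [(u + value * dx, v + value * dy, value * dz)
            for (u, v), (dx, dy, dz) in zip(uv_verts, deltas)]
```

    Dialling `value` from 0 to 1 is exactly the animated seam-closure exploration mentioned above.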

    In both cases, temporary obj files are created to be subsequently loaded by Poser, before deletion.



  • I'm asking because I'm trying to find some way to group different body parts together as one object. I'm thinking of the hair room and creating a hair growth group. I've tried using the Grouping Tool, but even Ambient Shade couldn't explain how the tool works to my dense skull. I was hoping there might be some way I could take advantage of the UV seam guides if I could get them to fit onto the body of whatever I'm trying to grow hair on.



  • @F_Verbaas the first thing which comes to mind, as I attempt to map my imagination to your vision, is that every time Marvelous Designer assembles the 2D tailor's blocks into the 3D garment, it is performing an implicit, non-linear mapping (a wrap with seam closure, involving material constraints on facet edge stretching and warping). The wrap morph which I generate in the script, is a strictly linear mapping between the object's texture vertices and their corresponding object vertices. Any morphs which are imposed at the 2D stage, such as scaling or reshaping, will retain that planar axial aspect as they are transformed by the linear deltas of the wrap morph into the 3D object.

    IOW, I do not have the field transformations (separate, ordered rotations) which are actually occurring to the tailor's blocks as they are positioned around the avatar prior to seam fusion. From the mathematical perspective, all I have is the final transformation matrix; I do not have the individual rotation and scaling matrices in their correct order, and, since such operations are not commutative, I cannot just apply corrective factors before or after.

    The best example I can come up with is to compare the 50% applied wrap morph with the half-way positioning and shaping of the tailor's blocks. The linear wrap morph can be inverting the sign of coordinates and turning materials inside-out on its way to the final shape. The tailor's blocks look pretty much the way they did before, just rotated through half the final angles and half-way to their final shape, due to their physical structure.
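    That 50% comparison is easy to demonstrate numerically. In this toy sketch (invented helper names), a point flipped 180° collapses through the origin at the half-way mark of a linear morph, while a rigid half-rotation leaves it at unit distance:

```python
import math

def rotate2d(p, angle):
    """Rotate a 2D point about the origin (a rigid motion)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def morph_at(p, target, value):
    """Linear morph: straight-line interpolation from p towards target."""
    return (p[0] + value * (target[0] - p[0]),
            p[1] + value * (target[1] - p[1]))

p = (1.0, 0.0)
target = rotate2d(p, math.pi)             # flipped 180 degrees: (-1, 0)
linear_half = morph_at(p, target, 0.5)    # morph dial at 50%: collapses to the origin
rigid_half = rotate2d(p, math.pi / 2)     # a physical panel half-way: (0, 1)
```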

    Now, all that said, and that being just my initial reaction, I will not immediately consign your "Dove of peace" to the pigeon and olive pie. I will need to think about whether the information required to do the types of geometry gymnastics you suggest, is available to the limited means at my disposal (I include the methane generator which passes for my intellect, here).



  • @eclark1849 I remember clutching fruitlessly at the straw of hair growth groups created by Python when I was attempting to create new groups on an object created by a Python script. No dice. The hair growth groups weren't the same kind of group. (Way too many things in Poser use that word: parameter groups, object body-part groups, Grouping Object props, hair growth groups, etc., none of them interchangeable.)

    The Python actor.CreateHairGrowthGroup('HairName') method appears to create a <hair object>, which is a new child prop with no geometry(!), parented to the currently selected actor. Usable by the hair room, no doubt, but still not useful for creating new facet groups directly in Python.

    I'm afraid I have no special insight into the hair room to offer you, having given up after proving utterly unable to restrain hair using a torus prop set to collide: even when the torus was the size of a mill stone, the hair would not suffer to be contained by any reduction in the diameter of the torus' inner hole. Abandon hope all ye who enter here (without bell, book and candle, that is). I'm demonstrably not holey enough. @redphantom appears to be having success, so I would refer my questions there first.



  • @anomalaus Honestly, at this point, I'd just settle for a Python script that would let me create presets in the Cloth Room, the Hair Room and maybe the Bullet Physics Room. I don't know why they didn't do so in the first place, and since I'm not a programmer I don't know if there was a reason, but a way to at least save the settings so they could be reused seems like a given to me.

    I look at Poser and see that they are capable of having utilities created by python, and then I look at Blender and almost everything they do is by Python. I wish I knew Python.



  • @eclark1849 There are apps you can get free for your phone that will teach python. And there are books out there. I have one collecting dust at home. One day I may actually open it.



  • @rokketman I actually have a lot of those books already. I think I need an actual real teacher. Might have to check out something like Udemy and take an online course.



  • @eclark1849 Yes, I signed up for one of their Python classes, but haven't started it as yet.



  • @anomalaus said in Create UV Templates directly from Poser:

    @eclark1849 yes, indeed they are, though that was, as I explained to @matb , an initial instance of the script prior to my determination of the appropriate PIL (Python Imaging Library) methods to actually create an image file, which was my original goal, i.e. create UV templates with seam guides from a figure or prop within the current Poser scene, or a Wavefront OBJ file chosen by the user. One click, optional file selection, and voilà, done.

    In the image file version of the script, UV templates with seam guide image files are the primary output of the script.

    In the UV object version of the script, a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and with a 'Wrap' morph which transforms the flat object into the original object's 3D shape, is the primary output of the script. Secondary outputs are the UV template plus seam guide image files. This can be very useful by animating the wrap morph, to provide extra detail on how the UV map deforms and joins seams, in the case where the seam guides are confusing due to the number of colours. I note that manually produced seam guides have a sparse set of seam matching facets and frequently need text overlays, which would be very difficult to automatically produce and remain legible, to explain which body part the seam fuses with. The wrap morph lets you physically explore the seam closure within Poser, obviating thousands of words of explanation.

    In both cases, temporary obj files are created to be subsequently loaded by Poser, before deletion.

    So basically, what you're saying is that you've written a script to create "instances" in Poser? Or am I misunderstanding?



  • @anomalaus
    I think for now the actual assembly of clothing from panels is better left to Marvelous Designer and the like. If the shrinkage function in the cloth room had worked properly, separated for the u and v directions, there would have been a possibility, but practice teaches that you also need the manual picking and pulling of the fabric, and the cloth room is far from there. VWD does support user intervention and could be a route to explore.

    In your reply to @eclark1849 you say: In the UV object version of the script, a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and with a 'Wrap' morph which transforms the flat object into the original object's 3D shape, is the primary output of the script.
    So the user can morph the clothing into the UV layout and back. I understand that each 'normal' vertex that is associated with more than one texture vertex is cloned, and each clone paired to a texture vertex, so there will be a one-to-one relation. I take it you keep a record of which vertices were cloned? Then it would be possible to weld the clones back together with the original vertex when the script closes.
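    The weld-the-clones-back idea could look something like this sketch (plain Python with invented names; it assumes the script records a clone → original index map when it splits vertices):

```python
def weld_clones(verts, faces, clone_of):
    """Weld cloned vertices back onto their originals.
    clone_of maps a clone's vertex index to the original index it was split
    from; faces are lists of vertex indices into verts."""
    remap = {i: clone_of.get(i, i) for i in range(len(verts))}
    new_faces = [[remap[i] for i in face] for face in faces]
    # Compact the vertex list, dropping the now-unreferenced clones.
    used = sorted({i for f in new_faces for i in f})
    index = {old: new for new, old in enumerate(used)}
    welded_verts = [verts[i] for i in used]
    welded_faces = [[index[i] for i in f] for f in new_faces]
    return welded_verts, welded_faces
```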

    In your script there is a morph attached to each vertex of the 2Din3D objects to bring it from the UV-layout shape into the 3D shape, and vice versa.
    If the user has deformed the 2Din3D objects (in the xy plane), the morph back into the 3D shape would become distorted (offset in the xy plane), but subtracting the deltas of the deformation of the 2Din3D shape would bring them back into place again. The 3D shape would be as before.

    If the deformation applied to the 2Din3D objects is applied to the texture vertices also (du = -dx, dv = -dy), the UV mapping of the 3D shape will be as with the deformed 2Din3D objects. In theory this would give a method to change between UV mappings: a sort of texture-transformer functionality driven by morphs in the UV space. One would need a few extra cuts in the 2Din3D objects to allow for differences in the cut of the skin; that is where facet groups would come in to define the panelling. Swapping UV maps within Poser would be attractive, but the question is: does PoserPython allow write access to the texture vertices?
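    The du = -dx, dv = -dy bookkeeping is a one-liner in a sketch like this (invented name, and it assumes PoserPython would let the result be written back to the texture vertices, which is exactly the open question):

```python
def push_deltas_into_uvs(uv_verts, deltas_2d):
    """Mirror a flat-panel deformation into the texture vertices using the
    du = -dx, dv = -dy convention described above, so the 3D garment's
    mapping matches the deformed 2Din3D panel."""
    return [(u - dx, v - dy) for (u, v), (dx, dy) in zip(uv_verts, deltas_2d)]
```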

    Then, about the changing of clothing: that would be a cloth room thing, but the cloth room (or VWD) would need the ability to read fabric strain (and therefore stress) from the edge lengths in the UV map (which give the true zero-strain distance between the vertices). Simulation would then pull the garment into shape around the figure. Applying the changes to the structural panels would make the decorations move with them.
    That would require the input of @h-elwood-gilliland and @rtorres and their teams, or VirtualWorldDynamics.

    Ok there goes the dove again. If it does not come back It may have found a place somewhere to build a nest or it may have ended up in your pie. In the latter case: bon appetit!



  • @eclark1849
    I use https://www.renderosity.com/mod/bcs/2nd-skin-2/64988/ for this purpose. You can delete unwanted faces from the OBJ that is created by 2nd Skin to make a skull cap or beard cap (in a modeling program, of course). You could grow hair on the entire body if you wanted. I have found many uses for this little app over the years. You can make a second-skin model of any body parts you choose, and with a slight offset, if you wish. The models that it makes are not distributable, however.



  • @eclark1849 in my mind, instancing is a technique whereby an application minimises the memory resources it must allocate by only loading the definitions of common objects once. I believe that there are many places in the "binary digits in mass storage" to "coloured pixels on a 2D display device" pathway, which can make effective use of instancing.

    Object mesh definitions: Imagine our scene is a forest of more or less identical trees. If the individual trees are similar enough that they can all be derived from the same mesh vertices and facets, then you only need to load that mesh into memory once.

    UV mapping: If each leaf on the tree is mapped to a common UV texture space, so texturing one leaf textures them all, this is another case of instancing, built into the very definition of the Wavefront OBJ file format. One set of UV texture vertices covers every separate leaf. Each separate leaf facet refers to its own, individual vertices and the shared texture vertices. (Vertex normals have a strictly one-to-one correspondence with positional vertices and are often derived from the final mesh shape, and thus left out of the OBJ file.)
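    That sharing is visible in the face records themselves. A small sketch of parsing an OBJ 'f' record into (vertex, texture-vertex) index pairs; `parse_face_refs` is an invented helper, not part of any Poser API:

```python
def parse_face_refs(face_line):
    """Split an OBJ 'f v/vt/vn ...' face record into (vertex, texture-vertex)
    index pairs. Indices are 1-based; several faces may point different
    position vertices at the same shared texture vertex."""
    pairs = []
    for token in face_line.split()[1:]:
        parts = token.split('/')
        v = int(parts[0])
        vt = int(parts[1]) if len(parts) > 1 and parts[1] else None
        pairs.append((v, vt))
    return pairs
```

    Two faces referring to the same vt index (as leaves sharing one texture layout would) is the OBJ-level instancing described above.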

    What Poser currently lacks is what I'd prefer to call "Cloning", where a clone is derived from its progenitor by a small subset of the definitions necessary to create the progenitor, and shares the rest. In current versions of Poser, I would think of the subset as everything which resides within a CR2 file and the shared portion as what remains in the geometry OBJ file. So, in that sense, Poser already has object instancing at the file system level. What it does with objects in memory is another story.

    The latter stages of the display pipeline are probably more important in determining whether instancing is useful. If your GPU can do instancing at the level of a defined final shape and textures, and only needs to decide where and at what orientation an instance is to be displayed, and the application is capable of making use of GPU instancing, then there are gains to be made there.

    But, if you want a room full of identifiably unique humans, even if they are "instances" or "clones" of the same figure, the moment you give them unique morph settings, they probably stop being instantiable from a GPU perspective, as their mesh and facet shapes differ.

    I don't know whether I've answered your question adequately, because it all depends on what happens between Poser and the 2D display hardware, not at the level of anything I might do in a script.