Create UV Templates directly from Poser



  • @eclark1849 yes, indeed they are, though that was, as I explained to @matb , an initial instance of the script, prior to my determination of the appropriate PIL (Python Imaging Library) methods to actually create an image file, which was my original goal: create UV templates with seam guides from a figure or prop within the current Poser scene, or from a Wavefront OBJ file chosen by the user. One click, optional file selection, and voilà, done.

    In the image file version of the script, UV templates with seam guide image files are the primary output of the script.

    In the UV object version of the script, the primary output is a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and carrying a 'Wrap' morph which transforms the flat object into the original object's 3D shape. Secondary outputs are the UV template plus seam guide image files. Animating the wrap morph can be very useful for showing how the UV map deforms and joins its seams, particularly where the seam guides are confusing because of the sheer number of colours. I note that manually produced seam guides have only a sparse set of seam-matching facets and frequently need text overlays (which would be very difficult to produce automatically and keep legible) to explain which body part a seam fuses with. The wrap morph lets you physically explore the seam closure within Poser, obviating thousands of words of explanation.

    In both cases, temporary OBJ files are created, loaded by Poser, and then deleted.
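
    For anyone curious about the PIL side of this, a minimal sketch of the template drawing (not the actual script; it assumes the UV facets have already been extracted from the mesh as lists of (u, v) tuples) might look like:

    ```python
    # Minimal sketch: draw UV facet outlines into an image file with PIL.
    # 'uv_facets' is assumed to be a list of facets, each a list of (u, v)
    # tuples in the 0..1 range, already extracted from the mesh.
    from PIL import Image, ImageDraw

    def draw_uv_template(uv_facets, size=2048, path="uv_template.png"):
        img = Image.new("RGB", (size, size), (0, 0, 0))
        draw = ImageDraw.Draw(img)
        for facet in uv_facets:
            # Flip V: UV origin is bottom-left, image origin is top-left.
            pts = [(u * (size - 1), (1.0 - v) * (size - 1)) for u, v in facet]
            draw.polygon(pts, outline=(255, 255, 255))
        img.save(path)
    ```

    Seam guides would then presumably be a matter of filling or outlining matching boundary facets in the same colour on both sides of each seam.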



  • I'm asking because I'm trying to find some way to group different body parts together as one object. I'm thinking of the Hair Room and creating a hair growth group. I've tried using the Grouping Tool, but even Ambient Shade couldn't explain to my dense skull how the tool works. I was hoping there might be some way I could take advantage of the UV seam guides if I could get them to fit onto the body of whatever I'm trying to grow hair on.



  • @F_Verbaas the first thing which comes to mind, as I attempt to map my imagination to your vision, is that every time Marvelous Designer assembles the 2D tailor's blocks into the 3D garment, it is performing an implicit, non-linear mapping (a wrap with seam closure, involving material constraints on facet edge stretching and warping). The wrap morph which I generate in the script is a strictly linear mapping between the object's texture vertices and their corresponding object vertices. Any morphs imposed at the 2D stage, such as scaling or reshaping, will retain that planar, axial aspect as they are transformed by the linear deltas of the wrap morph into the 3D object.

    IOW, I do not have the individual transformations (separate, ordered rotations) which are actually occurring to the tailor blocks as they are positioned around the avatar prior to seam fusion. From the mathematical perspective, all I have is the final transformation matrix; I do not have the individual rotation and scaling matrices in their correct order, and, since such operations do not commute, I cannot just apply corrective factors before or after (see the toy example at the end of this post).

    The best example I can come up with is to compare the 50% applied wrap morph with the half-way positioning and shaping of the tailor's blocks. The linear wrap morph can be inverting the sign of coordinates and turning materials inside-out on its way to the final shape. The tailor's blocks look pretty much the way they did before, just rotated through half the final angles and half-way to their final shape, because of their physical structure.

    Now, all that said, and that being just my initial reaction, I will not immediately consign your "Dove of peace" to the pigeon and olive pie. I will need to think about whether the information required to do the types of geometry gymnastics you suggest is available to the limited means at my disposal (I include the methane generator which passes for my intellect here).
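
    To illustrate the ordering problem with a toy example (assuming NumPy; nothing to do with the script itself): a rotation followed by a non-uniform scale is not the same transformation as the scale followed by the rotation, so a single combined matrix cannot simply be patched up with corrective factors applied before or after.

    ```python
    import numpy as np

    # 90-degree rotation about Z, and a non-uniform scale along X.
    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    S = np.diag([2.0, 1.0, 1.0])

    p = np.array([1.0, 0.0, 0.0])
    print(R @ S @ p)   # scale first, then rotate:  ~[0, 2, 0]
    print(S @ R @ p)   # rotate first, then scale:  ~[0, 1, 0]
    ```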



  • @eclark1849 I remember clutching fruitlessly at the straw of hair growth groups created by Python when I was attempting to create new groups on an object created by a Python script. No dice. The hair growth groups weren't the same kind of group. (Way too many things in Poser use that word: parameter groups, object body-part groups, Grouping Object props, hair growth groups, etc., none of them interchangeable.)

    The Python actor.CreateHairGrowthGroup('HairName') method appears to create a <hair object>: a new child prop with no geometry at all, parented to the currently selected actor. Usable by the Hair Room, no doubt, but still not useful for creating new facet groups directly in Python (a quick way to check this for yourself is sketched at the end of this post).

    I'm afraid I have no special insight into the Hair Room to offer you, having given up after proving utterly unable to restrain hair with a torus prop set to collide; even when the torus was the size of a millstone, the hair would not suffer itself to be contained by any reduction in the diameter of the torus' inner hole. Abandon hope all ye who enter here (without bell, book and candle, that is). I'm demonstrably not holey enough. @redphantom appears to be having success, so I would refer my questions there first.
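
    If anyone wants to confirm that behaviour for themselves, a quick console sketch (the method name is exactly as above, so treat the signature as an assumption, and walking the children is just one way to inspect the result) would be something like:

    ```python
    import poser

    scene = poser.Scene()
    actor = scene.CurrentActor()

    # Method name as quoted above; treat the exact signature as an assumption.
    actor.CreateHairGrowthGroup("HairName")

    # Walk the scene's actors and report any children of the current actor:
    # the new hair prop shows up, but carries no facets, which is why it is
    # no use for defining new facet groups.
    for child in scene.Actors():
        if child.Parent() and child.Parent().InternalName() == actor.InternalName():
            geom = child.Geometry()
            print(child.Name(), geom.NumPolygons() if geom else 0)
    ```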



  • @anomalaus Honestly, at this point, I'd just settle for a Python script that would let me create presets in the Cloth Room, the Hair Room and maybe the Bullet Physics Room. I don't know why, and I'm not a programmer, so I don't know if there was a reason they didn't do so in the first place, but a way to at least save the settings so they could be reused would be a given to me.

    I look at Poser and see that they are capable of having utilities created by Python, and then I look at Blender, where almost everything is done with Python. I wish I knew Python.



  • @eclark1849 There are apps you can get free for your phone that will teach Python. And there are books out there. I have one collecting dust at home. One day I may actually open it.



  • @rokketman I actually have a lot of those books already. I think I need an actual, real teacher. Might have to check out something like Udemy and take an online course.



  • @eclark1849 Yes, I signed up for one of their Python classes, but haven't started it as yet.



  • @anomalaus said in Create UV Templates directly from Poser:

    @eclark1849 yes, indeed they are, though that was, as I explained to @matb , an initial instance of the script, prior to my determination of the appropriate PIL (Python Imaging Library) methods to actually create an image file, which was my original goal: create UV templates with seam guides from a figure or prop within the current Poser scene, or from a Wavefront OBJ file chosen by the user. One click, optional file selection, and voilà, done.

    In the image file version of the script, UV templates with seam guide image files are the primary output of the script.

    In the UV object version of the script, the primary output is a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and carrying a 'Wrap' morph which transforms the flat object into the original object's 3D shape. Secondary outputs are the UV template plus seam guide image files. Animating the wrap morph can be very useful for showing how the UV map deforms and joins its seams, particularly where the seam guides are confusing because of the sheer number of colours. I note that manually produced seam guides have only a sparse set of seam-matching facets and frequently need text overlays (which would be very difficult to produce automatically and keep legible) to explain which body part a seam fuses with. The wrap morph lets you physically explore the seam closure within Poser, obviating thousands of words of explanation.

    In both cases, temporary OBJ files are created, loaded by Poser, and then deleted.

    So basically, what you're saying is that you've written a script to create "instances" in Poser? Or am I misunderstanding?



  • @anomalaus
    I think for now the actual assembly of clothing from panels is better left to Marvelous Designer and the like. If the shrinkage function in the Cloth Room had worked properly, separated for the u and v directions, there would have been a possibility, but in practice you also need the manual picking and pulling of the fabric, and the Cloth Room is far from there. VWD does support user intervention and could be a route to explore.

    In your reply to @eclark1849 you say: In the UV object version of the script, a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and with a 'Wrap' morph which transforms the flat object into the original object's 3D shape, is the primary output of the script.
    So the user can morph the clothing into the UV layout and back. I understand that each 'normal' vertex that is associated with more than one texture vertex is cloned, and the clone paired to a texture vertex, so there is a one-to-one relation. I take it you keep a record of which vertices were cloned? Then it would be possible to weld the clones back together with the original vertex when the script closes.

    In your script there is a morph attached to each vertex of the 2Din3D objects to bring it from the UV-layout shape into the 3D shape, and vice versa.
    If the user deformed the 2Din3D objects (in the xy plane), the morph back into the 3D shape would become distorted (offset in the xy plane), but subtracting the deltas of the deformation of the 2Din3D shape would bring the vertices back into place again. The 3D shape would be as before.

    If the deformation applied to the 2Din3D objects is also applied to the texture vertices (du = -dx, dv = -dy), the UV mapping of the 3D shape will be the same as that of the deformed 2Din3D objects. In theory this would provide a method to change between UV mappings: a sort of texture-transformer functionality driven by morphs in UV space. One would need a few extra cuts in the 2Din3D objects to allow for differences in the cut of the skin; that is where facet groups would come in to define the panelling. Swapping UV maps within Poser would be attractive, but the question is: does PoserPython allow write access to the texture vertices? (A sketch of what that might look like follows at the end of this post.)

    Then, about the changing of clothing: that would be a Cloth Room thing, but the Cloth Room (or VWD) would need the ability to read fabric strain (and therefore stress) from the edge lengths in the UV map (that gives the true zero-strain distance between the vertices). Simulation would then pull the garment into shape around the figure. Applying the changes to the structural panels would make the decorations move with them.
    That would require the input of @h-elwood-gilliland and @rtorres and their teams, or VirtualWorldDynamics.

    OK, there goes the dove again. If it does not come back, it may have found a place somewhere to build a nest, or it may have ended up in your pie. In the latter case: bon appétit!
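
    If PoserPython does expose the texture vertices for writing (the TexVertex SetU()/SetV() calls below are an assumption that would need confirming), copying a 2D deformation back into the UVs, using the sign convention above (du = -dx, dv = -dy), might look roughly like:

    ```python
    import poser

    def apply_uv_deltas(actor, deltas):
        """Copy in-plane panel deformations back into the UV map.

        'deltas' maps a texture-vertex index to the (dx, dy) deformation that
        was applied to the flattened 2Din3D panel (hypothetical input data).
        """
        geom = actor.Geometry()
        tex_verts = geom.TexVertices()
        for i, (dx, dy) in deltas.items():
            tv = tex_verts[i]
            tv.SetU(tv.U() - dx)   # du = -dx
            tv.SetV(tv.V() - dy)   # dv = -dy
        actor.MarkGeomChanged()    # assumption: prompt Poser to refresh the mesh

    # Example usage with a single, made-up delta on texture vertex 0:
    apply_uv_deltas(poser.Scene().CurrentActor(), {0: (0.01, 0.0)})
    ```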



  • @eclark1849
    I use https://www.renderosity.com/mod/bcs/2nd-skin-2/64988/ for this purpose. You can delete unwanted faces from the obj that is created by 2nd skin to make a skull cap or beard cap (in a modeling program, of course). You could grow hair on the entire body if you wanted. I have found many uses for this little app over the years. You can make a second-skin model of any body parts you choose, and with a slight offset, if you wish. The models that it makes are not distributable, however.



  • @eclark1849 in my mind, instancing is a technique whereby an application minimises the memory resources it must allocate by only loading the definitions of common objects once. I believe that there are many places in the "binary digits in mass storage" to "coloured pixels on a 2D display device" pathway, which can make effective use of instancing.

    Object mesh definitions: Imagine our scene is a forest of more or less identical trees. If the individual trees are similar enough that they can all be derived from the same mesh vertices and facets, then you only need to load that mesh into memory once.

    UV mapping: If each leaf on the tree is mapped to a common UV texture space, so texturing one leaf textures them all, this is another case of instancing, built into the very definition of the Wavefront OBJ file format. One set of UV texture vertices covers every separate leaf. Each separate leaf facet refers to its own individual vertices and the shared texture vertices (vertex normals have an absolutely one-to-one correspondence with positional vertices and are often derived from the final mesh shape, and are thus left out of the OBJ file).

    What Poser currently lacks is what I'd prefer to call "Cloning", where a clone is derived from its progenitor by a small subset of the definitions necessary to create the progenitor, and shares the rest. In current versions of Poser, I would think of the subset as everything which resides within a CR2 file and the shared portion as what remains in the geometry OBJ file. So, in that sense, Poser already has object instancing at the file system level. What it does with objects in memory is another story.

    The latter stages of the display pipeline are probably more important in determining whether instancing is useful. If your GPU can do instancing at the level of a defined final shape and textures, and only needs to decide where and at what orientation an instance is to be displayed, and the application is capable of making use of GPU instancing, then there are gains to be made there.

    But, if you want a room full of identifiably unique humans, even if they are "instances" or "clones" of the same figure, the moment you give them unique morph settings, they probably stop being instantiable from a GPU perspective, as their mesh and facet shapes differ.

    I don't know whether I've answered your question adequately, because it all depends on what happens between Poser and the 2D display hardware, not at the level of anything I might do in a script.



  • @F_Verbaas thanks for the Pauline cloth object. The first and biggest hurdle I see, at this stage, is that linear morph transformations cannot interpolate rotations about an axis.

    As @karina mentioned in the thread about rolling dice:

    That could be done eventually (provided you solve the problem of realigning the die properly after it's tossed and landed with a different orientation).

    But then, morphs work linearly:
    You can't simulate rotations with a morph, because the vertices will move to the new destination in a straight line. So "morphing" a 180 degree rotation would only mirror the prop relative to the rotation axis: Top planes will move to bottom, and bottom plane will move to top. Normals will thus be inverted too.
    The side planes would be turned upside down instead...

    As seen in the image below (ignoring the lack of "True-Scaling" I have yet to apply to the pre-wrapped UV template object), with the wrap morph at 50% the transition between the 2D panels and the 3D wrapped model causes the cloth to intersect itself (highlighted area). The furthest edges of the panel, which need to close a seam with the nearest edge of the sleeve, linearly traverse space through the panel itself to reach their final destination, rather than wrapping like a physical piece of cloth. So some form of simulation is required to interpolate the wrapping, for which linear morphs are demonstrably unsuitable (a toy numeric illustration follows at the end of this post).

    0_1507458041041_Screen Shot 2017-10-08 at 8.50.22 pm.png

    My gut reaction here is that the solution must involve some continuously smooth transformation of the vertex normals, with no flipping discontinuities. Given the example with the sleeve, where the cloth intersects, the forward-pointing normals have gone from a flat, nominally convex surface to a concave one... or maybe not; I have to get my head around this.

    In specific terms of clothing for a human figure, rather than just wrapping a 2D shape around an arbitrary 3D object, perhaps the practicalities of physical clothing design (with which I am not terribly familiar) suggest that one can define, on a per-panel basis, which limb or trunk segment is being wrapped, and use that avatar body part's twist axis to re-parameterise the clothing panel from UV to polar cylindrical coordinates. The wrap could then be separated into ordered stages: UV-space scaling and orientation (rotation around the W(Z) axis), positioning in XYZ space (a linear translation), and finally a cylindrical partial wrap to approximately join the seams, before a final, minimal morph to actually bring the seam vertices into coincidence. The first three steps (apart from scaling) would not change any inter-vertex distances, with the final weld morph adding in the stresses which you showed in the MD fitting example.
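
    To put a number on why the 50% morph misbehaves (a toy illustration assuming NumPy, not output from the script): linearly interpolating a vertex through a 180° wrap drags it straight through the rotation axis, whereas an actual rotation keeps it at a constant radius.

    ```python
    import numpy as np

    # A vertex on the far edge of a flat sleeve panel that must wrap 180
    # degrees about the Y axis to close the seam.
    p_start = np.array([1.0, 0.0, 0.0])
    p_end   = np.array([-1.0, 0.0, 0.0])    # position after a 180-degree wrap

    # Linear morph at 50%: the vertex collapses onto the rotation axis.
    print(0.5 * (p_start + p_end))          # [0, 0, 0]

    # Actual rotation at 50% (90 degrees about Y): the radius is preserved.
    theta = np.pi / 2
    R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                  [ 0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    print(R @ p_start)                      # ~[0, 0, -1]
    ```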



  • @anomalaus You are welcome. Good to see the seams of the block getting coloured. Thank you for your patience with my wild ideas.
    I think you are looking beyond what I mean to say. Apologies; my bad. I should not have shown the 'flattening' process. The 'suit' is for Pauline. It has no relation to Andy, unless Andy wants to wear Pauline's jumpsuit.
    The process I proposed is to deform the flattened panels in 3D space, copy the same modifications into the UV definition, and then morph back fully into the original 3D shape. I do not understand what you mean to achieve with only a 50% morph; surely that will not bring the pieces into a useful position. Morphing back 100% would restore the 3D shape, but with the UV mapping as changed in the 2D alignment. With non-true-scale UVs, as found on figures, that will also work, but it will be less easy to do because the user needs to work one deformation into another rather than from flat into something else.
    This could work provided Poser allows Python to set UV co-ordinates in some way.

    For clothing 'grading', the method would indeed need some knowledge about clothing cutting. That is hardly rocket science, though; just using common sense brings one very far, and otherwise there is still plenty of reading material.
    Also, it would need the Cloth Room to recognise the difference between the edge length in the 3D representation (which is a result of the simulation) and the edge length in the UV representation, which is immutable in the simulation and, in a geometry like the one I provided, represents the stress-free state of the material as it was on the cutter's table (a small sketch of such an edge-strain measure follows at the end of this post). The simulation process will try to minimise the energy caused by these strains and do the 'sewing' action. This is where the change in geometry will be achieved. It is essentially what you do in MD while tuning the fit, except that in MD special 'seam' facets hold the energy.
    I keep hammering on the Cloth Room reading the zero-stress edge lengths from the UV representation because, essentially, it makes the difference between new clothes and worn-out second-hand clothes that have developed a full permanent set to the body of the person who used to wear them.
    Exploring this option for clothing grading will have to wait until the Cloth Room (or VWD) has the ability to pick this up (and improve the quality of the results of dynamic clothing significantly!).
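
    As a concrete statement of what 'reading the zero-stress edge lengths from the UV representation' would mean, here is a rough sketch (assuming true-scale UVs, say 1 UV unit = 1 m, and purely illustrative helper names):

    ```python
    import math

    def edge_strain(p3d_a, p3d_b, uv_a, uv_b):
        """Strain of one cloth edge: simulated 3D length versus its
        zero-stress length as cut on the (true-scale) UV layout."""
        len_3d = math.dist(p3d_a, p3d_b)   # edge length after simulation
        len_uv = math.dist(uv_a, uv_b)     # stress-free length from the UV map
        return (len_3d - len_uv) / len_uv  # > 0 stretched, < 0 compressed

    # Example: an edge cut at 5 cm on the pattern, stretched to 5.5 cm
    # on the figure, gives a strain of about 0.1 (10%).
    print(edge_strain((0.0, 0.0, 0.0), (0.055, 0.0, 0.0),
                      (0.0, 0.0), (0.05, 0.0)))
    ```

    The simulation would then minimise the energy associated with these strains while performing the 'sewing' action.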



  • @F_Verbaas I apologise for confusing the issue by showing Pauline's clothing in a scene with an Andy figure; I was just using the default scene, into which the script creates a UV template object from a selected file, i.e. your Pauline cloth suit.

    The point I am trying to clarify is that Poser's standard means of deforming a mesh, morphTargets, cannot, by themselves, be a complete solution to fitting clothing within Poser, since they only contain linear translation vectors for each vertex. To properly assemble planar clothing panels into the correct orientations for them to be stitched around a figure, they need to be rotated. Props can be rotated, because they have rotation properties, or parameters. Vertices cannot be rotated, as they only have positional coordinates. Morph targets only modify the position of vertices and know nothing about facets or surface normals, which depend on the orientation of the underlying object or prop.

    If I cut the groups which comprise the clothing suit into separate props (like using Spawn Props in the Grouping Tool), then each of those props can be separately rotated and positioned around the figure they're intended to cover, by using the prop's translation and rotation dials. But doing so loses (unless it is internally linked somehow) the connection to the original object.

    ... [much time passes in life and subconscious contemplation] ...

    Perhaps what I'm looking for, to actually link the UV deformations (which are in the plane of the flattened cloth panels) to XYZ deformations which remain parallel to the facets, is to evaluate the average direction of the vertex normals for each panel, both before and after wrapping the figure, and use those average rotations (which should never have a magnitude greater than 180°) on all of the panel's vertices together, to orient each panel in XYZ space before applying the final wrapping (a rough sketch of this idea appears at the end of this post).

    Every time I try to extract component transformations which can be applied sequentially, I come back to the limitation that morphTargets store vertex translations, not edge rotations. If I had a deformation structure which recorded facet edge rotations instead (perhaps using quaternions or even dual quaternions, which is why I referred to simulations earlier, as I think they must do something like this), I could faithfully record the wrapping required to turn a flat panel of cloth into a sleeve, with zero strain or self-intersection until the seam welds are applied. Poser just can't do that with its existing structures yet. I can probably build them in Python, but I will have to reinvent Marvelous Designer's wheels. This will not be a quick-fix project.
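
    Here is a rough sketch of the 'average normal' idea mentioned above (assuming NumPy, and assuming the per-panel vertex positions and normals have already been gathered; this is not working script code): rigidly rotate the whole panel so its average normal matches that of its wrapped target, leaving a much smaller residual morph to close the seams.

    ```python
    import numpy as np

    def rotation_between(a, b):
        """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        v, c = np.cross(a, b), np.dot(a, b)
        if np.isclose(c, -1.0):
            raise ValueError("opposite normals: choose an explicit rotation axis")
        K = np.array([[0.0, -v[2], v[1]],
                      [v[2], 0.0, -v[0]],
                      [-v[1], v[0], 0.0]])
        return np.eye(3) + K + K @ K / (1.0 + c)

    def orient_panel(panel_verts, panel_normals, target_normals):
        """Rigidly rotate a flat panel so its average vertex normal matches
        the average normal of the same facets on the wrapped 3D object."""
        R = rotation_between(panel_normals.mean(axis=0), target_normals.mean(axis=0))
        centre = panel_verts.mean(axis=0)
        return (panel_verts - centre) @ R.T + centre   # rotate about the panel centre
    ```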



  • Here's my "light" reading matter, at the moment: Transferring Skin Weights to 3D Scanned Clothes X-/



  • @anomalaus said in Create UV Templates directly from Poser:

    In the image file version of the script, UV templates with seam guide image files are the primary output of the script.

    Sounds great. Will this work on a Mac? Right now I export an OBJ, import it into my 3D app, and get the UV map there, so your script will save a lot of time. Good luck; I'm watching the progress.



  • @anomalaus
    Thank you for your explanation.
    Sure, it would make little sense to fully reproduce MD functionality with Poser's tools. That was never my suggestion. It was about texture changes on the fly and, maybe at some time, geometry deformations effectuated in the Cloth Room.

    I do not think you need to enter into calculating deltas from the transformations. When magnets are being used, recorded transformations are senseless anyway. The information about the transformations is in the vertices.
    Transformations and deformations of the 2Din3D objects could be accommodated via a 'shadow' copy in memory, keeping the positions of all vertices as they were before the transformation was made:
    1 - Let vertex v_i be vertex no. i of the moved panel p, and let morph delta d_i be the delta that would bring v_i back to its original position on the 3D object.
    2 - Let sp be a shadow copy of p kept in background memory, and, likewise, let vertices sv_i be the shadow copies of the v_i.
    3 - Let the user move p around to a position, size and deformation that suits their purpose. What the user cannot do is change the topology of p. Let sd_i then be the difference vector sd_i = (sv_i - v_i), that is, the morph delta to move p back to the original position as remembered by sp.
    Consequently, the morph delta to move v_i back to its original place on the 3D object is sd_i + d_i, and applying to all vertices in p their respective restore deltas brings p, transformed and deformed as it was, back into its original position and shape. (A short Python transcription of these steps follows at the end of this post.)

    In the geometry I sent you, the UVs are mapped into the (0,0)-(1,1) square. I forgot to mention the 'real' size of the square, and I did not save the original. Assume one unit length in UV space is 2 m in XYZ space.
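
    A short Python transcription of steps 1-3 (toy data, just to pin down the arithmetic):

    ```python
    import numpy as np

    # Shadow copy sv: panel vertex positions recorded before the user moved it.
    sv = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
    # Original morph deltas d: bring each untouched vertex back onto the 3D object.
    d = np.array([[0.0, 0.0, 0.5]] * 3)
    # The user has since translated the panel in its plane.
    v = sv + np.array([0.2, -0.1, 0.0])

    sd = sv - v             # step 3: deltas that undo the user's move
    restore = sd + d        # total delta back onto the 3D object

    print(v + restore)      # equals sv + d: the original place on the 3D shape
    ```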



  • Hi folks,

    I wonder if I may be so bold as to ask this. The main reason I am so excited about this, at the moment, is because I have had a project on hiatus for a few months now. For those in the know, Annie did a pinup as a sassy nurse a while back. Well, I wanted to create a set of emergency-service-related pinups. I've got a test firefighter one rendering right now (been two days so far... involves atmospherics, so yeah) and I have a police one waiting to be completed, which sees Annie taking on the role of a California Highway Patrol officer, complete with sassy uniform (onto which I've put the correct patches, etc.) and a CHP Crown Vic patrol car, for which I'm using the product I've linked to.

    The problem is this: The CV-1 comes with UV templates, but I can't get them to work for me. I need to create a perfect CHP livery for the car, but I can't get the template lined up, so it's all going down the pan!

    Could someone help me with this, please? I desperately want to finish this set soon... maybe I'll have a go at doing a coastguard one, eh? Not Baywatch, Pamela Anderson's got nothing on our Annie, lol!

    I'll see if I can find the nurse pinup to post an example of. I thought I published it to my DA page, but it seems I didn't, oddly.

    Here's the product I'm using for the police one, it's a brilliant product with loads of options, I love it! My only issue is with the templates.

    https://www.renderosity.com/mod/bcs/cv-1-for-poser-and-daz/66928

    Cheers,

    Glen.



  • This is the nurse one. I need to work on some of the geometry issues with the dress and her leg etc, but this is pretty much it.

    Back on topic now, apologies for digressing a bit. 0_1507931269917_Nurse 001.png