Create UV Templates directly from Poser



  • @anomalaus
    I think for now the actual assembly of clothing from panels is better left to Marvelous Designer and the like. If the shrinkage function in the cloth room had worked properly, separately for the u and v directions, there would have been a possibility, but practice teaches that you also need manual picking and pulling of the fabric, and the cloth room is far from there. VWD does support user intervention and could be a route to explore.

    In your reply to @eclark1849 you say: In the UV object version of the script, a single, flat object, defined by the texture vertices of the original, textured with the UV template and seam guides, and with a 'Wrap' morph which transforms the flat object into the original object's 3D shape, is the primary output of the script.
    So the user can morph the clothing into the UV layout and back. I understand that each 'normal' vertex that is associated with more than one texture vertex is cloned, and each clone is paired to a texture vertex so there is a one-to-one relation. I take it you keep a record of which vertices were cloned? Then it would be possible to weld the clones back together with the original vertex when the script closes.

    In your script there is a morph attached to each vertex of the 2Din3D objects to bring it from the UV-layout shape into the 3D shape, and vice versa.
    If the user deformed the 2Din3D objects (in the xy plane), the morph back into the 3D shape would become distorted (offset in the xy plane), but subtracting the deltas of the deformation of the 2Din3D shape would bring them back into place again. The 3D shape would be as before.

    If the deformation applied to the 2Din3D objects is also applied to the texture vertices (du = -dx, dv = -dy), the UV mapping of the 3D shape will match that of the deformed 2Din3D objects. In theory this gives a method to change between UV mappings: a sort of texture-transformer functionality driven by morphs in UV space. One would need a few extra cuts in the 2Din3D objects to allow for differences in the cut of the skin; that is where facet groups would come in, to define the panelling. Swapping UV maps within Poser would be attractive, but the question is: does PoserPython allow writing the texture vertices?
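    Something like this sketch is what I have in mind, assuming PoserPython's texture vertex objects really do expose SetU/SetV (the method names are my assumption from the Poser Python docs; untested):

    ```python
    # Hedged sketch: copy a 2D panel deformation into the UV map,
    # assuming Geometry().TexVertices() returns objects with
    # U()/V()/SetU()/SetV() methods (unverified assumption).
    import poser

    scene = poser.Scene()
    geom = scene.CurrentActor().Geometry()

    tex_verts = geom.TexVertices()
    # du = -dx, dv = -dy per texture vertex would come from the user's
    # deformation of the 2Din3D panels; zeros here as a placeholder.
    deltas = [(0.0, 0.0)] * len(tex_verts)

    for tv, (du, dv) in zip(tex_verts, deltas):
        tv.SetU(tv.U() + du)
        tv.SetV(tv.V() + dv)

    scene.DrawAll()  # refresh the preview so the remapping shows
    ```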

    Then about the changing of clothing: that would be a cloth room thing, but the cloth room (or VWD) would need the ability to read fabric strain (and therefore stress) from the edge lengths in the UV map (these give the true zero-strain distances between the vertices). Simulation would then pull the garment into shape around the figure. Applying the changes to the structural panels would make the decorations move with them.
    That would require the input of @h-elwood-gilliland and @rtorres and their teams, or VirtualWorldDynamics.

    OK, there goes the dove again. If it does not come back, it may have found a place somewhere to build a nest, or it may have ended up in your pie. In the latter case: bon appétit!



  • @eclark1849
    I use https://www.renderosity.com/mod/bcs/2nd-skin-2/64988/ for this purpose. You can delete unwanted faces from the OBJ that 2nd Skin creates to make a skull cap or beard cap (in a modeling program, of course). You could grow hair on the entire body if you wanted. I have found many uses for this little app over the years. You can make a second-skin model of any body parts you choose, and with a slight offset, if you wish. The models that it makes are not distributable, however.



  • @eclark1849 in my mind, instancing is a technique whereby an application minimises the memory resources it must allocate by only loading the definitions of common objects once. I believe that there are many places in the "binary digits in mass storage" to "coloured pixels on a 2D display device" pathway, which can make effective use of instancing.

    Object mesh definitions: Imagine our scene is a forest of more or less identical trees. If the individual trees are similar enough that they can all be derived from the same mesh vertices and facets, then you only need to load that mesh into memory once.

    UV mapping: If each leaf on the tree is mapped to a common UV texture space, so texturing one leaf textures them all, this is another case of instancing, built into the very definition of the Wavefront OBJ file format. One set of UV texture vertices covers every separate leaf. Each separate leaf facet refers to its own, individual vertices and the shared texture vertices. (Vertex normals have an absolutely one-to-one correspondence with positional vertices and are often derived from the final mesh shape, and are thus left out of the OBJ file.)
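    A hypothetical OBJ fragment makes this concrete: two separate leaf quads, each with its own positional vertices, both referring to the same four shared texture vertices, so one leaf texture covers both:

    ```
    # two leaf quads (eight v entries) sharing one UV quad (four vt entries)
    v 0.0 1.0 0.0
    v 0.1 1.0 0.0
    v 0.1 1.1 0.0
    v 0.0 1.1 0.0
    v 0.5 1.3 0.2
    v 0.6 1.3 0.2
    v 0.6 1.4 0.2
    v 0.5 1.4 0.2
    vt 0.0 0.0
    vt 1.0 0.0
    vt 1.0 1.0
    vt 0.0 1.0
    f 1/1 2/2 3/3 4/4
    f 5/1 6/2 7/3 8/4
    ```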

    What Poser currently lacks is what I'd prefer to call "Cloning", where a clone is derived from its progenitor by a small subset of the definitions necessary to create the progenitor, and shares the rest. In current versions of Poser, I would think of the subset as everything which resides within a CR2 file and the shared portion as what remains in the geometry OBJ file. So, in that sense, Poser already has object instancing at the file system level. What it does with objects in memory is another story.

    The latter stages of the display pipeline are probably more important in determining whether instancing is useful. If your GPU can do instancing at the level of a defined final shape and textures, and only needs to decide where and at what orientation an instance is to be displayed, and the application is capable of making use of GPU instancing, then there are gains to be made there.

    But, if you want a room full of identifiably unique humans, even if they are "instances" or "clones" of the same figure, the moment you give them unique morph settings, they probably stop being instantiable from a GPU perspective, as their mesh and facet shapes differ.

    I don't know whether I've answered your question adequately, because it all depends on what happens between Poser and the 2D display hardware, not at the level of anything I might do in a script.



  • @F_Verbaas thanks for the Pauline cloth object. The first and most major hurdle I see, at this stage, is that linear morph transformations cannot interpolate rotations about an axis.

    As @karina mentioned in the thread about rolling dice:

    That could be done eventually (provided you solve the problem of realigning the die properly after it's tossed and landed with a different orientation).

    But then, morphs work linearly: you can't simulate rotations with a morph, because the vertices will move to the new destination in a straight line. So "morphing" a 180 degree rotation would only mirror the prop relative to the rotation axis: top planes will move to the bottom, and bottom planes will move to the top, so normals will be inverted too. The side planes would be turned upside down instead...

    As seen in the image below (ignoring the lack of "True-Scaling" I have yet to apply to the pre-wrapped UV template object), with the wrap morph at 50%, the transition between 2D panels and the 3D wrapped model causes the cloth to intersect itself (highlighted area). The furthest edges of the panel, which need to close a seam with the nearest edge of the sleeve, traverse space linearly through the panel itself to reach their final destination, rather than wrapping like a physical piece of cloth. Some form of simulation is therefore required to interpolate the wrapping, which linear morphs are demonstrably unsuitable for.

    [Image: Screen Shot 2017-10-08 at 8.50.22 pm.png]

    My gut reaction here is that the solution must involve some continuously smooth transformation of the vertex normals, with no flipping discontinuities. Given the example with the sleeve, where the cloth intersects, the forward-pointing normals have gone from a flat, nominally convex surface to a concave one... or maybe not; I have to get my head around this.

    In specific terms of clothing for a human figure, rather than just wrapping a 2D shape around an arbitrary 3D object, perhaps the practicalities of physical clothing design (with which I am not terribly familiar) suggest the following: define, on a per-panel basis, which limb or trunk segment is being wrapped, and use that avatar body part's twist axis to re-parameterise the clothing panel from UV to polar cylindrical coordinates. Then separate the wrap into ordered stages: UV-space scaling and orientation (rotation around the W(Z) axis), positioning in XYZ space (a linear translation), and finally a cylindrical partial wrap to approximately join seams, before a final, minimal morph to actually bring the seam vertices into coincidence. The first three steps (apart from scaling) would not change any inter-vertex distances, with the final weld morph adding in the stresses which you showed from the MD fitting example.
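    As a rough sketch of what I mean by the cylindrical partial wrap (plain Python, hypothetical helper, not yet part of the script): arc length along the cylinder always equals the original u distance, so inter-vertex distances along the wrap direction are preserved at every stage of the blend.

    ```python
    import math

    def cylinder_wrap(panel_uv, radius, wrap=1.0):
        """Roll flat (u, v) panel points onto a partial cylinder about Y.

        wrap blends from flat (0.0) to fully wrapped (1.0) by curling the
        panel around a progressively tighter cylinder; arc length around
        the cylinder always equals the original u, so the cloth bends
        without stretching or passing through itself.
        """
        out = []
        for u, v in panel_uv:
            if wrap <= 0.0:
                out.append((u, v, 0.0))   # flat limit
            else:
                R = radius / wrap         # gentler curl at low wrap values
                theta = u / R             # arc length u -> angle
                out.append((R * math.sin(theta),
                            v,
                            R * (1.0 - math.cos(theta))))
        return out
    ```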



  • @anomalaus You are welcome. Good to see the seams of the block becoming coloured. Thank you for your patience with my wild ideas.
    I think you are looking beyond what I meant to say. Apologies; my bad. I should not have shown the 'flattening' process. The 'suit' is for Pauline. It has no relation to Andy, unless Andy wants to wear Pauline's jumpsuit.
    The process I proposed is to deform the flattened panels in 3D space, copy the same modifications into the UV definition, and then morph back fully into the original 3D shape. I do not understand what you mean to achieve with using only a 50% morph; sure, that will not bring the pieces into a useful position. Morphing back 100% would restore the 3D shape, but with the UV mapping as changed in the 2D alignment. With non-true-scale UVs as found on figures that will work also, but it will be less easy to do, because the user needs to work one deformation into another rather than from flat into something else.
    This could work provided Poser allows Python to set UV co-ordinates in some way.

    For clothing 'grading', indeed the method would need some knowledge of clothing cutting. That is hardly rocket science, though; just using common sense gets one very far, and otherwise there is plenty of reading material.
    Also, it would need the cloth room to recognise the difference between the edge lengths in the 3D representation (which are a result of the simulation) and the edge lengths in the UV representation, which are immutable in the simulation and, in a geometry like the one I provided, represent the stress-free state of the material as it was on the cutter's table. The simulation process will try to minimise the energy caused by these strains and perform the 'sewing' action. This is where the change in geometry will be achieved. It is essentially what you do in MD while tuning the fit, except that in MD special 'seam' facets hold the energy.
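    In sketch form (plain Python, names hypothetical; neither the cloth room nor VWD exposes such a hook today):

    ```python
    import math

    UV_TO_METRES = 2.0   # assumed scale: one UV unit = 2 m

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def edge_strain(p3d_a, p3d_b, uv_a, uv_b):
        """Engineering strain of one cloth edge: (L - L0) / L0.

        L comes from the simulated 3D mesh; L0 is the zero-stress rest
        length read from the UV map, as cut on the cutter's table.
        """
        L = dist(p3d_a, p3d_b)
        L0 = dist(uv_a, uv_b) * UV_TO_METRES
        return (L - L0) / L0

    # positive -> edge in tension, negative -> compressed (buckling)
    print(edge_strain((0, 0, 0), (0.11, 0, 0), (0.0, 0.0), (0.05, 0.0)))  # ~0.1
    ```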
    I hammer on the cloth room reading the zero-stress edge lengths from the UV representation because it essentially makes the difference between wearing new clothes and worn-out second-hand clothes that have taken a full permanent set to the body of the person who used to wear them.
    Exploring this option for clothing grading will have to wait until the cloth room (or VWD) can pick this up (and improve the quality of dynamic clothing results significantly!).



  • @F_Verbaas I apologise for confusing the issue with showing Pauline's clothing in the scene with an Andy figure, I was just using the default scene, into which the script creates a UV template object from a selected file, i.e. your Pauline cloth suit.

    The point I am trying to clarify is that Poser's standard means of deforming a mesh, morphTargets, cannot, by themselves, be a complete solution to fitting clothing within Poser, since they only contain linear translation vectors for each vertex. To properly assemble planar clothing panels into the correct orientations for them to be stitched around a figure, they need to be rotated. Props can be rotated, because they have rotation properties, or parameters. Vertices cannot be rotated, as they only have positional coordinates. Morph targets only modify the position of vertices and know nothing about facets or surface normals, which depend on the orientation of the underlying object or prop.
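    To make that linearity concrete, here is a tiny plain-Python sketch (hypothetical vertex data) of a morph target trying to encode a 180 degree rotation about the Y axis; at 50% the vertex has collapsed onto the axis instead of sweeping around it:

    ```python
    def apply_morph(base, delta, value):
        """A linear morph: morphed = base + value * delta."""
        return tuple(b + value * d for b, d in zip(base, delta))

    v0 = (1.0, 2.0, 0.0)    # a vertex one unit off the Y axis
    v1 = (-1.0, 2.0, 0.0)   # the same vertex rotated 180 degrees about Y
    delta = tuple(a - b for a, b in zip(v1, v0))   # all a morph target stores

    for value in (0.0, 0.5, 1.0):
        print(value, apply_morph(v0, delta, value))
    # 0.0 (1.0, 2.0, 0.0)
    # 0.5 (0.0, 2.0, 0.0)   <- on the axis: the mesh passes through itself
    # 1.0 (-1.0, 2.0, 0.0)
    ```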

    If I cut the groups which comprise the clothing suit into separate props (like using Spawn Props in the Grouping Tool), then each of those props can be separately rotated and positioned around the figure they're intended to cover, by using the prop's translation and rotation dials. But doing so loses (unless it is internally linked somehow) the connection to the original object.

    ... [much time passes in life and subconscious contemplation] ...

    Perhaps what I'm looking for, to actually link the UV deformations (which are in the plane of the flattened cloth panels) to XYZ deformations which remain parallel to the facets, is to evaluate the average direction of the vertex normals for each panel, both before and after wrapping the figure, and use such average rotations (which should never have a magnitude greater than 180°) on all of the panel's vertices together, to orient it in XYZ space, before applying the final wrapping.

    Every time I try to extract component transformations which can be sequentially applied, I come back to the limitation that morphTargets store vertex translations, not edge rotations. If I had a deformation structure which recorded facet edge rotations instead (perhaps using quaternions or even dual quaternions, which is why I referred to simulations earlier, as I think they must do something like this), I could faithfully record the wrapping required to turn a flat panel of cloth into a sleeve (with zero strain or self intersection, until the seam welds are applied). Poser just can't do that with its existing structures yet. I can probably build them in Python, but I will have to reinvent Marvelous Designer's wheels. This will not be a quick fix project.
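    For illustration only (plain Python, not anything Poser offers), a quaternion slerp keeps an interpolated vertex on the rotation arc, where the linear morph above collapses it onto the axis:

    ```python
    import math

    def slerp(q0, q1, t):
        """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
        dot = sum(a * b for a, b in zip(q0, q1))
        if dot < 0.0:                      # take the shorter arc
            q1, dot = tuple(-c for c in q1), -dot
        if dot > 0.9995:                   # nearly parallel: lerp, renormalise
            q = tuple(a + t * (b - a) for a, b in zip(q0, q1))
            n = math.sqrt(sum(c * c for c in q))
            return tuple(c / n for c in q)
        theta = math.acos(dot)
        s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
        s1 = math.sin(t * theta) / math.sin(theta)
        return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

    def rotate(q, v):
        """Rotate v by unit quaternion q: v' = v + w*t + qv x t, t = 2(qv x v)."""
        w, x, y, z = q
        vx, vy, vz = v
        tx = 2.0 * (y * vz - z * vy)
        ty = 2.0 * (z * vx - x * vz)
        tz = 2.0 * (x * vy - y * vx)
        return (vx + w * tx + y * tz - z * ty,
                vy + w * ty + z * tx - x * tz,
                vz + w * tz + x * ty - y * tx)

    q_flat = (1.0, 0.0, 0.0, 0.0)          # identity
    q_wrapped = (0.0, 0.0, 1.0, 0.0)       # 180 degrees about Y
    half = slerp(q_flat, q_wrapped, 0.5)   # 90 degrees about Y
    print(rotate(half, (1.0, 2.0, 0.0)))   # (0.0, 2.0, -1.0): radius preserved
    ```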



  • Here's my "light" reading matter, at the moment: Transferring Skin Weights to 3D Scanned Clothes X-/



  • @anomalaus said in Create UV Templates directly from Poser:

    In the image file version of the script, UV templates with seam guide image files are the primary output of the script.

    Sounds great! Will this work on a Mac? Right now I export an OBJ and import it into my 3D app, and there I get the UV map, so your script will save a lot of time. Good luck; I'll be watching the progress.



  • @anomalaus
    Thank you for your explanation.
    Sure it would make little sense to fully reproduce MD functionality with Poser's tools. That was never my suggestion. It was about texture changes on the fly and, maybe at some time, geometry deformations effectuated in the cloth room.

    I do not think you need to enter into calculating deltas from the transformations. When magnets are being used, recorded transformations are senseless anyway. The information about the transformations is in the vertices.
    Transformations and deformations of the 2Din3D objects could be accommodated via a 'shadow' copy in memory, keeping the positions of all vertices as they were before the transformation was made:
    1 - Let vertex v_i be vertex no. i of the moved panel p, and let morph delta d_i be the delta that would bring v_i back to its original position on the 3D object.
    2 - Let sp be a shadow copy of p kept in background memory and, likewise, let vertices sv_i be the shadow copies of v_i.
    3 - Let the user move p around to a position, size and deformation that suits their purpose. What the user cannot do is change the topology of p. Then let sd_i be the difference vector sd_i = (sv_i - v_i), that is, the morph delta to move p back to the original position as remembered by sp.
    Consequently, the morph delta to move v_i back to its original place on the 3D object is sd_i + d_i, and applying to all vertices in p their respective restore deltas brings p, transformed and deformed as it was, back into its original position and shape.
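    In sketch form (plain Python tuples standing in for Poser vertices, names hypothetical):

    ```python
    def restore_deltas(shadow, current, original_deltas):
        """Per-vertex morph deltas returning a moved panel to the 3D shape.

        shadow          sv_i: positions before the user's transform
        current         v_i : positions after the user's transform
        original_deltas d_i : deltas from the shadow position back onto
                              the original 3D object
        returns         sd_i + d_i, with sd_i = sv_i - v_i
        """
        out = []
        for sv, v, d in zip(shadow, current, original_deltas):
            sd = tuple(s - c for s, c in zip(sv, v))
            out.append(tuple(a + b for a, b in zip(sd, d)))
        return out

    # one vertex the user moved +0.2 in x; the combined delta undoes the
    # move and then applies the original restore delta d_i
    print(restore_deltas([(0.0, 1.0, 0.0)],
                         [(0.2, 1.0, 0.0)],
                         [(0.0, 0.5, 0.0)]))   # [(-0.2, 0.5, 0.0)]
    ```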

    In the geometry I sent you, the UVs are mapped into the (0,0)-(1,1) square. I forgot to mention the 'real' size of the square, and I did not save the original. Assume one unit length in UV space is 2 m in XYZ space.



  • Hi folks,

    I wonder if I may be so bold as to ask this. The main reason I am so excited about this, at the moment, is because I have had a project on hiatus for a few months now. For those in the know, Annie did a pinup as a sassy nurse a while back. Well, I wanted to create a set of emergency service-related pinups. I've got a test firefighter one rendering right now (been two days so far... involves atmospherics, so yeah) and I have a police one waiting to be completed, which sees Annie taking on the role of a California Highway Patrol officer, complete with sassy uniform (which I've put the correct patches etc on) and CHP Crown Vic patrol car, for which I'm using the product I've linked to.

    The problem is this: The CV-1 comes with UV templates, but I can't get them to work for me. I need to create a perfect CHP livery for the car, but I can't get the template lined up, so it's all going down the pan!

    Could someone help me with this, please? I desperately want to finish this set soon... maybe I'll have a go at doing a coastguard one, eh? Not Baywatch, Pamela Anderson's got nothing on our Annie, lol!

    I'll see if I can find the nurse pinup to post an example of. I thought I published it to my DA page, but it seems I didn't, oddly.

    Here's the product I'm using for the police one, it's a brilliant product with loads of options, I love it! My only issue is with the templates.

    https://www.renderosity.com/mod/bcs/cv-1-for-poser-and-daz/66928

    Cheers,

    Glen.



  • This is the nurse one. I need to work on some of the geometry issues with the dress and her leg etc, but this is pretty much it.

    Back on topic now, apologies for digressing a bit. [Image: Nurse 001.png]



  • @Glen85 I don't have this product, so I can't just create UV templates for it to check. Can you post a low resolution screen capture demonstrating the kind of problem you're having? Is it misalignment of seams or overlapping UV templates? How do the police livery textures compare to the UV templates? Can you overlay them in an image editor with partially transparent layers?



  • It just seems odd because there are overlapping areas and other areas which are cut off short. I've tried to work with one of the original maps which came with it, as it's reasonably close to the CHP livery, being black and white, but quickly ran into problems.

    [Images: picture052.jpg, picture050.jpg, picture046.jpg]

    There are lots of examples of the livery here:

    https://www.google.co.uk/search?q=california+highway+patrol+crown+victoria&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwizj6immfDWAhWCAxoKHSLPAgUQsAQIJQ&biw=1536&bih=735#imgrc=_



  • Having said all of this, I've never made anything with a template before, so maybe it's just me not understanding how to do it.



  • This better illustrates one of the problem areas. The black parts are actually clear of all the doors, but the template puts these areas partially over the doors.
    [Image: picture046.jpg]



  • @Glen85 OK. Assuming the templates are correct, the livery textures must have exactly the same aspect ratio as the template, which looks square. Poser will take your texture map, which looks taller than it is wide (from the second image) and scale it (increase its width) until it fills the UV map space, which is square, in this case.

    Your textures must include pixels (even if they are transparent) for the whole UV template. Even though things line up when you overlay them, because the texture map is narrower than the template, Poser will stretch everything, assuming that it has to cover the whole UV space.

    There are ways to get around this, by setting scales other than 1 in the ImageMap node in Poser, and using U and V offsets to line things up, but the simplest way is just to make sure that textures have the same proportions as the templates.
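    For the record, that workaround might be scripted roughly like this; the internal input names ("U_Scale" and friends) and the node type constant are my assumptions about the ImageMap node, so verify them against your Poser version:

    ```python
    # Hedged sketch: nudge an ImageMap node's tiling to line a narrow
    # texture up with a square UV template. Input names and the type
    # constant are assumptions, not verified API.
    import poser

    scene = poser.Scene()
    mat = scene.CurrentActor().Materials()[0]   # pick the relevant material

    for node in mat.ShaderTree().Nodes():
        if node.Type() == poser.kNodeTypeCodeIMAGEMAP:   # assumed constant
            for name, value in (("U_Scale", 0.75),       # squeeze width back
                                ("V_Scale", 1.0),
                                ("U_Offset", 0.125),     # recentre the map
                                ("V_Offset", 0.0)):
                inp = node.InputByInternalName(name)
                if inp is not None:
                    inp.SetFloat(value)

    scene.DrawAll()   # refresh the preview
    ```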



    But that texture fits the car perfectly; it's the original one. The template seems to be wrong; that's the problem, and there are no guides on the original textures to help with creating new ones. That's why I'm struggling: it seems like the textures and templates don't match up.



  • @Glen85

    If you go to http://uvmapper.com/downloads.html, you can get UV Mapper Classic for either Windows or Mac. This is the FREE version. This program will allow you to create your own templates.

    1. Choose File > Load Model, and select your OBJ file.
    2. After the file loads, choose File > Save Texture Map.
    3. In the BMP Export Options, enter the size of the template you want to create. These are usually square, and the rule of thumb is to make them 1024x1024, 2048x2048, or 4096x4096.
    4. Choose OK.

    The template can only be saved in BMP format, but you can easily convert to any other format with Photoshop.

    After you create the template, it would be interesting to see if it is any different than the one you show above.
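    For the scripting-inclined, roughly the same template can be produced without UV Mapper; here is a sketch using Pillow (file names hypothetical, minimal OBJ parsing, no negative indices handled):

    ```python
    from PIL import Image, ImageDraw

    def uv_template(obj_path, png_path, size=2048):
        """Rasterise an OBJ's UV wireframe onto a square white image."""
        vts, faces = [], []
        with open(obj_path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "vt":
                    vts.append((float(parts[1]), float(parts[2])))
                elif parts[0] == "f":
                    # face entries look like v, v/vt, v/vt/vn or v//vn
                    idx = [int(p.split("/")[1]) - 1 for p in parts[1:]
                           if "/" in p and p.split("/")[1]]
                    if len(idx) >= 3:
                        faces.append(idx)
        img = Image.new("RGB", (size, size), "white")
        draw = ImageDraw.Draw(img)
        for face in faces:
            pts = [(u * (size - 1), (1.0 - v) * (size - 1))   # flip V for image Y
                   for u, v in (vts[i] for i in face)]
            draw.polygon(pts, outline="black")
        img.save(png_path)

    uv_template("cv1.obj", "cv1_template.png")   # hypothetical file names
    ```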



  • Errmmm... >.>

    Methinks it went wrong.

    [Image: cv1b.png]



  • @Glen85
    Nope, nothing wrong. What that tells me is that the car you are trying to texture does not have the same UV map as your template above, and also that it uses more than one texture, which is why they are overlapping. Where did you get the car?

    Wait a sec... create a copy of the original template, then use Photoshop to resize the copy to the same size as the texture map that works. What result do you get when you layer them in Photoshop? Do they match up?