Some ideas for Poser GUI

  • Poser Ambassadors

    (a) In Android, click on a button, snap a picture and load it automatically into a texture in Poser.

    This texture will look just horrible on your OBJ.

    The rest sounds like a Google Play app for when you are bored, not like a serious piece of 3D software.

  • Poser Ambassadors

    Get Google involved and they'll collect each and every vertex on your PCs.

  • @vilters Mwahahaa! They'll never get my cryptovertices!!!

  • @ladonna said in Some ideas for Poser GUI:

    The rest sounds like a Google Play app for when you are bored, not like a serious piece of 3D software.

    The point is to give the smartphone the functions of a 3D mouse.

    Isn't there some use a 3D application could make of a device that can measure, in real time, three angles, three positions, three speeds, and three accelerations, and that has a built-in camera and video camera? Wouldn't such a device have more utility in controlling a 3D application than a 2D mouse does?

  • I let my imagination go to work, and I think... track the phone's position and angle... then reflect that position and angle in real time on the hip of a toon... IK the feet... play some music... voila, I can make the toon dance by waving my smartphone to the music. For this alone there should be a lot of customers.

    I see a ton of fun uses like that -- not even getting into what long and difficult programming could create. Say, put my phone in selfie mode, orbit it around my head, generate the textures, and apply them to the head of, say, a gorilla... voila, I've got my look-alike head on a 3D gorilla. Very funny.
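
    The hip-waving step above can be sketched. This is only a minimal sketch, assuming orientation samples (pitch/roll/yaw, in degrees) already arrive from the phone; the phone-axis-to-hip-axis mapping and the smoothing factor are assumptions, and the returned tuple would still have to be written to the hip actor's rotation channels through Poser's Python scripting.

```python
def phone_to_hip_rotation(pitch_deg, roll_deg, yaw_deg,
                          prev=None, smoothing=0.8):
    """Map phone orientation angles (degrees) onto hip rotation values,
    with exponential smoothing to damp sensor jitter.

    The (pitch, yaw, roll) -> (xRot, yRot, zRot) axis mapping below is
    an assumption; a real rig would need to calibrate it.
    """
    target = (pitch_deg, yaw_deg, roll_deg)
    if prev is None:
        return target
    # Blend the previous value toward the new reading:
    # smoothing=0 follows the raw sensor, smoothing=1 freezes in place.
    return tuple(smoothing * p + (1.0 - smoothing) * t
                 for p, t in zip(prev, target))
```

    Feeding each smoothed tuple into a keyframe per animation frame, with IK locked on the feet, would give the waving-the-phone dance described above.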

  • Poser Ambassadors

    @fbs7 I don't even know what to answer here.

  • @fbs7 Some of the functions you cite would require a lot of computing power. Perhaps too much for a simple smartphone battery.

  • I ran across a video that all of you should see.

    This idea has been played with by a few different people, and it looks promising.

    It isn't that far-fetched an idea to me.

    The coding for it can be found here.

  • @fbs7 I tested/tried some of the things you mentioned. In reality it's less useful than you may think.

  • @eclark1849 Android doesn't have much to do. Just report some sensor data, clicks, and mouse positions, grab videos/photos, and send all of this to the desktop.
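
    A minimal sketch of the desktop side of that reporting, assuming the phone sends newline-delimited JSON over TCP; the field names (pitch, roll, yaw, t) and the port are illustrative, not an established protocol.

```python
import json
import socket

def serve_sensor_stream(host="127.0.0.1", port=5005, on_sample=print):
    """Accept one phone connection and hand each newline-delimited JSON
    sensor sample, e.g. {"pitch": 1.2, "roll": 0.3, "yaw": 90.0, "t": 4.5},
    to the on_sample callback."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                if line.strip():
                    on_sample(json.loads(line))
```

    The phone side would only have to serialize its sensor readings to the same line format and write them to the socket.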

  • @adp said in Some ideas for Poser GUI:

    @fbs7 I tested/tried some of the things you mentioned. In reality it's less useful than you may think.

    Oh, I see. That's a pity. How about, back in the 2D realm: tap into the smartphone camera, face it towards a wall... select a color... now take an object in your hand with a color similar to that one... the code on Android ignores all other colors and tracks only the object in your hand, following its center in 2D coordinates.

    Play a song, wave the object around with your hand; that motion is tracked and drives either the XY or the XZ position of a body part (say the hip or one hand) of the toon in Poser on the PC (or Mac).

    Now, as this is in 2D, making a real dancing animation would take two passes (one for XY, another for XZ), but because the song plays in real time, one can try several passes and quickly see how the actual animation looks. It's really just puppeteering with the Android device.

    The math shouldn't be too out of this world - just filter a color, calculate the centroid in real time, and send the XY coordinates to Poser in real time (through, say, TCP/IP or whatever).
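
    A sketch of that filter-and-centroid step in NumPy; the per-channel tolerance threshold is an assumption (a real tracker would more likely match in HSV space to be robust against lighting changes).

```python
import numpy as np

def track_colored_object(frame, target_rgb, tolerance=40):
    """Return the (x, y) centroid of the pixels matching target_rgb
    within a per-channel tolerance, or None if nothing matches.

    frame: HxWx3 uint8 RGB image; target_rgb: the color the user picked.
    """
    # Signed difference per channel, so dark and bright misses both count.
    diff = frame.astype(np.int16) - np.asarray(target_rgb, dtype=np.int16)
    mask = (np.abs(diff) <= tolerance).all(axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # object not in view this frame
    return float(xs.mean()), float(ys.mean())
```

    Run once per camera frame, the resulting (x, y) pair is what would be sent to Poser over TCP/IP and mapped to the chosen body part.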

    Then, what do you think of this: if one were willing to go after serious math, one could also do the following: get seven small pieces of tape; put one on each corner of the mouth, one above the upper lip, one below the lower lip, two above the inner eyebrows, and one on the tip of the nose as a reference (to make it easier, use tape of a different color for the nose, say red). With Android in selfie mode, it filters out all other colors, tracks the centroid of the nose marker as a reference, then calculates the positions of the centroids of the other tape marks relative to it. Then discard the nose marker.

    Now you set some morphs on your preferred toon, make a face at the Android device, and click on "Learn". The code associates the morph settings with the XY positions of the centroids of the six white markers. Learn several morph settings that way, each associated with a face (say, a face with the mouth open associated with the MouthOpen morph).

    Now the serious math part: run the animation, track the colors on the face, calculate the centroids, then solve for a combination of morph weights that, added together, reproduce the positions of the markers on the screen. This is a minimization problem (minimize the error). Set the morphs in real time.

    Something like this: morphs M1, M2, M3, ... correspond to marker positions X1a, Y1a, X1b, Y1b, ... X1f, Y1f (for morph 1, markers "a".."f"), X2a, Y2a, ... (for morph 2), etc. Then, for a generic observed position Xa, Ya, Xb, Yb, ..., calculate morph values m1, m2, m3, ... that minimize the error Err; this could be solved by multivariate Newton-Raphson, I think:

    Xcalc_a = m1 * X1a + m2 * X2a + ...
    Xcalc_f = m1 * X1f + m2 * X2f + ...
    Ycalc_a = m1 * Y1a + m2 * Y2a + ...
    Err = sum( (Xcalc_a - Xa)^2 + ... + (Xcalc_f - Xf)^2 + (Ycalc_a - Ya)^2 + ... + (Ycalc_f - Yf)^2 )

    This way one could puppeteer several morphs on a toon by making faces in real time through the animation. For example, one might be able to track the vowels in speech, or basic expressions. Unfortunately, the eyes cannot be tracked this way.
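
    A sketch of that solve. One observation: since Xcalc and Ycalc as written are linear in the morph weights, minimizing Err is an ordinary linear least-squares problem and can be solved directly; multivariate Newton-Raphson would also converge, but isn't needed. The names below are illustrative.

```python
import numpy as np

def solve_morph_weights(basis, observed):
    """basis: one row per morph, each row the flattened marker positions
    (X?a..X?f, Y?a..Y?f) that the morph produces at full strength.
    observed: the same flattened layout for the markers tracked in the
    current camera frame.  Returns the weights m1..mn minimizing Err."""
    A = np.asarray(basis, dtype=float).T   # shape: (2 * markers) x (morphs)
    b = np.asarray(observed, dtype=float)
    # Ordinary least squares: minimize ||A @ weights - b||^2, i.e. Err.
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights
```

    Setting the returned weights on the corresponding morph dials each frame would puppeteer the face in real time.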