Some ideas for Poser GUI

  • Nowadays I'm very much discouraged from posting any suggestions to SM, but my mind can't really stop, so here are a couple of new ideas, thanks to my 75-year-old mom, who insisted that I get out of the Dark Ages and finally buy myself a smartphone to talk to her via WhatsApp. I found it amazing how WhatsApp integrates functions on the Windows desktop with functions on the smartphone. So, the idea is to have an Android helper app take advantage of things that the smartphone can do and the desktop can't.

    (a) In Android, click on a button, snap a picture and load it automatically into a texture in Poser.

    (b) Get a completely zero-posed character, set some morphs or angles, then point the Android camera at something (say a word or a picture or an object) and click one button; Poser will then remember that this set of morphs/angles is associated with that picture. Repeat that for several poses/morphs and objects. Now start the animation; point the Android at the object associated with the pose, then at the right moment in the animation, click another button in Android; it will see the object, find it's associated with such-and-such morphs and angles, and set Poser to those morphs/angles. That way you can puppeteer your toon by pointing at some objects and taking pictures.

    (c) Click on a body part in Poser, hold the smartphone in your hand, start the animation, click a button. The Android client will then read the gyroscope in the smartphone and change the x/y/z angles of the object in the animation to match the x/y/z angles of the smartphone. Similarly, it will read the accelerometer and change the x/y/z position in the animation to match the calculated x/y/z position of the smartphone. Of course this will only give a rough first version of those curves; it will require manual adjustment and cannot be retouched with the phone (only in Poser), but I do suspect that for things like hands and feet it may be a time saver for long animations.

    (d) Similar idea: give the smartphone client two buttons (left foot, right foot), start the animation, place the smartphone on your desk, wait for the proper moment in the animation, then click the button for the foot; that will move the foot to the same relative position in the animation as the phone is on the table (relative to the initial point, that is). Then let the animation run, move the phone to another spot on your desk, wait for the moment, click the button for the other foot. Voila, now you can make the toon walk by tracing an imaginary path with the smartphone to set the position of the feet.

    (e) Similar idea: click start animation, move your smartphone over your desk along a path; Poser tracks that and automatically generates the left-foot/right-foot steps that follow that path.

    (f) Similar idea: click start animation, move your smartphone around; Poser will track the position of the phone and set the camera to the same position. Now you can animate the camera to mimic what your smartphone is doing.

    (g) Choose a color for the green screen in Poser (for example... err... green), then click a button on the smartphone and it will merge the video captured on the smartphone into the green-screen area of the movie in Poser. I suppose this could be extended to rotoscoping too. Now people can finally have themselves and their preferred Poser starlet in the same realtime movie.

    (h) Select a material node in Poser, click a button on the smartphone; the phone now shows a camera view with an 'X' in the middle of it. Whatever color shows in the camera under that X will be set as the color of the material in Poser.

    (i) Click a button on the smartphone and the Android will show your Poser animation in 3D via its holographic projector, with a scantily-clad Terai Yuki saying "Help me, Obi-Wan Kenobi"... hey, wait, we'll have to wait some 15 more years for this one. Never mind this.
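    Idea (c) is basically dead reckoning from the phone's sensors: integrate the gyro rates once to get angles, and the accelerometer twice to get positions, then write those out as keyframes. A minimal Python sketch of that integration step (the function name and the plain-list data format are made up here for illustration; real sensor data drifts badly, which is why it would only give the rough first draft of the curves mentioned above):

```python
def sensors_to_keyframes(gyro, accel, dt):
    """gyro: list of (wx, wy, wz) angular rates in deg/s;
    accel: list of (ax, ay, az) accelerations in m/s^2;
    dt: sampling interval in seconds.
    Returns per-sample (x, y, z) angles and positions."""
    angles = [(0.0, 0.0, 0.0)]   # gyro rates integrated once -> angles
    pos = [(0.0, 0.0, 0.0)]      # acceleration integrated twice -> positions
    vel = (0.0, 0.0, 0.0)
    for w, a in zip(gyro, accel):
        angles.append(tuple(th + wi * dt for th, wi in zip(angles[-1], w)))
        vel = tuple(v + ai * dt for v, ai in zip(vel, a))
        pos.append(tuple(p + v * dt for p, v in zip(pos[-1], vel)))
    return angles[1:], pos[1:]
```

    Turning those angle/position lists into actual Poser keyframes would be the desktop-side helper's job.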

    I know the chance that SM will consider any of this is somewhere in the area below zero, but at least we can have some fun.
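    PS: idea (h) is almost trivial on the phone side -- just sample the pixels under the on-screen 'X' and send the color over. A sketch (the frame format and function name are my assumptions; averaging a small window around the centre makes the pick less noisy than reading a single pixel):

```python
def pick_center_color(frame, radius=1):
    """frame: 2D list of (r, g, b) pixels from the camera preview.
    Averages the (2*radius+1)^2 pixels under the on-screen 'X'."""
    h, w = len(frame), len(frame[0])
    cy, cx = h // 2, w // 2
    window = [frame[y][x]
              for y in range(cy - radius, cy + radius + 1)
              for x in range(cx - radius, cx + radius + 1)]
    n = len(window)
    # average each channel; the result is what gets sent to the material node
    return tuple(sum(channel) / n for channel in zip(*window))
```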

  • @fbs7 when Google or Amazon or Apple buy Poser, or more likely Smith Micro itself, then we'll probably see AI in all our apps.

  • @anomalaus With all due respect, Poser is a small fish in a big pond. Of the three, Google is probably the most likely, since it already owns SketchUp. Back in the day when Jobs was around, Apple MIGHT have been interested in Poser, but frankly, I don't think Cook knows what to do with Apple, let alone any properties they might buy.

  • Poser Ambassadors

    (a) In Android, click on a button, snap a picture and load it automatically into a texture in Poser.

    This texture will look just horrible on your OBJ.

    The rest sounds like a Google Play application for when you are bored, not like serious 3D software.

  • Poser Ambassadors

    Get Google involved and they'll collect each and every vertex on your PCs.

  • @vilters Mwahahaa! They'll never get my cryptovertices!!!

  • @ladonna said in Some ideas for Poser GUI:

    The rest sounds like a Google Play application for when you are bored, not like serious 3D software.

    The point is to make the smartphone serve the functions of a 3D mouse.

    Isn't there some use that a 3D graphics application can make of a device that can measure in real time 3 angles, 3 positions, 3 speeds, and 3 accelerations, and has a built-in camera and video camera? Wouldn't such a device have more utility in controlling a 3D application than a 2D mouse has?

  • I let my imagination work, and I think... track the phone's position and angle... then reflect the phone's position and angle in realtime on the hip of a toon... IK the feet... play some music... voila, I can make the toon dance by waving my smartphone to the music. For this alone there should be a lot of customers.

    I see a ton of fun uses like that -- not even getting into what long and difficult programming could create -- say, put my phone in selfie mode, orbit it around my head, generate the textures and apply them to the head of, say, a gorilla... voila, got my head look-alike on a 3D gorilla. Very funny.

  • Poser Ambassadors

    @fbs7 I don't even know what to answer here.

  • @fbs7 Some of the functions you cite would require a lot of computing power. Perhaps too much for a simple smartphone battery.

  • I ran across a video that all of you should see.

    This idea has been played with by a few different people, and it looks promising.

    It isn't that far-fetched of an idea to me.

    The coding for it can be found here.

  • @fbs7 I tested/tried some of the things you mentioned. In reality it's less useful than you may think.

  • @eclark1849 Android doesn't have much to do. Just reporting some sensor data, clicks and mouse positions, grabbing videos/photos, and sending all this to the desktop.
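    That plumbing could be as little as newline-delimited JSON over a TCP socket. A sketch of the phone-side encoding (the record layout and function names are invented for illustration; the Poser-side helper would parse each line and drive the corresponding parameter):

```python
import json

def encode_packet(kind, values):
    """One newline-delimited JSON record per reading, e.g. gyro angles,
    touch positions, or a button click (the field names are made up)."""
    return (json.dumps({"sensor": kind, "values": values}) + "\n").encode("utf-8")

def send_packet(sock, kind, values):
    """Push one record down an already-connected TCP socket."""
    sock.sendall(encode_packet(kind, values))
```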

  • @adp said in Some ideas for Poser GUI:

    @fbs7 I tested/tried some of the things you mentioned. In reality it's less useful than you may think.

    Oh, I see. That's a pity. How about, back in the 2D realm: tap on the smartphone camera, face it towards a wall... select a color... now hold an object of a similar color in your hand... the code in Android ignores all other colors and tracks only the object in your hand, following its center in 2D coordinates.

    Play a song, wave the object around with your hand; that movement is tracked and drives either the XY or XZ position of a body part (say the hip or one hand) of the toon in Poser on the PC (or Mac).

    Now, as this is in 2D, making a real dancing animation would require doing it twice (once for XY, once for XZ), but because the song plays in real time, one can try several passes and see how they look in the actual animation in a short time. It's really just puppeteering with the Android device.

    The math shouldn't be too out of this world -- just filter a color, calculate the centroid in real time, and send the XY coordinates to Poser in real time (through, say, TCP/IP or whatever).
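    That filter-then-centroid step really is only a few lines. A plain-Python sketch (in practice OpenCV's inRange/moments would do this much faster; the frame format here is an assumed list of RGB rows):

```python
def color_centroid(frame, target, tol=40):
    """Return the (x, y) centroid of the pixels within `tol` of `target`
    (an RGB tuple), or None when the tracked object is not in view."""
    sx = sy = n = 0
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            # keep only pixels close to the selected color
            if all(abs(c - t) <= tol for c, t in zip(pixel, target)):
                sx += x
                sy += y
                n += 1
    return (sx / n, sy / n) if n else None
```

    Each centroid would then be shipped to Poser as one of the coordinate packets described above.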

    Then, what do you think of this: if one were willing to go after serious math, one could also do this: get 7 small pieces of tape; put one on each corner of the mouth, one above the upper lip, one below the lower lip, two above the inner eyebrows, and one on the tip of the nose as a reference (to make it easier, tape of a different color, say red). With the Android in selfie mode, it filters out all other colors, tracks the centroid of the nose color as a reference, then calculates the positions of the centroids of the other tape marks relative to that one. Then discard the nose marker.

    Now you set some morphs on your preferred toon, make a face at the Android, and click "Learn". The code will associate the morph positions with the XY positions of the centroids of the 6 white markers. Learn several morph settings that way, each associated with a face (say, a face with the mouth open associated with a MouthOpen morph).

    Now the serious math part: run the animation, track the colors on the face, calculate the centroids, then solve for a combination of morph weights that, added together, reproduce the positions of the markers on the screen. This is a minimization problem (minimize the error). Set the morphs in real time.

    Something like this: morphs M1, M2, M3, ... correspond to markers "a".."f" positions X1a, Y1a, X1b, Y1b, ... X1f, Y1f, X2a, Y2a, etc. Then for a generic observed position Xa, Ya, Xb, Yb, ..., calculate morph values m1, m2, m3, ... that minimize the error Err; this can be solved by multivariate Newton-Raphson, I think:

    Xcalc_a = m1 * X1a + m2 * X2a + ...
    Xcalc_f = m1 * X1f + m2 * X2f + ...
    Ycalc_a = m1 * Y1a + m2 * Y2a + ...
    Err = sum( (Xcalc_a - Xa)^2 + (Xcalc_b - Xb)^2 + ... + (Ycalc_a - Ya)^2 + (Ycalc_b - Yb)^2 + ... )
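    Actually, since each Xcalc/Ycalc above is linear in the weights m1, m2, ..., minimizing Err is an ordinary linear least-squares problem, so a single solve replaces the Newton-Raphson iteration. A NumPy sketch (the matrix layout is my assumption):

```python
import numpy as np

def solve_morph_weights(basis, target):
    """basis: (n_coords, n_morphs) matrix whose column k stacks the marker
    coordinates (X_ka, Y_ka, ..., X_kf, Y_kf) learned for morph k set to 1.0.
    target: (n_coords,) stacked marker coordinates from the current frame.
    Minimizing the Err defined above is exactly this least-squares solve."""
    weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return weights
```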

    This way one could puppeteer several morphs on a toon by making faces in real time through the animation. For example, one might be able to track the vowels in speech, or basic expressions. Unfortunately one cannot track the eyes this way.