Lighting Firefly - Inverse Linear/Square - Start/End Distance & "Grid Units" :/

  • Wassa "Grid Unit?" (Yeah, I know, Poser scale and all that... BUT!)

    So, using Firefly, I'm messing around with lights, horribly mangling some renders. After some issues with things turning out "not right" I conducted a bunch of anecdotal "tests" using point and spotlights with Start and End fade-off parameters, trying to achieve an effect and trying to track down all the stray photons I couldn't account for in other renders.

    These parameters don't mean much to Inverse Linear/Square lights, do they? Using everything from standard, acceptable, values to the absurd, it seems there's always a portion of light "left over." But, that's not something I could find anything about in the manual. (No other light sources, No IDL/HDRI/etc, no ambient, yada yada yada, just the single light.)

    What gives?

    Also - For clarification, what's the current... Grid Unit measurement? (The old "PNU?" system? )

    And... the measuring tools in Poser - What do they base their measurements on and is that the same variable that the Light parameters use? All based on Grid Units or "other?"


    PS - Didn't want to load up a bunch of experiment pics. Description: Place spotlight 2 units above floor, facing floor, set 100% strength, start/end values at whatever absurdly low number you wish, doesn't matter (.0001/.0002 or even less) - There's still a good bit of illumination "left over." Elevate the light to 50 units, same values, and, depending on the Angle/End, there's still the same amount of illumination left over. (Or, close enough.) Point lights are just crazy with it, too, and I just stopped trying to experiment with those in this way.

    PPS - Yes, I know, a photon doesn't stop "until." But the "End" variable seems to insist it's there to defy nature in exactly this way, even using the natural inverse-square calc for light.

  • Poser Ambassadors

    The "Constant" model was never constant - instead it is a function that is the smaller of two lines. One line is horizontal. The other line is sloped from start to end, where the start is max brightness and at end it is 0.

    The inverse linear doesn't use lines with a start and end. It's 1/x and if you take a good hard look at this, 1/x approaches 0 but is never quite zero. Same with 1/x^2 (inverse square) which falls faster but NEVER goes to zero.

    The Poser devs are notorious for doing things like this. The start and end distance have no role in the inverse linear and inverse square falloff.

    Now here's the biggest question. Why is there no node that emits "distance to the light"??? If there had been you could make any distance function you want.
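
    The three falloff models described above can be sketched in a few lines of Python (the function names are illustrative, not Poser's actual internals):

    ```python
    import math

    def constant_falloff(d, start, end, max_brightness=1.0):
        """'Constant' model: the smaller of a horizontal line and a line
        sloping from max brightness at start down to 0 at end."""
        if d >= end:
            return 0.0
        sloped = max_brightness * (end - d) / (end - start)
        return min(max_brightness, sloped)

    def inverse_linear(d):
        """1/d: approaches 0 but never reaches it."""
        return 1.0 / d

    def inverse_square(d):
        """1/d^2: falls faster, but still never reaches 0."""
        return 1.0 / (d * d)

    # However far away you go, inverse-square intensity stays positive,
    # which is exactly the "left over" light observed in the tests above:
    for d in (2, 50, 1000):
        print(d, inverse_square(d))  # always > 0
    ```

    Note that only `constant_falloff` even takes `start` and `end` arguments; the two inverse models have no natural place to use them, which matches the behavior reported above.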

  • Poser Ambassadors

    As for the units, the parameter dials for start and end (and most everything really) are going to be in whatever you selected for your PDU (Poser Display Unit). For me those are inches.

  • @bagginsbill

    I'm using Feet.

    So... In Fireflly, it's not possible to exert fine control over the Inverse light calcs? In other words, let's say I wanted a dark portrait, single spotlight, with very low light (dark) background visible, if at all - What to do in Firefly if I want the realistic inverse square fall-off, but don't want that extra 10%+ bleed escaping into the rest of the scene to wash-out any dark shadow effect I had going on?

    Doomed to use Constant just to gain some measure of control? Doomed to no IDL at all when one light bounces 10% of its total illumination uncontrollably washing out shadows? Wait for Superfly to process it, instead? :)
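
    For context, this "leftover light" problem is commonly handled in other render engines with a windowed inverse-square falloff: physically-motivated 1/d² multiplied by a window function that smoothly forces the intensity to exactly zero at a chosen end distance. Poser doesn't expose this, but a sketch of the idea (using the common (1 - (d/end)⁴)² window form) looks like:

    ```python
    def windowed_inverse_square(d, end):
        """1/d^2 falloff multiplied by a smooth window that reaches
        exactly 0 at d == end, so no light leaks past that distance."""
        if d >= end:
            return 0.0
        window = (1.0 - (d / end) ** 4) ** 2
        return window / (d * d)
    ```

    Close to the light the window is nearly 1, so falloff is effectively pure inverse square; only near the end distance does it bend down to a true zero.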

    I wondered why nonsensical things were going on in renders with lighting in Firefly. I knew about the older version's limitations, but, even then, could get decent stuff out of it when I felt like it. Now that I want to explore it more, it's looking like Superfly is the only thing capable of handling things realistically.

    PS - Do you know the lumen values, or a way to calculate them, for Firefly lights, so that I have some sort of a "standard" to start basing things on? Otherwise, it's a thing of constantly moving goalposts when trying corrections. (IIRC, settings for ambient channels can be done, but I don't think I've read any translation for "100% default spotlight inverse square = xxx lumens." (i.e. a light bulb, 'cause if it's not a light bulb, then wtf is it? :) )
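
    Poser doesn't document a lumens figure for its lights, so any mapping is calibration on the user's part. For reference, though, the standard photometric relations for an ideal point source are simple; here's a sketch using a made-up 800 lm bulb (roughly a 60 W incandescent) as the example:

    ```python
    import math

    def point_light_intensity(lumens):
        """Luminous intensity (candela) of an ideal point source
        radiating uniformly in all directions: I = flux / (4*pi)."""
        return lumens / (4 * math.pi)

    def illuminance(lumens, distance_m):
        """Illuminance (lux) at a given distance: E = I / d^2 -
        the physical inverse-square law discussed above."""
        return point_light_intensity(lumens) / distance_m ** 2

    print(round(point_light_intensity(800), 1))  # 63.7 cd
    print(round(illuminance(800, 2.0), 1))       # 15.9 lux at 2 m
    ```

    Once a single Poser light at a known intensity is calibrated against a target lux value (e.g. with a light meter prop), these relations let you scale everything else proportionally.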

  • @morkonan
    If you accessed my lighting tutorial for Superfly (available here), the rules work the same for Firefly as they do for Superfly (except no area light, of course).
    There is a small error in the tutorial, but it is close enough to give pretty good control over inverse linear falloff at least.
    My tutorial covers converting real world lights to Poser values (lumen/lux calculations, the lot).

    There is a second edition in the works (that corrects the error) but I'm simultaneously teaching, renovating the house and going in for surgery over the next month. So maybe a bit after April.

  • Poser Ambassadors

    Piersyf's lighting paper is the best guide I've seen on the subject of light level settings.

    If you add on the notion of filmic tone mapping, things get even easier. I'm about to go on vacation so won't be adding much over the next week, but have a look at the tone mapping thread.

    sRGB ruining our renders?

    I've switched over to rendering without concern about "exposure", save as HDR, then a second render through a tone mapping Poser shader. The second render is just rendering a self-lit card, which takes less than 3 seconds even at full size. As long as I get the relative intensities of various light sources correct (which is easy by following piersyf's work) I can then change my film and exposure dozens of times a minute to get any look I want FROM THE SAME RENDER. In fact, all the images below show different "films" and different "exposures" in the same render simultaneously. I can do any number of settings in one render to produce a catalog and then I pick the one I like.

    Reinhard m=.5.jpg
    Reinhard m=1.jpg
    Exponential vs Reinhard.jpg
    MFalcon 10x.jpg
    MFalcon 13 EV.jpg
    Exponential with contrast.jpg

    Every one of the images above was a straight save from Poser using my tone mapping shader, no postwork.
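
    For anyone following along: the "Reinhard" in those image names is the classic global Reinhard tone-mapping operator. A minimal per-pixel sketch (the `exposure` multiplier is my guess at what the m parameter controls, since the shader itself isn't shown here):

    ```python
    def reinhard(luminance, exposure=1.0):
        """Global Reinhard operator: compresses [0, inf) HDR luminance
        into [0, 1) display range via L_out = L / (1 + L)."""
        scaled = luminance * exposure
        return scaled / (1.0 + scaled)

    # Bright HDR values compress gracefully instead of clipping:
    for hdr in (0.1, 1.0, 10.0, 100.0):
        print(hdr, round(reinhard(hdr), 3))
    ```

    Because the operator is a pure function of the stored radiance, changing `exposure` and re-mapping is cheap, which is why the second "render through a shader" pass above takes seconds rather than a full re-render.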

    My tone mapping resources page
    (a work in progress)

  • @piersyf

    Thanks! Grabbing it now!

    Good luck with the surgery, I hope it all goes great! Though, you have a lot on your plate and between teaching, house renovation and surgery, it's probably difficult to tell which one could have the most catastrophic outcomes... I wish you all the best in each! :)


    I've been watching the sRGB thread with great interest. So, you're using Poser as a post-work app? :) Ironic, isn't it? The features you're using, or rather "bending to your will," are already incorporated in the program, so it makes one wonder why such capability isn't natively expanded upon in the program itself. (Love me some tone-mapping!)

    Enjoy your vacation in the Conch Republic! (Mentioned in another thread, IIRC.) Thank you for your insight and replies!

  • A reply in general concerning the Lighting Tutorial and node "magic" -

    BB's wonderful Gamma and Light Meter primitives were something I really enjoyed working with, even though I wasn't quite sure I understood all the experimental returns. :)

    Given the calculator link posted in the Light tutorial, couldn't a material be made with user-editable values (a node/series requiring user-input ranges for the material) for a desired light level, applied to a similar prop, so that test renders could tell the user, based on the visual feedback from the prop, whether the desired light level/range has been reached? Or some similar sort of mechanic that provides visual feedback.

    Here's what I'm going on about - Realistic rendering isn't a "simple" thing. There's no "Do Art" button in any rendering application. There's a general user knowledge level that gets users only "so far," and once that limit has been reached, frustration ensues. It's not because of a lack of intelligence; it's just plain ignorance. Everyone is ignorant in some way, and that's fine, really, so long as it's not dangerous. (Willful ignorance is not OK, though!) But ignorance only has one cure. :) The thing is, when doing experimental renders, coupled with ignorance, one can get a great-looking render in one shot out of a slew of bad renders, with no immediately obvious cause for the anomaly. This makes gaining experiential knowledge haphazard at best.

    In short - There are no general "teaching tools" for this other than "read this thing and go forth and render as it says." Feedback for the user comes only in the form of the final render and, since it's often anecdotal, it's not universally helpful in the pursuit of "true knowledge." A "tool" that provides immediate "feedback," letting the user know for sure that they're "doing it right," is really needed. When using the Gamma Meter, I was overjoyed to see this sort of tool in Poser. (I haven't used the Light Meter much or recently.) "Click." "Render." "True Feedback." If there were more tools like this that provided feedback based on uncompromising knowledge, independent of "looks like" render-result opinions, then true learning and useful experimentation would be the natural successor...

    Thanks, guys! Your feedback has been great and I'm consuming "knowledge" rapidly. Though, digesting it, comprehending it and applying it have yet to follow. But, I'm trying.

  • Sorry for the machine-gun posts - Obviously, I'm quite enthusiastic. :)

    Quick question, probably not-so-simple-answer:

    Preface: Lots of experimentation today with rendering and different "tools." Lightmeter/Gammameter/sRGB board (To ensure calibration and to examine various effects across color), SSS and HDRI (BB's Envirodome, correct gamma setting)

    There are tons of great HDRI images out there and they're a huge help when it comes down to "realism." However:

    Is it possible to analyze an HDRI, obtain usable data from it, and use that information to help determine a ballpark light setting for, example, a single spotlight/point, in a "known" environment?

    Or, perhaps the better question: Before rendering, what data can I get from an HDRI image (using Photoshop/other) in order to be able to better predict what its effects may be on the final render? (Data, even broad, that is generally applicable to a Firefly render, or Superfly if this info is cross-compatible.) Armed with certain information from the file, can I make a reasonable assumption that a particular HDRI at 100% will result in "blow-out," given a "known" scene with already predetermined variables? (Note: "without" basing such an opinion on an "example" image, which contains information I can't "see.")
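
    One objective number you can pull from an HDRI before rendering is its dynamic range in EV (stops), plus the log-average ("key") luminance that tone-mapping literature uses to characterize overall scene brightness. A sketch with NumPy, operating on an already-decoded float image (actually reading a .hdr/.exr file would need a library such as imageio, which isn't shown here):

    ```python
    import numpy as np

    def hdr_stats(rgb):
        """rgb: float array (H, W, 3) of linear radiance values.
        Returns (EV range in stops, log-average luminance)."""
        # Rec. 709 luminance weights
        lum = rgb[..., 0] * 0.2126 + rgb[..., 1] * 0.7152 + rgb[..., 2] * 0.0722
        lum = lum[lum > 0]                      # ignore true-black pixels
        ev_range = np.log2(lum.max() / lum.min())
        log_avg = np.exp(np.mean(np.log(lum)))  # geometric mean ("key")
        return ev_range, log_avg

    # Synthetic 2x2 example with values spanning 1/256 to 256:
    img = np.array([[1/256] * 3, [256] * 3, [1] * 3, [4] * 3]).reshape(2, 2, 3)
    ev, key = hdr_stats(img)
    print(round(ev, 1))  # 16.0 stops
    ```

    A very high EV range or a key far from the middle grays would at least flag an HDRI as a blow-out risk for a given exposure, before any test render.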

    PS - Questions asked in ignorance can often be nonsensical to the learned. If this is the case, then just consider it to be someone asking what a photon experiences in its own frame and how much time it has to think about it... ;) I will eventually discover the answer to my question, if not today, then in some reasonable time frame.

  • Poser Ambassadors

    @morkonan - the best method of checking whether a render is technically good is to test render & look at the histogram in an image editor. The ideal is to maximise the dynamic range while avoiding clipping. A histogram has the distinct advantage of being hardware independent & objective; I believe it's the single most useful tool in any kind of digital imaging.

    I do think a lot comes down to approach - bagginsbill's technique above is excellent, the best method I've seen, if you want to do as much in Poser as possible. Personally I regard a 3D render engine in the same way as my camera, so all I want is good data & I'm used to processing it using other tools. What I'm saying is that the way I go about getting images out of Poser is specific to the way I'm comfortable working, but I can get results I'm happy with quickly & consistently.

    My default setup for Superfly is a full 360 environment for bounced light & reflections (the Poser ground or Construct) plus a single white infinite light at 100% intensity. The colour & intensity of whatever the environment is will affect everything that is not directly lit. HDR doesn't matter to me - LDR textures on the environment (e.g. skies etc) are fine. The only times I've had bad/unexpected results has been when using content with what I would consider to be errors.

    Note that I'm not doing indoor renders with multiple light sources though, mostly simple outdoors stuff ;)
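
    The histogram check described above is easy to automate on a test render if you'd rather not round-trip through an image editor; a sketch (assuming display values normalized to [0, 1]):

    ```python
    import numpy as np

    def clipping_report(image, bins=256):
        """image: float array of display values in [0, 1].
        Returns the fraction of pixels clipped at black, the fraction
        clipped at white, and the histogram itself."""
        hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
        total = image.size
        black = np.count_nonzero(image <= 0.0) / total
        white = np.count_nonzero(image >= 1.0) / total
        return black, white, hist

    # Example: an "image" with 25% of its pixels blown out to white:
    img = np.array([0.2, 0.5, 0.8, 1.0])
    black, white, _ = clipping_report(img)
    print(black, white)  # 0.0 0.25
    ```

    Non-trivial mass at either extreme of the histogram means clipped shadows or blown highlights; a healthy render uses the full range without piling up at the ends.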

  • @caisson said in Lighting Firefly - Inverse Linear/Square - Start/End Distance & "Grid Units" :/:

    @morkonan - the best method of checking whether a render is technically good is to test render & look at the histogram in an image editor. The ideal is to maximise the dynamic range while avoiding clipping. A histogram has the distinct advantage of being hardware independent & objective; I believe it's the single most useful tool in any kind of digital imaging.

    I do think a lot comes down to approach - bagginsbill's technique above is excellent, the best method I've seen, if you want to do as much in Poser as possible. Personally I regard a 3D render engine in the same way as my camera, so all I want is good data & I'm used to processing it using other tools. What I'm saying is that the way I go about getting images out of Poser is specific to the way I'm comfortable working, but I can get results I'm happy with quickly & consistently.

    Basically, what I want to do is to ensure that whatever is rendered is making the best use of what Poser can offer. If, for instance, I can render an image that is technically correct, verified internally using whatever tools possible, and is "visually appealing" using some arbitrary standard, then I'm happy. I don't really "do" renders for a render's sake. I render objects, usually that I've made, morphs, textures, etc, just to see how the end-result looks in Firefly/other. When it comes down to rendering a scene, I might put one together as a sort of "play" experience, just having fun and such. But, that play is usually entirely focused on "can I make the renderer do ____?" If I want a low-light, high-contrast render, it's just to see if I can go to that mountain and climb it, producing something worth the work that went into it. For instance, before IDL and mesh-lights (ambient channel), I wanted to render a scene where a sci-fi spaceship crew member was hunched over a control board, studying a screen. Low IDL, dark background, primary light coming from the screen and all the little buttons that were "lit"... (Think of the Air-Traffic Controller scene in "Close Encounters of the Third Kind.")

    I finally managed it, after days of tweaking, here and there. It was a nice "ah hah" moment. But, I had absolutely zero use for that render other than the value of the experience of actually making it happen. :)

    Now, I want to gain enough of an understanding of how various elements come together to form an image in Firefly (Superfly is, amazingly enough, simpler to predict, since it's PBR.) so that I can predict an outcome, with a high degree of certainty, before I press the "Render" button.

    I've learned a pretty good bit since I first asked the questions above. And, I've got some experiential knowledge in the form of a lot of focused experimental renders that can give me a firm handle on, at the very least, Poser's spotlights and saturation levels. (I'm still looking at how the darn things create shadows and what I can do to manipulate that inside Poser. Not easy, likely has to go to tonemapping. But, shadow lights are an option too for Firefly, at least.)

    I'm going to definitely fire up BB's tonemapping primitive, just so I can see how fast I can tonemap images without having to re-render them!

    Histograms are what I would assume I'd have to go to in order to check a rendered image. But, I want to do everything I can before the image actually gets rendered. :)

    Do you know of any tools that can provide me "data" for an HDRI image? Photoshop (The old CS2 version) won't give me much in the way of "info." (Gamut warnings, I assume, are not an issue, here, for using the image in a renderer, since that info will be further processed and is, I assume, desirable/intended, given the nature of the image.)

    Off to HDRI land to try to figure it out, but after I start my next experiment with point-lights.

    PS - I think I'll construct a single primitive for all these experiments. Not sure, but something that combines BB's tools with others, like color checkers, some "true" visual rulers/measuring tools, maybe something rigged so cameras/lights/etc can be exactly placed and the like. If it's worth distributing, I'll put it up somewhere.

  • @bagginsbill

    Just a note: Once the shader is finalized, it should be possible to get a python script to load up a primitive, the shader, and reach into Poser's rendercache to apply the desired .exr image for further processing... all in one neat little package. Right? Basically... "BB's EZ-Tonemapper" script. :)

  • Poser Ambassadors

    Loading an HDR image in Photoshop, adding an Exposure adjustment layer & pushing the slider around will give you an idea of the sort of range captured in the file. Some are better than others, it all depends on how it was made.

    Picturenaut might be worth looking at too, especially as it's free. It does give an estimate of EV range in the bottom corner, I'm just not sure how accurate it is.

  • @caisson


    I've been looking at the list over at HDRlabs, just trying to figure out which one a relative novice can glean useful information from. Thanks for the PS tip, will try that out. (I have noticed that not all HDRI images out there are of the best quality or give good results, no matter how big they are or what format they're in.)