This Bucket size thing, can it go?



  • Yeah, so ever since I've been playing around with Poser, I honestly don't know what bucket size actually is... I mean, I think I've read the definition several times over in the manual and on the morphology website (or what was that site called), something about it being a balance of how much work you give to the RAM... but I've never fully grasped its significance.

    I mean, I know the significance of raytrace bounces, irradiance caching, and pixel samples, those have tangible effects... but bucket size is like guessing the size of a bucket depending on how strong you feel that day, versus your stamina, versus how much water the village needs... this can be automated, yo. Get the render done ASAP... duh.

    Ever since PP2014 came out, I did some experiments and settled on a size of 1024, and it seemed to work fine. I still didn't grasp its meaning, but that number seemed to work for my setup. But then P11 comes along and apparently something like 64 is better... yet my standard 1024 still seems to work OK, except when it doesn't, and in SuperFly it's a totally different thing... it renders huge chunks in one go. Too big and it's too slow; too small and it's too slow again... why are we left guessing? I thought computers could do these calculations these days. It feels so arbitrary.

    Now I know that this might spark a storm of experts replying with all kinds of numbers and stats... but can y'all developers @Poser-Team please just automate this weird thing? A PC has RAM, storage, and CPU capacity that can all be analysed, the size of the Poser file can be determined, and the work the render involves could be estimated too. I really don't see any great advantage in having to fart around guessing some silly number per PC setup when some clever little program could probably test it for you... or better yet, adjust on the fly?

    What am I missing? Is it some developer / nerd fetish thing, or is it on the list of things to sort out?



  • Hi @erogenesis

    Here's what I know, and what I've tried on my PC with a few different CPUs and GPUs.

    With SuperFly on the CPU, the best bucket size is around 16-32; you can try 64 to see if it helps, but 16-32 worked for me. The same applies to my FireFly settings.

    GPUs are a different story. With Maxwell cards like the Titan X or the GTX 980 Ti, I would suggest trying 256 or 192. With Pascal GPUs I use 512 when I render with all 3 GPUs; with a single GPU I would experiment and see.

    But I still wouldn't use BPT with GPU rendering, or "Render in separate process"; these can cause a Windows hard lock.

    Hope this helps

    Thanks, Jura



  • @erogenesis yeah, you're just gonna have to tough it out and experiment a bit. Get the settings you feel give you the fastest render times and stick to them.



  • But I'm still not sure I would use a 1024 bucket size in FireFly or SuperFly, @erogenesis. FireFly loves cores and threads, and SuperFly loves NVIDIA cores.

    In SuperFly, the more GPUs you have, the faster the rendering will be, preferably with Pascal GPUs. I'm not sure if there is a limit on how many GPUs you can use in SuperFly.

    I tested 6 GPUs on our workstation at work in a few applications, where scaling across the GPUs was sometimes underwhelming or inconsistent, and I would love to test that workstation with Poser SuperFly.

    But right now I get through most of my scenes and renders quite quickly.

    I agree with @ghostship, though: it's all about experimenting with your scenes and finding what works for you and what doesn't.

    And if you need something rendered on short notice, I would be happy to help.

    Hope this helps

    Thanks, Jura


  • Poser Ambassadors

    Bucket size depends so much on the system the render engine is running on that it is impossible to define a standard. Every system has a different bottleneck somewhere.

    FireFly and SuperFly CPU like smaller bucket sizes, from 8 to 32.

    The SuperFly GPU render engine likes larger bucket sizes, from 256 (which seems to work best on MY system) up to 512 for the higher-end cards.

    There are so many different systems, graphics cards, and RAM configurations out there that it's simply impossible to define a standard.

    Because you never know exactly where a certain render is gonna "choke", my defaults are:
    FireFly and SuperFly CPU = 16 or 32 (usually 32)
    SuperFly GPU = 256

    Best regards, Tony



  • Tile size (bucket-size) is optionally automated in Blender; (for both Cycles and Blender-Internal render engines).

    It can be set manually by the user, but for instance on my system, when I set it to 512x512 (GPU) and click 'calculate optimal size', it changes to 480x270 when the render is set to 1920x1080, and to 480x400 when the render size is 1920x1600, which makes 4 tiles across.

    A quick experiment shows me that a too-small tile size can make the render take much longer. With the tile size set to 8x8, the default cube scene (at 1920x1080) took one minute and fifty-four seconds to render; with a much larger tile size of 960x540, it took less than two seconds.


  • Poser Ambassadors

    I didn't see anywhere that ero wants us all to explain why bucket size has no single perfect value. He asked why the software can't figure it out for itself.

    As a software developer, I can only guess, as the Poser team has never, to my knowledge, explained much of any of their choices.

    Here's my guess:

    There are so many problems that need fixing that if there is one the end-user can address themselves, they just don't work on it. They could work on it, but then there would be something else they're NOT working on. It's an opportunity cost.

    I have no trouble choosing a bucket size appropriate for my hardware.



  • @bagginsbill
    Of course... And I mentioned that Blender has it automated... As I understand it, isn't SuperFly based on Cycles? And with Blender having it automated already, and with public source code... what is there to work on? The work's already done.

    *(...and of course I don't mean cut & pasted source; but the code is there to examine.)



  • @bagginsbill haha, you nailed it, bb. Yeah, I was asking whether it could be automated, not about the solutions or options. Mind you, I do appreciate the feedback @jura11 @ghostship @mechanaut @vilters. But yeah @mechanaut, something like what you describe would be cool: 'calculate optimal'.

    But bb, you're probably right that they're swamped as it is, and indeed I'd rather they worked on the IK system or animation layers or rigging tools than on an automated bucket size. Still, I can't imagine it would be too time-consuming when and if they ever decide to deal with it. It's just a bit of I/O access and math, not so?
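    For what it's worth, the "bit of math" could look something like this. This is a purely hypothetical sketch, with made-up names (nothing here is from Poser's actual API): aim for enough tiles that every worker thread gets several, then snap the result to a power of two.

```python
import math
import os

def suggest_bucket_size(width, height, workers=None, tiles_per_worker=16):
    """Hypothetical heuristic: pick a bucket size that gives each worker
    several tiles, so one slow tile can't stall the whole render."""
    workers = workers or os.cpu_count() or 4
    # pixels per tile if we want `tiles_per_worker` tiles on every worker
    tile_pixels = (width * height) / (workers * tiles_per_worker)
    side = int(math.sqrt(tile_pixels))
    # snap to the nearest power of two, clamped to a sane range
    return min(512, max(8, 2 ** round(math.log2(max(side, 1)))))

print(suggest_bucket_size(1920, 1080, workers=8))  # → 128
```

    Of course, a static formula like this can't know what's actually in the scene, which is the harder part of the problem.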



  • @mechanaut said in This Bucket size thing, can it go?:

    *(...and of course I don't mean cut & pasted source; but the code is there to examine.)

    lol who knows, maybe it is that simple!



  • @erogenesis said in This Bucket size thing, can it go?:

    I've read the definition several times over in the manual and on the morphology website (or what was that site called)...

    If you meant Morphography, I deny knowing any more about it than you do. :) I'm with you and @bagginsbill though. Why should I have to fiddle about with stuff that the computer should be able to work out for itself? I mean, I enjoy fiddling about with computers, but not while I'm trying to achieve Great Art. :D

    While we're ranting on a Sunday, why can't the renderer work out the optimum exposure for me if I ask it to? This ability is one of the few things I like about LuxRender.



  • @erogenesis
    Funny thing about Blender... You can right-click any button in the interface and examine the source, right in the application. I don't know of anything else that does that.


  • Poser Ambassadors

    @erogenesis said in This Bucket size thing, can it go?:

    But bb, you're probably right that they're swamped as it is, and indeed I'd rather they worked on the IK system or animation layers or rigging tools than on an automated bucket size. Still, I can't imagine it would be too time-consuming when and if they ever decide to deal with it. It's just a bit of I/O access and math, not so?

    But you artificially isolated one thing. I can go into the bug-tracking system and expose another 500 "enhancement requests". When you look at JUST auto bucket size, you think it's not too big a deal to just fix it.

    But why that one over the other 499? Hmmm?

    There are literally 1,000 enhancement requests in my own software (a big security application that costs literally over $1 million PER COPY) that I simply do not even read. I have about 200 really important features we need to implement, and the 1,000 little enhancements, trivial as they are, never rise to the top.

    Even if I can develop a feature in one hour (and truly I can do 40 such in a week), the burden on the QA department, the documentation department, and the sales department is so bad that I am literally ORDERED not to touch those things.



  • @bagginsbill Which is why I said "when and if they ever decide to deal with it". I have no doubt in my mind that they're sufficiently swamped, and maybe it's not fair on them for us to hammer their heads all the time about stuff that is probably extremely trivial... but that problem is not unique to SM + Poser. That is why I definitely try to rank my suggestions / complaints in some way or another, to do my part in helping them decipher what is truly important. I wasn't clear about that in this post, however.

    At least I have gotten my answer: it could potentially be automated. But yeah, they do have much bigger fish to fry... I might be frying one of them, lol :D


  • Poser Ambassadors

    Plus!!! You guys are forgetting something.

    • Some areas are easy to render.
    • Other areas are rather difficult to render. => Where in the scene is the heavy transmapped hair?

    If I set the bucket size rather large, at 128x128, most cores and buckets finish quickly, while the buckets that contain the transmapped hair keep soldiering on and on for minutes more.
    All the other cores sit at rest while the few that have to chew on the transmapped hair keep soldiering on.

    That is why it is better to set buckets to a lower value, to get more cores "chewing" on those heavy-to-render transmapped areas.

    PS: I use "heavy transmapped hair" just as an example. There are other nodes that require long render times.

    Simply dividing the render area into easy-to-divide regions, as Blender does, to give all cores their workload is not enough. The app also needs to "know" what it is rendering in a specific area to assign the required cores to that particular area.
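    This load-balancing effect is easy to demonstrate with a toy scheduler. The sketch below is my own illustration (it models nothing from Poser's internals): free cores greedily grab the next tile from a queue, and a quarter of the scene, the "hair", is 10x more expensive than the rest. With 4 big buckets, one core ends up chewing all the hair alone; with 64 small buckets, the heavy work gets spread around.

```python
import heapq

def makespan(tile_costs, cores):
    """Greedy scheduling: each free core grabs the next tile.
    Returns the time at which the last core finishes."""
    finish = [0.0] * cores
    heapq.heapify(finish)
    for cost in tile_costs:
        t = heapq.heappop(finish)   # earliest-free core
        heapq.heappush(finish, t + cost)
    return max(finish)

def tiles(n_tiles, total_work=64):
    """Split a toy scene into tiles; the first quarter of the work
    (the 'hair') costs 10 units per pixel-block instead of 1."""
    per_tile = total_work // n_tiles
    costs = []
    for i in range(n_tiles):
        heavy = per_tile if i < n_tiles // 4 else 0
        costs.append(heavy * 10 + (per_tile - heavy) * 1)
    return costs

print(makespan(tiles(4), cores=4))   # → 160.0  (one core does all the hair)
print(makespan(tiles(64), cores=4))  # → 52.0   (hair spread over all cores)
```

    The total amount of work is identical in both runs; only the tile granularity changes, which is exactly the "cores at rest while a few keep soldiering on" situation described above.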



  • The following is a snapshot from the Poser Pro 2014 Manual.

    0_1504464823461_Maximum Bucket Size.jpg



  • @abrahamjones hm, in other words, just set it to the max; Poser will deal with it if it's too big anyway... :P


  • Poser Ambassadors

    @abrahamjones
    First: that's from PP2014, and FireFly only.
    Free translation: Poser will continue the render, but it'll be choking while doing so. LOL.

    Balancing the bucket size to what is "IN" the scene is best.

    Simple things can go with large bucket sizes, but reduce the bucket size if you have lots of transmaps, to get more cores rendering those "heavy" areas.

    PS: SuperFly in GPU mode is less sensitive and is usually very happy at 256x256.



  • @vilters That's OK; I still mostly use PP2014.

    In short, if they can and have time, it would be great if they could automate the bucket size. Thanks all for the input and opinions.



  • @vilters That is more or less my meaning. Consider the following snaps from the PP2014 manual.

    0_1504488326686_Bucket Size.jpg

    I interpret the above as follows: As you know, Bucket Size tells Poser the size of its metaphorical "paint brush." If your image is 1920 x 1080 pixels and your bucket size is 10 x 10, Poser will successively render 10 x 10 sized chunks until the image is completely rendered. Specify a bucket size of 32, and it will render in 32 x 32 sized chunks.

    If you have a CPU with lots of cores, indicate their number in the Number of Threads field, have a beer and relax. However, if you have only one or two cores, and your renders are taking forever, specify 1 or 2 in the Number of Threads field and/or lower the Maximum Bucket Size.

    In other words, when you have an older CPU with few threads to play with and renders are taking forever, lowering the bucket size should help; increasing it will assuredly make things worse. If you have a monster CPU with plenty of threads, increasing the bucket size probably means a somewhat quicker render. I seriously doubt that lowering the bucket size for complicated images will actually get "more cores rendering", but, given lots of cores, it certainly won't do any harm to instruct Poser to render in smaller chunks.
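    The "paint brush" arithmetic above is easy to check. A quick sketch (my own helper, not anything from Poser):

```python
import math

def bucket_grid(width, height, bucket):
    """Number of buckets ("brush strokes") a render needs at a given bucket size."""
    cols = math.ceil(width / bucket)
    rows = math.ceil(height / bucket)
    return cols, rows, cols * rows

print(bucket_grid(1920, 1080, 32))    # → (60, 34, 2040)
print(bucket_grid(1920, 1080, 1024))  # → (2, 2, 4)
```

    Note that edge buckets get clipped: at a bucket size of 32, the bottom row of buckets only covers 24 pixels of height (1080 - 33 x 32), so the grid isn't always made of full-size chunks.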