This Bucket size thing, can it go?



  • @erogenesis said in This Bucket size thing, can it go?:

    I've read the definition several times over in the manual and on the morphology website (or what was that site called)...

    If you meant Morphography, I deny knowing any more about it than you do. :) I'm with you and @bagginsbill though. Why should I have to fiddle about with stuff that the computer should be able to work out for itself? I mean, I enjoy fiddling about with computers, but not while I'm trying to achieve Great Art. :D

    While we're ranting on a Sunday, why can't the renderer work out the optimum exposure for me if I ask it to? This ability is one of the few things I like about LuxRender.



  • @erogenesis
Funny thing about Blender... You can right-click any button in the interface and examine the source, right in the application. I don't know of anything else that does that.


  • Poser Ambassadors

    @erogenesis said in This Bucket size thing, can it go?:

But bb you're probably right in that they're swamped as it is, and indeed I'd rather that they would work on the IK system or animation layers or rigging tools than an automated bucket size. Still, I cannot imagine it would be too time-consuming when and if they ever decide to deal with it. It's just a bit of IO access and math, not so?

But you artificially isolated one thing. I can go into the bug tracking system and expose another 500 "enhancement requests". When you JUST look at auto bucket size, you think it's not too big a deal to just fix it.

    But why that one over the other 499? Hmmm?

There are literally 1000 enhancement requests in my own software (a big security application that costs literally over $1 million PER COPY) that I simply do not even read. I have about 200 really important features we need to implement, and the 1000 little enhancements, trivial as they are, never rise to the top.

Even if I can develop a feature in one hour (and truly I can do 40 such in a week), the burden on the QA department, the documentation department, and the sales department is so bad that I am literally ORDERED not to touch those things.



@bagginsbill Which is why I said "when and if they ever decide to deal with it". I have no doubt in my mind that they're sufficiently swamped, and maybe it's not fair of us to hammer on their heads all the time about stuff that is probably extremely trivial... but I think that problem is not unique to SM + Poser. That is why I definitely do try to rank my suggestions / complaints in some way or another, to do my part in helping them decipher what is truly important. I wasn't clear about that in this post, however.

    At least I have gotten my answer, that it could potentially be automated. But yeah, they do have much bigger fish to fry... I might be frying one of them lol :D


  • Poser Ambassadors

    Plus ! ! ! You guys are forgetting something.

• Some areas are easy to render.
• Other areas are rather difficult to render. => Where in the scene is the heavy transmapped hair??

If I set the bucket size rather large, at 128x128, all the other cores and buckets are finished while the cores and buckets that have the transmapped hair in them keep soldiering on, and on, for minutes more.
All the other cores are at rest, and the few that have to chew on the transmapped hair have to keep soldiering on.

That is why it is better to set buckets to a lower value, to get more cores to "chew" on those "heavy to render" transmapped areas.

    PS : I use "heavy transmapped hair" just as an example. There are other nodes that require long render times.

Simply dividing the render area into easy-to-divide areas, as Blender does, to give all cores their workload is not enough. The app also needs to "know" what to render in a specific area to attribute the required cores to that particular area.
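The load-balancing argument above can be sketched with a toy scheduler. This is a minimal simulation, not Poser's actual renderer: the 10x "transmap" cost multiplier, the image size, and the heavy-corner size are all made-up numbers chosen only to show the effect.

```python
import heapq

def render_time(costs, cores):
    """Simulate cores pulling buckets off a shared queue; return the
    makespan, i.e. the moment the slowest core finishes."""
    finish = [0] * cores                  # accumulated time per core
    heapq.heapify(finish)
    for cost in costs:                    # next bucket goes to the first free core
        t = heapq.heappop(finish)
        heapq.heappush(finish, t + cost)
    return max(finish)

def bucket_costs(size, bucket, heavy):
    """Per-bucket cost for a size x size image whose top-left
    heavy x heavy corner (think transmapped hair) costs 10x per pixel."""
    costs = []
    for y in range(0, size, bucket):
        for x in range(0, size, bucket):
            hx = max(0, min(x + bucket, heavy) - x)   # heavy overlap in x
            hy = max(0, min(y + bucket, heavy) - y)   # heavy overlap in y
            h = hx * hy                               # heavy pixels in this bucket
            costs.append(h * 10 + (bucket * bucket - h))
    return costs

# made-up numbers: 512x512 image, 128x128 "hair" corner, 8 cores
big   = render_time(bucket_costs(512, 128, 128), 8)  # one core stuck on the hair
small = render_time(bucket_costs(512, 32, 128), 8)   # hair spread over many cores
print(big, small)
```

With 128-pixel buckets the render finishes only when the single hair bucket does, while 32-pixel buckets land close to the ideal total-work-divided-by-cores split, which is exactly the balancing point made above.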



  • The following is a snapshot from the Poser Pro 2014 Manual.

    0_1504464823461_Maximum Bucket Size.jpg



@abrahamjones hm, in other words, just set it to the max; Poser will deal with it if it's too big anyway... :P


  • Poser Ambassadors

    @abrahamjones
First: that's from PP2014, and FireFly only.
Free translation: Poser will continue the render, but it will be choking while doing so. LOL.

    Balancing the bucket size to what is "IN" the scene is best.

Simple things can go with large bucket sizes, but reduce the bucket size if you have lots of transmaps, to get more cores rendering those "heavy" areas.

    PS : SuperFly in GPU mode is less sensitive and is usually very happy at 256x256.



@vilters That's OK, I still mostly use PP2014.

    In short, if they can and have time, it would be great if they could automate the bucket size. Thanks all for the input and opinions.



  • @vilters That is more or less my meaning. Consider the following snaps from the PP2014 manual.

    0_1504488326686_Bucket Size.jpg

    I interpret the above as follows: As you know, Bucket Size tells Poser the size of its metaphorical "paint brush." If your image is 1920 x 1080 pixels and your bucket size is 10 x 10, Poser will successively render 10 x 10 sized chunks until the image is completely rendered. Specify a bucket size of 32, and it will render in 32 x 32 sized chunks.
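For concreteness, the chunk arithmetic works out like this. A throwaway sketch: the numbers follow from plain ceiling division and nothing here is Poser-specific.

```python
from math import ceil

def bucket_count(width, height, bucket):
    """How many buckets a tiled render needs to cover the image;
    edge buckets get clipped, hence the ceiling division."""
    return ceil(width / bucket) * ceil(height / bucket)

print(bucket_count(1920, 1080, 10))   # 192 * 108 = 20736 buckets
print(bucket_count(1920, 1080, 32))   # 60 * 34  = 2040 buckets
```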

    If you have a CPU with lots of cores, indicate their number in the Number of Threads field, have a beer and relax. However, if you have only one or two cores, and your renders are taking forever, specify 1 or 2 in the Number of Threads field and/or lower the Maximum Bucket Size.

In other words, when you have an older CPU with no spare threads to play with and renders are taking forever, lowering the bucket size should help; increasing it will assuredly make things worse. If you have a monster CPU with plenty of threads, increasing the bucket size probably means a somewhat quicker render. I seriously doubt that lowering the bucket size for complicated images will actually get "more cores rendering," but, given lots of cores, it certainly won't do any harm to instruct Poser to render in smaller chunks.



  • A question on this:

    Wouldn't it be fairly simple to develop a script that would read the hardware environment and offer a "suggestion" concerning these sorts of hardware-specific render settings that so often confuse new or even experienced users? (Granted, it may need permissions, but most have already given the parent process those.)
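Something like this, perhaps. A hypothetical sketch only: `suggest_render_settings`, its RAM argument, and every threshold below are invented placeholders, not tested Poser recommendations.

```python
import os

def suggest_render_settings(ram_gb):
    """Hypothetical helper: suggest a Number of Threads and a Maximum
    Bucket Size from the host hardware.  The cutoffs are made-up
    placeholders for illustration only."""
    threads = os.cpu_count() or 1
    if threads >= 8 and ram_gb >= 16:
        bucket = 128          # plenty of cores and RAM: larger buckets
    elif threads >= 4:
        bucket = 64
    else:
        bucket = 32           # weak machine: keep buckets small
    return {"threads": threads, "max_bucket_size": bucket}

print(suggest_render_settings(ram_gb=16))
```

A real version would also need to weigh what is in the scene, which is exactly the objection raised in the next reply.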



@morkonan I think that what's in the scene matters as well. Too many textures or hair transparencies will make a GPU render stop, but I think if you make the buckets small enough it will go through. This is why you can't have a script that figures it out for you: too many variables. For SuperFly I use either 256 or 384 for a bucket size, depending on what is in the scene. This will be different for different computers.



  • What I am trying to say is that, as CPUs become more powerful and provided you have entered the number of processors/cores in the Number of Threads field, Maximum Bucket Size becomes increasingly irrelevant.

Two CPUs, each with four cores = eight cores. Enter 8 in the Number of Threads field and have a glass of wine. Given sufficient RAM, you needn't even think about bucket size.

Maximum Bucket Size remains meaningful only to slobs who have single-core processors and six GB of RAM. Slobs like me! Because we haven't got muscular CPU resources, we really must lower the bucket size to improve our render times.

Nota bene: Erogenesis says he uses a bucket size of 1024. That's effin' gigantic! He's rendering scenes (whatever their absolute vertical & horizontal dimensions) in 1024x1024 pixel blocks! Obviously, he has a system that can handle that load. If I tried that bucket size (again, whatever the absolute dimensions), Poser would hang when I tried to render even an uncomplicated two-character scene. I haven't got the resources for so large a bucket size. Therefore, when the scene is complicated and Poser doesn't render as quickly as I wish, or simply hangs when I try to render, I lower the bucket size.

    Of course, if you have a monster CPU and you are rendering your version of Bosch's 'The Garden of Earthly Delights' at dimensions of 36 x 36 inches with a resolution of 300 ppi, you probably want to think about bucket size.



  • @ghostship said in This Bucket size thing, can it go?:

@morkonan I think that what's in the scene matters as well. Too many textures or hair transparencies will make a GPU render stop, but I think if you make the buckets small enough it will go through. This is why you can't have a script that figures it out for you: too many variables. For SuperFly I use either 256 or 384 for a bucket size, depending on what is in the scene. This will be different for different computers.

But what if it could read the scene first? Either read it directly, or read a saved scene file with the same name?

    I guess what I'm saying is that as long as we know what the standards would be and can get the information from the program, a script that would give a suggestion as to render settings would be... extremely helpful. One that worked in conjunction with D3D's excellent render settings would be ideal, of course.

    It'd clear up about half the questions regarding generic optimization of render settings on forums across the galaxy... :)