Clustering computers for rendering



  • How cool would it be to "cluster" a few computers or laptops to act as one unit for rendering?
    Especially if the clustered unit could share the combined RAM. Would this be possible?


  • Poser Ambassadors

    @krios I doubt it would be feasible to pool the memory, but it is very doable to tackle a single large (pixel-dimension) render of a complex scene with high-quality render settings by spreading it out across a network.

    You can use Reality to export Poser scenes to the Lux render engine, and Lux can network a render.
    Vue Infinite can run renders via HyperVue, which networks the render.

    I fervently wish for Poser12Pro to have networked rendering (of a single frame/render) enabled for Queue Manager.
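    For anyone wondering what distributing a single render looks like in practice, here is a minimal sketch of the general idea in Python (toy resolution, a stub standing in for a real render engine; this is not Lux's or HyperVue's actual protocol): each worker renders one horizontal strip of the frame, and the master stitches the strips back together.

      # Toy illustration of single-frame network rendering: split the frame
      # into horizontal strips, "render" each strip on a separate worker
      # process (standing in for remote machines), then reassemble the frame.
      from concurrent.futures import ProcessPoolExecutor

      WIDTH, HEIGHT, WORKERS = 640, 360, 4  # small toy resolution

      def render_strip(bounds):
          """Stub renderer: returns (y_start, rendered rows as bytes)."""
          y_start, y_end = bounds
          rows = [bytes(((x ^ y) & 0xFF) for x in range(WIDTH))
                  for y in range(y_start, y_end)]
          return y_start, rows

      if __name__ == "__main__":
          step = HEIGHT // WORKERS
          jobs = [(i * step, HEIGHT if i == WORKERS - 1 else (i + 1) * step)
                  for i in range(WORKERS)]
          frame = [None] * HEIGHT
          with ProcessPoolExecutor(max_workers=WORKERS) as pool:
              for y_start, rows in pool.map(render_strip, jobs):
                  frame[y_start:y_start + len(rows)] = rows
          print(f"assembled {HEIGHT}x{WIDTH} frame from {WORKERS} workers")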


  • Poser Ambassadors

    You can cluster computers together: share CPU cores, pool memory, and pool high-end GPUs (Tesla, FirePro, etc.).

    Doing so requires something like a VMware hypervisor running as the boot OS on all of them, configured to pool resources, and Windows/OS X/etc. running in a virtual machine that spans all of them.

    Off the top of my head, you are limited to 64 physical CPU cores per VM in some versions of VMware, which is plenty... and the memory pool can be in the terabytes.

    You're going to need a really fast network as well.
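    For a rough sense of why (ballpark figures I'm assuming, not benchmarks): once memory is pooled across machines, some RAM accesses become network transfers, and even 10 gigabit Ethernet is far slower than local RAM.

      # Back-of-the-envelope: local RAM bandwidth vs. Ethernet (assumed figures).
      local_ram_gbps = 25.0      # ~dual-channel DDR3 in GB/s (assumption)
      links = {"1 GbE": 0.125,   # 1 Gb/s expressed in GB/s
               "10 GbE": 1.25}   # 10 Gb/s expressed in GB/s

      for name, bw in links.items():
          print(f"{name}: local RAM is ~{local_ram_gbps / bw:.0f}x faster")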



  • @shvrdavid
    Thank you for the info David!
    It might take some effort to build up the courage to try this, but it's good to know that it's doable.
    Will follow your VMware lead to learn more.

    @seachnasaigh
    If Lux can network a render and thus overcome the RAM limitations (8 GB) of the individual laptops, then it would be worth looking into.



  • @seachnasaigh
    Imagine if you could pool your blades into one virtual machine.
    You'd end up with a supercomputer to render your hi-rez images.


  • Poser Ambassadors

    @krios The speed increase would not be as great as you think for 22 machines; running them "pooled" as a virtual machine will have the LAN (Ethernet) connections (especially the switch/router) as a speed bottleneck.

    I could still do okay; the workstations each have dual Ethernet ports, as do the C1100 blades; the r610 blades have four LAN Ethernet ports, plus a diagnostic port, a KVM monitoring/command port, and another whose function I don't know.

    The idea would be to "gang NICs", i.e., run Cat6 cables from all LAN ports into the network switch/router. That switch/router will have a major traffic burden, though. And I'd need to have a 64-port gigabit switch to make all of those connections. @shvrdavid would be more knowledgeable than I am on this.
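    As a rough back-of-the-envelope (port counts from above; gigabit ports and, for round numbers, two ports per box are my assumptions):

      # Rough traffic estimate for the ganged-NIC setup (two gigabit ports
      # per box assumed for round numbers; the r610s actually have four).
      machines, ports_per_box, gbps_per_port = 22, 2, 1.0
      total_ports = machines * ports_per_box      # cables into the switch
      peak_gbps = total_ports * gbps_per_port     # worst-case aggregate load
      print(f"{total_ports} switch ports, up to {peak_gbps:.0f} Gb/s aggregate")

    That is why the switch's backplane, rather than any single cable, ends up as the choke point.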

    The more elegant solution is to have the master workstation's Queue Manager distribute a bucket to each remote (render slave); that way, all of the render-in-process communication between CPUs and RAM is internal within each single machine. Those motherboard connections are much faster than network connections.
    This is how HyperVue distributes a render across a network, handing out "tiles" (buckets).
    Lux distributes a number of samples; Superfly could theoretically do likewise through Queue Manager, but I would think it would be simpler for SM's code-writing team to distribute buckets, which would work for both Firefly and Superfly.
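    To make the bucket idea concrete, here's a toy sketch (hypothetical names; this is not Queue Manager's actual API): the master queues up rectangular buckets, each remote pulls one, renders it entirely with its own local CPU and RAM, and only the finished pixels cross the network.

      import queue, threading

      WIDTH, HEIGHT, BUCKET, REMOTES = 1920, 1080, 256, 4

      def make_buckets():
          """Carve the frame into rectangular buckets (x, y, w, h)."""
          for y in range(0, HEIGHT, BUCKET):
              for x in range(0, WIDTH, BUCKET):
                  yield (x, y, min(BUCKET, WIDTH - x), min(BUCKET, HEIGHT - y))

      def render_slave(name, todo, done):
          """Stand-in for a remote: pull buckets, 'render' locally, ship results."""
          while True:
              try:
                  x, y, w, h = todo.get_nowait()
              except queue.Empty:
                  return
              done.put((name, x, y, w * h))  # w*h stands in for rendered pixels

      todo, done = queue.Queue(), queue.Queue()
      for b in make_buckets():
          todo.put(b)
      remotes = [threading.Thread(target=render_slave, args=(f"remote{i}", todo, done))
                 for i in range(REMOTES)]
      for t in remotes:
          t.start()
      for t in remotes:
          t.join()
      print(f"{done.qsize()} buckets rendered by {REMOTES} remotes")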



  • @seachnasaigh said in Clustering computers for rendering:

    @krios The speed increase would not be as great as you think for 22 machines; running them "pooled" as a virtual machine will have the LAN (Ethernet) connections (especially the switch/router) as a speed bottleneck.

    Sounded too good to be true. But it would still be an interesting experiment to practice hardware skills.
    Maybe Queue Manager will come around in the next few versions, especially if more people show interest in bucket distribution.


  • Poser Ambassadors

    @krios I think it would be worthwhile for 2, 3, maybe 4 computers. But beyond that, I suspect you'd hit the point of diminishing returns on account of the traffic jam through your network's switch/router.
    Just my hunch. I may be utterly wrong.
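    A toy model of that hunch (both figures are assumed, not measured): give each additional machine a fixed slice of network/coordination cost and the speedup flattens out, then falls.

      # Toy diminishing-returns model: perfect work-splitting plus a fixed
      # network/merge overhead per extra machine (both figures assumed).
      render_work = 100.0       # single-machine render time, arbitrary units
      overhead_per_node = 2.0   # assumed coordination cost per extra machine

      for n in (1, 2, 4, 8, 16, 22):
          t = render_work / n + overhead_per_node * (n - 1)
          print(f"{n:2d} machines: {render_work / t:4.1f}x speedup")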



  • @seachnasaigh
    Thank you for the explanation, much appreciated.