suggestions for computer build



  • My current computer is mostly OK for what I'm doing with it, but I might be getting a second video card, so I will need to build a new computer that has two video card slots.

    The plan so far is a full-size ATX board and case, and a PSU larger than my current 620 W unit.

    1. The new mobos can handle up to 64 GB of RAM. I have 16 GB in my current system and haven't run into problems, but if there is some reason to go to 32 or 64, tell me.

    2. I prefer Intel processors and currently have an i5-2400. I don't have any speed issues with the i5, so I might just go with another i5. What CPU would you suggest?

    3. I used to sell and repair PCs about five years ago. Back then SSDs were total crap (we had huge returns of failed SSDs). What is the deal with them today? Are they more reliable now?

    4. I would like to get a Blu-ray burner. Does anybody use these?


  • Poser Ambassadors

    My recommendations for a Poser computer:

    • Case which accepts E-ATX motherboards

    • Server/workstation motherboard with dual CPU sockets. Socket choice depends on which processors your budget will allow.

    • Pair of HyperThreaded multi-core Xeon processors. Which series (which determines socket type), core count, and clock speed depend on budget.

    • Registered (server) memory, with ECC (error correcting code) and metal heat spreaders.

    The cheapest option with good performance is the X5650-X5690 Westmere Xeons, which can be bought (used) in pairs at modest cost. They use socket LGA 1366. A pair will give you 24 render threads.
    [Image: Westmere Xeon clock speeds]

    [Image: SuperMicro MBD-X8DTi-F-O motherboard, orthographic view]

    Of course, if you can afford a pair of the new Xeons, that would be even better. :D



  • @seachnasaigh LOL. Yes, for a long time I considered a used dual Xeon workstation like you always suggest. But now that I have a decent video card for GPU rendering (it was free from my nephew) and am probably getting a second GPU (also free), I'd like to stick with something I can build from parts that won't cost me an arm and a leg. If there is a dual Xeon workstation out there for around $600-800 that has slots for two GPUs, I'd consider that.



  • To answer your questions:

    1. That depends. Is the memory you have capable of running at the max memory speed of the new mobo? And yes, with another video card (or two), you should go up to the next bank size to be safe, i.e. 24 GB (48 GB if dual CPU) or so in proper, matched banking (see the sketch after this list for the arithmetic). Multiple GPUs like memory, and fast memory at that.

    2. In a GPU render box, core and memory speed are what you want. You're going to get multiple cores in any CPU; just get a fast one. Dishing stuff out doesn't take a lot of cores; you want core speed and the fastest QPI (max memory transfer speed).

    3. SSDs have come a long way; check out the Samsung EVOs. They have come down drastically in price, and I have yet to have one die (I think I have five of them now).

    4. Why, do you need coasters? No, I'm just kidding. Basically only a few companies make these internal drives and the rest are just rebranded, so any brand-name one will be fine. It never hurts to do some homework on actual real-world performance, though.
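    To put rough numbers on the matched banking in point 1, here is a minimal sketch assuming a triple-channel Westmere-era board with six slots per CPU and 4 GB sticks; the slot count and stick size are example assumptions, not the spec of any particular board:

    ```python
    # Rough sketch of matched-banking arithmetic (example values, not a specific board).
    # Populate DIMMs in complete sets of the channel count so every channel carries
    # the same amount, using identical sticks.

    def matched_capacities(channels_per_cpu, cpus, dimm_gb, slots_per_cpu):
        """Yield the total capacity (GB) after each fully matched bank is filled."""
        banks = slots_per_cpu // channels_per_cpu   # how many complete sets fit
        for filled in range(1, banks + 1):
            yield filled * channels_per_cpu * cpus * dimm_gb

    # Triple-channel board, 6 slots per CPU, 4 GB sticks:
    print(list(matched_capacities(3, 1, 4, 6)))  # single CPU -> [12, 24]
    print(list(matched_capacities(3, 2, 4, 6)))  # dual CPU   -> [24, 48]
    ```

    Those are where the 24 GB / 48 GB figures above come from.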

    As far as server boards go, there are a lot of them out there, some of which are designed especially with GPU compute in mind.
    The key is finding those when they come up for sale; they are not as common as ones with, say, one or two PCIe 8x slots...

    Most server boards with few, or worse yet no, PCIe slots (onboard Matrox video, etc.) are usually optimized to run lots of virtual machines at once, with hyper-threading turned off in the BIOS. They are basically database machines that see heavy CPU usage; they rely on high-cache, high-core-count CPUs for performance, and the BIOS is set up accordingly.

    Then there are ones at the other end of the spectrum. They can have up to 8 or 9 PCIe slots and are designed for high-clock-speed CPUs with many PCIe cards installed at once. You can use them for communication, GPU compute, NAS units, etc. They normally support 256 or 512 GB of memory as well. 1 and 2 GB high-speed sticks are cheap, and there are a lot of memory slots, so you can do 24 or 48 GB for peanuts.

    Hmm, where do I find these boards?
    Look for that type of motherboard in a used server if you want to build a GPU server, even if it is stripped and you need to get CPUs for it. You will thank yourself later.

    Many of the SuperMicro E-ATX X-series server motherboards (X9xxx and X10xxx, for example) have multiple PCIe 16x slots plus multiple PCIe 8x ones (up to 7 total, I think).
    Because it is a dual-Xeon system, you have at least twice the CPU hardware PCIe lanes available compared to a single CPU; compared to an i5, about three times as many with some Xeons.

    What does that mean? On some boards, all three video cards can run at 16x; worst case is two at 16x and one at 8x.
    It will also be about the fastest motherboard to ever carry that controller chip series: the X9 boards use the C602 (DDR3) controller and the X10 boards the C612 (DDR4).

    Food for thought: you can find those boards used, dirt cheap, once in a while.



  • @shvrdavid I'm looking at some mobos online and I'm not sure about the PCI slots. It looks like I want PCI Express 3.0 x16. Some say that when the second slot is filled it runs at x8 or x4. Does this matter? It seems like I'd want both slots to run at x16.



  • Slot speed depends on the number of lanes and how many the rest of the motherboard uses (SATA, etc.). For a render box you want PCIe slots (version 2 or 3 is fine), not the old PCI ones. If a video card drops from 16x to 8x, or even 4x, the impact is not much when doing GPU compute. Sure, there is some, but keep in mind there are basically only two different ways to do memory swaps on a video card. There is a slight decrease in speed, yes, but nothing to get up in arms about; it won't make much difference in compute time.

    GPU compute is fastest when everything is loaded onto the card first, and then off it goes crunching. Once the scene loads into GPU memory, you're off to the races.

    PCIe is a parallel setup; the slots are sets of communication lanes. In a scenario where there is no preview, it is basically only a few memory transfers and the render is done.

    Typically a low-end CPU has 32 PCIe lanes, 16 of which are usually used by the mobo. High-end CPUs have 48.
    Going dual-CPU with Xeons doubles that to 64 and 96 lanes respectively.
    PCIe lanes cannot be shared.
    A motherboard with 16 free lanes can run cards at
    16x, or
    8x 8x, or
    8x 4x 4x, or
    4x 4x 4x 4x

    A motherboard with more free lanes, and a controller and CPU that support it, uses a similar pattern.
    On a system with 24 free lanes, it will start at 16x, then 16x 8x, and so on.

    The link widths add up to at most the number of free lanes, no matter how many cards are plugged in; otherwise you simply won't use all of them.
    On a multi-CPU server you may end up with lanes that never get used, and you can start with multiple 16x slots at the same time.
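    If it helps to see the arithmetic, here is a toy sketch of that kind of split. Real boards follow the chipset's bifurcation rules in firmware; this only shows how the link widths in the examples above add up to the free-lane total:

    ```python
    # Toy model of splitting a fixed pool of free PCIe lanes across video cards.
    # Not how any real BIOS does it; it just reproduces the width patterns above.

    def split_lanes(free_lanes, cards, widths=(16, 8, 4)):
        """Greedily give each card the widest link that still leaves room for the rest."""
        links = []
        for remaining in range(cards, 0, -1):
            for w in widths:
                # keep at least the minimum width available for every card still waiting
                if w <= free_lanes - (remaining - 1) * widths[-1]:
                    links.append(w)
                    free_lanes -= w
                    break
        return links

    print(split_lanes(16, 1))  # [16]
    print(split_lanes(16, 2))  # [8, 8]
    print(split_lanes(16, 3))  # [8, 4, 4]
    print(split_lanes(16, 4))  # [4, 4, 4, 4]
    print(split_lanes(24, 2))  # [16, 8]
    ```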

    It's a bit more complex than that in detail, but nothing to really worry about.
    This is where multi-CPU servers shine, though, which goes back to the two types of server boards: CPU-based, where most of the lanes are used by the motherboard for additional controllers or to speed everything up, and expansion-based, where most of the lanes are dedicated to expansion slots.



  • I got massively better value by buying a dual Xeon behind the leading edge than if I'd bought state of the art.
    I'm still far from happy with SSDs. The MTBF is still pitiful IMHO, and what concerns me is that you can lose a whole drive irrecoverably with no warning sign. At least with a normal drive, you can change the electronics. AND I don't like the way that you "delete" with them.
    I have a couple of LG BR burners. They seemed like a great idea, but you quickly get bored with the novelty. As a large-scale backup medium, though, they are more convenient in terms of space if not compatibility. You can also use them to make your own movies from MKVs or other video sources.



  • Hi there

    As always, it all depends on your budget and how much you are willing to spend.

    Regarding the RAM, I would suggest 32 GB as a minimum if you also render with other engines, and if you like high-poly scenes, 64 GB is optimal I would say; 128 GB is really not needed for your needs.

    Regarding AMD, I would wait on Ryzen there; that CPU is expected to be about as fast as the i7-6900K, which is an 8-core/16-thread CPU, and should cost around $300-$500 USD at most. They will support multi-GPU, DDR4, etc.

    SSDs: I'm running 3 SSDs with no issues at all. Maybe they failed more in the past, but these Samsung SSDs are the best, and yes, they cost a bit more. I would stay away from the Crucial MX or BX series; they are poor SSDs for the money.

    I would rather get external HDDs, which don't cost a lot, and use them as regular backup drives.

    I would suggest something like this

    This combo already has 128 GB RAM and 2x E5-2670, which are 8-core/16-thread CPUs, so you will have 16 cores / 32 threads in total for rendering.

    http://www.natex.us/Intel-S2600CP2J-Motherboard-Kit-p/s2600cp-sr0h8-128gb-12800.htm

    PSU: 850 W+ from EVGA, Silverstone, or Seasonic; personally I would go with 1000 W plus.
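    As a back-of-the-envelope sanity check on those wattages, here is a rough sketch; the TDP-style figures below are placeholder assumptions, not measurements of any specific parts, so swap in your own components:

    ```python
    # Crude PSU sizing: sum rough component power draws and add headroom.
    # All wattages are placeholder estimates, not specs of particular parts.

    parts = {
        "CPUs (x2)":          2 * 115,  # assumed ~115 W per Xeon
        "GPUs (x2)":          2 * 250,  # assumed ~250 W per video card
        "Motherboard + RAM":      75,
        "Drives, fans, misc":     50,
    }

    load = sum(parts.values())
    headroom = 1.25  # ~25% margin so the PSU isn't run near its limit
    print(f"Estimated load: {load} W, suggested PSU: {load * headroom:.0f} W or more")
    # With these placeholder numbers: ~855 W load -> ~1069 W, hence 850 W+ as a floor
    # and 1000 W+ as the comfortable choice.
    ```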

    Although this board does have only x8 slots, that should be OK for your needs; the difference between x8 and x16 in rendering is not that big. I'm running my GPUs at x16 and x8.

    Hope this helps there

    Thanks, Jura



  • @jura11 @ghostship

    Just to clarify what I meant by server mobo types: the one jura11 pointed out is a CPU-based design. In other words, it will crunch numbers CPU-wise with the best of them.

    Read the specs on that mobo and you will get to this:
    "Shouldn't be used for virtual machine applications that require PCIe passthrough."

    All but 8 lanes are dedicated to supporting the CPUs, it doesn't have onboard video, and your PCIe expansion is severely limited.

    If you want to build a screaming CPU system, jura11 pointed out a really good one.
    If you want multiple GPUs, you need the other style of board, which sacrifices some CPU oomph.



  • @shvrdavid said in suggestions for computer build:

    @jura11 @ghostship

    Just to clarify what I meant by server mobo types: the one jura11 pointed out is a CPU-based design. In other words, it will crunch numbers CPU-wise with the best of them.

    Read the specs on that mobo and you will get to this:
    "Shouldn't be used for virtual machine applications that require PCIe passthrough."

    All but 8 lanes are dedicated to supporting the CPUs, it doesn't have onboard video, and your PCIe expansion is severely limited.

    If you want to build a screaming CPU system, jura11 pointed out a really good one.
    If you want multiple GPUs, you need the other style of board, which sacrifices some CPU oomph.

    @shvrdavid

    I'm pretty sure this PCIe passthrough is what you use in VM-based machines; in normal operation I'm pretty sure the GPU will be used as it should. I'm also pretty sure my friend has this board and is using it for rendering; he is running 4 GPUs (Titan X Pascal) on it. As for an onboard GPU, I'm not sure if you really need one.

    I'm thinking of building a similar system next, but I will still wait on AMD Ryzen first and then decide.

    Hope this helps

    Thanks,Jura



  • You're correct, Jura, but at the same time you can't share 8 lanes really well either.

    If you have a motherboard that has 64-128 PCIe lanes dedicated to expansion, then it is a different story.

    My whole point was that both directions, server-motherboard-wise, have specific advantages and disadvantages.