How much for graphics acceleration

  • I just have a quick question I have been unable to find an answer for so far. I am building a new workstation PC, and the primary purpose will be comic book creation using Clip Studio Paint. I know the software supports graphics acceleration, but I was wondering how much? Some programs have a limit on how much they can use. I'm trying to decide if getting a 4GB Quadro card is worth almost twice the cost of the 2GB version. If the software supports a 4GB card, then I'll keep it. If not, there's no need to spend the money.

  • @dustincox

    The rule with hardware is to always get the best that you can afford. In a year's time a new card will already be an old card... while the bargain priced three year old card will be four years old.

    The expression goes, "It's always a bad time to buy a computer," because the hardware will always be surpassed in a few months.

    I'd say get the better card to have the better card when it's next year... even if Clip can't use all of it now. Clip might have updates by next year; or you might switch to another app that can use the card... or could use a card that's twice as fast with double the RAM of that old 4GB Quadro card bought way back in 2017.

  • I just found a real answer, which hasn't been provided here so far. The graphics card has no maximum, but it does have a minimum. Fortunately, that minimum is only 256MB, which means that even a 2GB card has 8x the required memory. After all, the software is vector based, so it's not that demanding in the first place. The graphics card is only used for the 3D sections of the software, so some people might not use the acceleration at all.
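    (To make that headroom figure concrete, here's the arithmetic as a quick Python check; the 256MB minimum is just the figure quoted above, not something verified independently.)

```python
# Headroom check: card VRAM versus the 256 MB minimum quoted above.
MIN_VRAM_MB = 256          # quoted minimum for the software
CARD_VRAM_MB = 2 * 1024    # a 2 GB card

headroom = CARD_VRAM_MB // MIN_VRAM_MB
print(f"A 2 GB card has {headroom}x the minimum VRAM")  # 8x
```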

  • @dustincox

    Oh but it has, I think you've greatly missed the point. It's good that you found short term information on Clip Studio that you can use. Good luck with it; but the barb was needless, and unbecoming.

  • @mechanaut the question I asked is whether the software has a maximum amount of RAM it can use from the video card for hardware acceleration. It's a valid question; a lot of programs have a limit, and that question had yet to be answered here. The closest I have gotten is this: there is no maximum limit, but unless you use all of the 3D features extensively, the acceleration doesn't even kick in. Also, I never said which models of graphics cards I was comparing, so the whole "one is newer than the other" argument is invalid. Both are actively manufactured 2017 models.

  • I'm just going to throw this out there.

    Programs can have a minimum amount of GPU memory.
    (Basically, the minimum display supported: resolution, color depth, OpenGL version, etc.)
    Programs generally cannot have a maximum on the GPU memory consumed/available.
    (Unless you're dealing with servers running virtual machines, specialized GPUs, etc., but forget about that for a bit.)

    If you're running Windows, most programs don't really know much about the memory in your system other than the amounts.
    Just about everything lives at a virtual address, and the operating system has it somewhere else.

    The operating system handles what program gets how much memory, when, and if it gets paged out to make room for some other task.
    When it comes to GPU memory, there is a catch.

    Paging system memory is rather easy to do.
    Paging GPU memory is another story, and it also depends on the hardware.
    Let's face it: you can run out of CPU memory and the system will still work (very slowly, though), paging to the hard drive, etc.
    Run out of GPU memory, and things will go horribly wrong.
    The program will continue to ask for more memory (ctxcreate, etc.) and there won't be any left.
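    (The "paged out to make room" idea above can be sketched as a toy least-recently-used eviction loop in Python. This is only the concept, nothing like the real Windows memory manager:)

```python
from collections import OrderedDict

# Toy sketch of "page out to make room": a fixed-size memory that
# evicts the least-recently-used page when a new one won't fit.
CAPACITY = 3  # pages that fit in "RAM"

ram = OrderedDict()  # page id -> data, least recently used first

def touch(page, data=None):
    if page in ram:
        ram.move_to_end(page)  # mark as recently used
    else:
        if len(ram) >= CAPACITY:
            evicted, _ = ram.popitem(last=False)  # page out the oldest
            print(f"paged out {evicted}")
        ram[page] = data

for p in ["A", "B", "C", "D"]:  # the fourth page forces an eviction
    touch(p)
print(list(ram))  # ['B', 'C', 'D']
```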

    The main reason you want as much GPU memory as possible is simple.
    Everything you see uses it. So does a lot of stuff you might never think of.

    In Windows 10, the desktop is hardware accelerated. Everything on the screen is.
    Chances are really good the browser you are using to view this is as well, no matter what operating system you are on.
    Lots of stuff uses hardware acceleration, even if the program never asks for it in its code.

    Now onto video cards.

    Would I buy a Quadro card?
    Not a chance.
    I don't have a single program that requires it, not a one.
    And worse yet, the newer GTX gaming cards have more cores, more memory, higher clock speeds, etc.

    One of the main benefits of a Quadro card is applying a texture to the back side of the normals.
    IE: texturing both sides of something without duplicating the mesh in memory.
    With the amount of memory in a gaming card today, that's a moot point now.

    The minimum card I would recommend for anyone building a 3D system is a GTX1070 8gig card.
    Buying anything less than that is rather pointless today.
    If you can swing it, a GTX1080TI is a beast of a video card.

    Just about everything uses hardware acceleration, even if it isn't written into the code.
    You might as well have a card that won't break a sweat doing it, and has plenty of memory from the word go.

  • @shvrdavid this is a good answer. And it is true that most things use hardware acceleration. I also agree that an 8GB card would be better, if I were building a 3D rig. At the moment, however, I'm just building a 2D graphics rig for static comic books. I've thought about the process carefully. I'm getting plenty of overhead on the other parts, so upgrading it to a 3D rig would not be difficult. I would basically just need a different graphics card, which, using some form of Quadro card, can later be expanded with a Tesla unit and more RAM. For now I just need something that runs. Unfortunately, the GPU is becoming a sticking point, because the processor is one of the AMD Ryzen series and so has no integrated graphics core. That means I can't split the rendering (integrated graphics to drive the monitors, GPU for the graphics software), which the Quadro cards can do.

  • My render cow uses a GTX 1070 8gig Gigabyte Mini OC, and runs monitors at the same time.
    The processor is a first-gen i3 with a GPU in it, but I can't get both to work at the same time and be stable.
    I have the timeouts raised in the video drivers and don't have any issues.
    Each HD monitor takes about 0.8 gig of the memory, leaving me plenty to render with.
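    (For scale, the raw framebuffer for a single 1080p monitor is tiny; a per-monitor figure like 0.8 gig mostly comes from compositor surfaces, back buffers, and driver overhead on top of it. Quick back-of-envelope in Python:)

```python
# Raw front-buffer size for one 1920x1080 monitor at 32-bit color.
# The driver/compositor allocates much more per display than this
# (back buffers, window surfaces, caches), which is where a figure
# like 0.8 GB per monitor comes from.
width, height, bytes_per_pixel = 1920, 1080, 4
framebuffer_mb = width * height * bytes_per_pixel / 1024**2
print(f"{framebuffer_mb:.1f} MiB")  # prints 7.9 MiB
```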

    I have no idea what GPU you have now, but let me give you an example.
    This notebook has a GTX765m with 2gig dedicated. (GK106 Kepler GPU)
    A GTX1070, driving one monitor, is about 20 to 30 times faster than the 765m when rendering.
    20 to 30 times faster is nothing to sneeze at.
    Anything prior to the Pascal GPUs, short of the top-end Maxwell GPUs, can't hold a candle to a 1070 or higher Pascal.
    Factory overclock the 10 series (or do it yourself), and no previous generation can keep up with it.
    CUDA core for CUDA core, a Pascal card runs circles around previous generations.
    And guess what, all 10-series GPUs self-overclock anyway...

    If you want to go the Quadro-then-Tesla route, that is up to you.
    Add a Tesla and you will probably need another power supply....

    Just consider that a 10-series GTX will do all of the above for about the same price, using about 1/4 the juice as well.
    If you're not running programs that require Quadros, well, that's why they are so cheap now....