GPU - the other processor
Written by: Tom LANSFORD
Friday, 14 August 2009, 09:00

Graphics Processing Unit, graphics processor, GPU: most of us have finally transitioned from thinking about a VGA controller to a pretty amazing chip that can now render the film 'Toy Story' in real time. And in our work we are spoiled with realistic graphics that we interact with, rather than waiting until we come back from lunch. Those of us who care a bit about our workstations realize that the old VGA-controller-turned-GPU can run special programs written by experts just for our application - and the workstation also runs the latest games. Cool...


But the GPU holds more surprises for us. Today the name 'GPU' is becoming a misnomer - and why? Because for the last ten years your workstation has been incubating a supercomputer.



3D graphics processing is a terribly difficult problem: it relies on floating-point calculations - the most expensive kind of processing - for millions of pixels at the same time, and it must do all of that 60 or more times per second. So the evolution of the GPU over the last decade gradually gave birth to a second processor next to the CPU in your workstation - a co-processor, if you will, which is extremely good at massively parallel floating-point calculations: realistic 3D graphics, of course, but a whole lot more.


Computational fluid dynamics, molecular dynamics, financial modeling, and CAT-scan reconstruction pose problems which mirror that of 3D graphics: they demand floating-point performance, they work on huge data sets, and they can be processed in parallel. Results can be delivered not in hours but in minutes; not in minutes but in real time.


It was never obvious that the GPU would bring the same performance benefits to, for example, CFD that it brings to real-time graphics. The GPU needed to become more programmable, so it moved from fixed-function hardware to programmable pipelines. The programming language needed to become more accessible, so it moved from graphics interfaces like OpenGL and DirectX to mainstream languages like C. And the GPU needed a new set of tools for developers, so new compilers and debugging tools were written for it. And after all that, it still needed a large dose of evangelism.


Good news for us - the corner has already been turned. More and more applications today take advantage of the new supercomputer in your workstation. In the next few years this trend will explode as operating systems allow our applications to easily recognize this other processor and to use the supercomputer hidden inside.


We have finally gotten used to calling them graphics processors. And now the GPU delivers much, much more than great graphics.


Last updated on Monday, 1 February 2010, 13:50
