Whether we've got a low-end or high-end system, we all expect the realtime 3D revolution to continue until we achieve near parity with reality. The push forward is driven by many factors, including raw hardware performance and brilliant advances in techniques for better approximating what we see. But there's another side to the equation beyond hardware and developers: the graphics API.
Unlike CPUs, graphics processors (GPUs) do not share a common instruction set upon which tools and software can be built. To get the power of the hardware out to the public, we need a common interface that works no matter what GPU is underneath. It's left to each graphics hardware designer to take the commands generated through this application programming interface (API) and translate them into something their chip can use. Because it's the developer's single point of contact, the graphics API is incredibly important. It defines how much flexibility programmers have in using the hardware and shapes the world of high performance realtime 3D graphics.
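To make that division of labor concrete, here's a minimal C++ sketch of the abstraction at work. To be clear, every name in it is hypothetical; these are not real DirectX interfaces, just an illustration of how one common interface can sit in front of vendor-specific translation layers.

```cpp
#include <iostream>
#include <memory>

// The "common interface" applications program against
// (a hypothetical stand-in for a real graphics API).
struct GraphicsAPI {
    virtual ~GraphicsAPI() = default;
    virtual void draw(int triangleCount) = 0;
};

// Each vendor's driver translates the same call into
// commands its own chip understands.
struct VendorADriver : GraphicsAPI {
    void draw(int n) override {
        std::cout << "Vendor A command stream: rasterize " << n << " triangles\n";
    }
};

struct VendorBDriver : GraphicsAPI {
    void draw(int n) override {
        std::cout << "Vendor B command stream: rasterize " << n << " triangles\n";
    }
};

int main() {
    // The application neither knows nor cares which GPU is underneath.
    std::unique_ptr<GraphicsAPI> gpu = std::make_unique<VendorADriver>();
    gpu->draw(1000);
}
```

Swap in VendorBDriver and the application code is untouched; that, in miniature, is why a single game can run on any GPU whose vendor ships a conforming driver.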
Some of the key work done through a graphics API is taking descriptions of 3D objects in a 3D world, sending those objects and other resources to the hardware, and then telling the hardware what to do with them. This work follows a step-by-step process that we generally call a pipeline. Graphics pipelines have different stages where different work is done. Here's the general structure of a 3D graphics pipeline:
First, vertex data (information about the positions of the corners of shapes) is taken in and processed. Those shapes can then be further manipulated and re-processed if needed. Next, the 3D shapes are projected into 2D and broken down into fragments, the candidate pixels of the final image (this step is called rasterization). Each of these pixels is then processed by looking up texture information, applying lighting techniques, and so on. When the pixels are finished processing, they are output and displayed on the screen. And that's the mile-high overview of how 3D graphics work.
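To make those stages concrete, below is a small, self-contained C++ sketch of a toy software rasterizer that walks one triangle through the same steps: transform its vertices to screen space, rasterize to find which pixels it covers, shade each covered pixel, and write the result out. It's an illustration only, with hypothetical names throughout; real GPUs run these stages in dedicated parallel hardware, and real pixel shading samples textures and evaluates lighting rather than just interpolating a color.

```cpp
#include <fstream>
#include <vector>

const int W = 256, H = 256;

struct Vec2 { float x, y; };

// Stage 1: vertex processing -- here just a fixed mapping from
// normalized device coordinates [-1,1] to screen space.
Vec2 vertexStage(Vec2 v) {
    return { (v.x * 0.5f + 0.5f) * W, (1.0f - (v.y * 0.5f + 0.5f)) * H };
}

// Edge function used by the rasterizer: its sign tells us which
// side of the edge a->b the point p falls on.
float edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Stage 3: pixel (fragment) shading -- a real pixel shader would
// sample textures and compute lighting; we interpolate a color
// from the triangle's three corners using barycentric weights.
void pixelStage(std::vector<unsigned char>& img, int x, int y,
                float w0, float w1, float w2) {
    unsigned char* px = &img[3 * (y * W + x)];
    px[0] = (unsigned char)(255 * w0);  // red   from vertex 0
    px[1] = (unsigned char)(255 * w1);  // green from vertex 1
    px[2] = (unsigned char)(255 * w2);  // blue  from vertex 2
}

int main() {
    std::vector<unsigned char> img(3 * W * H, 0);

    // One triangle's vertex data, as it might arrive from the application.
    Vec2 tri[3] = { { -0.8f, -0.8f }, { 0.8f, -0.8f }, { 0.0f, 0.8f } };
    Vec2 v0 = vertexStage(tri[0]);
    Vec2 v1 = vertexStage(tri[1]);
    Vec2 v2 = vertexStage(tri[2]);

    // Stage 2: rasterization -- find the pixels the 2D triangle covers.
    float area = edge(v0, v1, v2);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 p = { x + 0.5f, y + 0.5f };
            float w0 = edge(v1, v2, p) / area;
            float w1 = edge(v2, v0, p) / area;
            float w2 = edge(v0, v1, p) / area;
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)   // inside the triangle
                pixelStage(img, x, y, w0, w1, w2);
        }
    }

    // Final stage: output -- on real hardware the finished pixels go
    // to the display; here we write them to a PPM image file.
    std::ofstream out("triangle.ppm", std::ios::binary);
    out << "P6\n" << W << " " << H << "\n255\n";
    out.write((const char*)img.data(), img.size());
}
```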
For the past dozen years (it seems longer, doesn't it?), we've seen makers of 3D graphics hardware accelerate two very prominent APIs: OpenGL and DirectX.
We recently touched on advancements tangential to OpenGL in our OpenCL article, but today our focus will be on DirectX. Microsoft's DirectX graphics API is much more heavily used in game engines than OpenGL, in large part because DirectX tends to move much more quickly and sets the bar for both hardware and OpenGL in terms of feature set and flexibility. That always makes upcoming versions of DirectX exciting to talk about: they define the future capabilities of hardware and expose improved tools to developers. Upcoming DirectX versions are glimpses into our graphical future. Currently we have a lot of DirectX 9 and DirectX 10 games available and in development, but DirectX 11 looms on the horizon.
As usual, Microsoft will be trying to time the release of its next DirectX revision with the release of compatible graphics hardware. Just as DirectX 10 arrived with Vista, DirectX 11 will be released with Windows 7. With the Windows 7 Beta already under way, we expect the OS to be done sometime this year.
Microsoft has been rather aggressive with Windows 7 scheduling in light of the rejection of Vista, so it appears they are stepping up to the plate to get everything out sooner rather than later. A little more than four years passed between the releases of DirectX 9 and DirectX 10. Having hit the streets with Vista in January of 2007, DirectX 10 has just turned two, and we are already anticipating its replacement in the very near future. As we will learn, this speedy transition should be very good for DirectX 11 adoption, as DirectX 10 hasn't even become pervasive yet: many games are still DirectX 9 only.