Today I stumbled across a really interesting article on a foreign site. It's mainly about the future direction of programming interfaces for 3D/GPU computing.
The piece centers on Nvidia's CUDA and the war between OpenCL 1.0 and CUDA. Since CUDA is Nvidia's proprietary technology and only runs on Nvidia's own GPUs, it seems to be less popular than the open OpenCL standard.
Which do you think game developers would rather use: the proprietary, vendor-locked CUDA, where any game they build runs only on Nvidia's GPUs and hardware and on no other PC at all... or the royalty-free OpenCL, where the game they write can be played on every computer in the world??
It looks like the answer is already pretty obvious~
But would Nvidia really kill off CUDA XD?? Only time will tell~
https://www.maximumpc.com/article/col...time_kill_cuda
Like many of you, the first real 3D accelerator I owned was a 3dfx Voodoo card. This was way back in 1995. DirectX and Direct3D had yet to be released to the public, and OpenGL was only used for CAD and scientific rendering apps. In those primordial times, if a game developer wanted to harness the awesome rendering power of the Voodoo hardware, he had to write his game with Glide, 3dfx’s own application programming interface (API). This was all before the open standards movement became a powerful force in development circles, and Glide offered 3dfx a major competitive advantage: If a gamer wanted to see all the kick-ass 3D effects that Glide enabled, he had to play the game on 3dfx hardware—lest he suffer Glideless, in a depressing, busted-up world of jaggy, unfiltered textures.
The 3dfx/Glide domination ended when id Software and other game developers started releasing titles that used the OpenGL API, which wasn’t dependent on 3dfx hardware (but worked with 3dfx chips through a Glide translation layer). OpenGL opened the door for other 3D chip companies to build competitive products, and thus ATI, S3, Matrox, and Nvidia entered the fray with hardware of their own.
With every new OpenGL or DirectX game released, Glide slowly transitioned from an advantage to a liability for 3dfx. As competitors like Nvidia embraced new technology and embarked on a period of incredibly rapid improvements, 3dfx remained tied to its Glide past, and, as a result, was slow to embrace new rendering enhancements, such as 32-bit color and antialiasing. Ultimately, this contributed to 3dfx’s demise, and embracing open standards allowed Nvidia and ATI to flourish.
Why are we talking about this today? Because Nvidia stands at a crossroads, with two closed, proprietary APIs that have mainstream potential: the general-purpose computing CUDA API, and the PhysX physics-acceleration API, which sits on top of CUDA. These are both promising technologies, but only owners of Nvidia hardware can harness their power. Meanwhile, there are two emerging open standards that mirror what Nvidia is doing with its proprietary development. One is OpenCL 1.0, and the other is a general-purpose GPU computing API, which Microsoft will include in DirectX 11. There are a relatively small number of consumer applications that use CUDA, PhysX, or OpenCL right now, but the possible applications for the tech are endless—grossly simplified, these APIs let graphics chips perform CPU-like functions. The question Nvidia needs to be asking is simple: Will developers write their general-purpose GPU computing apps using a proprietary API that works on only a subset of PCs—those stuffed with Nvidia hardware—or will they use an open API that will work on every PC on the market?
Nvidia’s path is clear: It needs to stop trying to convince us that closed APIs are good, and instead embrace OpenCL and Microsoft’s yet-to-be-named solution. It needs to port PhysX to run on one of the open APIs, then use PhysX as a platform to advertise the kind of power that Nvidia delivers (with the recipients of all this messaging being ATI diehards and anyone considering the forthcoming Larrabee GPU from Intel).
By focusing on what it's always done well—building kick-ass hardware—instead of force-feeding us closed APIs, Nvidia will thrive. As for CUDA? It's served its purpose, but its time has passed. It's time to kill CUDA.
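For anyone wondering what the article's "grossly simplified, these APIs let graphics chips perform CPU-like functions" actually means in code, here is a minimal CUDA sketch of my own (the kernel name, buffer names, and sizes are just illustration, not from the article): it adds two arrays of floats, which is plain arithmetic with no triangles or textures involved, fanned out across thousands of GPU threads. An OpenCL version of the same kernel would look almost identical; the difference is that it would run on any vendor's hardware, which is exactly the article's point.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Each GPU thread handles one array element: ordinary arithmetic
       (no graphics) spread across thousands of threads. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                 /* 1M elements */
        size_t bytes = n * sizeof(float);

        /* Host-side buffers. */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* Device-side buffers, plus explicit copies to the GPU. */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        /* Copy the result back (this also waits for the kernel). */
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("hc[0] = %f\n", hc[0]);         /* expect 3.000000 */

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The catch, of course, is that this compiles with nvcc and runs only on Nvidia GPUs, which is precisely the lock-in the author is arguing against.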