Live From Nvidia’s GPU Technology Conference
This editor is live at Nvidia’s big event: their three-in-one GPU Technology Conference, running for three days at the Fairmont Hotel in downtown San Jose, California. We will give you our impressions as the event unfolds and again after each day. Later, you can expect a more polished summary once we have reviewed all of our notes, transcribed our audio and the HD video we are shooting, and made sense of everything we saw. We will wrap it all into a series on Nvidia’s GPU computing and what this conference is really about. It is a revolution that began as part of Nvidia’s vision only a few years ago: to make the GPU “all purpose” and just as important as the CPU in computing.
Today, September 30, was the opening day, and we saw Nvidia’s superstar CEO, Jen-Hsun Huang (whom everyone calls “Jensen”), give his keynote address to a completely packed audience of more than 1,000 people, including more than 100 of us in the press, with the overflow directed to other rooms. In fact, Nvidia streamed the entire keynote in real time in high definition. It was far more impressive live, however, because it was presented in 3D – and noticeably better 3D than what we were used to.
Jensen opened the conference by asking the audience not to look at him but at the giant screens, with our 3D glasses on. We saw an explosion of bubbles streaming everywhere – “live” next to and around him on the stage, but far more impressive on the larger-than-life big screens – rendered in real-time 3D that looked improved over what we saw a year ago at NVISION08.
Jensen also demoed an interactive ray-tracing simulation of a sports car that came as close to photo-realism as we have seen, simulating the light from every source – and doing it in real time.
Jensen then touched on the importance of 3D – as the future of movies and television – and gave away a few brand new Fujifilm FinePix Real 3D cameras, released today for $600, which capture 3D images for the enthusiast:
http://www.fujifilm.com/products/3d/camera/finepix_real3dw1/
Jensen then began his address in earnest with a quick flashback through Nvidia’s history and the three phases they went through to transform their industry. In the beginning, graphics pipelines were largely fixed-function, until Nvidia introduced the Graphics Processing Unit in 1999. Then came programmable shaders, which they say caused a revolution in the entire industry. And now, for much more realism, they are developing ray tracing as part of an engine for the GPU.
Going back further, Jensen contrasted their 1997 Riva 128 – with 3 million transistors – with today’s newly announced architecture, “Fermi,” with over 3 billion. It is an entirely new design; the Fermi architecture is said to be the foundation for the world’s first computational graphics processing units (GPUs), delivering new breakthroughs in both graphics and GPU computing. Jensen did not spend much time on graphics but went on to present GPU computing as the “next big thing.” Though still very new, more than 90,000 developers are already working with CUDA, and over 200 universities teach CUDA as part of their computer programs.
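For readers who have not seen CUDA, the appeal is that a kernel looks like ordinary C with a parallel launch syntax. The vector-add sketch below is our own minimal illustration of the programming model, not code from Nvidia’s presentation:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element; blockIdx/threadIdx identify the thread.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // each element should be 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The same source compiles with nvcc and scales across however many cores the GPU has, which is what makes the developer and university numbers above plausible.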
Jensen then went on to talk about the impact of speeding things up 50x or more. He used the analogy of going from San Francisco to New York in three minutes as being “transformative”: it would completely transform real estate, for example, because you could live anywhere.
Jensen then showed how GPU parallelism benefited Johns Hopkins University in its work simulating the first few seconds of a levee break. Their incredibly intensive and accurate simulation time was cut from 24 days to 4 hours – roughly a 144x speedup! Better still, they can now refine their simulations further without wasting any time.
Jensen then introduced David Robinson, CEO of Techniscan, to talk about his company’s use of ultrasound for early detection of breast cancer tumors, work that requires massive computational power. Two Tesla C1060s finish in under 30 minutes, where four CPU clusters previously took more than twice as long. Besides saving time and money, they now hope to detect even smaller cancers earlier with still more GPU computing power.
So we see a great need for even more GPU computational power. Enter “Fermi.” Nvidia’s new architecture delivers a feature set that accelerates performance across a much wider array of computational applications. Jeffrey A. Nichols, associate laboratory director for Computing and Computational Sciences at Oak Ridge National Laboratory, announced plans for a new supercomputer that will use the new Nvidia Fermi GPUs to develop regional climate models. These regional models will require increased physics, higher resolutions, and at least double their current compute power – which, evidently, Fermi will deliver. Cray, Dell, HP, IBM and Microsoft will also be using Fermi in their supercomputers. Jensen later gave the press an ideal CPU-to-GPU ratio (referring to Oak Ridge’s planned supercomputer) of one quad-core CPU to four GPUs.
Nvidia’s intention is to make GPUs general-purpose parallel computing processors that also have amazing graphics – not just “graphics chips” anymore. Jensen believes that Fermi is the foundation for this new industry, which will encompass Nvidia’s entire family of GPUs: GeForce, Quadro and Tesla.
Here are some of Fermi’s new features as listed by Nvidia themselves:
C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute.
ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale
512 CUDA Cores(TM) featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs
8x the peak double precision arithmetic performance over NVIDIA’s last generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry
NVIDIA Parallel DataCache(TM) – the world’s first true cache hierarchy in a GPU that speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication where data addresses are not known beforehand
NVIDIA GigaThread(TM) Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (eg: PhysX(R) fluid and rigid body solvers)
Nexus – the world’s first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio
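The concurrent kernel execution feature in the list above is exposed through CUDA streams. The sketch below is our own illustration (not Nvidia code) of two independent kernels launched into separate streams, which Fermi-class hardware is said to be able to run at the same time, where earlier GPUs would serialize them:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Two independent kernels, standing in for e.g. fluid and rigid body solvers.
__global__ void scaleKernel(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

__global__ void offsetKernel(float *x, float o, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += o;
}

int main() {
    const int n = 1 << 18;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    // Work submitted to different streams has no ordering dependency,
    // so the scheduler is free to overlap the two kernels.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    scaleKernel<<<(n + 255) / 256, 256, 0, s1>>>(a, 2.0f, n);
    offsetKernel<<<(n + 255) / 256, 256, 0, s2>>>(b, 1.0f, n);
    cudaDeviceSynchronize();  // wait for both streams to finish

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

Whether the kernels actually overlap depends on the hardware and on each kernel leaving resources free; the point is that the stream API lets the application express the independence.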
Here is a shot of the Fermi die:
Images, technical whitepapers, presentations, videos and more on “Fermi” can all be found here.
At the press conference, Jensen gave us a little more of a look at Fermi in a likely final form. Here are several more shots:
There is so much more that we will fill in for you as we expand this summary, including what Jensen covered next on web computing. Adobe’s Johnny Loiacono noted that the amount of Flash video streamed to desktops has increased by an order of magnitude – in just two years! And 3D is next for Flash and Flash games; over 1.2 billion handheld devices already run Flash, yet it is a barely tapped market.
Next we saw a Ferrari customized on stage, live and in real time: its real tire was brought out and 3D-customized “wheels” were added to it, with even the lighting reflected accurately and perfectly in real time. We also got to see iray. iray technology speeds the creative process by enabling designers to easily and accurately simulate their creations using materials and lighting that relate directly to the physical world as it is experienced every day. They demonstrated creating, in seconds, photo-realistic images that a designer might previously have taken many hours to produce.
Jensen was kept busy. He introduced Nvidia’s Nexus, the industry’s first development environment for massively parallel computing that is integrated into Microsoft Visual Studio.
Here is how Nvidia explains it in their press release today:
NVIDIA Nexus radically improves productivity by enabling developers of GPU computing applications to use the popular Microsoft Visual Studio-based tools and workflow in a transparent manner, without having to create a separate version of the application that incorporates diagnostic software calls. NVIDIA Nexus also includes the ability to run the code remotely on a different computer. Nexus includes advanced tools for simultaneously analyzing efficiency, performance, and speed of both the graphics processing unit (GPU) and central processing unit (CPU) to give developers immediate insight into how co-processing affects their applications.
Nexus is composed of three components:
-- The Nexus Debugger is a source code debugger for GPU source code, such as CUDA C, HLSL and DirectCompute. It supports source breakpoints, data breakpoints and direct GPU memory inspection. All debugging is performed directly on the hardware.
-- The Nexus Analyzer is a system-wide performance tool for viewing GPU events (kernels, API calls, memory transfers) and CPU events (core allocation, threads and process events and waits) -- all on a single, correlated timeline.
-- The Nexus Graphics Inspector provides developers the ability to debug and profile frames rendered using APIs such as Direct3D. Developers can use the Graphics Inspector(TM) to scrub through draw calls and look at any textures, vertex buffers, and API state in the entire frame.
NVIDIA Nexus supports Windows 7 and Windows Vista operating systems and full integration within Visual Studio (2008 SP1 standard edition or later). A BETA version of NVIDIA Nexus is scheduled to be available on Oct. 15. For more information on NVIDIA Nexus or to register as a developer, please visit: www.nvidia.com/nexus
We also saw some pretty amazing demonstrations of cloud computing, and a netbook streaming HD video – impossible until now. Then it was time for the press conferences and a close-up of Fermi.
Well, that is it for now. There is so much more discussed today that we have not even touched on. We will develop this into a full article series over the next week. Stay tuned for day two – tomorrow, October 1, 2009.
Mark Poppin
ABT Senior Editor
Please join us in our Forums
Become a Fan on Facebook
Follow us on Twitter
For the latest updates from ABT, please join our RSS News Feed
Join our Distributed Computing teams
- Folding@Home – Team AlienBabelTech – 164304
- SETI@Home – Team AlienBabelTech – 138705
- World Community Grid – Team AlienBabelTech