Nvidia’s GPU Technology Conference, Day 2
Day Two Keynote (continued): The Universe’s Beginning
This section will be even briefer, yet it demonstrates GPU processing meeting the needs of scientific communities doing important research in Australia’s outback, using radio astronomy to look at the beginning of the universe – the first 300,000 to 1,000,000 years of its existence, a period that is still not at all well understood.
To do this, they must use many small, linked antennas spread over one square kilometer, using redshift measurements to build a 3D projection of the sky.
They must be far from civilization’s radio interference to do this work.
This means they have special needs for low power and high performance per watt. All of their supercomputing must be done on generators, which means an extremely limited power supply for the project. Even though the antennas are small, the sheer amount of data that must be processed would require 20 kW of power using traditional CPU clusters. In a remote location that is impossible, so they turned to the GPU.
They also have extreme computing needs with massive amounts of data. Using the GPU as a parallel processor has allowed them to accomplish this with a 10-to-1 improvement in performance per watt over using the CPU alone. On top of the power savings, GPU computing delivered a speedup of 20 times over the CPU. What does this lead to, and what kind of data needs will they have in ten short years? Let the following slides presented at the conference answer that.
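To make the parallelism concrete, here is a minimal, hypothetical CUDA C sketch of the kind of data-parallel work a radio-astronomy pipeline performs – not the researchers’ actual code. The kernel name, buffer sizes, and per-antenna gain table are all illustrative assumptions; the point is that each GPU thread handles one sample independently, which is exactly the shape of workload where performance per watt favors the GPU.

```cuda
// Illustrative sketch only: calibrate a stream of antenna samples,
// one GPU thread per sample. All names and sizes are hypothetical.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void calibrate(const float *raw, const float *gain,
                          float *out, int n, int antennas)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = raw[i] * gain[i % antennas];  // per-antenna gain correction
}

int main()
{
    const int n = 1 << 20;        // one million samples (hypothetical)
    const int antennas = 256;     // hypothetical antenna count
    size_t bytes = n * sizeof(float);

    // host buffers with dummy data
    float *h_raw  = (float *)malloc(bytes);
    float *h_gain = (float *)malloc(antennas * sizeof(float));
    float *h_out  = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)        h_raw[i]  = 1.0f;
    for (int i = 0; i < antennas; ++i) h_gain[i] = 0.5f;

    // device buffers
    float *d_raw, *d_gain, *d_out;
    cudaMalloc(&d_raw, bytes);
    cudaMalloc(&d_gain, antennas * sizeof(float));
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_raw, h_raw, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_gain, h_gain, antennas * sizeof(float),
               cudaMemcpyHostToDevice);

    // launch one thread per sample
    calibrate<<<(n + 255) / 256, 256>>>(d_raw, d_gain, d_out, n, antennas);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %.2f\n", h_out[0]);  // expect 0.50

    cudaFree(d_raw); cudaFree(d_gain); cudaFree(d_out);
    free(h_raw); free(h_gain); free(h_out);
    return 0;
}
```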
We can see practical benefits for all of us coming from this research. Besides understanding the universe’s beginning, being able to predict solar flares accurately can help us to safeguard our communication systems and our power grid.
These are exciting times, and the GPU’s parallel processing makes formerly impossible work possible.
Simulations
Now let us look at a few of the challenges of GPU computing in tackling simulations.
Professor Pfister went on to address the many challenges of simulations, including a specific example of determining the properties of organic molecules, using the drug Taxol as a quantum many-body problem. Since the researchers are simulating the molecule’s bouncing electrons, massive computational power is necessary. Here the GPU absolutely left CPU computing in the dust in every way; it now allows for simulation of large molecules as well as small ones. The future of this kind of research may also lead to improved organic photovoltaic materials that copy plant properties.
Next up, he presented the challenge of determining how the brain performs object recognition. The researchers are determining models and parameters of the human visual system, which requires intense computing. Again, the speed difference between performing these tasks on the GPU and on the CPU was shown to be like the difference between an airplane and a car.
Predicting & Preventing Heart Attacks
Here, absolutely massive amounts of computing power are needed – think MRI in real time. Again, the GPU makes possible in emergency situations what the CPU simply cannot do in a timely fashion. Scientists are using multiscale hemodynamics to predict plaque eruptions, which release dangerous, artery-clogging fat. They look at the circulation in real time and determine where there are potential problems. One of the researchers actually prevented his own heart attack by participating in this project!
With an incredible speed-up of 120 times over the CPU, the near future promises miniaturization and eventually portability.
We can see that the scaling of Nvidia’s multi-GPU clusters is impressive.
There is no contest: GPU computing wins by a huge margin, and Dr. Pfister has asked Nvidia for further expanded GPU capabilities so that we can have his future tools for the benefit of mankind. His first HTC lesson is this one:
He then asks for GPU HTC appliances similar to what other industries already have:
Finally, he says the scientists and researchers need to program in their own languages – they should not have to learn CUDA, but rather be comfortable with what they already use. And we will find out shortly that Nvidia has listened to him. There are several languages now available for the GPU, including C++ and Fortran, with more on the way. Best of all, Nvidia released Nexus, a full GPU development and debugging tool integrated into Visual Studio – and much more software to go with their new Fermi and even their older GPUs.
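As a taste of what “programming in your own language” looks like, here is a minimal sketch using the Thrust C++ template library, which Nvidia was promoting for its GPUs at the time. An ordinary C++ programmer can run a parallel operation on the GPU without writing a single CUDA kernel; the sizes and values here are illustrative assumptions.

```cuda
// Illustrative sketch: a parallel GPU computation expressed in plain
// C++ via Thrust, with no hand-written CUDA kernel.
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/functional.h>
#include <iostream>

int main()
{
    // one million values, held in GPU memory (sizes are illustrative)
    thrust::device_vector<float> x(1 << 20, 3.0f);
    thrust::device_vector<float> y(1 << 20);

    // y[i] = -x[i], computed in parallel on the device
    thrust::transform(x.begin(), x.end(), y.begin(),
                      thrust::negate<float>());

    std::cout << y[0] << std::endl;  // prints -3, copied back from the GPU
    return 0;
}
```

The design point is the one Pfister asked for: the parallelism is hidden inside a familiar C++ idiom, so the scientist describes what to compute and the library decides how to run it on the GPU.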
The scientists say if they can get these tools, our future will have these things:
Well, that concluded just 90 minutes of one intense keynote presentation, and there were dozens more presentations. However, it was time to race over to the Emerging Companies opening address – the kick-off to the Emerging Companies Summit, featuring over 60 startups using Nvidia processors in a whole new range of ways. This particularly interested this editor, as AlienBabeltech is an emerging company that already plans to use GPU parallel computing in a brand-new way to change how humans interact on the Internet. So, we sadly had to forgo listening to Bill Dally, Nvidia’s chief scientist, in the other lecture room.