NVIDIA’s GTC 2014 – final report
Wednesday, March 26, 2014

S4745 | Now You See It: Unmasking Nuclear and Radiological Threats Around the World
S4333 | Computing the Cure: Combining Sequencing and Physical Simulation on GPUs to Provide Patient Customized Cancer Treatments
S4621 | Beyond Pedestrian Detection: Deep Neural Networks Level-Up Automotive Safety
S4405 | Making It Fast and Reliable: Speech Recognition with GPUs by Sequential Utilization of Available Knowledge Sources
S4901 | The Evolution and Future of Wearable Displays
S4614 | DirectX 11 Rendering and NVIDIA GameWorks in Batman: Arkham Origins
The above is this editor’s schedule for Wednesday, the highlight being Pixar’s 1-hour keynote from 10-11AM in the main hall.
Our first morning session after a continental breakfast in the press lounge was:
Now You See It: Unmasking Nuclear and Radiological Threats Around the World
This presentation by Decision Sciences was very interesting. It detailed how cosmic radiation can be used to detect nuclear and radiological terrorist threats – safely and quickly – using the GPU.
Muons, together with electrons, are powerful, penetrating particles produced naturally when cosmic rays interact with the upper atmosphere. Muon attenuation has long been used to image the insides of the Egyptian pyramids as well as the Mayan pyramids.
The US government wants to use these particles to screen cargo entering the USA for nuclear or radiological threats. The idea is to place banks of tube detectors strategically on either side of the cargo being transported and to identify the signatures of the interactions of muons and electrons with the materials inside the containers.
This scanning technology was invented at Los Alamos National Laboratory in 2006 and has been improved to the point where it can be used in the field. This year, the new scanning method is being tested in the Bahamas, with plans to expand it, if successful, to all ports of entry to the USA.
The challenge is to produce this imaging at very low latency, as there are massive amounts of data to compute. Fortunately, the GPU is up to the challenge, with a GTX 770 providing a 200X speedup compared with a CPU platform. This is a very impressive and very important use of the GPU to detect attempts to smuggle radiological or nuclear material, and it may be expanded to detect other contraband.
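The session did not go into algorithmic detail, but a common reconstruction approach for this kind of two-bank detector is point-of-closest-approach (PoCA) muon scattering tomography. Below is a minimal CUDA sketch assuming that style of algorithm – the MuonTrack layout, kernel name, and voxel accumulation scheme are hypothetical, not Decision Sciences' actual code. Each detected muon contributes its scattering angle to the voxel nearest its estimated scattering point, so dense, high-Z materials show up as high-scatter regions.

```cuda
// Illustrative point-of-closest-approach (PoCA) style accumulation for
// muon scattering tomography. One thread per detected muon track.
#include <cuda_runtime.h>
#include <math.h>

struct MuonTrack {
    float3 dirIn;   // unit direction measured by the upper detector bank
    float3 dirOut;  // unit direction measured by the lower detector bank
    float3 poca;    // estimated point of closest approach of the two rays
};

__global__ void accumulateScattering(const MuonTrack* tracks, int nTracks,
                                     float* voxelScatter, int* voxelHits,
                                     int3 dims, float3 origin, float voxelSize)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nTracks) return;

    MuonTrack t = tracks[i];

    // Scattering angle between the incoming and outgoing directions.
    float c = t.dirIn.x * t.dirOut.x + t.dirIn.y * t.dirOut.y + t.dirIn.z * t.dirOut.z;
    float theta = acosf(fminf(fmaxf(c, -1.0f), 1.0f));

    // Map the scattering point to a voxel index inside the cargo volume.
    int vx = (int)((t.poca.x - origin.x) / voxelSize);
    int vy = (int)((t.poca.y - origin.y) / voxelSize);
    int vz = (int)((t.poca.z - origin.z) / voxelSize);
    if (vx < 0 || vy < 0 || vz < 0 || vx >= dims.x || vy >= dims.y || vz >= dims.z) return;

    int idx = (vz * dims.y + vy) * dims.x + vx;

    // Dense materials scatter muons more strongly, so accumulate theta^2.
    atomicAdd(&voxelScatter[idx], theta * theta);
    atomicAdd(&voxelHits[idx], 1);
}
```

Because every track is handled by an independent thread, millions of muon events can be folded into the image per kernel launch – the kind of data parallelism behind the reported 200X speedup.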
Beyond security, this kind of scanning may find its way into medical imaging as a safer alternative, or possibly be used to scan huge objects like bridges or solid rocket boosters for defects.
Your tax dollars at work. On to the next session.
Computing the Cure: Combining Sequencing and Physical Simulation on GPUs to Provide Patient Customized Cancer Treatments
Three prestigious universities are working together in a program called “Computing the Cure”. As the cost of sequencing plunges, it is now possible to determine which genetic changes in a tumor sample are driving a specific cancer, and therefore to prescribe targeted drugs that act on the errant biomolecules.
The problem is that tumors develop resistance mutations, and there are far too many rare mutations, which complicates making useful therapeutic decisions. One possible solution is to use free energy calculations that allow doctors to predict which specific drugs will work. Because these calculations are so time consuming, CUDA and Nvidia hardware are required to make them fast enough to be practical.
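The speakers did not spell out their exact method, but a classic estimator in this space is the Zwanzig free energy perturbation formula, ΔG = -kT·ln⟨exp(-ΔU/kT)⟩, averaged over many sampled configurations. The toy CUDA/Thrust sketch below (the function name and data layout are hypothetical) only illustrates why this parallelizes so well: every sampled configuration contributes an independent Boltzmann factor.

```cuda
// Toy free energy perturbation (FEP) estimator: dG = -kT * ln(<exp(-dU/kT)>),
// where dU is the per-configuration energy difference between two states
// (e.g. a drug bound to the wild-type vs. the mutated protein target).
#include <thrust/device_vector.h>
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <cmath>

struct BoltzmannFactor {
    float kT;
    __host__ __device__ float operator()(float dU) const {
        return expf(-dU / kT);  // weight of one sampled configuration
    }
};

float freeEnergyDifference(const thrust::device_vector<float>& dU, float kT)
{
    // Average exp(-dU/kT) over all sampled configurations in parallel on the GPU.
    float mean = thrust::transform_reduce(dU.begin(), dU.end(),
                                          BoltzmannFactor{kT},
                                          0.0f, thrust::plus<float>())
                 / static_cast<float>(dU.size());
    return -kT * logf(mean);
}
```

A real binding free energy pipeline wraps this kind of estimator around GPU molecular dynamics that generates the configurations in the first place, which is where most of the compute time goes.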
This is great progress in producing targeted medicines with fewer side effects. It was time for the Pixar keynote:
Pixar’s Keynote
Pixar’s presentations are always something to look forward to, and this year did not disappoint. Pixar has had 14 hit movies with $7B of income, and they sit at the bleeding edge of graphics. Dirk Van Gelder of Pixar gave the presentation after being introduced by Nvidia’s Greg Estes, who pointed out that there were fifty presentations at GTC from the media production industry, five of them from Pixar, including this keynote.
Van Gelder pointed out that the last 10 of Pixar’s 14 hit films used Nvidia chips because they are the most cutting edge. He gave a timeline: Pixar used vector graphics machines in the 1980s and Silicon Graphics machines for four films in the 1990s. In 2001 they began using Nvidia, with the Quadro 400 for Monsters, Inc. Now Pixar is using Kepler, and they plan to upgrade to Maxwell.
Much of this presentation concerned Monsters, Inc. and the tools used in its creation. Making a movie is quite complex: Pixar used 200,000 storyboards for Monsters, Inc. After the storyboards comes layout, animation adds the emotion, and then lighting makes everything look the way the artists intended. Actually making the movie requires complete control of the scene in real time, and that is why Pixar uses Nvidia GPUs.
He pointed out that there are 1,150 animation controls for Sullivan, and that Pixar can animate his fur – 900,000 hair threads, each with 4 vertices – all in real time at good framerates! This is something Pixar simply cannot do on the CPU. For their animation tools, Pixar uses OpenGL, which runs without slowdowns on the GPU.
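Pixar did not show its solver, but as a rough illustration of why 900,000 four-vertex hairs map so naturally to the GPU, here is a hypothetical CUDA kernel that gives each hair its own thread and relaxes its vertices toward their rest positions with a simple sway term. Everything in it – the names, the wind model, the stiffness blend – is invented for illustration and is certainly far simpler than Pixar's tools.

```cuda
// One GPU thread per hair, four vertices per hair: blend each vertex toward
// its skinned rest position plus a small sinusoidal "wind" offset each frame.
#include <cuda_runtime.h>
#include <math.h>

#define VERTS_PER_HAIR 4

__global__ void animateFur(float3* verts, const float3* restVerts,
                           int numHairs, float time, float stiffness)
{
    int hair = blockIdx.x * blockDim.x + threadIdx.x;
    if (hair >= numHairs) return;

    for (int v = 0; v < VERTS_PER_HAIR; ++v) {
        int i = hair * VERTS_PER_HAIR + v;

        // Vertices farther from the root sway more.
        float sway = 0.01f * v * sinf(time + 0.001f * hair);
        float3 target = restVerts[i];
        target.x += sway;

        // Relax the current vertex toward its target position.
        float3 p = verts[i];
        p.x += stiffness * (target.x - p.x);
        p.y += stiffness * (target.y - p.y);
        p.z += stiffness * (target.z - p.z);
        verts[i] = p;
    }
}
```

With 900,000 hairs, a launch like this spawns 900,000 independent threads, which is exactly the kind of workload a modern GPU keeps fed at interactive framerates.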
To properly set the mood and convey the story, Pixar requires real-time, interactive lighting that is physically based. Finite-sized light sources produce soft area shadows, the energy of a light is related to its size, and indirect illumination produces the subtle effects that Pixar is known for.
Since Pixar is on the bleeding edge of CGI, they developed a simpler and more consistent lighting response that uses fewer lights. They use ray tracing to sample the scene – hundreds of millions of rays against tens of millions of objects – with their lighting tool, Katana, running on top of Nvidia’s OptiX ray tracing engine.
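As a minimal sketch of the area-light idea (not Pixar's renderer), the hypothetical kernel below averages visibility over many random sample points on a finite rectangular light for each shading point; that averaging is what turns hard shadow edges into soft penumbras, and a bigger light both spreads the shadow and carries more energy. The single-sphere "scene" is purely illustrative, whereas production lighting traces against tens of millions of objects.

```cuda
// Soft shadows from a finite rectangular light: for each shading point,
// average the visibility of many random points on the light's surface.
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Sphere { float3 center; float radius; };

// True if the unit-length ray from 'o' along 'd' hits the sphere before maxT.
__device__ bool occluded(float3 o, float3 d, float maxT, Sphere s)
{
    float3 oc = make_float3(o.x - s.center.x, o.y - s.center.y, o.z - s.center.z);
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - sqrtf(disc);
    return t > 1e-4f && t < maxT;
}

__global__ void softShadow(const float3* shadePoints, float* shadow, int nPoints,
                           float3 lightCorner, float3 lightU, float3 lightV,
                           Sphere blocker, int samplesPerPoint)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPoints) return;

    curandState rng;
    curand_init(1234, i, 0, &rng);

    float3 p = shadePoints[i];
    float visible = 0.0f;

    for (int s = 0; s < samplesPerPoint; ++s) {
        // Pick a random point on the rectangular light.
        float u = curand_uniform(&rng), v = curand_uniform(&rng);
        float3 lp = make_float3(lightCorner.x + u * lightU.x + v * lightV.x,
                                lightCorner.y + u * lightU.y + v * lightV.y,
                                lightCorner.z + u * lightU.z + v * lightV.z);

        // Shadow ray from the shading point toward the light sample.
        float3 d = make_float3(lp.x - p.x, lp.y - p.y, lp.z - p.z);
        float dist = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        d.x /= dist; d.y /= dist; d.z /= dist;

        if (!occluded(p, d, dist, blocker)) visible += 1.0f;
    }

    // 1.0 = fully lit, 0.0 = deep umbra, values in between = soft penumbra.
    shadow[i] = visible / samplesPerPoint;
}
```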
These kinds of tools give the Pixar artists full control over every scene in real time, which allows for much speedier and much better results than what was possible previously. Pixar ended the session by especially thanking Nvidia’s OptiX team.
Pixar’s keynote was our third session on Wednesday, and it ended at noon, so we headed to the exhibit hall and prepared to exchange our tickets for a wristband that would let us into Nvidia’s GTC party at 9PM at the Fairmont.
After the exhibit hall and a quick lunch, it was back to the presentations at 2PM.
Beyond Pedestrian Detection: Deep Neural Networks Level-Up Automotive Safety
This was another interesting session that dealt with using deep neural networks for assisted driving. The particular goal of this presentation was to show the progress being made in automotive safety.
The researchers are working on algorithms that can extract richer information about pedestrians and maneuver the automobile much as a skilled driver would – for example, determining whether or not a pedestrian is holding a cellphone.
These kinds of calculations are massively parallel, which is the domain of the GPU. We expect to see much progress with assisted driving in the next couple of years.
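The presenters did not share code, but the GPU argument is easy to see even in a single fully connected layer of a network: every output neuron is an independent dot product. The hypothetical CUDA kernel below runs one thread per neuron; real pedestrian detectors stack many (mostly convolutional) layers, but the parallelism story is the same.

```cuda
// Forward pass of one fully connected layer with ReLU: one thread per
// output neuron, each computing an independent dot product with the input.
#include <cuda_runtime.h>

__global__ void denseLayerForward(const float* input, const float* weights,
                                  const float* bias, float* output,
                                  int inputSize, int outputSize)
{
    int neuron = blockIdx.x * blockDim.x + threadIdx.x;
    if (neuron >= outputSize) return;

    // Dot product of the input vector with this neuron's weight row.
    float sum = bias[neuron];
    for (int i = 0; i < inputSize; ++i)
        sum += weights[neuron * inputSize + i] * input[i];

    // ReLU activation.
    output[neuron] = fmaxf(sum, 0.0f);
}
```

Thousands of neurons per layer, many layers, and a classification for every detected object in every video frame add up to exactly the sort of arithmetic the GPU was built for.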
Making It Fast and Reliable: Speech Recognition with GPUs by Sequential Utilization of Available Knowledge Sources
We actually stayed for two sessions about speech recognition, each employing a different method. It is interesting to consider that Star Trek’s universal translator may be close to becoming a reality.
In every session so far, the researchers made it clear that the highly parallel nature of their tasks requires the GPU.
We headed for the next session.
The Evolution and Future of Wearable Displays
The presenter, from Vuzix, was a natural. He pointed out that a watch is a wearable display that is also fashionable, and that for wearable displays to become mainstream, they too must become fashionable.
There is new GPU-dependent technology that can use regular-looking glasses to produce a fashionable augmented reality display.
The image is injected into the side of the glasses and projected as a hologram in front of the user’s eyes.
This is something that we are going to follow the progress of, from one GTC to the next.
Our last session of the day was an hour long.
DirectX 11 Rendering and NVIDIA GameWorks in Batman: Arkham Origins
This presentation, by WB Games Montreal, was aimed at programmers. It dealt with how the programmers used techniques that worked for last-generation consoles as well as for the PC, and they discussed the trade-offs and the reasons for their decisions. It was quite technical, yet the presenters were able to keep the presentation interesting for non-programmers.
The presenters also showed why they used Nvidia GameWorks and how it helped make their game a better experience with HBAO+ and PhysX, for example.
Well, it was time for dinner, to check out the exhibit hall again … and then to head for Nvidia’s party for some more great food and entertainment. Nvidia had a laser maze to challenge the fit as well as an obstacle course for remotely controlling AR Drones. The music and the videos were great, the food was excellent, and the drink flowed. Tomorrow would be the last day of the GTC.