Why you should attend: NVIDIA GPUs are among the world's fastest and most efficient accelerators, delivering world-record scientific application performance. NVIDIA's CUDA technology is one of the most pervasive parallel computing models, used by more than 450 scientific applications and more than 200,000 developers worldwide.
A Hands-on Lab format will be used. There is no cost to attend. Seating is limited to the first 80 people registered.
Who it's for: Undergraduate/Graduate students, Postdocs, Researchers, Data Scientists, and Professors
Tuesday, June 6, 9:00AM - 4:30PM
Day 1: Introduction to GPU programming
· Basics of GPU Programming
- An Introduction to the CUDA C/C++ language
- Hands-on examples will illustrate simple kernel launches and using threads
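To give a flavor of this first session, here is a minimal sketch of the kind of kernel-launch example the lab walks through: each thread computes one element of a vector sum. The kernel name, sizes, and launch configuration are illustrative, not the lab's actual code.

```cuda
#include <cstdio>

// Each thread adds one pair of elements; the global thread index
// is computed from the block and thread coordinates.
__global__ void add(const int *a, const int *b, int *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int N = 256;
    int ha[N], hb[N], hc[N];
    for (int i = 0; i < N; ++i) { ha[i] = i; hb[i] = 2 * i; }

    int *da, *db, *dc;
    cudaMalloc(&da, N * sizeof(int));
    cudaMalloc(&db, N * sizeof(int));
    cudaMalloc(&dc, N * sizeof(int));
    cudaMemcpy(da, ha, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, N * sizeof(int), cudaMemcpyHostToDevice);

    // Launch 2 blocks of 128 threads each: 256 threads, one per element.
    add<<<2, 128>>>(da, db, dc, N);

    cudaMemcpy(hc, dc, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("c[10] = %d\n", hc[10]);  // 10 + 20 = 30
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```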
· Performance and Optimization
- Overview of global and shared memory usage
- Hands-on examples will illustrate a 1D Stencil and Matrix Transpose
- Using NVIDIA Profiler to identify performance bottlenecks
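The 1D stencil is the classic shared-memory example in NVIDIA's introductory material; a sketch of the pattern is below. Each block stages its inputs, plus halo elements, in shared memory so every global value is read once per block instead of 2*RADIUS+1 times. Sizes and names are illustrative, and the sketch assumes the input array is padded by RADIUS on each side.

```cuda
#include <cstdio>

#define RADIUS 3
#define BLOCK_SIZE 128

// out[i] = sum of in[i-RADIUS .. i+RADIUS], using a shared-memory tile.
__global__ void stencil_1d(const int *in, int *out) {
    __shared__ int tile[BLOCK_SIZE + 2 * RADIUS];
    int g = blockIdx.x * blockDim.x + threadIdx.x + RADIUS;  // global index
    int l = threadIdx.x + RADIUS;                            // local index

    tile[l] = in[g];
    if (threadIdx.x < RADIUS) {            // first RADIUS threads load the halo
        tile[l - RADIUS] = in[g - RADIUS];
        tile[l + BLOCK_SIZE] = in[g + BLOCK_SIZE];
    }
    __syncthreads();  // all loads must finish before any thread reads the tile

    int result = 0;
    for (int off = -RADIUS; off <= RADIUS; ++off)
        result += tile[l + off];
    out[g] = result;
}

int main() {
    const int N = 1024;                 // interior elements
    const int total = N + 2 * RADIUS;   // plus halo padding on each side
    int h_in[total], h_out[total];
    for (int i = 0; i < total; ++i) h_in[i] = 1;

    int *d_in, *d_out;
    cudaMalloc(&d_in, total * sizeof(int));
    cudaMalloc(&d_out, total * sizeof(int));
    cudaMemcpy(d_in, h_in, total * sizeof(int), cudaMemcpyHostToDevice);

    stencil_1d<<<N / BLOCK_SIZE, BLOCK_SIZE>>>(d_in, d_out);

    cudaMemcpy(h_out, d_out, total * sizeof(int), cudaMemcpyDeviceToHost);
    printf("out[%d] = %d\n", RADIUS, h_out[RADIUS]);  // 2*RADIUS+1 = 7
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```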
· Advanced Optimizations using Streams and Concurrency to overlap communication and computation
- Hands-on examples will use cuBLAS with Matrix Multiply
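The overlap idea from this session can be sketched as a simple pipeline: split the data into chunks and issue each chunk's transfers and kernel into alternating streams, so chunk k+1's copy can proceed while chunk k computes. Kernel, sizes, and stream count here are illustrative; note that true copy/compute overlap requires pinned host memory.

```cuda
#include <cstdio>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int nChunks = 4, chunk = 1 << 20;
    float *h, *d;
    // Pinned (page-locked) host memory is required for async copies to overlap.
    cudaMallocHost(&h, (size_t)nChunks * chunk * sizeof(float));
    cudaMalloc(&d, (size_t)nChunks * chunk * sizeof(float));
    for (int i = 0; i < nChunks * chunk; ++i) h[i] = 1.0f;

    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);

    // Pipeline: while one stream computes chunk k, the other
    // stream's transfer for the next chunk proceeds concurrently.
    for (int k = 0; k < nChunks; ++k) {
        cudaStream_t st = s[k % 2];
        size_t off = (size_t)k * chunk, bytes = chunk * sizeof(float);
        cudaMemcpyAsync(d + off, h + off, bytes, cudaMemcpyHostToDevice, st);
        scale<<<(chunk + 255) / 256, 256, 0, st>>>(d + off, chunk);
        cudaMemcpyAsync(h + off, d + off, bytes, cudaMemcpyDeviceToHost, st);
    }
    cudaDeviceSynchronize();  // wait for all streams to drain

    printf("h[0] = %.1f\n", h[0]);  // 2.0
    cudaStreamDestroy(s[0]); cudaStreamDestroy(s[1]);
    cudaFreeHost(h); cudaFree(d);
    return 0;
}
```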
Wednesday, June 7, 9:30AM - 4:30PM
Day 2: Intro to Deep Learning
Getting Started with Deep Learning (Caffe, DIGITS)
Learn how to leverage deep neural networks (DNN) within the deep learning workflow to solve a real-world image classification problem using NVIDIA DIGITS. You will walk through the process of data preparation, model definition, model training and troubleshooting. You will use validation data to test and try different strategies for improving model performance using GPUs. On completion of this lab, you will be able to use NVIDIA DIGITS to train a DNN on your own image classification application.
Deep Learning for Image Segmentation
(TensorFlow) (uses medical imagery to isolate a particular part of the heart)
There are a variety of important applications that need to go beyond detecting individual objects within an image, and that will instead segment the image into spatial regions of interest. An example of image segmentation involves medical imagery analysis, where it is often important to separate the pixels corresponding to different types of tissue, blood or abnormal cells, so that you can isolate a particular organ. Another example includes self-driving cars, where it is used to understand road scenes. In this lab, you will learn how to train and evaluate an image segmentation network.
Introduction to Recurrent Neural Networks
This two-part lab is an introduction to Recurrent Neural Networks (RNN), starting from their foundations. The first part covers what RNNs are and how they work, which you will learn by training one. The second part motivates the use of RNNs for Natural Language Processing with text: an RNN can be trained to predict the next character in a sequence of text. Finally, you will see why RNNs have historically been considered hard to train, supplemented by various suggested readings throughout the lab.
About the INSTRUCTOR
Dr. Jonathan Bentz is a Solution Architect with NVIDIA, focusing on Higher Education and Research customers. In this role he serves as a technical resource to customers and OEMs to support and enable the adoption of GPU computing. He delivers GPU training workshops to educate users and raise awareness of GPU computing, and works with ISV and customer applications to assist in optimizing them for GPUs through benchmarking and targeted code development. Prior to NVIDIA, Jonathan worked at Cray as a software engineer, where he developed and optimized high-performance scientific libraries such as BLAS, LAPACK, and FFT specifically for the Cray platform. Jonathan obtained his PhD in physical chemistry and his MS in computer science from Iowa State University.
Please bring your laptop to participate in hands-on exercises. A GPU in your laptop is not required. For CUDA and OpenACC, no previous GPU programming experience is required. However, beginner-level C and Linux experience will be expected.
For more information on CUDA training: https://developer.nvidia.com/cuda-education-training
For Deep Learning preparation, it is advised that participants review the NVIDIA Getting Started Blog posts and articles beginning at: https://developer.nvidia.com/deep-learning
- Deep Learning in a Nutshell Series by Tim Dettmers (University of Lugano, Switzerland)
- Hacker's Guide to Neural Networks by Andrej Karpathy (Stanford University)
- Getting Started with DIGITS by Allison Gray (NVIDIA)
- Deep Learning Posts on the ParallelForAll technical blog
For CUDA/OpenACC, please also take the time to register at our CUDA developer