We show general-purpose GPU computing using Google's Go language together with a minimal amount of Nvidia CUDA. This unusual combination delivers reliable, high-performance scientific computation in surprisingly brief and clear code.
GPU-accelerated scientific computing is gaining popularity because of its high performance. Typically, Nvidia's CUDA toolkit is used together with C/C++. Although undeniably popular, this combination is prone to subtle, hard-to-debug issues. This is especially problematic in a research context, where correctness should be the main priority and where we do not want to spend most of our time on low-level debugging.
In this talk we present our uncommon, rather novel approach of pairing Google's Go language on the CPU with minimal CUDA on the GPU. In this way we developed an open-source (GPLv3) GPU-accelerated simulation package in about 5x less code than a previous C++/Python version, running about 100x faster than a state-of-the-art CPU implementation. The Go+CUDA combination is type-safe and memory-safe, concurrent (CPU-GPU parallelism), and relieves the programmer of most of the GPU's typical memory-management and synchronization issues.
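To give a flavour of the CPU-GPU parallelism, the sketch below shows how Go's goroutines and channels can overlap GPU time steps with CPU-side processing. The gpuStep function is a hypothetical stand-in for an asynchronous CUDA kernel launch, not the package's actual API.

```go
// Minimal sketch: one goroutine drives the (stubbed) GPU time stepper while
// the main goroutine consumes finished steps concurrently.
package main

import (
	"fmt"
	"time"
)

// gpuStep stands in for an asynchronous CUDA kernel launch; in real code this
// would enqueue work on a CUDA stream via a cgo binding.
func gpuStep(step int) {
	time.Sleep(10 * time.Millisecond) // placeholder for kernel execution time
}

func main() {
	results := make(chan int, 4) // buffering decouples the GPU and CPU sides

	// One goroutine drives the GPU time stepper.
	go func() {
		for step := 0; step < 8; step++ {
			gpuStep(step)
			results <- step
		}
		close(results)
	}()

	// Meanwhile the CPU processes finished steps (output, analysis, ...)
	// without stalling the GPU pipeline.
	for step := range results {
		fmt.Println("processed output of step", step)
	}
}
```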
Outline
- Brief introduction to Go and CUDA
- Our Go GPU libraries (BSD-licensed)
- Nearly overhead-free and type-safe GPU memory management (see the first sketch after this outline)
- Automated unit testing of GPU code (see the test sketch after this outline)
- Brief demonstration of our open-source software simulating a byte being written by a hard disk head.
(shown live in an HTML5 web GUI, provided by an HTTP server embedded in the simulation software; see the last sketch below)
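The type-safe memory management could look roughly like the sketch below: the device pointer carries its element type and length, so copies are checked in Go and only a thin layer touches the raw CUDA driver calls. The devPtr type and the memAlloc/memFree/memcpyHtoD placeholders stand in for cgo bindings to the CUDA driver API and are assumptions of this sketch, not the library's actual API.

```go
// Sketch of a type-safe device slice: the element type and length travel with
// the pointer, so uploads are bounds-checked in Go rather than handing raw
// byte counts to CUDA.
package gpu

import "fmt"

// devPtr stands in for a CUdeviceptr obtained through a cgo binding
// (e.g. cuMemAlloc); the binding itself is omitted from this sketch.
type devPtr uintptr

// Float32s is a slice of float32 values living in GPU memory.
type Float32s struct {
	ptr devPtr
	len int
}

// MakeFloat32s allocates n float32 values on the device.
func MakeFloat32s(n int) Float32s {
	return Float32s{ptr: memAlloc(4 * int64(n)), len: n}
}

// CopyHtoD uploads a host slice, checking the length at run time.
func (d Float32s) CopyHtoD(src []float32) {
	if len(src) != d.len {
		panic(fmt.Sprintf("size mismatch: host %v, device %v", len(src), d.len))
	}
	memcpyHtoD(d.ptr, src)
}

// Free releases the device memory and invalidates the slice.
func (d *Float32s) Free() {
	memFree(d.ptr)
	d.ptr, d.len = 0, 0
}

// Placeholders for cgo calls into the CUDA driver API.
func memAlloc(bytes int64) devPtr          { return 0 }
func memFree(p devPtr)                     {}
func memcpyHtoD(dst devPtr, src []float32) {}
```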
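GPU code can be unit-tested with Go's standard `go test` tool by comparing a kernel's output element-wise against a plain Go reference implementation. In the sketch below, gpuAdd is a hypothetical GPU wrapper, stubbed here so the example compiles without a GPU; the test structure is what matters.

```go
// Sketch of an automated GPU unit test: the (hypothetical) GPU wrapper gpuAdd
// is checked against a trusted CPU reference with the standard testing package.
package gpu

import (
	"math"
	"testing"
)

// cpuAdd is the trusted reference implementation.
func cpuAdd(a, b []float32) []float32 {
	out := make([]float32, len(a))
	for i := range a {
		out[i] = a[i] + b[i]
	}
	return out
}

// gpuAdd stands in for a wrapper that uploads a and b, launches a CUDA kernel
// and downloads the result; it is stubbed here for illustration.
func gpuAdd(a, b []float32) []float32 { return cpuAdd(a, b) }

func TestAdd(t *testing.T) {
	a := []float32{1, 2, 3, 4}
	b := []float32{10, 20, 30, 40}
	want := cpuAdd(a, b)
	got := gpuAdd(a, b)
	for i := range want {
		if math.Abs(float64(got[i]-want[i])) > 1e-6 {
			t.Errorf("element %v: got %v, want %v", i, got[i], want[i])
		}
	}
}
```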
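Embedding the web GUI requires nothing beyond Go's standard net/http package. The snippet below is a minimal sketch serving a static page on port 8080; the real software serves live simulation state instead.

```go
// Minimal sketch of a web GUI embedded in the simulation binary.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// The GUI is just an HTTP handler; a static page stands in for live state.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "<!DOCTYPE html><html><body><h1>simulation status</h1></body></html>")
	})
	// The simulation itself keeps running in other goroutines while this serves the GUI.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```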
Intended for researchers doing scientific GPU computing. The talk illustrates how Go can be used in conjunction with CUDA, and what to expect (and what not to expect) from Go+CUDA.