This talk introduces programming concepts and languages for parallel programming on accelerator cards.
Curious about the buzz around these graphics cards? Ever heard of a Xeon Phi?
Let's talk about them!
In my talk, I will introduce data- and task-based parallelism on multi-core CPUs as a basis. The well-known standards covered here will be OpenMP and MPI. Then I will show you the close-to-the-hardware programming models CUDA and OpenCL. I will also mention OpenACC and C++ AMP as possible paths towards more abstraction and better code maintainability.
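To give you a small taste of the CUDA part, here is a minimal sketch of data parallelism on a GPU: a vector addition where every thread handles exactly one element. The kernel name, vector size, and launch configuration are purely illustrative.

#include <cstdio>
#include <cuda_runtime.h>

// Data parallelism: each GPU thread computes one element of the result.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard threads beyond the array end
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);    // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);     // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}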
All of them allow you to program accelerator cards, with or without some safeguards in place. But because every accelerator card differs in how you reach its maximum speed, I will also cover some fundamental card architectures and their pitfalls.
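One typical pitfall, sketched below as a kernel-only fragment (names and layout assumptions are mine): on most GPUs, neighbouring threads should read neighbouring memory addresses ("coalesced" access). The same arithmetic can run several times slower when each thread strides through memory on its own.

// Two ways to reduce a row-major matrix, with very different memory behaviour.
// 'width' and 'height' describe an illustrative row-major float matrix.

// Coalesced: one thread per column; threads in a warp read adjacent floats.
__global__ void columnSums(const float *m, float *out, int width, int height) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col >= width) return;
    float s = 0.0f;
    for (int row = 0; row < height; ++row)
        s += m[row * width + col];   // neighbouring threads: neighbouring addresses
    out[col] = s;
}

// Strided: one thread per row; neighbouring threads are 'width' floats apart,
// so each warp scatters its loads across memory.
__global__ void rowSums(const float *m, float *out, int width, int height) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= height) return;
    float s = 0.0f;
    for (int col = 0; col < width; ++col)
        s += m[row * width + col];   // large stride between threads of a warp
    out[row] = s;
}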
At the end of the talk, you will be able to map your problem to one (or both) of the parallelism concepts, have a first idea of how to get started ... and whether it is worth the effort.
Looking forward to seeing you there!