Theano is an exciting, relatively new open-source library developed by the machine learning community to accelerate the training of deep neural networks and other mathematical algorithms built on SciPy. From simple Python algorithm definitions it generates compiled kernels for the CPU and, via the CUDA compiler, for the GPU. Since GNU Radio provides native Python blocks whose work functions operate directly on SciPy-style input and output vectors, joining these two technologies is a natural fit: work-block mathematical kernels can be defined extremely rapidly and then automatically compiled and offloaded onto massively parallel graphics processing hardware. Using this approach we demonstrate how some highly concurrent, computationally expensive algorithms can be implemented extremely concisely and executed efficiently on graphics processors, accelerating GNU Radio channel models and other blocks with minimal effort.
For more information on Theano, see: http://www.iro.umontreal.ca/~lisa/pointeurs/theano_scipy2010.pdf and http://arxiv.org/pdf/1211.5590.pdf
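As a concrete illustration, the sketch below shows how a Theano-compiled kernel might be dropped into a GNU Radio Python block. It assumes Theano and the GNU Radio 3.7-style Python block API are available; the block name, the gain parameter, and the tanh() kernel are illustrative stand-ins for a real channel-model kernel, not code taken from the sources above.

# Minimal sketch: a GNU Radio Python block whose work function calls a
# Theano-compiled kernel. The tanh(gain * x) kernel is a placeholder
# example chosen here for illustration.
import numpy
import theano
import theano.tensor as T
from gnuradio import gr

class theano_tanh(gr.sync_block):
    """Applies a Theano-compiled tanh(gain * x) kernel to a float32 stream."""

    def __init__(self, gain=1.0):
        gr.sync_block.__init__(self,
                               name="theano_tanh",
                               in_sig=[numpy.float32],
                               out_sig=[numpy.float32])
        # Define the kernel symbolically once; Theano compiles it for the CPU,
        # or for the GPU when a GPU device is selected in the Theano configuration.
        x = T.fvector('x')
        self.kernel = theano.function([x], T.tanh(numpy.float32(gain) * x))

    def work(self, input_items, output_items):
        # The compiled kernel operates directly on the numpy vectors that
        # GNU Radio hands to the Python work function.
        output_items[0][:] = self.kernel(input_items[0])
        return len(output_items[0])

With a block written this way, moving the computation to the GPU is a matter of Theano configuration (for example, setting the device in THEANO_FLAGS or .theanorc) rather than a change to the block itself.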