Embedded & GPU Processing

Numerical computations drive the algorithms that answer our real-world questions. The evolution of the underlying parallel hardware, based on Graphics Processing Units (GPUs), has relentlessly pushed the boundaries of what is possible. GPUs become smaller, more embedded, and more efficient every year, opening up new ways to approach both new and long-standing problems.

The most prominent example is the revival of neural network computing and the explosive growth of popular parallel computing frameworks such as Theano, PyTorch, TensorFlow, Caffe, Quasar and many others. A second example is the concept of Edge Computing, where computations are moved to the network node best suited to perform the task.



Algorithms often require trans-coding into common parallel patterns, adding abstractions that may hide performance opportunities. Each combination of problem, solution and configuration environment has one or more operational sweet spots, and finding the optimal computing path requires low-level insight combined with high-level scheduling. To investigate new approaches that solve very specific tasks with maximum efficiency, the hardware must be programmed using native instructions. Research at IDLab-MEDIA focuses on identifying these performance opportunities in the visual computing domain, and pursues insights that lead to solutions able to adapt to different environments.
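As an illustrative sketch (not IDLab-MEDIA code), the trans-coding step mentioned above can be made concrete: a sequential loop is re-expressed as the map-reduce pattern that parallel frameworks expect. The example below uses NumPy on the CPU purely to show the pattern; the function names are hypothetical.

```python
import numpy as np

# Sequential formulation: sum of squared differences between two signals.
def ssd_loop(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += (x - y) ** 2
    return total

# The same computation trans-coded into a common parallel pattern:
# an elementwise map (subtract, square) followed by a reduction (sum).
# A GPU framework can schedule each stage in parallel, but the
# intermediate (a - b) ** 2 array is exactly the kind of abstraction
# cost that a hand-written fused kernel in native instructions avoids.
def ssd_mapreduce(a, b):
    return float(np.sum((a - b) ** 2))

a = np.arange(5, dtype=np.float64)
b = np.ones(5, dtype=np.float64)
assert ssd_loop(a, b) == ssd_mapreduce(a, b)  # both formulations agree
```

The two formulations compute the same result, but they occupy different sweet spots: the fused loop minimizes memory traffic, while the map-reduce form exposes the parallelism a scheduler needs.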
