Our article “Highly Parallel Steered Mixture-of-Experts Rendering at Pixel-level for Image and Light Field Data” was recently accepted for publication in the Journal of Real-Time Image Processing.
In the article, we describe our novel image approximation framework, Steered Mixture-of-Experts (SMoE), and its potential for coding and streaming higher-dimensional image data under hard real-time constraints. This is made possible by SMoE's inherent support for pixel-parallel reconstruction: every pixel can be rendered independently of all others. With appropriate hardware, the goal of reaching 6 Degrees-of-Freedom virtual reality for camera-captured content becomes more realistic.
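To illustrate why reconstruction parallelizes per pixel, here is a minimal NumPy sketch of a kernel-mixture renderer in the spirit of SMoE. It is an illustrative simplification, not the article's implementation: it uses constant expert values per kernel (the actual framework steers kernels and can use gradient experts), and all names and shapes are assumptions for this example. Each output pixel is a soft-max gated blend of kernels, computed with no dependence on neighboring pixels.

```python
import numpy as np

def smoe_render(coords, centers, covs, weights, experts):
    """Sketch of mixture-of-experts reconstruction (simplified).

    coords:  (P, D) pixel coordinates to render
    centers: (K, D) kernel centers
    covs:    (K, D, D) kernel covariance matrices
    weights: (K,) mixing weights
    experts: (K,) constant expert values (a simplification; the
             article's steered kernels are more expressive)

    Every row of `coords` is processed independently, which is why
    the computation maps naturally onto one GPU thread per pixel.
    """
    diff = coords[:, None, :] - centers[None, :, :]       # (P, K, D)
    inv = np.linalg.inv(covs)                             # (K, D, D)
    # Mahalanobis distance of each pixel to each kernel center
    maha = np.einsum('pkd,kde,pke->pk', diff, inv, diff)  # (P, K)
    resp = weights * np.exp(-0.5 * maha)                  # unnormalized gates
    gates = resp / resp.sum(axis=1, keepdims=True)        # soft-max gating
    return gates @ experts                                # (P,) pixel values
```

On a GPU, the same per-pixel computation is launched as one thread per output pixel, which is what makes millisecond-scale rendering of full light fields feasible.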
The picture above shows an original light field image next to an SMoE-rendered version of it. The original light field of 15×15×626×434 samples was rendered in 2.9 milliseconds (0.0029 seconds), with a mean PSNR-YCbCr of 30.71 dB and a mean SSIM-Y of 0.86.
The block size used to partition the pixel data can be tuned to fully exploit the device hardware. The figure above shows the influence of this block size choice, alongside a table of total timings for all four test sets and all four GPU implementations.
The same principle was also applied to Full HD and 4K resolutions, where we achieved rendering at 85 fps and 22 fps, respectively.
For more information about this work, see the online version of the article on SpringerLink: https://doi.org/10.1007/s11554-018-0843-3