Immersive Experiences

IDLab-MEDIA investigates the fundamental shortcomings of current immersive experiences. Research in this area covers the following aspects:

  • Representation and streaming of immersive scenes using both standardized streaming technology and innovative light representations;
  • Visual quality of immersive equipment;
  • The interaction between user behaviour and quality perception in interactive installations;
  • The Art & Science Interaction Lab, a modular research facility in which to bring, measure, and test AR and VR experiences. The room is equipped with state-of-the-art audiovisual equipment and motion tracking, enabling us to (re-)create a multitude of AR and VR experiences.
  • Paper accepted in IEEE Transactions on Multimedia: Representing and Coding Light Field Images and Video (October 14, 2019). Our paper, “Steered Mixture-of-Experts for Light Field Images and Video: Representation and Coding”, has been accepted in IEEE Transactions on Multimedia. Key observations: introduction of a novel representation method for any-dimensional image data, embedded in a strong Bayesian framework. Multiple short conference papers had been presented on the subject, but no full paper had yet been published. Extra novelties ...
  • Article accepted for publication in the Journal of Real-Time Image Processing: Pixel-level parallel rendering for images and light fields (December 9, 2018). Our article “Highly Parallel Steered Mixture-of-Experts Rendering at Pixel-level for Image and Light Field Data” was recently accepted for publication in the Journal of Real-Time Image Processing. In this article we describe our novel image approximation framework, Steered Mixture-of-Experts (SMoE), and its potential capabilities in coding and streaming higher-dimensional image ...
  • Paper accepted at SPIE Optics + Photonics 2018: Light field video coding (September 1, 2018). We are pleased to announce that our paper “Steered Mixture-of-Experts for Light Field Video Coding” has been accepted. It will be published in the proceedings of SPIE Optical Engineering and Applications (Applications of Digital Image Processing XLI). The paper was presented at the SPIE Optics + Photonics conference, as part of the session on ...
  • Funding granted for imec.icon project ILLUMINATE (July 13, 2018). ILLUMINATE: Interactive streaming and representation for totally immersive virtual reality applications. Recent breakthroughs in capture and display technologies are enabling highly immersive Virtual Reality (VR) applications. These emerging applications offer more Degrees-of-Freedom (DOF) to users and thus make the experience much more immersive than traditional 2D visual content. This is largely pushed by ...
  • Paper accepted at EUSIPCO 2018: Compression of 360° images (May 23, 2018). We’re happy to announce that our paper “Steered Mixture-of-Experts Approximation of Spherical Image Data” has been accepted for presentation at EUSIPCO 2018! The paper will be presented in the Special Session on Recent Advances in Immersive Imaging Technologies. Steered Mixture-of-Experts (SMoE) is a novel framework for approximating multidimensional image modalities. Our goal is to provide full ...
  • Funding granted for the Interaction Lab at De Krook (April 26, 2018). The advent of numerous digital technologies and devices has a profound impact on how people interact with each other, with their technologically enhanced context, and with increasingly interactive content. Technological advances also create new interaction paradigms (e.g. VR) and allow measuring interaction at unprecedented precision. This has led to an emerging field of interdisciplinary research that could ...
  • Two papers accepted for PCS 2018! Modeling and Real-Time Rendering of Light Field Video (March 31, 2018). Not one, but both of our light field video papers were accepted for the Picture Coding Symposium 2018! Both papers discuss parts of Steered Mixture-of-Experts (SMoE). The main take-away messages are that we are able to model and approximate 5-D light field videos up to high objective quality, and that we introduced an effective novel way to build SMoE models: robust and ...
  • SELVIE – Scalable, Efficient, and Low-delay Video Interaction during Events (May 31, 2016). The SELVIE project aims to increase the involvement of audiences at large-scale events by tapping into the rising trend of smartphone use at events. https://vimeo.com/164392457 The project’s goal is to stream visitor-made smartphone videos (so-called SELVIEs: video-based selfies) in real time to the event’s screens and so increase their interactive nature. SELVIE wants to build ...
  • Thomas Sikora of TU Berlin receives Google Faculty Research Award – recognition of joint work with IDLab-MEDIA (UGent-imec) (February 2, 2016). Prof. Thomas Sikora of Technical University Berlin received one of the prestigious 2016 Faculty Research Awards in the area of Machine Perception. The award was given to TU Berlin to support future work on Steered Mixtures-of-Experts (SMoE) for Video Coding. The award is also a recognition of the fruitful collaboration between the Communication Systems Lab ...
  • PRO-FLOW – Enabling Internet Video Streaming and Collaboration with Sub-Second Latency (January 1, 2016). Online video consumption, both fixed and mobile, is soaring. Driven by the increasing use of mobile devices such as smartphones and tablets, current global mobile video traffic is estimated at a staggering 4.4 million terabytes (or 4,400,000,000 gigabytes) per month. While the industry has already realized major advances in domains such ...
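Several of the posts above refer to the Steered Mixture-of-Experts (SMoE) framework, which represents image data as a set of steered Gaussian kernels, each gating a simple regressor, so that any pixel can be reconstructed independently. As a rough illustration only (not the group's actual implementation; the model parameters and function names below are hypothetical), a minimal 1-D sketch of the reconstruction step:

```python
import numpy as np

def smoe_reconstruct(x, mus, covs, pis, slopes, intercepts):
    """Evaluate a toy Steered Mixture-of-Experts model at coordinate x.

    Each component j pairs a steered Gaussian kernel (the gate) with a
    linear expert m_j(x) = slopes[j] . x + intercepts[j]; the output is
    the gate-weighted blend of all experts at x.
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    densities = np.empty(len(mus))
    for j, (mu, cov, pi) in enumerate(zip(mus, covs, pis)):
        mu = np.atleast_1d(np.asarray(mu, dtype=float))
        cov = np.atleast_2d(np.asarray(cov, dtype=float))
        d = x - mu
        norm = np.sqrt((2.0 * np.pi) ** len(x) * np.linalg.det(cov))
        densities[j] = pi * np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / norm
    gates = densities / densities.sum()  # soft gating weights, sum to 1
    experts = np.array([np.dot(s, x) + b for s, b in zip(slopes, intercepts)])
    return float(gates @ experts)

# Toy 1-D "signal": two flat segments (amplitudes 0.2 and 0.8), one kernel
# steering each segment. All values are illustrative.
mus = [[0.25], [0.75]]
covs = [[[0.01]], [[0.01]]]
pis = [0.5, 0.5]
slopes = [[0.0], [0.0]]
intercepts = [0.2, 0.8]

print(smoe_reconstruct([0.25], mus, covs, pis, slopes, intercepts))  # ~0.2
print(smoe_reconstruct([0.50], mus, covs, pis, slopes, intercepts))  # 0.5, halfway between kernels
```

Because each sample depends only on the kernel parameters, this evaluation is embarrassingly parallel per pixel, which is the property the pixel-level parallel rendering article above exploits.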
