‘MoBluRF’ package creates sharp 4D reconstructions from blurry video

22 Sep 2025

Development by Korea’s Chung-Ang University enables “sharp fields” from rough videos taken on handheld devices.

Neural Radiance Fields (NeRF) is a machine learning technique that can create a 3D reconstruction of a scene from 2D images captured from multiple angles, allowing the scene to be rendered from new perspectives. While NeRF is well established for static scenes, existing methods struggle when a monocular video is used as the input, because such footage typically contains motion blur.

Now, researchers at Chung-Ang University, Korea, have developed “MoBluRF”, which they describe as “a two-stage framework that enables creation of accurate, sharp 4D (dynamic 3D) NeRFs from blurry videos, captured from everyday handheld devices.”

Conventional NeRF creates 3D representations of a scene from a set of 2D images, captured from different angles. It works by training a deep neural network to predict the color and density at any point in 3D space.

To achieve this, it casts imaginary light rays from the camera through each pixel in every input image, sampling points along those rays and recording each point's 3D coordinates and viewing direction. Using this information, NeRF can reconstruct a scene in 3D and render it from entirely new perspectives, a process known as novel view synthesis (NVS).
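To make that step concrete, here is a minimal NumPy sketch of the rendering recipe the paragraph describes. The `toy_radiance_field` function is a hypothetical stand-in for the trained network (a real NeRF queries its MLP there); `render_ray` then alpha-composites the sampled colors and densities into a single pixel.

```python
import numpy as np

def toy_radiance_field(points, view_dirs):
    """Hypothetical stand-in for the trained MLP: returns (RGB, density).

    A real NeRF queries a neural network here; this toy field places a
    soft spherical shell of radius 0.5 at the origin so the sketch runs.
    """
    dist = np.linalg.norm(points, axis=-1, keepdims=True)
    density = np.exp(-((dist - 0.5) ** 2) / 0.01)     # peaks on the shell
    rgb = 0.5 + 0.5 * np.abs(view_dirs)               # view-dependent toy color
    return rgb, density

def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64):
    """Alpha-composite samples along one camera ray (the core NeRF step)."""
    t = np.linspace(near, far, n_samples)             # depths along the ray
    points = origin + t[:, None] * direction          # 3D sample coordinates
    dirs = np.broadcast_to(direction, points.shape)   # per-sample view direction
    rgb, sigma = toy_radiance_field(points, dirs)

    delta = (far - near) / n_samples                  # uniform sample spacing
    alpha = 1.0 - np.exp(-sigma[:, 0] * delta)        # opacity of each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel color

# One ray cast from a camera at z = -2 through the scene centre:
pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Training then adjusts the network so that pixels rendered this way match the input photographs, which is what lets the finished model be queried from viewpoints it never saw.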

Beyond still images, a video can also be used, with each frame treated as a static image. However, existing methods are highly sensitive to video quality. Videos captured with a single camera, such as those from a phone or drone, often exhibit motion blur caused by fast object motion or camera shake.

This makes it difficult to produce sharp, dynamic NVS, because most existing deblurring-based NVS methods are designed for static multi-view images and fail to account for global camera motion and local object motion. In addition, blurry videos often lead to inaccurate camera pose estimation and a loss of geometric precision.

Chung-Ang solution

To address these issues, a research team jointly led by Assistant Professor Jihyong Oh from the Graduate School of Advanced Imaging Science at Chung-Ang, and Professor Munchurl Kim from the Korea Advanced Institute of Science and Technology (KAIST), along with Minh-Quan Viet Bui and Jongmin Park, co-developed the MoBluRF method.

“Our framework is capable of reconstructing sharp 4D scenes and enabling NVS from blurry monocular videos using motion decomposition, while avoiding mask supervision, significantly advancing the NeRF field,” said Dr. Oh. The work was first described in IEEE Transactions on Pattern Analysis and Machine Intelligence.

MoBluRF consists of two main stages: Base Ray Initialization (BRI) and Motion Decomposition-based Deblurring (MDD). Existing deblurring-based NVS methods attempt to predict hidden sharp light rays in blurry images, called latent sharp rays, by transforming a ray called the base ray.

However, directly using the input rays of blurry images as base rays can lead to inaccurate predictions. BRI addresses this issue by roughly reconstructing the dynamic 3D scene from the blurry video and refining the initialization of the base rays from the imprecise camera rays.

Next, these base rays are used in the MDD stage to accurately predict latent sharp rays through an Incremental Latent Sharp-rays Prediction method, which decomposes the motion blur, step by step, into global camera-motion and local object-motion components, greatly improving deblurring accuracy.
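The article does not give MoBluRF's equations, so the sketch below (reusing `render_ray` from the example above) only illustrates the general idea behind this family of deblurring methods, under loudly hypothetical choices: a blurry pixel is modelled as the average of colors rendered along several latent sharp rays, each obtained by transforming the base ray with a global camera-motion term and a local object-motion term. The fixed toy transforms stand in for quantities MoBluRF would predict; none of the names below come from the authors' code.

```python
import numpy as np

def transform_ray(origin, direction, rotation, translation):
    """Apply a rigid transform to a ray (rotate its direction, shift its origin)."""
    new_dir = rotation @ direction
    return origin + translation, new_dir / np.linalg.norm(new_dir)

def render_blurry_pixel(base_origin, base_dir, n_latent=5):
    """Model one blurry pixel as the mean over latent sharp rays.

    Toy stand-ins: MoBluRF *predicts* these per-ray transforms
    (incrementally, as global camera motion plus local object motion);
    here they are fixed values chosen so the sketch runs end to end.
    """
    colors = []
    for k in range(n_latent):
        s = k / max(n_latent - 1, 1) - 0.5            # position within the exposure
        # Global camera motion: small rotation about the y-axis over the exposure.
        angle = 0.02 * s
        rot = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(angle), 0.0, np.cos(angle)]])
        # Local object motion: small per-ray translation (toy value).
        trans = np.array([0.05 * s, 0.0, 0.0])
        o, d = transform_ray(base_origin, base_dir, rot, trans)
        colors.append(render_ray(o, d))               # sharp render per latent ray
    return np.mean(colors, axis=0)                    # averaging reproduces the blur

blurry = render_blurry_pixel(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(blurry)
```

During training, a model of this kind would compare the averaged color against the observed blurry pixel, so it can only explain the blur by recovering sharp geometry and motion underneath it.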

“By enabling deblurring and 3D reconstruction from casual handheld captures, our framework enables smartphones and other consumer devices to produce sharper and more immersive content,” said Dr. Oh. “It could also help create crisp 3D models from shaky museum footage, improve scene understanding and safety for robots and drones, and reduce the need for specialized capture setups in virtual and augmented reality.”
