Monde Duinkharjav

Budmonde Duinkharjav
Дуйнхаржавын Будмондэ

Clicky Stuffs
Email: budmonde@nyu.edu
Twitter: @budmonde
Github: budmonde
...
if it exists, it's "budmonde".
Resume
Google Scholar

Last Update: Feb 1, 2024

I am a PhD candidate at NYU's Immersive Computing Lab, advised by Prof. Qi Sun. My research focuses on the relationship between what we observe in our surroundings and how our perception of it affects our behavior and ability to perform visual tasks in the context of computer graphics applications. More broadly, I'm interested in how research on human perception can be leveraged to augment and improve computer graphics systems to aid us in our daily lives.

Prior to starting my studies at NYU in Spring 2021, I received my BS and MEng degrees in Computer Science and Engineering from MIT in 2018 and 2019, respectively. During my time at MIT, I was part of the Computer Graphics Group at CSAIL, advised by Prof. Frédo Durand, where I worked on incorporating differentiable ray tracing into machine learning pipelines.

Updates

May 2023
I started an internship at Adobe Research mentored by Chang Xiao.
May 2023
I received an award for Outstanding Performance on the PhD Qualification Exam at NYU.
Dec 2022
I received an Honorable Mention for the Snap Research Fellowship.
Oct 2022
A paper I contributed to received the Best Journal Paper Award at IEEE ISMAR 2022.
Aug 2022
My first-authored paper received the Best Paper Award at ACM SIGGRAPH 2022.
May 2022
I started an internship at NVIDIA's Human Performance and Experience Team (HPX) mentored by Rachel Brown.

Publications

Numerically Lossy Perceptually Lossless Image Encoding For Memory- and Energy-Efficient Mobile Virtual Reality
ASPLOS 2024  |  to appear
A DRAM traffic compression scheme for HMDs is achieved by leveraging the deterioration of color discrimination in peripheral vision.
The Shortest Route Is Not Always the Fastest: Probability-Modeled Stereoscopic Eye Movement Completion Time in VR
Budmonde Duinkharjav, Benjamin Liang, Anjul Patney, Rachel Brown, Qi Sun
SIGGRAPH Asia 2023  |  Journal
The amplitude and direction of gaze movements in binocular vision exhibit different temporal performance characteristics; our computational model predicts this relationship.
Citation key: duinkharjav2023stereolatency
Color-Perception-Guided Display Power Reduction for Virtual Reality
Budmonde Duinkharjav*, Kenneth Chen*, Abhishek Tyagi, Jiayi He, Yuhao Zhu, Qi Sun (* co-first authors)
SIGGRAPH Asia 2022  |  Journal
A power saver for OLED modules in untethered HMDs is achieved by applying a perceptually unnoticeable foveated color modulation filter.
Citation key: duinkharjav2022vrpowersaver
Reconstructing room scales with a single sound for augmented reality displays
Benjamin Liang, Andrew Liang, Iran Roman, Tomer Weiss, Budmonde Duinkharjav, Juan Pablo Bello, Qi Sun
JID 2022
Room dimensions can be inferred from hearing how an audio signal propagates within the space.
Citation key: liang2022audioreconstruction
FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality
Nianchen Deng, Zhenyi He, Jiannan Ye, Budmonde Duinkharjav, Praneeth Chakravarthula, Xubo Yang, Qi Sun
ISMAR 2022  |  Best Journal Paper
Neural Radiance Field rendering can be accelerated while preserving visual fidelity in egocentric applications via a coordinate system re-parameterization.
Citation key: deng2021fovnerf
Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency
SIGGRAPH 2022  |  Journal  |  Best Paper
Human decision making performance varies as a function of the image features observed; this relationship can be leveraged to tailor graphics applications for best performance.
Citation key: duinkharjav2022gazetiming
Instant Reality: Gaze-Contingent Perceptual Optimization for 3D Virtual Reality Streaming
TVCG 2022
Data transmission bandwidth is saved by determining the LOD of 3D assets in a foveated manner.
Citation key: chen2022irg

Posters and Demos

Imperceptible Color Modulation for Power Saving in VR/AR
SIGGRAPH 2023  |  Emerging Technologies
Follow-up live demo of the power saver for OLED modules in untethered HMDs; see Color-Perception-Guided Display Power Reduction for Virtual Reality for details.
Citation key: chen2023vrpowersaver
Learning Non-stationary SVBRDFs using GANs and Differentiable Rendering
Budmonde Duinkharjav
MIT MEng Thesis 2019
Texture maps for 3D assets can be learned directly from rendered images via differentiable ray tracing and GANs.
Citation key: duinkharjav2019svbrdfgan

Teaching Experience

MIT 6.815/865: Digital and Computational Photography: Teaching Assistant, Spring 2019

MIT 6.858: Computer Systems Security: Teaching Assistant, Spring 2018

MIT 6.148: WebLab: Introduction to Web Programming: Co-Instructor
Lecture Videos: Winter 2017 and Winter 2018