AR Research 2015-2018

Since 2015, I’ve worked on a variety of research projects that, in retrospect, were all related to the future of augmented reality, and particularly to head-mounted displays for augmented reality. I’ve recently started shifting my research focus to a new area, so this seemed like a good time to post a retrospective on what I’ve done and what I feel I’ve learned over these last 3 years. Obviously that’s a lot of time to compress into a single blog post, so it’s safe to say this will be very brief compared to the depth of the actual work. Since it covers 3 years, I’m going to use the number 3 heavily here.

Three things I’ve tried

At NVIDIA, we’ve been experimenting with HMDs, GPU-based rendering for them, and evolving the interface between GPUs and displays. Here are a few projects I’ve worked on that are firmly in that space.

  1. Binary refreshing OLED for reduced latency (and 1700Hz+ framerate!)
  2. Varifocal virtuality: codesigned display optics and computational blur
  3. Foveated displays - not yet published!

Three collaborations

While I’ve collaborated with many others, I want to draw particular attention to the great work Rob Shepherd and his Organic Robotics Lab at Cornell University have been doing. All of these projects sit at the interface between the human and the machine:

  1. HTC Vive skin for kinesthetic haptic feedback
  2. A custom controller for passive haptic feedback
  3. OrbTouch: A soft Tetris controller trained with machine learning

Three things others have done

There has been an incredible amount of progress in AR since 2015. I’m just going to call attention to a few research results I’ve personally found interesting over that time.

  1. Fast primary ray visibility on GPUs!
  2. High dynamic range augmented reality
  3. Saccadic redirected walking