Since 2015, I’ve worked on a variety of research projects that I’d retrospectively say were all related to the future of augmented reality, and particularly to head-mounted displays for augmented reality. I’ve recently started shifting my research focus to a new area, so I figured this would be a good time to post a retrospective of what I’ve done and what I feel I’ve learned over these last 3 years. Obviously that’s a lot of time to compress into a single blog post, so it’s safe to say this is going to be very brief compared to the depth of the work and effort involved. And since it covers 3 years, I’m going to use the number 3 heavily here.
Three things I’ve tried
At NVIDIA, we’ve been experimenting with HMDs, with GPU-based rendering for them, and with evolving the interface between GPUs and displays. Here are a few projects I’ve worked on that sit firmly in that space.
- Binary refreshing OLED for reduced latency (and 1700Hz+ framerate!); there’s a rough sketch of the binary-frame idea after this list
- Varifocal virtuality: codesigned display optics and computational blur
- Foveated displays - not yet published!
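To make the binary-refresh idea above a little more concrete: a 1-bit panel can still show grayscale if it flashes many binary subframes fast enough for the eye to average them, and because each subframe is independent, every one of them can be rendered with the latest head pose, which is where the latency win comes from. The snippet below is only a minimal sketch of temporal dithering under that assumption; the subframe count and the error-diffusion rule are my simplifications here, not the actual modulation scheme from the project.

```python
import numpy as np

def binary_subframes(gray_frame, n_subframes=16):
    """Approximate an 8-bit grayscale frame with a burst of 1-bit subframes.

    Each pixel is lit in roughly (intensity / 255) of the subframes, so the
    temporal average perceived by the eye approaches the target grayscale.
    """
    target = gray_frame.astype(np.float32) / 255.0
    lit_so_far = np.zeros_like(target)
    subframes = []
    for i in range(1, n_subframes + 1):
        # Light a pixel whenever that keeps its running count of lit
        # subframes closest to the target intensity (error diffusion in time).
        lit = (target * i - lit_so_far) >= 0.5
        lit_so_far += lit
        subframes.append(lit)
    return subframes

# Example: a 4x4 horizontal ramp dithered into 16 binary subframes.
ramp = np.tile(np.linspace(0, 255, 4), (4, 1)).astype(np.uint8)
frames = binary_subframes(ramp)
print(np.mean(frames, axis=0) * 255)  # close to the original ramp
```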
Three collaborations
While I’ve collaborated with many others, I want to draw particular attention to the great work Rob Shepherd and his Organic Robotics Lab at Cornell University have been doing. All of these projects sit at the interface between the human and the machine.
- HTC Vive skin for kinesthetic haptic feedback
- A custom controller for passive haptic feedback
- OrbTouch: A soft Tetris controller trained with machine learning
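To give a flavor of what “trained with machine learning” means for OrbTouch: the deformable controller produces a stream of sensor readings, and a small classifier maps each short window of readings to a discrete game command. The sketch below is purely illustrative, with made-up channel counts, window length, and command labels rather than the ones from the actual paper.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 8 sensor channels sampled over a 20-step window,
# classified into one of five Tetris commands. All shapes and labels here
# are illustrative, not taken from the OrbTouch paper.
COMMANDS = ["none", "left", "right", "rotate", "drop"]

class PressClassifier(nn.Module):
    def __init__(self, channels=8, window=20, n_commands=len(COMMANDS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                      # (batch, channels * window)
            nn.Linear(channels * window, 64),
            nn.ReLU(),
            nn.Linear(64, n_commands),         # logits over commands
        )

    def forward(self, x):                      # x: (batch, channels, window)
        return self.net(x)

# Dummy forward pass on random "sensor" data standing in for a real capture.
model = PressClassifier()
window = torch.randn(1, 8, 20)
command = COMMANDS[model(window).argmax(dim=1).item()]
print(command)
```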
Three things others have done
There has been an incredible amount of progress in AR since 2015. I’m just going to call attention to a few things in the research space that I’ve personally found interesting over that time.