My research interests lie in creating augmented reality (AR) experiences and new forms of natural 3D interaction with hands and objects, particularly in everyday environments. My strength is the ability to understand and combine complex technologies across the full stack, from sensors and optical systems through GPU-accelerated ML and vision processing, 3D graphics, and gesture recognition, to the user interface. I lead cross-functional teams in rapidly prototyping vertically integrated real-time systems that prove out novel technical contributions and use cases, unblock prototyping on the user-facing side, and surface backend issues early. My background in human-computer interaction allows me to carefully consider human factors, user needs, and usability when designing new interaction techniques or defining technical specifications (e.g., latency and ergonomics).
At Google, I was a Technical Lead for R&D projects in AR systems and input devices. I led rapid prototyping of immersive mixed reality experiences spanning 3D graphics, GPGPU, ML, vision, and hardware. I led and shipped ARCore Depth Lab, led and internally shipped AR prototyping systems, drove the evaluation of human perception of passthrough AR optics and explored the solution space, and initiated and co-led a new input device workstream for AR.
Before joining Google, I was a founding team member and Senior Technology Scientist at Perceptive IO, which was acquired by Google. Before that, I worked on freeform 3D interaction technology as a Researcher at Microsoft Research in Redmond and in Cambridge, UK.
I hold a Ph.D. in Computing Science from Newcastle University, UK, and a Diplom (M.Sc.) in Media Informatics from Ludwig Maximilian University (LMU) in Munich, Germany.
My previous work includes Holoportation, KinectFusion, Digits, RetroDepth, FlexSense, and more.