Meta AR/VR Job | Research Scientist, Human-Computer Co-adaptive Interfaces

Job: Research Scientist, Human-Computer Co-adaptive Interfaces

Type: Artificial Intelligence | Engineering, Machine Learning, Research

Location: Burlingame, CA

Date Posted: 2022-01-13


At Meta Reality Labs Research, our goal is to explore, innovate, and design novel interfaces and hardware for virtual, augmented, and mixed reality experiences. We are driving research toward a vision of an always-on augmented reality device that enables high-quality, contextually relevant interactions across a range of complex, dynamic, real-world tasks in natural environments. To achieve this goal, our team draws on methods and knowledge from artificial intelligence, machine learning, computer vision, and human–computer interaction. We are looking for a skilled and motivated researcher with expertise in sequence modeling, model personalization, domain adaptation, and/or active learning to join our team. The chosen candidate will work with a diverse, highly interdisciplinary team of researchers and engineers and will have access to cutting-edge technology, resources, and testing facilities.

In this position, you will work with an interdisciplinary team of domain experts in embodied artificial intelligence (AI), human–computer interaction, computer vision, cognitive and perceptual science, and sensing and tracking. Together you will build human–machine co-adaptive interfaces that make novel input methods easy to discover and learn, and that adapt input- and action-recognition models online. The work involves integrating multimodal data sources, including electromyography (EMG), video, and other biosignals from wrist-wearable devices and other sensing methods, into personalized models that recognize input commands for future augmented-reality devices. These models will leverage large-scale real-world datasets and the scale of Meta machine-learning infrastructure, and will be deployed in AR/VR prototypes to uncover research questions on the path to the next era of human-centered computing.
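To make the online-adaptation idea above concrete, here is a minimal, hypothetical sketch, not Meta's actual system: a nearest-centroid command decoder trained on generic data, whose per-class centroids are then nudged toward an individual user's samples with an exponential moving average. All names and the feature representation are assumptions for illustration.

```python
import math

class CentroidDecoder:
    """Nearest-centroid decoder for fixed-length feature vectors.

    Each input command (e.g. "pinch", "swipe") is represented by the
    running mean of its training features; prediction returns the label
    of the closest centroid. adapt() nudges a centroid toward a fresh
    sample with an exponential moving average, a deliberately simple
    stand-in for online per-user personalization.
    """

    def __init__(self, alpha=0.1):
        self.alpha = alpha      # EMA rate: how fast centroids track the user
        self.centroids = {}     # label -> mean feature vector

    def fit(self, samples):
        """samples: iterable of (label, feature_vector) pairs."""
        sums, counts = {}, {}
        for label, x in samples:
            acc = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }

    def predict(self, x):
        def dist(label):
            c = self.centroids[label]
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, x)))
        return min(self.centroids, key=dist)

    def adapt(self, label, x):
        """Shift one centroid toward an observed sample from this user."""
        c = self.centroids[label]
        self.centroids[label] = [
            (1 - self.alpha) * cv + self.alpha * xv for cv, xv in zip(c, x)
        ]
```

After fitting on a generic corpus, each call to `adapt` moves only the named command's centroid a fraction `alpha` of the way toward the user's observed sample, so the decoder drifts toward that user without retraining.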


Minimum Qualifications

Bachelor’s degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience

PhD in deep learning, artificial intelligence, machine learning, computer science, computational neuroscience, a relevant technical field, or equivalent practical experience

Demonstrated track record in developing scalable, robust systems for training deep-learning models

3+ years of experience with PyTorch or an equivalent framework


Responsibilities

Formulate and evaluate hypotheses, from ideation through implementation and demonstration of live online experimental results

Design and build new datasets for exploring methods of developing input interfaces from novel sensor-system prototypes

Explore applied machine-learning methods, starting with 0-to-1 baselines on novel problems and datasets and progressing toward modern machine-learning approaches

Leverage advances in established machine-learning domains, such as speech and language understanding, online learning, active learning, and reinforcement learning, to improve decoding of novel sensor data
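The active-learning work mentioned in the responsibilities above can be sketched with a small, hypothetical example (the interface is an assumption for exposition, not the team's tooling): margin-based uncertainty sampling, which selects the unlabeled example whose two highest predicted class probabilities are closest, i.e. the one most worth labeling next.

```python
def margin_sample(unlabeled, predict_proba):
    """Uncertainty sampling by margin: return the index of the unlabeled
    example the model is least sure about, i.e. the one with the smallest
    gap between its two highest class probabilities.

    unlabeled:     list of feature vectors
    predict_proba: callable mapping a feature vector to a dict of
                   {label: probability} (an assumed interface, for
                   illustration only)
    """
    def margin(x):
        probs = sorted(predict_proba(x).values(), reverse=True)
        second = probs[1] if len(probs) > 1 else 0.0
        return probs[0] - second
    return min(range(len(unlabeled)), key=lambda i: margin(unlabeled[i]))
```

In a labeling loop, the returned index is sent to an annotator, the new label is added to the training set, and the model is refit before the next query.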

Additional Requirements

Research experience with Automatic Speech Recognition, Machine Translation, and/or Text-to-Speech

Experience spanning hypothesis formulation, dataset preprocessing, and training and evaluation of new algorithms, through to implementing reusable Python modules

Experience deploying machine-learning/AI models in closed-loop systems

Experience with joint hardware-software development and associated rapid prototyping

Experience with biosignals, body-machine interfaces, neural analysis, signal processing, or related fields
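As one concrete instance of the biosignal processing listed above, here is a minimal, illustrative sketch (an assumption for exposition, not a prescribed pipeline) of a sliding-window root-mean-square envelope over a raw EMG channel, a common first feature before any learned decoding:

```python
import math

def rms_windows(signal, window, hop):
    """Sliding-window RMS envelope of a 1-D biosignal.

    For raw EMG, windowed RMS summarizes muscle-activation energy while
    discarding sign and high-frequency detail, giving a compact feature
    sequence for a downstream command decoder.

    signal: list of floats (one channel of samples)
    window: samples per window
    hop:    samples between consecutive window starts
    """
    feats = []
    for start in range(0, len(signal) - window + 1, hop):
        chunk = signal[start:start + window]
        feats.append(math.sqrt(sum(v * v for v in chunk) / window))
    return feats
```

With `hop < window` the windows overlap, trading more output frames for redundancy; with `hop == window` the envelope is non-overlapping, as in the simplest pipelines.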