Meta AR/VR Job | Research Scientist Context Awareness, Audio (RL Research) | Oculus

Job: Research Scientist Context Awareness, Audio (RL Research) | Oculus

Type: Research

City: Redmond, WA

Date: 2022-03-11


Meta/Reality Labs Research (RL Research) brings together a world-class team of researchers and engineers to create the future of augmented and virtual reality, which together will become as universal and essential as smartphones and personal computers are today. And just as personal computers have done over the past 45 years, AR and VR will ultimately change everything about how we work, play, and connect.

We are developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interfaces, haptic interaction, eye/hand/face/body tracking, perception science, and true telepresence. Some of those will advance much faster than others, but they all need to happen to enable AR and VR that are so compelling that they become an integral part of our lives.

At Reality Labs Research, we are hiring a Research Scientist in Context Awareness to help us create context-aware devices. You will work on devices that can understand and react to both the effects of user intent on acoustic event relevance and the effects of the acoustic environment on intent, emotion, needs, and desires.

You will work on systems that incorporate input from audio event detection, acoustic properties detection, and active talkers detection. They may leverage these to predict user intent or integrate with an intent system, and may achieve any or all of these (and more) through combining signals from non-audio modalities.

Working across disciplines to achieve research outcomes in collaboration with other teams will be important to success in this role. This includes collaboration with audio- and acoustics-related domains such as the hearing sciences, room acoustics modeling and detection, and audio signal detection, as well as collaboration with domains as broad as computer vision, virtual assistants, and user intent modeling. In addition to inter-disciplinary collaboration, the role requires skills in machine learning as applied to audio processing and signal detection, along with some understanding of microphone array systems and their capabilities.


Minimum Qualifications

PhD in audio signal processing, machine learning, or an equivalent field relevant to the role

3+ years of experience in the development of DSP and ML algorithms and validation procedures

3+ years of experience with the development, implementation, and testing of audio digital signal processing methods

3+ years of experience with one or more machine learning toolkits (e.g., TensorFlow, PyTorch, NumPy, scikit-learn, pandas, or equivalent)

Experience designing data collection protocols and performing data collection quality control


Responsibilities

Develop machine-learning-based technologies to detect acoustic context from signals such as detected talkers, detected emotions, detected talker effort, noise levels, room acoustics properties, and potentially many others

Help build experimental design pipelines and generate reliable, correct large-scale audio and human auditory perception datasets for model training/validation/testing

Partner with DSP engineers and software engineers to realize designs as functioning real-time demonstrations

Partner with user experience researchers to study the user experience of proposed designs

Additional Requirements

Experience in C/C++ programming.

Experience with development and implementation of machine learning or deep learning algorithms applied to auditory perception, auditory modeling, audiology, or hearing correction problems.

Experience with building computational models on speech and/or acoustic datasets.

Proven track record of independence in achieving significant results and innovation, as demonstrated by first-authored publications, industrial experience, or equivalent.

Strong interpersonal skills to facilitate cross-group and cross-cultural collaboration.