Meta AR/VR Job | Audio Algorithms Research Software Engineer | Oculus
City: Redmond, WA
Reality Labs Research at Meta brings together a world-class team of researchers, developers, and engineers to create the future of virtual and augmented reality, which together will become as universal and essential as smartphones and personal computers are today. Just as personal computers have done over the past 45 years, AR and VR will ultimately change everything about how we work, play, and connect. We are developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, haptic interaction, eye/hand/face/body tracking, perception science, and true telepresence. Some of those will advance much faster than others, but they all need to happen to enable AR and VR that are so compelling that they become an integral part of our lives.
The audio team within Reality Labs Research is looking for an innovative Research Software Engineer with a broad set of skills in immersive and interactive spatial audio. In this position, you’ll develop, optimize, and validate advanced algorithms and computational modeling approaches to solve complex problems at the intersection of spatial audio, acoustics, and signal processing, advancing the state-of-the-art technologies that will change the way we perceive spatial audio in the metaverse.
Minimum Qualifications:
Bachelor’s degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
Hands-on experience in audio processing and real-time audio
Experience in C/C++, MATLAB, and/or Python programming
Knowledge of audio capture and render frameworks
Experience integrating, debugging, and shipping audio algorithms and technologies
Responsibilities:
Conduct R&D in the areas of audio, ML, and signal processing, both device- and network-related, to deliver best-in-class immersive spatial audio features and quality across Meta devices and family of apps
Develop novel signal processing algorithms, and train/deploy ML models for the audio areas listed above that can run on all devices and work well for our diverse user base and operating conditions
Collaborate closely with both research and product teams to deploy new technologies, including augmenting systems and algorithms to make them robust to shipping scenarios
Develop metrics to evaluate perceptual audio quality in real-time communication
Collaborate with partner teams across Meta, including HW, AI, Calling, Connectivity, and Research, to understand features and operation and to specify enhancements
Engage with the research and open-source communities to contribute to, and stay up to date with, the latest advancements in the field
Design, develop, test, and deploy audio frameworks and applications
Design and conduct in-field experiments to continuously improve audio quality for our users
Identify areas of performance, quality and reliability improvement across the stack
Preferred Qualifications:
PhD in Audio, Acoustics, Machine Learning, Computer Science, Computer Engineering, Electrical Engineering, or a related field.
Experience with ML frameworks such as PyTorch or TensorFlow to train models for one or more of the following applications: noise suppression, dereverberation, echo cancellation, packet loss concealment, MOS estimation.
Experience with audio signal processing applied to speech enhancement.
Experience with auditory perception, hearing sciences, machine learning, or research in audio capture with microphone-arrays.
Proven track record of achieving significant results and innovation as demonstrated by first-authored publications and/or patents.
Demonstrated experience developing hardware and software technologies into research or consumer products within time-bound, multi-partner initiatives.