Meta AR/VR Job | Research Science Manager, Multimodal machine learning for HCI
Job: Research Science Manager, Multimodal machine learning for HCI
Type: 3D Software Engineering
Cities: Redmond, WA; Burlingame, CA
Date posted: Before 2021-12-14
Summary
At Facebook Reality Labs Research, our goal is to explore, innovate, and design novel interfaces and hardware for virtual, augmented, and mixed reality experiences. We are driving research towards a vision of an always-on augmented reality device that can enable high-quality, contextually relevant interactions across a range of complex, dynamic, real-world tasks in natural environments; to achieve this goal, our team draws on methods and knowledge from artificial intelligence, machine learning, computer vision, and human–computer interaction. We are looking for a skilled and motivated Research Science Manager with expertise in leading strategy and directing teams in multimodal machine learning, sensor fusion, gesture recognition, contextual modeling, or related fields to join our team. More broadly, the chosen candidate will work with a diverse and highly interdisciplinary team of researchers and engineers and will have access to cutting-edge technology, resources, and testing facilities.
In this position, you will work with an interdisciplinary team of domain experts in embodied artificial intelligence (AI), human–computer interaction, computer vision, cognitive and perceptual science, and sensing and tracking on problems that contribute to creating human-centered contextual interactive systems. The position will involve building, leading, and directing a team that develops full multimodal ML technology stacks—leveraging video, audio, IMU, EMG, optical-sensor, and other wearable sensor data—to enable recognition of a diverse library of gestures from a rich ecosystem of wearable devices. The position will also involve leading strategy and cross-functional workstreams centered on uncovering contextually relevant information for predicting user behavior from wearable sensors. The ML models built by this portfolio of work will leverage large-scale real-world data sets and the scale of Facebook machine-learning infrastructure, and will be deployed into AR/VR prototypes to uncover research questions on the path to the next era of human-centered computing.
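For concreteness, here is a minimal sketch of the kind of late-fusion multimodal gesture classifier the paragraph above describes, written in PyTorch. Every name, dimension, and architectural choice in it (ModalityEncoder, the channel counts, the 20-class head) is an illustrative assumption, not a description of any actual FRL Research system.

```python
# Hypothetical sketch of a late-fusion multimodal gesture classifier.
# All dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """1D-conv encoder mapping a (batch, channels, time) signal
    window to a fixed-size embedding."""
    def __init__(self, in_channels: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, embed_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch, embed_dim)

class LateFusionGestureClassifier(nn.Module):
    """Encode each sensor stream separately, concatenate the
    embeddings, and classify the gesture."""
    def __init__(self, channels_per_modality: dict[str, int],
                 num_gestures: int, embed_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: ModalityEncoder(ch, embed_dim)
            for name, ch in channels_per_modality.items()
        })
        self.head = nn.Linear(embed_dim * len(self.encoders), num_gestures)

    def forward(self, streams: dict[str, torch.Tensor]) -> torch.Tensor:
        feats = [self.encoders[name](streams[name])
                 for name in self.encoders]
        return self.head(torch.cat(feats, dim=-1))

# Example: 16-channel EMG and 6-axis IMU windows, 20 gesture classes.
model = LateFusionGestureClassifier({"emg": 16, "imu": 6}, num_gestures=20)
logits = model({"emg": torch.randn(8, 16, 200),
                "imu": torch.randn(8, 6, 200)})
```

Late fusion (one encoder per stream, concatenated embeddings) is only one plausible design; early fusion or cross-modal attention would fit the same interface equally well.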
Qualifications
PhD in computer science, signal processing, machine learning, artificial intelligence, or a related technical field
Demonstrated track record in defining and leading applied research in multimodal machine learning, sensor fusion, gesture recognition, or related areas
5+ years of experience in managing teams and working in a cross-functional setting
5+ years of experience building, testing, and deploying machine-learning/AI systems in real-time, interactive applications, and developing novel algorithmic solutions tailored to challenging real-world problems
Responsibilities
Lead strategy and cross-functional workstreams centered on uncovering contextually relevant information for predicting user behavior from wearable sensors
Define and execute a cutting-edge applied research program toward developing full multimodal ML technology stacks to enable recognition of a diverse library of gestures from wearable sensors
Direct and manage a team of research scientists and engineers working on advancing AR/VR, including creating career development plans, managing performance, and conducting performance reviews
Provide team guidance, regular feedback, education, coaching, and mentoring
Identify, recruit, interview, and hire new research scientists and engineers
Additional Requirements
Experience in building machine-learning systems that leverage wristband sensors
Experience with machine learning applied to electromyography (EMG) signals
Experience with sensor systems and joint hardware/algorithm design
Experience with hand or body tracking, state-of-the-art machine-learning models for time-series modeling (e.g., inertial, audio, video; see the sketch after this list), and human–computer interaction
Familiarity with concepts in embodied AI, representation learning, and few-shot learning
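As a hedged illustration of the time-series modeling mentioned in the list above (not any particular FRL pipeline), wearable-signal models commonly consume fixed-length, per-channel-normalized windows sliced from a continuous recording. The sampling rate, window length, and hop size below are assumed values.

```python
# Hypothetical preprocessing sketch: slice a continuous wearable
# recording into overlapping, normalized windows that a time-series
# model (e.g., the classifier sketched earlier) can consume.
import numpy as np

def make_windows(signal: np.ndarray, window: int, hop: int) -> np.ndarray:
    """signal: (channels, time) -> (num_windows, channels, window)."""
    channels, length = signal.shape
    starts = range(0, length - window + 1, hop)
    return np.stack([signal[:, s:s + window] for s in starts])

def normalize(windows: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Zero-mean, unit-variance per window and channel."""
    mean = windows.mean(axis=-1, keepdims=True)
    std = windows.std(axis=-1, keepdims=True)
    return (windows - mean) / (std + eps)

# Example: 10 s of 16-channel EMG at 1 kHz, 200 ms windows, 50% overlap.
emg = np.random.randn(16, 10_000)
batch = normalize(make_windows(emg, window=200, hop=100))
print(batch.shape)  # (99, 16, 200)
```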