Meta AR/VR Job | Machine Learning Acceleration Engineer Intern (PhD)
City: Sunnyvale, CA
Reality Labs (RL) focuses on delivering Meta’s vision through Augmented Reality (AR) and Virtual Reality (VR). The compute performance and power efficiency requirements of Virtual and Augmented Reality demand custom silicon. The Meta Silicon team is driving the state of the art forward with breakthrough work in computer vision, machine learning, mixed reality, graphics, displays, sensors, and new ways to map the human body. Our chips will enable AR and VR devices in which the real and virtual worlds mix throughout the day. We believe the only way to achieve our goals is to look at the entire stack, from transistors through architecture to firmware and algorithms.
You will use your background in Machine Learning (ML) to enable efficient hardware (h/w) acceleration of ML algorithms for Computer Vision (CV) and Image Processing in AR and VR devices. You will also have a unique opportunity to optimize production systems, as well as help future-proof our silicon and software by proactively understanding state-of-the-art research and developing hardware or software solutions to mitigate inefficiencies. To be successful in this role, you should possess strong software development skills, familiarity with ML algorithms and frameworks/toolchains, and hands-on experience in software/hardware co-design, especially in the context of ML.
This is a 12-week internship with various start dates throughout the year.
Currently has, or is in the process of obtaining, a PhD degree in Computer Science, Electrical Engineering, or a related field.
Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment.
Interpersonal skills: cross-group and cross-cultural collaboration.
Python (or similar) scripting experience and exposure to ML frameworks like PyTorch/TensorFlow.
Experience in software design and programming in C/C++.
Understanding of computer architecture and performance implications.
Collaborate with computer architects and software, ML, and silicon engineers to map and optimize ML workloads on various backend targets, including CPUs, DSPs, and Deep Learning Accelerators.
Perform ML algorithm/software/hardware co-design to achieve the best energy and performance efficiency.
Develop performant C/C++ kernels and optimize domain-specific compilers to port industry-standard ML libraries to custom hardware.
Review state-of-the-art research trends in hardware-specific ML model optimizations and mapping; evaluate and integrate promising techniques into shipping products.
Run analysis and profiling, identify performance and power bottlenecks on actual hardware, virtual platforms, simulators, or emulators, and provide feedback for optimizations across the stack.
Experience with hardware acceleration on GPUs/CPUs/DSPs/custom ASICs.
Understanding of classic ML and CV algorithms, deep learning models such as BERT, RNNs, and CNNs, and frameworks like TensorFlow/PyTorch.
Familiarity with state-of-the-art ML algorithm optimizations such as Neural Architecture Search, quantization, and pruning.
Familiarity with deep learning compilers like TensorRT and XLA is a plus.
Familiarity with high-performance software kernel development for custom ISAs.
Familiarity with code profiling and debugging tools; experience with such tools in the context of ML is a plus.
Comfortable reading, tracing, and refactoring others’ code.
Intent to return to the degree program after completing the internship.