Careers

Work with us

We are solving some of the most exciting problems at the intersection of CGI faces, real-time computer vision, and deep learning, and deploying our technology on millions of mobile devices and applications. Headquartered in San Francisco, we are a remote-friendly team distributed across several states in the US and Europe.

Open positions

Description

Loom.ai is enabling a new era of virtual communication through the creation, animation and sharing of personalized 3D avatars. Based in San Francisco, and an alum of the Y Combinator Fellowship, the Academy Award-winning team has created a best-in-class solution powered by deep learning, computer vision and visual effects. Loom.ai recently announced a partnership with Samsung that brings its fully embedded SDK to power ‘AR Emoji’ on the new Samsung Galaxy S10 devices.

In just seconds, Loom.ai can transform a single photograph into a fully representative 3D avatar, as personalized as the individual, by recognizing the many nuances that make each face unique. Animatable and expressive in real time, these avatars can power current and future applications in mobile messaging, entertainment, AR/VR, e-commerce, video conferencing, broadcasting and more. Our team comprises multiple PhDs and brings decades of experience writing industry-strength software for VFX and games, a keen eye for scaling and infrastructure, strong research credentials (papers at SIGGRAPH, SCA and CVPR), and two Sci-Tech Oscars.

As the Product Engineer, you will integrate and support Loom.ai's avatar SDKs in Unity and similar real-time 3D game engines. You will also work closely with our current customers in mobile AR and help onboard 3D developers in new market segments.

Requirements
  • 5+ years of professional game programming experience in Unity3D or similar real-time 3D engine.
  • Proven track record of shipping premium consumer product experiences in games, AR/VR, or mobile.
  • Experience with character animation, including a solid understanding of skinning, blendshapes, and related techniques.
  • Experience with performance profiling and optimization for desktop and mobile environments.
  • Very strong programming skills (C#, C++11/14).
  • Experience with scalable production practices, including code management, CI, testing and review.
  • A great communicator, collaborator, self-starter and team player.
  • Strong desire to onboard and support external developers and users of the API.


Plus: 

  • Experience using the facial rigging system in ARKit.
  • Experience implementing an avatar system for VR.
  • Experience writing and using external native plugins.
  • Strong knowledge of 3D vector mathematics, real-time graphics, physics, and animation.
  • Experience with 3D look development and shaders in Unity.
  • Experience with technical writing.

Description

Loom.ai is enabling a new era of virtual communication through the creation, animation and sharing of personalized 3D avatars. Based in San Francisco, and an alum of the Y Combinator Fellowship, the Academy Award-winning team has created a best-in-class solution powered by deep learning, computer vision and visual effects. Loom.ai recently announced a partnership with Samsung that brings its fully embedded SDK to power ‘AR Emoji’ on the new Samsung Galaxy S10 devices.

In just seconds, Loom.ai can transform a single photograph into a fully representative 3D avatar, as personalized as the individual, by recognizing the many nuances that make each face unique. Animatable and expressive in real time, these avatars can power current and future applications in mobile messaging, entertainment, AR/VR, e-commerce, video conferencing, broadcasting and more.

Our team comprises multiple PhDs and brings decades of experience writing industry-strength software for VFX and games, a keen eye for scaling and infrastructure, strong research credentials (papers at SIGGRAPH, SCA and CVPR), and two Sci-Tech Oscars. You will have the opportunity to join us at a pre-Series A stage and have a direct impact on shipping computer vision applications on tens of millions of mobile phones worldwide.

Requirements
  • Experience writing real-time, advanced 2D/3D computer vision software. Examples include algorithms for facial landmark tracking, body pose estimation, structure from motion, optical flow and texture synthesis.
  • Experience with machine learning techniques for classification and regression applied to images or video.
  • Strong C++ skills (preferably C++11/14) and object-oriented design.
  • Ability to write organized, efficient, readable and reusable code.
  • Solid foundations in math, algorithms, data structures, and numerical optimization.
  • BS/BE/MS or PhD degree in Computer Science or a related field.
  • 5+ years of software development experience in a commercial environment.


Plus:

  • OSX, iOS, and Android development experience.
  • Experience with Python.
  • Strong communication and collaboration skills.
  • Proven ability to lead feature development from concept definition to shipping product.

Description

Loom.ai is enabling a new era of virtual communication through the creation, animation and sharing of personalized 3D avatars. Based in San Francisco, and an alum of the Y Combinator Fellowship, the Academy Award-winning team has created a best-in-class solution powered by deep learning, computer vision and visual effects. Loom.ai recently announced a partnership with Samsung that brings its fully embedded SDK to power ‘AR Emoji’ on the new Samsung Galaxy S10 devices.

In just seconds, Loom.ai can transform a single photograph into a fully representative 3D avatar, as personalized as the individual, by recognizing the many nuances that make each face unique. Animatable and expressive in real time, these avatars can power current and future applications in mobile messaging, entertainment, AR/VR, e-commerce, video conferencing, broadcasting and more.

Our team comprises multiple PhDs and brings decades of experience writing industry-strength software for VFX and games, a keen eye for scaling and infrastructure, strong research credentials (papers at SIGGRAPH, SCA and CVPR), and two Sci-Tech Oscars.

Requirements
  • Experience applying deep learning to computer vision problems
  • Knowledge of convolutional networks and common architectures (Inception, ResNet, DenseNet, etc.)
  • Proficiency with at least one deep learning library (TensorFlow, Torch, MXNet, etc.)
  • Familiarity with traditional computer vision in C++ with libraries such as OpenCV
  • Solid software design skills and ability to write organized, efficient, readable and reusable research code
  • Strong communication and collaboration skills
  • MS/PhD in a related field with an emphasis on computer vision or machine learning, or BS with equivalent industry experience


Plus: 

  • Experience deploying machine learning models in production environments
  • Familiarity with distributed computing frameworks such as Hadoop or Spark, or distributed training of deep learning models
  • Knowledge of algorithms applied to faces such as face recognition, landmarking, or reconstruction, or advanced deep learning topics
  • Knowledge of algorithms for facial animation synthesis from audio, video and other signals
  • Publications in machine learning or computer vision conferences (CVPR, ICCV, ICML, NIPS, etc.)
  • Ability and enthusiasm to learn new technologies quickly

Description

Loom.ai is enabling a new era of virtual communication through the creation, animation and sharing of personalized 3D avatars. Based in San Francisco, and an alum of the Y Combinator Fellowship, the Academy Award-winning team has created a best-in-class solution powered by deep learning, computer vision and visual effects. Loom.ai recently announced a partnership with Samsung that brings its fully embedded SDK to power ‘AR Emoji’ on the new Samsung Galaxy S10 devices.

In just seconds, Loom.ai can transform a single photograph into a fully representative 3D avatar, as personalized as the individual, by recognizing the many nuances that make each face unique. Animatable and expressive in real time, these avatars can power current and future applications in mobile messaging, entertainment, AR/VR, e-commerce, video conferencing, broadcasting and more.

Our team comprises multiple PhDs and brings decades of experience writing industry-strength software for VFX and games, a keen eye for scaling and infrastructure, strong research credentials (papers at SIGGRAPH, SCA and CVPR), and two Sci-Tech Oscars.

Requirements
  • Experience with OpenGL (both core and ES) and advanced real-time rendering techniques
  • Knowledge of scene graphs and integration with a rendering pipeline
  • Knowledge of common computer animation constructs and skills such as skinning, texturing, keyframe animation, etc., including file formats and conversion
  • Strong C++ programming skills and object-oriented design
  • Ability to write organized, efficient, readable and reusable code
  • OSX, iOS, and Android experience
  • Strong communication and collaboration skills


Plus:

  • Mobile app development, especially for 3D applications
  • Experience with WebGL and/or other web-based 3D frameworks
  • Advanced shader design, potentially including environment lighting and shadows
  • Experience programming in Unity or Unreal Engine
  • 5+ years of industry experience writing tools or engines for games or VFX
  • Proven ability to lead feature development from concept definition to shipping product