- Spotify
- New York, NY
- Full-Time
- 5 days ago
- $184,050 - $262,928
Senior Machine Learning Engineer – Policy & Safety: the role in three lines
- The Role: Build and lead production-grade machine learning systems that power content safety and policy enforcement at scale.
- The Person: An engineer who can design, build, and ship production ML systems for content safety; lead technical initiatives across detection and policy-evaluation systems; and drive evaluation and experimentation to improve model performance and reliability.
- Requirements: Experience training, evaluating, and maintaining ML models with modern frameworks such as PyTorch, plus experience working with distributed systems and Scala.
Job Description
We design Spotify’s consumer experience—end to end, moment to moment, across every screen, platform, and partner integration. Our mission is to make listening feel effortless, personal, and joyful for billions of users around the world. That means turning complexity into clarity across hundreds of touchpoints—from our mobile and desktop apps to the smart speakers, TVs, cars, and integrations where Spotify shows up every day. If it touches a consumer, we shape it. We bring deep insight into human behavior, design, and technology to craft experiences that feel intuitive, expressive, and unmistakably Spotify.
The Policy & Safety team sits within Content Platform in the Experience Mission, building the systems that keep Spotify safe, compliant, and trusted by millions of users and creators. This team owns Spotify’s content moderation infrastructure — from detection models to policy enforcement systems and compliance data pipelines.
Working at the intersection of machine learning, platform engineering, and regulatory compliance, the team partners closely with Trust & Safety, Legal, and Public Affairs. It is on the critical path for every new content type and social feature — including messaging, comments, and collaborative experiences — ensuring safety is built in from day one. With a strong focus on "safety by default," the team is investing in large-scale rearchitecture and ML-driven systems to proactively protect users and enable safer interactions across the platform.
