Advance Your Career

Explore opportunities across Spectrum Equity’s portfolio

MLOps Engineer (Machine Learning Operations Engineer)
Software Engineering, Operations
United States
Posted on Saturday, June 29, 2024

SponsorUnited is one of the fastest-growing sports and entertainment technology platforms in the world, connecting the partnership ecosystem by providing the most comprehensive and relevant marketing and sales data available anywhere.

Almost every major pro sports team uses our platform, and our customers currently span 1,200+ organizations across sports, music, media, brands and agencies.

Job Description:

SponsorUnited is seeking an MLOps Engineer (Machine Learning Operations Engineer) to spearhead the development, implementation, and management of our data/ML pipelines, feature stores, and the monitoring and observability of our ML/LLM infrastructure. This role is central to our mission, ensuring the robustness, scalability, efficiency, and uptime of our data and AI operations. As the steward of our data quality, workflows, and data/AI architecture, you will play a critical role in leveraging AWS technologies to optimize our data warehousing, data lakes, and ML infrastructure, setting the stage for innovative uses of ML and AI across our platform.

Key Responsibilities:

  1. Data Pipeline Ownership: Architect, deploy, and manage scalable data pipelines capable of handling vast volumes of data from a diverse array of data sources, including websites, images, audio feeds, and video feeds.
  2. Data Extraction and Intelligence: Implement pipelines to extract relevant information with precision, ensuring high-quality data is readily available for analytics and machine learning applications.
  3. Data & AI Infrastructure Management: Oversee our comprehensive data infrastructure, ensuring optimal performance, reliability, and security across our AWS-based data warehousing and data lake solutions as well as ML infrastructure.
  4. Data Technology Expertise: Leverage extensive knowledge of AWS data services or other cloud data services (including but not limited to Redshift, S3, Glue, Athena, and Kinesis) to build and maintain state-of-the-art data solutions that support our analytical and operational needs.
  5. Cross-Functional Leadership: Collaborate closely with product managers, data scientists, and engineering teams to understand and fulfill data requirements, facilitating the seamless integration of ML/AI technologies to enhance our offerings.
  6. Performance Optimization: Apply advanced techniques for tuning and optimizing data throughput and storage efficiency, ensuring swift and reliable access to critical data insights.
  7. Innovation and Best Practices: Stay at the forefront of data management and cloud technologies, advocating for and implementing cutting-edge practices that contribute to our legacy of technical innovation and excellence.


Qualifications:

  • 10+ years of experience in Data Engineering, MLOps, or a similar role, with a demonstrated ability to design, build, and deploy scalable data solutions.
  • Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent experience).
  • Expert knowledge of data lake and data warehouse architecture, data ingestion, transformation, and data modeling best practices.
  • Experience with the ML lifecycle and deploying ML models, including model/data monitoring, feature stores, and experimentation tools like MLflow or Weights & Biases.
  • Some experience with the LLMOps lifecycle, including an understanding of compound AI systems, tracing, and evaluation metrics/tools.
  • Strong, hands-on experience with cloud-based databases (e.g., AWS RDS, Azure SQL Database, or Google Cloud SQL) and data lake technologies, such as AWS S3, Azure Data Lake Storage, or Google Cloud Storage, for managing and storing unstructured and semi-structured data.
  • Certification in relevant AWS architectures and database technologies, such as AWS Certified Solutions Architect (Associate/Professional) or AWS Certified Data Analytics – Specialty, is a strong plus.
  • Strong programming skills in languages such as Python, Bash, R, or Java for developing applications and scripts that automate data workflows and maintenance tasks.
  • Expertise in AWS data warehousing platforms, such as Amazon Redshift, as well as exposure to other platforms like Google BigQuery.
  • Strong understanding of data security and compliance frameworks, such as SOC 2, NIST, and PCI DSS.
  • Excellent problem-solving and communication skills.