Senior MLOps Engineer
Jobber
Description

Jobber exists to help people in small businesses be successful. We work with small home service businesses, like your local plumbers, painters, and landscapers, to transform the way service is delivered through technology. With Jobber they can quote, schedule, invoice, and collect payments from their customers, while providing an easy and professional customer experience. Running a small business today isn’t like it used to be—the way we consume and deliver service is changing rapidly, technology is evolving, and customers expect more. That’s why we put the power and flexibility in their hands to run their businesses how, where, and when they want! 

Our culture of transparency, inclusivity, collaboration, and innovation has been recognized by Great Place to Work, Canada’s Most Admired Corporate Cultures, and more. Jobber has also been named on the Globe and Mail’s Canada’s Top Growing Companies list, and Deloitte Canada’s Technology Fast 50™, Enterprise Fast 15, and Technology Fast 500™ lists. With an Executive team that has over thirty years of industry experience leading the way, we’ve come a long way from our first customer in 2011—but we’ve just scratched the surface of what we want to accomplish for our customers.

The role: 

Reporting to the Director, Data, the Senior Machine Learning Operations Engineer on the MLOps team will build an ML platform from the ground up to unlock improved operational outcomes, workflow efficiencies, and new business insights across our organization. We help teams at Jobber leverage data, tools, and technology to successfully execute on their mandates. We research, develop, and maintain systems that support other internal teams from an operational and analytical perspective.

We’re looking for people who are ready for their next challenge and want to use their experience to influence people, processes, and decisions.

The Senior Machine Learning Operations Engineer will:

  • Collaborate in architecting and building a comprehensive ML Platform from the ground up, enabling Data Scientists and ML engineers to efficiently develop, deploy, and manage ML models.
  • Lead collaboration efforts with Data Scientists and ML engineers to define the scope, requirements, and success criteria for ML projects, ensuring alignment with business objectives.
  • Design and implement robust data pipelines to process raw structured and unstructured data, proactively building features for feature stores to support diverse ML use cases.
  • Oversee the complete MLOps lifecycle, including requirements gathering, data cleaning and organization, model development, production deployment, monitoring, and maintenance.
  • Conduct thorough feasibility analyses through proofs-of-concept (POCs) and provide data-driven recommendations on preferred approaches, tools, and products within the open-source MLOps ecosystem.
  • Develop and maintain a deep understanding of Large Language Models (LLMs) and their specific MLOps requirements, staying current with rapid advancements in this field.
  • Implement and optimize end-to-end MLOps pipelines for model training, evaluation, and deployment, ensuring scalability and efficiency.
  • Establish and implement best practices for version control, testing, and monitoring of ML models, promoting reproducibility and reliability.
  • Architect scalable and efficient data processing systems capable of handling large-scale machine learning applications.
  • Continuously assess and improve the MLOps infrastructure to enhance performance, reliability, and cost-effectiveness.

To be successful, you should have:

  • A background in software or data engineering 
  • Polished communication skills with a proven record of leading work across disciplines
  • Strong proficiency in Python programming
  • Extensive experience with Apache Spark for large-scale data processing
  • Expertise in containerization (particularly Docker) and CI/CD technologies
  • Experience designing and implementing RESTful APIs
  • Comprehensive knowledge of AWS services, including ECS Fargate for container orchestration, EMR (Elastic MapReduce) for big data processing, and AWS Glue for ETL workflows
  • Proven track record of building and maintaining complex ETL pipelines
  • Experience with workflow management tools, specifically Apache Airflow
  • Proficiency in using dbt (data build tool) for data transformation and modelling
  • Strong understanding of DevOps principles and CI/CD practices
  • Excellent problem-solving skills and attention to detail
  • Ability to work effectively in a fast-paced, collaborative environment

It would be really great (but not a deal-breaker) if you had:

  • Demonstrated experience in building ML platforms or MLOps infrastructure
  • Experience with Polars, a high-performance DataFrame library for Rust and Python
  • Familiarity with caching tools and strategies for optimizing data access and processing
  • Knowledge of vector databases and their applications in machine learning pipelines
  • Experience with search engines like Elasticsearch for efficient data indexing and retrieval
  • Understanding of ML model serving frameworks and A/B testing methodologies
  • Contributions to open-source MLOps tools or frameworks
  • Familiarity with ML model versioning tools (e.g., MLflow, DVC)
