We are looking for a Senior Data Engineer II to help design, build, and maintain the frameworks and systems that acquire, validate, cleanse, and load data into our analytics systems. This role will be foundational to our goal of scaling our processes and capabilities to meet internal demand using reproducible and robust methods. The data this role unlocks is crucial to our company’s aggressive growth plans, helping internal teams answer previously unanswered questions about our customers, products, and usage patterns.
This position will engage with many parts of the company, including software engineering, product management, analytics, operations, and analytics engineering. The frameworks and systems you build will integrate with and enhance our current stack, which includes GCP, BigQuery, dbt, Prefect, Python, Fivetran, and Atlan.
Examples of projects you’ll work on
- Design, build, and maintain an efficient and flexible data ingestion framework that allows internal product, engineering, and operations teams to contribute data sources (new or updated) into GCP and BigQuery.
- Research, design, and lead the development of an event-driven data ingestion and transformation framework.
- Implement data quality checks and monitoring processes to ensure data accuracy and consistency.
- Create and maintain comprehensive documentation for data engineering processes, systems, and workflows.
- Improve our overall observability and monitoring of data pipeline performance and resilience.
- Troubleshoot and resolve data pipeline issues to ensure downstream data availability.
- Contribute to our dbt systems by ensuring the source and staging layers align with our standards and are efficient, cost-effective, and highly available.
- Provide technical leadership and mentorship to other members of our team, fostering a collaborative environment dedicated to growth and learning.
What you bring
- Strong software development skills in some combination of Python, Java, Scala, and Go
- High proficiency in SQL
- Experience building and maintaining data ingestion pipelines and frameworks
- Working knowledge of dbt or similar data transformation tools
- Highly motivated self-starter who is keen to make an impact and unafraid of tackling large, complicated problems
- Excellent communication skills: able to explain technical topics to non-technical audiences and to build and maintain the cross-team and cross-functional relationships essential to the team’s success
A plus if you have
- Experience working with BigQuery and GCP services
- Knowledge of observability
- Previous experience with Grafana visualization, or a desire to invest the time to learn
In the US, the base compensation range for this role is $173,000 - $207,000. Actual compensation may vary based on level, experience, and skill set as assessed in the interview process. Benefits include equity, a bonus (if applicable), and other benefits listed here.
*Compensation ranges are country-specific. If you are applying for this role from a location other than the one listed above, your recruiter will discuss your specific market’s defined pay range and benefits at the beginning of the process.