About Twine
Twine is a leading platform connecting top-tier freelancers, consultants, and contractors with companies that need creative and tech expertise. Trusted by Fortune 500 companies and innovative startups alike, Twine is the go-to marketplace for mission-critical projects. With a network of 500,000+ freelancers and 35,000+ companies, we provide a comprehensive solution for businesses looking to build agile teams and for freelancers seeking opportunities to work on high-impact projects.
Our Mission
At Twine, our mission is to empower creators, whether they’re businesses or individual freelancers, to grow and thrive. As automation and AI reshape the workforce, we’re driving the shift towards remote, freelance-driven work. We connect companies with top creative talent, enabling collaboration, innovation, and success on a global scale.
About the Role
A client is seeking a Senior Engineer. You will lead a team of data engineers in designing, building, and maintaining a high-performance software system that manages the analytical data pipelines powering the organization’s data strategy, applying software engineering best practices throughout. You will work closely with stakeholders across the business to understand their data needs, ensure scalability, and foster a culture of innovation and learning within the data engineering team and beyond.
You will be responsible for the overall architecture of specific modules within a product, driving their design and assisting with implementation, with attention to the system characteristics that produce optimal performance, reliability, and maintainability. You will also provide technical guidance to team members, create and maintain technical documentation, and lead efforts to architect, design, and build scalable data pipelines for processing large volumes of structured and unstructured data.
Qualifications:
- Authorization to work in the USA without requiring sponsorship.
- 10+ years of related experience in data solution development and data movement.
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related engineering field, or equivalent relevant experience.
- AWS experience with a focus on implementing and optimizing data pipelines.
Technical Skills:
- Extensive hands-on data system design and coding experience.
- Experience with modern data pipelines (AWS Step Functions, Prefect, Airflow, Luigi, Python, Spark, SQL).
- Production delivery experience with cloud-based PaaS big data technologies (EMR, Snowflake, Databricks, etc.).
- Strong programming experience with PySpark, SQL, and Python, plus proficiency in at least one additional language or framework (C#, Go, JavaScript, or React).
- Experience with distributed file systems (S3, HDFS, ADLS) and various open file formats (JSON, Parquet, CSV, etc.).
- Database design skills, including normalization/denormalization and data warehouse design.