Join us on our exciting journey at Dataroid, the award-winning digital analytics and customer engagement platform!
Dataroid empowers leading brands to enhance every individual customer experience through deep customer insight, data science modelling, and omnichannel marketing.
As Turkey's fastest-growing data analytics platform, we embrace challenges, explore new technologies, and aim to impact industries globally by helping businesses harness data to create seamless customer experiences.
At Dataroid, our developers drive innovation, staying ahead of technology trends to deliver simple and seamless solutions. Already used by leading enterprises in finance, airlines, and retail, Dataroid reshapes the experience of over 100 million users.
Dataroid is in search of a Data Engineer who will be responsible for designing and building large-scale resilient data pipelines, ensuring high performance, scalability, and seamless integration with various frameworks.
- Design and build large-scale, resilient data pipelines using frameworks such as Apache Spark, Apache Flink, and Kafka
- Write well-designed, reusable, testable, secure, and scalable high-quality code
- Collaborate with cross-functional teams
- Discover, learn and implement new technologies
- BSc, MSc, or PhD in Computer Science or a related field, or equivalent work experience
- 2+ years of experience in data engineering or a similar role
- Experience with data modeling and ETL/ELT practices
- Experience with one or more high-level, Python- or Java-based batch and/or stream processing frameworks such as Apache Spark, Apache Flink, or Kafka Streams
- Experience with relational and non-relational data stores, key-value stores, and search engines (e.g. PostgreSQL, ScyllaDB, Druid, ClickHouse, Redis, Hazelcast, Elasticsearch)
- Familiarity with data workflow orchestration tools like Airflow or dbt
- Knowledge of storage formats such as Parquet, ORC and/or Avro
- Proficiency with Python or Java
- Proficiency in code versioning tools such as Git
- Strong analytical thinking and problem-solving skills
- Strong verbal and written communication skills
- Familiarity with distributed storage systems like HDFS and/or S3
- Familiarity with data lake and data warehouse solutions including Hive, Iceberg and/or Delta Lake
- Familiarity with distributed systems and concurrent programming
- Familiarity with containerization and orchestration (Docker and/or Kubernetes)
- Experience with, or willingness to learn, large-scale stream processing technologies
- Familiarity with generative models and strong enthusiasm for generative AI, large language models (LLMs), and agentic systems
- Prior experience with Scrum/Agile methodologies
Why Dataroid?
🌟Great Compensation & Benefits: We provide an attractive compensation package, including private health insurance, company-supported pension plans, meal vouchers, commute assistance, remote work benefits, and a paid day off for your birthday.
💻Enhanced Workdays: Enjoy the flexibility of adaptable working hours. We offer online events, inspiring guest speakers, office snacks, a culture that limits unnecessary meetings, and many other perks designed to make your weekdays more enjoyable.
📚Growth & Learning: Your development is our priority, with access to premier online learning platforms like Udemy, digital libraries, and tailored training programs to support your career journey.
🚀Thriving Community: Be part of a vibrant and close-knit team that values connection. Enjoy happy hours, workshops, seasonal celebrations, and other events that bring us together.
🎯Open Dialogue: Our flat organizational structure fosters direct and transparent communication. Your ideas and feedback are always welcome, ensuring your voice is heard and valued.
FIND MORE ABOUT US 🔎
Our career page: https://www.dataroid.com/careers/
Our company page: https://www.dataroid.com/
We will process your personal data as part of our recruitment procedures. To find out more, please consult our Candidate Privacy Notices.