About Nivoda:
Nivoda is a young and energetic global team headquartered in London with offices in Mumbai, New York, Hong Kong, Johannesburg, Antwerp and Amsterdam.
We are an extremely fast-growing B2B marketplace changing how the global jewelry industry operates. We connect buyers and sellers of jewelry on our online platform and provide the most transparent, efficient and cost-effective way for the industry to buy and sell.
Nivoda has a rapidly growing workforce expanding into new countries with a dynamic, supportive and collaborative culture.
The company's sales have grown over 250% in the last 12 months, and the team has grown from 30 to over 400 internationally. We are a global team who can always be trusted, driven to make big and bold moves to transform a traditional industry. To learn more, please visit www.nivoda.net
We offer:
A dynamic working environment in an extremely fast-growing company
An international team and working environment
A pleasant environment with very little hierarchy
Intellectually challenging work and a massive role in Nivoda's success and scalability
Flexible working hours
We are seeking a talented Data/Analytical Engineer with experience in software development/programming and a passion for building data-driven solutions. You stay ahead of trends and work at the forefront of AWS/Snowflake, dbt, data lake and data warehouse technologies.
The ideal candidate thrives working with large volumes of data, enjoys the challenge of highly complex technical contexts, and is passionate about data and analytics. The candidate is an expert in data modeling, ETL design and cloud/big-data technologies.
The candidate is expected to have strong experience with all standard data warehousing/data lake technical components (e.g. ETL, reporting, and data modeling), infrastructure (hardware and software) and their integration.
Key job responsibilities:
Implement ETL/ELT pipelines within and outside of the data warehouse using Python, PySpark and Snowflake's SnowSQL (a minimal sketch follows this list).
Support the migration of the Redshift data warehouse (DWH) to Snowflake.
Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, Snowflake, Glue/Lake Formation, EMR/Spark/Scala, etc.
Work with data analysts to scale value-creating capabilities, including data integrations and transformations, model features, and statistical and machine learning models.
Work with Product Managers, Finance, Service Engineering Teams and Sales Teams on a day-to-day basis to support their new analytics requirements.
Implement and uphold data quality and data governance practices, including data profiling and data validation procedures, to maintain quality, integrity, and security throughout the data lifecycle.
Leverage open-source technologies to build robust and cost-effective data solutions.
Develop and maintain streaming pipelines using technologies such as Apache Kafka.
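To give a flavour of the pipeline work described in the first responsibility above, here is a minimal, illustrative PySpark sketch that reads raw data from a data lake, applies a light transformation, and loads the result into Snowflake via the Snowflake Spark connector. All paths, table names, credentials and connection options are placeholders, not Nivoda's actual setup.

```python
# Minimal batch ELT sketch: read raw JSON from a data lake, transform lightly,
# and append into a Snowflake staging table. All values below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Read semi-structured source data landed in the lake (placeholder path).
raw = spark.read.json("s3://example-data-lake/raw/orders/")

# Light transformation: cast types and add a load timestamp.
orders = (
    raw.withColumn("order_amount", F.col("order_amount").cast("decimal(18,2)"))
       .withColumn("loaded_at", F.current_timestamp())
)

# Connection options for the Snowflake Spark connector (placeholder values).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}

# Append the transformed batch into a staging table in Snowflake.
(
    orders.write.format("net.snowflake.spark.snowflake")
          .options(**sf_options)
          .option("dbtable", "STG_ORDERS")
          .mode("append")
          .save()
)
```

In practice a job like this would be parameterised (dates, paths, target tables) and scheduled by an orchestrator rather than run ad hoc.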
Skills and Qualifications:
A total of 5+ years of IT experience, with 3+ years' experience in data integration, ETL/ELT development, and database or data warehouse design
Broad expertise and experience with distributed systems, streaming systems, and data engineering tools such as Kubernetes, Kafka, Airflow and Dagster
Experience with data transformation and ETL/ELT tools and technologies such as AWS Glue and dbt for transforming structured, semi-structured and unstructured datasets
Experience ingesting and integrating data from API/JDBC/CDC sources
Deep knowledge of Python, SQL, relational/non-relational database design, and master data strategies.
Experience defining, architecting, and rolling out data products, including ownership of data products through their entire lifecycle.
Deep understanding of star and snowflake dimensional modeling. Experience with relational databases, including SQL queries, database definition, and schema design.
Experience with data warehouses, distributed data platforms, and data lakes.
Strong proficiency in SQL and at least one programming language (e.g., Python, Scala, JS).
Familiarity with data orchestration tools such as Apache Airflow, and the ability to design and manage complex data workflows (see the illustrative DAG sketch after this list).
Familiarity with agile methodologies, sprint planning, and retrospectives.
Proficiency with version control systems such as Git/Bitbucket.
Ability to work in a fast-paced startup environment and adapt to changing requirements across several concurrent projects.
Excellent verbal and written communication skills.
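As a sketch of the orchestration skills referenced above, the snippet below shows a small Apache Airflow DAG (assuming Airflow 2.4+) that chains placeholder extract, transform and load steps on a daily schedule. The task bodies, IDs and DAG name are illustrative only.

```python
# Illustrative Airflow DAG: a daily extract -> transform -> load chain.
# Task logic is stubbed out; names and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull a day's worth of records from an API or JDBC source.
    print("extracting source data")


def transform():
    # Placeholder: clean and model the extracted records.
    print("transforming records")


def load():
    # Placeholder: load the modeled data into the warehouse.
    print("loading into the warehouse")


with DAG(
    dag_id="example_daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Dependency chain: extract runs before transform, transform before load.
    extract_task >> transform_task >> load_task
```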
Preferred/bonus skills:
Redshift to Snowflake migration experience.
Experience with DevOps technologies such as Terraform, CloudFormation, and Kubernetes.
Experience or knowledge of machine learning techniques is highly preferable and enriches our data engineering capabilities.
Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases).