WE ARE
Opinov8 is a technology service provider with an Opinion in Innovation. We engage with our clients at any stage of their product engineering and innovation goals, with solutions that are collaborative and outcome-driven. We are driven to deliver value for our clients through technological innovation.
We are a young, energetic, and agile company, and the team behind Opinov8 are software industry veterans. We are ambitious on our own and on our clients' behalf, and we do not look at a client engagement as just a project: we are in it to help our clients build sustainable products, platforms, and businesses.
Our core values and what we look for in our Opinov8rs:
You are Collaborative: a key component of success in distributed environments.
You are Innovative: innovation is a requirement for us, as individuals and as a company.
Be Adventurous: courageous people are adventurous, and we hire and nurture this quality.
Always Respectful: remember, if you don't respect others, they will not respect you.
Inherently Intelligent: it goes without saying that we are all intelligent in our own way.
Ultimately Responsible: responsibility is a core trait needed for our and our clients' success and, ultimately, for a happy work environment.
WE BUILD
With a multi-dimensional view of healthcare data, our client carries out comprehensive analyses to deliver valuable market insights that enable customers to decide on the best course of action for their business:
-increase market share and stay ahead of their competition;
-empower the sales team, solve market access challenges, and reach their peers and stakeholders.
We are looking for a Senior Data Engineer with extensive hands-on experience in designing and implementing data pipelines. Ideally, the candidate knows the modern data stack from both the engineering and analytics sides.
Besides being responsible for building data pipelines, the Senior Data Engineer is expected to assume the following responsibilities:
-Model datasets both for internal and external consumption
-Establish and maintain the data documentation to ensure that business and data analysts use the same definitions and language
-Define data quality metrics and standards
-Set software engineering best practices for analytics (DataOps)
-Collaborate closely with other team members and business stakeholders
Good knowledge of and experience with the modern data stack:
-Ingestion (we use Kafka)
-Storage (we use Delta Lake)
-Data transformation (any; dbt is preferable)
-Data governance, data catalog, data quality (any)
-BI and headless BI (any)
Expected skills:
-Spark
-Python or Scala
-SQL
Good additional skills and experience:
-Public cloud (Azure, AWS, etc.)
-NoSQL databases (key-value, document, graph, etc.)
-Databricks
-Terraform
-Python or Scala beyond data engineering tasks (able to build services)
-Orchestration tools (Airflow, Luigi, Argo, etc.)
-Docker
-TDD
Would be a plus:
-Experience building GraphQL or REST APIs using OData
-DAPR
-Observability
-OpenTelemetry
Project tech stack:
-Microservices (containerized microservices running in AKS)
-Near-real-time (NRT) streaming architecture using Kafka/Event Hubs
-MongoDB, Postgres
-C# / .NET 6
-Angular.js
-Azure AD and OAuth2, MSAL
-Event-Driven Design
-Apache Spark / Azure Databricks / Azure Data Lake / Azure Synapse / Azure Data Factory
-PowerShell / Bash
-TDD/BDD with Specflow
-CI/CD
-IaC