Description:
We are seeking an innovative Scala Architect with a strong background in designing and building robust data engineering frameworks using Apache Spark. The ideal candidate will lead our efforts in architecting a scalable data processing platform, driving the strategic direction of our data infrastructure to support the bank's data analytics and business intelligence initiatives.
Key Responsibilities:
Architect and design scalable and high-performance data processing pipelines using Scala and Apache Spark.
Apply a solid foundation in software engineering, including Object-Oriented Design (OOD) and design patterns.
Work with Cloudera or Hortonworks Hadoop distributions, including HDFS, YARN, and Hive.
Provide technical leadership in big data engineering practices and contribute to the strategic planning of our data infrastructure.
Define architectural standards and frameworks, ensuring compliance and performance are met.
Mentor junior team members and lead by example in the development of best-in-class data solutions.
Collaborate with stakeholders to understand business needs and translate them into technical specifications.
Minimum Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of professional experience in Scala development.
Extensive experience with Apache Spark and building data engineering frameworks.
Proven track record of architecting and delivering large-scale data processing solutions.
Strong understanding of distributed systems and data architecture principles.
Preferred Qualifications:
Prior experience with AWS cloud services.
Familiarity with other Big Data tools (e.g., Hadoop, Kafka) and modern data storage systems.