This job is about joining a dynamic data team as a Data Engineer, where you will leverage your Databricks expertise to design, build, and optimize large-scale data pipelines. Your work will ensure high data quality and support analytics initiatives, making a significant impact on the organization’s data-driven decisions. The team values collaboration and innovation, working closely with data analysts, data scientists, and product teams to deliver reliable datasets.
You'll be responsible for
🔧 Developing and maintaining ETL/ELT pipelines
Develop and maintain scalable ETL/ELT pipelines using Databricks (Spark, Delta Lake); a short sketch of this kind of pipeline appears after this list.
🏗️ Building and optimizing data lakehouse architectures
Build and optimize data lakehouse architectures for batch and streaming workloads.
🤝 Collaborating with cross-functional teams
Collaborate with data analysts, data scientists, and product teams to deliver reliable datasets for reporting and ML models.
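To give a concrete flavor of the pipeline work described above, here is a minimal PySpark sketch of an ETL job that lands cleaned data in a Delta Lake table. It is an illustration only: the landing path, table name, and column names are hypothetical, not part of the role description.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` is predefined; building one
# here keeps the sketch self-contained for local runs as well.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw landing-zone files (hypothetical path).
raw = spark.read.json("/mnt/landing/orders/")

# Transform: deduplicate, enforce timestamp typing, drop invalid rows.
# Column names (order_id, order_ts, amount) are placeholders.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: append to a Delta Lake table that reporting and ML workloads can query.
clean.write.format("delta").mode("append").saveAsTable("analytics.orders")
```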
Skills you'll need
📊 Databricks
Hands-on experience with Databricks, Apache Spark, and Delta Lake is essential for this role.
🐍 SQL and Python
Strong SQL and Python skills are required to manage and manipulate data effectively.
☁️ Cloud platforms
Familiarity with cloud platforms such as Azure, AWS, or GCP is necessary for supporting the underlying infrastructure.