Databricks Data Engineer

Job Summary

We are looking for a skilled Data Engineer with 5+ years of experience in Databricks, PySpark, Python, and AWS. The role involves building scalable data pipelines, migrating data assets to the cloud, and delivering insights that drive business decisions. Strong communication skills and proactive problem-solving are essential.

Key Responsibilities

  • Design and implement data pipelines using Databricks, PySpark, and Python.

  • Migrate data assets from on-premises systems to AWS, automating migration tasks with GenAI tools.

  • Develop and optimize data models to support analytics and reporting.

  • Perform data integration, cleaning, and transformation using Python (Pandas) and SQL.

  • Automate workflows to improve scalability and efficiency.

  • Collaborate with stakeholders to clarify requirements and deliver actionable insights.

Required Experience

  • 5+ years of experience as a Data Engineer / Data Analyst.

  • Strong expertise in Databricks on AWS, PySpark, Python, and SQL.

  • Hands-on experience with ETL tools (Informatica preferred).

  • Experience in cloud migration projects and automation.

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field.

Skills & Attributes

  • Proficiency in Python (Pandas, Scikit-learn, Matplotlib) and SQL.

  • Strong problem-solving and critical thinking skills.

  • Excellent communication and stakeholder engagement abilities.

  • Self-directed, proactive, and able to manage tasks independently.

 

Job Overview

  • Location: Pune, Gurugram, Noida (India)
  • Vacancies: 1
  • Key Skills: Pandas, Scikit-learn, Matplotlib, cloud migration, SQL, ETL tools, Informatica, Python, Databricks on AWS, PySpark