Job Description
Swedium Global Services is looking for a Senior Data Engineer.
Job Title: Senior Databricks Data Engineer
Location: Noida / Delhi NCR / Pune
Domain: Investment Banking / Retail Banking
About the Role
We are looking for an experienced Senior Data Engineer with strong expertise in Databricks, data engineering, cloud migration, and GenAI automation. The ideal candidate will have 10+ years of experience in data engineering, hands-on skills in Databricks on AWS or Azure (ADF), PySpark, and Python, and a solid understanding of banking domain processes. This role involves collaborating closely with stakeholders, delivering scalable data solutions, and leveraging GenAI and LLMs for advanced automation and insights.
Job Profile:
• Hands-on experience with Databricks on AWS, PySpark, and Python
• Prior experience migrating data assets to the cloud using GenAI-based automation is a must.
• Experience migrating data from on-premises systems to AWS/Azure
• Expertise in developing data models and delivering data-driven insights for business solutions.
• Experience in pretraining, fine-tuning, augmenting, and optimizing large language models (LLMs)
• Experience in designing and implementing database solutions and developing PySpark applications to extract, transform, and aggregate data and generate insights (a minimal PySpark sketch follows this list).
• Data Collection & Integration: Identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets, ensuring data integrity and relevance.
• Data Cleaning & Transformation: Apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets (a Pandas sketch also follows this list).
• Automation & Scalability: Develop and maintain scripts that automate repetitive data preparation tasks.
• Autonomy & Proactivity: Operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work.
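
By way of illustration, the extract/transform/aggregate work referenced in the profile above might look like the minimal PySpark sketch below; the table, desk, and column names are hypothetical and indicate the expected style of work only.

    # Minimal PySpark extract/transform/aggregate sketch.
    # Table and column names (raw.trades, status, notional, desk, ...) are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("trade-aggregation").getOrCreate()

    # Extract: read a table registered in the workspace catalog.
    trades = spark.read.table("raw.trades")

    # Transform: keep settled trades and derive a calendar-month column.
    settled = (
        trades
        .filter(F.col("status") == "SETTLED")
        .withColumn("trade_month", F.date_trunc("month", F.col("trade_date")))
    )

    # Aggregate: total and average notional per desk per month.
    summary = (
        settled.groupBy("desk", "trade_month")
        .agg(
            F.sum("notional").alias("total_notional"),
            F.avg("notional").alias("avg_notional"),
            F.count("*").alias("trade_count"),
        )
    )

    # Load: persist the aggregated view for downstream reporting.
    summary.write.format("delta").mode("overwrite").saveAsTable("curated.trade_summary")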
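Likewise, the data cleaning and transformation described above might resemble the following minimal Pandas sketch (file and column names again hypothetical):

    # Minimal Pandas cleaning/transformation sketch; file and column names are hypothetical.
    import pandas as pd

    # Collect: load a raw extract (e.g. a spreadsheet exported from an internal system).
    raw = pd.read_csv("customer_balances.csv")

    # Clean: normalise column names, remove duplicates, and drop rows missing key fields.
    raw.columns = raw.columns.str.strip().str.lower().str.replace(" ", "_")
    clean = raw.drop_duplicates(subset=["account_id"]).dropna(subset=["account_id", "balance"])

    # Transform: coerce types and flag suspect records rather than silently discarding them.
    clean["balance"] = pd.to_numeric(clean["balance"], errors="coerce")
    clean["opened_on"] = pd.to_datetime(clean["opened_on"], errors="coerce")
    clean["is_suspect"] = clean["balance"].lt(0) | clean["opened_on"].isna()

    # Output a prepared dataset ready to load into SQL / Delta for analysis.
    clean.to_parquet("customer_balances_clean.parquet", index=False)
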
Technical Skills:
• 10+ years of experience in Data Engineering or related roles.
• Hands-on experience with Databricks on AWS or Azure (ADF), PySpark, and Python.
• Strong proficiency in PySpark, Python (Pandas, Scikit-learn, Matplotlib) and SQL.
• Expertise in cloud migration and GenAI automation.
• Experience with NoSQL databases (MongoDB) and REST API development/integration.
• Proven ability to automate workflows and implement best practices.
Behavioral Skills:
• Ability to manage under tight constraints; very proactive with risk and issue management.
• Requirement Clarification & Communication: Interact directly with colleagues to clarify objectives and challenge assumptions.
• Documentation & Best Practices: Maintain clear, concise documentation of data workflows, coding standards, and analytical methodologies to support knowledge transfer and scalability.
• Collaboration & Stakeholder Engagement: Work closely with colleagues who provide data; raise questions about data validity, share insights, and co-create solutions that address evolving needs.
• Excellent communication skills for engaging with colleagues, clarifying requirements, and conveying analytical results in a meaningful, non-technical manner.
• Demonstrated critical thinking skills, including the willingness to question assumptions, evaluate data quality, and recommend alternative approaches when necessary.
• A self-directed, resourceful problem-solver who collaborates well with others while confidently managing tasks and priorities independently.
Preferred Qualifications:
• Bachelor’s degree or higher in Computer Science, Data Engineering, or related field.
• Experience in Investment Banking or Retail Banking domain.