
Job Description
Key responsibilities:
• Design, build, and maintain robust data pipelines to curate and ingest data efficiently.
• Work with Azure cloud services to deploy and manage scalable data solutions.
• Implement and maintain CI/CD pipelines for seamless data workflows.
• Collaborate with cross-functional teams to ensure data integrity, security, and accessibility.
Key skills & experience:
• Strong SQL skills.
• Strong programming skills in Python (PySpark).
• Hands-on experience with Azure data services (Azure Data Factory, Databricks, Synapse, Azure Data Lake Storage, etc.).
• Experience with CI/CD pipelines for release management.
• Experience in database management.
• Knowledge of best practices in data ingestion, transformation, and curation (dimensional modelling and the Medallion architecture); an illustrative sketch follows this list.
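For illustration only (not part of the formal requirements), below is a minimal PySpark sketch of a Medallion-style bronze-to-silver step of the kind this role involves; the storage paths and column names (policy_id, premium, ingest_date) are hypothetical examples, not details taken from this posting.

```python
# Minimal, illustrative PySpark sketch of a Medallion-style bronze -> silver step.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze layer: raw data ingested as-is from a (hypothetical) data lake landing path.
bronze = (
    spark.read.option("header", "true")
    .csv("/mnt/datalake/bronze/policies/")
)

# Silver layer: typed, de-duplicated, validated records ready for curation.
silver = (
    bronze
    .withColumn("premium", F.col("premium").cast("double"))
    .withColumn("ingest_date", F.to_date("ingest_date"))
    .dropDuplicates(["policy_id"])
    .filter(F.col("premium").isNotNull())
)

# Persist in a columnar format (Delta is typical on Databricks; Parquet shown for portability).
silver.write.mode("overwrite").parquet("/mnt/datalake/silver/policies/")
```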
Nice to have:
• Insurance domain experience
• Microsoft Fabric experience
• Microsoft Purview experience
• Power BI experience
Qualifications
A full-time degree in any discipline.
Industries: Financial Services, Insurance, Management Consulting
