Data Engineer / Data Visualisation Lead

Job Category: Engineering/IT
Job Type: Full Time
Job Location: Mumbai/Navi Mumbai
Skill: DBA, Schema Designing & Dimensional Modelling, SQL Optimisation
Experience: 3+ Years
Date: 01-12-2025

Only candidates from Mumbai/Navi Mumbai region will be considered


Job Requirements

Responsibilities

  • Develop and maintain scalable data pipelines and build new integrations using AWS native technologies to support increasing data sources, volumes, and complexity.
  • Collaborate with analytics and business teams to improve data models that enhance business intelligence tools and dashboards, fostering data-driven decision-making across the organization.
  • Implement processes and systems to ensure data reconciliation, monitor data quality, and ensure production data is accurate and available for key stakeholders, downstream systems, and business processes.
  • Write unit, integration, and performance test scripts, contribute to engineering documentation, and maintain the engineering wiki.
  • Perform data analysis to troubleshoot and resolve data-related issues.
  • Work closely with frontend and backend engineers, product managers, and analysts to deliver integrated data solutions.
  • Collaborate with enterprise teams, including Enterprise Architecture, Security, and Enterprise Data Backbone Engineering, to design and develop data integration patterns and models supporting various analytics use cases. Partner with DevOps and the Cloud Center of Excellence to deploy data pipeline solutions in Takeda AWS environments, meeting security and performance standards.

Education and Experience

  • Bachelor’s Degree from an accredited institution in Engineering, Computer Science, or a related field.
  • 3+ years of experience in data engineering, software development, data warehousing, data lakes, and analytics reporting. Strong expertise in data integration, data modelling, and modern database and cloud technologies (for example, Azure or AWS).

Key Skills and Competencies

  • Extensive experience in database administration (DBA), schema design, dimensional modelling, and SQL optimisation.
  • Programming experience in Python or other languages.
  • Working proficiency with PySpark (Databricks platform preferred).
  • Expertise in Power Apps and Power Automate.
  • Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams.
  • Understanding of good engineering practices (DevSecOps, source-code versioning, …).
  • Preferred – Experience with streaming technologies such as Spark Streaming or Kafka.
  • Preferred – Infrastructure as Code (IaC) experience, preferably with Terraform.
  • Preferred – Experience designing and developing API data integrations using SOAP / REST.

Apply for this position

Allowed Type(s): .pdf, .doc, .docx