Experience: 10-15 years
Location: Bangalore
Key Responsibilities:
Design cloud-native data architectures supporting scalable batch, real-time, and streaming data processing.
Develop and maintain data pipelines, ETL designs, and data models for structured, unstructured, and streaming sources.
Implement data governance frameworks, including data quality, lineage, cataloguing, metadata management, and access controls.
Ensure secure data solutions compliant with regulations such as GDPR, HIPAA, and CCPA.
Leverage cloud data services (e.g., Azure Data Lake/Synapse/Data Factory, AWS S3/Redshift/Kinesis, Google BigQuery/Dataflow) and big data tools.
Work with real-time streaming technologies (Kafka, Kinesis, Event Hubs) and orchestration tools (Airflow, Oozie).
Hands-on contribution to database design (SQL/NoSQL), data warehousing/modelling (on-prem and cloud), and distributed computing.
Required Skills & Experience:
10+ years in data architecture and strong expertise in data structures, distributed systems, and handling high-volume, complex data from diverse sources.
Proficiency in Python.
Hands-on with data integration/ETL tools and big data ecosystems.
Solid understanding of SDLC and databases (SQL Server, Snowflake, Cosmos DB, DynamoDB).