Big Data Architect - Databricks, Spark, AWS
Listed on 2026-02-19
IT/Tech
Data Engineer, Cloud Computing
We are currently hiring a Big Data Architect for a hybrid role in Atlanta, GA.
Responsibilities

Data Architecture & Engineering
- Design and implement a medallion architecture (raw, silver, and gold layers) to enable efficient data ingestion, processing, and quality management.
- Develop standardized ETL and streaming pipelines using Databricks, Apache Spark, and Apache Airflow, ensuring low-latency data processing.
- Define and enforce data quality and observability frameworks, integrating dashboards and monitoring tools to maintain high data integrity.
- Optimize data pipeline performance and infrastructure costs, identifying bottlenecks and areas for improvement.
- Lead technical discovery and ongoing development, assessing current systems, identifying pain points, and defining the target-state architecture.
- Provide technical recommendations and a roadmap for implementation, ensuring best practices in data engineering and architecture.
- Guide the selection and implementation of cloud-based data platforms to support scalability, efficiency, and future growth.
- Ensure compliance with security, governance, and regulatory requirements in data handling and processing.
- Act as the technical point of contact between engineering teams, business stakeholders, and management.
- Work closely with team members to ensure smooth collaboration and knowledge transfer.
- Translate business requirements into technical solutions, ensuring alignment between data engineering practices and business objectives.
- Define best practices, coding standards, and development workflows for data engineering teams.
- Ensure a smooth transition from discovery to implementation, providing hands‑on guidance and technical oversight.
- Participate in planning and work closely with the Delivery Manager to manage timelines and priorities across related programs.
- Monitor and troubleshoot data pipeline performance, ensuring high availability and reliability of data systems.
Requirements
- Cloud provider: AWS
- Programming language: Python
- Frameworks and technologies: AWS Glue, Apache Spark, Apache Kafka, Apache Airflow
- Experience working with on-premises systems is a plus
- Experience with Databricks is a must
- Bachelor’s/Master’s degree in Computer Science/Engineering or a related field.
We offer:
- Opportunity to work on cutting-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package – medical insurance, vision, dental, etc.
- Corporate social events
- Professional development opportunities
- Well‑equipped office
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience.
Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.