Data Architect - Data, Analytics & AI (Quality)
Job in Indianapolis, Indiana, 46262, USA
Listed on 2026-01-05
Listing for: Eli Lilly and Company
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Data Analyst
Job Description & How to Apply Below
Location: Indianapolis
Data Architect - Data, Analytics & AI (Quality)
Join Eli Lilly and Company as a Data Architect – Data, Analytics & AI (Quality) in Indianapolis, IN. This role leads the digital transformation of the global Quality function, enabling data‑driven quality decisions, AI‑enabled compliance monitoring, and shaping the data product strategy for quality assurance, control, and inspections.
You will lead a multidisciplinary team of analytics professionals, data scientists, and engineers to build trusted, intelligent quality data products that deliver real‑time insights, inspection readiness, and operational excellence while meeting GxP, data integrity, and compliance standards.
Responsibilities:
- Define, design, build, and maintain the data ingestion and integration vision, strategy, and principles for the Data, Analytics & AI (Quality) function.
- Develop automated data pipelines and support continuous integration and delivery of data products.
- Lead data ingestion and integration processes, sourcing data from internal and external systems and ensuring seamless data flow and scalability.
- Design, develop, and maintain scalable and efficient data pipelines using tools such as Apache Airflow, Apache Kafka, and AWS Glue.
- Implement best practices for data ingestion, integration, transformation, and storage, ensuring data quality, reliability, and accessibility.
- Partner with Lilly architects, software vendors, and third‑party implementation partners to develop and execute a technical strategy for Quality data structures and products.
- Work with business stakeholders to identify future data use cases and anticipated business outcomes, and enable processes to support these needs.
- Provide expertise in enterprise data domains, including data relationships, data quality, and business data requirements.
- Lead and mentor a team of data engineers, fostering a culture of collaboration, innovation, and excellence.
- Establish and enforce data engineering standards, policies, and procedures to ensure data quality, consistency, and compliance.
- Implement monitoring and alerting mechanisms to proactively identify and resolve data pipeline issues.
- Collaborate with data governance and security teams to enforce data privacy and security measures across the data lifecycle.
- Lead support and enhancement of solutions from a persistent pod standpoint, including operations metrics and model drift monitoring.
- Collaborate with manufacturing and quality leadership teams to align on operational needs and quality standards.
- Engage vendors and partners to leverage external expertise and technologies, ensuring alignment with organizational goals and standards.
- Mentor junior resources and provide guidance and support in technical skill development and project execution.
Qualifications:
- Demonstrated communication, leadership, teamwork, project delivery, and problem‑solving skills.
- Experience in architectural processes (blueprinting, reference architecture, governance).
- Knowledge of external data standards such as HL7, CDISC, SDTM, MedDRA, RxNorm, SNOMED.
- Skills in data modeling, data warehousing, data integration, data governance, data security, and cloud architecture principles.
- Strong learning agility and relationship building to influence change.
- Successful record of high‑quality, user‑focused, on‑time & on‑budget IT service and project delivery.
- Experience with formal project management methodologies and agile frameworks (Scrum, Kanban, SAFe).
- Excellent analytical, problem‑solving, and communication skills across diverse teams.
- Experience designing large‑scale data models for functional, operational, and analytical environments.
- Proficiency with data modeling tools (Erwin, ER/Studio, Lucidchart).
- Experience with statistical methods, ontology development, semantic graph construction, relational schema design.
- Strong proficiency in SQL/PL/SQL and data modeling.
- Experience with AWS, Azure, other cloud technologies, and cloud‑based data solutions (Postgres, Redshift, Aurora, Athena).
- Experience with CI/CD, Jenkins, GitHub, and automation platforms.
- Ability to review and recommend design patterns, performance considerations, and deployment strategies.
- Knowledgeable in data governance,…