Data Architect - Data, Analytics & AI; Quality

Job in Indianapolis, Marion County, Indiana, 46262, USA
Listing for: BioSpace
Full Time position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Salary/Wage Range or Industry Benchmark: USD 125,000 – 150,000 per year
Job Description & How to Apply Below
Position: Data Architect - Data, Analytics & AI (Quality)
Location: Indianapolis

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life‑changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first.

We’re looking for people who are determined to make life better for people around the world.

What You’ll Be Doing
  • Partner with Lilly architects, software vendors, and third‑party implementation partners to develop and execute a technical strategy for Quality data structures and data products.
  • Work with the business to identify future uses for Lilly data and the anticipated business results, and enable processes to support these needs.
  • Develop and provide expertise in enterprise data domains, including data relationships, data quality, business data needs, and the associated technology toolsets and methodologies.
Leadership And Strategy
  • Lead and mentor a team of data engineers, fostering a culture of collaboration, innovation, and excellence.
  • Define and drive the data engineering strategy, aligning with business objectives and technical requirements.
  • Collaborate with cross‑functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and translate them into actionable engineering solutions.
Data Pipeline Development
  • Design, develop, and maintain scalable and efficient data pipelines to support data analytics, reporting, and machine learning initiatives.
  • Implement best practices for data ingestion, integration, transformation, and storage, ensuring data quality, reliability, and accessibility.
  • Automate data pipeline processes to improve efficiency and reduce manual intervention, leveraging tools and frameworks such as Apache Airflow, Apache Kafka, and AWS Glue.
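The ingest → transform → load pattern described above can be sketched in plain Python. This is an illustrative example only, not Lilly's actual pipeline: the field names (`batch_id`, `result`) and functions are hypothetical, and in practice the stages would be scheduled by an orchestrator such as Apache Airflow rather than chained directly.

```python
def extract(source_rows):
    """Ingest raw records from a (hypothetical) source system."""
    return list(source_rows)

def transform(rows):
    """Standardize fields and drop records that fail basic quality checks."""
    cleaned = []
    for r in rows:
        if r.get("batch_id") is None:  # quality gate: reject incomplete records
            continue
        cleaned.append({
            "batch_id": str(r["batch_id"]).strip(),
            "result": float(r["result"]),
        })
    return cleaned

def load(rows, sink):
    """Append validated records to the target store (here, just a list)."""
    sink.extend(rows)
    return len(rows)

def run_pipeline(source_rows, sink):
    """Wire the three stages into one pipeline run; returns rows loaded."""
    return load(transform(extract(source_rows)), sink)
```

Keeping each stage a separate function is what makes the pipeline easy to automate: an orchestrator can schedule, retry, and monitor each stage independently.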
Data Ingestion & Integration
  • Lead the development of data ingestion and integration processes, sourcing data from various internal and external sources.
  • Collaborate with stakeholders to define data ingestion requirements and implement solutions for real‑time and batch data integration.
  • Ensure seamless data flow between systems and applications, optimizing data transfer and transformation processes for performance and scalability.
Technical Expertise
  • Stay abreast of emerging technologies and trends in data engineering, continuously evaluating and adopting new tools and techniques to enhance our data infrastructure.
  • Provide technical leadership and guidance on data engineering best practices, coding standards, and performance optimization techniques.
  • Hands‑on involvement in data engineering tasks, including coding, debugging, and troubleshooting complex data pipeline issues.
Quality Assurance & Governance
  • Establish and enforce data engineering standards, policies, and procedures to ensure data quality, consistency, and compliance.
  • Implement monitoring and alerting mechanisms to proactively identify and address data pipeline issues, ensuring minimal disruption to business operations.
  • Collaborate with data governance and security teams to enforce data privacy and security measures across the data lifecycle.
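A monitoring-and-alerting mechanism of the kind described above can be as simple as a health check run after each pipeline execution. The sketch below is a hypothetical minimal version (the thresholds and alert messages are illustrative, not from the posting):

```python
from datetime import datetime, timedelta, timezone

def check_pipeline_health(last_run, row_count, *,
                          max_staleness_hours=24, min_rows=1):
    """Return a list of alert messages; an empty list means healthy.

    last_run:  timezone-aware datetime of the pipeline's last successful run
    row_count: number of rows produced by that run
    """
    alerts = []
    if datetime.now(timezone.utc) - last_run > timedelta(hours=max_staleness_hours):
        alerts.append("data is stale")
    if row_count < min_rows:
        alerts.append("row count below threshold")
    return alerts
```

In a real deployment these checks would feed an alerting system (e.g. paging or email) so issues are caught before they disrupt downstream business operations.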
Persistent Pod
  • Lead support and enhancement of solutions from a persistent‑pod standpoint.
  • Create operations metrics, define parameters for data effectiveness, measure data drift, and drive operational stability of models.
  • Work with the persistent pod, including vendors, to resolve outages and issues.
  • Work closely with manufacturing and quality leadership teams to update and align on operational needs and quality standards.
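One common way to measure the data drift mentioned above is the Population Stability Index (PSI), which compares a feature's current distribution against a reference distribution. The posting does not name a specific metric, so this is one illustrative choice, sketched from scratch:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the `expected` (reference) sample; larger
    values indicate more drift (a common rule of thumb: > 0.25 means
    significant drift, though thresholds are use-case specific).
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An operations metric like this can be computed on every model-input refresh and alerted on when it crosses a threshold, giving a concrete handle on "operational stability of models".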
Cross‑Functional Collaboration
  • Collaborate effectively with cross‑functional teams, including data scientists, engineers, analysts, and business stakeholders, to deliver integrated solutions that meet business requirements.
  • Serve as a trusted advisor to senior leadership, providing insights and recommendations on technology trends, industry best practices, and strategic opportunities in data and…