
Senior Data Engineer

Job in Tempe, Maricopa County, Arizona, 85285, USA
Listing for: Caris Life Sciences, Ltd.
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 60,000 - 80,000 USD yearly
Job Description & How to Apply Below
Locations: Tempe, AZ - 85281
Time type: Full time
Posted: Yesterday
Job requisition: JR104507
At Caris, we understand that cancer is an ugly word, a word no one wants to hear, but one that connects us all. That’s why we’re not just transforming cancer care; we’re changing lives.

We introduced precision medicine to the world and built an industry around the idea that every patient deserves answers as unique as their DNA. Backed by cutting-edge molecular science and AI, we ask ourselves the same question every day, and that question drives everything we do. But our mission doesn’t stop with cancer. We’re pushing the frontiers of medicine and leading a revolution in healthcare, driven by innovation, compassion, and purpose.

Join us in our mission to improve the human condition across multiple diseases. If you’re passionate about meaningful work and want to be part of something bigger than yourself, Caris is where your impact begins.
Position Summary

The Senior Data Engineer will support our precision medicine and biomarker discovery initiatives. This role is responsible for designing, building, and maintaining scalable, cloud-native data platforms and pipelines that support analytics, machine learning, and computational biology workflows across structured and unstructured, multi-modal datasets. The role calls for strong software engineering and data architecture expertise, deep experience with AWS cloud services, and a collaborative mindset to partner closely with data scientists, computational biologists, and R&D stakeholders.
Job Responsibilities

* Design, build, and maintain scalable, reliable, and secure data pipelines for ingesting, transforming, storing, and serving large, multi-source and multi-omics datasets.
* Architect and implement cloud-native data solutions on AWS to support analytics workflows, machine learning pipelines, and scientific research (a minimal pipeline sketch follows this list).
* Develop and maintain automation frameworks for data ingestion, processing, validation, and delivery.
* Build and deploy APIs, services, and data access layers to enable analytics and machine-learning solutions at scale.
* Develop and deploy applications and workflows in cloud and/or HPC environments, adhering to industry best practices for system architecture, CI/CD, testing, and software design.
* Partner closely with data scientists, computational biologists, and R&D scientists to design and evolve shared analytics platforms.
* Optimize data systems for performance, cost efficiency, scalability, and reliability.
* Ensure data quality, observability, and lineage across pipelines and platforms.
* Adhere to coding, documentation, security, and compliance standards; manage technical deliverables for assigned projects.
* Provide general informatics and platform support for laboratory research, technology development, and clinical studies.
* Contribute to architectural decisions and mentor junior engineers as appropriate.
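
To make the pipeline work above concrete, here is a minimal, hypothetical sketch of one event-driven ingestion step: an AWS Lambda handler (Python + boto3) that reads a newly landed JSON record from S3, runs a basic validation pass, and republishes it to a curated bucket. The bucket name, event shape, and `record_is_valid` check are illustrative assumptions, not a description of Caris systems.

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical destination; a real pipeline would read this from config or IaC outputs.
CURATED_BUCKET = "example-curated-bucket"


def record_is_valid(record: dict) -> bool:
    """Toy validation: require a sample ID and a non-empty payload."""
    return bool(record.get("sample_id")) and bool(record.get("payload"))


def handler(event, context):
    """Triggered by an S3 'ObjectCreated' event; validates and republishes each record."""
    results = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]

        # Fetch the raw object that landed in the ingestion bucket.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        record = json.loads(body)

        if not record_is_valid(record):
            # In production this record would route to a dead-letter queue with lineage metadata.
            results.append({"key": key, "status": "rejected"})
            continue

        # Write the validated record to the curated zone, keyed by sample ID.
        out_key = f"curated/{record['sample_id']}.json"
        s3.put_object(Bucket=CURATED_BUCKET, Key=out_key, Body=json.dumps(record))
        results.append({"key": out_key, "status": "ok"})

    return {"processed": results}
```

In a real deployment the handler would also be kept idempotent, since S3 event notifications are delivered at least once and a retry must not duplicate curated output.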
Required Qualifications

* Ph.D. in Computer Science, Engineering, or a related technical field (or equivalent practical experience).
* 5+ years of professional experience in data engineering, platform engineering, or backend software engineering roles.
* Strong proficiency in Python and experience building production-grade data pipelines and services.
* Extensive experience designing and operating data platforms on AWS, including EC2, S3, DynamoDB, EKS/ECS, Lambda, Glue, Athena, and related services (see the Athena sketch after this list).
* Experience with Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, or CDK.
* Expertise in designing, implementing, and maintaining relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB).
* Extensive experience with containerization and orchestration technologies.
* Strong proficiency with Linux and command-line–based workflows.
* Familiarity with modern data platform concepts, including data lakes, lakehouses, streaming, and batch processing architectures.
* Experience applying best practices in DevOps, DataOps, and/or MLOps, including CI/CD, monitoring, and automated testing.
* Strong communication skills and the…
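
As a hedged illustration of the AWS analytics stack named above, the sketch below starts an Athena query with boto3 and polls until it settles. The database name, the `results` table in the SQL, and the S3 output location are placeholders, not real resources.

```python
import time

import boto3

athena = boto3.client("athena")

# Hypothetical names; in practice these come from the Glue Data Catalog and IaC outputs.
DATABASE = "example_lake_db"
OUTPUT_LOCATION = "s3://example-athena-results/"


def run_athena_query(sql: str, timeout_s: int = 60) -> str:
    """Start an Athena query and block until it finishes, returning the execution ID."""
    execution_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )["QueryExecutionId"]

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        state = athena.get_query_execution(QueryExecutionId=execution_id)[
            "QueryExecution"
        ]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            if state != "SUCCEEDED":
                raise RuntimeError(f"Query ended in state {state}")
            return execution_id
        time.sleep(2)  # Athena is asynchronous; poll until the query settles.
    raise TimeoutError("Athena query did not finish in time")


if __name__ == "__main__":
    qid = run_athena_query("SELECT sample_id, COUNT(*) FROM results GROUP BY sample_id LIMIT 10")
    # Rows can then be fetched with athena.get_query_results(QueryExecutionId=qid).
    print("query execution:", qid)
```

Polling `get_query_execution` is the standard pattern because the Athena API is asynchronous; production code would typically add exponential backoff rather than a fixed two-second sleep.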
Position Requirements
10+ years work experience