
Analytics Engineer

Remote / Online - Candidates ideally in Louisville, Jefferson County, Kentucky, 40201, USA
Listing for: HANDLE Global
Full Time, Seasonal/Temporary, Remote/Work from Home position
Listed on 2025-12-29
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Position: Staff Analytics Engineer

Apply for this job and hear back from our recruiter in under 48 hours!

Louisville, Kentucky (Headquarters) - Remote position

Department: R&D

Type: Full-Time

Job Description

Posted on: December 22, 2025

Company Overview

HANDLE Global, a leading healthcare supply chain analytics and fulfillment solutions provider, is dedicated to revolutionizing the healthcare industry by providing cutting‑edge software solutions. Our innovative products help healthcare providers optimize operational efficiency, enhance patient care, and drive business growth.

Role Overview

“Easier done than said.” If this is you, keep reading. We need someone who can instantly grasp a problem and get things done, while also being an extremely effective communicator who can break down complex problems and ELI5! The person we’re looking for will assume ownership of business-critical data models, lead data discovery calls with customers and prospects, serve as lead developer for customer data ingestion pipelines, and be adept at seeing the big picture while working closely with others on both strategic and tactical initiatives.

What You'll Do
  • Own the Data Architecture: Take ownership of our data infrastructure and drive it forward. The foundation is built; now we need you to evolve it to support our next phase of growth. Set standards for our semantic layer, data quality frameworks, and modeling conventions. We know where we're headed; we need you to ensure our codebase is both flexible enough to handle new customer requirements and stable enough to maintain our momentum as we scale.
  • Build & Ship Production Pipelines: Take over hands-on delivery across our modern stack (PostgreSQL, Databricks, Dagster, dbt core, Fivetran/Airbyte). Write clean, maintainable SQL and Python that the team can build on. Own data quality so stakeholders can trust what they see.
  • Lead Customer Data Integration: Own and evolve the ELT pipelines that onboard every new SaaS customer using PySpark and our internal Data Hammer package to translate their unique business logic into our standardized schema. Own change data capture (CDC) processes for customer data—monitoring pipeline health, handling schema evolution, and troubleshooting data integration issues. Continuously enable new features that reduce customer time‑to‑value.

    These pipelines are business‑critical and already running. We need you to take them over, improve them, and scale them.
  • Keep Systems Reliable: Inherit ownership of our data operations. Navigate complex data issues across the entire stack, piecing together context as our platform and customer integrations evolve. Enhance our existing monitoring and alerting systems to surface problems before they impact customers. Own the daily health of customer data pipelines and coordinate with customer success, master data, and platform engineering teams when integration issues arise.
  • Bridge Technical & Business Worlds: Step into customer‑facing and stakeholder conversations. Turn vague business requirements into concrete technical solutions. Work directly with the data team, product, platform engineering, customer success, and customers themselves during technical discussions. Present findings to leadership with clarity and confidence.
  • Raise the Bar: Take our analytics engineering practices to the next level. Mentor teammates and champion data‑driven decision‑making across the organization. Help us maintain our execution velocity as the team and codebase grow.
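To make the customer-onboarding bullet above concrete, here is a minimal sketch of the kind of normalization step such an ingestion pipeline performs: mapping one customer's raw feed onto a standardized schema while tolerating schema drift. All field names and the mapping itself are illustrative assumptions, not the actual Data Hammer package or HANDLE Global's schema.

```python
# Hypothetical example of per-customer normalization into a standard schema.
# Schema and field names are invented for illustration.
STANDARD_SCHEMA = {"item_id": str, "facility": str, "qty_on_hand": int}

def normalize(record: dict, field_map: dict) -> dict:
    """Map a raw customer record onto the standard schema.

    Unknown source fields are ignored (tolerates new columns the customer
    adds later); missing target fields come through as None so downstream
    data-quality checks can flag them.
    """
    out = {}
    for target, caster in STANDARD_SCHEMA.items():
        source = field_map.get(target)
        if source is not None and source in record:
            out[target] = caster(record[source])
        else:
            out[target] = None  # surfaced by data-quality checks downstream
    return out

# Each onboarding supplies its own mapping from customer fields to ours.
acme_map = {"item_id": "sku", "facility": "site_code", "qty_on_hand": "on_hand"}

raw = {"sku": "A-100", "site_code": "LOU-1", "on_hand": "42", "color": "red"}
print(normalize(raw, acme_map))
# {'item_id': 'A-100', 'facility': 'LOU-1', 'qty_on_hand': 42}
```

In a real pipeline this mapping would run per-partition in PySpark rather than per-record in plain Python, but the design question is the same: unknown columns are dropped rather than failing the load, and missing columns are nulled so monitoring catches them instead of silently corrupting the standardized tables.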
Qualifications and Skills
  • Bachelor’s degree in Computer Science, Data Engineering, Analytics, or a related field (Master’s preferred) or equivalent combination of education, training, and experience.
  • Strong business acumen and people skills.
  • 7+ years of professional experience in analytics engineering, data engineering, or related roles.
  • Expert‑level SQL skills, including advanced query optimization and complex dbt‑core projects.
  • Strong proficiency in Python for data manipulation, automation, and API consumption.
  • PySpark experience for data ingestion and transformations.
  • Deep experience with modern data stack tools (dbt, Databricks, cloud data warehouses).
  • Proficiency with orchestration frameworks…