
Senior Staff Data Engineer

Job in Madison, Dane County, Wisconsin, 53774, USA
Listing for: Findhelp, A Public Benefit Corporation
Full Time position
Listed on 2026-02-01
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range or Industry Benchmark: $146,400 - $183,000 USD yearly
Job Description & How to Apply Below
Position: Senior Staff Data Engineer

We’re changing the way people connect to social care.

At Findhelp, we’ve built a comprehensive platform of products and services that make it easy for you to connect people to resources, follow them on their journey, and track your impact in a fast and reliable way. Our industry-leading social care network includes more than half a million local, state, and national programs that serve every ZIP Code in the country, from rural areas to major metropolitan centers.

Findhelp is headquartered in Austin, Texas, and since 2010 has been enabling healthcare, government, education, and other organizations to connect people, with privacy and security, to the social care resources that serve them.

As a mission-driven organization, we are focused on creating a positive impact by connecting people in need to the programs that serve them with dignity and ease. Powered by our proprietary technology that enables people to find the resources available in their area, we have helped millions of seekers find food, health, housing, and employment programs.

We are seeking a skilled Senior Staff Data Engineer to own and enhance our data integration processes, with a focus on uploading and syncing data related to 211 relationships and other organizational data. This role is critical to ensuring data consistency, automation, and scalability across our data ecosystem. As a Senior Staff Data Engineer, you will be responsible for designing, building, and maintaining data pipelines that transform, load, and synchronize data from various sources into our environment.

You will leverage cloud-based batch data load strategies and update APIs to ensure seamless integration. You will also play a key role in optimizing production systems, resolving incidents, and identifying automation opportunities to improve efficiency.

Responsibilities
  • Data Integration & Processing:
    Own all data uploading and syncing related to 211 relationships and potentially other organizational datasets.
  • Pipeline Development:
    Design, build, and maintain scalable data pipelines that transform raw source outputs (e.g., customer 211s) into a structured format for internal use.
  • API-Driven Data Uploads:
    Partner with the Programs Team to create and update APIs that upload and synchronize bulk data efficiently.
  • Database Design & Documentation:
    Create and maintain data models, metadata, ETL specifications, and process flows to support business data projects.
  • Production System Monitoring:
    Monitor, maintain, and optimize data pipelines to ensure reliability, performance, and data integrity.
  • Incident Resolution:
    Investigate and resolve user-reported incidents related to data syncing, transformation, and pipeline failures.
  • Automation & Optimization:
    Identify opportunities to automate, consolidate, and simplify data solutions for better scalability.
  • Code Reviews & Best Practices:
    Conduct periodic code reviews to enforce best practices in design, performance tuning, and maintainability.
Qualifications
  • 7+ years of experience in data engineering, with a focus on data pipeline development, transformation, and bulk syncing, and proficiency in one or more languages commonly used for data operations, such as SQL and Python.
  • Deep knowledge and hands-on experience building and operating highly available, distributed systems for data extraction, ingestion, and processing in cloud environments such as GCP (ideal), Microsoft Azure, or AWS.
  • Experience with bulk API integration and JSON processing, including designing, optimizing, and troubleshooting high-volume data exchanges.
  • Demonstrated strength in data modeling, ETL development, and data warehousing.
  • Experience with relational databases such as MySQL (ideal), Oracle, or PostgreSQL.
  • Experience with BigQuery (ideal), Redshift, or Snowflake.
  • Experience with Airflow, GitHub, and CI/CD processes is a must.
  • Ability to troubleshoot and optimize data solutions for performance and reliability.

$146,400 - $183,000 a year

The salary range provided reflects the national average for this job title and does not represent compensation specific to Findhelp. Actual compensation will vary based on experience, qualifications, and market factors relevant to the position.

We…
Position Requirements
10+ years work experience