Senior Data Engineer (Remote)

Remote / Online - Candidates ideally in New Mexico, USA
Listing for: Sezzle
Remote/Work from Home position
Listed on 2025-11-29
Job specializations:
  • IT/Tech: Data Engineer
Salary/Wage Range: 5,000 - 9,500 USD per month
Job Description
Position: Senior Data Engineer (Remote)

About Sezzle

With a mission to financially empower the next generation, Sezzle is revolutionizing the shopping experience beyond payments, blending cutting‑edge tech with seamless, interest‑free installment plans that make shopping smarter and more accessible. We’re not just transforming payments; we’re redefining how people discover, interact with, and purchase the things they love while driving real impact on merchant sales through increased conversions and higher order values.

As we continue to shape the future of fintech and retail, we’re building an innovative, dynamic team passionate about creating more than just a transaction – a truly unique shopping journey.

Compensation

For this senior role, which calls for 9+ years of experience, the compensation range is $5,000 - $9,500 USD per month. The range reflects the extensive expertise, leadership capabilities, and significant contributions expected at this level.

About the Role

We are seeking a talented and motivated Senior Data Engineer. This role is an opportunity to thrive in a dynamic, fast‑paced environment on a rapidly growing team, with abundant prospects for career advancement. Sezzle is growing, and our data generation and consumption are growing with it at increasing scale. Data is extremely valuable, and we want to empower the business, the engineers, and the rest of the organization to analyze large volumes of it quickly and efficiently.

Sezzle is a heavy consumer of Redshift, leveraging AWS DMS, DBT‑based transformations, and similar tooling to populate a rapidly growing data lake that feeds multiple data warehouses for other business units. The platform has scaled well so far, and we’re looking to keep improving it while also stepping into new tooling and technologies.
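
For candidates less familiar with this stack, here is a minimal sketch of one routine task on it: polling AWS DMS replication‑task health with boto3. This is an illustration only, not Sezzle's actual tooling; the region is a placeholder, and the fields read are standard boto3/DMS API surface.

    # Minimal sketch (assumption: standard boto3 DMS API; not Sezzle's code):
    # list every DMS replication task with its status and last failure message.
    import boto3

    dms = boto3.client("dms", region_name="us-east-1")  # region is hypothetical

    def replication_task_statuses():
        """Yield (identifier, status, last_failure_message) for each DMS task."""
        paginator = dms.get_paginator("describe_replication_tasks")
        for page in paginator.paginate():
            for t in page["ReplicationTasks"]:
                yield (
                    t["ReplicationTaskIdentifier"],
                    t["Status"],  # e.g. "running", "stopped", "failed"
                    t.get("LastFailureMessage", ""),
                )

    for ident, status, failure in replication_task_statuses():
        print(f"{ident}: {status} {failure}".rstrip())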

What you’ll do
  • Design, build, and optimize large‑scale, high‑performance data pipelines to support analytics, product insights, and operational workflows.
  • Architect and evolve Sezzle’s data ecosystem, driving improvements in reliability, scalability, and efficiency.
  • Lead development of ETL/ELT workflows using Redshift, DBT, AWS DMS, and related modern data tooling.
  • Partner with cross‑functional teams (engineering, analytics, product, finance, risk) to gather and refine requirements and deliver robust, high‑quality datasets.
  • Evaluate and integrate new technologies, guiding the evolution of Sezzle’s data stack and infrastructure.
  • Optimize Redshift and warehouse performance, including query tuning, modeling improvements, and cost management (a brief query‑plan illustration follows this list).
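
As a brief illustration of the query‑tuning work in the last item, the sketch below pulls a query plan over a Postgres‑wire connection (Redshift speaks that protocol, so psycopg2 works against it). The cluster endpoint, credentials, and the orders table are all hypothetical.

    # Hedged sketch: inspect a Redshift query plan with EXPLAIN.
    # Endpoint, credentials, and the "orders" table below are hypothetical.
    import psycopg2

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="...",  # placeholder
    )

    with conn, conn.cursor() as cur:
        # A range-restricted scan here suggests the sort key on order_date is
        # being used; a full scan would be a tuning candidate (sort key,
        # distribution key, or predicate rewrite).
        cur.execute("""
            EXPLAIN
            SELECT merchant_id, SUM(order_total)
            FROM orders
            WHERE order_date >= '2025-01-01'
            GROUP BY merchant_id
        """)
        for (plan_line,) in cur.fetchall():
            print(plan_line)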
What we look for
  • 9+ years of experience in data engineering, with a strong track record of production‑grade systems.
  • Deep expertise with AWS Redshift or similar products, including performance tuning, table design, and workload management.
  • Strong hands‑on experience with ETL/ELT frameworks, especially DBT, AWS DMS, and similar tools.
  • Proficiency in SQL (advanced level) and at least one programming language such as Python, Scala, or Java.
  • Experience building and maintaining AWS‑based data platforms, including S3, Lambda, Glue, or EMR.
  • Track record of designing scalable, fault‑tolerant data pipelines that process 100 GB to 1 TB of new data per day, using modern orchestration tools (Airflow, Dagster, Prefect, etc.); a minimal Airflow sketch follows this list.
  • Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns.
  • Ability to work in a fast‑paced, collaborative environment with excellent communication and documentation skills.
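
Since Airflow is one of the orchestrators named above, here is a minimal TaskFlow‑style DAG sketch (assumes Airflow 2.4+ syntax). The DAG, task names, and bodies are placeholders for the kind of extract, load, and transform flow this posting describes, not a prescribed design.

    # Minimal Airflow TaskFlow sketch (assumes Airflow 2.4+; names are placeholders).
    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def daily_orders_pipeline():
        @task
        def extract() -> str:
            # Placeholder: in practice, raw files land in S3 (e.g. via DMS).
            return "s3://example-bucket/raw/orders/"

        @task
        def load(raw_path: str) -> None:
            # Placeholder: issue a COPY into a Redshift staging table.
            print(f"COPY staging.orders FROM '{raw_path}'")

        @task
        def transform() -> None:
            # Placeholder: run the DBT models that build the reporting layer.
            print("dbt run --select marts.orders")

        load(extract()) >> transform()

    daily_orders_pipeline()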
Preferred Knowledge and Skills
  • Prior experience in high‑growth, data‑intensive fintech or similar regulated environments.
  • Familiarity with streaming technologies (Kafka, Kinesis, Flink, Spark Streaming); a minimal Kinesis read loop is sketched after this list.
  • Knowledge of lakehouse architectures and modern stacks such as Snowflake, Databricks, Iceberg, or Delta Lake.
  • Exposure to machine learning pipelines, feature stores, or MLOps concepts.
  • Experience leading data platform migrations, warehouse re‑architectures, or large‑scale performance overhauls.
  • Enthusiasm for automation, CI/CD for data, and infrastructure as code (Terraform, CloudFormation).
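
For the streaming bullet above, a minimal Kinesis read loop with boto3 is sketched below. The stream name is hypothetical, and a production consumer would normally use KCL, Flink, or Spark Streaming rather than a raw polling loop.

    # Hedged sketch: tail a Kinesis stream with boto3 (stream name is hypothetical).
    import time
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")
    stream = "orders-events"

    shard_id = kinesis.describe_stream(StreamName=stream)[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=stream, ShardId=shard_id, ShardIteratorType="LATEST",
    )["ShardIterator"]

    while True:
        resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in resp["Records"]:
            print(record["Data"])  # raw bytes; decode/parse as appropriate
        iterator = resp["NextShardIterator"]
        time.sleep(1)  # stay under the per-shard read-throughput limit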
About You
  • You have relentlessly high standards – many people may think…
Position Requirements
10+ years of work experience