Senior Software Engineer - Data
Listed on 2025-12-13 - IT/Tech
Data Engineer, Data Analyst, Data Science Manager, Data Warehousing
About The Role:
Parker’s mission is simple but ambitious: to increase the number of financially independent people. We believe the best way to achieve this is by giving independent business owners the financial tools they need to scale profitably. Our core product combines a virtual corporate card with dynamic spending limits and profitability-focused software tooling, empowering eCommerce merchants to grow faster while staying in control of their margins.
We’ve raised over $180M in equity and debt from world-class investors, including Valar Ventures, Y Combinator, SVB, and notable founders such as Solomon Hykes (Docker), Paul Buchheit (Gmail), Paul Graham (Y Combinator), and Robert Leshner (Compound). We’re a Series B fintech scaling rapidly, with strong product-market fit and accelerating demand.
We're looking for a Senior Software Engineer - Data to join our team and help build reliable, scalable, and well-documented data systems. This is an excellent opportunity for someone with an interest in a fintech career to grow within a modern data stack environment. You'll support the development of data pipelines, help maintain our data infrastructure, and collaborate with analysts, data scientists, and backend engineers to make data accessible and trustworthy.
Responsibilities:
- Assist in building and maintaining data pipelines (ETL/ELT) for internal and external data
- Support data ingestion from APIs, files, and databases into our data warehouse
- Write SQL queries and Python scripts to clean, join, and transform data for reporting and analysis
- Monitor data quality and troubleshoot pipeline issues
- Contribute to documentation and testing of data workflows
- Learn and work with tools like dbt and Dagster
- Follow best practices for version control (Git) and coding standards
Tech Stack:
- Languages: Python, SQL
- Data Warehouses: Redshift, Snowflake, BigQuery, Postgres
- Orchestration & Integration: Dagster, Airbyte, dbt, Prefect
- Cloud: AWS (S3, Glue, Lambda), GCP, or similar
- Dev Tools: GitHub, Docker, VS Code
Requirements:
- 7+ years of experience in a data, backend, or analytics role (internships count!)
- Strong SQL skills and an interest in analytics engineering
- Intermediate Python knowledge (e.g., working with data, files, APIs)
- Understanding of relational databases and columnar data warehouse and data modeling concepts
- Comfortable with Git and command-line tools
- Curiosity and willingness to learn modern data tooling
- Clear communication and collaboration skills
- Experience with dbt, Airflow, Dagster, or similar tools
- Exposure to cloud platforms (AWS, GCP, etc.)
- Familiarity with data quality, observability, or testing frameworks
- Past projects involving large datasets or data APIs
- Exposure to GraphQL and TypeScript