Data Engineer
Job in Washington, District of Columbia, 20022, USA
Listed on: 2025-12-15
Listing for: Tential Solutions
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Data Science Manager, Data Analyst, Big Data
Job Description
We’re partnering with a Big 4 consulting firm to add a Data Engineer to their team supporting a major banking and credit organization. This role focuses on building and optimizing scalable, cloud-based data pipelines using Python, Java, SQL, AWS, Spark, Databricks, and EMR. You’ll work across consulting and client teams to deliver reliable data solutions that power analytics, risk, and credit decisioning use cases. This position is fully remote.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT processes using Python, Java, and SQL.
- Develop and optimize distributed data processing workloads using Spark (batch and/or streaming) on AWS.
- Build and manage data workflows on AWS, leveraging services such as EMR, S3, Lambda, Glue, and related components as appropriate.
- Use Databricks to develop, schedule, and monitor notebooks, jobs, and workflows supporting analytics and data products.
- Implement data models and structures that support banking/credit analytics, reporting, and downstream applications (e.g., risk, fraud, portfolio, customer insights).
- Monitor, troubleshoot, and tune pipeline performance, reliability, and cost in a production cloud environment.
- Collaborate with consultants, client stakeholders, data analysts, and data scientists to understand requirements and translate them into technical solutions.
- Apply best practices for code quality, testing, version control, and CI/CD within the data environment.
- Contribute to documentation, standards, and reusable components to improve consistency and speed across the data engineering team.
Qualifications:
- Strong hands‑on experience with Python and Java for data engineering, ETL/ELT, or backend data services.
- Advanced SQL skills, including complex queries, performance tuning, and working with large, relational datasets.
- Production experience on AWS, ideally with services such as EMR, S3, Lambda, Glue, IAM, and CloudWatch.
- Practical experience building and optimizing Spark jobs (PySpark, Spark SQL, or Scala).
- Hands‑on experience with Databricks (notebooks, clusters, jobs, and/or Delta Lake).
- Proven experience building and supporting reliable, performant data pipelines in a modern cloud environment.
- Solid understanding of data warehousing concepts, data modeling, and best practices for structured and semi‑structured data.
- Experience working in collaborative engineering environments (Git, code reviews, branching strategies).
- Strong communication skills and comfort working in a consulting/client‑facing environment.
Preferred:
- Experience in banking, credit, financial services, or highly regulated environments.
- Background with streaming data (e.g., Spark Streaming, Kafka, Kinesis) and real‑time or near‑real‑time data processing.
- Familiarity with orchestration tools (e.g., Airflow, Databricks jobs scheduler, Step Functions).
- Experience supporting analytics, BI, or data science teams (e.g., building curated datasets, feature stores, or semantic layers).
Seniority Level: Entry level
Employment type: Contract
Job function: Information Technology
Remote: Yes