
Data Engineer II - Digital Technology

Job in Arlington, Tarrant County, Texas, 76000, USA
Listing for: GM Financial
Full Time position
Listed on 2026-02-07
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description & How to Apply Below


Why GM Financial Technology

Innovation isn’t just a talking point at GM Financial, it’s how we operate. From generative AI and cloud-native technologies to peer-led learning and hackathons, our tech teams are building real solutions that make a difference. We’re committed to AI-powered transformation, using advanced machine learning and automation to help us reimagine customer interactions and modernize operations, positioning GM Financial as a leader in digital innovation within a dynamic industry.

Job Description

We are expanding our efforts into complementary data technologies for decision support, specifically ingesting and processing large data sets, including data commonly referred to as semi-structured or unstructured. Our interest is in enabling data science and search-based applications on large, low-latency data sets, with processing in both batch and streaming contexts. To that end, this role will engage with team counterparts in exploring and deploying technologies for creating data sets using a combination of batch and streaming transformation processes.

These data sets support both offline and inline machine learning training and model execution; others support search engine-based analytics. Exploration and deployment activities include identifying opportunities that impact business strategy, collaborating on the selection of data solutions software, and contributing to the identification of hardware requirements based on business requirements. Responsibilities also include coding, testing, and documenting new or modified scalable analytic data systems, including automation for deployment and monitoring.

This role partners with team counterparts to develop solutions in an end-to-end framework on a group of core data technologies.

Job Duties
  • Contribute to evaluation, research, and experimentation efforts with batch and streaming data engineering technologies in a lab setting to keep pace with industry innovation
  • Work with data engineering-related groups to showcase the capabilities of emerging technologies and to enable adoption of these technologies and their associated techniques
  • Contribute to the definition and refinement of processes and procedures for the data engineering practice
  • Work closely with data scientists, data architects, ETL developers, other IT counterparts, and business partners to identify, capture, collect, and format data from external sources, internal systems, and the data warehouse to extract features of interest
  • Code, test, deploy, monitor, document, and troubleshoot data engineering processes and associated automation
Qualifications
  • Experience with Adobe solutions (ideally Adobe Experience Platform, XDM, RT-CDP, DTM/Launch) and REST APIs
  • Experience with digital technology solutions (DMPs, CDPs, Tag Management Platforms, Cross-Device Tracking, SDKs, etc.); knowledge of Real-Time CDP and Journey Analytics solutions
  • SQL experience: querying data and communicating the insights that can be derived from it
  • Working knowledge of Agile development (SAFe, Scrum) and Application Lifecycle Management
  • Experience ingesting data in various formats (such as JSON, Parquet, and SequenceFile) and from various sources (such as cloud databases, MQ, and relational databases such as Oracle)
  • Experience with cloud technologies (such as Azure, AWS, and GCP) and their native toolsets
  • Understanding of cloud computing technologies, business drivers, and emerging computing trends
  • Thorough understanding of hybrid cloud computing: virtualization technologies; Infrastructure as a Service, Platform as a Service, and Software as a Service delivery models; and the current competitive landscape
  • Working knowledge of object storage technologies, including but not limited to Azure Data Lake Storage (ADLS) Gen2, S3, MinIO, and Ceph
  • Strong background with source control management systems; build systems (Maven, Gradle, Webpack); code quality tools (Sonar); artifact repository managers (Artifactory); and continuous integration/continuous deployment (Azure DevOps)
  • Experience with processing large data sets using Hadoop, HDFS, Spark,…