
Data Engineer III

Job in Garden City, Finney County, Kansas, 67846, USA
Listing for: Associated Wholesale Grocers, Inc.
Full Time position
Listed on 2025-12-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Warehousing
Job Description & How to Apply Below

Department: Information Technology

Associated Wholesale Grocers (AWG) is transforming our business intelligence, analytics, and data automation capabilities. The goal is to level up our data warehousing and data management capabilities, including curated and cleansed data sets and certified data models and sources. We are adopting cloud-based data automation for data cleansing, extraction, loading, transformation, and visualization. Another goal is to expand storage and compute capacity for ingestion of structured and unstructured data.

In addition, we are hardening security in the areas of role-based access, anonymized data sets, and intrusion prevention.

As a Data Engineer III, you will play a pivotal role in the design, development, optimization, and support of a robust data ecosystem that supports the company’s enterprise-wide business intelligence and analytics initiatives. Leveraging your experience in data engineering and visualization, you will be responsible for delivering enterprise data solutions, ensuring adherence to data governance frameworks, and driving the development of scalable ETL processes.

Your primary area of responsibility will be building the strong data engineering foundation that supports a new company-wide business intelligence and analytics environment. You will perform the design, build, deployment, and support of standardized datasets derived from multiple systems of record. A key measure of success in this role is developing or applying development and scripting skills in process automation to orchestrate data pipelines from business systems, data lakes, and third-party data repositories, and to visualize the resulting data.

You will be creating and loading enterprise data warehouse/dimensional models that will be used for analysis and reporting. As you work on projects and enhancements you will assist in refining data governance within business processes and the technical tool chain, in addition to ensuring proper error/exception handling procedures are followed in the solutions.

The ability to quickly learn the meaning of business data and understand the business domain is paramount to this role. Your skills will be leveraged to accurately identify key data and build the system processes which will extract, transform, load, and visualize the data that is used in key business decisions. This role includes hands-on development experience and participation in regular on-call duties.

Position Responsibilities:


Deliver Solutions (65%)

  • Architect, design and build real-time and batch data integrations from disparate source systems into the data lake/data warehouse.
  • Write SQL, PySpark, Python, and Databricks notebooks to orchestrate data movement and transformation from source to target, using parameterization whenever possible.
  • Build and maintain scalable ETL processes using workflow automation tools such as Azure Data Factory, Azure Databricks, Cawa, and Cleo Harmony.
  • Provide comprehensive documentation for installations and deliverables.
  • Design enterprise-grade data management solutions, including dimensional models, and cleanse and transform raw data into a structured format.
  • Scale and tune data pipelines, data sets and SQL queries.
  • Responsible for the full SDLC (Software Development Lifecycle) phases of data engineering, delivering enterprise-grade data engineering solutions that meet the business’s current and future aspirations. This spans requirements gathering through warranty support, including unit testing plus support of integration testing and user acceptance testing.
  • Understand dimensional modeling, and import, clean, transform, and shape data using DAX and Power Query in Power BI Desktop and Service.
  • Execute Azure DevOps CI/CD processes such as creating branches, merging to main, creating pull requests, and getting approvals;
    participate in CAB meetings and create RFCs for migrating changes to Production. Knowledge of DABs (Databricks Asset Bundles) is a plus.
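As a rough illustration of the parameterized source-to-target pattern the bullets above describe, the sketch below moves cleansed rows from a source table to a target table with the filter threshold passed in as a parameter. This is a minimal stdlib example, not AWG's actual stack: all table names, column names, and the `run_pipeline` function are hypothetical, and it uses `sqlite3` in place of Databricks/ADF.

```python
import sqlite3

def run_pipeline(conn, source_table, target_table, min_qty):
    """Parameterized extract-transform-load: copy cleansed rows from
    source_table into target_table. Hypothetical example schema; table
    names are checked against an allowlist because SQL identifiers
    cannot be bound as query parameters."""
    allowed = {"raw_orders", "curated_orders"}
    if source_table not in allowed or target_table not in allowed:
        raise ValueError(f"unknown table: {source_table}/{target_table}")
    cur = conn.cursor()
    # Extract + transform: trim whitespace from SKUs and drop rows
    # below the quantity threshold (the bound parameter).
    rows = cur.execute(
        f"SELECT TRIM(sku), qty FROM {source_table} WHERE qty >= ?",
        (min_qty,),
    ).fetchall()
    # Load into the target table.
    cur.executemany(
        f"INSERT INTO {target_table} (sku, qty) VALUES (?, ?)", rows
    )
    conn.commit()
    return len(rows)

# Demo run against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (sku TEXT, qty INTEGER)")
conn.execute("CREATE TABLE curated_orders (sku TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [(" A1 ", 5), ("B2", 0), ("C3 ", 3)],
)
loaded = run_pipeline(conn, "raw_orders", "curated_orders", min_qty=1)
```

In a Databricks notebook the same shape would typically take its parameters from notebook widgets or job parameters rather than function arguments, so one notebook can serve many source/target pairs.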


Support & Availability Services (15%)

  • Participate in core/project teams solving key/major production support issues; define, develop, and execute data engineering optimizations…