
Data Engineer II

Job in Indianapolis, Marion County, Indiana, 46262, USA
Listing for: Delineate
Full Time, Part Time position
Listed on 2026-01-07
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager
  • Engineering
    Data Engineer, Data Science Manager
Salary/Wage Range or Industry Benchmark: USD 60,000 - 80,000 per year
Job Description & How to Apply Below
Location: Indianapolis

Grow With Us:
Join Our Team of Innovators

Our data engineers skillfully navigate the intricate waters of technology, business dynamics, and data variability. Their role demands a careful balance of technical expertise and strategic vision, ensuring that data solutions align with industry standards while fitting each client's goals and organizational analytics maturity.

Job overview

Job Title: Data Engineer II
Job Location: Indianapolis, IN
Description: The Data Engineer is a key technical contributor responsible for designing, building, and maintaining robust data infrastructure and pipelines that enable seamless data integration, transformation, and analysis. This role involves optimizing cloud resource usage, ensuring data quality and governance, and implementing scalable, efficient data solutions aligned with business objectives. As a member of the data services team, the Data Engineer collaboratively designs database schemas, develops ETL workflows, and ensures compliance with data privacy and regulatory standards.

They proactively diagnose and resolve complex technical issues, optimize queries, and contribute to process improvements.
Status: Part-time or full-time considered.
Salary: Competitive, determined by the candidate's experience, expertise, and qualifications.

Education

Bachelor's degree in Data Science, Computer Science, Software Engineering, Information Systems, Mathematics, or a related technical field is required. Equivalent practical experience in data engineering or related technical roles may be considered in lieu of formal education.

While certifications are not required for this role, they can demonstrate a strong foundation in key technologies and platforms, showcasing your commitment to professional growth and technical expertise. Example certifications include the following:

  • Microsoft DP-900: Azure Data Fundamentals
  • Databricks Certified Data Engineer Associate or Professional
  • Microsoft Azure Data Engineer Associate
  • AWS Certified Data Engineer Associate
  • Snowflake SnowPro Core Certification
  • Google Professional Data Engineer

Technical skills and knowledge you bring to the role

The ideal candidate is highly skilled in Python, SQL, and cloud-based data storage technologies, with a strong focus on automation and continuous learning. They take ownership of tasks within cross-team initiatives, mentor more junior team members, and recommend innovative tools and solutions to enhance performance. This role is integral to driving the scalability, reliability, and efficiency of our clients' data systems.

  • Programming Languages: Proficiency in Python and SQL for data processing and query optimization. Experience with PySpark and one or more additional languages like Scala, Java, or Bash for managing data workflows.
  • Data Storage and Databases: Strong knowledge of relational databases (e.g., PostgreSQL, MySQL, Oracle). Experience with modern data warehouses such as Snowflake, Amazon Redshift, or Google BigQuery. Familiarity with data lakes (e.g., Amazon S3, Azure Data Lake) and lakehouse solutions (e.g., Delta Lake, Apache Iceberg).
  • Big Data Frameworks: Hands-on experience with Apache Spark for distributed data processing, including leveraging Apache Spark through Databricks. Knowledge of Apache Kafka or similar tools for real-time data streaming.
  • Cloud Platforms: Experience with cloud technologies such as AWS (S3, Glue, Redshift), Microsoft Azure (Data Factory, Synapse), or Google Cloud Platform (BigQuery, Dataflow).
  • Data Governance and Security: Understanding of data governance frameworks, compliance (GDPR, HIPAA), and tools like Unity Catalog, Apache Atlas, and Great Expectations.
  • Pipeline Monitoring and Optimization: Experience with orchestration and monitoring tools, such as Apache Airflow, for tracking pipeline performance. Ability to optimize and troubleshoot data pipelines for scalability and efficiency.

Key responsibilities

In this role, you will work alongside the Delineate team to:

  • Develop database schemas for moderately complex data models, optimizing for query performance. Design and implement data models utilizing concepts like dimensional modeling…