Information Security Engineer
Listed on 2026-02-16
Engineering / IT/Tech
Data Engineer
Location: Irving, TX
Salary: $53.00 USD Hourly – $57.00 USD Hourly
Overview
We are seeking a Data Engineer to support our Identity and Access Management (IAM) Data Lake initiatives within the Information Security Engineering organization. In this contingent role, you will contribute to medium-complexity engineering efforts, participate in large-scale planning, and help develop secure, scalable data solutions on Google Cloud Platform.
You will review and analyze technical challenges, recommend solutions, and collaborate with cross-functional partners across Information Security Engineering to meet delivery requirements. This role requires strong technical expertise, problem-solving skills, and the ability to work within established security, compliance, and engineering frameworks.
Location: Irving, TX (Preferred). Dallas, TX or Ohio are acceptable alternatives.
Responsibilities
- Design, build, and maintain Data Lake solutions on Google Cloud Platform using modern big data tools and frameworks.
- Develop and optimize data ingestion pipelines (batch and streaming) leveraging Google Cloud Platform-native services.
- Analyze moderately complex information security engineering challenges and propose robust, scalable solutions.
- Implement data processing using PySpark, Airflow, APIs, and CI/CD workflows (a minimal illustrative sketch follows this list).
- Support data modeling, schema design, Avro/Parquet/ORC usage, and metadata strategies.
- Apply best practices in access control, bucket structuring, lifecycle management, and secure data architecture.
- Collaborate closely with internal partners to deliver high-quality engineering outcomes in alignment with security policies and compliance requirements.
- Troubleshoot data pipeline issues and contribute to continuous improvement efforts within the IAM Data Lake environment.
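For context on the ingestion and processing work described above, here is a minimal, illustrative PySpark sketch of a batch step that converts raw Avro landed in Cloud Storage into partitioned Parquet. The bucket paths and dataset names are hypothetical, and the sketch assumes the spark-avro package is available; it is not part of the role description.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical bucket and dataset paths, used purely for illustration.
    RAW_PATH = "gs://example-iam-raw/access_events/"
    CURATED_PATH = "gs://example-iam-curated/access_events_parquet/"

    spark = (
        SparkSession.builder
        .appName("iam-access-events-batch-ingest")
        .getOrCreate()
    )

    # Read raw Avro records landed in Cloud Storage (requires the spark-avro package).
    events = spark.read.format("avro").load(RAW_PATH)

    # Light transformation: add an ingestion date column used for partitioning.
    events = events.withColumn("ingest_date", F.current_date())

    # Write curated, columnar output partitioned by ingestion date.
    (
        events.write
        .mode("append")
        .partitionBy("ingest_date")
        .parquet(CURATED_PATH)
    )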
Required Qualifications
- 4 years of experience in Information Security Engineering, Data Engineering, or related technical fields (experience may be gained through work, consulting, training, military service, or education).
- Proven hands-on expertise with:
- Google Cloud Platform (4-6 years)
- PySpark (4-6 years)
- Data processing frameworks (4-6 years)
- Airflow (2-4 years)
- API development (2-4 years)
- Data pipelines (2-4 years)
- Hadoop ecosystem / HDFS (2-4 years)
- Data modeling principles (1-2 years)
- Avro and columnar formats such as Parquet and ORC
- Strong understanding of Google Cloud Platform architectural best practices, including:
- Bucket structure and naming standards
- Access control models
- Lifecycle management policies
- Expertise with Parquet, Avro, ORC, and compression strategies.
- Experience building batch and streaming pipelines using Google Cloud Platform services such as Dataflow, Pub/Sub, Cloud Storage, BigQuery, and Composer.
- Knowledge of Pub/Sub-based streaming architecture, including schema management, evolution, and versioning (see the streaming sketch after this list).
- Familiarity with Change Data Capture (CDC) and incremental ingestion techniques.
- Understanding of downstream consumption patterns including APIs, materialized views, and curated analytical datasets.
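As an illustration of the Pub/Sub-based streaming work referenced above (not part of the posting itself), the following minimal Python sketch uses the google-cloud-pubsub client to pull messages from a subscription. The project and subscription names are hypothetical, and schema validation and CDC handling are only indicated by comments.

    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1

    # Hypothetical identifiers, for illustration only.
    PROJECT_ID = "example-project"
    SUBSCRIPTION_ID = "iam-access-events-sub"

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

    def callback(message: pubsub_v1.subscriber.message.Message) -> None:
        # A real pipeline would validate the message against its registered schema
        # and apply CDC / incremental-ingestion logic here before acknowledging.
        print(f"Received {message.data!r}")
        message.ack()

    streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)

    # Block for a short demo window, then shut the streaming pull down cleanly.
    with subscriber:
        try:
            streaming_pull_future.result(timeout=30)
        except TimeoutError:
            streaming_pull_future.cancel()
            streaming_pull_future.result()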