Data Engineer
Saint Petersburg, Pinellas County, Florida, 33739, USA
Listed on 2026-01-01 by Pitisci & Associates
Full Time
Job specializations: IT/Tech (Data Engineer, Cloud Computing)
Job Description
W2 contract; local Tampa Bay candidates only.
Our client, located in St. Petersburg, FL, is seeking a Data Engineer to build and maintain data pipelines that connect Oracle-based source systems to AWS cloud environments, providing well-structured data for analysis and machine learning in AWS SageMaker. The role involves working closely with data scientists to deliver scalable data workflows as a foundation for predictive modeling and analytics.
Duties:
- Develop and maintain data pipelines to extract, transform, and load data from Oracle databases and other systems into AWS environments (S3, Redshift, Glue, etc.).
- Set up pipelines to meet the needs of the data scientists.
- Build SQL connections to load files.
- Create all pipelines from scratch (none currently exist); most will be batch pipelines.
- Collaborate with data scientists to ensure data is prepared, cleaned, and optimized for SageMaker-based machine learning workloads.
- Implement and manage data ingestion frameworks, including batch and streaming pipelines.
- Automate and schedule data workflows using AWS Glue, Step Functions, or Airflow.
- Develop and maintain data models, schemas, and cataloging processes for discoverability and consistency.
- Optimize data processes for performance and cost efficiency.
- Implement data quality checks, validation, and governance standards.
- Work with DevOps and security teams.
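The duties above center on batch extract-transform-load work from Oracle into AWS. As a rough illustration of that pattern, here is a minimal batch pipeline sketch in Python. It uses the standard-library sqlite3 module as a stand-in for an Oracle connection (in practice a driver such as oracledb or pyodbc would be used) and a local staging directory as a stand-in for an S3 bucket; the table and column names are hypothetical.

```python
import csv
import sqlite3
from pathlib import Path


def extract_orders(conn):
    """Pull raw rows from the source system (sqlite3 stands in for Oracle here)."""
    cur = conn.execute("SELECT order_id, amount, region FROM orders")
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]


def transform(rows):
    """Clean rows for downstream ML use: drop null amounts, normalize region names."""
    return [
        {**r, "region": r["region"].strip().upper()}
        for r in rows
        if r["amount"] is not None
    ]


def load(rows, staging_dir):
    """Write a CSV batch to a staging area (a local directory stands in for S3)."""
    out = Path(staging_dir) / "orders_batch.csv"
    with out.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "amount", "region"])
        writer.writeheader()
        writer.writerows(rows)
    return out
```

In an actual AWS deployment the load step would instead use boto3 (for example, an S3 client's `upload_file` call) to push each batch to the target bucket, with the same extract/transform structure unchanged.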
Required:
- Strong proficiency with SQL and hands-on experience working with Oracle databases.
- Strong experience in batch processing.
- Experience designing and implementing ETL/ELT pipelines and data workflows.
- Hands-on experience with AWS data services, such as S3, Glue, Redshift, Lambda, and IAM.
- Proficiency in Python for data engineering (pandas, boto3, pyodbc, etc.).
- Solid understanding of data modeling, relational databases, and schema design.
- Familiarity with version control, CI/CD, and automation practices.
- Ability to collaborate with data scientists to align data structures with model and analytics requirements.
- Experience integrating data for use in AWS SageMaker or other ML platforms.
- Exposure to MLOps or ML pipeline orchestration.
- Familiarity with data cataloging and governance tools (AWS Glue Catalog, Lake Formation).
- Knowledge of data warehouse design patterns and best practices.
- Experience with data orchestration tools (e.g., Apache Airflow, Step Functions).
- Working knowledge of Java is a plus.
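One of the duties listed above is implementing data quality checks and validation before data reaches SageMaker. A minimal sketch of such a validation gate in Python follows; the column names and rules are hypothetical and purely illustrative.

```python
def validate_batch(rows, required_cols=("order_id", "amount", "region")):
    """Run simple quality checks on a batch before it is loaded downstream.

    Returns a list of human-readable failure messages; an empty list
    means the batch passes. Column names here are illustrative only.
    """
    failures = []
    if not rows:
        failures.append("batch is empty")
        return failures
    for i, row in enumerate(rows):
        # Null check: every required column must carry a value.
        missing = [c for c in required_cols if row.get(c) is None]
        if missing:
            failures.append(f"row {i}: missing values for {missing}")
        # Range check: business rule that amounts cannot be negative.
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            failures.append(f"row {i}: negative amount {amount}")
    return failures
```

In an orchestrated workflow (Glue, Step Functions, or Airflow), a non-empty failure list would typically fail the task and block the load, keeping bad batches out of the warehouse.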
B.S. in Computer Science, MIS, or a related field, plus a minimum of five (5) years of related experience, or an equivalent combination of education, training, and experience.