Senior Data Engineer; Remote
Baltimore, Anne Arundel County, Maryland, 21276, USA
Listed on 2026-02-16
IT/Tech
Data Engineer
Responsibilities & Qualifications
Purpose:
The primary purpose of a Senior Data Engineer in Production Support is to maintain the seamless operation of data pipelines, databases, and analytics platforms in live environments. This includes monitoring and troubleshooting data workflows, resolving incidents, and proactively addressing performance bottlenecks to minimize downtime and ensure data availability for end users and business stakeholders.
Essential Functions:
- Incident Management: Quickly diagnose and resolve data-related issues in production, minimizing impact on business operations.
- Monitoring & Alerting: Implement and maintain monitoring solutions to detect anomalies, failures, or performance degradation in data systems.
- Root Cause Analysis: Investigate recurring problems, identify their root causes, and implement long-term solutions to prevent future incidents.
- Performance Optimization: Analyze data workflows and infrastructure for inefficiencies, tuning systems for optimal performance and scalability.
- Collaboration: Work closely with data engineers, analysts, software developers, and IT teams to ensure seamless integration and deployment of data solutions.
- Documentation & Best Practices: Maintain clear documentation of production environments, issue resolutions, and standard operating procedures.
- Change Management: Participate in the planning and execution of changes to data systems, including upgrades, patches, and configuration updates, while minimizing disruption to ongoing operations.
- Security & Compliance: Ensure data systems adhere to organizational security policies and regulatory requirements, identifying and mitigating potential vulnerabilities.
- Capacity Planning: Forecast future data storage and processing needs, recommending infrastructure enhancements to support growth and evolving business requirements.
- User Support: Provide technical assistance to end users and business teams, addressing queries and enabling effective utilization of data resources.
- Automation: Develop and maintain scripts and tools to automate repetitive support tasks, streamlining processes and reducing manual intervention.
Supervisory Responsibility:
This position has no direct reports but is expected to help guide and mentor less experienced staff. May lead a team of matrixed resources.
Qualifications:
Education Level: Bachelor's Degree in Computer Science, Information Technology, Engineering, or a related field; in lieu of a Bachelor's degree, an additional 4 years of relevant work experience is required beyond the required work experience.
Experience:
- 5 years of experience with database design and data modeling tools.
- Experience developing and updating ETL/ELT scripts, with hands-on experience in application development, relational database layout and development, and data modeling.
Knowledge, Skills and Abilities (KSAs):
- Experience with Informatica IICS: Practical knowledge of Informatica Intelligent Cloud Services (IICS) for building, managing, and optimizing cloud-based data integration workflows.
- Knowledge of Kafka: Familiarity with Apache Kafka architecture and the ability to troubleshoot issues in a timely manner.
- MDM SaaS Experience: Understanding of and hands-on experience with Master Data Management (MDM) Software-as-a-Service solutions for ensuring data consistency, quality, and governance across enterprise systems.
- Proficiency in GitHub: Demonstrated expertise in using GitHub for version control, code collaboration, and managing development workflows within data engineering projects.
- Extensive Experience with SQL: Advanced proficiency in writing complex queries, optimizing performance, and managing large datasets in Snowflake, Oracle, and SQL Server databases.
- Expertise in Microsoft Azure: Hands-on experience with Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Blob Storage, Microsoft Fabric, and other relevant Azure cloud services.
- Power BI Experience: Hands-on experience developing, deploying, and maintaining data visualizations and dashboards using Power BI, including advanced DAX queries and integration with diverse data sources for actionable insights.
- Control-M Knowledge: Familiarity with Control-M workload automation, including designing, scheduling, and monitoring data pipeline jobs to ensure reliable and efficient batch processing within enterprise environments.
- Proficiency in Scripting Languages: Demonstrated ability to automate data workflows using Python, PowerShell, or similar scripting languages.
- Experience with Version Control Tools: Strong understanding and practical use of Git, GitHub, and Azure DevOps for collaborative development and CI/CD pipelines.
- Strong Data Engineering Fundamentals: Solid understanding of ETL/ELT processes, data warehousing concepts, and best practices in data architecture and governance.
- Flexibility to Work Long Hours and On Demand: Willingness and ability to adapt to varying work schedules, including evenings and weekends, to meet project deadlines and production support…