
Data Platform Engineer

Job in New York, New York County, New York, 10261, USA
Listing for: BRAINS WORKGROUP, INC.
Full Time, Part Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description & How to Apply Below
Location: New York

Our client, a major bank in New York City, is looking for a Data Platform Engineer. This is a permanent position with a competitive compensation package (base range is 120‑150K), excellent benefits, and a target bonus. Must be in the New York City office 2-3 days per week.

Data Platform Engineer

We are looking for a highly skilled Kafka Platform Engineer to design, build, and operate our enterprise event-streaming platform using Red Hat AMQ Streams (Kafka on OpenShift). In this role, you will be responsible for ensuring a reliable, scalable, secure, and developer-friendly streaming ecosystem. You will work closely with application teams to define and implement event‑driven integration patterns, and you will leverage GitLab and Argo CD to automate platform delivery and configuration.

This position requires a strong blend of platform engineering, DevOps practices, Kafka cluster expertise, and architectural understanding of integration/streaming patterns.

Qualifications
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience with Kafka administration and management.
  • Strong knowledge of OpenShift and container orchestration.
  • Proficiency in scripting languages such as Python or Bash.
  • Experience with monitoring and logging tools (e.g., Splunk, Prometheus, Grafana).
  • Excellent problem‑solving skills and attention to detail.
  • Strong communication and collaboration skills.
Preferred Qualifications
  • Experience with Red Hat OpenShift administration.
  • Knowledge of service mesh patterns (Istio, OpenShift Service Mesh).
  • Familiarity with stream processing frameworks (Kafka Streams, ksqlDB, Flink).
  • Experience using observability stacks (Prometheus, Grafana).
  • Background working in regulated or enterprise‑scale environments.
  • Knowledge of DevOps practices and tools (e.g., Argo CD, Ansible, Terraform).
  • Knowledge of SRE monitoring and logging tools (e.g., Splunk, Prometheus, Grafana).
Job Description
Kafka & AMQ Streams Engineering
  • Design, deploy, and operate AMQ Streams (Kafka) clusters on Red Hat OpenShift.
  • Configure and manage Kafka components, including brokers, KRaft, and MirrorMaker 2.
  • Explore Kafka Connect and Schema Registry concepts and implementations.
  • Ensure performance, reliability, scalability, and high availability of the Kafka platform.
  • Implement cluster monitoring, logging, and alerting using enterprise observability tools.
  • Manage capacity planning, partition strategies, retention policies, and performance tuning (see the sketch after this list).
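
For illustration, a minimal sketch of scripted topic administration (an explicit partition count plus a retention policy), assuming the confluent-kafka Python client; the broker address, topic name, and sizing values are placeholders rather than details from this posting:

    # Hypothetical sketch: creating a topic with an explicit partition count
    # and retention policy via the Kafka admin API. Broker address, topic
    # name, and sizing values are illustrative placeholders.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "kafka-bootstrap:9092"})  # placeholder

    topic = NewTopic(
        "payments.transactions.v1",  # example name following a naming standard
        num_partitions=12,           # sized for expected consumer parallelism
        replication_factor=3,        # typical HA setting on a 3+ broker cluster
        config={
            "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # 7-day retention
            "cleanup.policy": "delete",
        },
    )

    # create_topics() is asynchronous and returns one future per topic.
    for name, future in admin.create_topics([topic]).items():
        try:
            future.result()  # raises KafkaException on failure
            print(f"created {name}")
        except Exception as exc:
            print(f"failed to create {name}: {exc}")
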
Integration Patterns & Architecture
  • Define and document standardized event‑driven integration patterns, including:
    • Event sourcing
    • CQRS
    • Pub/sub messaging
    • Change data capture
    • Stream processing & enrichment
    • Request‑reply over Kafka (sketched after this list)
  • Guide application teams on using appropriate patterns that align with enterprise architecture.
  • Establish best practices for schema design, topic governance, data contracts, and message lifecycle management.
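
As a concrete example of one of these patterns, below is a minimal request-reply sketch over Kafka using a correlation ID and a reply-to topic carried in message headers; it assumes the confluent-kafka Python client, and the broker address and topic names are placeholders:

    # Hypothetical sketch of request-reply over Kafka: the caller tags each
    # request with a correlation ID and a reply-to topic, then polls the
    # reply topic for a message carrying the same ID.
    import uuid

    from confluent_kafka import Consumer, Producer

    BOOTSTRAP = "kafka-bootstrap:9092"  # placeholder

    producer = Producer({"bootstrap.servers": BOOTSTRAP})
    replies = Consumer({
        "bootstrap.servers": BOOTSTRAP,
        "group.id": f"replies-{uuid.uuid4()}",  # private group per caller
        "auto.offset.reset": "latest",
    })
    replies.subscribe(["orders.replies"])  # placeholder reply topic

    corr_id = str(uuid.uuid4())
    producer.produce(
        "orders.requests",  # placeholder request topic
        value=b'{"order_id": 42}',
        headers={"correlation-id": corr_id, "reply-to": "orders.replies"},
    )
    producer.flush()

    # Wait for the response matching our correlation ID; a production
    # version would add a timeout and error handling.
    while True:
        msg = replies.poll(1.0)
        if msg is None or msg.error():
            continue
        headers = dict(msg.headers() or [])
        if headers.get("correlation-id") == corr_id.encode():
            print("reply:", msg.value())
            break
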
Security & Governance
  • Implement enterprise‑grade security for Kafka, including RBAC, TLS, ACLs, and authentication/authorization integration (SSO and OAuth); see the ACL sketch after this list.
  • Maintain governance for topic creation, schema evolution, retention policies, and naming standards.
  • Ensure adherence to compliance, auditing, and data protection requirements (encryption at rest and in flight).
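
For illustration, a minimal sketch of granting a consumer principal read access to one topic through Kafka ACLs, assuming a recent confluent-kafka Python client that exposes the ACL admin API; the principal, topic, and broker address are placeholders:

    # Hypothetical sketch: an ALLOW/READ ACL for a service principal on one
    # topic. In an SSO/OAuth deployment the principal string depends on how
    # the authentication layer maps identities.
    from confluent_kafka.admin import (
        AclBinding,
        AclOperation,
        AclPermissionType,
        AdminClient,
        ResourcePatternType,
        ResourceType,
    )

    admin = AdminClient({"bootstrap.servers": "kafka-bootstrap:9092"})  # placeholder

    binding = AclBinding(
        ResourceType.TOPIC,
        "payments.transactions.v1",  # placeholder topic
        ResourcePatternType.LITERAL,
        "User:payments-consumer",    # placeholder principal
        "*",                         # any host
        AclOperation.READ,
        AclPermissionType.ALLOW,
    )

    # create_acls() is asynchronous and returns one future per binding.
    for acl, future in admin.create_acls([binding]).items():
        future.result()  # raises on failure
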
Collaboration & Support
  • Provide platform guidance and troubleshooting expertise to development and integration teams.
  • Partner with architects, SREs, and developers to drive adoption of event‑driven architectures.
  • Create documentation, runbooks, and internal knowledge‑sharing materials.
CI/CD & GitOps Automation
  • Build and maintain GitOps workflows using Argo CD for declarative deployment of Kafka resources and platform configurations.
  • Develop CI/CD pipelines in GitLab, enabling automated builds, infrastructure updates, and configuration promotion across environments.
  • Maintain Infrastructure‑as‑Code repositories and templates for Kafka resources (a template sketch follows this list).
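
As one possible shape for such a template, here is a minimal sketch that renders a declarative KafkaTopic custom resource (the AMQ Streams / Strimzi CRD) into a repository that Argo CD would sync; the namespace, cluster label, file path, and sizing values are placeholders:

    # Hypothetical sketch: generating a Strimzi/AMQ Streams KafkaTopic
    # manifest to commit into a GitOps repo that Argo CD reconciles.
    import yaml  # PyYAML

    def kafka_topic_manifest(name: str, partitions: int, retention_ms: int) -> str:
        """Render the YAML for one KafkaTopic custom resource."""
        resource = {
            "apiVersion": "kafka.strimzi.io/v1beta2",
            "kind": "KafkaTopic",
            "metadata": {
                "name": name,
                "namespace": "kafka",                      # placeholder namespace
                "labels": {"strimzi.io/cluster": "prod"},  # placeholder cluster
            },
            "spec": {
                "partitions": partitions,
                "replicas": 3,
                "config": {"retention.ms": retention_ms},
            },
        }
        return yaml.safe_dump(resource, sort_keys=False)

    # Write the manifest into the IaC repo for Argo CD to pick up on sync.
    with open("topics/payments.transactions.v1.yaml", "w") as fh:
        fh.write(kafka_topic_manifest("payments.transactions.v1", 12, 604_800_000))
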

Please email your resume or use this link to apply directly:
