
Senior Application Developer - Disaster Recovery & Data Replication; Remote

Remote / Online - Candidates ideally in Sunnyvale, Santa Clara County, California, 94087, USA
Listing for: CrowdStrike, Inc.
Full Time, Remote/Work from Home position
Listed on 2025-12-26
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description & How to Apply Below
Position: Senior Application Developer - Disaster Recovery & Data Replication (Remote)

CrowdStrike, Inc. | Full time | R25807

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and that volume is growing daily.

Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other.

Ready to join a mission that matters? The future of cybersecurity starts with you.

About the Role

CrowdStrike's Data Platform group is integral not only to our day-to-day operations but also to our customers' ability to access and learn from our historical threat remediation findings. We empower them directly with self-service platforms and tooling to execute vulnerability- and attack-surface-management failsafes in their own environments, based on hundreds of petabytes of historical threat data.

Our Data Infrastructure team within the Data Platform group is seeking an exceptional Senior Application Developer (a de facto tech lead) to lead disaster recovery and data replication initiatives. This is a high-impact, hands-on technical leadership position where you'll architect and build mission-critical applications that scale seamlessly from small datasets to multi-petabyte workloads, ensuring data resilience and business continuity for our cybersecurity platform.

What You'll Do
  • Design, develop, and maintain enterprise‑grade disaster recovery and data replication solutions using Python and/or object‑oriented Java
  • Build data applications on top of Apache Spark and Apache Flink that scale seamlessly to massive data volumes (multi‑PB scale)
  • Lead the technical roadmap for disaster recovery infrastructure, driving architectural decisions and best practices
  • Mentor and guide engineering teams on distributed systems design, data platform technologies, and coding excellence
  • Architect and implement robust ETL pipelines for disaster recovery scenarios with focus on data integrity and performance
  • Design fault‑tolerant, highly available data replication systems across multiple regions and availability zones
  • Optimize data processing workflows to handle exponential data growth while maintaining sub‑second recovery objectives
  • Implement monitoring, alerting, and automated recovery mechanisms for data platform resilience
  • Deploy and manage applications on Kubernetes (K8s) with deep understanding of container orchestration, scaling, and resource management
  • Implement infrastructure‑as‑code practices for disaster recovery environments
  • Collaborate with SRE teams to ensure 99.99%+ availability of critical data services
  • Drive continuous improvement in deployment automation, testing, and operational excellence
What You'll Need
  • 8+ years of software development experience with strong coding skills in Python and/or object‑oriented Java
  • 5+ years hands‑on experience with Apache Spark / Apache Flink in production environments
  • Deep expertise in data platform architecture, ETL design patterns, and large‑scale data processing
  • Strong Kubernetes (K8s) experience including deployment, scaling, monitoring, and troubleshooting
  • Proven track record of scaling data applications from small datasets to multi‑petabyte scale
  • Experience designing and implementing disaster recovery and data replication solutions
  • Strong understanding of distributed systems, data consistency models, and fault tolerance patterns
  • Advanced knowledge of Spark internals, optimization techniques, and performance tuning
  • Experience with Flink streaming applications and stateful processing
  • Proficiency with K8s ecosystem (Helm,…
Position Requirements
10+ Years work experience