Java Developer (Security Clearance)
Location: Ashburn, Loudoun County, Virginia, 20147, USA
Listed on: 2026-02-14
Listing for: SAIC
Full-Time position
Job Specializations:
- Software Development: Data Engineer, Java Developer, Software Engineer
Job Description

SAIC is looking for a Java Developer who will be responsible for converting existing PySpark codebases into optimized Java-based Spark applications. The role includes developing, refactoring, and maintaining scalable data processing solutions on the Databricks platform (or similar Spark execution environments).
Key Responsibilities:
• Convert existing PySpark applications into equivalent, efficient Java Spark implementations
• Design, develop, and maintain scalable Spark-based data pipelines
• Implement data processing logic using Java 8+ with best practices in OOP and functional programming
• Integrate solutions with IRS datasets including IRMF, BMF, and IMF
• Optimize Spark jobs for performance, maintainability, and cost-efficiency
• Collaborate across development, data engineering, and architecture teams
• Troubleshoot and debug Spark workloads in distributed environments
• Ensure compliance with IRS data handling, security, and governance policies
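The core conversion task above can be sketched with plain Java 8 constructs. The snippet below is a dependency-free illustration, not SAIC's actual codebase: a hypothetical PySpark aggregation (shown in comments) is re-expressed as equivalent filter/group/sum logic with the Streams API. The `TaxRecord` type and its field names are invented for the example and do not reflect real IRMF/BMF/IMF schemas; in an actual Spark job, the same lambdas would be expressed through the Dataset/DataFrame API instead.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PysparkToJavaSketch {

    // Hypothetical row type standing in for one record of an IRS-style
    // dataset; field names are illustrative only.
    static class TaxRecord {
        final String formType;
        final double amount;
        TaxRecord(String formType, double amount) {
            this.formType = formType;
            this.amount = amount;
        }
    }

    // PySpark original, conceptually:
    //   df.filter(col("amount") > 0).groupBy("form_type").agg(sum("amount"))
    // The same logic in Java 8 Streams:
    static Map<String, Double> totalByFormType(List<TaxRecord> rows) {
        return rows.stream()
                .filter(r -> r.amount > 0)                         // filter(col("amount") > 0)
                .collect(Collectors.groupingBy(                    // groupBy("form_type")
                        r -> r.formType,
                        Collectors.summingDouble(r -> r.amount))); // agg(sum("amount"))
    }

    public static void main(String[] args) {
        List<TaxRecord> rows = Arrays.asList(
                new TaxRecord("1040", 250.0),
                new TaxRecord("1040", 100.0),
                new TaxRecord("941", -50.0));
        System.out.println(totalByFormType(rows)); // prints {1040=350.0}
    }
}
```

The functional style here (lambdas, method references, collectors) maps closely onto Spark's Java Dataset API, which is why the Java 8+ Streams experience listed below translates directly to this conversion work.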
Required Qualifications:
* Bachelor's degree in Computer Science, Information Systems, or a related field.
* Active MBI Clearance
* 5+ years of professional experience in a data engineering or software development role.
* Advanced expertise in:
• IRS datasets (IRMF, BMF, IMF) and tax system data structures.
• Java 8+ (experience with functional programming, Streams API, Lambdas).
• Apache Spark (Spark Core, Spark SQL, DataFrame APIs, performance tuning).
• Big data ecosystems (HDFS, Hive, Kafka, S3).
• Working with batch and streaming ETL pipelines for data processing.
* Proficient with Git, Maven/Gradle, and DevOps tools.
* Expertise in debugging Spark transformations and ensuring performance.
Preferred Qualifications:
* Hands-on experience converting PySpark workloads into Java Spark.
* Familiarity with ecosystems such as Databricks, Google Dataproc, or similar.
* Knowledge of Delta Lake or Apache Iceberg.
* Proven experience in big data performance modeling and tuning.

Target salary range: $80,001 - $120,000. The estimate displayed represents the typical salary range for this position based on experience and other factors.