
Java Developer with Security Clearance

Job in Ashburn, Loudoun County, Virginia, 20147, USA
Listing for: SAIC
Full Time position
Listed on 2026-02-14
Job specializations:
  • Software Development
    Data Engineer, Java Developer, Software Engineer
Salary/Wage Range: $80,001 - $120,000 USD yearly
Job Description & How to Apply Below
Position: Java Developer with Security Clearance
Description
SAIC is looking for a Java Developer who will be responsible for converting existing PySpark codebases into optimized Java-based Spark applications. The role includes developing, refactoring, and maintaining scalable data processing solutions built on the Databricks platform (or similar Spark execution environments).

Key Responsibilities:

• Convert existing PySpark applications into equivalent, efficient Java Spark implementations
• Design, develop, and maintain scalable Spark-based data pipelines
• Implement data processing logic using Java 8+ with best practices in OOP and functional programming
• Integrate solutions with IRS datasets including IRMF, BMF, and IMF
• Optimize Spark jobs for performance, maintainability, and cost-efficiency
• Collaborate across development, data engineering, and architecture teams
• Troubleshoot and debug Spark workloads in distributed environments
• Ensure compliance with IRS data handling, security, and governance policies

Required Qualifications:

* Bachelor's degree in Computer Science, Information Systems, or a related field.

* Active MBI Clearance

* 5+ years of professional experience in a data engineering or software development role.

* Advanced expertise in:
• IRS datasets (IRMF, BMF, IMF) and tax system data structures.
• Java 8+ (experience with functional programming, Streams API, Lambdas).
• Apache Spark (Spark Core, Spark SQL, DataFrame APIs, performance tuning).
• Big data ecosystems (HDFS, Hive, Kafka, S3).
• Working with batch and streaming ETL pipelines for data processing.
* Proficient with Git, Maven/Gradle, and DevOps tools.
* Expertise in debugging Spark transformations and ensuring performance.
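
For candidates gauging the "functional programming, Streams API, Lambdas" requirement above, a minimal sketch of that style in Java (the `TaxRecord` type, field names, and threshold logic are illustrative only, not taken from the posting):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamsSketch {
    // Hypothetical record type for illustration; requires Java 16+ records,
    // though the same shape can be written as a plain class on Java 8.
    record TaxRecord(String id, double amount) {}

    // Functional-style pipeline: filter records above a threshold,
    // then map each surviving record to its id.
    static List<String> idsAbove(List<TaxRecord> records, double threshold) {
        return records.stream()
                .filter(r -> r.amount() > threshold)   // lambda predicate
                .map(TaxRecord::id)                    // method reference
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<TaxRecord> sample = List.of(
                new TaxRecord("a", 100.0),
                new TaxRecord("b", 250.0));
        System.out.println(idsAbove(sample, 200.0)); // prints [b]
    }
}
```

The same filter/map/collect shape carries over almost directly to Spark's DataFrame and Dataset APIs, which is why interviewers for roles like this often probe Streams fluency first.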

Preferred Qualifications:

* Hands-on experience converting PySpark workloads into Java Spark.
* Familiarity with ecosystems such as Databricks, Google Dataproc, or similar.
* Knowledge of Delta Lake or Apache Iceberg.
* Proven experience in big data performance modeling and tuning.

Target salary range: $80,001 - $120,000. The estimate displayed represents the typical salary range for this position based on experience and other factors.