Machine Learning Engineer (Security Clearance)
Job in Washington, District of Columbia, 20001, USA
Listing for: Zachary Piper Solutions, LLC
Full Time position, listed on 2026-02-19
Job specializations:
- Software Development: Machine Learning/ML Engineer, AI Engineer, Data Engineer, Data Scientist
Job Description & How to Apply Below
Piper Companies is currently looking for a Machine Learning Engineer in Washington, DC to design, build, and operationalize scalable AI/ML solutions across a variety of mission-critical applications. This is a hybrid role, and candidates will undergo a federal background check to receive a security clearance; candidates with a prior or active clearance are preferred.
Responsibilities for the Machine Learning Engineer:
* Collaborate with data scientists and subject matter experts to design, build, and train machine learning systems
* Implement and optimize Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and AI agent architectures for enterprise use cases
* Deploy ML solutions via MLflow, AWS SageMaker, or custom APIs (a minimal MLflow sketch follows this list)
* Document ML artifacts, processes, and performance outcomes clearly and comprehensively
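For illustration only, here is a minimal sketch of the MLflow deployment path mentioned above, assuming a scikit-learn classifier and MLflow's tracking and pyfunc APIs; the dataset, model, and parameter values are placeholders, not details from this role:

```python
# Hypothetical sketch: log a small scikit-learn model to MLflow, then reload it for scoring.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy training data standing in for a real feature pipeline
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

with mlflow.start_run() as run:
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # stores the trained model as a run artifact

# Reload the logged model through the generic pyfunc interface and score it;
# the same artifact could instead back a REST endpoint via `mlflow models serve`.
loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(X[:5]))
```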
Qualifications for the Machine Learning Engineer:
* 5+ years of experience in ML engineering or applied machine learning, along with Python programming
* Hands-on experience with ML frameworks (e.g., scikit-learn, XGBoost, PyTorch, TensorFlow)
* Hands-on experience with training frameworks such as TensorFlow, PyTorch, or Hugging Face
* Proficiency with Databricks, MLflow, and PySpark
* Practical experience building and deploying LLMs, RAG systems, and AI agents (a toy retrieval sketch follows this list)
* Experience with AWS services such as S3, EC2, Lambda, SageMaker, and Step Functions for scalable ML workloads
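For illustration only, a toy sketch of the retrieval step behind the RAG experience listed above, substituting scikit-learn's TfidfVectorizer for a real embedding model and vector database; the documents and query are invented placeholders:

```python
# Hypothetical sketch: keyword-based retrieval standing in for dense retrieval in a RAG pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "MLflow tracks experiments, parameters, and model artifacts.",
    "SageMaker endpoints serve trained models behind a managed API.",
    "Step Functions orchestrate multi-step AWS workloads.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    top_indices = scores.argsort()[::-1][:k]
    return [documents[i] for i in top_indices]

# The retrieved passages would be injected into the LLM prompt as grounding context.
context = "\n".join(retrieve("How are models served on AWS?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How are models served on AWS?"
print(prompt)
```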
Compensation for the Machine Learning Engineer:
* Salary Range: $,000 (depending on experience)
* Comprehensive benefits package: Cigna Medical, Cigna Dental, Vision, 401k w/ ADP, PTO, paid holidays, and sick leave as required by law
This job opens for applications on 2/13/26. Applications for this job will be accepted for at least 30 days from the posting date. #LI-BM2 #LI-HYBRID
To View & Apply for jobs on this site that accept applications from your location or country, tap the button below to make a Search.
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
Search for further Jobs Here:
×