Artificial Intelligence Engineer
Listed on 2026-02-13
Software Development
AI Engineer, Data Engineer
Join AI Studio in Boston as a core engineer transforming how AI powers construction management. Partnering with Product Managers, Site AI Engineers, and Data Engineers, you’ll solve pain points, redesign workflows, and deploy AI agents that streamline reporting, accelerate RFIs, and simplify lookahead planning, progress updates, materials tracking, and more. You’ll focus on building secure, scalable, and high-performance AI agents using modern technologies, including AWS Bedrock and Databricks — shaping the backbone of clients' “Construction Site of the Future.”
The AI Engineer (AI Studio) builds the foundation that enables jobsite AI to scale. You’ll focus on technical excellence, platform reliability, and scalable agent frameworks, enabling field teams to transform how construction projects are executed.
Responsibilities
- Translate product requirements and user stories into production-grade AI solutions using AWS Bedrock, Lambda, ECS/EKS, and Databricks.
- Implement RAG pipelines with Delta tables, Unity Catalog, and Vector Search.
- Design and deploy multi-model agents that dynamically select between LLMs (Claude, GPT, Llama, Titan, etc.) based on task context, cost, and latency.
- Implement multi-agent orchestration frameworks enabling collaboration among specialized agents (e.g., data retriever, planner, summarizer, and action executor) for complex construction workflows.
- Own full lifecycle delivery — design, development, testing, deployment, monitoring, and maintenance.
- Build APIs, backend services, and agentic workflows using Python, FastAPI, LangChain, and AWS SDKs.
- Create reusable connectors and orchestration layers for multi-model agents (Claude, GPT, Llama, etc.).
- Develop front-end integrations for Teams and web SPAs via REST or GraphQL endpoints.
- Partner with Data Engineering to design robust ETL/ELT pipelines from enterprise systems to the Databricks Lakehouse.
- Ensure efficient data access, caching, and vectorization for low-latency AI response.
- Build tools to monitor and improve data quality, latency, and observability.
- Use Terraform, AWS CDK, and GitHub Actions to automate infrastructure and deployments.
- Implement LLMOps: cost monitoring, latency optimization, usage analytics, and model versioning.
- Enforce security, governance, and access standards in line with enterprise policies.
- Work closely with product managers, site AI engineers, and data scientists to iterate rapidly in Agile sprints.
- Communicate technical progress clearly to non-technical stakeholders; contribute to internal AI playbooks and templates.
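The multi-model selection described above can be sketched as a simple cost- and latency-aware router. Everything below is illustrative: the model names, per-token costs, latencies, and the 1–3 complexity scale are made-up placeholders, not real Bedrock pricing or benchmarks.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    cost_per_1k_tokens: float  # USD, assumed placeholder
    avg_latency_s: float       # seconds, assumed placeholder
    max_complexity: int        # 1 (simple) .. 3 (complex), assumed scale

# Hypothetical catalog; in practice these would map to Bedrock model IDs.
CATALOG = [
    ModelProfile("small-llm", 0.0003, 0.4, 1),
    ModelProfile("mid-llm", 0.003, 1.2, 2),
    ModelProfile("frontier-llm", 0.015, 3.0, 3),
]

def route(task_complexity: int, latency_budget_s: float) -> ModelProfile:
    """Pick the cheapest model that can handle the task within the latency budget."""
    candidates = [
        m for m in CATALOG
        if m.max_complexity >= task_complexity and m.avg_latency_s <= latency_budget_s
    ]
    if not candidates:
        # Fall back to the most capable model when nothing fits the budget.
        return max(CATALOG, key=lambda m: m.max_complexity)
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)
```

For example, a short summarization task with a tight latency budget would route to the small model, while a complex planning task would escalate to the most capable one.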
Qualifications
- 4-6 years of professional software development experience on AWS, with 2+ years focused on AI/ML engineering (LLMs, RAG, Bedrock, or similar). Strong coding proficiency in Python (LangChain, FastAPI, boto3) and solid experience with SQL, Databricks, and vector databases.
- Experience designing and deploying production systems using AWS Lambda, ECS/EKS, API Gateway, Step Functions, S3, CloudFront, and KMS.
- Strong foundation in CI/CD, IaC (Terraform/CDK), and GitHub Actions.
- Experience training, retraining, and performing transfer learning on ML models is desirable.
- Bachelor’s in Computer Science, Engineering, Physics, or a related field; Master’s preferred.
- Prior hands-on work in construction or heavy process industries (manufacturing, oil & gas, chemicals) is a significant plus.
- Excellent collaboration and communication skills; able to work cross-functionally without relying on business-side facilitation.
- Integration & ETL skills: foundational understanding of ETL/ELT design, Airflow or Databricks Workflows, and REST/GraphQL API development; proven collaboration with Data Engineering on source-to-lake and lake-to-agent pipelines.
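The vector-search retrieval mentioned in the qualifications can be illustrated with a minimal in-memory sketch using cosine similarity. The documents and three-dimensional embeddings below are made-up placeholders; a production system would use a managed index such as Databricks Vector Search rather than hand-built vectors.

```python
import math

# Toy corpus: doc id -> (embedding vector, text). Vectors are hypothetical.
DOCS = {
    "rfi-42": ([1.0, 0.0, 0.0], "RFI: clarify rebar spacing on level 3"),
    "daily-7": ([0.0, 1.0, 0.0], "Daily report: concrete pour completed"),
    "plan-12": ([0.7, 0.7, 0.0], "Lookahead plan: electrical rough-in next week"),
}

def cosine(a, b):
    """Cosine similarity between two vectors; 0.0 if either is zero-length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d][0]), reverse=True)
    return ranked[:k]
```

A query vector close to a document's embedding retrieves that document first; the retrieved text would then be fed to an LLM as RAG context.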