Senior Data Architect

Job in Loveland, Hamilton County, Ohio, 45140, USA
Listing for: Beehive Industries
Full Time position
Listed on 2025-12-01
Job specializations:
  • IT/Tech: Data Engineer
Salary Range: 80,000 – 100,000 USD per year
Job Description

Beehive Industries is dedicated to Powering American Defense by revolutionizing the design, development, and delivery of jet propulsion systems to support the warfighter. Through the integration of additive manufacturing, the company aims to meet the growing and urgent needs for unmanned aerial defense by dramatically improving a jet engine’s speed to market, fuel efficiency, and cost.

Founded in 2020, the company is headquartered in Englewood, Colorado, with additional facilities in Knoxville, Tennessee; Loveland, Ohio; and Mount Vernon, Ohio. Beehive is committed to growing and advancing the defense industrial base while manufacturing exclusively in the USA. This role can be hybrid, but you would need to live close to one of our facilities.

Role Overview

We are seeking a skilled Senior Data Architect to design, develop, and maintain data pipelines and integration solutions for enterprise systems, including our ERP (NetSuite), PLM (Teamcenter), a custom Manufacturing Execution System (MES), and integration platforms. The ideal candidate will have deep expertise in data engineering and system integration, plus hands‑on development experience with Palantir’s Foundry platform. The Senior Data Architect will support our software developers, database architects, analysts, machine learning engineers, and data scientists on initiatives to ensure consistent architecture and delivery.

This role requires a strategic thinker who can bridge the gap between business requirements and technical implementation, providing both leadership and execution for our Foundry buildout.

What You’ll Do
  • Design and execute ETL/ELT pipelines:
    Lead the development of robust, scalable data pipelines using Foundry’s native tools, including Pipeline Builder and Code Repositories. You will manage the entire data lifecycle, from initial ingestion to transformed, clean, and trusted datasets (a minimal transform sketch follows this list).
  • Establish data connections:
    Configure and manage secure and reliable data connections to various internal and external sources. This includes databases, APIs, file systems, and other core applications using a variety of protocols.
  • Architect Foundry solutions:
    Lead the design and architecture of end‑to‑end data solutions, including ontology modeling and operational workflows, ensuring alignment with our business goals.
  • Define the Ontology:
    Co-design, develop, and maintain the Foundry Ontology, defining the core object types, link types, and action types that represent our business entities and their relationships.
  • Drive data strategy:
    Partner with senior leadership and business stakeholders to define conceptual, logical, and physical data models.
  • Establish governance:
    Develop and enforce foundational data governance standards, quality metrics, and security controls within the Foundry environment.
  • Mentor and guide:
    Act as the primary subject matter expert, providing guidance and mentorship to other team members on data engineering and Foundry best practices as the team grows.
  • Optimize performance:
    Ensure the scalability, reliability, and efficiency of all data pipelines and architectural components.
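
As a rough illustration of the ingestion-to-trusted-dataset pattern described above, the sketch below shows what a Python transform in Foundry Code Repositories might look like. The repository layout, dataset paths, column names, and cleaning rules are hypothetical stand-ins, not Beehive’s actual pipelines.

```python
# Minimal sketch of a PySpark transform in Foundry Code Repositories.
# Dataset paths and column names are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Beehive/pipelines/clean/mes_work_orders"),   # cleaned, trusted output dataset
    raw=Input("/Beehive/pipelines/raw/mes_work_orders"),  # raw ingested dataset
)
def clean_work_orders(raw):
    # Drop rows without a work-order id, normalize the completion timestamp,
    # and keep only the columns downstream consumers need.
    return (
        raw.filter(F.col("work_order_id").isNotNull())
           .withColumn("completed_at", F.to_timestamp("completed_at"))
           .select("work_order_id", "part_number", "status", "completed_at")
    )
```

In practice a transform like this would be one stage in a chain of raw, clean, and derived datasets, with schedules and health checks layered on top.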
You Have
  • Experience:
    Minimum 10 years of professional experience in data architecture, data engineering, or a related field, with significant hands-on experience with Palantir Foundry.
  • Education:
    Bachelor’s or master’s degree in Computer Science, Data Engineering, or a related field.
  • Palantir Foundry Core:
    Deep, demonstrable expertise with Palantir Foundry development and architecture, including:
    • Data Connection:
      Hands‑on experience configuring and managing connections to diverse data sources.
    • Pipeline Builder:
      Proven ability to build and deploy complex, scalable ETL/ELT pipelines.
    • Code Repositories:
      Proficiency in developing data transformations using Python/PySpark and Scala.
    • Ontology Management:
      Expert‑level knowledge of defining and maintaining the Foundry Ontology, including object types, link types, and action types.
  • ETL/ELT and Data Warehousing:
    Extensive experience in data modeling (relational and graph), data warehousing, and the design of performant ETL/ELT processes.
  • Database Connectivity:
    Hands-on experience connecting to, querying, and manipulating data from various sources… (a connectivity sketch follows this list).
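
To make the connectivity requirement above concrete, here is a minimal PySpark sketch that reads from an external relational source over JDBC and applies a simple filter. The hostname, credentials, table, and column names are placeholders rather than actual Beehive endpoints, and the appropriate JDBC driver is assumed to be on the Spark classpath.

```python
# Sketch of querying an external relational source from PySpark via JDBC.
# URL, credentials, and table/column names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("erp-extract").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://erp-db.example.internal:5432/erp")
    .option("dbtable", "public.purchase_orders")
    .option("user", "readonly_user")
    .option("password", "********")
    .load()
)

# Light manipulation before handing the data to a downstream pipeline stage.
open_orders = orders.where("status = 'OPEN'").select("po_number", "vendor", "amount")
open_orders.show(5)
```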
Position Requirements
10+ years of work experience