
Sr. AI/ML Data Engineer

Job in Plano, Collin County, Texas, 75086, USA
Listing for: Public Storage
Full Time position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Data Engineer, AI Engineer
Job Description & How to Apply Below

Since opening our first self‑storage facility in 1972, Public Storage has grown to become the largest owner and operator of self‑storage facilities in the world. With thousands of locations across the U.S. and Europe, and more than 170 million net rentable square feet of real estate, we’re also one of the largest landlords. We have been recognized as a Great Place to Work by the Great Place to Work Institute, and employees have voted us as having Best Career Growth, ranked in the Top 5% for Work Culture, and Top 10% for Diversity and Inclusion.

Public Storage is a member of the S&P 500 and FT Global 500, and our common and preferred stocks trade on the New York Stock Exchange.

Sponsorship for Work Authorization is not available for this posting. Candidates must be authorized to work in the U.S. without requiring sponsorship now or in the future.

Job Description Overview

Public Storage’s Data and AI organization operates like a high‑velocity startup inside the enterprise—modern cloud stack, rapid iteration, small expert teams, and direct impact on revenue‑critical decisions. Our platform is built on Google Cloud (BigQuery, Vertex AI, Pub/Sub, Dataflow, Cloud Run, GKE/Terraform), dbt Cloud, Airflow/Cloud Composer, and modern CI/CD practices. We build solutions that drive significant business impact across both digital and physical operations.

Engineers work end‑to‑end: designing systems, shipping production workloads, influencing architecture, and shaping how AI is applied at national scale.

We build for both the short and the long term: a dynamic, high‑velocity engineering team that moves quickly from idea to production. This role is for someone who wants to own key parts of the data and ML platform, make an immediate impact, and thrive where requirements evolve, decisions matter, and results are visible.

Data Engineering & Pipeline Development (Primary) (60%)
  • Architect, build, and maintain batch and streaming pipelines using BigQuery, dbt, Airflow/Cloud Composer, and Pub/Sub
  • Define and implement layered data models, semantic layers, and modular pipelines that scale as use‑cases evolve
  • Establish and enforce data‑quality, observability, lineage, and schema governance practices
  • Drive efficient BigQuery design (clustering, partitioning, cost‑awareness), primarily for structured tabular data and for unstructured data when the use case requires it
  • Leverage ML/DS capabilities in BigQuery ML (BQML) for anomaly detection and disposition
  • You will be accountable for delivering reliable, performant pipelines that enable downstream ML and analytics
ML/AI Platform Engineering (20%)
  • Transform prototype notebooks / models into production‑grade, versioned, testable Python packages
  • Deploy and manage training and inference workflows on GCP (Cloud Run, GKE, Vertex AI) with CI/CD, version tracking, rollback capabilities
  • Evaluate new products from GCP or vendors; build internal toolkits, shared libraries and pipeline templates that accelerate delivery across teams
  • You will enable the ML team to ship faster with fewer failure modes
Applied AI & Real‑Time Decisioning (20%)
  • Support real‑time, event‑driven inference and streaming feature delivery for mission‑critical decisions, such as real‑time recommendation systems, dynamic A/B testing, and agentic AI interfaces
  • Contribute to internal LLM‑based assistants, retrieval‑augmented decision models, and automation agents as the platform evolves
  • Implement model monitoring, drift detection, alerting, and performance tracking frameworks
Cross‑Functional Collaboration
  • Partner with data scientists and engineers to operationalize models, semantic layers and pipelines into maintainable production systems
  • Work with pricing, digital product, analytics, and business teams to stage rollouts, support experiments and define metric‑driven success
  • Participate in architecture reviews, mentor engineers, and drive technical trade‑offs with clarity
Qualifications
  • MS in CS and 4+ years of experience, or BS in CS and 6+ years of experience
  • 3+ years of hands‑on experience building data pipelines in a code‑first environment (Python, SQL, dbt)
  • At least 1 year of experience with real‑time or event‑driven systems (Pub/Sub, Dataflow, or similar batch/streaming frameworks)
  • At least 2 years owning…
Position Requirements
5+ Years work experience