Data Engineer - Machine Learning; Marketing Analytics
Listed on 2025-12-27
IT/Tech
Data Engineer, Data Analyst
At PODS (Portable On Demand Storage), we’re not just a leader in the moving and storage industry – we redefined it. Since 1998, we’ve empowered customers across the U.S. and Canada with flexible, portable solutions that put them in control of their move. Whether it’s a local transition or a cross‑country journey, our personalized service makes any experience smoother, smarter, and more human.
We’re driven by a culture of trust, authenticity, and continuous improvement. Our team is the heartbeat of our success, and together we strive to make each day better than the last. If you’re looking for a place where your work matters, your ideas are valued, and your growth is supported, PODS is your next destination.
The Data Engineer – Machine Learning is responsible for scaling a modern data & AI stack to drive revenue growth, improve customer satisfaction, and optimize resource utilization. As an ML Data Engineer, you will bridge data engineering and ML engineering: build high‑quality feature pipelines in Snowflake/Snowpark and Databricks, produce and operate batch and real‑time inference, and establish MLOps/LLMOps practices so models deliver measurable business impact at scale.
Note: This role is based onsite at PODS headquarters in Clearwater, FL. The working schedule is Monday through Thursday onsite, with Fridays remote. This is NOT a remote opportunity.
General Benefits & Other Compensation
- Medical, dental, and vision insurance
- Employer‑paid life insurance and disability coverage
- 401(k) retirement plan with employer match
- Paid time off (vacation, sick leave, personal days)
- Paid holidays
- Parental leave / family leave
- Bonus eligibility / incentive pay
- Professional development / training reimbursement
- Employee assistance program (EAP)
- Commuter benefits / transit subsidies (if available)
- Other fringe benefits (e.g., wellness credits)
What You Will Do
- Design, build, and operate feature pipelines that transform curated datasets into reusable, governed feature tables in Snowflake (see the first sketch after this list).
- Productionize ML models (batch and real‑time) with reliable inference jobs/APIs, SLAs, and observability.
- Set up processes in Databricks and Snowflake/Snowpark to schedule, monitor, and auto‑heal training/inference pipelines.
- Collaborate with our Enterprise Data & Analytics (ED&A) team to replicate operational data into Snowflake, enrich it into governed, reusable models/feature tables, and enable advanced analytics & ML – with Databricks as a core collaboration environment.
- Partner with Data Science to optimize models that grow customer base and revenue, improve CX, and optimize resources.
- Implement MLOps/LLMOps: experiment tracking, reproducible training, model/asset registry, safe rollout, and automated retraining triggers (see the second sketch after this list).
- Enforce data governance & security policies and contribute metadata, lineage, and definitions to the ED&A catalog.
- Optimize cost/performance across Snowflake/Snowpark and Databricks.
- Follow robust version control and DevOps practices.
- Create clear runbooks and documentation, and share best practices with analytics, data engineering, and product partners.
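To give a concrete, purely illustrative picture of the feature-pipeline work above, here is a minimal Snowpark Python sketch. All database, schema, table, and column names (CURATED.ORDERS, FEATURES.CUSTOMER_ORDER_FEATURES, and so on) are hypothetical, and the connection parameters are assumed to come from a secrets manager rather than being hard-coded.

```python
# Minimal Snowpark feature-pipeline sketch (hypothetical names throughout).
# Reads a curated table, derives per-customer features, and writes a governed
# feature table back to Snowflake.
from snowflake.snowpark import Session
import snowflake.snowpark.functions as F

# Connection parameters are assumed to be injected from a secrets manager / env vars.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ML_WH",
    "database": "ANALYTICS",
    "schema": "FEATURES",
}).create()

orders = session.table("CURATED.ORDERS")  # curated source table (hypothetical)

# Aggregate order history into reusable customer-level features.
customer_features = (
    orders.group_by("CUSTOMER_ID")
    .agg(
        F.count("ORDER_ID").alias("ORDER_COUNT"),
        F.avg("ORDER_TOTAL").alias("AVG_ORDER_TOTAL"),
        F.max("ORDER_DATE").alias("LAST_ORDER_DATE"),
    )
    .with_column("FEATURE_TS", F.current_timestamp())
)

# Persist as a governed feature table that downstream training/inference jobs reuse.
customer_features.write.mode("overwrite").save_as_table("FEATURES.CUSTOMER_ORDER_FEATURES")
```

In practice a job like this would be versioned, scheduled, and monitored rather than run ad hoc.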
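The posting does not name a specific MLOps toolchain. Because Databricks is part of the stack, MLflow (which ships with Databricks) is one plausible choice for the experiment tracking, registry, and batch inference items above. The sketch below assumes MLflow plus scikit-learn; the experiment name, model name, and synthetic data are stand-ins, and a tracking server with a model registry backend (for example, a Databricks workspace) is assumed.

```python
# Hedged MLOps sketch: track a training run, register the model, then load the
# registered version for batch inference. Assumes an MLflow tracking server
# with a model registry backend (e.g., a Databricks workspace).
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in training data; in practice this would come from the Snowflake feature table.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

mlflow.set_experiment("customer-churn-demo")  # hypothetical experiment name

with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X_train, y_train))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the run's model so deployment jobs can pin a versioned artifact.
result = mlflow.register_model(f"runs:/{run.info.run_id}/model", name="customer_churn_model")

# Batch inference: load the registered version and score a new batch of rows.
scorer = mlflow.sklearn.load_model(f"models:/customer_churn_model/{result.version}")
predictions = scorer.predict(rng.normal(size=(10, 3)))
```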
- Deliver quality results: Provide top‑quality service to all customers, adhere to company policies, ensure all details are covered, meet commitments, and uphold high standards for quality.
- Take initiative: Be a self‑starter who proactively initiates action, volunteers for new assignments, and completes work independently.
- Be innovative / creative: Examine the status quo, recommend changes, develop solutions, and identify opportunities.
- Be professional: Project a positive image with internal and external contacts and earn respect and trust.
- Advanced computer user: Use required software applications (reports, presentations, spreadsheets, databases) and general office equipment.
What You Will Need
- Bachelor’s or Master’s in CS, Data/ML, or related field (or equivalent experience) required.
- 4+ years in data/ML engineering building production‑grade pipelines with Python and SQL.
- Strong hands‑on experience with Snowflake/Snowpark and Databricks; comfort with Tasks & Streams for orchestration (see the sketch after this list).
- 2+ years…
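As a rough illustration of the Tasks & Streams orchestration mentioned above, the sketch below creates a stream on a curated table and a task that refreshes a feature table only when the stream has new rows. Object names, the warehouse, and the schedule are hypothetical, and the DDL is issued through Snowpark's session.sql (reusing a session like the one in the first sketch) so the example stays in Python.

```python
# Hypothetical Tasks & Streams orchestration sketch, issued via Snowpark.
# A stream tracks changes on the curated table; a task refreshes the feature
# table on a schedule, but only when the stream actually has new data.
session.sql("""
    CREATE OR REPLACE STREAM CURATED.ORDERS_STREAM ON TABLE CURATED.ORDERS
""").collect()

session.sql("""
    CREATE OR REPLACE TASK FEATURES.REFRESH_CUSTOMER_ORDER_FEATURES
      WAREHOUSE = ML_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('CURATED.ORDERS_STREAM')
    AS
      INSERT INTO FEATURES.CUSTOMER_ORDER_FEATURES
      SELECT CUSTOMER_ID,
             COUNT(ORDER_ID)  AS ORDER_COUNT,
             AVG(ORDER_TOTAL) AS AVG_ORDER_TOTAL,
             MAX(ORDER_DATE)  AS LAST_ORDER_DATE,
             CURRENT_TIMESTAMP() AS FEATURE_TS
      FROM CURATED.ORDERS_STREAM
      GROUP BY CUSTOMER_ID
""").collect()

# Tasks are created suspended; resume to start the schedule.
session.sql("ALTER TASK FEATURES.REFRESH_CUSTOMER_ORDER_FEATURES RESUME").collect()
```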