
VP of Enterprise Data Platform

Job in Clearwater, Pinellas County, Florida, 34623, USA
Listing for: Amerilife Group, LLC
Full Time position
Listed on 2025-11-27
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range or Industry Benchmark: USD 80,000 - 100,000 per year
**Our Company**
Explore how you can contribute. For over 50 years, AmeriLife has been a leader in the development, marketing, and distribution of annuity, life, and health insurance solutions for those planning for and living in retirement.

Associates take satisfaction in knowing they provide agents, marketers, and carrier partners the support needed to succeed in a rapidly evolving industry.
**Job Summary**
AmeriLife is seeking a strategic and technically adept Vice President of Enterprise Data Platform to lead the design, delivery, and operation of its enterprise data ecosystem. This role owns the platform architecture, data modeling standards, data quality and observability frameworks, and delivery practices that support SOX-grade controls and M&A scalability. The VP will lead a high-performing team of data engineers and architects, fostering a culture of innovation, accountability, and continuous improvement.
**Job Description**

**Key Responsibilities**
* **Architect the Enterprise Data Platform**:
Define and maintain a reference architecture spanning ingestion, storage, compute, modeling, quality, observability, orchestration, and serving layers.
* **Build Scalable Pipelines**:
Design and govern resilient pipelines from business applications into the enterprise data platform and downstream analytics services, ensuring schema drift tolerance and backward compatibility (see the ingestion sketch after this list). Leverage Spark and PySpark for distributed processing, ETL optimization, and scalable ML workflows.
* **Establish Enterprise Data Standards**:
Publish and maintain a governed enterprise data model and glossary, including SCD2 dimensions, point-in-time facts, conformed dimensions, lineage, SLAs, and usage policies (see the SCD2 sketch after this list).
* **Implement SOX-Grade Controls**:
Deliver immutable logging, segregation of duties, maker-checker workflows, and reconciliation processes to ensure compliance and audit readiness. Expand compliance to include discovery and classification of PII and other sensitive data, encryption/masking, access controls, third-party risk, and audit-ready logging.
* **Create a Third-Party Data Hub**:
Standardize intake patterns (SFTP, APIs, managed portal extracts) and enforce versioned data contracts per source for consistent third-party data onboarding.
* **Partner Across Integration & Analytics**:
Collaborate with Application and Data Integration teams on API scalability, idempotent event processing, and batch patterns for large carrier files.
* **Enable Secure Access & Hierarchies**:
Deliver a Hierarchy Service and enforce role-based and attribute-based access controls across systems and data domains.
* **Power Advanced Analytics & AI**:
Operationalize workflows and model-serving capabilities to enable anomaly detection, enrichment, and mapping to accelerate AI adoption. Partner directly with Applied AI Engineering to design and operationalize the enterprise feature store for ML feature reuse and governance.
* **Partner on Data Governance**:
Work closely with the Head of Data Governance to implement data quality frameworks and ensure metadata completeness across domains.
* **Mentoring and Upskilling**:
Build a learning culture by coaching engineers on Spark and PySpark, cloud-native data engineering, observability, security, and cost-aware design. Provide technical reviews, pairing, and certification pathways to elevate team capabilities.
* **Migrate from On-Prem**:
Execute a phased migration from on-prem ETL to cloud-native pipelines, retiring technical debt while maintaining business continuity and SLAs. Sequence workloads by criticality, implement dual-run cutovers, and decommission legacy systems with clean lineage and documentation.
* **Cost Optimization and Performance Management**:
Implement FinOps practices for cost baselining, right-sizing, autoscaling, and job-level cost allocation. Govern workloads with cluster policies, quotas, and prioritization. Optimize Spark and PySpark jobs for performance and cost efficiency (see the tuning sketch after this list).
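As a concrete illustration of the pipeline work described above, here is a minimal PySpark sketch of schema-drift-tolerant ingestion. The paths, table, and column names are hypothetical, and the approach (merging additive schema changes, then conforming to a governed column list) is one common pattern, not the company's actual implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("drift-tolerant-ingest").getOrCreate()

# Read incoming carrier files, merging additive schema changes across
# files instead of failing on a column mismatch.
incoming = (
    spark.read
    .option("mergeSchema", "true")            # tolerate additive drift
    .parquet("s3://landing/carrier_feed/")    # hypothetical landing path
)

# Conform to the governed target schema for backward compatibility:
# keep known columns and null-fill any the source dropped, so
# downstream consumers never see a breaking change.
target_cols = ["policy_id", "carrier_code", "premium_amt", "effective_dt"]
conformed = incoming.select(
    *[F.col(c) if c in incoming.columns
      else F.lit(None).cast("string").alias(c)   # cast to the governed type in practice
      for c in target_cols]
)

# Columns the source added but the model does not yet govern are
# surfaced for review rather than silently loaded.
unexpected = sorted(set(incoming.columns) - set(target_cols))
if unexpected:
    print(f"Schema drift detected, new columns pending review: {unexpected}")

conformed.write.mode("append").saveAsTable("bronze.carrier_feed")  # hypothetical table
```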
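The SCD2 dimensions mentioned under data standards are typically maintained with a close-and-insert pattern. A minimal sketch using Delta Lake's MERGE follows, assuming a Delta-based platform; the table and column names (dim_agent, attr_hash, change_ts) are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "gold.dim_agent")    # hypothetical SCD2 dimension
updates = spark.table("silver.agent_changes")        # hypothetical change feed

# Step 1: close out current rows whose tracked attributes changed,
# stamping valid_to with the change timestamp.
(dim.alias("t")
 .merge(updates.alias("s"),
        "t.agent_id = s.agent_id AND t.is_current = true")
 .whenMatchedUpdate(
     condition="t.attr_hash <> s.attr_hash",         # change detected via attribute hash
     set={"is_current": "false", "valid_to": "s.change_ts"})
 .execute())

# Step 2: append the new versions as open (current) rows. A production
# job would first anti-join out records whose attributes did not change.
new_rows = (
    updates
    .withColumn("valid_from", F.col("change_ts"))
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
    .withColumn("is_current", F.lit(True))
)
new_rows.write.format("delta").mode("append").saveAsTable("gold.dim_agent")
```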
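For the FinOps and performance work in the last item, job-level Spark settings are a common starting point. A sketch follows; the job name and values are placeholders that would be sized per workload, not prescriptions from this posting.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("nightly-policy-etl")                       # hypothetical job name
    # Adaptive query execution coalesces small shuffle partitions and
    # rebalances skew at runtime, cutting wasted compute.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Right-size shuffle parallelism to the job's data volume.
    .config("spark.sql.shuffle.partitions", "200")
    # Autoscale executors so idle capacity is released between stages.
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "20")
    .getOrCreate()
)
```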
**Required Qualifications**
* 10+ years leading data engineering and architecture for complex, multi-system enterprises.
* Hands-on expertise with Spark and PySpark for distributed compute, ETL optimization, and…