FVP, Enterprise Data Management
Listed on 2025-12-31
IT/Tech
Data Engineer, Cloud Computing
Axos Bank
Axos Bank is a digital-first, customer‑focused financial services organization that provides technology‑driven solutions to individuals, small businesses, and companies.
Target Range
Annual base pay: $ – $. Actual starting pay will vary based on geographic location, experience, skills, specialty, and education.
Eligible for an Annual Discretionary Cash Bonus Target: 10% and an Annual Discretionary Restricted Stock Units Bonus Target: 10%. These discretionary target bonuses may be awarded semi‑annually based upon achievement of performance goals and targets.
About This Job
This position will be located onsite at our HQ in San Diego, CA. Remote or Hybrid is not available for this role.
The First Vice President of Data Engineering will be responsible for managing various key data implementations across the organization and providing people and technology leadership. The First Vice President will perform data architecture analysis, design, development and testing to deliver data applications, services, interfaces, ETL processes, reporting and other workflow and management initiatives.
This role requires a highly motivated individual with strong leadership, technical, and data capabilities, along with excellent communication and collaboration skills and the ability to diagnose and troubleshoot a diverse range of problems.
Responsibilities
- Head the development and delivery of enterprise-scale, mission-critical data platforms (e.g., Databricks Lakehouse, GCP services) to meet critical business priorities.
- Establish the roadmap and vision for enterprise-scale data engineering, and execute on this roadmap to deliver shared frameworks for the broader enterprise application development teams.
- Build technical excellence by defining and implementing robust data engineering standards and quality frameworks that govern data pipeline development, testing, and deployment across the data platform.
- Establish, enforce, and champion modern DataOps and CI/CD practices within the team. Lead the automation of deployment, testing, and monitoring to accelerate delivery speed and ensure data quality.
- Provide technical leadership and hands‑on guidance to the Data Engineering team to ensure timely and high‑quality project delivery and enhancements. Mentor engineers in advanced PySpark, SQL, dbt, Fivetran and cloud‑native development to maximize team output.
- Collaborate with multiple teams, understand the overall enterprise Data Architecture, and develop end‑to‑end solutions for data sourcing, data processing/integration, data provisioning and delivery.
- Deliver projects to successful completion as defined by predetermined project success criteria, including those established by the business, ensuring that projects are delivered on time and within budget.
- Focus on designing and developing highly scalable, efficient, and reliable data infrastructure that scales storage and compute to meet business objectives.
- Lead, coach, and develop the team, while setting its vision and educating cross-functional teams on how to leverage data to identify opportunities to increase revenue and performance.
- Drive IT solutions to ensure they meet business needs balanced with a pragmatic and integrated approach to the design of technical solutions.
- Remain up to date on key technology, business, and industry trends.
- Perform other duties and responsibilities as assigned.
Qualifications
- Bachelor’s degree in computer science, MIS, or a related field.
- 10+ years’ experience managing Data Engineering and leading technology teams.
- 7+ years working with SQL Server and other database management systems across cloud and on-prem data ecosystems.
- 7+ years working with cloud-native data ecosystems (GCP, Azure, AWS), with deep experience in cloud data warehouses (e.g., BigQuery, Snowflake, Databricks) and exposure to various database types (SQL and NoSQL) as well as on-prem databases including SQL Server.
- Expertise in Databricks and the Spark ecosystem, including strong hands-on experience designing and optimizing pipelines using PySpark and implementing modern data architecture.
- Advanced-level experience with Python and PySpark (for data processing), advanced SQL, and modern…