Asset & Wealth Management-Software Engineering-Associate-Dallas
Listed on 2026-02-03
Software Development
Data Engineer, Software Engineer
Opportunity Overview

CORPORATE TITLE: Associate
OFFICE LOCATION(S): Dallas
JOB FUNCTION: Software Engineering
DIVISION: Asset & Wealth Management

Who We Are
Wealth Management
Across Wealth Management, Goldman Sachs helps empower clients and customers around the world to reach their financial goals. Our advisor-led wealth management businesses provide financial planning, investment management, banking, and comprehensive advice to a wide range of clients, including ultra-high net worth and high net worth individuals, as well as family offices, foundations and endowments, and corporations and their employees. Our consumer business provides digital solutions for customers to better spend, borrow, invest, and save.
Across Wealth Management, our growth is driven by a relentless focus on our people, our clients and customers, and leading-edge technology, data, and design.
The Client Communications Platform is a strategic initiative establishing industry-leading standards for transparency, efficiency, and consistent service in client and advisor communications. The platform is modernizing on cloud-native infrastructure (AWS) and advanced data processing technologies (Snowflake and Spark) to enhance data quality and availability. Spanning five technical domains, including data sourcing, content generation, delivery, and workflow analytics, the platform automates processes, improves user experiences, and transitions from legacy systems to modern, scalable solutions that deliver operational efficiencies and superior client service.
We are seeking a Senior Software Developer with 5+ years of experience to design, build, and operate cloud-native, data-intensive systems on AWS. You will lead the development of resilient microservices and high-throughput data pipelines leveraging Spring Boot, Apache Spark, and Snowflake. The ideal candidate combines strong software engineering fundamentals with hands‑on cloud expertise, data engineering skills, and a pragmatic approach to reliability, security, and cost efficiency.
You are fluent with GitLab for source control, code reviews, and CI/CD.

Responsibilities
- Design, develop, and own cloud-native microservices and data pipelines on AWS using Spring Boot, Apache Spark, and Snowflake.
- Build RESTful and event-driven services with Spring (Boot, Data, Cloud), integrating with Snowflake for analytical and operational data use cases (see the Spring Boot sketch after this list).
- Implement batch and streaming data processing using Spark (DataFrames, Spark SQL, Structured Streaming) on EMR or EKS; orchestrate with AWS Glue, Step Functions, or Airflow (see the streaming sketch after this list).
- Model and optimize Snowflake workloads (virtual warehouses, micro-partitioning, clustering, query profiling, caching); implement Snowpipe, Tasks, Streams, RBAC, and data integration (see the Snowflake sketch after this list).
- Apply AWS Well‑Architected best practices across reliability, security, performance, cost optimization, and operational excellence (VPC design, IAM least privilege, KMS, Secrets Manager, CloudWatch).
- Implement observability and SRE practices: metrics, logs, tracing (OpenTelemetry), dashboards (CloudWatch, Grafana), alerting, SLOs, incident response, and postmortems (see the instrumentation sketch after this list).
- Perform performance engineering (API latency, P99 improvements, Spark job tuning, Snowflake warehouse sizing) and cost governance (right‑sizing, auto‑suspend, lifecycle policies).
- Collaborate closely with product, data, and platform teams; author technical designs, review merge requests, and mentor engineers.
- Uphold high standards for testing (unit, integration, contract, performance), code quality, and secure coding; leverage GitLab pipeline gates for quality and security checks.
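
To make the Spring-plus-Snowflake work above concrete, here is a minimal, hypothetical sketch of a Spring Boot REST endpoint reading from Snowflake through a standard JDBC DataSource. The route, table, and columns are illustrative, not taken from this posting, and the sketch assumes spring-boot-starter-web, spring-boot-starter-jdbc, and the Snowflake JDBC driver are on the classpath.

```java
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CommunicationsController {

    private final JdbcTemplate jdbc;

    // Spring Boot auto-configures a JdbcTemplate from a DataSource whose URL
    // points at Snowflake, e.g. jdbc:snowflake://<account>.snowflakecomputing.com/
    public CommunicationsController(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Returns recent communications for one client; parameter binding avoids
    // SQL injection and lets Snowflake reuse the compiled query plan.
    @GetMapping("/clients/{clientId}/communications")
    public List<Map<String, Object>> recent(@PathVariable String clientId) {
        return jdbc.queryForList(
            "SELECT id, channel, sent_at FROM client_communications " +
            "WHERE client_id = ? ORDER BY sent_at DESC LIMIT 50",
            clientId);
    }
}
```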
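For the batch/streaming bullet, this is a minimal Structured Streaming sketch in Java, assuming a Kafka source; the topic, broker address, and S3 checkpoint path are placeholders, and a console sink stands in for a real one.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.window;

public class DeliveryEventStream {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("delivery-event-stream")
            .getOrCreate();

        // Read delivery events from a Kafka topic (names are illustrative).
        Dataset<Row> events = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "delivery-events")
            .load();

        // Count events per channel over 5-minute windows; the watermark bounds
        // streaming state by dropping data more than 10 minutes late.
        Dataset<Row> counts = events
            .selectExpr("CAST(value AS STRING) AS channel", "timestamp")
            .withWatermark("timestamp", "10 minutes")
            .groupBy(window(col("timestamp"), "5 minutes"), col("channel"))
            .count();

        // Checkpointing to S3 makes the query restartable on EMR or EKS.
        StreamingQuery query = counts.writeStream()
            .outputMode("update")
            .format("console")
            .option("checkpointLocation", "s3://my-bucket/checkpoints/delivery")
            .start();
        query.awaitTermination();
    }
}
```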
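For the Snowflake bullet, the Snowpipe/Streams/Tasks pattern can be sketched as plain Snowflake SQL issued over JDBC. Every object name, the stage, the warehouse, and the credentials below are placeholders; in practice credentials would come from Secrets Manager, not string literals.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SnowflakeIngestionSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:snowflake://<account>.snowflakecomputing.com/", "user", "password");
             Statement stmt = conn.createStatement()) {

            // Snowpipe: continuously load newly staged files into a raw table.
            stmt.execute(
                "CREATE PIPE IF NOT EXISTS raw_comms_pipe AUTO_INGEST = TRUE AS " +
                "COPY INTO raw_communications FROM @comms_stage FILE_FORMAT = (TYPE = JSON)");

            // Stream: track new rows on the raw table for incremental processing.
            stmt.execute(
                "CREATE STREAM IF NOT EXISTS raw_comms_stream ON TABLE raw_communications");

            // Task: periodically move the tracked rows into a curated table.
            stmt.execute(
                "CREATE TASK IF NOT EXISTS merge_comms_task " +
                "WAREHOUSE = transform_wh SCHEDULE = '5 MINUTE' AS " +
                "INSERT INTO curated_communications SELECT * FROM raw_comms_stream");
            stmt.execute("ALTER TASK merge_comms_task RESUME");
        }
    }
}
```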
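For the observability bullet, here is a small sketch of OpenTelemetry instrumentation in Java, emitting one span and one counter per delivery. The instrumentation scope, metric name, and the deliver method are invented for illustration; an SDK with exporters (e.g. to CloudWatch or Grafana backends) is assumed to be configured elsewhere.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class DeliveryInstrumentation {
    private static final Tracer tracer =
        GlobalOpenTelemetry.getTracer("communications-platform");
    private static final Meter meter =
        GlobalOpenTelemetry.getMeter("communications-platform");
    private static final LongCounter delivered =
        meter.counterBuilder("communications.delivered")
             .setDescription("Messages successfully delivered")
             .build();

    public void deliver(String clientId) {
        // Each delivery gets its own span; recorded exceptions and the counter
        // feed the dashboards, alerting, and SLOs named in the bullet above.
        Span span = tracer.spanBuilder("deliver-communication").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            // ... actual delivery call would go here ...
            delivered.add(1);
        } catch (RuntimeException e) {
            span.recordException(e);
            throw e;
        } finally {
            span.end();
        }
    }
}
```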
Basic Qualifications

- 3‑5 years of professional software engineering experience building production systems.
- Strong Java with Spring Framework (Spring Boot, Spring Data; Spring Cloud preferred); familiarity with Scala is a plus.
- 3+ years of hands‑on AWS experience with core services: EC2, S3, IAM, VPC, RDS/Aurora, Lambda, ECS/EKS, CloudWatch; understanding of networking (subnets, routing, security groups, NACLs).
- 2+ years working with Apache Spark (DataFrames, Spark SQL, Structured Streaming) including performance tuning (partitioning, join strategies,…
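
As a sketch of the Spark tuning techniques the bullet above mentions (partitioning and join strategies), assuming illustrative S3 paths and column names rather than anything from this posting:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.broadcast;
import static org.apache.spark.sql.functions.col;

public class JoinTuning {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("join-tuning").getOrCreate();

        Dataset<Row> events = spark.read().parquet("s3://my-bucket/events/");   // large fact table
        Dataset<Row> clients = spark.read().parquet("s3://my-bucket/clients/"); // small dimension

        // Broadcasting the small side replaces a shuffle (sort-merge) join with
        // a map-side hash join, so the large table never crosses the network.
        Dataset<Row> joined = events.join(
            broadcast(clients),
            events.col("client_id").equalTo(clients.col("id")));

        // Repartitioning by a well-distributed key before a wide aggregation
        // controls task sizing and mitigates skew.
        joined.repartition(200, col("client_id"))
              .groupBy("client_id").count()
              .show();
    }
}
```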