Lead Software Engineer – Data Development AI
Listed on 2026-01-02
IT/Tech
Data Engineer, AI Engineer
Job Description
Lead Software Engineer – Data Development for AI applications
About the Company
At AT&T, we are connecting the world through the latest tech, top-of-the-line communications and the best in entertainment. Our groundbreaking digital solutions provide intuitive and integrated experiences for millions of customers across online, retail and care channels. Join our mission to deliver compelling communication and entertainment experiences to customers around the world as we continue to evolve as a technology-powered, human-centered organization.
As part of our team, you will transform the way we deliver a seamless customer experience, with digital at the center of all you do. In our world, digital is much larger than just an eCommerce channel: we are transforming all channels to perform digitally as one team and create a better customer experience. As we move into 2026, this digital transformation will revolutionize the digital space, and you can build a career that will propel your future.
About the Team
As a Lead Software Engineer – Data Development for Applied AI, you will join our Digital Engineering and Customer Experience (DECX) Automation & Applied AI team. This team is committed to developing autonomous AI agents that empower customers to purchase our products and services seamlessly, delivering exceptional sales experiences across both text and voice channels.
We are seeking a highly motivated and innovative Data Engineer to join our dynamic team. A key aspect of your work will involve data development, including collecting, curating, and analyzing customer interaction data to build AI models that understand individual preferences and behaviors. You will collaborate closely with cross-functional teams to create next-generation solutions that push the boundaries of natural language understanding, intelligent dialogue management, and tailored conversational experiences driven by robust data insights.
The ideal candidate must demonstrate proficiency in Python, showcasing the ability to write efficient, clean, and maintainable code.
In addition to technical expertise, we value candidates who display a willingness to learn and embrace new technologies, specifically in agentic AI development. An openness to acquiring knowledge and skills in this emerging field is essential for this role.
Roles and Responsibilities
- Design, build, and maintain robust, scalable data pipelines.
- Perform data research to identify data sources within the ecosystem and apply enrichments to formulate meaningful data points.
- Implement, optimize, and maintain scheduled jobs, batch processors, and real-time data ingestion pipelines.
- Implement event-driven architectures that react to system and customer events.
- Optimize and fine-tune database performance to ensure it supports big-data workloads with acceptable response times.
- Design data schemas that can evolve over time and align with strategic goals.
- Design, implement, and optimize microservices to expose data to consuming applications.
- Design caching and data management practices to improve performance.
- Ensure the data architecture supports the business requirements.
- Explore new opportunities for data acquisition and enhance data collection procedures.
- Explore and identify appropriate segmentation strategies to support RAG implementations.
- Demonstrate a commitment to learning and adopting emerging technologies, with a particular focus on agentic AI development.
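To illustrate the RAG segmentation responsibility above, here is a minimal sketch of one common strategy: fixed-size character windows with overlap. The function name and parameters are hypothetical illustrations, not part of AT&T's actual stack; production pipelines often segment on sentence or semantic boundaries instead.

```python
def chunk_text(text: str, max_chars: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows with overlap --
    one simple segmentation strategy for building a RAG retrieval corpus.

    Overlap preserves context that would otherwise be cut at chunk
    boundaries, at the cost of some duplicated storage.
    """
    if overlap >= max_chars:
        raise ValueError("overlap must be smaller than max_chars")
    step = max_chars - overlap
    return [text[i:i + max_chars] for i in range(0, len(text), step)]


if __name__ == "__main__":
    # Hypothetical sample interaction transcript.
    doc = "Customer asked about upgrading to a 5G plan. " * 20
    chunks = chunk_text(doc)
    print(f"{len(chunks)} chunks, longest {max(len(c) for c in chunks)} chars")
```

The right chunk size and overlap depend on the embedding model's context window and on retrieval quality measured against real queries, which is why the listing calls for *exploring and identifying* appropriate strategies rather than prescribing one.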
Qualifications
- Over 10 years of experience as a Data Engineer or Software Engineer, with expertise in software engineering, data engineering, data warehousing, data research, and requirements gathering.
- Demonstrated expertise in programming languages such as Python and PySpark for executing data engineering tasks.
- Exceptional analytical and problem-solving skills, particularly in handling unstructured raw data and synthesizing meaningful patterns.
- Hands-on experience in developing complete ETL pipelines, from source to destination, including data cleansing, transformation and enrichment.
- Proficiency in PySpark for engineering data pipelines using Databricks on AWS or Azure.
- Technical prowess in data modeling, data mining, data architectures,…