Principal Consultant - AWS + Snowflake
Listed on 2026-01-13
IT/Tech
Data Engineer, Data Analyst
About the job
Role: Principal Consultant - AWS + Snowflake
Location: Beaverton, Oregon, United States
Relocation Assistance Available: No
Interview Travel Reimbursed: No
Compensation: USD $110,000 to $130,000
Experience: 7+ to 10 years
Seniority Level: Mid-Senior
Minimum Education: Bachelor's Degree
Willingness to Travel: Never
Security Clearance Required: No
Visa Candidate Considered: No
On-site role: Yes
Industry: Retail / Wholesale - Other
Job Category: Information Technology - Computer Network Security
Job type: Full-time
- Design and build reusable components, frameworks, and libraries at scale to support analytics products.
- Design and implement product features in collaboration with business and technology stakeholders.
- Identify and solve issues concerning data management to improve data quality.
- Clean, prepare and optimize data for ingestion and consumption.
- Collaborate on the implementation of new data management projects and the restructuring of the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools.
- Build continuous integration, test-driven development, and production deployment frameworks.
- Analyze and profile data for designing scalable solutions.
- Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
- Strong understanding of data structures and algorithms.
- Strong understanding of solution and technical design; strong problem solving and analytical mindset.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Experience building scalable, real-time, high-performance data lake solutions in the cloud.
- Fair understanding of developing complex data solutions; experience working on end-to-end solution design.
- Willing to learn new skills and technologies.
- Has a passion for data solutions.
- Hands-on experience in AWS – EMR (Hive, PySpark), S3, Athena – or an equivalent cloud platform.
- Familiarity with Spark Structured Streaming.
- At minimum, experience working with the Hadoop stack, dealing with huge volumes of data in a scalable fashion.
- Hands-on experience with SQL, ETL, data transformation, and analytics functions; hands-on Python experience including batch scripting, data manipulation, and distributable packages.
- Experience working with batch orchestration tools such as Apache Airflow; Airflow knowledge preferred.
- Experience with code versioning tools such as GitHub or Bitbucket; expert-level understanding of repo design and best practices.
- Familiarity with deployment automation tools such as Jenkins.
- Hands-on experience designing and building ETL pipelines; expertise in data ingestion, change data capture, and data quality; hands-on experience with API development.
- Experience designing and developing relational database objects; knowledge of logical and physical data modeling concepts; some experience with Snowflake.
- Familiarity with Tableau or Cognos use cases.
- Familiarity with Agile; working experience preferred.
The actual offer, reflecting the total compensation package plus benefits, will be determined by a number of factors including but not limited to the applicant’s experience, knowledge, skills, abilities, geographic location, and internal considerations.