AWS Databricks Spark/Scala Engineer
Job in Morris Plains, Morris County, New Jersey, 07950, USA
Listed on 2025-12-27
Listing for: Highbrow LLC
Full Time position
Job specializations:
- Software Development / Data Engineer
Job Description
Job Title: AWS Databricks Spark/Scala Engineer
Job Location: Morris Plains, NJ (remote to start)
Additional locations: Remote; Memphis, TN (preferred); Austin, TX (preferred); St. Louis, MO
# Positions: 1
Employment Type: W2
Candidate Constraints:
Duration: Long Term
# of Layers: 0
Work Eligibility: All work authorizations are permitted
Skills: Databricks, AWS, Spark
Job Responsibilities:
- Drive a mentality of building well-architected applications for the AWS Cloud.
- Drive the mentality that quality is owned by the entire team.
- Identify code defects and work with developers to address quality issues in product code.
- Find bottlenecks and thresholds in existing code through the use of automation tools.
- Articulate clear business objectives aligned to technical specifications, and work in an iterative agile pattern daily.
- Take ownership of your work tasks, interact comfortably with all levels of the team, and raise challenges when necessary.
Required:
- 6-8 years of strong experience in Spark, Python, shell scripting, PostgreSQL, Hadoop, AWS (S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch), and Databricks (Delta Lake, Notebooks, Pipelines, cluster management, Azure/AWS integration).
- In-depth knowledge of Hadoop architecture and its components; troubleshooting Hadoop YARN jobs is a must. Implement Hadoop job orchestration using shell scripting and Airflow.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib (see the first sketch after this list).
- Strong experience writing complex, effective SQL and stored procedures; strong query-optimization skills to improve the performance of ETL processes (see the query-plan sketch below).
- Strong hands-on cloud experience with the AWS suite.
- Demonstrable work experience in Hadoop, Spark, and Hive.
- Strong development experience with Databricks components (see the Delta Lake sketch below).
- Strong hands-on experience with Spark is a must.
- Work experience with Hadoop as a data warehouse / data lake implementation is a must.
- Moderate to senior Terraform development experience is needed.
- Health care domain knowledge is a plus.
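
For orientation, here is a minimal Spark/Scala sketch of the DataFrame API, Spark SQL, and caching named in the requirements. The local SparkSession, the `events` dataset, and its column names are hypothetical stand-ins, not anything from this role's codebase; on a Databricks cluster a session is already provided as `spark`.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SparkEtlSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration only; Databricks provides `spark` directly.
    val spark = SparkSession.builder()
      .appName("spark-etl-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical events dataset standing in for a real source table.
    val events = Seq(
      ("claim-1", "NJ", 120.50),
      ("claim-2", "TN", 87.25),
      ("claim-3", "NJ", 310.00)
    ).toDF("claim_id", "state", "amount")

    // Cache because the DataFrame is reused by two downstream actions.
    events.cache()

    // Aggregation via the DataFrame API.
    events.groupBy($"state")
      .agg(sum($"amount").as("total_amount"))
      .show()

    // The same logic via Spark SQL against a temporary view.
    events.createOrReplaceTempView("events")
    spark.sql("SELECT state, SUM(amount) AS total_amount FROM events GROUP BY state").show()

    spark.stop()
  }
}
```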
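Query tuning in Spark ETL usually starts from the query plan. The sketch below, again over a hypothetical `claims` dataset, uses `explain(true)` to print the parsed, analyzed, optimized, and physical plans, which is the standard way to spot full scans and shuffles before optimizing.

```scala
import org.apache.spark.sql.SparkSession

object QueryPlanSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("plan-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical claims dataset for illustration.
    val claims = Seq(
      ("claim-1", "NJ", 120.50),
      ("claim-2", "TN", 87.25)
    ).toDF("claim_id", "state", "amount")

    // explain(true) prints all plan stages, the usual starting point
    // for finding bottlenecks in an ETL query.
    claims.filter($"state" === "NJ")
      .groupBy($"state")
      .sum("amount")
      .explain(true)

    spark.stop()
  }
}
```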
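And a minimal Delta Lake read/write sketch, assuming the open-source delta-spark library is on the classpath; on a Databricks cluster the two session configs below are already set. The table path and columns are hypothetical, and on AWS the path would typically be an s3:// location.

```scala
import org.apache.spark.sql.SparkSession

object DeltaLakeSketch {
  def main(args: Array[String]): Unit = {
    // These configs enable Delta Lake outside Databricks.
    val spark = SparkSession.builder()
      .appName("delta-sketch")
      .master("local[*]")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
              "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical local path; in practice an s3:// location.
    val tablePath = "/tmp/delta/claims"

    // Write a small DataFrame as a Delta table (overwrite keeps reruns idempotent).
    Seq(("claim-1", "open"), ("claim-2", "closed"))
      .toDF("claim_id", "status")
      .write.format("delta").mode("overwrite").save(tablePath)

    // Read it back; Delta adds ACID transactions and versioned history on top of Parquet.
    spark.read.format("delta").load(tablePath).show()

    spark.stop()
  }
}
```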