Big Data Engineer/Sr. Engineer
Listed on 2026-01-01
IT/Tech
Data Engineer, Cloud Computing
Implify, Inc. is a global IT solutions and services firm. Since its inception, Implify, Inc. has been providing high-quality, cost-effective IT solutions to Fortune 1000 companies, mid-range companies, and emerging companies via its onsite, offshore, and in-house service models.
IMPLIFY is an IT consulting services and software development firm dedicated to business success through long-term relationships with our clients and staff. IMPLIFY has built a dynamic, profitable, service-oriented enterprise, and is positioned to successfully respond to trends and changes in the information technology industry.
Job Title: Big Data Engineer / Sr. Engineer
Location: Jersey City, NJ
Full-Time, Permanent
RESPONSIBILITIES
Our Big Data capability team needs hands-on developers who can produce beautiful, functional code to solve complex analytics problems. If you are an exceptional developer with an aptitude for learning and implementing new technologies, and you love to push the boundaries to solve complex business problems innovatively, then we would like to talk with you.
• You would be responsible for evaluating, developing, maintaining, and testing big data solutions for advanced analytics projects.
• The role would involve big data pre-processing and reporting workflows, including collecting, parsing, managing, analyzing, and visualizing large data sets to turn information into business insights.
• The role would also involve testing various machine learning models on Big Data, and deploying learned models for ongoing scoring and prediction. An appreciation of the mechanics of complex machine learning algorithms would be a strong advantage.
QUALIFICATIONS & EXPERIENCE
• 3+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient, and scalable code to implement those solutions.
Ideally, this would include work on the following technologies:
• Expert-level proficiency in at least one of Java, C++, or Python (preferred). Scala knowledge is a strong advantage.
• Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce, and HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
• Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
• Working knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3, and SWF services and the AWS CLI)
• Experience working in a Linux environment and with command-line tools, including shell/Python scripting for automating common tasks
• Ability to work on a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works
In addition, the ideal candidate would have great problem-solving skills, and the ability & confidence to hack their way out of tight corners.
Must have (hands-on) experience with:
• Java or Python or C++ expertise
• Linux environment and shell scripting
• Distributed computing frameworks (Hadoop or Spark)
• Cloud computing platforms (AWS)
Desirable (would be a plus):
• A statistical or machine learning DSL such as R
• Distributed and low latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra
• Familiarity with API design
EDUCATION
• B.E./B.Tech. in Computer Science or a related technical degree
All your information will be kept confidential according to EEO guidelines.