Hadoop/Spark Big Data Architect - RTH
Ref No.: 17-00473
Location: Newark, New Jersey
Our direct client is seeking a Hadoop/Spark Big Data Architect to work in their Newark, NJ location.


The Big Data Architect's primary responsibility will be designing and developing new applications. The candidate will be part of a team that works with Business Analysts, Investment Professionals, Operations, Quality Assurance Testers and other Systems Professionals.


  • Understand & follow the complete agile SDLC methodology. Analyze business requirements then design & develop applications to support them, suggesting the innovative use of newer technologies where appropriate.
  • Build and maintain positive relationships with internal clients & IT co-workers.
  • Possess excellent oral and written communication skills.
  • Have the ability to manage multiple tasks and projects simultaneously.
  • Possess the ability to apply past experience to current situations in order to resolve problems or generate new ideas.
  • Develop & execute test scenarios to ensure the stability & performance of applications.

Required Skills:
  • Minimum of 15 years of experience working with Java, Linux/Unix, shell scripting, and RDBMS.
  • Minimum of 5 years of solid experience with Hadoop & its related stack.
  • Must have working knowledge of architecting MapReduce, Pig, Hive, Spark, Spark Streaming, and Kafka applications.
  • Knowledge of Impala is a must.
  • Cloudera certification is an added advantage.
  • Minimum of 2 years' experience with Spark components (Spark Core and Spark SQL are mandatory).
  • Prior experience with Pentaho Data Integration (PDI) is desirable.
  • Experience with the architecture, design & development of Big Data platforms and Data Marts utilizing Hadoop and/or NoSQL databases such as MongoDB and HBase.
  • Knowledge of Jira, Confluence, Agile development methodology & DevOps is a plus.
  • Desire to learn the Fixed Income business.
  • Bachelor's degree in Computer Science, Engineering, or equivalent work experience.