Sr. Hadoop Developer
Ref No.: 18-00166
Location: Charlotte, North Carolina
Position Type: Direct Placement
Job duties:
  • Design and develop applications using Big Data technologies such as Hive, Spark, HBase, and the Hadoop framework.
  • Read, extract, transform, stage, and load data to multiple targets, including Hadoop and other databases.
  • Translate complex functional and technical requirements into detailed designs.
  • Migrate existing data processing from standalone or legacy scripts to Hadoop framework processing.
  • Identify and apply performance tuning in Hive, Spark, HBase, and Kafka.
  • Perform proof-of-concept (POC) deployments and conversions.
  • Maintain security and data privacy.
  • Propose best practices and standards.
Requirements:
  • 5 years of experience designing and developing enterprise-level data, integration, and reporting/analytics solutions, with a proven track record of delivering backend systems that operate within a complex ecosystem.
  • Minimum of 3 years of development experience on the Big Data/Hadoop platform, including Hive, Spark, Sqoop, HBase, Kafka, and related tools.
  • Experience with Hadoop 2.0+ and YARN applications.
  • Experience working with multiple Hadoop distributions (e.g., Hortonworks, MapR).
  • Current knowledge of Unix/Linux/Python scripting, with solid experience in code optimization and high-performance computing.
  • Knowledge of Solr/Elasticsearch and NiFi preferred.
  • Strong written and verbal communication skills.
  • Excellent analytical and problem-solving skills.