Senior Data Engineer
Ref No.: 18-00163
Location: Charlotte, North Carolina
Position Type: Direct Placement
Job duties:
• Manage Hadoop clusters on cloud or on-premises Linux instances, including configuration, capacity planning, expansion, performance tuning, and monitoring
• Work with the data engineering team to support development and deployment of Spark and Hadoop jobs
• Work with end users to troubleshoot and resolve data accessibility incidents
• Contribute to the architecture design of the cluster to support growing demands and requirements
• Contribute to planning and implementation of software and hardware upgrades
• Recommend and implement standards and best practices related to cluster administration
• Research and recommend automated approaches to cluster administration
Requirements:
• 5+ years' experience administering Linux production environments
• Candidates with strong Linux experience but without deep big data experience may still apply if they have a strong passion to learn big data
• 1+ years' experience managing the full stack of any Hadoop distribution, including monitoring
• 1+ years' experience implementing and managing Hadoop-related security in Linux environments (Kerberos, SSL, etc.)
• Knowledge of YARN configuration in a multi-tenant environment, including experience with the YARN capacity scheduler
• Knowledge of disaster recovery related to Hadoop platforms
• Working knowledge of automation tools
• 3+ years' administration experience with Kafka, HBase, Spark, Hive, and Ambari
• Strong written and verbal communication skills
• Excellent analytical and problem-solving skills