Required Qualifications and Experience:
Expertise and hands-on experience with the Hadoop ecosystem: MapReduce, HDFS, HBase, and Hive/Pig.
Expertise in Linux/Unix technologies with hands-on Python, Perl, and shell scripting skills. Experience in Java is a plus.
Advanced knowledge of performance troubleshooting and tuning of Hadoop clusters.
Deep understanding of and experience with Hadoop/Big Data concepts and technologies.
Good knowledge of Hadoop cluster connectivity and security.
Sound knowledge of relational databases (SQL). Experience with large SQL-based systems like Teradata is a plus.
Experience troubleshooting MapReduce job failures and issues with Hive, Pig, HBase, etc.
Hands-on experience building large-scale Big Data environments, including capacity planning, performance tuning, and monitoring.
Experience designing, configuring, and managing backup and disaster recovery for Hadoop data.
Familiarity with industry best practices and with driving efficiencies while maintaining a robust service offering.
Development experience in Hive, Pig, and HBase is desired. Hands-on Java development experience is a plus.
Strong IT consulting experience handling large data volumes and architecting Big Data environments.
Excellent knowledge of Hadoop integration points with enterprise BI and EDW tools.
Familiarity with installing and configuring monitoring tools for the Hadoop environment.