Lead Hadoop Admin – Bay Area, CA (Onsite Hybrid 3 days a week)

Interview Process: 3 Rounds of Interview
Job Description:
·       Responsible for implementation and ongoing administration of Hadoop infrastructure.
·       Responsible for cluster maintenance, troubleshooting, and monitoring, and for following proper backup and recovery strategies.
·       Provisioning and managing the lifecycle of multiple clusters such as EMR and EKS. Infrastructure monitoring, logging, and alerting with Prometheus/Grafana/Splunk.
·       Performance tuning of Hadoop clusters and workloads, and capacity planning at the application/queue level. Responsible for memory management and queue allocation, with distribution experience in Hadoop/Cloudera environments.
·       Should be able to scale clusters in production and have experience with 18/5 or 24/5 production environments. Monitor Hadoop cluster connectivity and security; manage and monitor the file system (HDFS).
·       Investigates and analyzes new technical possibilities, tools, and techniques that reduce complexity, create a more efficient and productive delivery process, or create better technical solutions that increase business value. Involved in fixing issues, performing root-cause analysis (RCA), and suggesting solutions for infrastructure/service components.
·       Responsible for meeting Service Level Agreement (SLA) targets, and collaboratively ensuring team targets are met.
·       Ensure all changes to the Production systems are planned and approved in accordance with the Change Management process.
·       Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
·       Maintain central dashboards for all system, data, utilization, and availability metrics.
Required Skills:
·       8–12 years of total experience, with at least 5 years developing, maintaining, optimizing, and resolving issues in Hadoop clusters while supporting business users.
·       Experience with Linux/Unix OS services and administration, including shell and awk scripting.
·       Strong knowledge of at least one programming language (Python, Scala, Java, or R), with debugging skills.
·       Experience with the Hadoop ecosystem (MapReduce, Hive, Pig, Spark, Kafka, HBase, HDFS, HCatalog, ZooKeeper, and Oozie/Airflow).
·       Experience in Hadoop security (Kerberos, Knox, TLS).
·       Hands-on experience with SQL and NoSQL databases (e.g., HBase), including performance optimization.
·       Experience with tool integration, automation, and configuration management using Git and Jira.
·       Excellent oral and written communication and presentation skills; strong analytical and problem-solving skills.

jeevanTechnologies
