Need ex-Walmart H1B candidates who are local to California

Hi,

Greetings of the day.

This is Amar from Amaze Systems. I have an excellent job opportunity for you with one of my Fortune 100 clients. Please reply with your updated resume, or reach me at ama[email protected]

Title: Senior Data Engineer

Location: Sunnyvale, CA (5 days onsite)

Duration: Long term

 

Note: We need an ex-Walmart H1B candidate who is local to California.

 

Top Skills:

3+ years of recent GCP experience

Spark Scala coding exercises will be part of the interviews

5+ years of hands-on experience with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution

4+ years of hands-on experience designing schemas for data lakes or RDBMS platforms

Experience with programming languages: Python, Java, Scala, etc.

Experience with scripting languages: Perl, Shell, etc.

  

Responsibilities:

As a Senior Data Engineer, you will

• Design and develop big data applications using the latest open source technologies.

• Comfortable working in an offshore model with managed outcomes.

• Develop logical and physical data models for big data platforms.

• Automate workflows using Apache Airflow.

• Create data pipelines using Apache Hive, Apache Spark, and Apache Kafka.

• Provide ongoing maintenance and enhancements to existing systems and participate in rotational on-call support.

• Learn our business domain and technology infrastructure quickly and share your knowledge freely and actively with others in the team.

• Mentor junior engineers on the team

• Lead daily standups and design reviews

• Groom and prioritize backlog using JIRA

• Act as the point of contact for your assigned business domain
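To give a feel for the pipeline work described above, here is a minimal sketch of an extract-transform-load flow using plain Python generators. It only mirrors the stage-by-stage shape of a Spark/Kafka pipeline; all record fields and function names are hypothetical, and a real pipeline would run on a cluster, not in-process.

```python
# Illustrative sketch only: a miniature ETL pipeline with Python generators,
# mirroring the stage shape of a Spark/Kafka job. Names are hypothetical.

def extract(records):
    """Yield raw event records (stand-in for a Kafka consumer)."""
    for record in records:
        yield record

def transform(events):
    """Keep valid events and normalize fields (stand-in for a Spark job)."""
    for event in events:
        if event.get("user_id") and event.get("amount", 0) > 0:
            yield {"user_id": event["user_id"], "amount": float(event["amount"])}

def load(events):
    """Aggregate per user (stand-in for writing to a Hive/BigQuery table)."""
    totals = {}
    for event in events:
        totals[event["user_id"]] = totals.get(event["user_id"], 0.0) + event["amount"]
    return totals

raw = [
    {"user_id": "u1", "amount": 10},
    {"user_id": "u2", "amount": 5},
    {"user_id": None, "amount": 3},   # dropped: no user_id
    {"user_id": "u1", "amount": 2.5},
]
result = load(transform(extract(raw)))
print(result)  # {'u1': 12.5, 'u2': 5.0}
```

Because each stage is a generator, records stream through one at a time — the same lazy, composable structure that distributed engines scale out across a cluster.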

Requirements:

GCP Experience

• 3+ years of recent GCP experience

• Experience building data pipelines in GCP

• Experience with GCP Dataproc, GCS, and BigQuery

 

• 5+ years of hands-on experience with developing data warehouse solutions and data products.

• 5+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution.

• 4+ years of hands-on experience modeling and designing schemas for data lakes or RDBMS platforms.

• Experience with programming languages: Python, Java, Scala, etc.

• Experience with scripting languages: Perl, Shell, etc.

• Experience working with, processing, and managing large data sets (multi-TB/PB scale).

• Exposure to test-driven development and automated testing frameworks.

• Background in Scrum/Agile development methodologies.

• Capable of delivering on multiple competing priorities with little supervision.

• Excellent verbal and written communication skills.

• Bachelor's degree in Computer Science or equivalent experience.
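As a concrete illustration of the schema-design requirement, here is a minimal star-schema sketch using Python's built-in sqlite3 module: one fact table keyed to one dimension table, plus a typical join-and-aggregate query. All table and column names are hypothetical, and SQLite stands in for a production RDBMS or warehouse.

```python
# Illustrative sketch only: a tiny star schema (fact + dimension) in SQLite.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        category   TEXT NOT NULL
    );
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER NOT NULL REFERENCES dim_product(product_id),
        quantity   INTEGER NOT NULL,
        amount     REAL NOT NULL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "widget", "hardware"), (2, "gizmo", "hardware")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 1, 2, 20.0), (2, 1, 1, 10.0), (3, 2, 4, 8.0)])

# Typical analytical query: revenue per product via a fact-to-dimension join.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gizmo', 8.0), ('widget', 30.0)]
```

The same fact/dimension split carries over to data-lake and BigQuery designs, where the trade-off is between normalized dimensions and denormalized (nested/repeated) fields.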

 

The most successful candidates will also have experience in the following:

• Gitflow

• Atlassian products – Bitbucket, JIRA, Confluence etc.

• Continuous Integration tools such as Bamboo, Jenkins, or TFS

 

 

Regards,

Amar Verma
Amaze Systems Inc.

USA: 8951 Cypress Waters Blvd, Suite 160, Dallas, TX 75019

E: [email protected]

Note: Amaze Systems is an Equal Opportunity Employer (EOE). It does not discriminate based on age, gender, religion, disability, marital status, or race, and it adheres to laws relating to non-discrimination on the basis of national origin and citizenship status.
