AWS Data Engineer – Mountain View, California (2-3 days onsite) – Contract

Hi,

 

Greetings from E-IT,

 

Kindly share your updated resume if interested.

 

Role: AWS Data Engineer

Location: Mountain View, California (2-3 days onsite)

Hire type: Contract

 

No Green Card profiles

 

Must have: AWS Lambda, S3, IAM, Spark, BigQuery, Python or Java, SQL, Databricks or Privacera

Secondary skills: Terraform, GCP, Hive, Glue, and Unity Catalog

 

Qualifications

• 8+ years’ experience designing and developing web, software, or mobile applications.

• 3+ years’ experience building and operating cloud infrastructure solutions.

• BS/MS in computer science or equivalent work experience.

• Expertise in at least one object-oriented language: Java/J2EE, C#, VB.NET, Python, or C++; Java and Python preferred.

• Expertise with AWS (IAM, VPC), Spark, and Terraform is preferred. Expertise with Databricks is a strong bonus.

• Expertise with the entire Software Development Life Cycle (SDLC), including system design, code review, unit/integration/performance testing, and build and deploy automation.

• Operational excellence: minimizing costs and maximizing uptime.

• Excellent communication skills: demonstrated ability to explain complex technical topics in an engaging way to both technical and non-technical audiences, both in writing and verbally.

 

Scope of Work

• The team will be working in one of the following areas:

  o Multi-cloud data exploration

    ▪ Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Starburst, Privacera, and Databricks) and Intuit services (LDAP, data decryption)

    ▪ Testing user flows for data analysis, processing, and visualization with Python Spark notebooks and SQL running on distributed compute to join data between AWS S3 and GCP BigQuery

    ▪ Developing data pipelines in Python Spark or SQL to push structured enterprise tool telemetry to our data lake

  o Fine-grained access control for data exploration

    ▪ Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Databricks and Privacera)

    ▪ Evaluating Databricks capabilities to sync Hive, Glue, and Unity Catalogs

    ▪ Evaluating Privacera capabilities or building new capabilities (AWS Lambda with Python) to sync Intuit access policies with Unity Catalog

    ▪ Testing user flows for data analysis, processing, and visualization with Python Spark notebooks on distributed compute or Databricks’ serverless SQL runtime

 

Responsibilities

• Develop and implement operational capabilities, tools, and processes that enable highly available, scalable, and reliable customer experiences

• Resolve defects/bugs during QA testing, pre-production, production, and post-release patches

• Work cross-functionally with various Intuit teams, including product management, analysts, data scientists, and data infrastructure

• Work with external enterprise support engineers from Databricks, Starburst, and Privacera to resolve integration questions and issues

• Follow Agile development, Scrum, or Extreme Programming methodologies

 

Thanks and Regards,

 

Siddharth
EIT Professionals Corp

 

17199 N Laurel Park Dr. Ste 402, Livonia, MI 48152
Direct: (734) 619-8993
[email protected] 
www.eitprofessionals.com

 

 
