Hi,
Hope you are doing well.
Please review the job description below and share resumes of matching candidates.
Note:
• Only H1B candidates
• Passport number is required
Role: Data Engineer
Location: Sunnyvale, CA and Hoboken, NJ (Hybrid)
Duration: 12+ Months
Must Have:
• Spark, Scala, Kafka
• 5+ years of recent GCP experience
• 5+ years of hands-on experience with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution
• 4+ years of hands-on experience designing schemas for data lakes or RDBMS platforms
• Experience with programming languages: Python, Java, Scala, etc.
• Experience with scripting languages: Perl, Shell, etc.
Requirements:
• Bachelor’s degree in Computer Science, Computer Engineering, or a software-related discipline, or equivalent experience; a Master’s degree in a related field is a plus.
• 3+ years of recent GCP experience.
• Experience building data pipelines in GCP.
• GCP Dataproc, GCS, and BigQuery experience.
• 5+ years of hands-on experience developing data warehouse solutions and data products.
• 5+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or another workflow orchestration solution.
• 4+ years of hands-on experience modeling and designing schemas for data lakes or RDBMS platforms.
• Experience with programming languages: Python, Java, Scala, etc.
• Experience with scripting languages: Perl, Shell, etc.
• Practical experience working with, processing, and managing large data sets (multi-TB/PB scale).
• Exposure to test-driven development and automated testing frameworks.
• Background in Scrum/Agile development methodologies.
• Capable of delivering on multiple competing priorities with little supervision.
• Excellent verbal and written communication skills.