Opening for Lead Data Engineer

Hi,

Hope you are doing well.

 

My name is Abhishek Kumar, and I am a Senior Recruiter with Saibber. We are looking for a suitable candidate for the position below. I came across your resume and found it a perfect fit for this role. I would appreciate it if you could provide the best time and number to reach you so we can discuss this further.

Position: Lead Data Engineer

Location: Remote (candidate should be comfortable working in the PST time zone)

 

A three-level coding test is required.

 

The client is looking for a very strong Spark & Scala engineer or developer.

 

Lead Data Engineer

Primary Skills

    1. MapReduce, HDFS, Spark (PySpark), ETL Fundamentals, SQL (Basic + Advanced), Spark (Scala), Python, Data Warehousing, Hive, Modern Data Platform Fundamentals, Data Modelling Fundamentals, PL/SQL, T-SQL, Stored Procedures, Oozie

Specialization

    1. Big Data Engineering: Lead Data Engineer


Job Description:

    1. Remote (for a California-based team)

We’re looking for a Senior Data Engineer to join our team. What we’re looking for: You’re a talented, creative, and motivated engineer who loves developing powerful, stable, and intuitive apps – and you’re excited to work with a team of individuals who share that passion. You’ve accumulated years of experience, and you’re excited about taking your mastery of Big Data and Java to a new level. You enjoy challenging projects involving big data sets and stay cool under pressure. You’re no stranger to fast-paced environments and agile development methodologies – in fact, you embrace them. With your strong analytical skills, your unwavering commitment to quality, your excellent technical skills, and your collaborative work ethic, you’ll do great things here.

What you’ll do:

    1. As a Senior Data Engineer, you’ll be responsible for building high-performance, scalable data solutions that meet the needs of millions of agents, brokers, home buyers, and sellers.
    2. You’ll design, develop, and test robust, scalable data platform components.
    3. You’ll work with a variety of teams and individuals, including product engineers, to understand their data pipeline needs and come up with innovative solutions.
    4. You’ll work with a team of talented engineers and collaborate with product managers and designers to help define new data products and features.

Skills, accomplishments, interests you should have:

    1. BS/MS in Computer Science, Engineering, or a related technical discipline, or an equivalent combination of training and experience.
    2. 7+ years of core Scala/Java experience: building business logic layers and high-volume, low-latency, big data pipelines.
    3. 5+ years of experience in large-scale real-time stream processing using Apache Flink or Apache Spark with messaging infrastructure like Kafka/Pulsar.
    4. 7+ years of experience in data pipeline development, ETL, and processing of structured and unstructured data.
    5. 5+ years of experience using NoSQL systems like MongoDB and DynamoDB, relational SQL database systems (PostgreSQL), and Athena.
    6. Experience with technologies like Lambda, API Gateway, AWS Fargate, ECS, CloudWatch, S3, and DataDog.
    7. Experience owning and implementing technical/data solutions or pipelines.
    8. Excellent written and verbal communication skills in English.
    9. Strong work ethic and entrepreneurial spirit.

 

 
