Hope you are doing well.
My name is Abhishek Kumar, and I am a Senior Recruiter with Saibber. We are looking for a suitable candidate for the position below. I came across your resume and found it a strong fit for this role. I would appreciate it if you could provide the best time and number to reach you so we can discuss this further.
Position: Lead Data Engineer
Location: Remote (Candidate should be comfortable working in PST timezone)
A three-level coding test is required.
The client is looking for a very strong Spark and Scala engineer or developer.
Lead Data Engineer
Primary Skills
- MapReduce, HDFS, Spark (PySpark), ETL Fundamentals, SQL (Basic + Advanced), Spark (Scala), Python, Data Warehousing, Hive, Modern Data Platform Fundamentals, Data Modelling Fundamentals, PL/SQL, T-SQL, Stored Procedures, Oozie
Specialization
- Big Data Engineering: Lead Data Engineer
Job requirements
Job Description:
- Remote (for a California-based team)
We’re looking for a Senior Data Engineer to join our team.

What we’re looking for:
You’re a talented, creative, and motivated engineer who loves developing powerful, stable, and intuitive apps, and you’re excited to work with a team of individuals who share that same passion. You’ve accumulated years of experience, and you’re excited about taking your mastery of Big Data and Java to a new level. You enjoy challenging projects involving big data sets and are cool under pressure. You’re no stranger to fast-paced environments and agile development methodologies; in fact, you embrace them. With your strong analytical skills, your unwavering commitment to quality, your excellent technical skills, and your collaborative work ethic, you’ll do great things here.
What you’ll do:
- As a Senior Data Engineer, you’ll be responsible for building high performance, scalable data solutions that meet the needs of millions of agents, brokers, home buyers, and sellers.
- You’ll design, develop, and test robust, scalable data platform components.
- You’ll work with a variety of teams and individuals, including product engineers, to understand their data pipeline needs and come up with innovative solutions.
- You’ll work with a team of talented engineers and collaborate with product managers and designers to help define new data products and features.
Skills, accomplishments, and interests you should have:
- BS/MS in Computer Science, Engineering, or related technical discipline or equivalent combination of training and experience.
- 7+ years of core Scala/Java experience building business logic layers and high-volume, low-latency big data pipelines.
- 5+ years of experience in large-scale, real-time stream processing using Apache Flink or Apache Spark with messaging infrastructure such as Kafka or Pulsar.
- 7+ years of experience in data pipeline development, ETL, and the processing of structured and unstructured data.
- 5+ years of experience with NoSQL systems such as MongoDB and DynamoDB, relational database systems such as PostgreSQL, and Athena.
- Experience with technologies such as Lambda, API Gateway, AWS Fargate, ECS, CloudWatch, S3, and DataDog.
- Experience owning and implementing technical/data solutions or pipelines.
- Excellent written and verbal communication skills in English.
- Strong work ethic and entrepreneurial spirit.