Urgent Requirement: Fremont, CA

Let Us Help You!
Helping You Realize Your Business Goals!

Hi,

 

I hope you’re doing well!

 

Please review the requirements below, let us know if you are interested, and send your updated resume to [email protected]

Role: Sr. Data Engineer

Location: Onsite in Fremont, CA

Job Description:

  1. Design and Implement Scalable Data Pipelines:
  • Build robust data pipelines to efficiently extract, transform, and load (ETL) data from diverse sources into Meta’s data warehouse (a minimal ETL sketch in Python follows this list).
  • Ensure that data processing pipelines are scalable and maintainable enough to handle large volumes of data.
  2. Develop Data Processing and Analytics Applications:
  • Use programming languages such as Python, Java, and SQL to create and maintain applications that process and analyze large datasets.
  • Implement data models and ensure data is ready for analysis by data scientists and analysts.
  3. Collaborate with Cross-Functional Teams:
  • Work closely with data scientists, analysts, and business stakeholders to understand their data requirements and deliver tailored solutions.
  • Design and implement data-driven solutions that meet business needs and align with the company’s goals.
  4. Optimize Data Access and Retrieval Performance:
  • Apply performance optimization techniques such as caching and indexing to improve data access and retrieval times.
  • Ensure that data retrieval processes are efficient and cost-effective.
  5. Ensure Data Quality and Integrity:
  • Implement data validation and testing processes to ensure the accuracy and reliability of data at every stage of the pipeline.
  • Use automated testing to ensure continuous data quality.
  6. Stay Up to Date with Emerging Technologies:
  • Keep current with the latest advancements in data engineering, big data technologies, and best practices.
  • Continuously improve systems and solutions by adopting new technologies and techniques.
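For illustration only, here is a minimal sketch in Python of the kind of ETL step described in responsibility 1. The orders.csv source, the SQLite target, and the field names are hypothetical stand-ins, not part of the role; a real warehouse pipeline would use its own connectors.

    import csv
    import sqlite3

    def extract(path):
        # Read raw rows from a CSV source (hypothetical file).
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Normalize types and drop rows that fail basic checks.
        cleaned = []
        for row in rows:
            try:
                cleaned.append((row["order_id"], float(row["amount"])))
            except (KeyError, ValueError):
                continue  # skip malformed rows rather than failing the batch
        return cleaned

    def load(rows, conn):
        # Write the cleaned rows into the target table.
        conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
        conn.commit()

    if __name__ == "__main__":
        with sqlite3.connect("warehouse.db") as conn:
            load(transform(extract("orders.csv")), conn)

The same extract/transform/load split is what orchestration tools schedule as separate tasks, which is where the scalability and maintainability requirements come in.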

Requirements:

  • Strong Programming Skills: Proficiency in Python, Java, and SQL for data processing and application development.
  • Experience with Data Pipeline Tools: Familiarity with data pipeline orchestration tools such as Apache Airflow, Luigi, or Dataswarm for automating and scheduling workflows (an example Airflow DAG follows this list).
  • Big Data Technologies: Experience working with Hadoop, Spark, Hive, or other big data technologies to process and analyze large datasets at scale.
  • Cloud Computing Platforms: Familiarity with cloud platforms such as AWS, GCP, or Azure to build and deploy data infrastructure and manage cloud-based data storage solutions.
  • Problem-Solving and Independence: Strong analytical and problem-solving skills, with the ability to work independently and tackle complex technical challenges.
  • Communication and Collaboration: Excellent communication skills for collaborating effectively with data scientists, analysts, and cross-functional teams, and the ability to clearly articulate technical concepts to non-technical stakeholders.
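As a sketch of the Airflow orchestration mentioned above (assuming Airflow 2.4+; the DAG id, schedule, and task callables are hypothetical):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        pass  # pull data from a source system (placeholder)

    def transform():
        pass  # clean and reshape the extracted data (placeholder)

    def load():
        pass  # write results to the warehouse (placeholder)

    # A daily ETL workflow: extract -> transform -> load.
    with DAG(
        dag_id="example_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load

Luigi and Dataswarm express the same idea with different APIs: tasks with explicit dependencies that a scheduler runs and retries.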

Key Skills and Technologies:

  • Programming: Python, Java, SQL
  • Data Pipeline Tools: Apache Airflow, Luigi, Dataswarm
  • Big Data: Hadoop, Spark, Hive
  • Cloud Platforms: AWS, GCP, Azure
  • Performance Optimization: Caching, indexing
  • Data Quality: Data validation, testing (a small validation sketch follows this list)
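Finally, a small sketch of the data-validation skill in the list above: partition a batch into valid and rejected records, and cover the rule with an automated test. The field names and rules are illustrative assumptions, not taken from the role.

    def validate(records):
        # Split a batch of dicts into (valid, rejected) by simple rules.
        valid, rejected = [], []
        for rec in records:
            ok = (
                rec.get("order_id")                            # key present and non-empty
                and isinstance(rec.get("amount"), (int, float))
                and rec["amount"] >= 0                         # no negative amounts
            )
            (valid if ok else rejected).append(rec)
        return valid, rejected

    # An automated check like this is what "continuous data quality" means in practice.
    def test_validate_rejects_bad_rows():
        good = {"order_id": "a1", "amount": 9.99}
        bad = {"order_id": "", "amount": -5}
        assert validate([good, bad]) == ([good], [bad])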

With regards,

Kishore Reddy

