Hi Bench Sales Recruiter,
This is Sai Ganesh, Technical Recruiter at Hectadata, LLC (DBA Vilwaa). We have an opening for a Data Engineer.
If you are interested, kindly review the job description below, let me know your availability, and call me at your earliest convenience.
Title – Data Engineer
Location – Remote
Job Description:
We are seeking an experienced and highly skilled Data Engineer with strong expertise in Snowflake, Python, PySpark, and AWS. This remote position offers the flexibility to work from anywhere while you play a crucial role in developing and maintaining our data infrastructure. As a Data Engineer, you will be responsible for creating efficient data pipelines and ensuring the seamless processing of large datasets.
Key Responsibilities:
Data Pipeline Development:
- Design, develop, and maintain robust data pipelines using Python and PySpark.
- Implement ETL (Extract, Transform, Load) processes to ingest and transform data from various sources.
Cloud Data Management:
- Utilize AWS services (e.g., S3, Lambda, Glue) to build and manage scalable cloud-based data solutions.
- Optimize cloud infrastructure for performance, cost, and reliability.
Data Warehousing:
- Design and manage data warehousing solutions using Snowflake.
- Ensure efficient data storage, retrieval, and management.
Data Integration and Quality:
- Integrate data from multiple sources, ensuring high levels of data quality and consistency.
- Implement data governance practices to maintain data accuracy and security.
Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.
- Provide technical support and guidance to team members on data engineering best practices.
Performance Optimization:
- Monitor and optimize the performance of data pipelines, queries, and cloud resources.
- Troubleshoot and resolve data-related issues in a timely manner.
Documentation and Reporting:
- Document data workflows, processes, and system architectures.
- Generate reports and visualizations to communicate data findings and project progress to stakeholders.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Proficiency in Python and PySpark.
- Strong experience with AWS services (e.g., S3, Lambda, Glue).
- Substantial experience with Snowflake.
- Strong knowledge of SQL and database management.
- Familiarity with data integration tools and techniques.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and in a remote team environment.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Master’s degree in a relevant field.
- Experience with other big data technologies and tools.
- Knowledge of data visualization tools (e.g., Tableau, Power BI).
- Understanding of data governance and security practices.