Job Title: Azure Data Engineer
Client: HCL
Location: Remote
Experience: 10+ years

Role/Responsibilities:
• Design, implement, and manage Azure cloud solutions for various projects.
• Develop and maintain data pipelines using Databricks (PySpark) for data ingestion, processing, and analysis.
• Configure and manage Azure Data Factory (ADF) to orchestrate data workflows and ETL processes.
• Implement and optimize Azure Data Lake Storage (ADLS) for efficient data storage and retrieval.
• Collaborate with cross-functional teams to design and implement data warehouse solutions.
• Use Git for version control and collaboration on the codebase.
• Monitor, troubleshoot, and optimize data processes for performance and reliability.
• Implement security best practices and manage access controls using Azure Active Directory (AAD).
• Document technical designs, processes, and procedures.
• Stay current with the latest Azure cloud technologies and best practices.
Required Experience and Skills:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 10+ years of experience required.
• Proven experience with Azure cloud services, including Azure Data Lake Storage, Azure Data Factory, and Azure Active Directory.
• Strong proficiency in PySpark and experience with Databricks for data engineering and analytics.
• Hands-on experience with Git for version control and collaboration.
• Familiarity with data warehousing concepts and technologies.
• Experience with SQL and relational databases.
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work effectively in a fast-paced, dynamic environment.
• Azure certifications (e.g., Azure Administrator, Azure Data Engineer) are a plus.
Skills:
• Snowflake
• Azure Databricks
• Azure Data Factory
• PySpark
• Python
• SQL