Candidates local to WA or in the PST time zone are preferred.
Please do not share profiles that have already been submitted to HCL.
Hello,
My name is Adarsh Nandi, and I work as a Technical Recruiter for K-Tek Resourcing.
We are searching for professionals for the below business requirement for one of our clients. Please read through the requirements and connect with us if they suit your profile.
Please see the job description below and, if you are interested, send me your updated resume at [email protected].
Job Title: Data Engineer
Location: Redmond, WA
Duration: Long Term
Mandatory:
- Strong data engineer with solid Kusto experience
Job Description:
• Should have experience using PySpark
• Working experience with Kusto for telemetry data
• Should be able to write Python code in Azure Synapse notebooks.
• Good understanding of and working experience with Azure Synapse end to end.
• Should have experience working with Spark (structured streaming data, possibly from Event Hubs).
• Should have experience handling large volumes of data (possibly hundreds of GB).
• Should be able to debug and fine-tune jobs/notebooks for optimal memory consumption and processing.
• Should have knowledge of data warehousing and data management for efficient data handling.
• Should have good knowledge of data modeling and data engineering.
• Experience working with large data sets using SQL/Azure Data Lake/PySpark/ADF/Synapse, etc., to derive actionable insights is highly desirable.
• Should be able to become productive as soon as possible after joining.