Hello,
My name is Sparsh Shrivastava, and I work as a Technical Recruiter for K-Tek Resourcing.
We are searching for professionals for the business requirements below on behalf of one of our clients. Please read through the requirements and connect with us if they suit your profile.
Please review the job description, and if you are interested, send your updated resume to Sparsh.Shrivastava@ktekresourcing.com or call me at 832.613.6213.
Linkedin Profile: https://www.linkedin.com/in/sparsh-shrivastava-394633227/
Job Title: Data Lead
Location: New Jersey
Duration: Long Term
Implementation Partner: HCL America
Mandatory Skills:
- Azure Expertise: Experience with Azure Data Lake, Blob Storage, Data Factory, and Databricks.
- Data Engineering: Proficiency in developing scalable data pipelines using Azure and PySpark.
- ETL/ELT Processes: Strong knowledge of data ingestion, transformation, and cloud-native data warehouses.
- SQL Proficiency: Expertise in SQL for data retrieval, optimization, and troubleshooting.
- Python & PySpark: High-level programming skills in Python and PySpark for data manipulation and pipeline development.
- Database Design: Experience in database structure design, schema management, and database testing.
- Data Governance: Understanding of enterprise data governance, security guidelines, and reference architecture.
- API & SFTP: Ability to distribute data extracts via APIs, SFTPs, and other mediums.
- Collaboration: Excellent communication skills to work with both technical and non-technical stakeholders.
- Healthcare Data (Preferred): Experience with healthcare marketing analytics, claims data, and medical coding sets (ICD, HCPCS, NPI).
Job Description
- Develop and maintain scalable data pipelines using Azure and PySpark.
- Perform ETL processes to load data into cloud data warehouses.
- Format and distribute data extracts via APIs, SFTPs, and other mediums.
- Design and optimize data storage solutions based on business needs.
- Assist in database design, schema management, and testing.
- Define data assets, models, and quality protocols.
- Troubleshoot database issues and optimize SQL query performance.
- Collaborate with stakeholders across business units.
- Facilitate database connectivity and troubleshoot issues.
- Implement enterprise reference architecture, data governance, and security protocols.