Visa: USC, GC, GC-EAD, H4-EAD
Candidates must be local to Houston, TX and hold a local driver's license.
Hi,
Hope you are doing well.
Please find the attached job description. If you are comfortable with the role, please send me your updated resume or call me back at 512-898-7112.
Position: Senior Azure Data Engineer
Location: Houston, TX (3-4 days onsite)
Duration: 12 months Contract
Interview Process: Video Interview
Job Description:
* 6+ years of professional experience
* Azure cloud experience
* Experience implementing data pipelines for both streaming and batch integrations using tools/frameworks such as Azure Data Factory, Glue ETL, Lambda, Google Cloud Dataflow, Spark, and Spark Streaming
* Experience with data modeling
Nice to Have Skills:
* Bachelor's degree
* Programming experience in Python, Spark, PySpark, Java, JavaScript, and/or Scala
* Relevant certifications
Interview Process:
* 1-2 internal Microsoft Teams video interviews
Job Description:
We are seeking an experienced and motivated Senior Data Engineer to join our team and drive the implementation of scalable, end-to-end data pipelines on modern cloud data platforms. The ideal candidate will have extensive hands-on experience with Databricks, cloud-native data platforms (preferably Azure), and a strong understanding of data modeling, warehouse design, and advanced data pipeline implementations. You will play a critical role in building, optimizing, and maintaining data solutions that enable robust analytics and drive business insights.
Key Responsibilities:
Design, implement, and maintain end-to-end data pipelines for both streaming and batch data integrations using tools such as Azure Data Factory, Glue ETL, Spark, and PySpark.
Develop and optimize data models, data lakes, and data warehouses, including fact/dimension implementation.
Utilize cloud data platforms (Azure preferred, AWS, or GCP) for deploying scalable and reliable data solutions.
Implement column-oriented, NoSQL, and relational database technologies (e.g., BigQuery, Redshift, Vertica, DynamoDB, Cosmos DB, SQL Server, MySQL).
Design and implement cloud-native data platforms focusing on streaming and event-driven architectures.
Develop data processing programs using SQL, Python, DBT, and similar tools.
Design data ingestion, validation, and enrichment pipelines.
Manage metadata definition and governance using tools like Unity Catalog, OpenMetadata, DataHub, Alation, or AWS/Google Data Catalogs.
Optimize database performance and conduct query tuning for various database systems.
Write test programs using automated testing frameworks and ensure data validation, quality, and lineage.
Conduct code reviews, provide mentorship, and contribute to a culture of technical excellence.
Collaborate across teams to support the delivery of robust, scalable data solutions.
Required Qualifications:
Proven experience in designing and implementing end-to-end data pipelines.
Hands-on experience with leading public cloud data platforms (Azure preferred, AWS, or GCP).
Extensive experience with Databricks and Spark frameworks.
Expertise in data modeling, warehouse design, and database technologies (relational, NoSQL, and time-series databases).
Strong programming skills in Python, Spark, PySpark, Java, JavaScript, or Scala.
Experience with code repositories, continuous integration, and data validation frameworks.
Familiarity with metadata management, governance, and stewardship tools.
Proficiency in test programming, data lineage tracking, and automated quality frameworks.
Strong problem-solving skills and ability to work “hands-on” at module or track levels.
Preferred Qualifications:
Experience with Azure-native tools and services such as Azure Data Factory, Cosmos DB, and Azure SQL.
Knowledge of data catalogs and service catalogs for metadata management (e.g., Unity Catalog, Alation).
Familiarity with event-driven architectures and streaming technologies like Kafka or Kinesis.
Strong communication skills with the ability to mentor and guide junior engineers.
Thanks & Regards-
Tarun Gupta || Mob: 512-898-7112
E-mail 📩 [email protected]
5900 Belcones Drive, Suite #100, Austin, TX 78731