Looking for a Sr. Data Engineer with Airbyte - Remote

Role: Sr. Data Engineer

Skills: Spark (PySpark), SQL, Fivetran, Airbyte, HVR, Apache Flink

The above skills are required. Please verify a candidate has them before sharing, and do not submit profiles that lack them.

Location: Maryland / Remote

Duration: Long term

 

Job Responsibilities/KRAs:

• Design, develop, and maintain ETL processes using Spark (PySpark) to integrate data from multiple source systems as Iceberg tables on ADLS (see the sketch after this list).

• Migrate petabyte-scale databases to Iceberg tables on ADLS.

• Process real-time data from Azure Service Bus using Spark Streaming, Airbyte, or Flink.

• Optimize ETL workflows to ensure efficient data processing and loading.

• Develop scripts to automate data processing and loading tasks.

• Implement data quality checks and validation processes within ETL workflows.

• Understand business requirements and project scope, create ETL code per the business logic and processes, and provide task estimates with supporting data points as required.

• Ensure adherence to data governance policies, including data lineage and metadata management.

• Provide support for data-related issues and troubleshoot ETL problems.

• Create and maintain technical documentation and reports for stakeholders.
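
For illustration only, here is a minimal PySpark sketch of the first responsibility above: reading data from one source system and writing it out as an Iceberg table whose warehouse lives on ADLS. The catalog name ("lake"), storage account, container, and JDBC connection details are hypothetical placeholders, and the sketch assumes the Iceberg Spark runtime and the ADLS (abfss) Hadoop connector are available on the cluster.

from pyspark.sql import SparkSession

# Minimal sketch (not the client's actual pipeline): a Spark session with an
# Iceberg catalog backed by an ADLS Gen2 warehouse. Catalog name, container,
# and storage account are hypothetical placeholders.
spark = (
    SparkSession.builder.appName("iceberg-adls-load-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse",
            "abfss://<container>@<account>.dfs.core.windows.net/warehouse")
    .getOrCreate()
)

# Read one source system over JDBC; connection details are placeholders.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://<host>:5432/<db>")
    .option("dbtable", "public.orders")
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# Create or replace the Iceberg table in the ADLS-backed catalog.
orders.writeTo("lake.raw.orders").createOrReplace()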

Requirements:

• 10 years of total technical experience designing, developing, and implementing ETL solutions using Apache Spark.

• Experience with the complete software development life cycle, including systems analysis of various applications in client/server environments.

• Advanced working knowledge of Fivetran, Airbyte, HVR, Apache Flink, and SQL.

• Experience building streaming data pipelines with Spark Streaming, plus shell scripting, deployment, performance tuning, and error-handling skills (a minimal streaming sketch follows this list).

• Excellent hands-on experience with various data sources, transformations, partitioning/de-partitioning, databases, datasets, and JSON/XML/Parquet file formats.

• Experience replicating data from RDBMS sources using HVR.

• Experience working with Airbyte and/or Apache Flink is preferred.

• Analytical problem-solving and business interaction skills.

• Effective day-to-day communication with the offshore team, the customer, and all concerned teams; provide daily status reporting to all stakeholders.

• Experience in the insurance (e.g., underwriting, claims, policy issuance) or financial industry is preferred.
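
As a purely illustrative sketch of the streaming item above (not the client's actual design), the snippet below shows a PySpark Structured Streaming job that reads JSON events landing on ADLS, applies a simple data-quality filter, and appends the result to an Iceberg table. Azure Service Bus has no native Spark source, so a file-based landing path stands in for the real message feed; the schema, paths, and table names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

# Assumes a session configured with the same hypothetical "lake" Iceberg
# catalog shown in the earlier sketch.
spark = SparkSession.builder.appName("streaming-iceberg-sketch").getOrCreate()

# Hypothetical event schema; the real one comes from the upstream source.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

# Stream JSON files from an ADLS landing path (stand-in for the message source).
events = (
    spark.readStream.schema(schema)
    .json("abfss://<container>@<account>.dfs.core.windows.net/landing/events/")
)

# Basic data-quality gate: drop records missing a key, stamp ingestion time.
clean = (
    events.filter(col("event_id").isNotNull())
    .withColumn("ingested_at", current_timestamp())
)

# Append the cleaned stream to an Iceberg table, with checkpointing for recovery.
query = (
    clean.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation",
            "abfss://<container>@<account>.dfs.core.windows.net/checkpoints/events/")
    .toTable("lake.staging.events")
)
query.awaitTermination()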

 

 

 

Regards,

Tejaswi Athili

Insoursys | Ph: 9724400065 | [email protected]

www.linkedin.com/in/tejaswi-attili-35709a1a6

