Big data / Hadoop || REMOTE || NO H1B / OPT

Hello,

I am currently looking to fill the following position:

 

Job Title: Senior Application Developer / Big Data

Location: REMOTE

Experience: 5 to 7 years

Work Authorization: No H1B or OPT

Tier: Working with prime vendor

End Client: HIGH MARK

 

 

Job Description

 

Timesheets are submitted through Beeline.

 

This role is responsible for designing and engineering data solutions for the enterprise and, working closely with business, analytics, and IT stakeholders, for assisting with the development and maintenance of these solutions. This includes coding data ingestion pipelines, transformations, and delivery programs/logic for people and systems to access data for operational and/or analytic needs. Duties include, but are not limited to, the coding, testing, and implementation of ingestion and extraction pipelines, transformation and cleansing processes, and processes that load and curate data in conformed, fit-for-purpose data structures. The incumbent is expected to partner with others throughout the organization (including other engineers, architects, analysts, data scientists, and non-technical audiences) in their daily work, and will work with cross-functional teams to deliver and maintain data products and capabilities that support and enable strategies at the business unit and enterprise levels. The incumbent is expected to utilize technologies such as, but not limited to: Google Cloud Platform, Hadoop, Hive, NoSQL, Kafka, Spark, Python, Linux shell scripting, SAS, Teradata, Oracle, and Informatica.

 

Responsibilities

• In partnership with other business, platform, technology, and analytic teams across the enterprise, design, build, and maintain well-engineered data solutions in a variety of environments, including traditional data warehouses, Big Data solutions, and cloud-oriented platforms. Create high-performance cloud and big data systems to be used with operational and analytic applications.

• Work with internal and external platforms and systems to connect and align on data sourcing, flow, structure, and subject matter expertise. Work with business stakeholders and strategic partners to implement and support operational and analytic platforms. This may include products purchased by the organization that must be ingested, or modeled/derived data maintained by enterprise platforms and data consumers.

• Working across multiple, disparate systems and platforms, design, code, test, implement, and maintain scalable and extensible frameworks that support data engineering services.

• Align with security, data governance, and data quality programs by driving assigned components of metadata management, data quality management, and the application of business rules. Develop and maintain associated data engineering processes and participate in required operating procedures as part of the enterprise's overall information management activities. This includes data cleansing, standardization, technical metadata documentation, and the de-identification and/or tokenization of data.

• Develop, optimize, and/or maintain machine learning and AI engineering processes (MLOps) that are deployed to cloud or big data environments. These may be based on prototypes built by data scientists or on capability frameworks implemented to allow data scientists to build efficiently in production environments.

• Develop tasks across multiple projects with limited need for guidance, including providing guidance and education to intermediate and junior contributors within the team. Manage relationships with customers of the function. Attend meetings with customers independently or with the team as needed.

• Establish standards and patterns for high-performance data ingestion, transformation, and delivery to meet data analytic needs. Keep current with Big Data technologies in order to recommend the best tools for current and future work.

• Other duties as assigned or requested.

 

Required Qualifications

• Bachelor's Degree in Software Engineering, Information Systems, Computer Science, Data Science, or a related field

• 5 years of experience as a developer

• 3 years of experience with HL7 standards such as HL7v2 and HL7v3

• Strong knowledge of FHIR Profiles, Implementation Guides, and FHIR architecture

• Working knowledge of Google Cloud Platform (GCP) technologies such as BigQuery, Dataflow, and Pub/Sub

• Exposure to at least one FHIR-based Clinical Data Repository

• Strong experience with common data engineering tools (Spark, Python, shell scripting)

 

Preferred Qualifications

• Master's Degree in Software Engineering, Information Systems, Computer Science, Data Science, or a related field

• 3 years of experience in the healthcare industry

• 3 years of data warehousing experience

• 3 years of database administration experience

 

 

 

Regards,

Ritika Negi

Technical Recruiter

[email protected]

VSG Business Solutions Inc

3240 East State ST Ext Suite 203

Hamilton, New Jersey 08619

