Urgent Hiring for 11 Roles: UKG DevOps Automation Engineer | UKG Device Manager and Migration Lead | Google Doc.AI Lead Engineer | Sr. Functional Architect – Oracle Cloud HCM Workforce Compensation | Sr Software Engineer (C++, Rust) | Senior Data Modeler | Sr Data Scientist | Stibo Architect | Technical Documentation Specialist | TIBCO Consultant | Software Reliability Engineer | Do Not Send Without Reviewing Skills and Filling In Details

Hi Partners,

Connect with me on LinkedIn for more updates: https://www.linkedin.com/in/revanth-yadav-parvathi-55b48b225/


Role 1:
Job Title: UKG DevOps Automation Engineer
Experience: 12 to 15 years
Contract: Long term
Location: Remote
Rate: $75/hr on C2C
Visa: No OPT/CPT/H1B
Job Description:
We are seeking an experienced DevOps Automation Engineer with expertise in Terraform, Puppet, and scripting (Bash, Python, or Shell) to join our team. The ideal candidate will have a deep understanding of automation in cloud environments and experience working within a UKG (Ultimate Kronos Group) infrastructure. The successful candidate will play a crucial role in automating, maintaining, and optimizing our deployment processes and infrastructure in the UKG ecosystem.
Responsibilities:
  • Design, develop, and maintain scalable infrastructure automation using Terraform, Puppet, and custom scripting.
  • Automate and manage UKG environments for seamless deployment, monitoring, and scaling.
  • Implement CI/CD pipelines to support UKG infrastructure changes and application deployment.
  • Collaborate with cross-functional teams, including UKG and Network teams, to ensure stable, reliable, and automated environments.
  • Perform regular system monitoring, performance tuning, and disaster recovery exercises.
  • Troubleshoot and resolve issues related to automation, deployment, and infrastructure in the UKG environment.
  • Create and maintain technical documentation for automation processes and configurations.
Required Skills:
  • 5+ years of experience in DevOps automation with expertise in Terraform and Puppet.
  • Strong scripting skills using Python, Bash, Shell, or similar.
  • Experience with CI/CD pipelines and tools such as Jenkins, GitLab CI, or similar.
  • Hands-on experience in UKG (Kronos) environments, including deployment, configuration, and maintenance.
  • Familiarity with cloud infrastructure (AWS, Azure, or GCP) is a plus.
  • Strong problem-solving skills and the ability to work independently or in a collaborative environment.
  • Excellent communication skills, with the ability to report progress and solutions to leadership.

Role 2:

Job Title: UKG Device Manager and Migration Lead – Remote
Experience: 12 to 15 years
Rate: $70/hr on C2C
Contract: Long term
Visa: No OPT/CPT/H1B
We are seeking a highly skilled professional with 12 to 15 years of overall experience, including 5+ years working on UKG Device Manager and clock configuration. This role involves leading the migration from Workforce Central (WFC) to Workforce Management Pro (WFM Pro) for large enterprise customers, working closely with Network teams, and reporting progress to client leadership. The ideal candidate will have a strong background in managing complex UKG projects and will take ownership of the migration plan, ensuring smooth execution and timely reporting.
Key Responsibilities:
  • Lead and oversee the migration of clocks from UKG Workforce Central (WFC) to UKG Workforce Management Pro (WFM Pro).
  • Collaborate with network teams to configure UKG clocks and devices across large enterprise environments.
  • Develop and implement detailed migration plans, ensuring minimal disruption to operations.
  • Act as the point of contact for client leadership teams, providing regular updates and progress reports.
  • Troubleshoot and resolve issues related to device configurations, ensuring optimal performance.
  • Ensure seamless integration of hardware with UKG software products.
  • Provide technical expertise and leadership during project delivery phases.
  • Conduct testing and validation of device functionality post-migration.
  • Train and mentor junior team members on UKG Device Manager configurations and best practices.
Required Qualifications:
  • 12 to 15 years of experience in enterprise-level IT environments.
  • 5+ years of hands-on experience with UKG Device Manager and Clock Configuration.
  • Proven ability to lead large-scale UKG migration projects.
  • Strong understanding of UKG WFC and WFM Pro platforms.
  • Experience working with network teams to troubleshoot and configure devices.
  • Excellent communication and leadership skills, with experience reporting to senior management.
  • Problem-solving mindset with the ability to work independently and manage multiple priorities.

Role 3:

Title: Google Doc.AI Lead Engineer

 

Location: Woodland Hills, CA (Hybrid)

 

Type/Duration: Contract/Long Term

 

Experience: 8+ Years

 

JOB DESCRIPTION:

• Hands-on expertise with LLMs, prompt engineering, and RAG models – deploying and improving them.

• Hands-on experience with DocAI; build extractors and classifiers for documents.

• Collaborate with stakeholders to engineer innovative solutions for new system rollouts, enhancements, and maintenance of existing applications, ensuring adherence to programming standards.

• Develop system specifications, create test plans, and conduct project and issue management for the assigned scope of work.

• Design/Build solutions using AI services and machine learning models to address unique use cases, employing cutting-edge techniques and methodologies.

• Utilize machine learning and advanced AI techniques to extract valuable insights from complex datasets and solve intricate problems.

• Deploy trained models into production environments, ensuring scalability, reliability, and optimal performance, while integrating seamlessly with existing systems and applications

Must Have:

• A degree in Computer Science, Engineering, AI, or a related field; advanced degree is a plus.

• Proven experience in GenAI and DocAI

• Google Cloud Expertise

• Python Coding

• Excellent communication and leadership skills, with the ability to influence cross-functional teams.

• Experience managing a Global Delivery Model.

Role 4:

Role: Sr. Functional Architect – Oracle Cloud HCM Workforce Compensation

Location: Hybrid, Irving, TX

Job Summary:

We are seeking a dedicated Oracle Cloud HCM Workforce Compensation Consultant with more than 10 years of experience to join our team.

The ideal candidate will have expertise in Oracle Cloud HCM Compensation.

Required Skills : Oracle Cloud HCM-Compensation

Provide technical expertise in Oracle Cloud HCM-Compensation.

Design and implement Oracle Cloud HCM-Compensation Plans.

Conduct requirement-gathering sessions.

– Oversee the development of reports and alerts.

– Ensure data quality and integrity through regular audits and updates.

– Stay updated with the latest trends and technologies in Oracle Cloud HCM.

– Provide training and support to team members and users on Oracle Cloud HCM Compensation.

– Contribute to the development of best practices and standards for Oracle Cloud HCM.

– Collaborate with stakeholders to understand their needs and requirements.

– Demonstrate expertise in Oracle Cloud HCM Compensation and proficiency in Oracle Cloud HCM design.

– Exhibit strong analytical and problem-solving skills, with excellent communication and collaboration abilities.

– Have experience with the configuration and deployment of modules, and be familiar with the latest trends and technologies in Oracle Cloud HCM.

– Show a commitment to data quality and integrity, and be capable of providing training and support to team members.

– Have experience troubleshooting and resolving Oracle Cloud HCM Workforce Compensation issues.

– Be able to work effectively in a hybrid work model.

Role 5:

Job Title: Sr Software Engineer (C++, Rust)

Client Address: 1680 Capital One Dr, McLean, VA 22102, United States (Hybrid)

Mandatory skills to check while submitting resumes:

Required Skills

C++

Rust

 

Job Description:

• 9+ years of experience in software development, with hands-on experience in C++, Rust, and related technologies.

• Preferred location is McLean and the work model is hybrid. This is not a constraint for the right candidate.

• Demonstrated proficiency in troubleshooting, root-cause analysis, and implementing major components for large projects.

• Excellent programming skills and proven expertise in OOP and other programming concepts.

• Experience working with Agile teams.

• Excellent oral and written communication.

• A Bachelor's degree in Computer Science or Engineering

Years of Experience: 9.00

Role 6:

Position – Senior Data Modeler

Location: Oak Park Heights, MN

Type – Long term

Job Summary:

Required Skills: Database and SQL, Pstf-SQL Tuning, Logical Data Modelling, Data Modelling, Data Build Tool, Snowflake SQL, Physical Data Modelling

Certifications Required : Certified Data Management Professional (CDMP), Snowflake Certification, SQL Certification

Job summary

We are seeking a highly skilled Senior Data Modeler with 11 to 15 years of experience to join our team.

The ideal candidate will have extensive expertise in the SqlDBM modeling tool, dimensional modeling, and SQL.

Nice to have: DBT, DataVault, performance optimization (Pstf-SQL tuning), Snowflake SQL, Database and SQL, Data Build Tool, Physical Data Modelling, and Logical Data Modelling.

This role is based in our office and requires proficiency in English.

Responsibilities

Lead the planning and execution of data-related projects, ensuring alignment with company goals.

Oversee the optimization of Pstf-SQL queries to enhance database performance

Provide expertise in Snowflake SQL to manage and analyze large datasets efficiently

Develop and maintain robust database structures using advanced SQL techniques

Utilize Data Build Tool to streamline data workflows and improve data quality.

Implement Physical Data Modelling to create efficient and scalable database designs.

Apply Logical Data Modelling to ensure data integrity and consistency across systems.

Collaborate with cross-functional teams to gather requirements and deliver data solutions.

Monitor and troubleshoot database performance issues to ensure optimal operation.

Ensure compliance with data governance and security policies.

Conduct regular reviews and updates of data models to adapt to changing business needs.

Mentor and guide junior team members in best practices for data management.

Report on project progress and outcomes to senior management.

Qualifications

Possess a deep understanding of Pstf-SQL Tuning and its impact on database performance

Demonstrate proficiency in Snowflake SQL for managing complex data environments

Have extensive experience with Database and SQL for robust data management.

Show expertise in using Data Build Tool for efficient data workflows.

Exhibit strong skills in Physical Data Modelling for scalable database design

Display proficiency in Logical Data Modelling for data integrity.

Be fluent in English with excellent reading, writing, and speaking skills.

Have a proven track record of managing data projects from inception to completion.

Show ability to work effectively in a team-oriented environment.

Demonstrate strong problem-solving skills and attention to detail.

Be capable of mentoring and guiding junior team members.

Have excellent communication skills for reporting and collaboration

Be committed to continuous learning and staying updated with industry trends.

Role 7:

Position – Sr Data Scientist

Location: Houston, TX (Onsite)

Type – Long Term

Certifications Required: Certified Kubernetes Administrator (CKA), Microsoft Certified: Azure AI Engineer Associate, AWS Certified Solutions Architect, Databricks Certified Data Engineer

 

Job summary :

NRG is looking for a Data Scientist to develop and apply models and statistical analysis within our enterprise data science group. This individual will work within a functionally organized Enterprise Data & Analytics group with support from IT, Data Engineering and Governance, Business Intelligence, and Product Management. Its scope encompasses electricity and gas contracts, smart home products, plant operations, trading, and energy services. The role will report to the data science lead and will work on cross-functional project teams to deliver repeatable and scalable data solutions to drive value for the organization. This will require an understanding of the intersection between business economics and the use of algorithms to make or aid business decisions.

 

Essential Duties/Responsibilities:

• Understanding of machine learning and deep learning models to select and implement for prediction, classification, and clustering projects

• Apply machine learning or reinforcement learning to optimize marketing efforts with respect to customer acquisition, retention, pricing, cross-selling, operations and trading

• Passion to learn latest AI techniques and explore applications for large language models

• Understanding of the business context of projects and able to identify areas where models will be less predictive or have caveats to their predictive powers

• Ability to translate complex business issues into achievable analytical learning objectives and actionable analytic projects

• Ability to communicate and establish good relations with multi-disciplinary teams

• Proficiency with Python, including pandas, scikit-learn

• Experience in Spark or Pyspark

 

Minimum Requirements:

• Bachelor’s degree in a quantitative field, such as Statistics, Mathematics, Computer Science, Economics, Engineering, or Operations Research required.

• 2+ years of experience in statistical modeling and quantitative analysis in industry or full-time academic research

Preferred Qualifications:

• Advanced Degree (MS or PhD) in Statistics, Mathematics or Quantitative Marketing with a focus on machine learning is strongly preferred.

 

Additional Knowledge, Skills and Abilities:

• Experience with Databricks

• Experience with AWS SageMaker

• Experience with Azure AI Studio

• Retail electricity or gas experience

• Comfortable working in Linux

• Experience with Git

• Experience with Docker containers

• Ability to learn and apply new quantitative techniques quickly and appropriately

• Ability to interpret and communicate complex analytics results

 

Responsibilities :

– Lead the design and implementation of solutions using advanced technologies.

– Oversee the integration of Docker and containers to streamline deployment processes.

– Provide expertise in GIT for version control and collaborative development.

– Utilize Linux for system administration and optimization of applications.

– Implement Azure AI solutions to enhance data analysis and insights.

– Leverage AWS services to build scalable and efficient systems.

– Utilize Databricks for big data processing and advanced analytics.

– Apply Data Science techniques to extract meaningful insights from data.

– Collaborate with cross-functional teams to ensure seamless integration of solutions.

– Ensure the security and integrity of data through best practices.

– Develop and maintain documentation for all projects and processes.

– Mentor junior engineers and provide guidance on best practices and technical solutions.

– Continuously evaluate and adopt new technologies to improve geospatial capabilities.

 

Qualifications

– Possess a strong background in Docker and containers for efficient application deployment.

– Demonstrate proficiency in GIT for effective version control and collaboration.

– Have extensive experience with Linux for system administration and optimization.

– Show expertise in Azure AI for advanced data analysis and insights.

– Be skilled in AWS services for building scalable geospatial systems.

– Have hands-on experience with Databricks for big data processing.

– Apply Data Science techniques to derive insights from complex geospatial data.

– Exhibit strong problem-solving skills and attention to detail.

– Possess excellent communication and collaboration abilities.

– Have a proven track record of leading successful geospatial projects.

– Be committed to continuous learning and staying updated with industry trends.

– Demonstrate the ability to mentor and guide junior team members.

– Show a passion for leveraging technology to drive impactful solutions.

Role 8:

Job title: Stibo Architect

Location: Remote

 

Job summary: We are seeking an experienced Architect with 18 to 22 years of experience to join our dynamic team. The ideal candidate will have a strong background in JavaScript and Stibo MDM and will be responsible for designing and implementing robust solutions. This role is hybrid, with no travel required, and operates during day shifts. The Architect will play a crucial role in driving the company's technological advancements and ensuring the success of our projects.

 

Required Skills : JavaScript, Stibo, Stibo MDM

 

Responsibilities :

·Lead the design and implementation of scalable and efficient architecture solutions using JavaScript and Stibo MDM.

·Oversee the development and integration of Stibo MDM into existing systems to ensure seamless data management.

·Provide technical guidance and mentorship to junior team members to foster a collaborative and innovative work environment.

·Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.

·Ensure the architecture aligns with business goals and objectives, providing strategic direction for technology initiatives.

·Conduct regular code reviews and ensure adherence to best practices and coding standards.

·Develop and maintain comprehensive documentation for all architectural designs and implementations.

·Monitor system performance and implement optimizations to enhance efficiency and reliability.

·Stay updated with the latest industry trends and technologies to incorporate innovative solutions into the architecture.

·Facilitate communication between stakeholders to ensure alignment and clarity on project goals and deliverables.

·Manage project timelines and deliverables, ensuring projects are completed on time and within budget.

·Provide support and troubleshooting for complex technical issues, ensuring minimal disruption to operations.

·Drive continuous improvement initiatives to enhance the overall quality and performance of the architecture.

 

Qualifications

·Possess extensive experience in JavaScript and Stibo MDM, demonstrating a deep understanding of both technologies.

·Have a proven track record of designing and implementing complex architecture solutions in a hybrid work model.

·Exhibit strong problem-solving skills and the ability to think critically and strategically.

·Demonstrate excellent communication and collaboration skills, with the ability to work effectively with diverse teams.

·Show a commitment to continuous learning and staying current with industry advancements.

·Have experience in mentoring and guiding junior team members to achieve their full potential.

·Display a strong attention to detail and a commitment to delivering high-quality work.

·Be capable of managing multiple projects simultaneously and prioritizing tasks effectively.

·Possess a strong understanding of data management principles and best practices.

·Have experience in conducting code reviews and ensuring adherence to coding standards.

·Show proficiency in developing and maintaining technical documentation.

·Be able to troubleshoot and resolve complex technical issues efficiently.

·Demonstrate a proactive approach to identifying and implementing process improvements.

Role 9:

Technical Documentation Specialist

Pay Rate: $56/hr on C2C

Client Address: 8200 Jones Branch Dr, McLean, VA 22102 (Hybrid)

Mandatory skills to check while submitting resumes:


Top Skills Required

Architecture, design, and implementation patterns; Confluence; and experience in application development (Java, Spring Boot, Microservices, and AWS)

 

Job Description

• Previous experience in application development (Java, Spring Boot, Microservices, and AWS)

• Proven experience in application development with a strong understanding of architecture and design patterns.

• Strong writing and editing skills, with the ability to explain complex technical information in a clear and concise manner.

• Help document and transfer application-level reference architectures, such as architecture, design, and implementation patterns, onto the Confluence portal.

• Ability to work effectively with cross-functional teams, including developers, architects, and product managers.

• Familiarity with application, data, and security domains, and the ability to document related patterns and architectures.

• Previous experience in technical writing or documentation in Confluence

 

Years of Experience: 8+ years minimum (8.00)



Role 10:

Job title: TIBCO Consultant
Location: Duluth, GA (onsite)
 
Required Skills:
1. TIBCO BW, BE
2. TIBCO ActiveSpaces
3. Kibana, Kafka, Java, Elasticsearch, and Apache Flink

Job Description/ Responsibilities:
·         Required: a resource with strong technical and implementation knowledge (architecture, design, and coding) of the TIBCO BW, TIBCO ESB, and TIBCO ActiveSpaces tech stack who can seamlessly make changes or write code.
·         Looking for a technically strong resource who learns quickly, coordinates with the required internal and external stakeholders, and delivers design and coding with near-zero quality issues.
·         The resource should also offer suggestions and design improvements that add value for the customer.
·         The resource should interact with the QEA team and provide support during the various test phases of the project, and should ensure there is no performance degradation due to design or code changes.
·         Participate in daily stand-ups and weekly calls as appropriate, and collaborate with the required teams to resolve dependencies.
·         Good to have: knowledge of Java, Apache Kafka, Elasticsearch (Apache Flume, Elastic, Kibana), Apache Spark, and Shell Scripting.
·         Good written and verbal communication skills
·         Additional Information:
·         Collaborate with Infrastructure and Application teams to complete OCI migration work, and change and deploy code to use secure ports.
·         Support migration discovery, certificate renewals for non-prod environments, etc.
·         Skills – TIBCO BW, TIBCO BE, TIBCO ActiveSpaces, Java, Kafka, Elasticsearch, Kibana, Apache Flink, Shell Scripting, REST API
·         Must have: TIBCO BW 5.x/6.x, TIBCO BE, TIBCO ActiveSpaces 2.4, REST API
·         Good to have: Java, Apache Kafka, Elasticsearch (Apache Flume, Elastic, Kibana), Apache Spark, Shell Scripting


Role 11:

Job Title: Software Reliability Engineer

*FULLY REMOTE (EST)*

 

Job Summary:

You will play an important role on a product team, performance-tuning the Pentaho ETL Server and installing and configuring a cron job monitoring tool.

The Digital and eCommerce team currently operates several B2B websites and direct digital sales channels via a globally deployed cloud-based platform that serves as a growth engine for Merck’s life science business. We provide a comprehensive catalog of all products, enabling our customers to find and purchase products as well as get detailed scientific information on them.

 

Top 4 Must Haves:

1. Pentaho

2. ETL

3. Cronitor scheduler

4. Light Weight Processes

 

Required Skills:

• 8-10+ years of hands-on server-side SRE experience

• Recent experience in Pentaho, ETL

• Ability to provide solutions based on best practices.

• Ability to collaborate with cross-functional teams.

• Ability to work with global teams and a flexible work schedule.

• Must have excellent problem-solving skills and be customer centric.

• Excellent communication skills.

 

Nice to have skills:

• Experience with Big Data tools and big data handling

• Experience with cloud environments (e.g., Google Cloud Platform, Azure, Amazon Web Services, etc.)

• Experience in product-oriented engineering development teams is a plus

• Familiarity with web technologies (e.g., JavaScript, HTML, CSS), data manipulation (e.g., SQL), and version control systems (e.g., GitHub)

• Familiarity with DevOps practices/principles, Agile/Scrum methodologies, CI/CD pipelines and the product development lifecycle

• Familiarity with modern web APIs and full stack frameworks.

• Experience with Java, Google Analytics, BigQuery, Cassandra, Docker, Kubernetes, Kafka, in memory caching are a plus

• Experience developing eCommerce systems – especially B2B eCommerce – is a plus.

 

Responsibilities:

• Work as part of an Agile development team, taking ownership of performance tuning (heap, memory, CPU) and upgrades of the Pentaho ETL Server.

• Have a high-quality software/infrastructure mindset – making sure that the changes you implement work.

• Experience with the Cronitor scheduler and optimizing resource allocation based on constraints.

• Experience with Light Weight Processes (LWP)

Years of Experience: 8.00

Please attach DL & visa copies.

Submission process format

First Name

Middle Name

Last Name

LinkedIn

Contact Number

Email Address

Skype Id / Zoom ID

Available Start Date

Best time to call you in Working hours

Your preferred Interview time Slot

Work Authorization

Visa expiration date

Highest Qualification

Year of Passing

University

Comfortable working a Day-1 onsite role (Yes / No):

Last 4 Digits of SSN

Total Years of Experience

Total U.S.A. Years of Experience

Current Location

Willing to Relocate (Yes/No)

Passport Number

Pay Rate (W2/1099/C2C)

Profile Sourced from Vendor/Partner company

Is the consultant on their W2/1099/H1B?
