Location: Bay Area, CA (Onsite)
Duration: Long-Term Contract
JOB DESCRIPTION:
Key Requirements:
2+ years of designing, developing, testing, and optimizing Python microservices using FastAPI, Flask, or Django
5+ years of experience with Performance Center/LoadRunner, BlazeMeter, or JMeter
2+ years of designing, developing, testing, and optimizing Python microservices using gRPC
3+ years of Linux experience
2+ years of service virtualization (SV) experience with DevTest and/or other SV tools
2+ years of experience with monitoring tools such as AppDynamics (AppD), Elastic, Grafana, and Splunk
Hands-on MLOps/LLMOps experience
Solid understanding of and development experience with Python
Solid understanding of ML and DL models
Key Responsibilities:
Lead performance engineering/testing and chaos engineering across multiple AI/ML and LLM platforms and applications.
Conduct performance testing on Python microservices (RESTful and gRPC) for AI/ML platforms to ensure scalability, resilience, and efficiency.
Identify and propose solutions for performance bottlenecks in Python microservices (RESTful and gRPC), considering platform limitations, throughput, and inter-service overhead.
Optimize the performance of Python microservices (RESTful and gRPC) across the platform.
Work with key partners and stakeholders to understand performance requirements and map them to testing requirements in the test plan.
Act as an industry expert and go-to person for performance analysis of all supported applications, recognized as an expert on performance management and analysis tools such as LoadRunner, DevTest, AppDynamics, Elastic, Harness, and Jenkins.
Lead the definition of key performance metrics, identify bottlenecks, and obtain buy-in from key stakeholders.
Look beyond current performance management practices, leading and mentoring the team to deliver a bug-free, best-in-class customer experience for AI/ML platforms.
Drive performance analytics capabilities using Elastic, Splunk, and other tools to enable quick resolutions.