Role: Senior ML Engineer
Location: Atlanta, GA or Remote
Duration: 6+ Months
Please do not share Data Analyst profiles for this role.
We need 10 years of experience; the max rate is $55/hr on C2C.
About Us
- Warehousing and logistics systems play an increasingly critical role in the competitiveness of many companies and, at the same time, in the effectiveness of the global economy. Modern intralogistics solutions combine state-of-the-art mechatronics, complex software, advanced robotics, modern computational perception, and sophisticated AI and operations research algorithms to provide high-throughput, efficient processing for many mission-critical commercial logistics applications.
- Our Warehouse Execution Software leverages advances in classical and modern optimization techniques to bring intelligent execution to the world of intralogistics and warehouse automation. We synchronize discrete, low-level logistics processes to create a real-time decision engine that drives labor and equipment at the highest efficiency. Our software gives customers the operational agility they need to efficiently handle the demands of an omni-channel environment. We are looking for a highly motivated individual who can engineer and develop cutting-edge ML frameworks to deploy AI models. The candidate should have a solid grasp of state-of-the-art cloud technologies, best-in-class deployment architectures and frameworks, and production-grade software. Finally, the role requires strong team and interdisciplinary collaboration to see products through the development cycle from beginning to end.
Core Job Responsibilities:
- Develop end-to-end ML pipelines covering the full ML lifecycle, from data ingestion and transformation through model training, validation, serving, and ongoing evaluation.
- Collaborate closely with AI scientists to accelerate productionization of ML algorithms.
- Set up CI/CD/CT pipelines and a model repository for ML algorithms.
- Deploy models as a service, both in the cloud and on-prem.
- Learn and apply new tools, technologies, and industry best practices.
Key Qualifications
- MS in Computer Science, Software Engineering, or equivalent field
- Experience with Cloud Platforms, especially GCP and related skills: Docker, Kubernetes, edge computing
- Familiarity with task orchestration tools such as MLflow, Kubeflow, Airflow, Vertex AI, Azure ML, etc.
- Fluency in at least one general-purpose programming language; Python is required.
- Strong skills in the following: Linux/Unix environments, testing, troubleshooting, automation, Git, dependency management, and build tools (GCP Cloud Build, Jenkins, GitLab CI/CD, GitHub Actions, etc.).
- Data engineering skills (e.g., Beam, Spark, Pandas, SQL, Kafka, GCP Dataflow) are a plus.
- 3+ years of experience, including academic experience, in any of the above.