MULTIPLE REQUIREMENTS – SAP IBP & Enterprise Data Architect

Hi there,

Greetings from WiseSkulls.

Please find the urgent requirements below and share suitable consultants for them.

CPT candidates will not be considered.

 
(1) SAP IBP
Location: New York City, NY (Onsite)
Duration: 6+ Months
Implementation Partner: Infosys
End Client: To be disclosed
JD:

Minimum of 8 years of experience in a Consulting/Supply Chain Demand Planning role
Minimum of 5 years of experience designing and configuring solutions in Demand Planning
Experience in IBP demand forecast tuning; ability to work closely with the business/planner teams to improve forecast accuracy
Good understanding of statistics and data science; ability to determine the forecast model (based on trend, seasonality, and white noise in the data) and its coefficients (alpha, beta, gamma, and the damping factor phi) from the data and business use cases (a brief sketch follows this list)
Experience integrating other SAP and non-SAP tools with SAP IBP using CI-DS and other interfaces
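
For illustration only, here is a minimal Python sketch (not an SAP IBP artifact) of additive Holt-Winters exponential smoothing with a damped trend, showing what the coefficients mentioned above actually control: alpha (level), beta (trend), gamma (seasonality) and the damping factor phi. The demand series and parameter values are made up.

def holt_winters_forecast(y, m, alpha, beta, gamma, phi, horizon):
    """Additive Holt-Winters with a damped trend.
    y: demand history, m: season length (e.g. 12 for monthly buckets)."""
    level = sum(y[:m]) / m                         # initial level: first-season average
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m**2  # initial trend from the first two seasons
    season = [y[i] - level for i in range(m)]      # initial additive seasonal indices

    for t in range(m, len(y)):
        s = season[t % m]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s

    # h-step-ahead forecasts; phi < 1 flattens (dampens) the trend over the horizon
    forecasts, damp = [], 0.0
    for h in range(1, horizon + 1):
        damp += phi ** h
        forecasts.append(level + damp * trend + season[(len(y) + h - 1) % m])
    return forecasts

# Two illustrative years of monthly demand with trend and seasonality
history = [100, 95, 110, 120, 130, 150, 160, 155, 140, 125, 115, 105,
           110, 104, 121, 132, 143, 165, 176, 170, 154, 137, 126, 115]
print(holt_winters_forecast(history, m=12, alpha=0.3, beta=0.1, gamma=0.2, phi=0.95, horizon=6))

In IBP itself these coefficients are normally maintained in the statistical forecast model/profile rather than in custom code; the sketch only illustrates how each parameter shapes the forecast.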

(2) Enterprise Data Architect
Location: Denver, CO (On-site)
Duration: 6+ months
Implementation Partner: Infosys
End Client: Dish Network
Job Duties and Responsibilities:
Deploy enterprise-ready, secure, and compliant data-oriented solutions leveraging Data Warehouse, Big Data, and Machine Learning frameworks
Optimize data engineering and machine learning pipelines
Review architectural designs to ensure consistency and alignment with the defined target architecture and adherence to established architecture standards
Support data and cloud transformation initiatives
Contribute to our cloud strategy based on prior experience
Stay current with the latest technologies in a rapidly innovating marketplace
Work independently with all stakeholders across the organization to deliver both point and strategic solutions
Assist solution providers with the definition and implementation of technical and business strategies
Skills, Experience, and Requirements – a successful Architect will have the following:
Prior experience working as a Data Warehouse/Big Data Architect.
Advanced experience with the Apache Spark processing framework and Spark programming languages such as Scala, Python, or advanced Java, with sound knowledge of shell scripting.
Experience in both functional programming and Spark SQL programming for processing terabytes of data (see the sketch after this list).
Specifically, this experience must include writing big data engineering jobs for large-scale data integration in AWS; prior experience writing machine learning data pipelines in Spark is an added advantage.
Advanced SQL experience, including SQL performance tuning, is a must.
Hands-on experience with other big data frameworks such as MapReduce, HDFS, Hive/Impala, and AWS Athena.
Experience in logical and physical table design in big data environments to suit processing frameworks.
Knowledge of using, setting up, and tuning resource management frameworks such as YARN, Mesos, or Spark standalone.
Experience writing Spark streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis is required (the sketch after this list includes a streaming read).
Knowledge of a variety of data platforms such as Redshift, S3, Teradata, HBase, MySQL/PostgreSQL, and MongoDB.
Experience with AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, CloudWatch, and Data Pipeline.
Must have used these technologies to deploy specific solutions in the area of Big Data and Machine Learning.
Experience in AWS cloud transformation projects is required.
Telecommunication experience is an added advantage.
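
As a rough illustration of the Spark SQL and Spark streaming experience asked for above, the sketch below shows a small PySpark job that aggregates raw events from S3 into partitioned Parquet (Athena/Glue friendly) and a structured-streaming read from a Kafka topic. All bucket names, topics, brokers, and column names are hypothetical, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage-pipeline-sketch").getOrCreate()

# --- Batch: Spark SQL aggregation over raw events in S3 (hypothetical paths/columns) ---
events = spark.read.parquet("s3://example-bucket/raw/usage_events/")

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("account_id", "event_date")
         .agg(F.sum("bytes_used").alias("total_bytes"),
              F.countDistinct("device_id").alias("active_devices")))

(daily.write.mode("overwrite")
      .partitionBy("event_date")   # partitioned Parquet keeps Athena/Glue scans cheap
      .parquet("s3://example-bucket/curated/daily_usage/"))

# --- Streaming: consume the same events from a Kafka topic (hypothetical brokers/topic) ---
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "usage-events")
          .load())

query = (stream.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream.format("parquet")
         .option("path", "s3://example-bucket/stream/usage_events/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/usage_events/")
         .start())

The checkpoint location is what lets the streaming query restart without losing or reprocessing Kafka offsets; partitioning the curated batch output by date is a common choice when Athena or downstream Glue jobs query it.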

Thanks & Regards,

Arif Gaha

Email
