Hello Partners,
Job Title: Data Architect
Location: Parsippany, NJ (onsite; local candidates only)
Required Skills: Python, Amazon Redshift, Amazon S3, Data Architecture, Data Modelling, DB Performance Optimization
Job summary
We are seeking a highly skilled Senior Architect with 12 to 15 years of experience to join our team.
The ideal candidate will have extensive experience building cloud data pipelines, along with strong architecture and data modelling skills.
Must haves
Experience with AWS and enterprise data warehousing/ETL projects (building ETL pipelines), as well as enterprise data engineering and analytics projects.
Data modelling design (ER/dimensional modelling): conceptual, logical, and physical.
Clear understanding of data warehousing and data lake concepts.
Hands-on experience implementing Redshift on AWS.
Understand business requirements and existing system designs, enterprise applications, IT security guidelines, and legal protocols.
Should possess data modelling experience and be able to collaborate with other teams within the project/program.
Proven experience in data modelling and analysis, data migration strategy, cleansing and migrating large master data sets, data alignment across multiple applications, and data governance.
Should be able to assist in making technology choices and decisions in an enterprise architecture context.
Should possess working experience with different database environments/applications, such as OLTP and OLAP.
Design, build, and operationalize data solutions and applications using one or more AWS data and analytics services (EMR, Redshift, Kinesis, Glue) in combination with third-party tools.
Actively participate in optimization and performance tuning of data ingestion and SQL processes.
Knowledge of core AWS services such as S3 and EC2.
Experience with any of the following: AWS Athena, Glue (PySpark), EMR, Redshift.
Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala (see the sketch after this list).
Design and implement data engineering, ingestion, and curation functions on the AWS cloud using AWS-native services or custom programming.
Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services.
Understand and implement security and version control.
Support data engineers with ETL process design, code reviews, and knowledge sharing.
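For illustration, below is a minimal PySpark sketch of the kind of ingestion-to-curation pipeline step described above; the bucket names, paths, and column names are hypothetical placeholders, not details from this role.

    # Minimal sketch (hypothetical bucket, paths, and schema): ingest raw CSV
    # from an S3 raw zone, cleanse it, and write curated Parquet back to S3,
    # ready for a Redshift COPY or a Spectrum query.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-curation").getOrCreate()

    # Ingest: read raw CSV files landed in the data lake's raw zone.
    raw = (spark.read
           .option("header", "true")
           .csv("s3://example-datalake/raw/orders/"))

    # Curate: deduplicate, drop rows missing the key, and enforce types.
    curated = (raw.dropDuplicates(["order_id"])
               .filter(F.col("order_id").isNotNull())
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
               .withColumn("order_date", F.to_date("order_ts")))

    # Consume: write partitioned Parquet to the curated zone; Redshift can
    # then load it via COPY or query it in place through Spectrum.
    (curated.write
     .mode("overwrite")
     .partitionBy("order_date")
     .parquet("s3://example-datalake/curated/orders/"))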
Roles and Responsibilities
Ability to explain data lake architecture using AWS services.
In-depth knowledge of the AWS Well-Architected Framework; good programming skills in a scripting language (e.g., Python).
Good to have: experience with at least one ETL tool.
Clarify and finalize the detailed scope of the migration.
Conduct customer interviews to understand existing standards, policies, quality compliance requirements, and enterprise metadata standards.
Work with various SMEs to understand business process flows, the functional requirement specifications of existing systems, their current challenges and constraints, and future expectations.
Document the current state and prepare the target-state architecture.
Excellent client-interfacing and communication skills.
Very good understanding of data intelligence concepts and technologies.
Address customer issues with speed and efficiency.
Develop and manage relationships with key client stakeholders.
Identify resources required for project completion.
Please attach copies of the candidate's driver's license (DL) and visa.
Submission format:

First Name:
Middle Name:
Last Name:
Contact Number:
Email Address:
Skype ID / Zoom ID:
Available Start Date:
Best Time to Call During Working Hours:
Preferred Interview Time Slot:
Work Authorization:
Visa Expiration Date:
Highest Qualification:
Year of Passing:
University:
Comfortable with a Day-One Onsite Role (Yes/No):
Last 4 Digits of SSN:
Total Years of Experience:
Total Years of U.S. Experience:
Current Location:
Willing to Relocate (Yes/No):
Passport Number:
Pay Rate (W2/1099/C2C):
Profile Sourced from Vendor/Partner Company:
Is the Consultant on Their W2/1099/H-1B: