Hi,
Job Title: AWS Data Engineer
Job Location: Newark, NJ – 07102 (Hybrid)
Contract Duration: 12+ Months
Years of Experience: Level 2 (3–5 years of experience)
At-a-Glance:
Are you ready to build your career by joining a global financial company? If so, our client is hiring an AWS Data Engineer!
What You’ll Do:
- Design, build, and maintain efficient, reusable, and reliable architecture and code.
- Build reliable and robust data ingestion pipelines (within AWS, on-prem to AWS, etc.).
- Ensure the best possible performance and quality of high-scale data engineering projects.
- Participate in the architecture and system design discussions.
- Independently perform hands-on development and unit testing of applications.
- Collaborate with the development team and build individual components into complex enterprise web systems.
- Work in a team environment with product, production operations, QE/QA, and cross-functional teams to deliver projects throughout the full software development lifecycle.
- Identify and resolve any performance issues.
- Keep up to date with new technology developments and implementations.
- Participate in code reviews to ensure standards and best practices are met.
What You Bring:
- Bachelor’s degree in Computer Science, Software Engineering, MIS or equivalent combination of education and experience.
- Experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises.
- Programming experience with Python, Shell scripting and SQL.
- Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, etc.
- Solid experience implementing solutions on AWS-based data lakes.
- Good experience with AWS services: API Gateway, Lambda, Step Functions, SQS, DynamoDB, S3, Elasticsearch.
- Serverless application development using AWS Lambda.
- Experience in AWS data lake/data warehouse/business analytics.
- Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS.
- Knowledge of ETL/ELT.
- End-to-end data solutions (ingest, storage, integration, processing, access) on AWS.
- Architect and implement a CI/CD strategy for EDP.
- Implement high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred).
- Migrate data from traditional relational database systems, file systems, and NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift.
- Migrate data from APIs to the AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift.
- Implement POCs for new technologies or tools to be adopted on EDP, and onboard them for real use cases.
- AWS Solutions Architect or AWS Developer Certification preferred.
- Good understanding of Lakehouse/data cloud architecture.
Pranveer Pratap Singh
Account Manager
Tech Center technologies Inc
[email protected]