Hello,
Hope you are doing well!
Job Title: Python Spark AWS
Location: Columbus, OH (onsite); non-local candidates accepted
Contract
NO GC & USC Candidate on C2C please.
Need 10+years experience
If you have a senior candidate on a GC who was born before 1984, we can consider them.
Job Responsibilities:
Develop and maintain data platforms using Python, Spark, and PySpark.
Handle migration to PySpark on AWS.
Design and implement data pipelines.
Work with AWS and big data technologies.
Produce unit tests for Spark transformations and helper methods.
Create Scala/Spark jobs for data transformation and aggregation.
Write Scaladoc-style documentation for code.
Optimize Spark queries for performance.
Integrate with SQL databases (e.g., Microsoft SQL Server, Oracle, PostgreSQL, MySQL).
Understand distributed systems concepts (CAP theorem, partitioning, replication, consistency, and consensus).
Skills:
Proficiency in Python, Scala (with a focus on functional programming), and Spark.
Familiarity with Spark APIs, including RDD, DataFrame, MLlib, GraphX, and Streaming.
Experience working with HDFS, S3, Cassandra, and/or DynamoDB.
Deep understanding of distributed systems.
Experience with building or maintaining cloud-native applications.
Familiarity with serverless approaches using AWS Lambda is a plus.
Thanks and regards,
Rishav Leo
linkedin.com/in/kumar-mba
Eros Technologies Inc
530 Lytton Avenue, 2nd Floor #120, Palo Alto, California 94301
USA | CANADA | UK | SINGAPORE | MALAYSIA | INDIA
Disclaimer:
Eros provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. We especially invite women, minorities, veterans, and individuals with disabilities to apply. EEO/AA/M/F/Vet/Disability.