PySpark Developer with Kafka Experience

Role – PySpark Developer with Kafka Experience

Location – Tampa, FL – Onsite

Job Duties:
    Design, develop, and maintain PySpark applications for data processing and analysis.
    Build and optimize data pipelines for large-scale data processing using PySpark.
    Integrate PySpark applications with Kafka for real-time data streaming and processing.
    Collaborate with data engineers and data scientists to understand data requirements and implement solutions.
    Optimize and tune PySpark jobs for performance and scalability.
    Monitor and troubleshoot data pipeline issues, ensuring high availability and reliability.
    Work closely with DevOps and infrastructure teams to deploy and manage PySpark applications in production environments.
    Stay up to date with the latest technologies and best practices in data engineering and distributed computing.

Best Regards,
Prashant Kumar

Team Lead

Email ID: [email protected]
