PySpark Developer
Capgemini
Software Engineering
India
Posted 6+ months ago
Job Description
PySpark + SQL
Proficient in using Spark for distributed data processing and transformation.
Skilled in optimizing data pipelines for efficiency and scalability.
Experience with real-time data processing and integration.
Familiarity with Apache Hadoop ecosystem components.
Strong problem-solving abilities in handling large-scale datasets.
Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.
Primary Skills
PySpark
SQL