Senior Data Engineer (Scala/Spark + AWS + Kafka)



**Job description**

**Company description**

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers.

We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity.

United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.

**Overview**

**As a Senior Data Engineer,** you will be responsible for designing, developing, and maintaining scalable Big Data solutions.

You will work with large datasets, real-time processing frameworks, and cloud-based platforms to ensure clean, reliable, and actionable data for decision-making.

You’ll be joining a **large-scale data modernization project within the hospitality industry**, focused on migrating a legacy reservation platform to a modern, cloud-native architecture.

This initiative is one of the most strategic and long-standing Data Engineering programs at Publicis Sapient, offering the opportunity to work on complex data challenges with global impact and cross-regional collaboration.

**Responsibilities**

**Your Impact**

+ Design, develop, and maintain large-scale data pipelines using Scala, Apache Spark, Apache Kafka, SQL, and NoSQL stores on AWS (see the sketch after this list).
+ Reverse engineer SQL queries and Scala code to understand functionality, validate performance, and ensure maintainability.
+ Clean and prepare data by removing corrupted files, fixing coding errors, and handling data quality issues.
+ Review reports and performance indicators to identify and correct code or pipeline inefficiencies.
+ Create and maintain database designs, data models, and techniques for data mining and segmentation.
+ Identify, analyze, and interpret patterns and trends in complex datasets to support diagnosis, forecasting, and strategic decisions.
+ Implement data processing workflows leveraging AWS services such as EMR, Glue, Kinesis, Lambda, S3, and Redshift.
+ Automate workflows, monitoring, and alerting processes to improve operational efficiency and reliability.
+ Ensure data quality, security, and compliance with AWS best practices and industry standards.
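
To make the core stack concrete, here is a minimal illustrative sketch (not taken from the posting) of a Spark Structured Streaming job in Scala that reads events from Kafka and lands them in S3 as Parquet. The broker address, topic name, schema, and bucket paths are all hypothetical placeholders, and the job assumes the `spark-sql-kafka` connector is on the classpath:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ReservationEventsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("reservation-events-to-s3")
      .getOrCreate()

    // Hypothetical event schema; a real project would derive this from
    // the legacy reservation platform's data contracts.
    val schema = StructType(Seq(
      StructField("reservationId", StringType),
      StructField("status",        StringType),
      StructField("updatedAt",     TimestampType)
    ))

    // Read the raw event stream from Kafka (placeholder broker and topic).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "reservations")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("event"))
      .select("event.*")

    // Land the stream in S3 as partitioned Parquet, with checkpointing
    // so the job can recover from where it left off after a restart.
    events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/reservations/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/reservations/")
      .partitionBy("status")
      .start()
      .awaitTermination()
  }
}
```

The same pattern extends to the managed services listed above, for example running the job on EMR and swapping the Kafka source for Kinesis.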

**Qualifications**

**Your Skills & Experience**

+ 5+ years of experience in Data Engineering, with a strong focus on Apache Spark, SQL, and NoSQL (ideally MongoDB/DocumentDB).
+ Ideally 2+ years of recent, hands-on experience with Scala, or a strong willingness to ramp up in Scala (training will be provided).
+ Prior exposure to stream processing projects with Kafka or related tools.
+ Proven ability to reverse-engineer and optimize SQL queries — and ideally Scala code — to enhance performance and understand existing systems.
+ Experience designing and implementing ETL/data pipelines on AWS using services such as EMR, Glue, Kinesis, Lambda, S3, and Redshift.
+ Strong knowledge of database design, data modeling, and data mining techniques.
+ Demonstrated experience in data cleaning, preparation, and validation, ensuring accuracy and reliability across datasets (see the sketch after this list).
+ Solid understanding of distributed computing, cloud storage, and performance optimization in large-scale data environments.
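
As a small illustration of the cleaning and validation work described above, here is a hedged Scala sketch of a Spark batch job. The column names, quality rules, and S3 paths are hypothetical assumptions, not details from the posting:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object CleanReservations {
  // Hypothetical quality rules: require key fields, deduplicate on the
  // business key, and normalize a free-text status column.
  def clean(raw: DataFrame): DataFrame =
    raw
      .filter(col("reservationId").isNotNull && col("updatedAt").isNotNull)
      .dropDuplicates("reservationId")
      .withColumn("status", upper(trim(col("status"))))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clean-reservations")
      .getOrCreate()

    // Hypothetical schema for the raw landing zone.
    val schema = StructType(Seq(
      StructField("reservationId", StringType),
      StructField("status",        StringType),
      StructField("updatedAt",     TimestampType)
    ))

    // DROPMALFORMED discards records that fail to parse against the schema;
    // a production job would more likely quarantine them for review.
    val raw = spark.read
      .schema(schema)
      .option("mode", "DROPMALFORMED")
      .json("s3a://example-bucket/raw/reservations/") // placeholder path

    clean(raw)
      .write
      .mode("overwrite")
      .parquet("s3a://example-bucket/clean/reservations/") // placeholder path
  }
}
```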

**Set Yourself Apart With**

+ Familiarity with containerization and orchestration tools (Docker, Kubernetes) for deploying Big Data workloads.
+ Knowledge of data governance, security, and compliance frameworks.
+ AWS Certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.

**Additional information**

**This is a full-time role (40 hours per week) open to both permanent and contractor hires in Mexico, Colombia, and Costa Rica, and available only as a contractor position in Brazil, Peru, and Argentina.**

