Ritam Mukherjee

Principal Big Data & Cloud Engineer @ Mobilewalla | Ex-Walmart, Deloitte

Greater Kolkata Area  ·  12+ years experience

Accomplished professional with 12+ years of experience designing and deploying cloud-native architectures and big data solutions. Strong focus on creating business impact through technical excellence in cloud platforms (AWS, Azure, GCP), real-time analytics, big data pipelines, and distributed systems. Proven track record of delivering 30% cost savings, improving system performance by 40%, and achieving 100% SLA compliance.


Experience

Mobilewalla

Principal Engineer  ·  Aug 2024 – Present  ·  Kolkata

  • Designed a framework for petabyte-scale data ingestion and processing to generate custom consumer-behaviour segments with predictive models at scale.
  • Built an end-to-end Fintech API backend delivering near real-time feature and risk-assessment data — multi-region, highly available, with enhanced security, metering, and logging.
  • Designed and developed a Feature Integrator product as a single source of truth for all predictive-model features (including age and gender models), reading petabyte-scale aggregate data once to save computation time and cost.

Mobilewalla

Lead Data Engineer  ·  Sep 2020 – Jul 2023 (2 yrs 11 mos)  ·  Kolkata

Mobilewalla

Senior Data Engineer  ·  Jun 2019 – Aug 2020 (1 yr 3 mos)  ·  Kolkata


Walmart Labs

Senior Developer  ·  May 2018 – May 2019 (1 yr 1 mo)  ·  Bangalore

  • Designed and built workflows to ingest high-volume clickstream data via Adobe Omniture into a Hive staging environment.
  • Implemented a data-pipeline ingestion framework producing primary DWH feeds and secondary aggregated NoSQL feeds.
  • Tuned Spark code and resolved streaming bottlenecks in the execution environment.

Deloitte

Technical Consultant  ·  Mar 2016 – May 2018 (2 yrs 3 mos)  ·  Bengaluru

  • Implemented real-time analytics with Apache Kafka & Spark Streaming; built a custom Kafka consumer for network outage data.
  • Developed a scalable framework for end-to-end data pipeline integration via Apache Spark and Alluxio.
  • Created and validated IVR files with alternate paths for audit reject records.
  • Built complex Airflow DAGs for Spark jobs; drove projects from RFP submission through high-level design, execution, and warranty support.

Capgemini

Associate Consultant  ·  Jul 2014 – Mar 2016 (1 yr 9 mos)  ·  Pune

  • Implemented a Teradata-to-Hive data migration project using the Teradata Sqoop connector, Hive, Sqoop, and Oozie.
  • Resolved Hive script performance issues through optimised joins and aggregations.
  • Implemented DWH integration on Teradata using FLOAD/MLOAD utilities and built ELT scripts.
  • Developed Informatica mappings and workflows, scheduled via WLM.

Infosys

System Engineer  ·  Sep 2011 – Aug 2013 (2 yrs)  ·  Hyderabad

  • Worked with Oracle Warehouse Builder 11g, Oracle 11g PL/SQL, and OLAP/OLTP systems on a data integration project for a leading oil drilling corporation.
  • Developed shell scripts, PL/SQL stored procedures, and triggers for data curation.
  • Published a KX paper on Materialized Views and Triggers.

Certifications

  • Building Intelligent Agent Workflows
  • Machine Learning Nanodegree — Udacity
  • Generative AI Applications with Amazon Bedrock
  • GenAI and LLMs on AWS

Education

West Bengal University of Technology

Bachelor of Technology (B.Tech.), Information Technology  ·  2007–2011

Udacity

Nanodegree, Machine Learning Engineer  ·  2018