Mukilan Karuppasamy

I'm a predoctoral researcher at IISc, Bangalore, working with Anirban Chakraborty on visual instruction tuning for multimodal large language models. Previously, I had the opportunity to work with CV Jawahar at IIIT Hyderabad on concept-based interpretability in video action recognition, and with Yogesh Singh Rawat at CRCV, UCF, on robustness analysis of vision-language models. I was also a MITACS Globalink Fellow in 2023, working on distributed message passing with Amine Mezhghani and Faouzi Bellili at the University of Manitoba. Earlier, I worked on virtual try-on systems using GANs with Brejesh Lall at IIT Delhi.

Email  /  CV  /  Scholar  /  GitHub  /  LinkedIn

profile photo

Research

I'm interested in computer vision, interpretability, and robustness analysis, with a current focus on understanding the theoretical basis of interpretability in overparameterized deep neural networks. Some papers are highlighted.

Towards Safer and Understandable Driver Intention Prediction
Mukilan Karuppasamy, Shankar Gangisetty, Shyam Nandan Rai, Carlo Masone, CV Jawahar,
ICCV, 2025
project page / arXiv

Proposed an inherently interpretable, concept-based video model that combines a concept bottleneck model with token merging, applied to the driver intention prediction task.

Distributed Vector Approximate Message Passing
Mukilan Karuppasamy, Mohamed Akrout, Amine Mezghani, Faouzi Bellili,
ICASSP, 2024  
github / pdf

Derived a distributed message passing algorithm for collaborative signal estimation across multiple agents with different measurement channels.