Bangalore, Karnataka
India, 560-066
Hey, voilà!
I'm currently a Research Engineer @ AMD GenAI, where my focus is mostly on building in-house LMMs (Large Multimodal Models). Previously, I was affiliated with the CVIT Lab, IIIT Hyderabad, from where I completed my MS by Research in 2024. I was a part of the Katha-AI group, where I had the privilege of working with Prof. Makarand Tapaswi.
I am currently focusing on RL, Diffusion, and, in part, Efficient Training, as I transition from my usual research area of Multimodal Learning.
Before IIITH:
I was a Research Intern at the Indian Statistical Institute, Kolkata (2020-21), where I worked with Prof. B. Uma Shankar in the Machine Intelligence Unit on multi-label classification of remote sensing images.
I graduated with a BS-MS degree in Mathematics & Statistics from IISER Kolkata in 2021. In my spare time, I enjoy playing badminton, swimming, and sometimes biking.
Riding the wave of AGI innovation!
Hiring
AMD's GenAI team is driving the future of foundation models and is hiring!
We have multiple open roles, including:
- Research Scientist (Senior & Junior)
- Research Engineer
- Research Intern
Feel free to reach out if you're interested!
news
| Aug 05, 2024 | Joined the AMD GenAI team as a Research Engineer. Building fully open-source LMMs from scratch on AMD Instinct GPUs (MI300 / MI250). |
|---|---|
| Jul 13, 2024 | Graduated with an MS by Research degree in CSE from IIIT Hyderabad. |
| Jun 18, 2024 | Visited Seattle, US for my poster presentation at the 41st CVPR conference. |
| Apr 23, 2024 | Defended my Master's thesis, completing the requirements for my MS degree at IIIT-H! |
| Apr 18, 2024 | Paper accepted at FSE 2024. Topic: leveraging LLMs to automatically recommend OCEs (on-call engineers) for quickly identifying and mitigating critical issues (root-cause analysis). Read More |
| Mar 06, 2024 | Best Paper Award! Accepted at FOSS-CIL 2024. Read More |
selected publications
-
"Previously on ..." From Recaps to Story Summarization
In IEEE Conference on Computer Vision and Pattern Recognition, 2024
We introduce multimodal story summarization by leveraging TV episode recaps: short video sequences interweaving key story moments from previous episodes to bring viewers up to speed. We propose PlotSnap, a dataset featuring two crime thriller TV shows with rich recaps and long episodes of 40 minutes. Story summarization labels are unlocked by matching recap shots to corresponding substories in the episode. We propose a hierarchical model TaleSumm that processes entire episodes by creating compact shot and dialog representations, and predicts importance scores for each video shot and dialog utterance by enabling interactions between local story groups. Unlike traditional summarization, our method extracts multiple plot points from long videos. We present a thorough evaluation on story summarization, including promising cross-series generalization. TaleSumm also shows good results on classic video summarization benchmarks.
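To make the hierarchical design concrete, here is a minimal PyTorch sketch of the core idea: frame features are pooled into compact shot tokens, combined with dialog-utterance tokens, and an episode-level Transformer scores every unit for importance. All module names, feature dimensions, and the mean-pooling choice are illustrative assumptions, not the paper's implementation; the local story-group attention described above is omitted for brevity.

```python
import torch
import torch.nn as nn

class StorySummarizerSketch(nn.Module):
    """Hypothetical hierarchical importance scorer in the spirit of TaleSumm."""

    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.shot_proj = nn.Linear(512, d_model)    # assumed per-frame visual features
        self.dialog_proj = nn.Linear(768, d_model)  # assumed per-utterance text features
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.episode_encoder = nn.TransformerEncoder(layer, n_layers)
        self.scorer = nn.Linear(d_model, 1)  # one importance logit per token

    def forward(self, shot_frames, dialog_feats):
        # shot_frames: (B, n_shots, n_frames, 512); dialog_feats: (B, n_utts, 768)
        shot_tokens = self.shot_proj(shot_frames).mean(dim=2)  # compact shot reps
        dialog_tokens = self.dialog_proj(dialog_feats)
        tokens = torch.cat([shot_tokens, dialog_tokens], dim=1)
        out = self.episode_encoder(tokens)  # shots and dialog attend to each other
        return self.scorer(out).squeeze(-1)  # (B, n_shots + n_utts) importance scores

model = StorySummarizerSketch()
scores = model(torch.randn(2, 10, 16, 512), torch.randn(2, 6, 768))
print(scores.shape)  # torch.Size([2, 16])
```

A summary would then be assembled by picking the top-scoring shots and utterances, e.g. training the scorer with a binary cross-entropy loss against the recap-derived labels.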
-
How you feelin'? Learning Emotions and Mental States in Movie Scenes
In IEEE Conference on Computer Vision and Pattern Recognition, 2023
Movie story analysis requires understanding characters' emotions and mental states. Towards this goal, we formulate emotion understanding as predicting a diverse and multi-label set of emotions at the level of a movie scene and for each character. We propose EmoTx, a multimodal Transformer-based architecture that ingests videos, multiple characters, and dialog utterances to make joint predictions. By leveraging annotations from the MovieGraphs dataset, we aim to predict classic emotions (e.g. happy, angry) and other mental states (e.g. honest, helpful). We conduct experiments on the most frequently occurring 10 and 25 labels, and a mapping that clusters 181 labels to 26. Ablation studies and comparison against adapted state-of-the-art emotion recognition approaches show the effectiveness of EmoTx. Analyzing EmoTx's self-attention scores reveals that expressive emotions often look at character tokens while other mental states rely on video and dialog cues.
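As a rough illustration of the multi-label, multimodal formulation above, here is a minimal PyTorch sketch: learnable per-label classifier tokens are concatenated with projected video, dialog, and character tokens, a Transformer encoder lets them interact, and each label token yields one logit. Module names, dimensions, and the token layout are assumptions for illustration only, not EmoTx's actual code; per-character prediction is reduced to scene-level prediction for brevity.

```python
import torch
import torch.nn as nn

class EmotionTransformerSketch(nn.Module):
    """Hypothetical multi-label emotion classifier in the spirit of EmoTx."""

    def __init__(self, d_model=256, n_heads=4, n_layers=2, n_labels=25):
        super().__init__()
        # Project each modality into a shared embedding space (dims assumed).
        self.video_proj = nn.Linear(512, d_model)   # e.g. pooled clip features
        self.dialog_proj = nn.Linear(768, d_model)  # e.g. sentence embeddings
        self.char_proj = nn.Linear(512, d_model)    # e.g. face-track features
        # One learnable classifier token per emotion/mental-state label.
        self.label_tokens = nn.Parameter(torch.randn(n_labels, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # one logit per label token

    def forward(self, video, dialog, chars):
        # video: (B, Tv, 512), dialog: (B, Td, 768), chars: (B, Tc, 512)
        B = video.size(0)
        tokens = torch.cat([
            self.label_tokens.unsqueeze(0).expand(B, -1, -1),
            self.video_proj(video),
            self.dialog_proj(dialog),
            self.char_proj(chars),
        ], dim=1)
        out = self.encoder(tokens)  # label tokens attend to all modalities
        n_labels = self.label_tokens.size(0)
        # Multi-label logits read off the label tokens; train with BCE loss.
        return self.head(out[:, :n_labels]).squeeze(-1)  # (B, n_labels)

model = EmotionTransformerSketch()
logits = model(torch.randn(2, 8, 512), torch.randn(2, 5, 768), torch.randn(2, 4, 512))
print(logits.shape)  # torch.Size([2, 25])
```

Using label tokens rather than a single pooled classifier is what makes the self-attention analysis mentioned above possible: each label's attention weights show which modality tokens it relied on.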