Aditya Agarwal

I am a research intern at the Robotics and Embodied AI Lab (REAL) at Université de Montréal and Mila, where I work on representation learning for robotic systems, supervised by Professors Liam Paull (UdeM) and Florian Shkurti (UofT). This fall, I will join the Learning and Intelligent Systems (LIS) Group as a PhD student in EECS at MIT CSAIL. There, I will work at the intersection of perception and task & motion planning for robotic systems, supervised by Professors Leslie Pack Kaelbling and Tomás Lozano-Pérez, with the overarching goal of building general-purpose, autonomous robots that can seamlessly integrate with humans.

I completed my MS by Research at IIIT Hyderabad, supervised by Professors C.V. Jawahar and Vinay Namboodiri in the computer vision lab (CVIT) and by Prof. Madhava Krishna in the robotics lab (RRC). My work spanned 3D shape completion, video understanding, implicit representations, robotic manipulation, and talking-face generation.


Previously, I was a Software Engineer at Microsoft India on the People Also Ask (PAA) team, where I applied deep learning and NLP techniques to surface a block of related questions and answers for a user's query on Bing's search page.


I completed my Bachelor's in Computer Science at PES University (formerly PESIT), Bangalore, where I worked on sound event detection and localization. I also spent a summer as a MITACS research intern at the University of Calgary, localizing an audio noise nuisance known as the Ranchlands Hum under Prof. Mike Smith, and a year as an intern at Microsoft Research India, working on blended learning and AI in healthcare.

Research Interests: My research interests lie broadly at the intersection of computer vision and robotics. My goal is to integrate representation learning with task & motion planning to achieve general-purpose robot autonomy.




Publications


HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork

Bipasha Sen*, Gaurav Singh*, Aditya Agarwal*, Madhava Krishna, Srinath Sridhar

arXiv 2023

Paper / Code / Video

We propose HyP-NeRF, a latent conditioning method for learning generalizable category-level NeRF priors using hypernetworks. The hypernetwork estimates both the weights and the multi-resolution hash encodings of a NeRF, resulting in significant quality gains. To further improve quality, we incorporate a denoise-and-finetune strategy that denoises images rendered from the estimated NeRF and finetunes it while retaining multiview consistency.
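
For readers unfamiliar with latent conditioning via hypernetworks, here is a minimal PyTorch sketch of the core idea: a small network maps a per-instance latent code to the parameters of a coordinate MLP (a stand-in for a NeRF). All layer sizes and names below are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperNetwork(nn.Module):
        """Maps a latent code to the weights of a small coordinate MLP."""
        def __init__(self, code_dim=64, hidden=128, mlp_in=3, mlp_hidden=64, mlp_out=4):
            super().__init__()
            # Shapes of the target MLP's weight and bias tensors (illustrative).
            self.shapes = [(mlp_hidden, mlp_in), (mlp_hidden,),
                           (mlp_out, mlp_hidden), (mlp_out,)]
            n_params = sum(int(torch.tensor(s).prod()) for s in self.shapes)
            self.net = nn.Sequential(
                nn.Linear(code_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_params))

        def forward(self, code):
            flat = self.net(code)
            params, i = [], 0
            for s in self.shapes:  # slice the flat vector into weight tensors
                n = int(torch.tensor(s).prod())
                params.append(flat[i:i + n].view(*s))
                i += n
            return params

    def render_point(params, xyz):
        # Functional forward pass: the coordinate MLP evaluated with
        # hypernetwork-estimated weights (e.g., RGB + density per point).
        w1, b1, w2, b2 = params
        h = F.relu(F.linear(xyz, w1, b1))
        return F.linear(h, w2, b2)

    code = torch.randn(64)                          # latent code for one instance
    params = HyperNetwork()(code)
    out = render_point(params, torch.randn(10, 3))  # 10 query points
    print(out.shape)                                # torch.Size([10, 4])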

Disentangling Planning and Control for Non-prehensile Tabletop Manipulation

Vishal Reddy Mandadi, ..., Aditya Agarwal, ..., Madhava Krishna

CASE 2023

Paper (Coming Soon) / Video (Coming Soon)

We propose a framework that disentangles planning and control for tabletop manipulation in unknown scenes, using a pushing-by-striking method (without tactile feedback) that explicitly models object dynamics. Our method consists of two components: an A* planner for path planning and a low-level RL controller that models the object dynamics.
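
To make the plan/control split concrete, here is a minimal sketch of the planner half: A* on a 2D occupancy grid produces waypoints that a separate low-level controller (an RL policy in the paper, stubbed out here) would track. The grid contents and the controller stub are illustrative assumptions.

    import heapq

    def astar(grid, start, goal):
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        open_set = [(h(start), 0, start, None)]
        came_from, seen = {}, set()
        while open_set:
            _, g, cur, parent = heapq.heappop(open_set)
            if cur in seen:
                continue
            seen.add(cur)
            came_from[cur] = parent
            if cur == goal:  # reconstruct the waypoint path
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dx, cur[1] + dy)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
        return None

    def follow(path):
        # Stand-in for the learned controller: one push per waypoint pair.
        for a, b in zip(path, path[1:]):
            print(f"push object from {a} towards {b}")

    grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]  # 1 = obstacle
    follow(astar(grid, (0, 0), (2, 0)))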

SCARP: 3D Shape Completion in ARbitrary Poses for Improved Grasping

Bipasha Sen*, Aditya Agarwal*, Gaurav Singh*, Brojeshwar B., Srinath Sridhar, Madhava Krishna

ICRA 2023

Paper / Project Page / Short Video / Code / Poster / Long Video

We propose a mechanism for completing partial 3D shapes in arbitrary poses by learning a disentangled feature representation of pose and shape. We learn rotationally equivariant pose features and geometric shape features by training with a multi-task objective. SCARP improves shape completion performance by 45% and grasp proposals by 71.2% over existing baselines.
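
A minimal sketch of the multi-task idea: one encoder over a partial point cloud feeds two heads, one predicting canonical pose (a rotation) and one predicting the completed shape, trained jointly. Dimensions and losses are illustrative assumptions, not SCARP's architecture (which uses rotation-equivariant features and a Chamfer-style shape loss).

    import torch
    import torch.nn as nn

    class PoseShapeNet(nn.Module):
        def __init__(self, n_out_points=128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
            self.pose_head = nn.Linear(128, 4)                  # unit quaternion
            self.shape_head = nn.Linear(128, n_out_points * 3)  # completed cloud

        def forward(self, pts):                          # pts: (B, N, 3)
            feat = self.encoder(pts).max(dim=1).values   # per-cloud feature
            quat = nn.functional.normalize(self.pose_head(feat), dim=-1)
            shape = self.shape_head(feat).view(pts.shape[0], -1, 3)
            return quat, shape

    model = PoseShapeNet()
    partial = torch.randn(2, 256, 3)
    quat, completed = model(partial)
    # Toy losses: align quaternions to identity; placeholder for Chamfer distance.
    pose_loss = (1 - (quat * torch.tensor([[1., 0, 0, 0]])).sum(-1).abs()).mean()
    shape_loss = completed.pow(2).mean()
    loss = pose_loss + shape_loss   # joint multi-task objective
    print(loss.item())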

FaceOff: A Video-to-Video Face Swapping System

Aditya Agarwal*, Bipasha Sen*, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar

WACV 2023

Paper / Project Page / Video / Poster / Code / Supplementary

We propose a novel direction of video-to-video (V2V) face swapping that tackles a pressing challenge in the moviemaking industry: swapping an actor's face and expressions onto the face of their body double. Existing face-swapping methods preserve only the identity of the source face without swapping the expressions. FaceOff swaps both the source's identity and facial expressions onto the target's background and pose.

Towards MOOCs for Lipreading: Using Synthetic Talking Heads to Train Humans in Lipreading at Scale

Aditya Agarwal*, Bipasha Sen*, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar

WACV 2023

Paper / Project Page / Video / Poster / Supplementary

Hard-of-hearing people rely on lipreading the speaker's mouth movements to understand spoken content. In this work, we developed computer vision techniques and built upon existing AI models, such as TTS and talking-face generation, to generate synthetic lipreading training content in any language.

INR-V: A Continuous Representation Space for Video-based Generative Tasks

Bipasha Sen*, Aditya Agarwal*, Vinay P. Namboodiri, C.V. Jawahar

TMLR 2022

Paper / OpenReview / Project Page / Video / Code

Inspired by recent work on parameterizing 3D shapes and scenes as Implicit Neural Representations (INRs), we encode entire videos as INRs. We train a hypernetwork to learn a prior over these INR functions and propose two techniques, i) progressive training and ii) Video-CLIP regularization, to stabilize hypernetwork training. INR-V shows strong performance on several video-generative tasks across many benchmark datasets.
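
A minimal sketch of a video INR: an MLP maps a space-time coordinate (x, y, t) to an RGB value, so one network effectively is one video. INR-V then learns a hypernetwork prior over the weights of many such MLPs; that outer stage is omitted here, and all sizes below are illustrative assumptions.

    import torch
    import torch.nn as nn

    inr = nn.Sequential(
        nn.Linear(3, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 3), nn.Sigmoid())   # RGB in [0, 1]

    # Fit the INR to a tiny synthetic "video" of shape (T, H, W, 3).
    video = torch.rand(4, 8, 8, 3)
    t, y, x = torch.meshgrid(
        torch.linspace(0, 1, 4), torch.linspace(0, 1, 8),
        torch.linspace(0, 1, 8), indexing="ij")
    coords = torch.stack([x, y, t], dim=-1).reshape(-1, 3)
    target = video.reshape(-1, 3)

    opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
    for step in range(200):
        loss = nn.functional.mse_loss(inr(coords), target)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"reconstruction MSE: {loss.item():.4f}")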

Approaches and Challenges in Robotic Perception for Table-top Rearrangement and Planning

Aditya Agarwal*, Bipasha Sen*, Shankara Narayanan V*, Vishal Reddy Mandadi*, Brojeshwar Bhowmick, K Madhava Krishna

3rd Place, ICRA 2022 Open Cloud Table Organization Challenge

Paper / Competition / Video / Slides / Code / News1 / News2

In this challenge, we proposed an end-to-end ROS pipeline incorporating the perception and planning stacks to manipulate objects from their initial configuration to a desired target configuration on a tabletop scene using a two-finger manipulator. The pipeline involves five steps: (1) 3D scene registration, (2) object pose estimation, (3) grasp generation, (4) task planning, and (5) motion planning.
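
To show how the five stages chain together, here is a bare skeleton with each stage stubbed so the data flow is explicit. Function names and payloads are illustrative assumptions, not the actual ROS node interfaces.

    def register_scene(rgbd_frames):      # (1) fuse views into one 3D scene
        return {"cloud": rgbd_frames}

    def estimate_poses(scene):            # (2) 6-DoF pose per detected object
        return [{"name": "mug", "pose": (0.1, 0.2, 0.0)}]

    def generate_grasps(objects):         # (3) candidate grasps per object
        return [{"object": o["name"], "grasp": o["pose"]} for o in objects]

    def plan_tasks(objects, goal):        # (4) pick/place order to reach goal
        return [("pick", o["name"]) for o in objects] + [("place", goal)]

    def plan_motion(task_plan, grasps):   # (5) collision-free trajectories
        return [f"trajectory for {step}" for step in task_plan]

    scene = register_scene(rgbd_frames=["frame0", "frame1"])
    objects = estimate_poses(scene)
    grasps = generate_grasps(objects)
    tasks = plan_tasks(objects, goal="target_config")
    for traj in plan_motion(tasks, grasps):
        print(traj)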

Personalized One-Shot Lipreading for an ALS Patient

Bipasha Sen*, Aditya Agarwal*, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C.V. Jawahar

BMVC 2021

Paper / Video

We tackled the challenge of lipreading medical patients in a one-shot setting. Training existing lipreading models raised two primary issues: i) lipreading datasets feature people without disabilities, and ii) they lack medical vocabulary. We devised a variational-encoder-based domain adaptation technique that adapts models trained on large amounts of synthetic data, enabling lipreading from one-shot real examples.
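
A minimal sketch of the variational-encoder idea: synthetic and real lip features are encoded into a shared Gaussian latent space, with a KL term pulling both domains toward the same prior so a one-shot real example lands near its synthetic counterparts. Input features, sizes, and the loss combination are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    class VarEncoder(nn.Module):
        def __init__(self, in_dim=512, z_dim=32):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, z_dim)
            self.logvar = nn.Linear(128, z_dim)

        def forward(self, x):
            h = self.backbone(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
            return z, kl

    enc = VarEncoder()
    synthetic = torch.randn(32, 512)   # many synthetic lip features
    real = torch.randn(1, 512)         # a single real example (one-shot)
    z_syn, kl_syn = enc(synthetic)
    z_real, kl_real = enc(real)
    loss = kl_syn + kl_real            # plus a task loss (e.g., word classification)
    print(z_syn.shape, z_real.shape)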

REED: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models

Bipasha Sen*, Aditya Agarwal*, Mirishkar Sai Ganesh, Anil Kumar Vuppala

SLT 2021

Paper / Slides / MLADS Paper

We tackled the problem of building a multilingual acoustic model in a low-resource setting. We proposed a mechanism to bootstrap and validate the compatibility of multiple languages using CNNs operating directly on raw speech signals. Our method improves training and inference times by 4x and 7.4x, respectively, with WERs comparable to RNN-based baseline systems.
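
A minimal sketch of an acoustic model operating directly on raw speech: strided 1D convolutions stand in for hand-crafted spectral features, which is the property such raw-waveform CNNs exploit. Layer sizes and the output inventory below are illustrative assumptions, not REED's exact configuration.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(1, 32, kernel_size=160, stride=80), nn.ReLU(),  # ~10 ms hops at 16 kHz
        nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
        nn.Conv1d(64, 64, kernel_size=7, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(64, 40))   # e.g., 40 acoustic classes shared across languages

    waveform = torch.randn(8, 1, 16000)   # batch of 1-second 16 kHz clips
    logits = model(waveform)
    print(logits.shape)                    # torch.Size([8, 40])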

An Approach Towards Action Recognition using Part Based Hierarchical Fusion

Aditya Agarwal*, Bipasha Sen*

ISVC 2020

Paper / Slides / MLADS Paper

The human body can be represented as an articulation of rigid and hinged joints that combine to form its parts. In this work, we model human actions as the collective motion of these parts. We propose a hierarchical BiLSTM network that models the spatio-temporal dependencies of the motion by fusing pose-based joint trajectories in a part-based hierarchical fashion.
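
A minimal sketch of part-based hierarchical fusion: each body part's joint trajectories pass through their own BiLSTM, and the per-part summaries are fused by a higher-level BiLSTM before classification. Part definitions and sizes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class PartHierarchy(nn.Module):
        def __init__(self, parts=5, joints_per_part=4, n_classes=10):
            super().__init__()
            in_dim = joints_per_part * 2                   # (x, y) per joint
            self.part_rnns = nn.ModuleList(
                [nn.LSTM(in_dim, 32, bidirectional=True, batch_first=True)
                 for _ in range(parts)])
            self.fuse_rnn = nn.LSTM(parts * 64, 64, bidirectional=True,
                                    batch_first=True)
            self.cls = nn.Linear(128, n_classes)

        def forward(self, x):   # x: (batch, time, parts, joints_per_part*2)
            part_feats = [rnn(x[:, :, i])[0]
                          for i, rnn in enumerate(self.part_rnns)]
            fused, _ = self.fuse_rnn(torch.cat(part_feats, dim=-1))
            return self.cls(fused[:, -1])   # classify from the last time step

    model = PartHierarchy()
    poses = torch.randn(2, 30, 5, 8)   # 2 clips, 30 frames, 5 parts, 4 joints
    print(model(poses).shape)           # torch.Size([2, 10])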

Minimally Supervised Sound Event Detection using a Neural Network

Aditya Agarwal, Syed Munawwar Quadri, Savitha Murthy, Dinkar Sitaram

ICACCI 2016

Paper / Poster / Code

We solve the task of polyphonic sound event detection by training on a minimally annotated dataset of single sounds. Single sounds, represented as MFCC features, are used to train a neural network. Polyphonic sounds are preprocessed using PCA and NMF, and the source-separated sounds are classified by the learned network. Our system achieves reasonable source-separation and detection accuracy with minimal data.
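
A minimal sketch of the separation step: NMF factorizes a magnitude spectrogram of a polyphonic clip into additive components, which can then be classified individually by a network trained on single sounds (the classifier is omitted here). The random "spectrogram" stands in for real audio features.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    spec = np.abs(rng.normal(size=(64, 100)))     # freq bins x time frames

    nmf = NMF(n_components=3, init="random", random_state=0, max_iter=500)
    W = nmf.fit_transform(spec)                   # spectral templates (64, 3)
    H = nmf.components_                           # activations (3, 100)

    for k in range(3):
        source_k = np.outer(W[:, k], H[k])        # one separated source
        print(f"source {k}: reconstructed spectrogram {source_k.shape}")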



News & Announcements



Forked and modified from Viraj Prabhu's adaptation of Pixyll theme