About Me

My research lies at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI), with one goal: make AI accessible to everyone. I build human-centered AI systems that empower people to achieve more and extend the boundaries of their capabilities and imagination. I am currently a PhD Candidate in the Department of Computing and Information Science at Cornell University. I also hold a Master's in Computer Science from Cornell University and a Master's in Human-Computer Interaction from Indiana University. I love hiking, playing squash, listening to classical music, and engaging in scientific discussions.


Interactive Transfer Learning

Transfer learning is a technique for incorporating knowledge from one task into another in order to improve performance. I designed and built this interactive machine learning tool to explore how ML non-experts can leverage models curated by experts to build interesting models of their own. An extensive user evaluation of the system demonstrates how perceptions of the machine learning process play a critical role.
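As a minimal illustration of the underlying idea (a sketch, not the tool itself), the snippet below treats a frozen "expert" model as a fixed feature extractor and trains only a small logistic-regression head for a new task; the feature function, toy data, and hyperparameters are all hypothetical.

```python
import math

# Hypothetical "expert" model: a frozen feature extractor that is
# reused as-is, never updated during transfer.
def expert_features(x):
    return [x, x * x, math.sin(x)]

def train_head(data, epochs=200, lr=0.1):
    # The non-expert trains only this small head on top of the
    # frozen expert features (logistic regression via SGD).
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = expert_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * fi for wi, fi in zip(w, expert_features(x))) + b
    return 1 if z > 0 else 0

# Toy downstream task: is x positive?
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train_head(data)
```

Only the head's few weights are learned here; the expert's representation does the heavy lifting, which is the leverage the tool offers to non-experts.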

Explaining Attention: A human-centered approach

The attention mechanism is an important tool in Natural Language Processing for improving statistical performance on tasks such as text classification, machine translation, and question answering. However, the machine's view and the human's view of attention carry very different meanings. In this research, I explore how the attention mechanism can lead to confusing interpretations of machine decisions, and I propose a human-centered approach to understanding and using attention as a potential explanation.
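To make the "machine view" concrete, here is a minimal sketch of scaled dot-product attention weights over a short token sequence; the tokens and vectors are invented for illustration (in a real model they come from training, not the models studied here).

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    # One head of scaled dot-product attention: similarity of the query
    # to each key, scaled by sqrt(dimension), normalized to sum to 1.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

tokens = ["the", "movie", "was", "terrible"]
# Hypothetical learned vectors, chosen only to make the example run.
keys = [[0.1, 0.0], [0.9, 0.2], [0.0, 0.1], [0.3, 0.8]]
query = [0.2, 1.0]

weights = attention_weights(query, keys)
```

The machine's "attention" is just this probability distribution over tokens; whether the highest-weight token matches what a human would point to as the reason for a prediction is exactly the gap this research examines.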

Crowdsourcing Concept-based Explanations

Concepts are abstract representations that humans use to make sense of their world and to distinguish meaningfully between two entities (e.g., distinguishing a tiger from a leopard using the concepts of stripes and spots). In this research, I investigate how concept-based explanations can be used to improve the interpretability of black-box machine learning models. I propose a concept elicitation technique based on crowdsourcing, and design ML explanations that incorporate the elicited concepts. A thorough evaluation of these designs shows that concept-based explanations are useful when the context is clearly defined.
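A toy sketch of what such an explanation might look like, assuming a linear score over human-labeled concepts; the concept vocabulary and weights below are hypothetical, not the elicited ones from the study.

```python
# Hypothetical concept vocabulary and toy learned weights for the
# class "tiger"; in the research, concepts like these are elicited
# from crowd workers rather than invented by hand.
concepts = ["stripes", "spots", "whiskers"]
weights = {"stripes": 2.0, "spots": -1.5, "whiskers": 0.4}

def explain(present):
    # A concept-based explanation: the contribution of each concept
    # detected in the input to the class score.
    return {c: weights[c] for c in concepts if present.get(c)}

explanation = explain({"stripes": True, "whiskers": True})
```

The explanation is phrased in the human's vocabulary ("stripes", "spots") rather than in raw pixels or activations, which is what makes the context-dependence of concepts matter.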

Role of Concepts in Visual Data Analysis

Conceptual models play a critical role in how we make sense of our world. This tool demonstrates a novel interaction framework that allows users to share their concepts and hypotheses with the system. Experimental evaluations with this tool demonstrate how encouraging analysts to share their conceptual models with the system helps resolve disagreements effectively within an interactive data analysis ecosystem.

Gestural interactions with data

This fun installation I designed engages science museum visitors in data exploration through full-body interactions. I explored how to craft an engaging Human-Data Interaction experience for visitors of the Discovery Place Science Museum in Charlotte, NC, using a fully functional prototype that employs several strategies: instrumenting the floor, forcing collaboration, implementing multiple body movements to control the same effect, and using the visitors' silhouettes.

Gesture Analysis Tool for Interaction Designers

As more designers seek to incorporate gestural interactions into their systems, they face significant hurdles in designing optimal gestures that are both human-friendly and machine-interpretable. I designed and built this tool to help interaction designers analyze gestures gathered using context-specific elicitation techniques. The visual overlay of a given hand gesture with its decomposed machine-interpretable components highlights the differences between gestures that are conceptually similar but statistically different. Visual analysis tools for complex human gestures are useful for dissecting why a system might behave in undesirable ways and for designing interventions.

Pedestrian Detection System

I was part of a team that designed and built a pedestrian detection system to assist drivers on the road. I joined this research during the “pre-Deep Learning” boom, so we handcrafted features for different pedestrians using good old-fashioned Computer Vision techniques and meticulously optimized them on application-specific hardware. We later experimented with deep learning features, which proved to be more accurate but less interpretable. The trade-off between accuracy and interpretability is a frequent design choice when deploying deep learning models in products. (Product Website: Mando; Picture Credit: Dalal and Triggs)
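To give a flavor of what "handcrafted features" meant, here is a toy sketch of the core of a HOG-style descriptor in the spirit of Dalal and Triggs: a histogram of gradient orientations over a small grayscale patch. This is an illustration only, not the production pipeline, and the patch values are invented.

```python
import math

def hog_cell(patch, bins=9):
    # Histogram of unsigned gradient orientations (0-180 degrees),
    # weighted by gradient magnitude, over one cell of a grayscale patch.
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[int(ang // (180 / bins)) % bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]  # L1-normalized descriptor

# A toy patch with a strong vertical edge: horizontal gradients dominate,
# so nearly all the mass lands in the 0-degree orientation bin.
patch = [[0, 0, 10, 10] for _ in range(4)]
features = hog_cell(patch)
```

Each number in a descriptor like this has a direct visual meaning (edge energy at a given orientation), which is precisely the interpretability that the later deep-learned features traded away for accuracy.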