CerebroVis received the Best Poster Award at IEEE Vis 2018! (Award)
Submitted our recent work, done in collaboration with the School of Psychology (Northeastern University), to CHI 2019.
Will be at Vis 2018 in Berlin from Oct 21-26.

Summary: Developing visual analytics tools for exploratory data analysis and insight generation from machine learning models; visualizing and analyzing hierarchical and network datasets; quantitatively and qualitatively evaluating visualization interfaces; and studying human perception and cognition with data glyphs.

Short-term Goal: Develop a novel visualization tool to support the diagnosis of cerebrovascular diseases such as stroke and aneurysm.

Long-term Goal: Work on interdisciplinary projects focused on the design and development of context-aware, user-centered data visualization tools.


Selected Industry Projects


Visual Bayesian fusion to navigate a data lake

A data-fusion-based visual analytics platform for navigating a data lake to derive insights. The platform supports rich interactive visualizations; querying and keyword-based search within and across datasets and models; and intuitive visual interfaces for value imputation and model-based prediction.


Multi-sensor Visual Analytics Supported by Machine-Learning Models

A platform for exploring multi-dimensional sensor data, with a special focus on visualizing temporal sensor data as time series. The platform supports real-time time-series queries, in which similar and frequently occurring patterns can be retrieved using suitable machine learning algorithms.
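The platform's actual query algorithms are not described above; as an illustrative sketch only, a minimal z-normalized sliding-window nearest-match query over a sensor trace might look like this (all names and data are hypothetical):

```python
import math

def znorm(seq):
    """Z-normalize a sequence so matches are shape-based, not level-based."""
    mean = sum(seq) / len(seq)
    std = math.sqrt(sum((x - mean) ** 2 for x in seq) / len(seq)) or 1.0
    return [(x - mean) / std for x in seq]

def best_match(series, query):
    """Return (start_index, distance) of the sliding window nearest to the query."""
    q = znorm(query)
    best = (None, float("inf"))
    for i in range(len(series) - len(query) + 1):
        w = znorm(series[i:i + len(query)])
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(w, q)))
        if d < best[1]:
            best = (i, d)
    return best

# Toy sensor trace with a known bump starting at index 8
series = [0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 5, 3, 1, 0, 0, 0]
query = [1, 3, 5, 3, 1]
idx, dist = best_match(series, query)
```

The quadratic scan here is only for clarity; at the scales the project targets, an indexed or model-based matcher would replace the inner loop.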

Research Projects

Predicting Survival of Patients Suffering from Glioblastoma

In this work we use machine learning to predict the survival of patients suffering from a glioblastoma (grade 4) tumor. Our results show that, in addition to tumor size, variation in white matter intensity may be a strong indicator of each patient's survival time.
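The study's model and clinical data are not reproduced here. As a hedged sketch of the idea only, a plain logistic regression on synthetic, hypothetical data with two features (tumor volume and white-matter intensity variation) illustrates the kind of signal described:

```python
import math
import random

random.seed(0)

# Synthetic, hypothetical data only -- not the study's patients.
# Each patient: (tumor_volume, white_matter_intensity_variation);
# label 1 = short survival, 0 = long survival.
def make_patient():
    short = random.random() < 0.5
    vol = random.gauss(40.0 if short else 25.0, 5.0)  # larger tumor -> shorter survival
    wm = random.gauss(0.8 if short else 0.4, 0.1)     # higher variation -> shorter survival
    return [vol, wm], 1 if short else 0

patients = [make_patient() for _ in range(200)]
xs = [p[0] for p in patients]
ys = [p[1] for p in patients]

# Standardize each feature so both contribute on the same scale.
cols = list(zip(*xs))
means = [sum(c) / len(c) for c in cols]
stds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) for c, m in zip(cols, means)]
norm = [[(v - m) / s for v, m, s in zip(x, means, stds)] for x in xs]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain logistic regression fit by batch gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in zip(norm, ys):
        err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    w = [wi - lr * gi / len(ys) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(ys)

# Training accuracy; positive weights indicate both features predict short survival.
accuracy = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1)
    for x, y in zip(norm, ys)
) / len(ys)
```

On real imaging data the features would come from segmentation and intensity statistics, and evaluation would use held-out patients rather than training accuracy.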


Aditeya Pandey, Harsh Shukla, Geoffrey S. Young, Lei Qin, Cody Dunne and Michelle Borkin. CerebroVis: Topology- and Constraint-based Network Layout for Visualization of Cerebrovascular Arteries (Poster at IEEE Vis 2018)

Abstract: Blood circulation to the human brain is provided through a network of cerebral arteries. A blockage or leakage of an artery in this system may result from diseases such as stroke or aneurysm. To identify and diagnose these conditions, doctors obtain and examine CTA or MRA radiological images. The doctor's diagnostic tasks include examining artery branches for abnormalities and identifying paths of abnormal flow from a deformed artery. These tasks are, in essence, the network analysis tasks of browsing and path following. In this work, we introduce a definition of the cerebral artery system as a network. This framing allowed us to develop a novel network representation of the cerebrovascular arteries. The layout uses a topology- and constraint-based technique to present the structure intuitively and preserve spatial context. Experts validated the layout design, and we demonstrated its robustness by testing it with 56 MRA datasets.

Michail Schwab, Aditeya Pandey and Michelle A. Borkin. Maximizing Resolvable Items: A Mantra of Mobile Visualization. (2018).

Abstract: Mobile data visualization is becoming more and more common, but few guidelines for its design exist. In this paper, we describe the design process of taking a pan-and-zoom-based linear timeline from a desktop visualization to the mobile platform by turning it into a static, elliptical timeline with a draggable handle for selection. The lessons we learned are general design principles for mobile data visualization: 1. Interaction should be simple, directly manipulate one object, and avoid two-finger gestures. 2. Linked and coordinated views are challenging, but beneficial if context is maintained. 3. Overview first, details later. 4. Because of display size and the fat-finger problem, the number of items that can be reached with one interaction is low by default, and needs to be carefully considered and improved for effective navigation. We provide a framework to aid future design processes for mobile visualization.

K. Singh et al., "Visual Bayesian fusion to navigate a data lake," 2016 19th International Conference on Information Fusion (FUSION)

Abstract: The evolution from traditional business intelligence to big data analytics has witnessed the emergence of 'data lakes', in which data is ingested in raw form rather than into traditional data warehouses. With the increasing availability of many more pieces of information about each entity of interest, e.g., a customer, often from diverse sources (social media, mobility, internet of things), fusing, visualizing, and deriving insights from such data pose a number of challenges. First, disparate datasets often lack a natural join key. Next, datasets may describe measures at different levels of granularity, e.g., individual vs. aggregate data. Finally, different datasets may be derived from physically distinct populations. Moreover, once data has been fused, queries are often an inefficient and inaccurate mechanism for deriving insight from high-dimensional data. In this paper we describe iFuse, a data-fusion-based visual analytics platform for navigating a data lake to derive insights. We rely on Bayesian graphical models to provide a useful rudder with which to fuse and analyze disparate islands of data in a systematic manner. Our platform allows for rich interactive visualizations, querying and keyword-based search within and across datasets or models, as well as intuitive visual interfaces for value imputation or model-based predictions. We illustrate the use of our platform in multiple scenarios, including two public data challenges as well as a real-life industry use case involving the probabilistic fusion of datasets that lack a natural join key.

Aditeya Pandey, Kunal Ranjan, Geetika Sharma, and Lipika Dey. 2015. Interactive Visual Analysis of Temporal Text Data. In Proceedings of the 8th International Symposium on Visual Information Communication and Interaction (VINCI '15)

Abstract: This paper presents a novel interactive visualization technique that helps in gathering insights from large volumes of text generated through dyadic communication. The emphasis is specifically on showing content evolution and modification with the passage of time. The challenge lies not only in presenting the content on its own but also in understanding how the present relates to the past. For example, analyzing large volumes of email can show how communication among a set of people has progressed or evolved over time, possibly along with the roles of the communicators. It can also show how the content has changed or evolved. To depict these changes, the email repositories are first clustered using a novel algorithm. The clusters are then time-stamped and correlated. User insights are provided through visualization of these clusters. Results of the implementation on two different datasets are presented.

G. Sharma, G. Shroff, A. Pandey, B. Singh, G. Sehgal, K. Paneri, and P. Agarwal, “Multi-sensor visual analytics supported by machine-learning models,” in ICDM Workshop on Data Analytics meets Visual Analytics, 2015

Abstract: Machines, such as engines, vehicles, or even aircraft, go through extensive controlled trials during their development. Each machine is typically instrumented with hundreds of sensors that produce voluminous time-series data. Engineers analyze such data to improve their understanding of how machines are used in practice, which in turn helps them in taking design decisions. Most often they study operational profiles of various sensors for a given day of operation using histograms, or examine time series from multiple sensors together. However, when confronted with data from dozens of sensors, over many years of operation, they are challenged by the large number of histograms to analyze, and the sheer length of the time series to explore. Traditional approaches such as hierarchical histograms, time-series semantic zooming, etc. often cannot cope with the volume of data encountered in practice. We augment basic data visualizations such as histograms, heat-maps and basic time-series visualizations with machine-learning models that aid in summarizing, querying, searching, and interactively linking visualizations derived from large volumes of multi-sensor data. In this paper we describe our machine-learning augmented approach to visual analytics in the context of its actual use in practice for answering questions of interest to engineers analyzing large-scale multi-sensor data.
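The paper's models are not reproduced here; as a toy illustration of the summarization step it describes, binning a long sensor trace into one histogram per day lets an engineer scan a few histograms instead of raw samples (constants and data are hypothetical):

```python
from collections import Counter

# Hypothetical: a tiny readings-per-day constant for illustration;
# real traces contain thousands of samples per day.
READINGS_PER_DAY = 4

def daily_histograms(readings, bin_width=10):
    """Group readings by day, then bin each day's values into a histogram."""
    days = [readings[i:i + READINGS_PER_DAY]
            for i in range(0, len(readings), READINGS_PER_DAY)]
    return [Counter((v // bin_width) * bin_width for v in day) for day in days]

trace = [12, 18, 25, 31,   # day 0: mostly low readings
         55, 58, 61, 57]   # day 1: sustained high readings
hists = daily_histograms(trace)
```

The paper's contribution is in replacing and linking such per-day summaries with learned models; this sketch shows only the baseline aggregation they start from.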

G. Sharma, G. Shroff, A. Pandey, P. Agarwal, and A. Srinivasan, “Interactively visualizing rule exception summaries,” in Proceedings of the EuroGraphics Workshop on Visual Analytics, 2014

Abstract: Rules, along with their exceptions, have been used to explain large data sets in a comprehensible manner. In this paper we describe an interactive visualization scheme for rules and their exceptions. Our visual encoding is based on principles from the literature for creating perceptually effective visualizations. Our visualization scheme presents an overview first, allows semantic zooming, and then shows details on demand, using established principles of interactive visualization. We assume that rules and exceptions have been mined and summarized using available techniques; however, our visualization is applicable to more general rule hierarchies as well. We illustrate our visualization using rules and exceptions extracted from real customer surveys as well as rule sets derived from past literature.