Hello! Welcome to my website! I am currently a Researcher in the Geospatial Artificial Intelligence (GeoAI) group at Oak Ridge National Laboratory (ORNL).

At ORNL, I work on developing scalable, machine-learning-driven geospatial image analytics workflows for humanitarian applications. I earned my doctoral degree in Geoinformatics under the supervision of Prof. Surya Durbha in the GeoComputational Systems Lab at the Indian Institute of Technology Bombay. My research interests include Deep Learning for Computer Vision, Large-Scale Satellite Image Processing, Remote Sensing and GIS, High Performance Computing, Geospatial Knowledge Representation and Reasoning, Natural Language Processing, and the Internet of Things. My doctoral research explored the areas of Geospatial Semantics and Deep Learning for Satellite Image Processing, towards leveraging Knowledge Graphs for enhanced understanding of remote sensing scenes.

As a part of the Google Summer of Earth Engine 2019 Research Program, I worked on "Machine Learning based Mapping of Croplands with Google Earth Engine for Identifying Human-Wildlife Conflict Locations" with the Centre for Wildlife Studies. I have been a two-time Google Summer of Code student: for Cesium in 2015, where I contributed to NASA's Global Imagery Browse Services (GIBS) by processing and visualizing 3-dimensional LiDAR data from the CALIPSO satellite, and for Liquid Galaxy in 2016, where I worked on enabling Cesium for a panoramic experience on the Liquid Galaxy hardware. Prior to joining IIT Bombay as a graduate student for my Master's and Ph.D., I completed my Bachelor's in Computer Engineering at the University of Mumbai.

I am an open source enthusiast and have been a code contributor to Mozilla Firefox. When not at my terminal or engrossed in a sci-fi novel, I enjoy travelling, and capturing and captioning the world through my camera.

Research

Semantics-Driven Remote Sensing Scene Understanding Framework for Grounded Spatio-Contextual Scene Descriptions
Abhishek Potnis, Surya Durbha, Rajat Shinde
Earth Observation data possess tremendous potential in understanding the dynamics of our planet. We propose the Semantics-driven Remote Sensing Scene Understanding (Sem-RSSU) framework for rendering comprehensive grounded spatio-contextual scene descriptions for enhanced situational awareness. To minimize the semantic gap in remote sensing scene understanding, the framework puts forward the transformation of scenes, using semantic web technologies, into Remote Sensing Scene Knowledge Graphs (RSS-KGs). The knowledge-graph representation of scenes has been formalized through the development of the Remote Sensing Scene Ontology (RSSO), a core ontology for an inclusive remote-sensing-scene data product. The RSS-KGs are enriched both spatially and contextually, using a deductive reasoner, by mining for implicit spatio-contextual relationships between land-cover classes in the scenes. At its core, Sem-RSSU constitutes novel Ontology-driven Spatio-Contextual Triple Aggregation and realization algorithms that transform knowledge graphs into grounded natural language scene descriptions. Considering the significance of scene understanding for informed decision-making from remote sensing scenes during a flood, we selected it as a test scenario to demonstrate the utility of this framework. In that regard, a contextual domain knowledge encompassing Flood Scene Ontology (FSO) has been developed. Extensive experimental evaluations show promising results, further validating the efficacy of this framework.
  @article{Potnis2021,
    doi = {10.3390/ijgi10010032},
    url = {https://doi.org/10.3390/ijgi10010032},
    year = {2021},
    month = jan,
    publisher = {{MDPI} {AG}},
    volume = {10},
    number = {1},
    pages = {32},
    author = {Abhishek V. Potnis and Surya S. Durbha and Rajat C. Shinde},
    title = {Semantics-Driven Remote Sensing Scene Understanding Framework for Grounded Spatio-Contextual Scene Descriptions},
    journal = {{ISPRS} International Journal of Geo-Information}
  }
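To make the knowledge-graph representation concrete, here is a minimal Python sketch of querying an RSS-KG with rdflib; the file name, namespace, and class/property names are illustrative assumptions, not the published RSSO vocabulary.

  from rdflib import Graph

  # Load a serialized Remote Sensing Scene Knowledge Graph (hypothetical file).
  g = Graph()
  g.parse("rss_kg.ttl", format="turtle")

  # Ask which buildings a deductive reasoner inferred to be adjacent to
  # flooded regions; names below are stand-ins for the RSSO/FSO vocabulary.
  query = """
  PREFIX rsso: <http://example.org/rsso#>
  SELECT ?building ?flood WHERE {
      ?building a rsso:Building .
      ?flood    a rsso:FloodedRegion .
      ?building rsso:adjacentTo ?flood .
  }
  """
  for building, flood in g.query(query):
      print(building, "is adjacent to", flood)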
LidarCSNet: A Deep Convolutional Compressive Sensing Reconstruction Framework for 3D Airborne Lidar Point Cloud
Rajat Shinde, Surya Durbha, Abhishek Potnis
Lidar scanning is a widely used surveying and mapping technique across remote-sensing applications involving topological and topographical information. Typically, lidar point clouds, unlike images, lack inherent consistent structure and store redundant information, thus requiring huge processing time. The Compressive Sensing (CS) framework leverages this property to generate sparse representations and accurately reconstructs the signals from very few linear, non-adaptive measurements. The reconstruction is based on valid assumptions on the following parameters: (1) the sampling function, governed by the sampling ratio for generating samples, and (2) the measurement function for sparsely representing the data in a low-dimensional subspace. In our work, we address the following motivating scientific questions: Is it possible to reconstruct dense point cloud data from a few sparse measurements? And what could be the optimal limit for the CS sampling ratio with respect to overall classification metrics? Our work proposes a novel Convolutional Neural Network based deep Compressive Sensing Network (named LidarCSNet) for generating sparse representations using publicly available 3D lidar point clouds of the Philippines. We have performed extensive evaluations analysing the reconstruction for different sampling ratios {4%, 10%, 25%, 50% and 75%}, and we observed that our proposed LidarCSNet reconstructed the 3D lidar point cloud with a maximum PSNR of 54.47 dB for a sampling ratio of 75%. We investigate the efficacy of our novel LidarCSNet framework with 3D airborne lidar point clouds for two domains, forests and the urban environment, on the basis of Peak Signal to Noise Ratio, Hausdorff distance, Pearson Correlation Coefficient and Kolmogorov-Smirnov Test Statistic as evaluation metrics for 3D reconstruction. The results relevant to forests, such as the Canopy Height Model and 2D vertical profile, are compared with the ground truth to investigate the robustness of the LidarCSNet framework. In the urban environment, we extend our work to propose two novel 3D lidar point cloud classification frameworks, LidarNet and LidarNet++, achieving a maximum classification accuracy of 90.6% as compared to other prominent lidar classification frameworks. The improved classification accuracy is attributed to ensemble-based learning on the proposed novel 3D feature stack and justifies the robustness of using our proposed LidarCSNet for near-perfect reconstruction followed by classification. We document our classification results for the original dataset along with the point clouds reconstructed by using LidarCSNet for five different measurement ratios, based on overall accuracy and mean Intersection over Union as evaluation metrics for 3D classification. It is envisaged that our proposed deep network based convolutional sparse coding approach for rapid lidar point cloud processing finds huge potential across vast applications, either as a plug-and-play (reconstruction) framework or as an end-to-end (reconstruction followed by classification) system for scalability.
  @article{Shinde2021,
      doi = {10.1016/j.isprsjprs.2021.08.019},
      url = {https://doi.org/10.1016/j.isprsjprs.2021.08.019},
      year = {2021},
      month = aug,
      publisher = {Elsevier},
      volume = {180},
      pages = {313-334},
      author = {Rajat C. Shinde and Surya S. Durbha and Abhishek V. Potnis},
      title = {LidarCSNet: A Deep Convolutional Compressive Sensing Reconstruction Framework for 3D Airborne Lidar Point Cloud},
      journal = {{ISPRS} Journal of Photogrammetry and Remote Sensing}
  }
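As a rough illustration of the CS measurement model and the PSNR metric discussed above, consider the NumPy sketch below; the random 1-D signal is a toy stand-in for lidar data, and the pseudo-inverse is only a crude, training-free baseline where LidarCSNet would perform the reconstruction.

  import numpy as np

  rng = np.random.default_rng(0)

  n = 1024                                  # signal dimension (toy stand-in)
  x = rng.random(n)                         # original signal
  ratio = 0.25                              # CS sampling ratio (25%)
  m = int(ratio * n)                        # number of measurements

  phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
  y = phi @ x                               # m linear, non-adaptive measurements

  def psnr(reference, estimate):
      """Peak Signal-to-Noise Ratio (dB), as used to score 3D reconstruction."""
      mse = np.mean((reference - estimate) ** 2)
      return 10 * np.log10(reference.max() ** 2 / mse)

  # A learned network such as LidarCSNet would recover x from y; the
  # minimum-norm least-squares estimate below is only a crude baseline.
  x_hat = np.linalg.pinv(phi) @ y
  print(f"PSNR at {int(ratio * 100)}% sampling: {psnr(x, x_hat):.2f} dB")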
Towards Visual Exploration of Semantically Enriched Remote Sensing Scene Knowledge Graphs (RSS-KGs)
Abhishek Potnis, Surya Durbha, Rajat Shinde, Pratyush Talreja
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2021)
There has been an increase in the adoption of Linked Data and subsequently representing data in the form of knowledge graphs across a wide spectrum of domains. There has also been significant interest in the remote sensing community to publish Earth Observation data in the form of Linked Data. As the geospatial Linked Data cloud on the internet grows, there arises a need for efficient methods of exploratory analysis of such information-rich geospatial knowledge graphs. Knowledge graph representation of remote sensing scenes has proved to add significant value for effective mining of implicit information in addition to seamless integration with other data sources. This work is geared towards visual exploration of semantically enriched Remote Sensing Scene Knowledge Graphs (RSS-KGs). In this paper, we propose and implement an interactive web-based interface to visually explore and interact with RSS-KGs using Cesium. The proposed interface seeks to visualize the knowledge graph in the form of nodes and edges, mapped over the remote sensing scene consisting of different land use land cover regions and their inferred characteristics in addition to their spatial relationships with one another. It is envisaged that visualization in the form of nodes and edges would aid in visually validating the spatial relations in the knowledge graph, thus enhancing the understanding of the geospatial knowledge graph from the end user perspective. We demonstrate the efficacy of the interface through the visual exploration of an enriched geospatial knowledge graph of a remote sensing scene captured during an urban flood event.
@INPROCEEDINGS{9554836,
      author={Potnis, Abhishek V. and Durbha, Surya S. and Shinde, Rajat C. and Talreja, Pratyush V.},
      booktitle={2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS}, 
      title={Towards Visual Exploration Of Semantically Enriched Remote Sensing Scene Knowledge Graphs (RSS-KGs)}, 
      year={2021},
      pages={5783-5786},
      doi={10.1109/IGARSS47720.2021.9554836}}

Towards Enabling Deep Learning Based Question-Answering for 3D LiDAR Point Clouds
Rajat Shinde, Surya Durbha, Abhishek Potnis, Pratyush Talreja, Gaganpreet Singh
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2021)
Remote sensing lidar point clouds embed inherent 3D topological, topographical and complex geometrical information, which possesses immense potential in applications involving machine-understandable 3D perception. Lidar point clouds are unstructured, unlike images, and hence are challenging to process. In our work, we explore the possibility of deep learning-based question-answering on 3D lidar point clouds. We propose a deep CNN-RNN parallel architecture to learn lidar point cloud features and word embeddings from the questions, and fuse them to form a feature mapping for generating answers. We have restricted our experiments to the urban domain and present preliminary results of binary question-answering (yes/no) using urban lidar point clouds, based on perplexity, edit distance, evaluation loss, and sequence accuracy as the performance metrics. Our proposed hypothesis of lidar question-answering is the first attempt, to the best of our knowledge, and we envisage that our novel work could be a foundation for using lidar point clouds for enhanced 3D perception in an urban environment. We envisage that our proposed lidar question-answering could be extended to machine comprehension-based applications such as rendering lidar scene descriptions and content-based 3D scene retrieval.
@INPROCEEDINGS{9553785,  
      author={Shinde, Rajat C. and Durbha, Surya S. and Potnis, Abhishek V. and Talreja, Pratyush and Singh, Gaganpreet},
      booktitle={2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS},   
      title={Towards Enabling Deep Learning-Based Question-Answering for 3D Lidar Point Clouds},   
      year={2021},  
      pages={6936-6939},  
      doi={10.1109/IGARSS47720.2021.9553785}
    }
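A minimal PyTorch sketch of such a parallel CNN-RNN design, assuming toy sizes and a binary (yes/no) answer head; it mirrors the fusion idea only, not the paper's exact architecture.

  import torch
  import torch.nn as nn

  class LidarQA(nn.Module):
      """Parallel CNN-RNN sketch: a point-wise 1D-CNN branch encodes the
      point cloud, an LSTM branch encodes the tokenized question, and the
      fused feature vector is classified into a binary yes/no answer."""
      def __init__(self, vocab_size=5000, embed_dim=128, hidden=256):
          super().__init__()
          self.pc_encoder = nn.Sequential(       # input: (B, 3, N) xyz points
              nn.Conv1d(3, 64, 1), nn.ReLU(),
              nn.Conv1d(64, hidden, 1), nn.ReLU(),
              nn.AdaptiveMaxPool1d(1),           # global point-cloud feature
          )
          self.embed = nn.Embedding(vocab_size, embed_dim)
          self.rnn = nn.LSTM(embed_dim, hidden, batch_first=True)
          self.head = nn.Linear(hidden * 2, 2)   # yes / no logits

      def forward(self, points, question_tokens):
          pc = self.pc_encoder(points).squeeze(-1)           # (B, hidden)
          _, (h, _) = self.rnn(self.embed(question_tokens))  # h: (1, B, hidden)
          return self.head(torch.cat([pc, h[-1]], dim=1))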

Real-Time Embedded HPC Based Earthquake Damage Mapping Using 3D LiDAR Point Clouds
Pratyush Talreja, Surya Durbha, Rajat Shinde, Abhishek Potnis
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2021)
In the early hours following an earthquake, supporting humanitarian actions such as rescue operations and relief distribution is the primary objective of rescue managers. Damage mapping can be performed using reliable data obtained from high-resolution satellite imagery, but obtaining satellite imagery can be challenging for some days post-disaster due to revisit times. Considering disaster response timing, Unmanned Aerial Vehicles (UAVs) are used instead, because ground transportation systems are rendered ineffective by road blockages. In this work, we make use of Light Detection and Ranging (LiDAR) 3D point cloud data obtained for the Haiti earthquake. The focus of our work is to develop and implement an approach for LiDAR data classification to enable earthquake damage mapping and detection. This is achieved by running our deep learning network on the NVIDIA Jetson Nano embedded supercomputing platform. The approach takes advantage of the embedded High Performance Computing and low power consumption capabilities of the Jetson Nano, which enhance the classification and promote rapid response, the key to managing post-disaster activities. The Jetson Nano is a feasible option as it provides a GPU architecture optimized for running energy-aware deep learning models and generates results in real or near-real time. We envisage that our work could be extended to perform near real-time classification of LiDAR point clouds in a post-earthquake scenario.
@INPROCEEDINGS{9554481,
      author={Talreja, Pratyush and Durbha, Surya S. and Shinde, Rajat C. and Potnis, Abhishek V.},
      booktitle={2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS}, 
      title={Real-Time Embedded HPC Based Earthquake Damage Mapping Using 3D LiDAR Point Clouds}, 
      year={2021},
      volume={},
      number={},
      pages={8241-8244},
      doi={10.1109/IGARSS47720.2021.9554481}
    }
Towards Natural Language Question Answering over Earth Observation Linked Data using Attention-based Neural Machine Translation
Abhishek Potnis, Rajat Shinde, Surya Durbha
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2020)
With an increase in Geospatial Linked Open Data being adopted and published over the web, there is a need to develop intuitive interfaces and systems for seamless and efficient exploratory analysis of such rich, heterogeneous, multi-modal datasets. This work is geared towards improving the exploration process of Earth Observation (EO) Linked Data by developing a natural language interface to facilitate querying. Questions asked over Earth Observation Linked Data have an inherent spatio-temporal dimension and can be represented using GeoSPARQL. This paper seeks to study and analyze the use of RNN-based neural machine translation with attention for transforming natural language questions into GeoSPARQL queries. Specifically, it aims to assess the feasibility of a neural approach for identifying and mapping spatial predicates in natural language to GeoSPARQL's topology vocabulary extension, including the Egenhofer and RCC8 relations. The queries can then be executed over a triple store to yield answers to the natural language questions. A dataset consisting of mappings from natural language questions to GeoSPARQL queries over the Corine Land Cover (CLC) Linked Data has been created to train and validate the deep neural network. From our experiments, it is evident that neural machine translation with attention is a promising approach for the task of translating spatial predicates in natural language questions to GeoSPARQL queries.
    @INPROCEEDINGS{9323183,
      author={A. V. {Potnis} and R. C. {Shinde} and S. S. {Durbha}},
      booktitle={IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium},
      title={Towards Natural Language Question Answering Over Earth Observation Linked Data Using Attention-Based Neural Machine Translation},
      year={2020},
      volume={},
      number={},
      pages={577-580},
      doi={10.1109/IGARSS39084.2020.9323183}}
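For a sense of the translation target, here is one illustrative question-to-GeoSPARQL training pair; the land-cover class names and question phrasing are hypothetical CLC-style stand-ins, while geo:hasGeometry and the simple-features predicate geo:sfWithin come from GeoSPARQL's topology vocabulary.

  # Illustrative (question, query) pair of the kind used to train the
  # attention-based NMT model; phrasing and class names are assumptions.
  pair = {
      "question": "Which water bodies lie within urban areas?",
      "geosparql": """
          PREFIX geo: <http://www.opengis.net/ont/geosparql#>
          PREFIX clc: <http://example.org/clc#>
          SELECT ?water WHERE {
              ?urban a clc:UrbanFabric ; geo:hasGeometry ?gu .
              ?water a clc:WaterBody ; geo:hasGeometry ?gw .
              ?gw geo:sfWithin ?gu .   # spatial predicate mapped from 'within'
          }
      """,
  }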
Online Point Cloud Super Resolution Using Dictionary Learning For 3D Urban Perception
Rajat Shinde, Abhishek Potnis, Surya Durbha
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2020)
Real-time embedded vision tasks require the extraction of complex geometric and morphological features from raw 3D point clouds acquired using range scanning systems such as lidar and radar, and depth cameras. Such applications are found in autonomous navigation, surveying, 3D mapping, and localization tasks such as automatic target recognition (ATR). Typically, a dataset acquired during surveying by remote sensing lidar scanners, known as a point cloud, (1) is huge in size and requires a big chunk of memory for processing at a single instance, and (2) suffers from missing information due to rapid changes in the orientation of the sensor while scanning. In our work, we address both issues jointly by proposing an online point cloud super-resolution approach for translating a low-dimensional point cloud to a high-dimensional dense point cloud by learning dictionaries in the low-dimensional subspace. We present our approach for an urban road scenario by reconstructing dense point clouds of 3D objects and comparing results based on PSNR and Hausdorff distance.
    @INPROCEEDINGS{9323992,
      author={R. C. {Shinde} and A. V. {Potnis} and S. S. {Durbha}},
      booktitle={IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium},
      title={Online Point Cloud Super Resolution using Dictionary Learning for 3D Urban Perception},
      year={2020},
      volume={},
      number={},
      pages={4414-4417},
      doi={10.1109/IGARSS39084.2020.9323992}}
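A compact scikit-learn sketch of the underlying idea: online (mini-batch) dictionary learning over patch vectors. The random array is a stand-in for flattened lidar neighborhoods, and the paired high-resolution dictionary of a full super-resolution pipeline is not shown.

  import numpy as np
  from sklearn.decomposition import MiniBatchDictionaryLearning

  # Toy stand-in for training data: each row is a flattened local
  # neighborhood ("patch") of a sparse, low-dimensional point cloud.
  low_res_patches = np.random.rand(1000, 64)

  # Learn an overcomplete dictionary with online (mini-batch) updates,
  # mirroring the streaming setting described above.
  dico = MiniBatchDictionaryLearning(n_components=128, batch_size=32)
  dico.fit(low_res_patches)

  # Sparse codes for new measurements; a jointly trained high-resolution
  # dictionary would map these codes back to dense patches.
  codes = dico.transform(low_res_patches[:10])
  print(codes.shape)   # (10, 128) sparse coefficients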
Semantics-enabled Spatio-Temporal Modeling of Earth Observation Data: An application to Flood Monitoring
Kuldeep Kurte, Abhishek Potnis, Surya Durbha
ACM SIGSPATIAL International Workshop on Advances on Resilient and Intelligent Cities(ARIC 2019), USA
Extreme events such as urban floods are dynamic in nature, i.e. they evolve with time. The spatiotemporal analysis of such disastrous events is important for understanding the resiliency of an urban system during these events. Remote Sensing (RS) data is one of the crucial earth observation (EO) data sources that can facilitate such spatiotemporal analysis, owing to its wide spatial coverage and high temporal availability. In this paper, we propose a discrete mereotopology (DM) based approach to enable the representation and querying of spatiotemporal information from a series of multitemporal RS images acquired during a flood disaster event. We represent this spatiotemporal information using a semantic model called the Dynamic Flood Ontology (DFO). To establish the effectiveness and applicability of the proposed approach, spatiotemporal queries relevant during an urban flood scenario, such as 'show me road segments that were partially flooded during the time interval t1', have been demonstrated with promising results.
@inproceedings{Kurte:2019:SSM:3356395.3365545,
 author = {Kurte, Kuldeep and Potnis, Abhishek and Durbha, Surya},
 title = {Semantics-enabled Spatio-Temporal Modeling of Earth Observation Data: An Application to Flood Monitoring},
 booktitle = {Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Advances on Resilient and Intelligent Cities},
 series = {ARIC'19},
 year = {2019},
 isbn = {978-1-4503-6954-1},
 location = {Chicago, IL, USA},
 pages = {41--50},
 numpages = {10},
 url = {http://doi.acm.org/10.1145/3356395.3365545},
 doi = {10.1145/3356395.3365545},
 acmid = {3365545},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {discrete mereotopology, flood disaster, ontology, semantics, spatial relations, spatiotemporal},
}
Multi-Class Segmentation of Urban Floods from Multispectral Imagery using Deep Learning
Abhishek Potnis, Rajat Shinde, Surya Durbha, Kuldeep Kurte
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Japan
Natural disasters such as floods, earthquakes, and hurricanes have a huge impact on society, causing destruction of life and property in their wake. During a disaster such as a flood, it is crucial to understand the dynamics of the situation as it unfolds for effective response. In this paper, we address the problem of satellite image classification for urban floods using deep learning. We propose an encoder-decoder neural network based on the Efficient Residual Factorized ConvNet (ERFNet) for multi-class segmentation of urban floods from multi-spectral satellite imagery. The ERFNet architecture capitalizes on skip connections and one-dimensional convolutions to achieve the best possible trade-off between accuracy and efficiency. Since time is of the essence during a disaster, the choice of the ERFNet architecture on a high performance computing (HPC) platform is apt. Satellite imagery from WorldView-2 of floods in Srinagar, India during September 2014 has been used for this study. The tool 'markGT' has been developed to assist end-to-end annotation of satellite imagery, and the urban flood dataset used for this study has been generated using markGT. The proposed deep learning model over urban flood satellite imagery gives promising results on an Nvidia Tesla K80 GPU. We envisage that the proposed model could be extended and improved for real-time classification of urban floods, thereby aiding disaster response personnel in making informed decisions.
  @INPROCEEDINGS{8900250,
    author={A. V. {Potnis} and R. C. {Shinde} and S. S. {Durbha} and K. R. {Kurte}},
    booktitle={IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium},
    title={Multi-Class Segmentation of Urban Floods from Multispectral Imagery Using Deep Learning},
    year={2019},
    pages={9741-9744},
    keywords={segmentation;flood;multi-class;classification;neural networks},
    doi={10.1109/IGARSS.2019.8900250},
    ISSN={2153-6996},
    month={July},}
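The one-dimensional (factorized) convolutions mentioned above can be sketched as an ERFNet-style "non-bottleneck-1D" residual block in PyTorch; this simplified sketch (batch normalization and dropout omitted) illustrates the idea, not the exact model used in the paper.

  import torch
  import torch.nn as nn

  class NonBottleneck1D(nn.Module):
      """ERFNet-style factorized residual block: each 3x3 convolution is
      decomposed into a 3x1 followed by a 1x3 convolution, cutting parameters
      and runtime while a skip connection preserves accuracy."""
      def __init__(self, channels, dilation=1):
          super().__init__()
          d = dilation
          self.conv = nn.Sequential(
              nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)), nn.ReLU(),
              nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)), nn.ReLU(),
              nn.Conv2d(channels, channels, (3, 1), padding=(d, 0), dilation=(d, 1)), nn.ReLU(),
              nn.Conv2d(channels, channels, (1, 3), padding=(0, d), dilation=(1, d)),
          )
          self.relu = nn.ReLU()

      def forward(self, x):
          return self.relu(x + self.conv(x))  # residual (skip) connection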

A Semantic Framework for Spatial Query Reformulation for Disaster Monitoring Applications
Kuldeep Kurte, Abhishek Potnis, Surya Durbha, Rajat Shinde
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Japan
In disasters, since time is of the essence, quick decision-making based on actionable insights is desired. In our earlier work, we demonstrated that spatial relationship-based queries can play a vital role in the disaster response phase. However, we found that the utilization of spatial relationship rules (i.e. encoded spatial knowledge) via a rule reasoning process does not scale well with an increased number of image regions. Most of the available Resource Description Framework (RDF) triplestores do not support rule reasoning due to the computational complexity and undecidable nature of the rule reasoning process. In this paper, we propose an alternative approach for utilizing spatial knowledge encoded in the form of spatial relationship rules. The proposed approach reformulates the spatial query by expanding it with the configuration encoded in the corresponding spatial relationship rule. The preliminary results are promising and show the applicability of the proposed approach during time-critical events such as flood disasters.
 @INPROCEEDINGS{8898986,
 author={K. R. {Kurte} and A. V. {Potnis} and S. S. {Durbha} and R. C. {Shinde}},
 booktitle={IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium},
 title={Semantic Framework for Spatial Query Reformulation for Disaster Monitoring Applications},
 year={2019},
 volume={},
 number={},
 pages={9946-9949},
 keywords={Spatial relations;Query reformulation;SPARQL;RDF;SWRL;Linked data;Disaster response},
 doi={10.1109/IGARSS.2019.8898986},
 ISSN={2153-6996},
 month={July},}

Compressive Sensing based Reconstruction and Classification of VHR Disaster Satellite Imagery Using Deep Learning
Rajat Shinde, Abhishek Potnis, Surya Durbha, Prakash Andugula
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Japan
Disasters such as earthquakes, floods, and landslides cause great economic and social loss, destroying life and property and creating chaos in their wake. Following a disaster, it becomes very significant to take real-time, on-the-fly actions to minimize the effects of the event. Remote sensing data acquired through airborne or spaceborne platforms is usually huge in size and requires considerable time to generate actionable insights during a disaster scenario. In this work, we propose a two-fold analysis of Very High Resolution (VHR) satellite imagery based on Compressive Sensing (CS) and deep learning, employing a deep learning approach for inferencing over compressively sensed satellite imagery. We hypothesize that this could be beneficial in generating real-time actionable insights during a catastrophe. In our work, we use GeoEye-1 satellite imagery of the Haiti earthquake. Our objectives are: (1) to generate CS images for 75%, 50%, and 25% sampling on the sparse space, and (2) to develop a deep learning pixel-level classification model based on the UNet architecture using the original and reconstructed images. The UNet architecture has shown promising results for pixel-level classification in the recent literature. We envisage combining both objectives into an end-to-end learning framework for on-board processing, which we foresee would be of great significance in various applications for rapid disaster management response.
  @INPROCEEDINGS{8899871,
    author={R. C. {Shinde} and A. V. {Potnis} and S. S. {Durbha} and P. {Andugula}},
    booktitle={IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium},
    title={Compressive Sensing Based Reconstruction and Pixel-Level Classification of Very High-Resolution Disaster Satellite Imagery Using Deep Learning},
    year={2019},
    pages={2639-2642},
    keywords={Compressed Sensing;Earthquake Disaster Response;Deep Learning},
    doi={10.1109/IGARSS.2019.8899871},
    ISSN={2153-6996},
    month={July},}

Rapid Earthquake Damage Detection using Deep Learning from VHR Remote Sensing Images
Ujwala Bhangale, Surya Durbha, Abhishek Potnis, Rajat Shinde
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Japan
Very High Resolution (VHR) remote sensing optical imagery is a huge source of information that can be utilized for earthquake damage detection and assessment. Time-critical tasks such as performing damage assessment and providing immediate delivery of relief assistance require rapid response; however, processing voluminous VHR imagery using highly accurate, but computationally expensive, deep learning algorithms demands High Performance Computing (HPC) power. To maximize accuracy, a deep convolutional neural network (CNN) model is designed especially for earthquake damage detection using remote sensing data and implemented on a high-performance GPU without compromising execution time. GeoEye-1 VHR disaster images of the Haiti earthquake, which occurred in 2010, are used for the analysis. The proposed model provides good accuracy for damage detection, and significant execution speed is observed on an NVIDIA K80 GPU High Performance Computing (HPC) platform.
  @INPROCEEDINGS{8898147,
  author={U. {Bhangale} and S. {Durbha} and A. {Potnis} and R. {Shinde}},
  booktitle={IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium},
  title={Rapid Earthquake Damage Detection Using Deep Learning from VHR Remote Sensing Images},
  year={2019},
  volume={},
  number={},
  pages={2654-2657},
  keywords={Deep learning;Deep CNN;GPU;damage detection;HPC},
  doi={10.1109/IGARSS.2019.8898147},
  ISSN={2153-6996},
  month={July},}



A Geospatial Ontological Model for Remote Sensing Scene Semantic Knowledge Mining for the Flood Disaster
Abhishek Potnis, Surya Durbha
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2018), Spain
Numerous remote sensing applications, such as flood monitoring, forest fire monitoring, and earthquake analysis, require users to query satellite images based on their content. Such requirements have led to the evolution of Content-Based Image Information Mining systems over the last decade. Recent developments in the area of Image Information Mining (IIM) are geared towards bridging the gap between low-level image features and higher-level semantics. This research focuses on improving the semantic understanding of a remote sensing scene during a flood disaster from a spatio-contextual standpoint. During a flood occurrence, it is crucial to understand the flood inundation and receding patterns in the context of the spatial configurations of the land use/land cover in the flooded regions. This study focuses on bridging the spatio-contextual semantic gap in the understanding of remote sensing imagery during a flood, thereby attempting to improve the machine interpretability of flood remote sensing imagery. In this regard, the Flood Scene Ontology (FSO) has been developed to mine topological and directional knowledge in the context of the flood disaster phenomenon. The FSO is envisaged to form the basis for developing applications that would utilize the spatio-contextual semantics of the flood disaster to aid in the disaster assessment and management process. This paper describes the conceptual framework that was developed to address the same.
@INPROCEEDINGS{8517680,
author={A. V. Potnis and S. S. Durbha and K. R. Kurte},
booktitle={IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium},
title={A Geospatial Ontological Model for Remote Sensing Scene Semantic Knowledge Mining for the Flood Disaster},
year={2018},
volume={},
number={},
pages={5274-5277},
keywords={content-based retrieval;data mining;disasters;fires;floods;geographic information systems;geophysical image processing;hydrological techniques;image retrieval;ontologies (artificial intelligence);remote sensing;flood scene ontology;remote sensing scene semantic knowledge mining;content-based image information mining systems;IIM;numerous remote sensing applications;low level image features;satellite images;forest fires monitoring;geospatial ontological model;spatio-contextual semantics;flood disaster phenomenon;flood remote sensing imagery;spatio-contextual semantic gap;flooded regions;flood inundation;flood occurrence;spatio-contextual standpoint;semantic understanding;higher-level semantics;Floods;Remote sensing;Ontologies;Roads;Semantics;Buildings;Resource description framework;geospatial;ontology;contextual;flood;disaster;semantics},
doi={10.1109/IGARSS.2018.8517680},
ISSN={2153-7003},
month={July},}

On-Board Biophysical Parameters Estimation using High Performance Computing
Pratyush Talreja, Surya Durbha, Abhishek Potnis
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2018), Spain
The Jetson TK1 is the first mobile processor from NVIDIA with features and an architecture similar to those of a modern desktop GPU, while still drawing the low power of a mobile chip. The Jetson TK1 can therefore run the same CUDA code as a desktop GPU with a similar level of performance. With the dawn of GPU technology, it has also become possible to perform computationally intensive tasks in real time or near-real time. In the agricultural domain, retrieving the biophysical parameters of a crop is important, as it provides insights into the plant growth status. Inversion of a Radiative Transfer Model enables these parameters to be obtained; however, such a process is highly computationally intensive. The focus of this work is to develop and implement an approach that takes advantage of the embedded High Performance Computing (HPC) capability of the Jetson TK1 to significantly improve the inversion process of a Radiative Transfer Model. The experimental results show that Jetson TK1 based biophysical parameter estimation gives significant speedup, which opens up the possibility of a Jetson based embedded platform for on-board biophysical parameter estimation in the future. In scenarios with energy and power constraints, the Jetson TK1 can become a practicable option by providing a GPU based architecture for running energy-aware, computationally intensive algorithms in parallel, generating results in real time or near-real time while taking care of power usage.
@INPROCEEDINGS{8518403,
author={P. V. Talreja and S. S. Durbha and A. V. Potnis},
booktitle={IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium},
title={On-Board Biophysical Parameters Estimation Using High Performance Computing},
year={2018},
volume={},
number={},
pages={5445-5448},
keywords={crops;embedded systems;graphics processing units;parallel architectures;parameter estimation;power aware computing;radiative transfer;embedded platform;highly computationally intensive;plant growth status;agricultural domain;CUDA code;mobile chip;NVIDIA;mobile processor;embedded high-performance computing;radiative transfer model;energy-aware computationally intensive algorithms;GPU based architecture;modern desktop GPU;Jetson TK1;on-board biophysical parameters estimation;Jetson TK1;Biophysical parameters estimation;GPU;HPC;Radiative Transfer Model},
doi={10.1109/IGARSS.2018.8518403},
ISSN={2153-7003},
month={July},}

A Spatio-Temporal Ontological Model for Flood Disaster Monitoring
Kuldeep Kurte, Surya Durbha, Roger King, Nicolas Younan, Abhishek Potnis
IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2017), United States of America
During an extreme event such as a flood disaster, it is important to study flood inundation and receding patterns to understand the dynamic spatio-temporal behavior of the flood. In addition to the general change detection techniques in remote sensing, a proper conceptualization of 'change' during a flood disaster is necessary to model its dynamic behavior. This motivated the development of an ontology that is able to capture the dynamically evolving phenomenon. This ontology is envisaged as a precursor for developing applications that integrate the spatio-temporal dimensions of a dynamically evolving system such as a flood. This paper describes the conceptual framework that was developed to address the same.
@INPROCEEDINGS{8128176,
author={K. R. Kurte and S. S. Durbha and R. L. King and N. H. Younan and A. V. Potnis},
booktitle={2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
title={A spatio-temporal ontological model for flood disaster monitoring},
year={2017},
volume={},
number={},
pages={5213-5216},
keywords={disasters;emergency management;floods;hydrological techniques;ontologies (artificial intelligence);spatiotemporal phenomena;Ontology;dynamically evolving system;flood disaster monitoring;flood dynamic spatio-temporal behavior;flood inundation;general change detection techniques;spatio-temporal dimensions;spatio-temporal ontological model;Buildings;Floods;Geospatial analysis;OWL;Ontologies;Roads;Semantics;4D-fluent;Ontology;Spatio-Temporal},
doi={10.1109/IGARSS.2017.8128176},
ISSN={},
month={July},}
Exploring Visualization of Geospatial Ontologies Using Cesium
Abhishek Potnis, Surya Durbha
International Workshop on Visualization and Interaction for Ontologies and Linked Data (VOILA), International Semantic Web Conference (ISWC 2016), Japan
In recent years, there has been a substantial increase in the usage of geospatial data, not only by the scientific community but also by the general public. Considering the diverse and heterogeneous nature of geospatial applications around the world and their inter-dependence, there is an impending need for enabling sharing of semantics of such content-rich geospatial information. Geospatial ontologies form the building blocks for sharing of semantics of this information, thus ensuring interoperability. Visualization of geospatial ontologies from a spatio-temporal perspective can greatly benefit the process of knowledge engineering in the geospatial domain. This paper proposes to visually explore and reason over the instances of a geospatial ontology – the geopolitical ontology developed by the Food and Agriculture Organization of the United Nations using Cesium – a WebGL based virtual globe. It advocates the usage of Cesium for visualization of geospatial ontologies in general by demonstrating visualizations of geospatial data and their relationships.
@inproceedings{DBLP:conf/semweb/PotnisD16,
  author    = {Abhishek Potnis and
               Surya S. Durbha},
  title     = {Exploring Visualization of Geospatial Ontologies using Cesium},
  booktitle = {Proceedings of the Second International Workshop on Visualization
               and Interaction for Ontologies and Linked Data co-located with the
               15th International Semantic Web Conference, VOILA@ISWC 2016, Kobe,
               Japan, October 17, 2016.},
  pages     = {143--150},
  year      = {2016},
  crossref  = {DBLP:conf/semweb/2016voila},
  url       = {http://ceur-ws.org/Vol-1704/paper14.pdf},
  timestamp = {Wed, 12 Oct 2016 15:50:13 +0200},
  biburl    = {https://dblp.org/rec/bib/conf/semweb/PotnisD16},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Poster Presentation: Semantic Mediation and Reasoning on Streaming Sensor Data over Geospatial Sensor Web
Abhishek Potnis, Surya Durbha
12th Semantic Web Summer School (SSSW 2016), Italy

Projects

Machine Learning based Cropland Mapping in India for Identifying Human-Wildlife Conflict Locations
Mentor: Anubhav Vanamamalai, Centre for Wildlife Studies


Project Description:
  • Implemented supervised satellite image classification in Google Earth Engine to identify different crop types, towards understanding human-wildlife conflict (see the sketch below)
  • Experimented with different classification approaches, such as Random Forest, SVM, and ANN, to maximize model performance
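A minimal Earth Engine Python API sketch of this supervised classification flow; the image collection, band choice, training asset ID, and label property are assumptions for illustration.

  import ee
  ee.Initialize()

  # Annual Sentinel-2 median composite with visible + NIR bands (assumed choice).
  image = (ee.ImageCollection("COPERNICUS/S2_SR")
           .filterDate("2019-01-01", "2019-12-31")
           .median()
           .select(["B2", "B3", "B4", "B8"]))

  # Hypothetical labeled crop polygons with a 'crop_class' property.
  labels = ee.FeatureCollection("users/example/crop_training")

  training = image.sampleRegions(collection=labels,
                                 properties=["crop_class"], scale=10)

  classifier = (ee.Classifier.smileRandomForest(100)
                .train(features=training,
                       classProperty="crop_class",
                       inputProperties=["B2", "B3", "B4", "B8"]))

  crop_map = image.classify(classifier)   # per-pixel crop-type map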
Identifying Solar Farms in India using Machine Learning with Google Earth Engine
Google Earth Engine India Advanced Summit Buildathon 2019


Project Description:
  • Worked in a team of 6 to solve the binary classification problem of detecting solar farms in India
  • Employed a Random Forest classifier with R, G, B, NIR, and VV polarization as features to obtain an accuracy of 81.07%
  • Added a wavelet kernel-based convolution as an additional feature to capture solar panels' texture, improving the accuracy to 83.65%
Pixel Purity Index and Spectral Angle Based Satellite Image Classifier
Advanced Satellite Image Processing Course Project



Project Description:
  • Studied and implemented end-member extraction using the Pixel Purity Index and Spectral Angle to develop a classifier for classifying satellite imagery into land use land cover classes (see the spectral-angle sketch below)
  • Developed an interactive web application to upload a satellite image and perform image classification
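A minimal NumPy sketch of the spectral-angle step, assuming pixel spectra and extracted end members are given as arrays of band values:

  import numpy as np

  def spectral_angles(pixels, endmembers):
      """Angle (radians) between each pixel spectrum and each endmember
      spectrum; a smaller angle means a closer spectral match."""
      p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
      e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
      cos = np.clip(p @ e.T, -1.0, 1.0)      # (n_pixels, n_endmembers)
      return np.arccos(cos)

  def classify(pixels, endmembers, max_angle=0.10):
      """Label each pixel with its best-matching endmember; pixels whose
      smallest angle exceeds max_angle are left unclassified (-1)."""
      angles = spectral_angles(pixels, endmembers)
      labels = angles.argmin(axis=1)
      labels[angles.min(axis=1) > max_angle] = -1
      return labels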
Emergency Response Route Navigation and Simulation of Bus Service in IIT Bombay Campus
Geographic Information Systems Course Project



Project Description:
  • Implemented a route navigation feature using pgRouting that identifies the nearest bus to an emergency location and guides it along the shortest route computed using Dijkstra's algorithm, displaying the time to reach the location (see the query sketch below)
  • Developed an interactive web application that simulated the mini-bus service on the campus
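A minimal Python sketch of the routing query, assuming a PostGIS database with pgRouting enabled and a 'ways' edge table; the database name, node IDs, and cost units are hypothetical.

  import psycopg2

  conn = psycopg2.connect(dbname="campus_gis")   # hypothetical database
  cur = conn.cursor()

  bus_node, emergency_node = 42, 137             # hypothetical network node IDs

  # Shortest path from the nearest bus to the emergency location via
  # pgRouting's Dijkstra implementation over the campus road network.
  cur.execute(
      """
      SELECT seq, node, edge, cost, agg_cost
      FROM pgr_dijkstra(
          'SELECT gid AS id, source, target, cost FROM ways',
          %s, %s, directed := false)
      """,
      (bus_node, emergency_node),
  )
  route = cur.fetchall()
  eta = route[-1][-1] if route else None         # agg_cost at the final step
  print("Estimated time to reach the location:", eta)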
Satellite Image Classifier using Parallelepiped Classification
Satellite Image Processing Course Project



Project Description:
  • Studied and implemented the pixel-based Parallelepiped Classifier for classifying satellite imagery into land use land cover classes (a minimal sketch follows below)
  • Developed an interactive web application for training the classifier to generate a model and perform satellite image classification
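A minimal NumPy sketch of the parallelepiped decision rule over per-class band-wise min/max boxes; the training arrays are assumed inputs:

  import numpy as np

  def fit_boxes(training):
      """training maps class label -> (n_samples, n_bands) array of pixel
      spectra; each class gets a band-wise (min, max) parallelepiped."""
      return {c: (s.min(axis=0), s.max(axis=0)) for c, s in training.items()}

  def classify(pixels, boxes):
      """Assign each pixel the first class whose box contains it band-wise;
      pixels falling outside every box remain unclassified (-1)."""
      labels = np.full(len(pixels), -1)
      for c, (lo, hi) in boxes.items():
          inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
          labels[(labels == -1) & inside] = c
      return labels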

An Integrated Client-Server based Interoperable Geographic Information System for Forest Fire Monitoring
Geospatial Data Interoperability Course Project



Project Description:
  • Developed an AJAX-driven interactive web client aimed at integrating and querying geospatial web services using GeoServer and Google Web Toolkit
  • Integrated services such as the Web Feature Service (WFS), Web Map Service (WMS), Web Coverage Service (WCS) and Sensor Observation Service (SOS) to form a web mash-up (see the sketch below)
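The project's client was written with Google Web Toolkit; as an analogous Python sketch, OWSLib can issue the same OGC requests (the GeoServer URL, layer name, and bounding box below are placeholders):

  from owslib.wms import WebMapService

  # Connect to a (hypothetical) local GeoServer WMS endpoint.
  wms = WebMapService("http://localhost:8080/geoserver/wms", version="1.1.1")

  # Fetch a rendered map tile for a placeholder forest-fire layer.
  img = wms.getmap(layers=["fires:active_fire_zones"],
                   srs="EPSG:4326",
                   bbox=(72.0, 18.0, 78.0, 22.0),
                   size=(512, 512),
                   format="image/png",
                   transparent=True)

  with open("fire_layer.png", "wb") as f:
      f.write(img.read())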
Route Navigation and Pothole Monitoring using Crowd Sourced Pothole Mapping
This application won Esri India's mApp Your Way App Development Challenge 2015



Project Description:
  • Developed a GIS-based Android and web application to address the issue of monitoring and managing potholes on Indian roads
  • Implemented intelligent pothole-free routing for senior citizens, pregnant women, and patients


Open Source Contributions

Google Summer of Code 2016 - Enabling Cesium for Liquid Galaxy
Liquid Galaxy
Mentor: Andrew Leahy, Western Sydney University



Project Description:
  • Developed a proof-of-concept prototype application that enabled Cesium, an open source virtual globe, to run across multiple displays, providing an immersive and riveting experience to users
  • Focused on endowing Cesium with features such as camera synchronization, content synchronization across the displays, and space navigation camera control
Google Summer of Code 2015 - NASA’s Data Curtains from Space
Cesium Community
Mentors: Mike McGann, Ryan Boller, NASA



Project Description:
  • Developed a web application to visualize LiDAR profiles captured by the CALIPSO satellite, with the orbital tracks of the satellite and Aqua MODIS reflectance as the base layer, using the CesiumJS library
  • Proposed and implemented the structure of metadata in .json format to be generated from .hdf files
  • Implemented scripts to extract imagery and metadata from .hdf files, to be consumed by the web app (see the sketch below)
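A minimal Python sketch of that extraction step using pyhdf; the granule filename, dataset names, and output schema are illustrative, not the exact CALIPSO product layout or the project's final .json structure.

  import json
  from pyhdf.SD import SD, SDC   # pyhdf reads HDF4 granules such as CALIPSO's

  granule = SD("CAL_LID_L1_granule.hdf", SDC.READ)   # hypothetical filename

  # Assumed dataset names for the profile geolocation arrays.
  lats = granule.select("Latitude")[:].ravel()
  lons = granule.select("Longitude")[:].ravel()

  meta = {
      "num_profiles": int(lats.size),
      # Orbital track as (lon, lat) pairs for drawing in CesiumJS.
      "track": [[float(lo), float(la)] for lo, la in zip(lons, lats)],
  }

  with open("curtain_meta.json", "w") as f:
      json.dump(meta, f)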

Mozilla Firefox | Open Source Code Contributions





Contributions Description:
  • Fixed bugs by authoring code patches primarily in JavaScript for Mozilla Firefox
  • Edited and improved technical articles on Mozilla Developer Network
  • Recognized as a core contributor in the “about:credits” section of Mozilla Firefox
  • Invited to attend the Mozilla Summit 2013 at Santa Clara, USA

Academics


M.Tech. - Ph.D. Dual Degree [2014 - 2021]
Indian Institute of Technology Bombay, India
Specialization: Geoinformatics
CPI: 9.28




Bachelor of Engineering [2009 - 2013]
Vidyavardhini's College of Engg. and Tech., University of Mumbai, India
Specialization: Computer Engineering
Percentage: 74.26

Achievements and Awards

  • Recipient of the Academic Research Credits Grant under the framework of Google Cloud Platform Research Credits Programme
  • Invited to attend the Geo for Good Summit at Google, Sunnyvale, CA, USA in Sep. 2019
  • Successfully completed the project "Machine Learning based Mapping of Croplands with Google Earth Engine for Identifying Human-Wildlife Conflict Locations" with the Centre for Wildlife Studies for the Google Summer of Earth Engine Research Program
  • Winner of Google Earth Engine India Advanced Summit Buildathon 2019 for the project - "Identifying Solar Farms in India using Machine Learning with Google Earth Engine"
  • Finalist in the Google Earth Engine India Challenge 2018
  • Recipient of the IEEE Geoscience and Remote Sensing Society Travel Grant to present at IEEE Geoscience and Remote Sensing Symposium (IGARSS) 2018, Spain
  • Quarter-Finalist for the India Innovation Challenge 2017 hosted by IIM Bangalore and conducted by Government of India and Texas Instruments
  • Recipient of the International Semantic Web Conference 2016 Student Travel Grant funded by Semantic Web Science Association (SWSA) and the US National Science Foundation (NSF) to present at ISWC 2016 at Kobe, Japan
  • Recipient of the Ministry of Human Resource Development, Govt. of India Fellowship for Ph.D. students
  • Official Mozilla Representative [2013 - 2015]
  • Name listed as a Core Contributor on the Mozilla Monument, outside Mozilla’s office space in San Francisco, CA, USA
  • Successfully completed Google Summer of Code 2016
  • Successfully completed Google Summer of Code 2015
  • Represented IIT Bombay for the SAP InnoJAM Challenge 2016 held at SAP Labs, Bangalore
  • Winner of the Esri India's mApp Your Way 2015 - A National Level App Development Challenge for the application – 'Route Navigation and Pothole Monitoring using Crowd Sourced Pothole Mapping'
  • Successfully completed Module 1 of French Language Course conducted by International Relations Office, IIT Bombay, in association with Embassy of France, New Delhi
  • Invited as a Contributor to attend the Mozilla Summit 2013 in Santa Clara, USA

Synergistic Activities

Professional Memberships

  • IEEE Student Member
  • IEEE Geoscience and Remote Sensing Society (GRSS) Student Member
  • Student Member of Resources Engineers Association (REA), CSRE, IIT Bombay

Selected Invited Talks

  • Delivered Lightning Talk at Google's Geo For Good Summit 2020 on "Machine Learning based Multi-Class Segmentation of Urban Flood Remote Sensing Scenes with Google Earth Engine"
  • Delivered Talk on "Machine Learning based Mapping of Croplands with Google Earth Engine" in Partner Panel at Google's Geo For Good Summit 2019
  • Delivered Session on "Google Earth Engine and TensorFlow" at the Google Earth Engine Student Summit 2019 at IIT Bombay
  • Delivered Talk on "Flood Mapping with Google Earth Engine" at the Community on Air Webinar organized by the Google Earth Engine India Community
  • Delivered Talk on "Role of Deep Learning in Disaster Monitoring" at the Intel AI Meetup, Mumbai
  • Conducted a two-day workshop on QGIS with Dr. Kuldeep Kurte for the Geology Community as a part of the GeoWeek of October 2017, held at Fergusson College, Pune in Maharashtra, India
  • Delivered Talks on Preparing for Google Summer of Code for students at CSRE, IIT Bombay in January 2017 and January 2018
  • Delivered Talk on Getting Involved in Open Source - Contributing to Mozilla at ISAAC 2014, the Technical Festival of Thadomal Shahani College of Engineering (TSEC), Mumbai in October 2014
  • Delivered Talk on Contributing to Open Source at IIT Bombay as a part of the MozTalk conducted by Web and Coding Club, IIT Bombay in June 2013

Summer School and Tutorials Attended