Srujana Gattupalli

Ph.D. Student    
Computer Science and Engineering Department
University of Texas at Arlington    
Office: ERB 308
Email: srujana DOT gattupalli AT mavs DOT uta DOT edu




CogniLearn: A Robust, Personalized Cognitive Behavior Assessment Tool Using Deep Visual Feature Analysis


The CogniLearn system is a novel tool designed to assist experts in diagnosing Attention Deficit Hyperactivity Disorder (ADHD) by computerizing current cognitive-assessment practices. The proposed method draws on state-of-the-art knowledge from both computer science and cognitive science, and aims to assist therapists in decision making by providing advanced statistics and sophisticated metrics about the subject's performance. In particular, CogniLearn builds on the existing Head-Toes-Knees-Shoulders (HTKS) framework, which serves as a useful measure of behavioral self-regulation. According to the related literature, HTKS is well established for its strong psychometric properties and its ability to assess cognitive dysfunction. The proposed method exploits recent advances in computer vision, combining deep learning and convolutional neural networks with traditional computer vision features, to automate the capture and motion analysis of users performing the HTKS game. Our method aims to tackle several common problems in computer vision, such as multi-person analysis, point-of-view and illumination invariance, subject invariance, and self-occlusions, in a highly sensitive context where accuracy and precision are our first priority. We extensively evaluate our system under varying conditions and experimental setups, and we provide a detailed analysis of its capabilities. As an additional outcome of this work, we released a publicly available, partially annotated dataset consisting of different subjects performing the HTKS activities under different scenarios. Finally, we introduce a set of novel user interfaces, specifically designed to assist human experts with data capture and motion analysis through intuitive and descriptive visualizations.
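
As a rough illustration of the core motion cue that gets scored, consider the sketch below (a minimal example, not the actual CogniLearn pipeline; the keypoint layout and helper names are illustrative assumptions). Given 2D body keypoints from a pose estimator, it decides which HTKS body part the subject's hands are touching.

    import numpy as np

    # Hypothetical keypoint indices; a real pose estimator defines its own layout.
    KEYPOINTS = {"head": 0, "shoulder_l": 1, "shoulder_r": 2,
                 "knee_l": 3, "knee_r": 4, "toe_l": 5, "toe_r": 6,
                 "wrist_l": 7, "wrist_r": 8}

    TARGETS = {"head": ["head"], "shoulders": ["shoulder_l", "shoulder_r"],
               "knees": ["knee_l", "knee_r"], "toes": ["toe_l", "toe_r"]}

    def touched_part(pose):
        """Return the HTKS body part nearest to either wrist.

        pose: (num_keypoints, 2) array of (x, y) image coordinates.
        """
        wrists = pose[[KEYPOINTS["wrist_l"], KEYPOINTS["wrist_r"]]]
        best, best_dist = None, np.inf
        for part, names in TARGETS.items():
            pts = pose[[KEYPOINTS[n] for n in names]]
            # Smallest wrist-to-target distance over both wrists and both sides.
            d = np.linalg.norm(wrists[:, None, :] - pts[None, :, :], axis=-1).min()
            if d < best_dist:
                best, best_dist = part, d
        return best

    # Scoring a response to "touch your head" then reduces to:
    # correct = (touched_part(pose) == "head")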

MAGNI – A Real-time Robot-aided Game-based Tele-Rehabilitation System                     

In this project we present a tele-rehabilitation framework that enables interaction between therapists and patients by combining a graphical user interface with a highly dexterous robotic arm. The system, called MAGNI, integrates a 3D exercise game with a robotic arm operated by a therapist, who assigns prerecorded exercises to patients in real time. The game is designed for patients recovering from an arm injury (e.g., stroke, spinal injury, or a physical injury to the shoulder itself). We developed a front-end user interface for therapists and patients that can be used in real hospitals, and a back-end motion-analysis module for the patient-game and robot interaction. By combining robot-assisted rehabilitation with a 3D video game, the system motivates the user while the therapist-operated GUI allows real-time interaction with the patient. We evaluate the user's exercises against those prescribed by the therapist, using the robotic arm to capture the user's upper-limb range of motion and impairments.
Our prototype demonstrates that a 3D game combined with a robotic end-effector improves user compliance by providing motivation to continue through the prescribed exercises.
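
One way to picture the exercise evaluation is a trajectory comparison between the therapist's prerecorded motion and the patient's performance. The sketch below uses dynamic time warping, a standard choice for speed-invariant trajectory comparison; it illustrates the idea and is not necessarily the exact metric MAGNI uses.

    import numpy as np

    def dtw_distance(prescribed, performed):
        """Dynamic-time-warping distance between two 3D trajectories.

        prescribed: (n, 3) end-effector positions from the therapist's recording.
        performed:  (m, 3) positions captured from the patient's motion.
        A low value means the patient followed the exercise closely,
        even if performed at a different speed.
        """
        n, m = len(prescribed), len(performed)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(prescribed[i - 1] - performed[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # patient lags
                                     cost[i, j - 1],      # patient leads
                                     cost[i - 1, j - 1])  # in step
        return cost[n, m] / (n + m)  # normalize by an upper bound on path length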



Articulated Human Body Pose Estimation

In this project we perform an extensive evaluation of deep-learning-based pose estimation methods through user-independent experiments on our dataset. We also apply transfer learning to these methods; the results show substantial improvement and demonstrate that knowledge transferred from another trained model can boost a method's pose estimation performance. The dataset and results establish a solid baseline for future work and provide information beneficial for sign language recognition (SLR). We also propose a human pose estimation dataset for the SLR domain, called the American Sign Language Image Dataset (ASLID).
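
The transfer-learning setup can be sketched as follows (illustrative PyTorch code under assumed choices: an ImageNet-pretrained ResNet-18 backbone and a coordinate-regression head; the actual methods evaluated in this project use their own architectures).

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_JOINTS = 14  # illustrative upper-body joint count

    # Start from a pretrained backbone and replace its classifier with a
    # head that regresses (x, y) coordinates for every joint.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_JOINTS * 2)

    # Freeze early layers so only the late features and the new head
    # adapt to the sign language pose data.
    for name, param in model.named_parameters():
        if not name.startswith(("layer4", "fc")):
            param.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    criterion = nn.MSELoss()

    def train_step(images, joints):
        """images: (B, 3, 224, 224); joints: (B, NUM_JOINTS * 2), normalized."""
        optimizer.zero_grad()
        loss = criterion(model(images), joints)
        loss.backward()
        optimizer.step()
        return loss.item()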


A Dataset of Robot-aided Upper Limb Exercises and the Motion Analysis Toolbox

In this project we record the patient's whole arm during the execution of basic robot-aided rehabilitation exercises. Our dataset comprises single-arm exercises, recorded with three modalities:
1. Barrett arm end-effector.
2. Skeleton tracker (Kinect 2).
3. Vicon system (each participant wears a band on the wrist, elbow, and shoulder).
We also created a graphical user interface in MATLAB to synchronize the modalities and browse through the annotations. Dataset Link
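
Synchronizing the modalities amounts to resampling each stream onto a common clock. The GUI does this in MATLAB; the sketch below shows the same idea in Python, with illustrative variable names.

    import numpy as np

    def resample_to(ref_times, times, values):
        """Linearly interpolate a recorded stream onto reference timestamps.

        ref_times: (n,) target timestamps, e.g. the Vicon clock.
        times:     (m,) increasing timestamps of the stream being aligned.
        values:    (m, d) samples, e.g. Kinect joint positions or the
                   Barrett arm end-effector pose, one row per timestamp.
        """
        values = np.asarray(values)
        return np.column_stack(
            [np.interp(ref_times, times, values[:, k])
             for k in range(values.shape[1])])

    # Example: align 30 Hz Kinect wrist positions to the 100 Hz Vicon clock,
    # so every annotated frame has a sample from each modality.
    # wrist_aligned = resample_to(vicon_t, kinect_t, kinect_wrist_xyz)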


American Sign Language Recognition

This is an ongoing project that aims to simplify learning ASL and to serve as a medium of communication between deaf and hearing people. An automatic human pose tracker detects upper-body joint positions over continuous sign language video sequences, and motion analysis on the tracked spatial coordinates of body joints in the input video helps improve sign prediction accuracy. The system can be further extended to predict temporal information about the occurrence of any sign in a dataset of ASL videos. The figure below shows the flowchart of the proposed sign language recognition system. It consists of components for upper-body detection and articulated human pose estimation, feature extraction, and spatiotemporal matching. The feature-extraction component extracts feature vectors such as motion, relative locations of joint coordinates, and skin color.

[Figure: Flowchart of the proposed sign language recognition system]
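
The feature-extraction component can be illustrated with a small sketch (assumed joint layout and feature choices; the actual system may extract additional cues such as skin color).

    import numpy as np

    def sign_features(joints):
        """Per-frame feature vectors from tracked upper-body joints.

        joints: (T, J, 2) array of 2D joint positions over T video frames.
        Combines joint locations relative to the neck (a position cue)
        with frame-to-frame displacement (a motion cue).
        """
        neck = joints[:, 0:1, :]              # assume joint 0 is the neck
        relative = joints - neck              # translation-invariant locations
        motion = np.diff(joints, axis=0, prepend=joints[:1])  # per-frame velocity
        feats = np.concatenate([relative, motion], axis=1)
        return feats.reshape(len(joints), -1)

    # Spatiotemporal matching then aligns a query sign's feature sequence
    # against dataset exemplars, e.g. with dynamic time warping.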

 

Classification using Associative Rule Mining

This is a data mining project that studies and organizes 20,000 news articles using concepts from classification and association rule mining. It is a web application that finds the best-matching category for any incoming news article.

We remove stopwords, then perform stemming and frequency counting to obtain keywords. We designed an algorithm that performs best-match categorization using data mining concepts, and implemented CRUD operations on the database that holds the keywords from the 20,000 news articles.
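
A condensed sketch of the keyword pipeline and the best-match step (the stemmer and overlap score here are simplified stand-ins; the actual system also applies association rules mined from the corpus):

    from collections import Counter
    import re

    STOPWORDS = {"the", "a", "an", "is", "of", "to", "in", "and", "for", "on"}

    def stem(word):
        # Crude suffix stripping; a real system would use a Porter stemmer.
        for suffix in ("ing", "ed", "es", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    def keywords(text):
        """Stopword removal + stemming + frequency counting."""
        words = re.findall(r"[a-z]+", text.lower())
        return Counter(stem(w) for w in words if w not in STOPWORDS)

    def best_category(article, profiles):
        """Pick the category whose keyword profile overlaps the article most.

        profiles: {category: Counter of keyword frequencies}, built offline
        from the labeled corpus of 20,000 articles.
        """
        kw = keywords(article)
        return max(profiles,
                   key=lambda c: sum(min(kw[w], profiles[c][w]) for w in kw))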

 

Face Recognition Software (Computer Vision)

This project implements a face detector that detects a face and recognizes the person among the 10 faces used during the training phase. It combines information from skin color (using histograms) and rectangle filters. The software is trained using AdaBoost and utilizes the ideas of bootstrapping and a classifier cascade.
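
Rectangle filters are cheap to evaluate because an integral image turns any box sum into four lookups. A short sketch of that building block (a two-rectangle Haar-like feature, the kind AdaBoost selects weak classifiers from):

    import numpy as np

    def integral_image(gray):
        """Summed-area table: ii[y, x] = sum of gray[:y, :x]."""
        return np.pad(gray, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, y, x, h, w):
        """Sum of pixels in the h-by-w box whose top-left corner is (y, x)."""
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

    def two_rect_feature(ii, y, x, h, w):
        """Left half minus right half of a box: responds to vertical edges.

        AdaBoost thresholds many such features to form weak classifiers,
        which the cascade chains for fast rejection of non-faces.
        """
        half = w // 2
        return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, half)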

Online Survey Software (Object-Oriented Concepts and Programming)

The software generates customized surveys and stores the answers to be viewed by the admin user role. It handles conditional questions efficiently without reloading the page. For this project we created a Java-based front-end user interface for viewing and taking the survey, and a back-end SQL database to keep track of the answers. We used data structures to hold the questions and their answering methods, generating the user interface on the fly, and implemented role management to separate the user who takes the survey from the admin who views the answers.
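
The conditional-question handling comes down to a small data structure: each question carries follow-ups keyed by answer, so the interface can reveal them in place without a page reload. A sketch of the idea (in Python for brevity; the project itself is Java-based, and these names are illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class Question:
        """A survey question plus conditional follow-ups keyed by answer."""
        text: str
        choices: list
        followups: dict = field(default_factory=dict)  # answer -> [Question]

    def next_questions(question, answer):
        """Questions to reveal once `answer` is selected."""
        return question.followups.get(answer, [])

    # Example: only ask for a pain rating if the user reports pain.
    pain = Question("Do you feel pain?", ["yes", "no"],
                    {"yes": [Question("Rate the pain (1-10).",
                                      [str(i) for i in range(1, 11)])]})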