Amir Ghaderi

Computer Science Ph.D. student, UT Arlington

amir.ghaderi at

Google Scholar



I am a Computer Science Ph.D. student at the University of Texas at Arlington, working under the direction of Professor Vassilis Athitsos in the Vision-Learning-Mining (VLM) Research Lab. My interests are in machine learning and computer vision. I am currently a graduate research assistant.


Deep Forecast: Deep Learning-based Spatio-Temporal Forecasting

Amir Ghaderi, Borhan M. Sanandaji, Faezeh Ghaderi

This paper presents a spatio-temporal wind speed forecasting algorithm using Deep Learning (DL) and, in particular, Recurrent Neural Networks (RNNs). Motivated by recent advances in renewable energy integration and smart grids, we apply the proposed algorithm to wind speed forecasting. Renewable energy resources (wind and solar) are random in nature, and thus their integration is facilitated by accurate short-term forecasts. In the proposed framework, we model the spatio-temporal information with a graph whose nodes are data-generating entities and whose edges model how these nodes interact with each other. One of the main contributions of our work is that we obtain forecasts for all nodes of the graph at the same time within a single framework. Results of a case study on recorded time series data from a collection of wind mills in the north-east of the U.S. show that the proposed DL-based forecasting algorithm significantly improves short-term forecasts compared to a set of widely used benchmark models.
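The graph-structured "forecast all nodes at once" idea can be sketched as follows. This is a hypothetical simplification: a fixed linear one-step predictor stands in for the paper's learned RNN, and the adjacency matrix, weights, and wind-site data are made up for illustration.

```python
# Minimal sketch of joint forecasting over a graph of data-generating
# nodes (hypothetical simplification of the paper's RNN framework).

def neighbors(adj, i):
    """Indices of nodes connected to node i in adjacency matrix adj."""
    return [j for j, edge in enumerate(adj[i]) if edge and j != i]

def forecast_all_nodes(adj, history, w_self=0.7, w_nbr=0.3):
    """One-step-ahead forecast for every node simultaneously.

    Each node's forecast blends its own latest value with the mean of
    its graph neighbors' latest values. The weights here are fixed and
    illustrative; in the paper they would be learned by the RNN.
    """
    latest = [series[-1] for series in history]
    preds = []
    for i in range(len(history)):
        nbrs = neighbors(adj, i)
        nbr_mean = sum(latest[j] for j in nbrs) / len(nbrs) if nbrs else latest[i]
        preds.append(w_self * latest[i] + w_nbr * nbr_mean)
    return preds

# Three hypothetical wind sites; sites 0-1 and 1-2 are connected.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
history = [[5.0, 6.0], [7.0, 8.0], [4.0, 4.0]]
print(forecast_all_nodes(adj, history))
```

The key point the sketch preserves is that one call produces forecasts for every node, with each node's prediction conditioned on its neighbors' histories.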


Improving the Accuracy of the CogniLearn System for Cognitive Behavior Assessment

Amir Ghaderi, Srujana Gattupalli, Dylan Ebert, Ali Sharifara, Vassilis Athitsos, Fillia Makedon

HTKS is a game-like cognitive assessment method designed for children between four and eight years of age. During the HTKS assessment, a child responds to a sequence of requests, such as "touch your head" or "touch your toes". The cognitive challenge stems from the fact that the children are instructed to interpret these requests not literally, but by touching a different body part than the one stated. In prior work, we developed the CogniLearn system, which captures data from subjects performing the HTKS game and analyzes their motion. In this paper we propose specific improvements that make the motion analysis module more accurate. As a result of these improvements, the accuracy in recognizing cases where subjects touch their toes has risen from 76.46% in our previous work to 97.19% in this paper.


Selective Unsupervised Feature Learning with Convolutional Neural Network (S-CNN)

International Conference on Pattern Recognition (ICPR)-2016

Amir Ghaderi, Vassilis Athitsos

Supervised learning of convolutional neural networks (CNNs) can require very large amounts of labeled data. Labeling thousands or millions of training examples can be extremely time-consuming and costly. One direction towards addressing this problem is to create features from unlabeled data. In this paper we propose a new method for training a CNN with no need for labeled instances. This method for unsupervised feature learning is then successfully applied to a challenging object recognition task. The proposed algorithm is relatively simple, but attains accuracy comparable to that of more sophisticated methods. It is significantly easier to train than existing CNN methods, imposing fewer requirements on manually labeled training data, and is also shown to be resistant to overfitting. We provide results on some well-known datasets, namely STL-10, CIFAR-10, and CIFAR-100, which show that our method provides competitive performance compared with existing alternatives. The Selective Convolutional Neural Network (S-CNN) is a simple and fast algorithm; it introduces a new way to do unsupervised feature learning, and it provides discriminative features that generalize well.
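A common way to train a CNN with no labeled instances is to manufacture surrogate labels from the unlabeled data itself. The sketch below illustrates that general idea only; the selection rule, the toy "augmentation", and the data are placeholders, not the paper's actual S-CNN procedure.

```python
# Hypothetical sketch of surrogate-label construction for unsupervised
# feature learning: each selected unlabeled image becomes its own class,
# and simple augmentations supply within-class variation.

def augment(image):
    """Toy augmentations: the image itself plus a horizontal flip."""
    flipped = [row[::-1] for row in image]
    return [image, flipped]

def build_surrogate_dataset(unlabeled, select=lambda img: True):
    """Assign a surrogate class id to every selected unlabeled image,
    producing (image_variant, class_id) training pairs for a CNN."""
    dataset = []
    selected = (img for img in unlabeled if select(img))
    for class_id, image in enumerate(selected):
        for variant in augment(image):
            dataset.append((variant, class_id))
    return dataset

# Two tiny 2x2 "images" stand in for unlabeled training data.
images = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
pairs = build_surrogate_dataset(images)
print(len(pairs), sorted({label for _, label in pairs}))
```

A CNN trained to discriminate these surrogate classes never sees a human-provided label, yet learns features that can transfer to a real recognition task.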


Evaluation of Deep Learning based Pose Estimation for Sign Language

International Conference on Pervasive Technologies Related to Assistive Environments (PETRA), 2016

Srujana Gattupalli, Amir Ghaderi, Vassilis Athitsos

Human body pose estimation and hand detection are prerequisites for sign language recognition (SLR), and both are crucial and challenging tasks in computer vision and machine learning. Many algorithms exist to accomplish these tasks, and their performance needs to be evaluated for body posture recognition on a sign language dataset that can serve as a baseline providing important non-manual features for SLR. In this paper, we propose a dataset for human pose estimation in the SLR domain. Deep learning is at the cutting edge of computer science and obtains state-of-the-art results in almost every area of computer vision. Our main contribution is to evaluate the performance of deep learning-based pose estimation methods by performing user-independent experiments on our dataset. We also apply transfer learning to these methods; the results show large improvements and demonstrate that transfer learning can boost a method's pose estimation performance through knowledge transferred from another trained model. The dataset and the results from these methods can form a solid baseline for future work and help gain a significant amount of information beneficial for SLR.
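The transfer-learning step described above can be sketched in miniature. This is a hypothetical skeleton, not the paper's models: a two-parameter linear model stands in for a pose estimation network, with the first parameter "frozen" at its source-task value and only the last parameter fine-tuned on target data, mirroring the common practice of freezing early layers and retraining the final ones.

```python
# Illustrative transfer-learning skeleton (hypothetical; the paper uses
# deep pose estimation networks, not this toy linear model).

def fine_tune(pretrained, target_xs, target_ys, lr=0.1, steps=50):
    """Fine-tune a tiny model y = w0 * x + w1 on target data.

    w0 plays the role of frozen early-layer weights copied from the
    source task; only w1 (the "last layer") is updated, by gradient
    descent on the mean squared error.
    """
    w0, w1 = pretrained  # w0 stays frozen, w1 is trainable
    for _ in range(steps):
        grad = sum(2 * ((w0 * x + w1) - y)
                   for x, y in zip(target_xs, target_ys)) / len(target_xs)
        w1 -= lr * grad
    return w0, w1

# Source task learned (w0, w1) = (2.0, 0.0); the target data follows
# y = 2x + 1, so only the bias-like term w1 needs to adapt.
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 5.0]
w0, w1 = fine_tune((2.0, 0.0), xs, ys)
print(round(w0, 2), round(w1, 2))
```

Because the frozen part already encodes knowledge useful for the target task, the fine-tuned part converges quickly with little target data, which is the effect the paper's experiments measure for pose estimation.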


  • Machine Learning

    Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed".

  • Computer Vision

    Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.