Research topics

Brain Modeling and Simulations: Olfactory Bulb: Understanding the nonlinear dynamics of the olfactory bulb (OB) is essential to modeling the brain and nervous system. Odor threshold, odor identification, detection and recognition are basic measures used in medical studies for the detection and diagnosis of neurodegenerative and psychiatric disorders such as Alzheimer’s disease, Parkinson’s disease and schizophrenia.

Based on our study of OB activity and our analysis of the conditions governing neural oscillations and the nature of odor-receptor interactions, we proposed and developed mathematical models and simulations addressing questions that remain open: how the brain recognizes odors, how it operates in a noisy natural environment, and why synchronization is used for decoding in brain circuits. We simulated the dynamic behavior of the olfactory system in order to understand how odors are represented and processed by the brain.
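The sketch below is only a minimal illustration of this kind of OB model, assuming a single mitral-granule (excitatory-inhibitory) pair with Li-Hopfield-style dynamics and made-up parameter values; it is not the published model, but it shows how receptor drive plus mutual coupling produces the oscillations that underlie synchronization-based odor coding.

# Illustrative sketch only: one coupled mitral (excitatory) and granule
# (inhibitory) unit pair in the style of Li-Hopfield OB models, not the
# authors' published model. All parameter values are assumptions.
import numpy as np

def sigmoid(u):
    """Smooth cell activation function."""
    return 1.0 / (1.0 + np.exp(-u))

def simulate_ob(odor_input=1.0, w_mg=2.0, w_gm=2.0, tau=5.0,
                dt=0.1, steps=2000, noise_std=0.05, seed=0):
    """Euler integration of one mitral-granule oscillator pair.

    odor_input : constant receptor drive to the mitral cell
    w_mg, w_gm : mutual coupling strengths (assumed values)
    noise_std  : models the noisy natural environment
    """
    rng = np.random.default_rng(seed)
    m, g = 0.0, 0.0                      # mitral and granule activities
    trace = np.empty((steps, 2))
    for t in range(steps):
        noise = rng.normal(0.0, noise_std)
        dm = (-m - w_gm * sigmoid(g) + odor_input + noise) / tau
        dg = (-g + w_mg * sigmoid(m)) / tau
        m, g = m + dt * dm, g + dt * dg
        trace[t] = (m, g)
    return trace

if __name__ == "__main__":
    trace = simulate_ob()
    # Oscillatory mitral activity is the carrier studied in
    # synchronization-based odor coding.
    print(trace[-10:, 0])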

Radial Basis Function Neural Networks Based on Potential Functions: We propose a new strategy of shape-adaptive radial basis functions based on potential functions, together with an optimization procedure that positions the centers during the learning process. The novelty of the approach is twofold: 1) automatic correction of the capacity of the functions separating the clusters and automatic selection of the network topology, in the sense that both the optimal number and the locations of the basis functions are obtained automatically during training; 2) extraction of a small number of supporting patterns from the training data that are relevant to the classification, which further improves convergence.
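As an illustration of the idea (not the published algorithm), the sketch below selects Gaussian basis-function centers with a subtractive-clustering-style potential function, so that both their number and locations emerge from the data, and then fits the output weights by least squares; the radii r_a and r_b, the stopping threshold and the toy task are assumptions.

# Minimal sketch: potential-based center selection plus a least-squares
# output layer. Parameter values and the toy data are assumed.
import numpy as np

def select_centers(X, r_a=1.0, r_b=1.5, eps=0.15):
    """Pick RBF centers where the data potential is highest."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-d2 / (r_a / 2) ** 2).sum(1)
    first = potential.max()
    centers = []
    while potential.max() > eps * first:
        i = potential.argmax()
        centers.append(X[i])
        # Suppress the potential around the chosen center.
        potential -= potential[i] * np.exp(-d2[i] / (r_b / 2) ** 2)
    return np.array(centers)

def rbf_design(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(float)   # toy two-class task
    C = select_centers(X)
    Phi = rbf_design(X, C)
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]             # output weights
    acc = ((Phi @ w > 0.5) == y).mean()
    print(f"{len(C)} centers selected automatically, train accuracy {acc:.2f}")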

Self-Organizing Maps: Classical Self-Organizing Maps read the input values in random but sequential order, one by one, and thus build up the network structure incrementally: the map is formed while larger and larger parts of the input are read. In contrast to this approach, we present a Self-Organizing Map that processes the whole input in parallel and organizes itself over time. The main reason for parallel input processing is that existing knowledge can be used to recognize parts of the input space that have already been learned. In this way, networks can be developed that do not reorganize their structure from scratch every time a new set of input vectors is presented, but rather adjust their internal architecture in accordance with previous mappings.
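A minimal batch-SOM sketch of this idea follows, assuming a 1-D map with a Gaussian neighbourhood: every epoch processes all input vectors in parallel, and a second call to fit continues from the already learned weights instead of reorganizing from scratch. Map size, neighbourhood width and decay are assumed values, not the published configuration.

# Illustrative batch SOM sketch (1-D map, Gaussian neighbourhood).
import numpy as np

class BatchSOM:
    def __init__(self, n_nodes=10, dim=2, seed=0):
        self.grid = np.arange(n_nodes)                 # 1-D map coordinates
        self.W = np.random.default_rng(seed).normal(size=(n_nodes, dim))

    def fit(self, X, epochs=20, sigma0=3.0):
        for e in range(epochs):
            sigma = sigma0 * 0.9 ** e                  # shrinking neighbourhood
            # Best-matching unit for every input vector at once (in parallel).
            d = ((X[:, None, :] - self.W[None, :, :]) ** 2).sum(-1)
            bmu = d.argmin(axis=1)
            # Gaussian neighbourhood of every BMU over the map grid.
            h = np.exp(-(self.grid[None, :] - self.grid[bmu][:, None]) ** 2
                       / (2 * sigma ** 2))
            # Batch update: weighted mean of all inputs claimed by each node.
            mass = h.sum(axis=0)[:, None]
            W_new = (h.T @ X) / np.maximum(mass, 1e-12)
            # Nodes that attracted no input keep their previous weights.
            self.W = np.where(mass > 1e-12, W_new, self.W)
        return self

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    som = BatchSOM().fit(rng.normal(0, 1, size=(300, 2)))
    # A later call continues from the learned map instead of starting over.
    som.fit(rng.normal(3, 1, size=(300, 2)), epochs=10)
    print(som.W.round(2))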

Neural Network Models for Independent Component Analysis: ICA is a technique that transforms a multivariate data vector into a new one whose components are as statistically independent as possible. We propose a multilayer neural network structure for feature extraction based on ICA. We built a graphical user interface (GUI) in order to impose different types of noise on the experimental images and to extract the independent components from natural scenes. The analysis of the results shows that the proposed multilayered feedforward neural network for image feature extraction using ICA significantly reduces the noise by eliminating some of the small features present in the original image.
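The sketch below illustrates the idea rather than the GUI tool itself: scikit-learn's FastICA stands in for the multilayer feedforward network, Gaussian noise is imposed on a synthetic image, and denoising is done by discarding the lowest-energy independent components of the image patches before reconstruction. Patch size, component count and the energy threshold are assumptions.

# Illustrative ICA-based denoising sketch with assumed settings.
import numpy as np
from sklearn.decomposition import FastICA

def extract_patches(img, size=8):
    """Cut a grayscale image into non-overlapping size x size patches."""
    h, w = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(0, h - size + 1, size)
               for j in range(0, w - size + 1, size)]
    return np.array(patches)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))                     # stand-in "natural scene"
    noisy = clean + rng.normal(0, 0.1, clean.shape)  # impose Gaussian noise

    X = extract_patches(noisy)
    ica = FastICA(n_components=16, random_state=0, max_iter=1000)
    S = ica.fit_transform(X)                         # independent components

    # Suppress the weakest components (assumed to carry mostly noise and
    # small image features) before reconstructing the patches.
    energy = (S ** 2).mean(axis=0)
    S[:, energy < np.median(energy)] = 0.0
    X_denoised = ica.inverse_transform(S)
    print("residual power:", ((X - X_denoised) ** 2).mean().round(4))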

Feature Selection Fuzzy Algorithms in High Dimension Data Clustering: The effectiveness of pattern clustering (recognition) depends strongly on the accurate identification of cluster shapes, which can be influenced by considering prototypes with a geometric structure or by using different distance measures in the objective function. We developed extensions of the FCM algorithm that use Mahalanobis (MFCM) and parametric Minkowski dissimilarity distances, which are suitable for processing datasets with high dimensionality and a large number of clusters. The proposed initialization, together with the use of validation measures to determine the initial number of clusters, proves very effective. We also integrate Sammon mapping and the Silhouette function for data visualization and analysis of the clustering results.
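A minimal sketch in the spirit of the MFCM extension follows, using a covariance-adapted (Gustafson-Kessel-style) Mahalanobis distance inside the fuzzy c-means update loop; the fuzzifier m, the number of clusters and the tolerance are assumed values, and the code is an illustration rather than the algorithm evaluated in the study.

# Illustrative fuzzy c-means with a Mahalanobis-type distance.
import numpy as np

def mahalanobis_fcm(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                    # fuzzy memberships, columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)      # cluster prototypes
        D2 = np.empty((c, n))
        for i in range(c):
            diff = X - V[i]
            F = (Um[i][:, None] * diff).T @ diff / Um[i].sum()
            F += 1e-6 * np.eye(d)                         # keep F invertible
            A = (np.linalg.det(F) ** (1.0 / d)) * np.linalg.inv(F)
            D2[i] = np.einsum('nd,de,ne->n', diff, A, diff)
        D2 = np.maximum(D2, 1e-12)
        # Standard FCM membership update with the adapted distances.
        U_new = 1.0 / (D2 ** (1.0 / (m - 1))
                       * (1.0 / D2 ** (1.0 / (m - 1))).sum(axis=0))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, V

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in (0, 3, 6)])
    U, V = mahalanobis_fcm(X, c=3)
    print("prototypes:\n", V.round(2))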

Algorithms for Handling Dataset Imbalance in Neural Networks with Deep Learning: We focus on the topology and training of deep-learning neural network classifiers on imbalanced data. This covers different ways of preprocessing the data, the implementation of newly suggested as well as traditional optimizers for Deep Learning (DL), and different ways of adapting the learning rate, including cyclic increase and decrease during training. We also propose a taxonomy of modelling strategies for handling imbalanced datasets.
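As a concrete illustration (not the published experiments), the PyTorch sketch below trains a small classifier on synthetic imbalanced data, combining inverse-frequency class weights in the loss with a cyclic learning-rate schedule that increases and decreases the rate during training; the network size, imbalance ratio and learning-rate bounds are assumed values.

# Illustrative sketch: class-weighted loss plus a cyclic learning rate.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic two-class data with a 95:5 imbalance.
n_maj, n_min = 950, 50
X = torch.cat([torch.randn(n_maj, 10), torch.randn(n_min, 10) + 1.5])
y = torch.cat([torch.zeros(n_maj, dtype=torch.long),
               torch.ones(n_min, dtype=torch.long)])

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Inverse-frequency class weights counteract the imbalance in the loss.
counts = torch.bincount(y).float()
loss_fn = nn.CrossEntropyLoss(weight=counts.sum() / (2 * counts))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# The learning rate cycles up and down during training.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=50, mode="triangular")

for step in range(300):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    scheduler.step()

pred = model(X).argmax(dim=1)
recall_min = (pred[y == 1] == 1).float().mean()
print(f"minority-class recall: {recall_min:.2f}")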
