<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://hdl.handle.net/123456789/10344">
<title>PhD (CE) (BUES)</title>
<link>http://hdl.handle.net/123456789/10344</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/18655"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/18643"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/14760"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/9950"/>
</rdf:Seq>
</items>
<dc:date>2026-04-04T12:04:31Z</dc:date>
</channel>
<item rdf:about="http://hdl.handle.net/123456789/18655">
<title>Analysis of Physiological Signals for Emotion Charting</title>
<link>http://hdl.handle.net/123456789/18655</link>
<description>Analysis of Physiological Signals for Emotion Charting
Amna Waheed Awan, 01-281171-005
Human emotions are complex mental and physiological states that arise in response to various internal and external stimuli. Emotional health is an important part of human physical and psychological health. An emotionally healthy person can perform the multiple roles in his life in the best possible way; however, emotional imbalance and cognitive disorders can cause many physical and mental health issues. Timely diagnosis of these mental illnesses can therefore prevent severe mental disorders and improve the quality of medical care. Recently, emotion recognition has gained great attention in affective computing, and different modalities have been used for it, i.e. human physical signals and human physiological signals. Physiological signals are considered the most reliable source for emotion recognition compared to physical signals because they cannot be manipulated. Emotion charting using multimodal signals has grown in popularity due to wide multidisciplinary applications such as health care, neuromarketing, robotics, safety, security and e-gaming. A number of physiological markers, such as heart rate, respiration, electrodermal activity, skin conductance and brain activity, can be used for emotion recognition. In particular, Electrocardiogram (ECG) and Electroencephalogram (EEG) signals measure the cardiac and neuronal activities, respectively, connected with different human emotional states. In addition, Galvanic Skin Response (GSR) signals are highly correlated with human emotional states. Therefore, these physiological signals are incorporated in the proposed method and analyzed using advanced signal processing and machine learning techniques in order to identify hidden patterns and classify emotional states.
Previously, researchers have developed different methods for classifying these signals for emotion detection, but there is still a need to bridge the connection between the anatomy of human physiological signals and cognitive behaviors by critically analyzing the variation in the waveforms of physiological signals with respect to human emotions. Keeping this in view, this research work proposes two deep learning-based approaches for emotion charting using physiological signals. The first approach is an Ensemble method using a customized convolutional Stacked Autoencoder (ESA) for emotion charting. This approach preprocesses the physiological signals (EEG, ECG and GSR) using bandpass filtering and Independent Component Analysis (ICA), followed by the Discrete Wavelet Transform (DWT). A convolutional stacked autoencoder is then employed to extract features from the scalograms of the physiological signals. The feature vector obtained from the stacked autoencoder is fed to three classifiers: SVM (Support Vector Machine), RF (Random Forest), and LSTM (Long Short-Term Memory). The outputs of the classifiers are combined using a majority voting scheme for the final classification of signals into four emotional classes: High Valence and High Arousal (HVHA), Low Valence and Low Arousal (LVLA), High Valence and Low Arousal (HVLA), and Low Valence and High Arousal (LVHA). The second approach is CNN-Vision Transformer (CVT) based emotion charting using ensemble classification. In this approach, the signals are first decomposed into non-overlapping segments and the noise is removed using bandpass filtering followed by ICA. Two feature sets are then obtained from a 1D CNN and a Vision Transformer and combined into a single feature vector. Finally, the feature vector is fed to an ensemble classifier composed of LSTM (Long Short-Term Memory), ELM (Extreme Learning Machine) and SVM (Support Vector Machine) classifiers.
The probabilities generated by each classifier are fed as input to a few-shot learning technique, Model Agnostic Meta Learning (MAML), which combines the classifiers' outputs and generates a single output in the form of emotional classes. The proposed system is validated on the AMIGOS and DEAP datasets with 10-fold cross validation and obtained the highest accuracy of 94.75%, sensitivity of 99.15% and specificity of 97.61% with ESA-based emotion charting. With the CVT-based approach, the proposed system achieved the highest accuracy of 98.2%, sensitivity of 98.4% and specificity of 99.53%. The proposed system outperforms the state-of-the-art emotion charting methods in terms of accuracy, sensitivity and specificity.
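The majority voting scheme mentioned above can be sketched as follows. This is an illustrative toy in Python, not the thesis implementation; the class labels come from the abstract, while the tie-breaking rule is an assumption.

```python
# Hypothetical sketch of the majority-voting step: three base classifier
# outputs (e.g. SVM, RF, LSTM) are combined into a single emotion label.
from collections import Counter

EMOTION_CLASSES = ("HVHA", "LVLA", "HVLA", "LVHA")

def majority_vote(predictions):
    """Return the label predicted by the most classifiers.

    `predictions` is a list of labels, one per base classifier.
    Ties are broken by first-seen order, an arbitrary choice made
    only for this illustration.
    """
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Example: SVM and LSTM agree on HVHA, RF disagrees.
print(majority_vote(["HVHA", "LVLA", "HVHA"]))  # HVHA
```

In the thesis's second approach, this simple vote is replaced by a learned combiner (MAML over classifier probabilities), which can weight classifiers unequally rather than counting each vote the same.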
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/18643">
<title>Parallel Architecture of Convolutional Neural Network for Object Classification</title>
<link>http://hdl.handle.net/123456789/18643</link>
<description>Parallel Architecture of Convolutional Neural Network for Object Classification
Zahra Waheed Awan, 01-281171-006
Convolutional Neural Networks (CNNs) have been known for their high performance and are widely applied in many computer vision tasks, including object detection and image classification. Large-scale CNNs are computationally intensive and require high-computing resources. This issue has been addressed by deploying them on Graphical Processing Units (GPUs). However, GPU-based implementation results in significant power consumption, limiting its application in embedded systems. In this context, FPGA-based implementations offer a reasonable solution with tolerable power consumption, but large-scale CNNs have substantial memory and computation requirements, making them challenging to deploy on resource-limited edge devices. This research work addresses the challenge of deploying CNNs on resource-restricted embedded systems by proposing a memory-efficient parallel architecture (MPA). The goal is to utilize CNNs' inherent parallelism while minimizing memory and power consumption. The MPA approach involves a software/hardware co-design, including a low-memory network compression framework at the software level which removes redundant parameters to reduce model size. It categorizes layers into No-Pruning (NP) and Pruning (P) layers. Additionally, weight quantization is applied to each category of layers to compress the model further by reducing the weight parameters to a low bit-width. Depending upon the distribution of the weights, NP-layers undergo Optimized Quantization (OQ), while P-layers are subjected to Incremental Quantization (INQ). We propose the OQ algorithm for NP-layers to perform quantization using optimal quantization levels (Q-levels) obtained from the Optimizer. High compression of 11x, 5x, 8.5x and 7.5x has been achieved for LeNet-5, VGG-16, AlexNet and ResNet-50 respectively by using the proposed compression framework, resulting in a significant reduction in memory utilization with negligible accuracy loss.
To further minimize the resource utilization of the system, MPA presents a parallel architecture for the convolution (CONV) layers of the model at the hardware level. The compressed model obtained in the previous step is mapped onto the target hardware, i.e. FPGA, by applying the proposed parallel hardware architecture. In the parallel architecture, multiple 1D processing elements (PEs) are connected in parallel as a 2D-PE to achieve data-level and computation-level parallelism. Each 2D-PE executes the convolution operation of a CONV layer by performing the multiple MAC operations involved in the convolution process in parallel, which further reduces the computational cost of the system. Within the structure of each 1D-PE, we further achieve computation-level parallelism by connecting multiple registers, multipliers, adders, and multiplexers in a systolic-array manner. For P-layers, multipliers are replaced with barrel shifters to further reduce the system's resource utilization. The model obtained after applying both software- and hardware-level optimizations is examined based on its resource optimization, which includes the area in terms of the number of slice registers, LUTs, DSPs and flip-flops. A barrel-shifter-based PE is found to consume almost half the resources of a multiplier-based PE, resulting in a prominent decrease in overall resource utilization. Consequently, this makes the proposed system a reasonable solution for deployment on embedded systems with constrained hardware resources.
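The reason barrel shifters can replace multipliers is that a weight quantized to a power of two turns each multiply into a shift. A minimal sketch of that idea, assuming an 8-level power-of-two codebook (the exponent range is illustrative, not the thesis's INQ/OQ algorithm):

```python
# Illustrative power-of-two weight quantization: mapping a weight to
# sign * 2**k means x * w reduces to a shift of x by k bits in hardware.
import math

def quantize_pow2(w, min_exp=-7, max_exp=0):
    """Quantize `w` to the nearest power of two, with the exponent
    clamped to [min_exp, max_exp]. Zero stays zero. The exponent
    range is an assumed codebook for illustration only."""
    if w == 0.0:
        return 0.0
    k = round(math.log2(abs(w)))          # nearest exponent
    k = max(min_exp, min(max_exp, k))     # clamp to the codebook
    return math.copysign(2.0 ** k, w)

print(quantize_pow2(0.30))   # 0.25, i.e. 2**-2
print(quantize_pow2(-0.9))   # -1.0, i.e. -(2**0)
```

Storing only the sign and the clamped exponent also shrinks each weight to a few bits, which is the memory-reduction side of the same trick.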
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/14760">
<title>A Deep Learning Approach for Prediction of Epileptic Seizures Using EEG Signals</title>
<link>http://hdl.handle.net/123456789/14760</link>
<description>A Deep Learning Approach for Prediction of Epileptic Seizures Using EEG Signals
SYED MUHAMMAD USMAN, 01-281182-002
Epilepsy is a brain disorder in which a patient undergoes frequent seizures. Around 30% of patients affected with epilepsy cannot be treated with medicines or surgical procedures. However, abnormal activity, known as the preictal state, starts some time before the seizure actually occurs. Therefore, it may be possible to deliver medication prior to the occurrence of a seizure if the initiation of the preictal state is predicted before the seizure onset, and this can also help in controlling subsequent seizures. Electroencephalogram (EEG) signals are used to analyze the states of epileptic seizures; they can be recorded by placing electrodes on the scalp of a subject (scalp EEG signals) or by implanting electrodes inside the brain on its surface (intracranial EEG signals). In this research, an epileptic seizure prediction method is proposed that predicts the start of the preictal state before the seizure's onset using scalp and intracranial EEG. The proposed method involves three steps: (i) preprocessing of EEG signals, (ii) feature extraction and (iii) classification of preictal and interictal states. In this method, EEG signals are preprocessed using empirical mode decomposition followed by bandpass filtering, and the time-domain signals are converted into the frequency domain using the short-time Fourier transform. The class imbalance problem is mitigated by generating synthetic preictal segments using generative adversarial networks. A three-layer customized convolutional neural network is proposed to extract automated features, which are combined with handcrafted features to obtain a comprehensive feature set. To reduce the effect of the curse of dimensionality, correlated features are dropped from the feature set using the Pearson correlation coefficient, and an optimal subset of features is selected using particle swarm optimization. The feature set is then used to train an ensemble classifier that combines a Support Vector Machine (SVM), a Convolutional Neural Network (CNN) and Long Short-Term Memory units (LSTMs) using model-agnostic meta learning. The CHBMIT scalp EEG and American Epilepsy Society-Kaggle seizure prediction challenge intracranial EEG datasets have been used to train and test the proposed method. An average sensitivity of 96.28% and specificity of 95.65%, with an average anticipation time of 33 minutes, has been achieved on all subjects of CHBMIT. On the American Epilepsy Society-Kaggle seizure prediction dataset, an average sensitivity and specificity of 94.2% and 95.8% have been achieved on all subjects. The results achieved by the proposed method have been compared with existing state-of-the-art epileptic seizure prediction methods; the proposed method achieves more than 3% improvement in sensitivity, specificity and average anticipation time over existing methods.
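The correlation-based feature dropping described above can be sketched as follows. The threshold, data and greedy keep-first rule are illustrative assumptions, not taken from the thesis:

```python
# Hedged sketch: drop any feature column whose Pearson correlation with
# an already-kept column exceeds a threshold, keeping columns greedily
# in their given order.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.95):
    """`features` maps name -> column of values; return the names kept
    after removing columns highly correlated with an earlier-kept one."""
    kept = {}
    for name, col in features.items():
        if all(abs(pearson(col, c)) < threshold for c in kept.values()):
            kept[name] = col
    return list(kept)

feats = {
    "f1": [1.0, 2.0, 3.0, 4.0],
    "f2": [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with f1
    "f3": [4.0, 1.0, 3.0, 2.0],
}
print(drop_correlated(feats))  # ['f1', 'f3']: f2 is dropped
```

In the thesis pipeline this filtering is followed by particle swarm optimization, which searches over the surviving features for an optimal subset rather than relying on pairwise correlation alone.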
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/9950">
<title>Optical character recognition for printed Urdu nastaliq font (T-0010) (Old 8755)</title>
<link>http://hdl.handle.net/123456789/9950</link>
<description>Optical character recognition for printed Urdu nastaliq font (T-0010) (Old 8755)
Israr Uddin, 01-281121-005
Optical Character Recognition (OCR) is one of the most investigated pattern classification problems, having received remarkable research attention for more than half a century. From the simplest systems recognizing isolated digits to end-to-end recognition systems, applications of OCR vary from postal mail sorting to reading systems in scene images facilitating autonomous navigation or assisting the visually impaired. Despite tremendous research endeavors and the availability of commercial recognition engines for many scripts, recognition of cursive scripts remains an open and challenging research problem, mainly due to the complexity of the script, segmentation issues and the large number of classes to recognize. Among these scripts, Urdu is the subject of our study. More specifically, this study investigates the recognition of printed Urdu text in the Nastaliq style, the most widely employed script for Urdu text, which is more complex than the Naskh style of Arabic. This work presents a holistic (segmentation-free) technique that exploits ligatures (partial words) as units of recognition. Urdu has a total of more than 26,000 unique ligatures; many of them, however, share the same main body (primary ligature) and differ only in the number and position of dots and diacritics (secondary ligatures). We exploit this idea to separately recognize the primary and secondary ligatures and later re-associate the two to recognize the complete ligature. Recognition is carried out using two techniques: the first is based on hand-crafted statistical features using hidden Markov models (HMMs). Features extracted using sliding windows are used to train a separate model for each ligature class. Feature sequences of the query ligature are fed to all the models, and recognition is carried out through the model that reports the maximum probability.
The second technique employs Convolutional Neural Networks (CNNs) to automatically extract useful feature representations from the classes and recognize the ligatures. We investigated the performance of a number of pre-trained networks using transfer learning techniques and trained our own set of networks from scratch as well.
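The per-class HMM decision rule described above reduces to an argmax over model scores. A minimal sketch, with made-up class names and log-likelihoods standing in for real HMM evaluations:

```python
# Toy version of the decision rule: one model per ligature class scores
# the query feature sequence; the best-scoring class wins.
def recognize(query_scores):
    """`query_scores` maps ligature class -> model log-likelihood of the
    query feature sequence; return the class with the maximum score."""
    return max(query_scores, key=query_scores.get)

# Hypothetical log-likelihoods from three class models for one query.
scores = {"ligature_A": -120.5, "ligature_B": -98.7, "ligature_C": -143.2}
print(recognize(scores))  # ligature_B
```

With 26,000+ ligature classes, scoring the query against every model is the expensive part, which is one motivation for splitting recognition into primary and secondary ligatures as the thesis does.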
Supervised by Dr. Imran Siddiqi
</description>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
