<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://hdl.handle.net/123456789/10341">
<title>Department of Computer Engineering (BUES)</title>
<link>http://hdl.handle.net/123456789/10341</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/18655"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/18643"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/14465"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/14467"/>
</rdf:Seq>
</items>
<dc:date>2026-04-04T10:43:16Z</dc:date>
</channel>
<item rdf:about="http://hdl.handle.net/123456789/18655">
<title>Analysis of Physiological Signals for Emotion Charting</title>
<link>http://hdl.handle.net/123456789/18655</link>
<description>Analysis of Physiological Signals for Emotion Charting
Amna Waheed Awan, 01-281171-005
Human emotions are complex mental and physiological states that arise in response to various internal and external stimuli. Emotional health is an important part of human physical and psychological health. An emotionally healthy person can perform the multiple roles in their life in the best possible way; however, emotional imbalance and cognitive disorders can cause many physical and mental health issues. Therefore, the timely diagnosis of these mental illnesses can prevent severe mental disorders and can improve the quality of medical care. Recently, emotion recognition has gained great attention in affective computing, and different modalities have been used for emotion recognition, i.e., human physical signals and human physiological signals. Human physiological signals are considered the most reliable source for emotion recognition as compared to human physical signals because they can’t be manipulated. Emotion charting using multimodal signals has grown in popularity due to wide multidisciplinary applications such as healthcare, neuromarketing, robotics, safety, security, and e-gaming. There are a number of physiological markers, such as heart rate, respiration, electrodermal activity, conductance, and brain activity, which can be used for performing emotion recognition. Physiological signals such as Electrocardiogram (ECG) and Electroencephalogram (EEG) signals measure cardiac and neuronal activity, respectively, both of which are connected with different human emotional states. In addition, Galvanic Skin Response (GSR) signals are also highly correlated with the emotional states of humans. Therefore, these physiological signals are incorporated in the proposed method, and they can be analyzed using different techniques of advanced signal processing and machine learning in order to identify the hidden patterns and classify the emotional states.
Previously, researchers have developed different methods for classifying these signals for emotion detection, but there is still a need to bridge the connection between the anatomy of human physiological signals and cognitive behaviors by critically analyzing the variation in the waveforms of physiological signals with respect to human emotions. Keeping this in view, this research work proposes two deep learning-based approaches for emotion charting using physiological signals. The first approach is an Ensemble method using a customized convolutional Stacked Autoencoder (ESA) for Emotion Charting. This approach performs preprocessing of physiological signals (EEG, ECG and GSR) using bandpass filtering and Independent Component Analysis (ICA), followed by Discrete Wavelet Transform (DWT). Then a convolutional stacked autoencoder is employed for feature extraction from the scalograms of the physiological signals. The feature vector obtained from the stacked autoencoder is then fed to three classifiers: SVM (Support Vector Machine), RF (Random Forest), and LSTM (Long Short-Term Memory). The outputs of the classifiers are combined using a majority voting scheme for the final classification of signals into four emotional classes, i.e., High Valence and High Arousal (HVHA), Low Valence and Low Arousal (LVLA), High Valence and Low Arousal (HVLA), and Low Valence and High Arousal (LVHA). The second approach is CNN-Vision Transformer (CVT)-based emotion charting using ensemble classification. In this approach, the signals are initially decomposed into non-overlapping segments, and the noise is removed using bandpass filtering followed by ICA. Then two feature sets are obtained from a 1D CNN and a Vision Transformer, which are combined to generate a single feature vector. Finally, the feature vector is fed to an ensemble classifier composed of LSTM (Long Short-Term Memory), ELM (Extreme Learning Machine) and SVM (Support Vector Machine) classifiers.
The probabilities generated by each classifier are fed as input to a few-shot learning-based technique, Model Agnostic Meta Learning (MAML), which combines the classifiers’ outputs and generates a single output in the form of emotional classes. The proposed system is validated on the AMIGOS and DEAP datasets with 10-fold cross validation, and obtained the highest accuracy of 94.75%, sensitivity of 99.15%, and specificity of 97.61% with ESA-based emotion charting. On the other hand, the proposed system achieved the highest accuracy of 98.2%, sensitivity of 98.4%, and specificity of 99.53% with the CVT-based approach. The proposed system outperforms the state-of-the-art emotion charting methods in terms of accuracy, sensitivity and specificity.
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/18643">
<title>Parallel Architecture of Convolutional Neural Network for Object Classification</title>
<link>http://hdl.handle.net/123456789/18643</link>
<description>Parallel Architecture of Convolutional Neural Network for Object Classification
Zahra Waheed Awan, 01-281171-006
Convolutional Neural Networks (CNNs) are known for their high performance and are widely applied in many computer vision tasks, including object detection and image classification. Large-scale CNNs are computationally intensive and require high-computing resources. This issue has been addressed by deploying them on Graphics Processing Units (GPUs). However, GPU-based implementation results in significant power consumption, limiting their application in embedded systems. In this context, FPGA-based implementations offer a reasonable solution with tolerable power consumption, but large-scale CNNs have substantial memory and computation requirements, making them challenging to deploy on resource-limited edge devices. This research work addresses the challenge of deploying CNNs on resource-restricted embedded systems by proposing a memory-efficient parallel architecture (MPA). The goal is to exploit the CNN’s inherent parallelism while minimizing memory and power consumption. The MPA approach involves a software/hardware co-design, including a low-memory network compression framework at the software level which removes redundant parameters to reduce model size. In addition, it categorizes layers into No-Pruning (NP) and Pruning (P) layers. Weight quantization is then applied to each category of layers to compress the model further by reducing the weight parameters to a low bit width. Depending upon the distribution of the weights, NP-layers undergo Optimized Quantization (OQ), while P-layers are subjected to Incremental Quantization (INQ). We propose the OQ algorithm for NP-layers to perform quantization using optimal quantization levels (Q-levels) obtained from the Optimizer. High compression of 11x, 5x, 8.5x and 7.5x has been achieved for LeNet-5, VGG-16, AlexNet and ResNet-50 respectively, using the proposed compression framework, resulting in a significant reduction in memory utilization with negligible accuracy loss.
To further minimize the resource utilization of the system, MPA presents a parallel architecture for the convolution (CONV) layers of the model at the hardware level. The compressed model obtained in the previous step is mapped onto the target hardware, i.e., an FPGA, by applying the proposed parallel hardware architecture. In the parallel architecture, multiple 1D-processing elements (PEs) are connected in parallel as a 2D-PE to achieve data-level and computation-level parallelism. Each 2D-PE executes the convolution operation of the CONV layer by performing the multiple MAC operations involved in the convolution process in parallel, which further reduces the computational cost of the system. In the structure of each 1D-PE, we further achieve computation-level parallelism by connecting multiple registers, multipliers, adders, and multiplexers in a systolic-array manner. For P-layers, the multipliers are replaced with barrel shifters to further reduce the system’s resource utilization. The model obtained after applying both software- and hardware-level optimizations is examined based on its resource optimization, which includes the area in terms of the number of slice registers, LUTs, DSPs and flip-flops. It is found that a barrel-shifter-based PE consumes almost half of the resources consumed by a multiplier-based PE, resulting in a prominent decrease in overall resource utilization. Consequently, this makes the proposed system a reasonable solution for deployment on embedded systems with constrained hardware resources.
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/14465">
<title>Classification of EEG Signals for Neuromarketing Applications</title>
<link>http://hdl.handle.net/123456789/14465</link>
<description>Classification of EEG Signals for Neuromarketing Applications
Syed Mohsin Ali Shah, 01-242192-008
Every year, more than 400 billion dollars is spent on marketing campaigns. It is a common practice to promote diverse customer goods through advertising campaigns in order to boost revenues and consumer awareness. The effectiveness of business investments in marketing campaigns depends entirely on consumers’ willingness and ability to describe how they feel after watching an advertisement. Conventional marketing techniques (e.g., television commercials and newspaper ads) are unaware of human emotions and responses while the advertisements are being watched. Traditional advertising techniques seek to govern the consumer’s opinion toward a product, which may not reflect their actual behavior at the time of purchase. It is probable that advertisers misjudge consumer behavior because predicted opinions do not always correspond to consumers’ actual purchase behaviors. Neuromarketing is a new paradigm for understanding customer buying behaviour and decision making, as well as predicting their gestures for product utilization, through an unconscious process. The field of neuromarketing has gained traction as a means of bridging the gap between traditional advertising methods, which focus on explicit consumer responses, and neuromarketing methodologies, which focus on implicit consumer responses. Choice prediction makes it possible to determine what buyers really desire in a product. In neuromarketing, neuroscience information can be used to understand consumer behavior from brain activity measured through EEG signals. EEG-based preference recognition systems focus on three key phases. Previous studies did not focus on effective preprocessing and classification techniques for EEG signals, so this study proposes an effective method for preprocessing and classification of EEG signals, using deep learning to determine the choices of consumers for various products by measuring their “liking” and “disliking” for neuromarketing applications. The proposed method involves effective preprocessing of EEG signals by removing noise, and a synthetic minority oversampling technique (SMOTE) to deal with the class imbalance problem. The dataset employed in this study is a publicly available neuromarketing dataset consisting of EEG recordings taken from 25 participants who were shown different sorts of products. The responses of the customers were recorded in terms of likes and dislikes. Automated features were extracted using a long short-term memory network (LSTM) and then concatenated with handcrafted features such as power spectral density (PSD) and discrete wavelet transform (DWT) to create a complete feature set. Classification was performed using the proposed hybrid classifier, which optimizes the weights of two machine learning classifiers and one deep learning classifier and classifies the data as like or dislike. The classifiers are the Support Vector Machine (SVM), Random Forest (RF), and a deep neural network (DNN). The proposed hybrid model outperforms the individual RF, SVM, and DNN classifiers, achieving an accuracy of 96.89%. Accuracy, sensitivity, and specificity were computed to evaluate the proposed method and compare it with recent state-of-the-art methods.
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/14467">
<title>Aerial Imagery Pile Burn Detection using Deep Learning</title>
<link>http://hdl.handle.net/123456789/14467</link>
<description>Aerial Imagery Pile Burn Detection using Deep Learning
Hoor Ul Ain Tahir, 01-242202-003
Wildfires are among the costliest and deadliest natural disasters around the globe, affecting millions of acres of forest resources and threatening the lives of humans and animals. Thousands of forest fires across the globe result in serious damage to the environment. Further, industrial explosions, domestic fires, farm fires, and wildfires are huge problems that negatively affect the environment and contribute significantly to climate change. Damage caused by such incidents is time-sensitive and can be fatal, resulting in huge losses of life and property if not dealt with in time. Recent advances show that aerial images can be beneficial in wildfire studies. Among the different technologies and methods for collecting aerial images, drones have recently been used for manual/automatic monitoring of potential risk areas. Images received from the drones can be processed using vision and machine learning techniques for automated and timely detection of fires, thus shortening the response time and reducing the damage caused by the fire whilst minimizing the cost of firefighting. Automated vision-based fire detection has therefore become an important research topic in recent years. Desired properties of good vision-based fire detection are a low false alarm rate, fast response time, and high accuracy. This thesis presents a comprehensive literature review of recent vision-based approaches for the automated detection of fire from images and videos. It also covers computing the area under the fire and planning to mitigate it. The literature has broadly been categorized into classic vision/machine learning-based approaches and deep learning-based approaches. Based on a comparison of these approaches using a variety of datasets and performance metrics, it has been observed that deep learning-based approaches generally yield better performance than classic vision/machine learning-based techniques. In this research, we further explored various deep learning alternatives for accurate fire detection. A YOLOv5-based deep learning model is proposed in this research for efficient region-based detection and segmentation of fire. Pixel-level segmentation is also performed using Mask R-CNN to estimate the area under the fire so that mitigation can be planned. The problem of limited labeled training data, compared to the number of training samples required for deep learning-based model training, is mitigated through a variety of preprocessing and augmentation techniques. Comparison with existing vision-based fire segmentation approaches on publicly available datasets shows the improved performance of the proposed approach compared to its competitors.
Supervised by Dr. Shehzad Khalid
</description>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
