<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://hdl.handle.net/123456789/13173">
<title>PhD (CS) (BUIC-E-8)</title>
<link>http://hdl.handle.net/123456789/13173</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/19052"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/20508"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/17061"/>
<rdf:li rdf:resource="http://hdl.handle.net/123456789/17063"/>
</rdf:Seq>
</items>
<dc:date>2026-04-04T12:27:58Z</dc:date>
</channel>
<item rdf:about="http://hdl.handle.net/123456789/19052">
<title>Identification of Synthetically Generated Videos Using Forensic Analysis Techniques</title>
<link>http://hdl.handle.net/123456789/19052</link>
<description>Identification of Synthetically Generated Videos Using Forensic Analysis Techniques
Shahela Saif, 01-284181-002
Deepfake videos are created using Artificial Intelligence to generate realistic content that falsely depicts individuals saying or doing things they never did. By employing algorithms such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), these videos are crafted by training on extensive datasets of a target's images and videos. This allows the model to capture and replicate facial features and expressions accurately, making the manipulated videos seem very realistic. Despite potential legitimate uses, such as in entertainment, deepfake technology poses serious risks, including misinformation and defamation, making it crucial to develop effective detection methods. These methods typically analyze inconsistencies in facial features, movements, and audio, along with detecting anomalies absent in genuine media. However, evolving deepfake technologies continually challenge detection capabilities, necessitating ongoing advancements in detection techniques that focus on facial analysis. Our research comprises two phases. In the first phase, we used a time-distributed convolutional network to extract features from videos. The time-distributed layer enables us to create a single representation for a set of video frames, rather than treating each frame independently. These embeddings were optimized using metric learning: we combined contrastive loss with cross-entropy loss to create a network that produces embeddings less reliant on the identification of image anomalies and more attuned to abnormal patterns in facial structure. Our experiments on the FaceForensics dataset, which includes various deepfake generation methods, demonstrated that our network generalizes across different generation techniques. In the second phase of our thesis, we aimed to enhance efficiency by focusing on key facial areas for deepfake detection.
By extracting and analyzing facial landmarks, which represent the face's structure, we observed changes in expressions and emotions during speech over the course of a video. Through experiments, we identified an effective set of landmarks and created a dataset from three deepfake datasets of varying complexity. Local feature descriptors, selected based on their size, time complexity, and performance, were used to generate a feature vector for each coordinate. For classification, a graph convolutional neural network was employed, which is well suited to sparse data and to creating a lightweight and robust deepfake detector. The graph construction involved segmenting facial regions and establishing semantic relationships based on their impact on natural and manipulated speech, which were tested experimentally for authenticity. The graph network was trained to detect relationships between facial landmarks and their temporal changes, enabling the algorithm to identify inconsistencies or unnatural movements indicative of manipulation. Our work has assessed the efficacy of employing facial features, as opposed to generic image features, for deepfake detection. Our temporal network demonstrated superior generalization compared to competing approaches, achieving over 90% accuracy across different datasets; the results are comparable to state-of-the-art (SOTA) works. Additionally, our implementation of the graph convolutional neural network offered a lightweight yet highly effective deepfake detection solution. Our research contributions include deepfake detection via a time-distributed CNN-LSTM network that leverages metric learning for optimized video embeddings. We enhanced our approach by analyzing physiological and local feature descriptors of facial structures, creating a comprehensive graphical facial feature vector. This analysis facilitated the development of a specialized graph kernel based on correlations between facial landmarks, improving speech pattern analysis.
Building on these foundations, we introduced a lightweight deepfake detection framework using a graph convolutional network with neighborhood normalization. This framework utilizes spatial data and a correlation-based graph kernel for more effective deepfake classification, enhancing the generalization capabilities of our detection system.
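As a rough illustration only (not the thesis implementation), the neighborhood normalization used in standard graph convolutional layers can be sketched in NumPy; the function name, the ReLU activation, and the dense-matrix form are illustrative assumptions:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN layer with symmetric neighborhood normalization:
    # H_next = relu( D^(-1/2) (A + I) D^(-1/2) H W )
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^(-1/2)
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # normalized adjacency
    return np.maximum(S @ H @ W, 0.0)       # ReLU activation
```

In a landmark graph, A would encode the experimentally chosen relationships between facial regions, and H the per-landmark feature vectors.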
Supervised by Dr. Samabia Tehsin
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/20508">
<title>Obfuscated Android Malware Detection Using Dynamic Analysis Techniques</title>
<link>http://hdl.handle.net/123456789/20508</link>
<description>Obfuscated Android Malware Detection Using Dynamic Analysis Techniques
Saneeha Khalid, 01-284172-002
Globally, the use of smartphones has surged dramatically in recent years. The development of user-centric operating systems for smartphones has played a vital role in the adaptability of these devices. Android, by Google, is the most used smartphone operating system, with a market share of more than 70%. This has encouraged hackers to exploit its vulnerabilities, resulting in a consistent increase in Android malware. Conventional signature-based schemes provide a quick and efficient method for detecting known malware but are unable to cope with the rapid pace of emerging malware and their variants. This has led to the design of generic machine learning models for malware detection and categorization. These solutions are built on static and dynamic features extracted from Android applications: the former aim to detect malware without execution, while the latter analyze the runtime behavior of an application. Code obfuscation techniques are commonly used to evade static malware analysis approaches. In recent years, researchers have turned to dynamic-analysis-based machine learning solutions for malware detection. Dynamic analysis features can be extracted from a number of sources, including network traffic, function call graphs, API calls, and volatile memory. Although all these sources provide useful insights into an application's behavior, existing studies indicate that volatile memory represents a comprehensive and holistic view of an application's runtime execution. It can be used to extract information related to the OS kernel, all executing processes, network activity, and application code. Thus, volatile memory is a rich collection of both system-specific and process-specific features. These features have been investigated in more detail for malware detection on the Linux and Windows platforms than on Android.
In addition, researchers have focused more on binary classification of malware than on category classification, such as adware, banking trojans, riskware, and SMS trojans. Category classification plays a vital role in formulating mitigation strategies against specific malware threats. In this research, a volatile-memory-based Android malware detection and categorization framework, MemDroidAnalyzer, is presented. The framework is capable of temporal acquisition of volatile memory dumps, which are then analyzed to extract semantically rich information about application behavior. This study extracts two kinds of information from memory: volatile memory state information, and process-specific information in the kernel task structure. The information is extracted by two components of MemDroidAnalyzer, namely VolDroid and KTSDroid. VolDroid extracts information related to the state of volatile memory after the execution of the application. It is pertinent to highlight that existing studies have not examined volatile memory state information with respect to the Android platform. VolDroid comprehensively analyzes various artifacts in volatile memory, and forty valuable features for malware detection and categorization are reported. On the other hand, KTSDroid extracts process-specific information from volatile memory by analyzing the kernel task structure, which is a hierarchical tree-based data structure. To the best of our knowledge, KTSDroid is the most comprehensive kernel task structure analyzer for Android, extracting features from nine categories of the structure. In addition, each category tree is investigated to a depth of six levels. Significant kernel task structure categories and twenty-eight useful features have been identified. Further, MemDroidAnalyzer synthesizes the features extracted by VolDroid and KTSDroid to increase the detection and categorization efficiency for obfuscated and non-obfuscated malware.
To the best of our knowledge, this work presents a unique combination of memory state information and process-specific information for Android malware analysis. MemDroidAnalyzer's performance is validated against code obfuscation techniques including class encryption, string encryption, control flow modifications, class reordering, Android manifest transformation, identifier renaming, reflection, and junk code insertion. The proposed framework detects malicious Android applications with an F1-score of 0.99 on known malware samples and 0.976 on obfuscated and new (unknown) samples, an improvement of 4 percent on known malware samples and 2.5 percent on obfuscated (excluding preventive obfuscation) and unseen new malware samples in terms of F1-score, compared to existing memory-based studies for Android malware detection. MemDroidAnalyzer is also capable of categorizing malware into the Adware, Banking Trojan, Riskware, and SMS Trojan classes. To the best of our knowledge, MemDroidAnalyzer is the only solution for Android malware categorization based solely on memory-based features, with an average F1-score of 0.965. The proposed features demonstrate resilience against obfuscation and tampering effects, unlike existing frameworks that exhibit similar performance but are susceptible to issues related to code hiding and tampering.
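For reference, the F1-score quoted above is the harmonic mean of precision and recall; a minimal sketch (illustrative only, not part of MemDroidAnalyzer):

```python
def f1_score(tp, fp, fn):
    # Precision: fraction of flagged apps that are truly malicious.
    precision = tp / (tp + fp)
    # Recall: fraction of malicious apps that were flagged.
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of the two.
    return 2 * precision * recall / (precision + recall)
```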
Supervised by Dr. Faisal Bashir Hussain
</description>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/17061">
<title>Design of Deep Learning and Shape-Based Models for Tuberculosis Detection Using Diagnostic Radiology</title>
<link>http://hdl.handle.net/123456789/17061</link>
<description>Design of Deep Learning and Shape-Based Models for Tuberculosis Detection Using Diagnostic Radiology
Zainab Yousuf Zaidi, 01-284152-002
Pulmonary diseases, including Tuberculosis (TB), can be fatal, and millions of people are infected by them each year. Timely and accurate detection of pulmonary diseases can save millions of lives around the globe. The chest X-ray (CXR) is considered the first and foremost diagnostic radiology technique for the initial screening of pulmonary diseases. Moreover, it is adopted worldwide and is available in the remotest corners of the world owing to its non-invasive nature, simple procedure, accessibility, and economic feasibility. On the other hand, CXR interpretation can be tricky and can consume considerable time, which is problematic when radiologists need to interpret thousands of CXRs. To cope with this, the radiologist's load can be shared by Computer-Aided Diagnostic (CAD) systems. CAD systems can assist in performing routine processing and can present better disease-specific information, helping the radiologist make quicker and more appropriate decisions and saving the radiologist's time. Most existing CAD systems are based on research using the Montgomery County (MC), Shenzhen (SH), and Japanese Society of Radiological Technology (JSRT) datasets, which contain a few hundred images each and thus cannot be generalized at large scale, while very few have utilized the National Institutes of Health (NIH) CXR dataset containing more than 112k CXR images. The proposed research work presents a hierarchical custom Convolutional Neural Network (CNN) model for i) improved pulmonary disease identification and classification, and ii) automated report generation by learning physicians' reports against CXR images using Natural Language Processing (NLP), transformers, and Recurrent Neural Networks (RNNs). The proposed hierarchical model is based on the NIH CXR dataset, the Indiana University (IU) dataset, and the locally gathered HealthWays dataset.
In the first approach, pulmonary disease classification, the proposed hierarchical model is applied in different ways to the NIH CXR dataset: for detecting healthy or infected images, healthy or TB-infected images with an F1 score of 0.92, TB-specific class label classification with an average accuracy of 0.84, and 14 thoracic disease class label classification with an average accuracy of 0.82. The model is evaluated on the benchmark split of the NIH CXR dataset for 14 thoracic disease classification and reports improved classification performance, surpassing the results of state-of-the-art methods. In the second approach, CXR classification using automated report generation, the proposed hierarchical model is trained on the IU dataset using medical reports and CXR images. The medical reports are used as ground truths along with the CXR images, using transformers and RNNs, for automated report generation. After training, the proposed model generates similar reports on its own by analyzing a CXR and then classifying it accordingly. Finally, the hierarchical model is evaluated on the locally gathered HealthWays dataset to find localized patterns of pulmonary diseases. The proposed hierarchical model contributes to the accurate and better identification of pulmonary diseases using CXR images.
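The hierarchical decision flow described above (healthy vs. infected, then TB vs. other, then one of 14 thoracic labels) can be sketched as a routing function; all stage classifiers here are hypothetical placeholders, not the thesis model:

```python
def hierarchical_classify(cxr, stage1, stage2_tb, stage3_multi):
    # Stage 1: coarse screen, healthy vs. infected.
    if stage1(cxr) == "healthy":
        return "healthy"
    # Stage 2: is the infection TB-specific?
    if stage2_tb(cxr) == "tb":
        return "tb"
    # Stage 3: otherwise assign one of the 14 thoracic disease labels.
    return stage3_multi(cxr)
```

Routing through progressively finer classifiers lets each stage train on a simpler decision than a single flat 16-way classifier would face.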
Supervised by Dr. Amina Jameel
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://hdl.handle.net/123456789/17063">
<title>Localization of Vertebrae and Deformity Analysis using Digital Spinal Cord Images</title>
<link>http://hdl.handle.net/123456789/17063</link>
<description>Localization of Vertebrae and Deformity Analysis using Digital Spinal Cord Images
Joddat Fatima, 01-284152-001
The spinal cord is a fundamental structure that connects the brain with the rest of the body. The long, thin cord is made up of intertwined nerves and tissues, in combination with 33 separate bones stacked on top of one another. Curvature deformity causes an extra bend in the spinal curve. The curvature deformities are of three types: Kyphosis (thoracic region), Lordosis (cervical and lumbar regions), and Scoliosis (sideways curvature). Imaging techniques clinically used to diagnose these deformities include X-ray, Computed Tomography, and Magnetic Resonance Imaging. Many researchers have worked on deformity analysis of spinal curvatures, and numerous competitions and workshops have produced labeled datasets and new approaches as well. Recently, a few semi-automated systems have been proposed for vertebrae segmentation and Scoliosis Cobb estimation, but a fully automated method that can differentiate all three categories and identify severity levels among the disorders across multiple imaging modalities is still missing. In this research, we present a two-step automated system for localization of vertebrae, segmentation of the spinal column, and classification of diseases on the basis of their curvature shape and Cobb estimation. A recent object detection approach is utilized for vertebrae localization; in parallel, the spinal column is segmented. Both results are used to extract the midline curvature profile, which supports a feature-based shape analysis mechanism for reliable curvature classification. The proposed system also includes a traditional Cobb estimation procedure for curvature analysis, whose validation adds reliability to the predicted results. Both modules have been evaluated using available datasets. The localization results achieved a mean Average Precision (mAP) of up to 0.94 for AASCE19, 0.97 for the Mendeley dataset, and 0.95 for the CSI16 dataset.
Segmentation of the spinal column attained Dice scores of up to 0.971, 0.960, and 0.953 for the Mendeley, CSI16, and AASCE19 datasets, respectively. Comparison of the segmentation block with the literature shows an improvement in Dice score. Shape analysis using Random Forest (RF) classifiers attained an accuracy of 94.69%. Treating the same problem as image classification, the proposed feature set performed better than the deep features of EfficientNet-B4, with a 2% improvement in accuracy. The Cobb estimation results, in comparison with the latest state of the art, reduced the Mean Absolute Error (MAE) by 2 degrees. The classification of Lumbar Lordosis on the basis of the proposed methodology achieved accuracies of up to 98.04% for the Mendeley dataset and 81.25% for the CSI16 dataset.
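As background on the Cobb estimation step, the Cobb angle is conventionally the angle between the two most-tilted vertebral endplates. A minimal sketch, assuming the endplate tilts are given as slopes of the midline curvature profile (function name and inputs are illustrative, not the thesis procedure):

```python
import math

def cobb_angle(slope_upper, slope_lower):
    # Convert each endplate slope to its inclination in degrees,
    # then take the absolute difference as the Cobb angle.
    a_upper = math.degrees(math.atan(slope_upper))
    a_lower = math.degrees(math.atan(slope_lower))
    return abs(a_upper - a_lower)
```

In practice the most-tilted endplates would be selected automatically from the localized vertebrae before computing the angle.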
Supervised by Dr. Amina Jameel
</description>
<dc:date>2023-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
