<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>BCE (BUES-FYP)</title>
<link href="http://hdl.handle.net/123456789/10315" rel="alternate"/>
<subtitle/>
<id>http://hdl.handle.net/123456789/10315</id>
<updated>2026-04-04T12:27:57Z</updated>
<dc:date>2026-04-04T12:27:57Z</dc:date>
<entry>
<title>Deep Learning-Based Cancer Detection in Histopathological Images of Breast Pathology</title>
<link href="http://hdl.handle.net/123456789/19914" rel="alternate"/>
<author>
<name>Muhammad Awais, 01-132212-024</name>
</author>
<author>
<name>Mujtaba Ahmed, 01-132212-034</name>
</author>
<author>
<name>Muhammad Jalal Haider, 01-132212-050</name>
</author>
<id>http://hdl.handle.net/123456789/19914</id>
<updated>2025-09-11T09:37:34Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Deep Learning-Based Cancer Detection in Histopathological Images of Breast Pathology
Muhammad Awais, 01-132212-024; Mujtaba Ahmed, 01-132212-034; Muhammad Jalal Haider, 01-132212-050
An estimated 2.3 million new cases of breast cancer are diagnosed worldwide each year. Early detection and accurate diagnosis are essential for improving patient outcomes and survival rates. Although histopathological examination is considered the gold standard for diagnosis, it is time-consuming and subject to inter-observer variability, which can compromise the consistency and accuracy of human assessment. This research addresses these well-known gaps by developing a systematic deep learning framework to automate the classification and segmentation of breast cancer histopathological images. The framework consists of two main components that serve as tools to support pathologists in their diagnostic workflow. The first approach leveraged a standalone Vision Transformer (ViT) architecture that processes image patches as sequences to capture global dependencies. The second approach used a hybrid model combining the VGG16 convolutional neural network with a Data-efficient Image Transformer (DeiT). This hybrid architecture leverages both the local features captured by convolutions and the global context modeling of transformers. The hybrid model outperformed the ViT-only model, achieving 95% accuracy compared to 91%, which supports the case that architectural integration improves diagnostic precision. For the segmentation component, the TransUNet architecture was used to identify and delineate histopathology-specific structures. The model was trained to differentiate among invasive carcinoma, in situ carcinoma, and benign tissue. A set of pixel-wise annotated histopathological images was used to train and evaluate the segmentation model, which achieved a mean Dice similarity coefficient of 92%, indicating strong overlap between predicted and ground-truth segmentations.
Additional metrics, including Intersection over Union (IoU), sensitivity, and specificity, were also high. This framework may provide pathologists with a powerful decision support tool for clinical implementation, improving diagnostic efficiency, consistency, and accuracy. Future work will integrate the framework into existing laboratory information systems to maximize clinical impact and adoption.
Supervised by Prof. Dr. Shehzad Khalid
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smart Traffic Light System</title>
<link href="http://hdl.handle.net/123456789/19917" rel="alternate"/>
<author>
<name>Abdul Manan, 01-132212-001</name>
</author>
<author>
<name>Muhammad Abdullah, 01-132212-023</name>
</author>
<author>
<name>Haris Ajmal, 01-132212-048</name>
</author>
<id>http://hdl.handle.net/123456789/19917</id>
<updated>2025-09-11T10:00:53Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Smart Traffic Light System
Abdul Manan, 01-132212-001; Muhammad Abdullah, 01-132212-023; Haris Ajmal, 01-132212-048
Traffic congestion is a critical challenge in urban environments, particularly in densely populated regions like Pakistan, where static, time-based traffic signals fail to adapt to real-time traffic conditions. These traditional systems lead to increased delays, long vehicle queues, wasted fuel, and elevated emissions. To overcome these issues, the Smart Traffic Light System (STLS) utilizes deep learning-based object detection and a dynamic green-time algorithm to optimize traffic flow at intersections. Our system focuses on real-time video-based traffic analysis at four-way intersections to dynamically allocate green signal time according to vehicle density. Initially, publicly available datasets were evaluated, but due to limited relevance to local traffic conditions, a custom dataset comprising approximately 3,000 images was collected across various intersections in Rawalpindi and Islamabad. This dataset includes five vehicle classes: car, bike, bus, truck, and van. After pre-processing and annotation, multiple YOLO models were trained on this dataset, achieving detection accuracies between 91% and 93%. A custom green-time algorithm was developed to process real-time vehicle counts from each lane and allocate signal time proportionally, eliminating bias and minimizing average vehicle wait times. The complete system was implemented using a Raspberry Pi 4 and cameras deployed at intersections to demonstrate low-cost, scalable deployment. The proposed solution significantly improves intersection throughput, reduces idle times, and shows strong potential for adoption in smart city infrastructure. Its adaptability, sustainability, and practical efficiency make it a compelling alternative to legacy traffic control systems in developing countries.
Supervised by Dr. Amina Jameel
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinesis Pro Standing Wheelchair</title>
<link href="http://hdl.handle.net/123456789/19921" rel="alternate"/>
<author>
<name>Ali Uzair, 01-132212-006</name>
</author>
<author>
<name>Jawad Ali Mirza, 01-132212-019</name>
</author>
<author>
<name>Muhammad Masharib Khan, 01-132212-028</name>
</author>
<id>http://hdl.handle.net/123456789/19921</id>
<updated>2025-09-11T10:24:53Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Kinesis Pro Standing Wheelchair
Ali Uzair, 01-132212-006; Jawad Ali Mirza, 01-132212-019; Muhammad Masharib Khan, 01-132212-028
Mobility limitations present significant challenges for individuals with physical disabilities, particularly in low- and middle-income countries where access to advanced assistive devices is restricted by high costs and limited availability. Commercial standing wheelchairs, while offering important health and social benefits through their ability to assist users in both sitting and standing positions, remain out of reach for most due to their prohibitive price and reliance on proprietary components. This study introduces the Kinesis Pro—a novel, low-cost motorized standing wheelchair designed to address these barriers through the use of locally sourced materials, open-source electronics, and a modular, user-centered approach. The Kinesis Pro integrates essential posture transition and mobility features within a robust, ergonomic frame, and utilizes a cost-effective electrical and mechanical system for reliable performance. Field testing and engineering analysis demonstrated that the Kinesis Pro provides smooth, safe transitions between sitting and standing, supports users up to 130 kg, and offers comparable core functionality to premium commercial models at a fraction of the price. Unlike existing solutions, which often require specialized maintenance and are inaccessible in resource-limited environments, the Kinesis Pro emphasizes affordability, ease of repair, and adaptability to different user needs. This work demonstrates the potential for innovative engineering to deliver essential mobility solutions to underserved populations and provides a foundation for future enhancements, including smart sensor integration and expanded customization.
Supervised by Engr. Waleed Manzoor
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mental Health Chatbot With AI</title>
<link href="http://hdl.handle.net/123456789/19916" rel="alternate"/>
<author>
<name>Ahmad Faraz, 01-132212-004</name>
</author>
<author>
<name>Sadeed Ullah, 01-132212-038</name>
</author>
<id>http://hdl.handle.net/123456789/19916</id>
<updated>2025-09-11T09:47:26Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Mental Health Chatbot With AI
Ahmad Faraz, 01-132212-004; Sadeed Ullah, 01-132212-038
An increasing number of individuals face mental health problems, including anxiety, stress, and depression, as academic, social, and personal demands keep rising. Conventional mental health services face multiple obstacles, such as restricted accessibility, cost barriers, and societal stigma, that prevent people from obtaining timely assistance. This project developed an AI-enabled mental health chatbot mobile app that provides scalable, accessible, and interactive methods for mental health evaluation and support. The app presents the Depression Anxiety Stress Scales (DASS-21) questionnaire in a conversational format, allowing users to conduct daily self-assessments. User interaction with the chatbot runs through an empathetic transformer-based system that performs sentiment analysis on user input to produce weekly mental health indexes. The application stores the generated insights in Firebase, enabling users to track their progress over time and identify patterns in their mental health. Additionally, the app offers customized coping strategies and resources based on individuals' responses, ensuring that users receive targeted support that suits their specific needs. This personalized approach not only fosters a deeper understanding of one's mental well-being but also empowers users to take proactive steps towards improvement. As users regularly update their self-assessments in the application, they gain a better understanding of their emotional patterns. These self-assessment records will also be available to appointed therapists, supporting better-informed decisions and individualized care under human oversight. Users benefit from the app because it implements intelligent communication to detect mental health conditions while building self-awareness and increasing engagement with mental health services.
Supervised by Engr. Muhammad Kashif Naseer
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
</feed>
