<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>BS (CS) (BUIC-FYP-E8)</title>
<link href="http://hdl.handle.net/123456789/13175" rel="alternate"/>
<subtitle/>
<id>http://hdl.handle.net/123456789/13175</id>
<updated>2026-04-04T11:08:33Z</updated>
<dc:date>2026-04-04T11:08:33Z</dc:date>
<entry>
<title>Face Diseases Detection and Recommendation System</title>
<link href="http://hdl.handle.net/123456789/20639" rel="alternate"/>
<author>
<name>Hamza, 01-134212-050</name>
</author>
<author>
<name>Muhammad Sajeel Arshad Hashmi, 01-134212-126</name>
</author>
<id>http://hdl.handle.net/123456789/20639</id>
<updated>2026-02-19T07:39:14Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Face Diseases Detection and Recommendation System
Hamza, 01-134212-050; Muhammad Sajeel Arshad Hashmi, 01-134212-126
Facial skin analysis is a rapidly evolving field within computer vision and artificial intelligence, offering valuable applications in dermatology, skincare, and cosmetic technology. The increasing demand for early skin disease diagnosis and personalized skin health solutions has accelerated the development of intelligent systems capable of identifying facial skin conditions with high precision and providing effective treatment recommendations. This project aims to develop and evaluate a deep learning-based pipeline capable of performing multi-label, multi-class semantic segmentation of facial skin diseases. The primary objective is to identify co-existing skin conditions—such as acne, redness, dark circles, and white spots—with high precision and offer relevant, personalized skincare recommendations. To address this, we propose an end-to-end AI-powered framework titled the Face Disease Detection and Recommendation System, which integrates deep learning, image segmentation, and a Retrieval-Augmented Generation (RAG) pipeline built on a large language model. The proposed system functions in two main phases. In the first phase, facial skin disease detection and segmentation are performed using U-Net, U-Net++, and a custom CNN, trained on three annotated datasets: ACNE-04, Dermatology Advisor, and Detection of Skin Diseases. These datasets, formatted in COCO JSON, were sourced from Roboflow and Kaggle. Nine model-dataset combinations were evaluated using accuracy, Dice coefficient, and IoU, with U-Net++ showing the best performance and selected for deployment. To improve facial ROI extraction, a YOLO model trained on a Kaggle Face Detection Dataset isolates the face from user-provided images (front, left, right). These images pass through a two-stage U-Net++ segmentation pipeline—one model trained on the Detection of Skin Diseases dataset and a second, acne-specific model trained on the ACNE-04 dataset—to boost segmentation accuracy for acne. In the second phase, a Retrieval-Augmented Generation (RAG) pipeline using the Gemini 1.5 Flash language model provides personalized skincare product recommendations and answers skin-related queries. This integration of image-based diagnosis with AI-driven recommendations offers users a comprehensive virtual dermatology assistant.
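A minimal Python sketch of the first phase's two-stage inference flow, for illustration only: it assumes the ultralytics and segmentation_models_pytorch packages, the weight file, class counts, and channel order are hypothetical, and the RAG phase is omitted.

import torch
import segmentation_models_pytorch as smp  # provides the U-Net++ architecture
from ultralytics import YOLO

# Stage 0: a YOLO face detector isolates the facial ROI (weights file is hypothetical).
face_detector = YOLO("face_detector.pt")

def crop_face(image):
    # Keep the highest-confidence face box and crop the frame to it.
    x1, y1, x2, y2 = face_detector(image)[0].boxes.xyxy[0].int().tolist()
    return image[y1:y2, x1:x2]

# Stage 1: general skin-condition model; stage 2: acne-specialist refinement.
# Encoder choice and class counts are illustrative, not taken from the project.
general_model = smp.UnetPlusPlus(encoder_name="resnet34", classes=4).eval()
acne_model = smp.UnetPlusPlus(encoder_name="resnet34", classes=1).eval()

def segment(face_tensor):
    # Per-channel sigmoid yields independent masks, as in multi-label segmentation.
    with torch.no_grad():
        masks = general_model(face_tensor).sigmoid()
        acne = acne_model(face_tensor).sigmoid()
    masks[:, 0] = acne[:, 0]  # overwrite the (assumed) acne channel with the specialist output
    return masks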
Supervised by Mr. Qazi Haseeb Yousaf
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hirely: AI-Powered Interview</title>
<link href="http://hdl.handle.net/123456789/20642" rel="alternate"/>
<author>
<name>Farkhanda Aftab Malik, 01-134212-042</name>
</author>
<author>
<name>Abdullah Qureshi, 01-134212-009</name>
</author>
<id>http://hdl.handle.net/123456789/20642</id>
<updated>2026-02-20T03:44:28Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Hirely: AI-Powered Interview
Farkhanda Aftab Malik, 01-134212-042; Abdullah Qureshi, 01-134212-009
As industries continue to scale and adapt to digital transformation, recruitment processes must also evolve to meet modern demands. Traditional interview methods—manual scheduling, face-to-face evaluations, and subjective assessments—can be time-consuming, resource-intensive, and inconsistent, especially when hiring at scale. With the rise of remote work and global talent acquisition, companies increasingly seek solutions that offer both efficiency and depth in evaluating candidates. Hirely is a web-based recruitment system designed to automate and enhance the remote interview process. At its core, Hirely enables candidates to participate in interviews conducted by an AI-powered bot that asks dynamic, resume-based questions. These interviews are carried out via integrated video conferencing, during which the system records the candidate’s responses and analyzes both facial expressions and speech patterns to detect seven distinct emotional states. The result is a comprehensive candidate evaluation report that combines performance metrics with emotional intelligence insights—giving recruiters a richer, data-driven perspective. The platform is built to support mass interviewing, making it ideal for large-scale recruitment drives, campus hiring, or organizations with high-volume hiring needs. In addition to automated interviews, Hirely offers powerful features such as candidate management, interview scheduling, video recording, real-time team collaboration, and future integration with applicant tracking systems. These features enable businesses to manage their hiring pipelines efficiently while enhancing the overall candidate experience.
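A rough Python sketch of how per-frame facial emotion scores could feed the seven-state evaluation described above; the fer package and its label set are assumptions standing in for the project's actual models.

from collections import Counter
from fer import FER  # off-the-shelf facial expression recognizer

# The seven labels fer predicts; the system's own seven states may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
detector = FER(mtcnn=True)

def emotion_profile(frames):
    # Classify each video frame and report the share of frames per dominant emotion.
    counts = Counter()
    for frame in frames:
        detections = detector.detect_emotions(frame)
        if detections:
            scores = detections[0]["emotions"]
            counts[max(scores, key=scores.get)] += 1
    total = sum(counts.values()) or 1
    return {emotion: counts[emotion] / total for emotion in EMOTIONS}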
Supervised by Dr. Adil Khan
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-Infused 3D Model Synthesis for Immersive Augmented Reality in Interior Design</title>
<link href="http://hdl.handle.net/123456789/20645" rel="alternate"/>
<author>
<name>Mohammad Faizan, 01-134211-044</name>
</author>
<author>
<name>Wakeel Furqan Ahmed, 01-134212-192</name>
</author>
<id>http://hdl.handle.net/123456789/20645</id>
<updated>2026-02-20T04:41:09Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">AI-Infused 3D Model Synthesis for Immersive Augmented Reality in Interior Design
Mohammad Faizan, 01-134211-044; Wakeel Furqan Ahmed, 01-134212-192
The integration of Augmented Reality (AR) in design processes is transforming the way designers, architects, and consumers interact with spatial environments and design elements. The rapid growth of AR technology has led to the development of a variety of tools designed to assist with interior design and architectural visualization. However, many of these solutions have several limitations, including limited customization and interactivity, lack of automation, and complexity of use, that hinder their effectiveness and accessibility. This project explores the applications of AR in the field of interior design, focusing on furniture visualization, architectural visualization, space planning, and material selection. By leveraging AR technologies such as ARCore, the project aims to enhance user experience by providing interactive design customization that overlays digital objects onto physical spaces. Current solutions, such as IKEA Place and Blender, offer valuable AR-based design visualization tools but are limited by the requirement for advanced modeling skills, lack of AI-driven automation, and limited interactivity for customization. This project seeks to address these gaps by proposing a more accessible, automated, and interactive AR design tool that allows users to generate, customize, and visualize 3D models seamlessly. Through this innovation, the project aims to streamline the design process, making it more intuitive, engaging, and personalized, while also improving the efficiency and accuracy of interior design decisions.
Supervised by Dr. Sumaira Kausar
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speech-to-Speech Translation with Automated Lip-Synced Video Using Deepfake and TTS</title>
<link href="http://hdl.handle.net/123456789/20647" rel="alternate"/>
<author>
<name>M. Shaheer Ijaz, 01-134212-127</name>
</author>
<author>
<name>M. Umar Farooq, 01-134212-139</name>
</author>
<id>http://hdl.handle.net/123456789/20647</id>
<updated>2026-02-20T04:48:48Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Speech-to-Speech Translation with Automated Lip-Synced Video Using Deepfake and TTS
M. Shaheer Ijaz, 01-134212-127; M. Umar Farooq, 01-134212-139
In today’s interconnected world, video content dominates communication, entertainment, and education, but language barriers often limit its accessibility. This project introduces a web application that seamlessly translates videos into a target language, enhancing inclusivity and cultural exchange. Using advanced AI technologies, it aligns translated audio with video through precise lip-syncing and facial expression adjustments, delivering an immersive and authentic user experience. At the core of this solution are generative adversarial networks (GANs) and deepfake technologies, which ensure high-quality translations and realistic synchronization of audio-visual elements. Developed with React, the application offers a responsive, scalable, and user-friendly interface optimized for computationally intensive tasks. Key challenges, such as audio-visual synchronization and contextual translation accuracy, were addressed to maintain video quality. This project demonstrates the potential of AI-driven multimedia applications in education, entertainment, and content localization. By bridging language gaps, it fosters cross-cultural collaboration and redefines interactions with multilingual video content.
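A high-level Python sketch of the dubbing pipeline described above, under stated assumptions: Whisper for transcription, hypothetical translate_text and synthesize_speech helpers standing in for any MT and TTS service, and a Wav2Lip-style GAN checkpoint for lip re-synchronization.

import subprocess
import whisper  # OpenAI Whisper for speech recognition

def translate_video(video_path, target_lang):
    # 1. Transcribe the original audio track (Whisper extracts audio via ffmpeg).
    transcript = whisper.load_model("base").transcribe(video_path)["text"]

    # 2. Translate the transcript (translate_text is a placeholder for an MT service).
    translated = translate_text(transcript, target_lang)

    # 3. Synthesize translated speech (synthesize_speech is a placeholder TTS call).
    dubbed_audio = synthesize_speech(translated, target_lang)

    # 4. Re-sync the speaker's lips to the new audio with a Wav2Lip-style model.
    subprocess.run([
        "python", "inference.py",
        "--checkpoint_path", "wav2lip_gan.pth",
        "--face", video_path,
        "--audio", dubbed_audio,
        "--outfile", "translated_video.mp4",
    ], check=True)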
Supervised by Ms. Aima Zahoor
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
</feed>
