1. Project Overview

The Emotion-Aware AI Complaint Management System is a voice-based application designed to support student well-being in educational institutions. The platform gives university students a secure, intuitive channel to articulate their concerns through voice recordings. Each recording is analyzed by a deep learning emotion recognition model that identifies the speaker's underlying emotional state.

By detecting emotional cues such as anger, fear, sadness, happiness, or neutrality, the system assesses the urgency of each complaint and classifies it into a distinct priority level. For instance, a complaint carrying emotions indicative of distress, such as anger or fear, is automatically escalated to high priority. This proactive flagging enables university counselors, administrators, and support staff to intervene promptly, with full context and appropriate empathy, fostering a more responsive and supportive academic environment.

Core Functional Modules:

The system is built on a modular framework, with each module responsible for a critical function in the complaint management lifecycle:

Ultimately, this system endeavors to bridge the communication gap between students' emotional states and institutional responsiveness. It is particularly beneficial for individuals who may find traditional complaint submission methods challenging or intimidating. By offering a non-intrusive, intelligent, and empathetic platform, the Emotion-Aware AI Complaint Management System empowers universities to proactively address mental health challenges, mitigate potential crises, and cultivate a safer, more inclusive, and supportive learning environment for their entire student body.

2. System Architecture: Modular Design for Scalability and Efficiency

The Emotion-Aware AI Complaint Management System uses a modular architecture, chosen for scalability, maintainability, and clean integration across operational components. The system is logically segmented into seven core modules, each with specific responsibilities in the end-to-end complaint analysis and management workflow.

Architectural Components and Workflow:

  1. Voice Input Collection Module:

    This module serves as the primary interface for user interaction. Students can effortlessly upload their audio complaints through a dedicated web-based portal. The module is designed to accommodate various audio formats (e.g., .wav, .mp3, .ogg), ensuring broad compatibility and preparing the raw audio data for subsequent processing stages.

  2. Audio Preprocessing & Feature Extraction Module:

    Upon receipt of an audio input, this module initiates a sophisticated preprocessing pipeline. Key operations include:

    • Noise Reduction: Advanced algorithms, leveraging spectral flatness and RMS thresholds, are applied to minimize background noise and enhance audio clarity.
    • Segmentation: Audio streams are intelligently segmented into manageable 2.5-second chunks, optimizing them for efficient processing by the emotion detection model.
    • Feature Extraction: Critical acoustic features are extracted to represent the emotional content of the speech. These include:
      • MFCC (Mel-Frequency Cepstral Coefficients): Captures the spectral envelope of the sound, mimicking human auditory perception.
      • ZCR (Zero Crossing Rate): Indicates the rate at which the speech signal changes sign, providing insights into the noisiness and tonality.
      • RMSE (Root Mean Square Energy): Measures the short-term energy of the audio signal, reflecting loudness and intensity.

    These extracted features are subsequently standardized and encoded, ensuring a consistent and optimized input format for the machine learning model.
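Two of these features are simple enough to illustrate directly in NumPy (MFCC extraction is more involved and is typically delegated to a library such as librosa; the function names below are illustrative, not the project's actual code):

```python
import numpy as np

def zero_crossing_rate(signal: np.ndarray) -> float:
    """Fraction of adjacent sample pairs where the signal changes sign."""
    return float(np.mean(signal[:-1] * signal[1:] < 0))

def rms_energy(signal: np.ndarray) -> float:
    """Root mean square energy, a proxy for loudness/intensity."""
    return float(np.sqrt(np.mean(signal ** 2)))

# A 440 Hz sine sampled at 22050 Hz crosses zero about 880 times per second,
# so its ZCR is roughly 880 / 22050 ≈ 0.04; its RMS is 1/sqrt(2) ≈ 0.707.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(zero_crossing_rate(tone), rms_energy(tone))
```

In the real pipeline these per-frame values are computed over short windows, stacked with the MFCCs, and passed through the pre-fitted scaler before inference.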

  3. Emotion Detection Engine (CNN-Based):

    At the core of the system's intelligence is a Convolutional Neural Network (CNN). The model has been trained on a combination of emotional speech datasets (e.g., RAVDESS, TESS, CREMA-D), enabling it to process the preprocessed audio features accurately. The output of this module is a probability distribution across the predefined emotion categories (angry, sad, fear, happy, and neutral), providing a nuanced picture of the speaker's emotional state.

  4. Emotion Intensity-Based Complaint Classification Module:

    Leveraging the primary emotion detected by the CNN, this module automatically assigns a priority level to each complaint. The classification logic is as follows:

    • High Priority: Typically assigned to complaints associated with intense emotions such as anger or fear, necessitating immediate attention.
    • Medium Priority: Often linked to neutral emotional states, indicating a standard processing urgency.
    • Low Priority: May be assigned to complaints with emotions like happiness, suggesting a less urgent or positive feedback context.

    This intelligent classification mechanism directly influences the urgency and subsequent response actions undertaken by institutional staff.
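A minimal sketch of this emotion-to-priority mapping is shown below. The sadness rule is an assumption: the document groups sadness with distress emotions but does not state its priority explicitly, and all names here are illustrative rather than the project's actual code.

```python
# Hypothetical priority mapping; labels follow the emotion classes named above.
PRIORITY_BY_EMOTION = {
    "angry": "High",
    "fear": "High",
    "sad": "High",      # assumption: sadness is treated as distress
    "neutral": "Medium",
    "happy": "Low",
}

def classify_priority(emotion: str) -> str:
    """Map a detected emotion label to a complaint priority level."""
    # Unknown labels fall back to Medium so nothing is silently dropped.
    return PRIORITY_BY_EMOTION.get(emotion.lower(), "Medium")

print(classify_priority("Angry"))  # High
```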

  5. Alert and Notification System:

    An integrated, real-time alerting mechanism is a critical component, designed to ensure that high-priority complaints are immediately brought to the attention of designated university counselors or administrators. All notifications are securely stored within the system and are readily accessible via a dedicated administrative dashboard, providing a comprehensive audit trail.

  6. Automated Report Generation and Visualization Module:

    For every processed complaint, this module automatically generates a comprehensive PDF report. Each report is a detailed compendium of information, including:

    • Student metadata (anonymized where appropriate)
    • Precise complaint timestamp
    • Detailed emotion scores and confidence levels
    • Accurate audio transcription

    These reports are securely saved on the server and can be downloaded by authorized users, facilitating offline analysis, record-keeping, and accountability.

  7. Security and Privacy Enforcement Module:

    This module implements robust security measures to ensure the confidentiality, integrity, and availability of all sensitive data. Key features include:

    • Authentication: Strict authentication protocols ensure that only authorized users (students for their data, administrators for system oversight) can access the platform.
    • Session Management: User sessions are securely managed using Flask-Login, preventing unauthorized access and session hijacking.
    • Data Encryption: Sensitive information is encrypted both in transit and at rest, providing an additional layer of protection.
    • Access Control: Granular access control mechanisms restrict data visibility and functionality based on user roles.
    • Audit Trails: Comprehensive logging ensures data integrity and provides an auditable record of all system interactions.
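The project uses Flask-Login for session management; the role-based access control idea can be sketched in a framework-agnostic way as follows (the decorator and class names are illustrative, not the system's actual implementation):

```python
from dataclasses import dataclass
from functools import wraps

@dataclass
class User:
    username: str
    role: str  # e.g., "student" or "admin"

def requires_role(role: str):
    """Decorator that rejects calls unless the user holds the given role."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user: User, *args, **kwargs):
            if user.role != role:
                raise PermissionError(f"{user.username} lacks role {role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def view_all_complaints(user: User) -> str:
    # Only administrators may see the full complaint list.
    return f"dashboard for {user.username}"

print(view_all_complaints(User("counselor1", "admin")))
```

In the actual Flask application this check would hang off the logged-in session user rather than an explicit argument.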

System Integration Overview:

The system's various components are seamlessly integrated to form a cohesive and efficient application:

System Architecture Diagram

Use Case Diagram

Activity Diagram

System Flow Diagram

3. Technology Stack: Robust and Scalable Foundations

The development of the Emotion-Aware AI Complaint Management System is underpinned by a carefully selected, robust, and modular technology stack. This stack integrates cutting-edge web development tools, advanced audio processing libraries, powerful machine learning frameworks, and scalable deployment capabilities, ensuring the system's high performance, reliability, and future extensibility. Below is a detailed breakdown of the key technologies employed across different layers of the system architecture:

Frontend Technologies: Crafting an Intuitive User Experience

Backend Technologies: Powering Core System Logic

Machine Learning & Audio Analysis: The Intelligence Core

Database Layer: Persistent Data Management

Deployment & Scalability: Ensuring Operational Excellence

4. Model Details: Deep Learning for Speech Emotion Recognition

The analytical core of the Emotion-Aware AI Complaint Management System is a sophisticated deep learning-based Speech Emotion Recognition (SER) model. This model has been meticulously designed and rigorously trained to accurately identify and interpret emotional cues embedded within voice inputs, ensuring high precision and reliability in its predictions. Below is a comprehensive overview of the model’s architecture, its training methodology, and key technical specifications.

Dataset Integration: Comprehensive Training for Robustness

To ensure the model's ability to generalize across diverse vocal characteristics and emotional expressions, it has been trained on a synergistic combination of three widely recognized and emotionally rich speech datasets:

The integration of these datasets collectively contributes to a balanced and comprehensive representation of emotions across variations in gender, age, and speaking style. This multi-dataset approach significantly enhances the model's generalization capabilities, making it robust to real-world variations in speech.

Emotion Classes: Targeted Emotional Categorization

The model is engineered to classify audio inputs into the following five primary emotional categories, which were strategically selected for their direct relevance in assessing distress levels and stability within student complaints:

Model Architecture: 1D Convolutional Neural Network (CNN)

The emotion classifier is built upon a 1D Convolutional Neural Network (CNN) architecture. This specific design is optimally suited for processing time-series data, such as audio signals, due to its inherent ability to capture temporal dependencies and local patterns. The architecture is designed to be lightweight yet sufficiently expressive to discern intricate temporal patterns and acoustic features within speech.

This optimized architecture ensures rapid inference times while maintaining exceptional precision in emotion classification.
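Since the exact layer configuration is not reproduced here, the shape of such a network can be illustrated with a toy forward pass in NumPy: a 1D convolution slides filters over the feature sequence, global average pooling collapses the time axis, and a softmax yields the probability distribution over the five emotion classes. All sizes below are illustrative assumptions, not the trained model's real dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """x: (time, channels); kernels: (n_filters, width, channels) -> (time', n_filters)."""
    n_filters, width, _ = kernels.shape
    steps = x.shape[0] - width + 1
    out = np.array([
        [np.sum(x[t:t + width] * kernels[f]) for f in range(n_filters)]
        for t in range(steps)
    ])
    return np.maximum(out, 0.0)  # ReLU activation

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

features = rng.standard_normal((108, 1))        # standardized per-frame feature sequence
kernels = rng.standard_normal((8, 5, 1)) * 0.1  # 8 filters of width 5
hidden = conv1d(features, kernels)              # local temporal patterns -> (104, 8)
pooled = hidden.mean(axis=0)                    # global average pooling over time
w_out = rng.standard_normal((8, 5)) * 0.1       # dense layer to 5 emotion classes
probs = softmax(pooled @ w_out)
print(probs.round(3))  # five probabilities summing to 1
```

A production model would stack several such convolution blocks (with learned weights, batch normalization, and dropout) in a framework like TensorFlow rather than raw NumPy.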

Performance Metrics: Validated Accuracy and Robustness

Feature Extraction: Capturing Acoustic Nuances

To effectively capture the subtle and overt emotional characteristics embedded within speech, the model leverages a set of carefully selected acoustic features:

These extracted features undergo standardization using a pre-fitted scaler to ensure a consistent input format, which is then fed into the deep learning model for emotion classification.

Audio Processing & Chunking: Handling Variable Inputs
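The 2.5-second chunking described earlier can be sketched as follows. Zero-padding the final partial chunk is an assumption; the system may instead discard short remainders.

```python
import numpy as np

def chunk_audio(signal: np.ndarray, sr: int, chunk_seconds: float = 2.5):
    """Split a mono signal into fixed-length chunks, zero-padding the last one."""
    size = int(sr * chunk_seconds)
    chunks = []
    for start in range(0, len(signal), size):
        chunk = signal[start:start + size]
        if len(chunk) < size:  # pad the trailing remainder (assumption)
            chunk = np.pad(chunk, (0, size - len(chunk)))
        chunks.append(chunk)
    return chunks

# A 7-second recording at 16 kHz yields three 2.5-second chunks (last one padded).
audio = np.zeros(7 * 16000)
parts = chunk_audio(audio, sr=16000)
print(len(parts), len(parts[0]))  # 3 40000
```

Each chunk is then fed through feature extraction and the model independently, so arbitrarily long complaints can be processed with a fixed-size network input.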

Evaluation

The accuracy plot demonstrates a clear upward trend for both training and validation datasets over 50 epochs. Initially, the model showed rapid improvement, reaching over 90% accuracy by epoch 15. From epochs 20 to 50, both curves stabilize above 95%, indicating consistent learning with no significant overfitting. By the final epoch, training accuracy approaches 0.99, and validation accuracy mirrors this trend closely, demonstrating excellent generalization capability.

5. Setup and Installation: Getting Started Locally

This section provides comprehensive instructions for setting up and running the Emotion-Aware AI Complaint System on your local development environment. Please follow these steps carefully to ensure a smooth installation process.

1. Clone the Repository

Begin by cloning the project repository from GitHub to your local machine using the following command:

git clone https://github.com/saky-semicolon/Emotion-Aware-AI-Support-System.git
cd Emotion-Aware-AI-Support-System

2. Install Required Python Dependencies

It is highly recommended to create a dedicated Python virtual environment to manage project dependencies. This prevents conflicts with other Python projects on your system. Execute the following commands:

python3 -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
pip install -r requirements.txt

This will install all necessary Python libraries listed in the requirements.txt file.

3. Install FFmpeg

FFmpeg is an essential tool for audio processing within the system. Install it globally on your system. For Debian/Ubuntu-based systems, use:

sudo apt update
sudo apt install ffmpeg

For other operating systems, please refer to the official FFmpeg documentation for installation instructions. Verify the installation by running:

ffmpeg -version

Ensure the command executes successfully and displays version information.

4. Configure PostgreSQL Database

The system utilizes PostgreSQL as its primary database. Ensure PostgreSQL is installed and running on your machine, create a database and user for the application, and then define the following environment variables (for example, in a .env file in the project root):

DB_NAME=fyp
DB_USER=flask_user
DB_PASSWORD=your_password
DB_HOST=localhost
DB_PORT=5432

Replace your_password with the actual password for your PostgreSQL user.

5. Apply Database Migrations

After configuring the database, apply the latest database schema changes using Flask-Migrate (or similar tool, if configured differently in the project):

flask db upgrade

This command will synchronize your database schema with the application's models.

6. Launch the Application

With all dependencies installed and the database configured, you can now launch the Flask development server:

python run.py

The application will typically become accessible at http://127.0.0.1:5000 in your web browser.

6. API Reference: System Endpoints

The Emotion-Aware AI Complaint Management System exposes a set of RESTful API endpoints to facilitate interaction with its core functionalities. Below is a detailed reference for each primary endpoint, including its purpose, HTTP method, and expected behavior.

• /api/predict (POST): Upload an audio file for emotion prediction. Request: multipart/form-data with an audio file (e.g., audio.wav). Response: JSON object containing the predicted emotion, confidence scores, and transcription.
• /api/respond (POST): Submit feedback or responses related to a complaint. Request: application/json with fields such as complaint_id, response_text, and user_id. Response: JSON object indicating success or failure of the submission.
• /download_report/<id> (GET): Download the comprehensive emotion analysis report for a specific complaint. Path parameter: id (the unique identifier of the complaint). Response: PDF file of the report.
• /admin/* (GET): Access administrative dashboards and manage complaints. Requires authentication and an appropriate admin role. Response: HTML content of the admin dashboard or the relevant management interface.

All API endpoints are designed to be RESTful and primarily accept standard application/json or multipart/form-data formats where applicable, ensuring broad compatibility with client applications.
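For illustration, a client request to /api/predict can be assembled with the Python standard library alone. The form field name "audio" and the response handling are assumptions, since the document does not specify the exact request schema.

```python
import io
import urllib.request
import uuid

def build_predict_request(audio_bytes: bytes, filename: str = "complaint.wav",
                          url: str = "http://127.0.0.1:5000/api/predict"):
    """Build a multipart/form-data POST for the /api/predict endpoint."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    # Field name "audio" is an assumption; check the server-side handler.
    body.write(f'Content-Disposition: form-data; name="audio"; '
               f'filename="{filename}"\r\n'.encode())
    body.write(b"Content-Type: audio/wav\r\n\r\n")
    body.write(audio_bytes)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return urllib.request.Request(
        url,
        data=body.getvalue(),
        method="POST",
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

req = build_predict_request(b"...wav bytes...")
# urllib.request.urlopen(req) would submit it and return the JSON response.
print(req.get_method(), req.full_url)
```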

7. Student-Side Features: Empowering Student Voice and Well-being

The interface of the Emotion-Aware AI Complaint Management System has been thoughtfully designed to provide an accessible, intuitive, and emotionally supportive experience. Each component is crafted with the understanding that students may be in vulnerable emotional states when accessing the system, requiring interfaces that promote comfort, trust, and ease of use.

1. Homepage: Creating a Welcoming Digital Environment

The Homepage serves as the initial point of interaction, offering a welcoming and emotionally sensitive interface tailored for students. It emphasizes clarity, accessibility, and visual calmness to ensure users—especially those in distress—can navigate comfortably.

Key Highlights:

2. Registration Page: Secure and User-Friendly Onboarding

The Registration Page provides a secure and structured process for new users to create an account. It simplifies onboarding through a well-designed form layout and real-time user feedback, ensuring that the initial interaction with the system is positive and confidence-building.

Core Features:

This intuitive and visually cohesive interface ensures that the registration process is seamless, efficient, and trustworthy, especially for first-time users navigating an emotionally sensitive system. The design prioritizes user comfort and confidence-building from the very first interaction.

3. Login Page and Student Dashboard: Secure Access and Personalized Experience

The Login Page and Student Dashboard together form the core of the user's personalized experience within the Emotion-Aware AI Support System. These modules ensure secure access, emotional transparency, and intuitive navigation across all system features.

Login Page

The Login Page provides a clean, secure, and efficient authentication mechanism for both students and administrators. Built with simplicity and responsiveness in mind, it ensures smooth access while maintaining a professional aesthetic.

Key Features:

Student Dashboard

Once logged in, users are greeted with a personalized dashboard that acts as a central hub for emotional insights and system interaction. It provides a real-time snapshot of the student's emotional analysis, complaint status, and access to additional services.

Key Features:

Together, the Login Page and Student Dashboard ensure that every student not only enters the platform securely but is also empowered with actionable emotional insights and intuitive system interaction. The dashboard serves as a comprehensive command center where students can monitor their emotional journey, access support resources, and engage with the system's AI-powered features in a meaningful way.

4. User Profile Management: Personalized and Secure Information Control

The User Profile Management feature allows students to maintain and update their personal and academic information securely. It ensures that the platform stays aligned with the user's current data while providing a personalized experience throughout their interaction with the system.

Key Features:

5. Emotion-Aware Complaint Interface: AI-Powered Empathy

This is the core innovation of the system — an intelligent interface that interprets the emotional state of a student through their voice input. By leveraging a CNN-based speech emotion recognition model, the system classifies and prioritizes complaints with empathy and precision, ensuring that student distress is properly identified and addressed.

Key Features:

6. Complaint Status and Withdraw Complaints

The Complaint Submission & Status module empowers students to file grievances in their own voice and keep track of their resolution in a transparent and structured format. It ensures that users are always informed about the progress of their submissions while maintaining complete visibility into the complaint lifecycle.

Key Features:

8. Administrator-Side Features: Crisis Management and Oversight

The administrative interface provides comprehensive oversight capabilities, enabling university staff to monitor emotional trends, manage complaints efficiently, and respond to crisis situations with appropriate urgency and care. The admin interface is designed for rapid decision-making and effective resource allocation.

1. Admin Login and Dashboard: Command Center for Crisis Management

The Admin Login and Dashboard module serves as the command center for system administrators, offering both secure access and comprehensive monitoring tools. It ensures only authorized personnel can view and manage sensitive complaint and emotional data.

Admin Login

The administrative login interface prioritizes security while maintaining simplicity for efficient access during critical situations.

Key Features:

Admin Dashboard

Once logged in, administrators gain access to a comprehensive dashboard that consolidates key metrics and emotional insights across the platform, enabling data-driven decision making for student support services.

Key Features:

This module allows university administrators and counselors to identify emotional trends, allocate counseling resources effectively, and intervene where needed — all through a centralized, data-rich interface that supports evidence-based student support strategies.

2. Complaints Management: Operational Backbone for Student Support

The Complaints Management page serves as the operational backbone for admins to oversee student submissions and maintain an up-to-date resolution workflow. This comprehensive interface ensures systematic handling of all student concerns.

Key Features:

The module ensures that no emotional cry for help goes unnoticed, and supports efficient resolution tracking and escalation protocols as needed for comprehensive student support.

3. Sending Replies: Empathetic Administrative Communication

The "Sending Replies" feature empowers administrative users and support staff to respond directly to student complaints through the platform in a structured, efficient, and empathetic manner. It ensures that emotional concerns raised via the voice-based system are met with timely human feedback and personalized support.

Key Functionalities:

This feature transforms the traditional complaint resolution process into a dynamic, empathetic communication system that bridges the gap between automated emotion detection and human compassion, ensuring that no student concern goes unaddressed or unacknowledged.

4. Alerts & Notifications: Real-Time Crisis Response System

This section serves as the real-time alert center of the system, surfacing emotionally critical incidents and complaint escalations for quick administrative response. The notification system acts as an early warning system for potential student crises.

Key Features:

This intelligent notification system helps administrators stay ahead of emotional crises, ensuring students in distress are responded to with appropriate empathy and urgency. The system supports proactive intervention strategies and helps prevent escalation of student mental health situations.

9. Demonstration: Visualizing the System in Action

To provide a comprehensive understanding of the Emotion-Aware AI Complaint Management System's capabilities and user experience, a dedicated demonstration video has been prepared. This video visually walks through the system's functionalities from end-to-end.

Video Title:

Emotion-Aware AI Support System — Full Walkthrough

Description:

This demonstration video offers an in-depth showcase of the full functionality of our intelligent voice-based complaint handling system, specifically developed for higher education institutions. The platform is designed to capture student voice inputs, perform real-time analysis of emotional content using a sophisticated CNN-based speech emotion recognition model, and automatically prioritize complaints based on detected emotional distress levels. Furthermore, the system supports automated PDF report generation, real-time administrative alerts, and comprehensive complaint tracking capabilities. The entire system is robustly built using Python, Flask, TensorFlow, and PostgreSQL, seamlessly integrated with modern web technologies to deliver a responsive and intuitive user experience.

10. Versions

To ensure clarity, consistency, and transparency in the evolution of the Emotion-Aware AI Complaint Management System, we follow a structured versioning approach. Each version milestone represents a meaningful progression in system capability, architecture, and intelligence.

Version Milestones:

Initial Development (v1.0.0)

Summary: Marks the establishment of the project’s foundational structure, interfaces, and functionality.

Full Feature Release (v2.0.0)

Summary: This release includes the well-updated and nearly complete version of the project with all planned features fully implemented.

ML Model Integration (v2.1.0)

Summary: This release includes the integration of the Machine Learning model into the system, marking a major enhancement in functionality.

Intelligence Enhancement: This update makes the platform more responsive to student emotional needs by applying the trained model for emotion detection and complaint prioritization.

Model Repository: The trained model is also available on Hugging Face: Emotion Detection from Voice Model

11. Contribution Guidelines: Extending the System

We warmly welcome and encourage contributions from the community that align with the overarching mission and technical vision of this project. Your contributions can help improve the system, add new features, or enhance existing functionalities. Please adhere to the following guidelines to ensure a smooth and effective contribution process:

Contribution Workflow:

  1. Fork and Clone:

    Begin by forking the main repository on GitHub. After forking, clone your personal fork to your local development machine (replace <your-username> with your GitHub username):

    git clone https://github.com/<your-username>/Emotion-Aware-AI-Support-System.git
    cd Emotion-Aware-AI-Support-System
  2. Create a Feature Branch:

    Before making any changes, create a new Git branch for your feature or bug fix. This keeps your changes isolated and makes the pull request process cleaner:

    git checkout -b feature/your-feature-name

    Choose a descriptive name for your branch (e.g., feature/add-email-notifications, bugfix/login-issue).

  3. Code Standards:

    Write clean, modular, and well-documented code. Please adhere to Python's PEP 8 style guide for consistency. Ensure your code is readable and follows established patterns within the project.

  4. Commit Best Practices:

    Make atomic commits with clear and concise commit messages. Your commit message should summarize the changes made. Use imperative mood (e.g., "Add: new feature" instead of "Added new feature").

    git add .
    git commit -m "Add: concise summary of your changes"
    git push origin feature/your-feature-name
  5. Submit a Pull Request (PR):

    Once your changes are complete and thoroughly tested, push your feature branch to your GitHub fork and then open a Pull Request against the main branch of the original repository. Provide a descriptive summary of your changes in the PR description and tag relevant reviewers if necessary.

Important: Please ensure that all existing automated tests pass, and if you've added new features, include corresponding unit or integration tests. This helps maintain the stability and reliability of the system.

12. Licensing

This project is released under the MIT License, which allows for wide usage, modification, and distribution of the code. Below are the key points regarding the licensing and intellectual property rights:

© 2025 S M Asiful Islam Saky, Rania Kherba, Ahmed Dugje Kadiri

Developed by: S M Asiful Islam Saky, Rania Kherba, Ahmed Dugje Kadiri

This project is licensed under the MIT License. You are free to use, modify, and distribute this code under the terms of the license. However, please ensure that you retain the original copyright notice and include this license in any copies or substantial portions of the software.

For any inquiries regarding the use of this project, please contact the authors via the project's GitHub repository or through the provided Contact Information. We appreciate your interest and contributions to the Emotion-Aware AI Complaint Management System.

MIT License

Copyright (c) 2025 S M Asiful Islam Saky, Rania Kherba, Ahmed Dugje Kadiri

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Thank you for your interest in this work!