Academic Thesis & Projects

CSE 400 Thesis — Anomaly Detection
CSE 400 — Final Year Thesis

Context-Aware Zero-Shot Anomaly Detection in Surveillance Using Contrastive and Predictive Spatiotemporal Modeling

Supervisors: Prof. Dr. Md. Ashraful Alam  ·  Md Tanzim Reza

Zero-Shot Learning · Video Anomaly Detection · Computer Vision · Contrastive Learning · Transformer Architecture

Developed a novel zero-shot anomaly detection framework that identifies abnormal events in surveillance footage without requiring any anomaly examples during training. The system combines spatiotemporal transformers with vision-language understanding to detect previously unseen threats in real time.

Key Innovation: Introduced a dual-stream architecture integrating TimeSformer for spatiotemporal feature extraction, DPC-RNN for predictive temporal modeling, and CLIP for semantic context alignment — enabling true zero-shot detection with context awareness.

Technical Approach: Jointly trained using InfoNCE and Contrastive Predictive Coding (CPC) losses. A context-gating mechanism modulates predictions based on scene-specific cues, reducing false alarms by adapting to different surveillance environments.
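The InfoNCE objective used in the joint training can be sketched as follows; this is a minimal illustrative version, where the batch pairing, embedding dimension, and temperature value are assumptions, not the thesis's exact configuration:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, keys, temperature=0.07):
    """InfoNCE: the positive for each query is the same-index key;
    all other keys in the batch serve as negatives."""
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature           # (B, B) cosine-similarity matrix
    targets = torch.arange(q.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# illustrative: 8 clip embeddings paired with slightly perturbed counterparts
q = torch.randn(8, 512)
loss = info_nce_loss(q, q + 0.01 * torch.randn(8, 512))
```

In the CPC setting, `queries` would be the predicted future representations and `keys` the encoder's actual future representations, so minimizing this loss pushes predictions toward the true continuation of each clip.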

Results: Achieved 84.5% ROC-AUC and 72.3% PR-AUC on the UCF-Crime dataset, outperforming AnomalyCLIP (82.4%) with a detection latency of 0.45 s.

Tech Stack: Python · PyTorch · Hugging Face Transformers · TimeSformer · CLIP · DPC-RNN · OpenCV · UCF-Crime

CSE 424 — Blue-Light Glasses ML
CSE 424 — Pattern Recognition

Blue-Light Blocking Glasses Using Machine Learning

Instructors: Annajiat Alim Rasel  ·  Sadiul Arefin Rafi

Machine Learning · Regression Models · Spectrophotometry · Health Informatics · Data Analysis

Developed ML models to predict and analyze the spectrophotometric properties of commercial blue-light blocking lenses, evaluating their effectiveness in filtering circadian-proficient wavelengths (455–560 nm) across diverse lighting conditions.

Research Methodology: Tested 50 commercial lenses under 5 distinct light sources (sunlight, fluorescent, incandescent, LED, tablet displays), measuring absolute irradiance across 380–780 nm to calculate transmission and specificity metrics.

Model Performance: KNN achieved the best performance (91.4% R², 5.921 RMSE, 3.844 MAE), significantly outperforming SVM (48.3% R²) and Linear Regression (79.7% R²).
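The evaluation pipeline can be sketched in Scikit-learn as below; the synthetic data and feature layout are purely illustrative stand-ins for the measured lens spectra, not the study's actual dataset:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# synthetic stand-in: per-lens spectral summaries -> transmission target
rng = np.random.default_rng(0)
X = rng.uniform(380, 780, size=(200, 4))   # e.g. band-wise irradiance summaries
y = X @ np.array([0.02, -0.01, 0.005, 0.03]) + rng.normal(0, 1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
pred = knn.predict(X_te)

r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
```

The same loop swaps in `SVR` or `LinearRegression` for the model comparison, keeping the metrics identical across candidates.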

Tech Stack: Python · Scikit-learn · Pandas · NumPy · Matplotlib · OpenRefine

CSE 425 — Clustering Analysis
CSE 425 — Neural Networks

Comparative Evaluation of Clustering Algorithms on the Wine Dataset with Stability Analysis

Instructor: Moin Mostakim

Unsupervised Learning · Variational Autoencoders · Clustering Analysis · Uncertainty Quantification · Neural Networks

Developed a novel Stochastic Clustering Neural Network (SCNN) inspired by Variational Autoencoders to perform uncertainty-aware clustering on the Wine dataset, incorporating probabilistic latent representations to quantify cluster assignment confidence.

Novel Architecture: VAE-inspired encoder-decoder with reparameterization trick, enabling gradient-based optimization through stochastic sampling while learning meaningful latent representations.
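The reparameterization trick at the heart of this architecture can be sketched as follows; the tensor shapes are illustrative, not the SCNN's actual dimensions:

```python
import torch

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) while keeping the graph differentiable:
    z = mu + sigma * eps, with eps ~ N(0, I) drawn outside the gradient path."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps

# illustrative: batch of 4 points mapped to a 2-D latent space
mu = torch.zeros(4, 2, requires_grad=True)
logvar = torch.zeros(4, 2, requires_grad=True)
z = reparameterize(mu, logvar)
z.sum().backward()   # gradients flow through mu and logvar despite the sampling
```

Because the randomness is isolated in `eps`, backpropagation can optimize the encoder's `mu` and `logvar` outputs directly, which is what makes stochastic cluster assignments trainable by gradient descent.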

Key Findings: GMM achieved the highest agreement with ground-truth labels (78.44% ARI), but SCNN provided unique uncertainty quantification with 0.4885 stability variance, exposing cluster ambiguities that deterministic methods cannot reveal.

Tech Stack: Python · PyTorch · Scikit-learn · NumPy · Pandas · Matplotlib · Seaborn

CSE 427 — Loan Eligibility Prediction
CSE 427 — Machine Learning

Loan Eligibility Prediction Using Machine Learning Models with SMOTE for Class Imbalance

Instructor: Prof. Dr. Chowdhury Mofizur Rahman

Machine Learning · Ensemble Methods · Class Imbalance · SMOTE · Financial ML

Built an automated loan eligibility prediction system using ML classifiers to streamline the approval process for Dream Housing Finance Company, based on applicant demographics, financial data, and credit history.

Data Preprocessing: Implemented median/mode imputation, one-hot encoding, feature scaling, and SMOTE (Synthetic Minority Over-sampling Technique) to address severe class imbalance.

Model Comparison: Evaluated five algorithms: Random Forest, Logistic Regression, AdaBoost, KNN, and MLP. Random Forest and MLP both achieved 89.29% accuracy, with MLP showing exceptional recall (96%) for eligible applicants.

Tech Stack: Python · Scikit-learn · Pandas · NumPy · Matplotlib · Imbalanced-learn (SMOTE)