About VibeCheck
An AI-powered tool for detecting cyberbullying and toxic content in YouTube comments.
The Problem
YouTube comment sections are fertile ground for harassment, hate speech, and coordinated abuse. Manual moderation is slow, expensive, and inconsistent. VibeCheck automates toxicity detection at scale — surfacing the worst comments instantly so creators and researchers can act fast.
The Model — ToxicBERT
VibeCheck uses a BERT (bert-base-uncased) model fine-tuned on the Jigsaw Toxic Comment Classification dataset. The model scores each comment across six toxicity categories:
- toxic
- severe_toxic
- obscene
- threat
- insult
- identity_hate
Each score is an independent sigmoid probability between 0 and 1, so a single comment can be flagged for several categories at once. A score above 0.5 means the label is predicted positive.
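The per-label scoring above can be sketched with plain Python. This is a minimal illustration, not the production inference path: the label order is assumed to match the Jigsaw convention (the real ordering comes from the fine-tuned checkpoint's config), and the logits here are made-up example values.

```python
import math

# Assumed label order; the actual head ordering depends on the
# fine-tuned checkpoint's configuration.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(x: float) -> float:
    """Squash a raw logit into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def classify(logits: list[float]) -> dict[str, float]:
    """Map one raw logit per label to an independent sigmoid probability."""
    return {label: sigmoid(z) for label, z in zip(LABELS, logits)}

def positive_labels(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return every label whose probability exceeds the threshold."""
    return [label for label, p in scores.items() if p > threshold]

# Example logits favouring "toxic" and "insult":
scores = classify([2.1, -3.0, -1.2, -4.0, 1.5, -2.8])
print(positive_labels(scores))  # ['toxic', 'insult']
```

Because each label gets its own sigmoid (rather than a softmax across labels), the categories are not mutually exclusive, which is why a comment can be both "toxic" and "insult" at once.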
Tech Stack
- 🐍 Python / Flask — web framework
- 🤗 HuggingFace Transformers — model serving
- 🔥 PyTorch — inference engine
- 📺 YouTube Data API v3 — comment fetching
- 📊 Chart.js — interactive visualizations
- 🗄️ SQLite — analysis history storage
- 🚀 Gunicorn — production WSGI server
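To show how the YouTube Data API piece of the stack fits in, here is a small sketch of parsing a `commentThreads.list` response into plain comment strings. The helper and sample data are illustrative; the actual fetch (shown only as a comment, since it needs an API key) would go through `googleapiclient`.

```python
# The fetch itself would look roughly like:
#   youtube = googleapiclient.discovery.build("youtube", "v3", developerKey=KEY)
#   response = youtube.commentThreads().list(
#       part="snippet", videoId=video_id, maxResults=100).execute()

def extract_top_level_comments(response: dict) -> list[str]:
    """Return the display text of each top-level comment in the response."""
    texts = []
    for item in response.get("items", []):
        top = item["snippet"]["topLevelComment"]["snippet"]
        texts.append(top["textDisplay"])
    return texts

# Sample response trimmed to the fields the helper uses.
sample = {
    "items": [
        {"snippet": {"topLevelComment": {"snippet": {"textDisplay": "Great video!"}}}},
        {"snippet": {"topLevelComment": {"snippet": {"textDisplay": "This is awful"}}}},
    ]
}
print(extract_top_level_comments(sample))  # ['Great video!', 'This is awful']
```

Note that only top-level comments are read here, which matches the limitation below: reply threads live under a separate `comments.list` call and are not analysed.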
Limitations
- Analysis is limited to top-level comments (no reply threads)
- YouTube API quota limits may restrict large-scale use
- The model was trained on English text — performance may degrade on other languages
- AI scores are probabilistic guidance, not definitive verdicts
Meet the Team
The minds behind VibeCheck.
Niket Gupta
Model analysis & platform development
Designed and built the core VibeCheck architecture, and fine-tuned the model, to scale AI-driven comment moderation.