About VibeCheck

An AI-powered tool for detecting cyberbullying and toxic content in YouTube comments.


The Problem

YouTube comment sections are fertile ground for harassment, hate speech, and coordinated abuse. Manual moderation is slow, expensive, and inconsistent. VibeCheck automates toxicity detection at scale — surfacing the worst comments instantly so creators and researchers can act fast.


The Model — ToxicBERT

VibeCheck uses a BERT (bert-base-uncased) model fine-tuned on the Jigsaw Toxic Comment Classification dataset. The model scores each comment across 6 toxicity categories:

  • Toxic
  • Severe Toxic
  • Obscene
  • Threat
  • Insult
  • Identity Hate

Each score is an independent sigmoid probability (0–1). A score above 0.5 marks that label as positive, and because classification is multi-label, a single comment can trigger several labels at once.
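The thresholding step above can be sketched in plain Python. The label names match the Jigsaw dataset; the logits below are illustrative values, not real model output.

```python
from math import exp

# The six Jigsaw toxicity labels, in dataset order.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(x: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + exp(-x))

def predict_labels(logits: list) -> dict:
    """Return one sigmoid probability per label for a single comment."""
    return {label: sigmoid(z) for label, z in zip(LABELS, logits)}

def positive_labels(scores: dict, threshold: float = 0.5) -> list:
    """Labels whose probability clears the decision threshold."""
    return [label for label, p in scores.items() if p > threshold]

# Hypothetical logits for one comment:
scores = predict_labels([2.1, -3.0, 0.4, -4.2, 1.3, -2.8])
print(positive_labels(scores))  # → ['toxic', 'obscene', 'insult']
```

Note that each label is thresholded independently, which is why a comment can be both "toxic" and "insult" without being "severe_toxic".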


Tech Stack

  • 🐍 Python / Flask — web framework
  • 🤗 HuggingFace Transformers — model serving
  • 🔥 PyTorch — inference engine
  • 📺 YouTube Data API v3 — comment fetching
  • 📊 Chart.js — interactive visualizations
  • 🗄️ SQLite — analysis history storage
  • 🚀 Gunicorn — production WSGI server
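Comment fetching goes through the YouTube Data API v3 `commentThreads` endpoint. A minimal stdlib sketch of how such a request URL can be built (the video ID and API key are placeholders):

```python
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3/commentThreads"

def comment_threads_url(video_id: str, api_key: str,
                        max_results: int = 100, page_token: str = "") -> str:
    """Build a YouTube Data API v3 URL for a video's top-level comments."""
    params = {
        "part": "snippet",          # snippet carries the top-level comment text
        "videoId": video_id,
        "maxResults": max_results,  # the API caps this at 100 per page
        "key": api_key,
    }
    if page_token:
        params["pageToken"] = page_token  # continue from a previous page
    return f"{API_BASE}?{urlencode(params)}"

url = comment_threads_url("dQw4w9WgXcQ", "YOUR_API_KEY")
```

Each request costs API quota, so large videos are paged through with `pageToken` rather than fetched in one call.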

Limitations

  • Analysis is limited to top-level comments (no reply threads)
  • YouTube API quota limits may restrict large-scale use
  • The model was trained on English text — performance may degrade on other languages
  • AI scores are probabilistic guidance, not definitive verdicts

Meet the Team

The minds behind VibeCheck.

Niket Gupta

Analysed models & built the platform

Designed, fine-tuned, and built the core VibeCheck architecture to scale AI comment moderation.

Sonakshi Panda

Analysed models & research

Conducted deep structural research on NLP model variants and co-authored the official documentation on toxic comment classification.