The Twitch Toxic Tracker is an innovative application that provides real-time monitoring and analysis of Twitch chat messages. It is designed to help streamers and moderators maintain a positive and welcoming environment for their viewers by detecting and classifying toxic messages.
The app connects to Twitch and reads the chat messages for a specified user. Utilizing the Cohere.ai API, it classifies each message as either toxic or benign based on its machine learning algorithms and training data. The results are displayed on a user-friendly website in a clear and concise manner.
Not only does the app show the toxicity scores for each message, it also displays the overall percentage of toxic messages in the chat. This provides valuable insights into the health of the chat and allows for proactive measures to be taken to address any toxic behavior.
Additionally, the Twitch Toxic Tracker identifies the users who are sending toxic messages. This feature empowers moderators to take appropriate action against toxic users and maintain a safe and enjoyable environment for everyone.
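The chat-health bookkeeping described above (overall toxicity percentage plus per-user counts) can be sketched as a small accumulator. This is an illustrative sketch, not the app's actual code; the class and method names are assumptions:

```javascript
// Tracks classified chat messages: overall toxicity percentage
// and which users are sending the toxic ones.
class ToxicityStats {
  constructor() {
    this.total = 0;
    this.toxic = 0;
    this.byUser = new Map(); // username -> toxic message count
  }

  // Record one classified message.
  record(username, isToxic) {
    this.total += 1;
    if (isToxic) {
      this.toxic += 1;
      this.byUser.set(username, (this.byUser.get(username) || 0) + 1);
    }
  }

  // Overall percentage of toxic messages in the chat.
  toxicPercentage() {
    return this.total === 0 ? 0 : (this.toxic / this.total) * 100;
  }

  // Users ranked by how many toxic messages they have sent.
  worstOffenders() {
    return [...this.byUser.entries()].sort((a, b) => b[1] - a[1]);
  }
}
```

Keeping the counts per user is what lets the UI surface repeat offenders to moderators rather than just a global score.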
Overall, the Twitch Toxic Tracker is an essential tool for streamers and moderators who want to ensure a positive and engaging experience for their viewers. With its real-time monitoring, user-friendly display, and robust features, it is the perfect solution for keeping Twitch chats healthy and enjoyable for all.
View project · Report Bug · Request Feature
The main idea is for the streamer to enter their username, resize the window, and place it next to OBS while they are streaming in order to visually detect toxic individuals in their chat in a much more efficient manner.
- Reads Twitch chat messages for a specified user
- Uses the Twitch API to show if the user is online or not
- Uses the Cohere.ai API to classify each message as toxic or not toxic
- Displays the chat messages and toxicity scores on a user-friendly website
- Shows the percentage of toxic messages in the overall chat
- Identifies the users who are sending toxic messages
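The read-and-classify step could look roughly like the sketch below. The endpoint URL and payload shape follow Cohere's hosted classify API; the function names are illustrative and not the repo's actual code:

```javascript
// Build the HTTP request for Cohere's classify endpoint
// (assumed request shape; the fine-tuned model is selected by its ID).
function buildClassifyRequest(apiKey, modelId, text) {
  return {
    url: "https://api.cohere.ai/v1/classify",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model: modelId, inputs: [text] }),
    },
  };
}

// Pull the predicted label out of a classify response (assumed shape).
function extractLabel(response) {
  return response.classifications[0].prediction; // "toxic" or "benign"
}

// Wiring it to chat could use a library such as tmi.js:
//
//   const client = new tmi.Client({ channels: ["somechannel"] });
//   client.on("message", async (_chan, tags, message) => {
//     const { url, options } = buildClassifyRequest(key, model, message);
//     const label = extractLabel(await (await fetch(url, options)).json());
//     render(tags["display-name"], message, label);
//   });
```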
Fine-tuned model ID: `e4a74d77-11f0-42ef-af96-2781f879bc1d-ft`
- The application uses Cohere.ai and its classification algorithm to classify messages as toxic or benign.
- To achieve accurate results from the algorithm, a dataset from a Kaggle Challenge has been used. The dataset contains 159,572 messages, each classified as toxic or benign.
Kaggle train dataset · Cleaned dataset (prepared for Co:here.ai)
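Preparing that dataset for a binary classifier means collapsing the Kaggle challenge's six label columns into a single toxic/benign label per message. A possible sketch of that transformation (the column names are the ones the Kaggle challenge ships with; the exact cleaning applied in the repo's prepared dataset may differ):

```javascript
// Label columns from the Kaggle Toxic Comment Classification Challenge.
const LABEL_COLUMNS = [
  "toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate",
];

// Collapse one dataset row into a { text, label } training example.
function toExample(row) {
  // A message is toxic if any of the six label columns is set.
  const isToxic = LABEL_COLUMNS.some((col) => Number(row[col]) === 1);
  return {
    text: row.comment_text.replace(/\s+/g, " ").trim(), // basic whitespace cleanup
    label: isToxic ? "toxic" : "benign",
  };
}
```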
- Go to the website https://twitch-chat-toxicity.vercel.app/ and enter the Twitch username for the chat you want to analyze
- The website will display the chat messages and toxicity scores in real-time
- The custom model has been trained only on English messages, so it's best to test the app with English-language Twitch channels (it produces many false positives in other languages).
- Please note that this demo runs on the Cohere.ai Free Tier, which only allows 100 messages per minute to be sent. Streamers with many viewers and a busy chat can exceed this limit; after one minute of being blocked, the application continues to function correctly.
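One way to stay under that 100-messages-per-minute cap is a sliding-window limiter in front of the classify calls. This is a sketch, not the repo's actual throttling code; the clock is injectable so the limit can be simulated without waiting:

```javascript
// Allows at most maxCalls within any windowMs-long sliding window.
// Returns a tryAcquire() function: true = proceed, false = wait/skip.
function makeLimiter(maxCalls = 100, windowMs = 60000, now = Date.now) {
  const timestamps = [];
  return function tryAcquire() {
    const t = now();
    // Drop timestamps that have left the window.
    while (timestamps.length && t - timestamps[0] >= windowMs) {
      timestamps.shift();
    }
    if (timestamps.length >= maxCalls) return false; // caller should queue or skip
    timestamps.push(t);
    return true;
  };
}
```

Messages rejected by `tryAcquire()` can simply be queued; once the oldest timestamps age out of the window, classification resumes, matching the "works again after one minute" behavior noted above.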
- Clone the repository:
git clone https://github.com/ericrisco/twitch-chat-toxicity
- Install the dependencies:
npm install
- Rename the `.env.example` file to `.env` and add your own Cohere.ai API Key
- Add your own Twitch Client ID and Twitch Client Secret
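The resulting `.env` might look like this (variable names are assumptions; check `.env.example` in the repo for the real ones):

```
COHERE_API_KEY=your-cohere-api-key
TWITCH_CLIENT_ID=your-twitch-client-id
TWITCH_CLIENT_SECRET=your-twitch-client-secret
```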
- Start the development server:
npm run dev
- In future versions of this app, it will no longer be possible to enter the username of the person whose chat you want to read. Instead, you will have to log in with your Twitch account in order to view your own chat. This will allow for better control of app usage.
- With the previous point implemented, it will also be possible to integrate more features from the Twitch API, such as the ability to ban directly from the web app interface.
- With the two previous points implemented, a system will be set up where streamers themselves can flag false positives and false negatives. In this way, the AI will be able to train incrementally with the help of its users.