Idea Title: 360.ai Idea Description: Using communications metadata and content data to improve inclusion in the workplace, reduce bias, and facilitate synergy, collaboration, and teamwork. Contributors: Miranda, Fouzia, Yee, Angèle, Musa, Jenny, Girish (facilitator) Point of contact: angele.barbedette@gmail.com, fouziaahmad.me@gmail.com, miranda.mcclellan14@gmail.com, musabubakar3@gmail.com, … Read More
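A minimal sketch of the kind of metadata analysis this idea suggests: computing each participant's share of communication in a channel or meeting and flagging people who may be crowded out. The message schema, threshold, and function names are illustrative assumptions, not part of the 360.ai proposal.

```python
# Sketch only: assumes a hypothetical list of message records with
# "sender" and "word_count" fields; thresholds are placeholders.
from collections import Counter

def participation_share(messages):
    """Return each sender's share of total words in a channel or meeting."""
    words = Counter()
    for m in messages:
        words[m["sender"]] += m["word_count"]
    total = sum(words.values()) or 1
    return {sender: count / total for sender, count in words.items()}

def flag_underrepresented(shares, threshold=0.10):
    """Flag participants whose share falls below an (assumed) threshold."""
    return [person for person, share in shares.items() if share < threshold]

if __name__ == "__main__":
    demo = [
        {"sender": "A", "word_count": 900},
        {"sender": "B", "word_count": 80},
        {"sender": "C", "word_count": 20},
    ]
    shares = participation_share(demo)
    print(shares, flag_underrepresented(shares))
```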
Chippy
Idea Title: Chippy Idea Description: 41% of Americans have personally experienced some form of online harassment. Chippy is a prosocial AI-powered chatbot that identifies and prevents negative online experiences/behavior and empowers users through education and tools. Contributors: Avina Nunez, … Read More
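A minimal sketch of the intervention pattern Chippy describes: score a draft post before it is published and respond with education rather than silent blocking. The keyword heuristic is a placeholder for whatever model the team would actually use; nothing here is specified in the excerpt.

```python
# Sketch only: score_toxicity() is a toy stand-in for Chippy's real model.
def score_toxicity(text: str) -> float:
    """Placeholder heuristic: fraction of words found on a toy blocklist."""
    blocklist = {"idiot", "stupid", "hate"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in blocklist for w in words) / len(words)

def chippy_nudge(draft: str, threshold: float = 0.15) -> str:
    """Intervene before posting: educate instead of just blocking."""
    if score_toxicity(draft) >= threshold:
        return ("This draft may come across as harassing. "
                "Here are resources on rephrasing and on reporting abuse.")
    return "Looks good."

print(chippy_nudge("you are an idiot"))
```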
AI Gold Standards
Idea Title: AI Gold Standards Idea Description: An objective and transparent methodology to evaluate an AI system to eliminate biases, increase accuracy, and enhance contextualization in policy design and enforcement by creating an internationally recognized Gold Standard. Contributors: Brendon Schwarz, … Read More
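One concrete check a gold-standard evaluation could include is per-group accuracy and the gap between the best- and worst-served groups. The record schema and metric choice below are assumptions for illustration; the excerpt does not specify the methodology.

```python
# Sketch only: records use a hypothetical schema with
# 'group', 'label', and 'prediction' fields.
from collections import defaultdict

def per_group_accuracy(records):
    """Accuracy computed separately for each demographic or content group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["label"] == r["prediction"])
    return {g: hits[g] / totals[g] for g in totals}

def disparity(acc_by_group):
    """Gap between the best- and worst-served groups; 0 is ideal."""
    values = list(acc_by_group.values())
    return max(values) - min(values)

data = [
    {"group": "a", "label": 1, "prediction": 1},
    {"group": "a", "label": 0, "prediction": 0},
    {"group": "b", "label": 1, "prediction": 0},
    {"group": "b", "label": 0, "prediction": 0},
]
acc = per_group_accuracy(data)
print(acc, disparity(acc))
```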
AlgoSpeakeasy
Idea Title: AlgoSpeakeasy: Open-Source Moderation Evasion Detection Across Platforms Idea Description: Train an AI model using diverse multilingual takedown datasets to identify and flag instances of moderation evasion across platforms, enabling the labeling of posts and profiles that engage in … Read More
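A minimal sketch of the training step, assuming a tiny in-memory stand-in for the multilingual takedown datasets the idea describes. Character n-grams are one common way to catch respelling-style evasion ("algospeak") across languages; the model choice here is illustrative, not the team's.

```python
# Sketch only: toy data and labels stand in for real takedown datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["buy f0ll0wers here", "great photo!", "v1agra ch3ap", "see you tomorrow"]
labels = [1, 0, 1, 0]  # 1 = evasive/violating, 0 = benign (toy labels)

model = make_pipeline(
    # Character n-grams are robust to leetspeak and cross-lingual respellings.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)
print(model.predict(["ch3ap f0ll0wers"]))  # likely flagged as evasive
```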
WatchDog
Idea Title: WatchDog: Ensuring Online Safety for All Problem Statement: Trust & Safety challenges tend to be consistent across platforms, while abuse detection strategies are siloed. There are few opportunities for industry information sharing, and current solutions are reactive and … Read More
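One way industry information sharing is often done is by exchanging content fingerprints rather than raw content. The sketch below shows that pattern with a shared hash index; the exchange format and class names are assumptions, not part of the WatchDog proposal.

```python
# Sketch only: a toy shared index keyed by content hashes, so platforms can
# exchange abuse indicators without exchanging the underlying content.
import hashlib

def fingerprint(content: str) -> str:
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

class SharedAbuseIndex:
    def __init__(self):
        self._hashes = {}

    def report(self, content: str, platform: str, reason: str) -> None:
        """A platform contributes an indicator for content it has actioned."""
        self._hashes[fingerprint(content)] = {"platform": platform, "reason": reason}

    def lookup(self, content: str):
        """Another platform checks new content against shared indicators."""
        return self._hashes.get(fingerprint(content))

index = SharedAbuseIndex()
index.report("known scam message", platform="platform_a", reason="fraud")
print(index.lookup("known scam message"))  # hit reported by another platform
print(index.lookup("harmless post"))       # None
```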
One-Stop Policy Shop
Idea Title: One-Stop Policy Shop Idea Description: We are building an LLM trained on Trust & Safety policies for emerging tech/AI platforms, enabling new and emerging platforms to launch globally robust policies from the outset. Problem/Need: Available platform T&S policies … Read More
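A minimal sketch of the retrieval step such a tool might sit on top of: given a new platform's need, pull the most relevant existing policy snippets as drafting context for the model. The snippet corpus, tokenizer, and ranking are toy placeholders, not the team's design.

```python
# Sketch only: keyword-overlap retrieval over a toy corpus of policy snippets.
POLICY_SNIPPETS = [
    ("harassment", "Users may not target others with abusive or degrading content."),
    ("spam", "Bulk, repetitive, or deceptive content is prohibited."),
    ("minors", "Content sexualizing or endangering minors is banned and reported."),
]

def tokens(s: str):
    """Crude tokenizer: lowercase, strip punctuation, drop very short words."""
    return {w.strip(".,").lower() for w in s.split() if len(w) > 3}

def retrieve(query: str, k: int = 2):
    """Rank snippets by keyword overlap with the query."""
    q = tokens(query)
    scored = sorted(
        ((len(q & tokens(topic + " " + text)), topic, text)
         for topic, text in POLICY_SNIPPETS),
        reverse=True,
    )
    return [(topic, text) for score, topic, text in scored[:k] if score]

print(retrieve("new platform needs a harassment and abuse policy"))
```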
TrustAlign
Idea Title: TrustAlign Idea Description: AI simplifies the user agreement process by comparing the user’s values with the platform’s policies and terms of agreement and highlighting differences in simple terms. If the user is about to violate platform policies, the AI informs the user … Read More
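A minimal sketch of the pairing step: match each user value statement to the most related policy clause so differences can be surfaced side by side. A real system would use semantic (LLM or embedding) comparison; the token-overlap measure and example statements here are stand-ins for illustration.

```python
# Sketch only: plain token overlap substitutes for semantic matching.
def overlap(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

USER_VALUES = [
    "my data should never be sold to advertisers",
    "I want strong protection from harassment",
]
POLICY_CLAUSES = [
    "We may share aggregated data with advertising partners.",
    "Harassment and targeted abuse are removed within 24 hours.",
]

# Show each value next to its closest clause so the user can spot mismatches.
for value in USER_VALUES:
    clause = max(POLICY_CLAUSES, key=lambda c: overlap(value, c))
    print(f"Value: {value}\n  Closest clause: {clause}\n")
```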
XCALibR
Idea Title: XCALibR (Xplatform Content Abuse Library Reference): Cross-Platform Online Safety Collaboration Software Idea Description: The database attempts to unify content policy and ethical standards across all platforms to find common ground. Based on that policy, ML models are … Read More
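One possible record schema for such a unified policy library: a shared taxonomy label and common definition, mapped to each platform's local rule. The field names below are assumptions for illustration only.

```python
# Sketch only: a hypothetical schema for cross-platform policy entries.
from dataclasses import dataclass, field

@dataclass
class PolicyEntry:
    category: str                 # shared taxonomy label, e.g. "hate_speech"
    definition: str               # common-ground definition across platforms
    platform_rules: dict = field(default_factory=dict)  # platform -> local rule

library = [
    PolicyEntry(
        category="hate_speech",
        definition="Content attacking people based on protected characteristics.",
        platform_rules={
            "platform_a": "Hateful conduct policy, section 2",
            "platform_b": "Community standards on hate speech",
        },
    )
]
# ML models would then train against the shared `category` labels.
print(library[0].category, list(library[0].platform_rules))
```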
SafetySherpa
Idea Title: SafetySherpa Idea Description: We are designing a T&S Educational Chatbot for social media platforms that will educate users about policies and abuse types when they appear to be interacting with suspected violating content. Contributors: Paola Maggiorotto, Zhamilya Bilyalova, Julia … Read More
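A minimal sketch of the education trigger: when a detector suspects a violation with enough confidence, the chatbot surfaces the relevant policy explainer. The detector, lesson text, and threshold are placeholders; the excerpt does not describe the implementation.

```python
# Sketch only: detect() is a toy stand-in for a real violation classifier.
LESSONS = {
    "harassment": "This post may violate the harassment policy: targeting a person with abuse.",
    "misinformation": "This post was flagged for misinformation; here is how fact-checks work.",
}

def detect(text: str):
    """Placeholder detector returning (label, confidence)."""
    if "idiot" in text.lower():
        return "harassment", 0.9
    return None, 0.0

def sherpa_respond(text: str, threshold: float = 0.8):
    """Educate the user only when a suspected violation is confident enough."""
    label, confidence = detect(text)
    if label and confidence >= threshold:
        return LESSONS[label]
    return None

print(sherpa_respond("you're an idiot"))
```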
UnSafeSets
Idea Title: UnSafeSets Idea Description: Real-world datasets for training models on harmful content are siloed and insufficient to support emerging platform needs. We propose UnSafeSets: generating synthetic datasets to train models on harmful, high-sensitivity, low-prevalence content. Contributors: … Read More
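A minimal sketch of one simple synthesis approach for rare classes: template-based generation of labeled examples, shown here with deliberately mild placeholder phrases. Real generation for high-sensitivity content would need stricter review and safeguards; templates, labels, and fillers below are assumptions for illustration.

```python
# Sketch only: toy templates generate labeled synthetic rows for rare classes.
import random

TEMPLATES = {
    "scam": ["Send {amount} to claim your prize", "Verify your account at {url} now"],
    "benign": ["Lunch at {time}?", "Thanks for the update on {topic}"],
}
FILLERS = {"amount": ["$50", "$500"], "url": ["example.test/login"],
           "time": ["noon", "1pm"], "topic": ["the report", "the launch"]}

def synthesize(label: str, n: int, seed: int = 0):
    """Generate n synthetic, labeled examples for the requested class."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        template = rng.choice(TEMPLATES[label])
        values = {k: rng.choice(v) for k, v in FILLERS.items()
                  if "{" + k + "}" in template}
        rows.append({"text": template.format(**values), "label": label})
    return rows

print(synthesize("scam", 3))
```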