User Tools

Salama Button

Idea Title: “Salama” Button Idea Description: Online abuse of children is increasing, yet few cases are reported. This is attributed to the complexity of existing reporting systems across platforms, which don’t accommodate the brain …

Chippy

Idea Title: Chippy Idea Description: 41% of Americans have personally experienced some form of online harassment. Chippy is a prosocial, AI-powered chatbot that identifies and prevents negative online experiences and behavior and empowers users through education and tools. Contributors: Avina Nunez, …
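A minimal sketch of the idea's core loop, assuming a chatbot that screens a message and replies with an educational nudge. The keyword heuristic stands in for the actual AI model, and all names below are illustrative, not Chippy's implementation.

```python
# Illustrative sketch only: a simple keyword heuristic stands in for Chippy's AI model.
from dataclasses import dataclass

HARASSMENT_TERMS = {"idiot", "loser", "shut up"}  # placeholder lexicon

@dataclass
class BotResponse:
    flagged: bool
    message: str

def review_message(text: str) -> BotResponse:
    """Flag likely harassment and reply with an educational nudge."""
    lowered = text.lower()
    if any(term in lowered for term in HARASSMENT_TERMS):
        return BotResponse(
            flagged=True,
            message=("This message may come across as harassment. "
                     "Consider rephrasing; here is a guide on respectful replies."),
        )
    return BotResponse(flagged=False, message="")

if __name__ == "__main__":
    print(review_message("You're such a loser"))
```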

One-Stop Policy Shop

Idea Title: One-Stop Policy Shop Idea Description: We are building an AI LLM on Trust & Safety policies for emerging tech/AI platforms. Enable new and emerging platforms to launch globally robust policies from the outset. Problem/Need: Available platform T&S policies … Read More
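One way such a tool might work is by retrieving relevant existing T&S policy language to seed a draft for a new platform. The sketch below is an assumption for illustration (the corpus, scoring, and function names are invented), not the actual One-Stop Policy Shop pipeline.

```python
# Hypothetical retrieval step: rank existing policy areas by naive keyword overlap.
POLICY_CORPUS = {
    "harassment": "Users may not target others with repeated unwanted contact.",
    "hate_speech": "Content attacking people based on protected attributes is removed.",
    "child_safety": "Content sexualizing minors is removed and reported.",
}

def retrieve_policy_snippets(query: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Return the policy areas whose text best overlaps the query terms."""
    q_terms = set(query.lower().split())
    scored = []
    for area, text in POLICY_CORPUS.items():
        overlap = len(q_terms & set(text.lower().split()))
        scored.append((overlap, area, text))
    scored.sort(reverse=True)
    return [(area, text) for _, area, text in scored[:top_k]]

if __name__ == "__main__":
    for area, text in retrieve_policy_snippets("how should we handle harassment of users"):
        print(area, "->", text)
```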

TrustAlign

Idea Title: TrustAlign Idea Description: AI simplifies the user agreement process by comparing the user’s values with the platform’s policies and terms of agreement and highlighting the differences in simple terms. If a user is about to violate platform policies, the AI informs the user …
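A rough sketch of the comparison step, assuming user values and policy positions can be expressed as matching keys; the value labels and policies below are made-up examples, not TrustAlign's real data model.

```python
# Hypothetical example data: a user's stated preferences vs. the platform's terms.
USER_VALUES = {
    "data_sharing_with_third_parties": False,
    "targeted_advertising": False,
    "public_profile_by_default": True,
}

PLATFORM_POLICY = {
    "data_sharing_with_third_parties": True,
    "targeted_advertising": True,
    "public_profile_by_default": True,
}

def highlight_differences(user: dict, policy: dict) -> list[str]:
    """Return plain-language notes where the policy conflicts with user values."""
    notes = []
    for key, preferred in user.items():
        actual = policy.get(key)
        if actual is not None and actual != preferred:
            notes.append(f"You prefer '{key}' = {preferred}, but the platform's terms allow {actual}.")
    return notes

if __name__ == "__main__":
    for note in highlight_differences(USER_VALUES, PLATFORM_POLICY):
        print(note)
```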

XCALibR

Idea Title: XCALibR (Xplatform Content Abuse Library Reference): Cross-Platform Online Safety Collaboration Software Idea Description: The database attempts to unify content policy and ethical standards across all platforms to find common ground. Based on that policy, ML models are …
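A minimal sketch of the kind of mapping a cross-platform policy library could maintain: normalizing each platform's abuse labels into one shared taxonomy before any models are trained on it. All label names here are invented for illustration.

```python
# Illustrative shared taxonomy and per-platform label mappings (assumed, not XCALibR's).
SHARED_TAXONOMY = {"harassment", "hate_speech", "csam", "spam"}

PLATFORM_LABEL_MAP = {
    "platform_a": {"bullying": "harassment", "hateful_conduct": "hate_speech"},
    "platform_b": {"abuse": "harassment", "unsolicited_promotion": "spam"},
}

def normalize_label(platform: str, label: str) -> str | None:
    """Map a platform-specific policy label to the shared taxonomy, if known."""
    mapped = PLATFORM_LABEL_MAP.get(platform, {}).get(label)
    return mapped if mapped in SHARED_TAXONOMY else None

if __name__ == "__main__":
    print(normalize_label("platform_a", "bullying"))       # harassment
    print(normalize_label("platform_b", "unknown_label"))  # None
```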

SafetySherpa

Idea Title: SafetySherpa Idea Description: We are designing a T&S educational chatbot for social media platforms that educates users on policies and abuses when they appear to be interacting with suspected violating content. Contributors: Paola Maggiorotto, Zhamilya Bilyalova, Julia …
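A small sketch of the trigger described above: when a user interacts with content suspected of violating a policy, surface a short explanation of that policy. The policy texts and function names are assumptions for illustration.

```python
# Hypothetical policy explainers keyed by the suspected violation type.
POLICY_EXPLAINERS = {
    "harassment": "Harassment policy: repeatedly targeting someone with unwanted "
                  "messages can lead to content removal or account penalties.",
    "misinformation": "Misinformation policy: posts making false health claims may "
                      "be labeled or have their reach reduced.",
}

def educational_prompt(suspected_violation: str) -> str:
    """Return a short educational message for a suspected policy violation."""
    return POLICY_EXPLAINERS.get(
        suspected_violation,
        "This content may violate platform rules. See the community guidelines.",
    )

if __name__ == "__main__":
    print(educational_prompt("harassment"))
```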

Trauma-informed Tools

Idea Title: Trauma-informed content navigation tools Idea Description: Our tool lets people intentionally navigate their online content across all platforms and prevents exposure to individual mental health triggers. AI filters content with context to enable safe browsing of online spaces without …
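A simplified sketch of trigger-aware filtering: each user declares topics they want screened, and incoming items are checked before display. The trigger list, thresholds, and matching logic are placeholder assumptions, not the tool's actual context-aware model.

```python
# Illustrative filter: warn on a single trigger mention, hide when several triggers appear.
from dataclasses import dataclass

@dataclass
class FilterDecision:
    action: str                      # "show", "warn", or "hide"
    matched_trigger: str | None = None

def filter_item(text: str, triggers: set[str], hide_threshold: int = 2) -> FilterDecision:
    """Check an item against a user's declared triggers before display."""
    lowered = text.lower()
    matches = [t for t in triggers if t in lowered]
    if not matches:
        return FilterDecision(action="show")
    if len(matches) >= hide_threshold:
        return FilterDecision(action="hide", matched_trigger=matches[0])
    return FilterDecision(action="warn", matched_trigger=matches[0])

if __name__ == "__main__":
    user_triggers = {"self-harm", "car accident"}
    print(filter_item("Graphic story about a car accident", user_triggers))
```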