Integrity Institute

Think Tanks

We are a community of integrity professionals protecting the social internet

About us

The Integrity Institute is a nonprofit organization run by a community of integrity professionals working towards a better social internet. The group has decades of combined experience in the integrity space across numerous platforms. We are here to explore, research, and teach how regulators and platforms can do better.

Website
http://integrityinstitute.org
Industry
Think Tanks
Company size
11-50 employees
Type
Nonprofit


Updates

  • Integrity Institute reposted this

    Sarah A.

    PM @ HumaneIntelligence | Trust & Safety Product Consultant

    Excited to share that I'll be co-facilitating a hands-on Red Teaming for AI Systems workshop at #Trustcon alongside Theodora Skeadas and Jen Weedon.

    📅 July 23, 2025 | 3:00-5:00 PM PT

    While signups are complete, Trustcon attendees can still join the waitlist! We're running an end-to-end mock AI red teaming exercise where participants will choose from two scenarios: a "Virtual Therapist" chatbot or "Ask the Historian" (a K-12 educational AI assistant). The goal is to bridge the gap between technical red teaming findings and organizational impact. We'll guide participants through understanding how different users engage with AI technologies, developing effective testing approaches, and evaluating results in ways that drive real change. Whether your organization has formal AI governance teams or you're starting from scratch, this workshop is designed to give you practical frameworks you can implement immediately.

    Grateful to the TSPA team for convening such a stellar group of professionals. Looking forward to this!

    #Trustcon #AIRedTeaming #TrustAndSafety #AIGovernance #ResponsibleAI

    • Promo for workshop at Trustcon, reading: "Join us in San Francisco for 'From Edge Cases to Safety Standards: AI Red Teaming in Practice' | Wednesday, July 23rd | 3:00-5:00 PM PST | Featuring facilitators Sarah Amos (Humane Intelligence), Theo Skeadas (Humane Intelligence), Jen Weedon (Columbia University)"
  • Day 1 of #TrustCon2025 is in the books — and what a start! Charlotte Willner opened the conference with a powerful keynote, highlighting that even in the face of external challenges, Trust & Safety work is essential — and so are the people behind it. She reminded us that, by working together, we have the power to make the internet safer, more inclusive, and more human. If you're at TrustCon, come say hi! Jeff Allen, Sofia Bonilla, and Spencer Gurley are here and would love to chat. Onto Day 2! ✨

  • Integrity Institute reposted this

    Theodora Skeadas

    Technology Policy and Responsible AI Strategic Advisor | Harvard, DoorDash, Humane Intelligence, Twitter, Booz Allen Hamilton, King's College London

    I'm thrilled to be speaking at this year's TrustCon! Alongside Jen Weedon and Sarah A., we'll be running a mock red teaming exercise from end to end: from threat modeling, to chatbot testing, to framing up recommendations. We've designed two scenarios for participants to choose from: a "Virtual Therapist" chatbot, or "Ask the Historian", an AI assistant geared towards K-12 educational contexts. We'll be testing for different types of bias, safety issues, and factuality problems, and helping participants think through how different types of users engage with these technologies, varied testing strategies to simulate real user behavior, and how to evaluate results. Join us if you are interested in learning how to red team AI models!

    Additionally, I'll be supporting Sujata Mukherjee and Rachel Fagen for a Language Equity Roundtable. The internet, despite its global reach, remains largely an English-centric space. This digital divide excludes billions of users, hindering their access to information and online communities. This roundtable will explore the critical issue of language equity in AI-powered content moderation systems. We will delve into the technical challenges of developing AI models that effectively and fairly address harmful content across diverse languages, including low-resource languages. The discussion will focus on practical outcomes such as:

    ▶️ Language Equity Metrics: Developing and discussing measurable criteria for assessing the fairness and inclusivity of AI systems across different languages, drawing inspiration from existing benchmarks for English language models.
    ▶️ Data Diversity: Strategies for building robust and representative datasets for multilingual AI models, mitigating biases, and addressing data scarcity.
    ▶️ Technical Solutions: Exploring techniques like de-biasing algorithms, transfer learning, and explainable AI to enhance the fairness and accuracy of multilingual content moderation.
    ▶️ Policy Implications: Examining the role of policy in promoting the development and deployment of equitable AI-powered content moderation systems, including data governance, algorithmic accountability, and transparency requirements.

    As a new Board member for the Integrity Institute, I am looking forward to a gathering of our members tomorrow! And, as a Strategic Advisor for All Tech Is Human, I'm really excited for a meet-up the following day with our members there. A huge thanks to the Trust & Safety Professional Association for their efforts in organizing this wonderful convening. Please let me know if you are in town - it would be great to meet you!

  • 🧠💬 New Resource: AI Chatbots & Youth Mental Health

    By David Jay, Eric Davis, Numa Dhamani, and Jen Weedon

    We are in the midst of an acute mental health crisis among teens. In 2021, several child health organizations, such as the American Academy of Pediatrics, declared a “National Emergency in Child and Adolescent Mental Health.” LLMs and the chatbots that they power have emerged as major players in this crisis, with 70% of US teens using ChatGPT in some fashion and 15% using it for some form of companionship. As integrity professionals working on platforms that develop or employ LLM-powered chatbots, it is crucial that we understand how these two trends intersect. To further that understanding, the Integrity Institute staff and membership have:

    🔹 Compiled a review and analysis of research specifically focused on the impact of LLM-powered chatbots on youth mental health
    🔹 Built upon research conducted with Integrity Institute members through the Generative Identity Initiative
    🔹 Produced recommendations for integrity teams working with these critical tools

    📖 Read the full piece here: https://lnkd.in/gs7EHcSD

    #AI #YouthMentalHealth #OnlineIntegrity #TrustAndSafety #ChatBots #ResponsibleTech

  • 📢 New blog post: Protecting Kids from Abuse

    We hear a lot about how things go wrong on the social internet, and not enough about the practices that make them better. In this case study, members of the Integrity Institute (Abhi Chaudhuri, Dominique Wimmer, Jenna Dietz, Matt Motyl, Ph.D., and David Jay) explore how Trust and Safety teams work to address this type of harmful behavior online. The case study covers:

    🔹 Why technical design choices matter
    🔹 The role of external organizations
    🔹 Policy development and tool implementation

    Online child abuse is a nuanced issue, and we’re proud to spotlight the people doing the hard, practical work to keep kids safe online.

    👉 Read the full post here: https://lnkd.in/eT8nFEZZ

    #IntegrityInstitute #OnlineSafety #TrustAndSafety #ChildSafety #TechResponsibility

  • Integrity Institute reposted this

    Olivia Conti

    Trust & Safety Advisor helping companies navigate risk & build safer online communities | Ex-Twitter, Ex-Twitch, wrong kind of doctor (PhD in Communication)

    Safety is a growth strategy. It’s not just a compliance checkbox. It’s not a nice-to-have. It’s something you have to plan for if you want to scale with integrity.

    Working on trust & safety, especially at the current moment, often means being expected to do more with less — high expectations, shrinking resources, and safety deprioritized at every turn. But the hard truth? It's not that different when times are good. We always have to fight for safety, because safety work acknowledges limits, which rubs hard against a system built on infinite growth.

    Now, companies are chasing AI as the answer to leaner ops. AI can help, but it cannot replace human judgment. It cannot understand context or nuance. If you work on a product that scales, you need a plan to scale safety with it — or risk pain later.

    I’m running a free workshop on Safety by Design next week for anyone trying to embed safety practices early and often, including how to use AI to help. It’s pragmatic, actionable, and built for real-world constraints. Hope to see you there. https://lnkd.in/gPnmY4WD

  • We’re proud to have contributed to this important effort to make digital spaces safer for young people. The GOSRN Youth Council’s open letter is a powerful reminder that platforms must prioritize the voices and experiences of their most vulnerable users.

    eSafety Commissioner

    We recently hosted the Global Online Safety Regulators Network Youth Dialogues. The project connected youth representatives from 9 countries to discuss online safety issues that impact young people globally. The group heard from guest presenters, including eSafety Commissioner Julie Inman Grant, Integrity Institute Co-Founder Jeff Allen, Head of International Affairs at 5Rights Marie-Ève N., and Family Online Safety Institute Policy Consultant Charlotte Aynsley.

    Discussions explored how digital spaces are vital to young people’s development and sense of connection. Unfortunately, they also reflected on the increased vulnerability of young people to digital harms. That’s why the group is now calling on global technology and policy leaders to invest in digital literacy and user empowerment.

    Special thanks to the agencies that supported this initiative:
    🌍 Childnet International (United Kingdom)
    🌍 Coimisiún na Meán (Ireland)
    🌏 National Communications Commission (Taiwan)
    🌏 Netsafe New Zealand & NZ Classification Office (New Zealand)
    🌐 5Rights (global)

    And to the incredible young people who brought their insight, experience, and passion to the dialogues – your voices are shaping the future of online safety.

    📢 Abby, Aideen, Ali, Andre, Anisa, Anna, Anna, Cosima, Ellie, Favour, Georgina, Grace, Ishita, Lauren, Liam, Meg, Minh, Sabina, Yuan-Ting.

    Read the full letter from the Youth Dialogues: https://lnkd.in/gFp_Ch5J

  • After a robust nomination process, we are pleased to welcome two of our members, Vaishnavi J and Theodora Skeadas, to the Integrity Institute’s Board of Directors. Vaishnavi and Theo bring deep experience in trust & safety, tech policy, and integrity work — and we’re excited for their leadership as we continue to grow. Learn more about them here: 🔗 https://lnkd.in/efgsbz5h 🔗 https://lnkd.in/eFYTExmA #InformationIntegrity #TrustAndSafety #TechPolicy #Leadership #IntegrityInstitute

  • 📺 Don't forget to check out our YouTube channel! 👉 https://lnkd.in/eNPqU4-F We cover a range of critical topics including global tech policy, tech-facilitated gender-based violence, AI governance, the Fediverse, platform accountability, and more. Subscribe and explore the conversations shaping the future of tech and integrity. #InformationIntegrity #TechPolicy #TrustAndSafety #DigitalGovernance #AI #Fediverse #PlatformResponsibility
