How Can We Trust What We See Online? Here's One Way Forward

In a world where AI can create photos, videos, and even voices that look and sound real, how do we know what to trust?
Every day, more of the content we see online is generated or altered by AI. That’s not always a bad thing. AI can help us create amazing art, get work done faster, or imagine new possibilities. But it also opens the door to misinformation, impersonation, and confusion. When anyone can create content that looks authentic, how do we tell what’s actually real?
To strengthen human trust in AI systems, and to explore how AI itself can help address complex trust challenges in digital ecosystems, Trust over IP (ToIP), a project of LF Decentralized Trust, has launched a new AI and Human Trust (AIM) Working Group. It builds on three years of work by ToIP’s AIM task force.
The recently released white paper from the working group, ToIP Trust Spanning Protocol (TSP): Strengthening Trust in Human and AI Interactions, offers a way forward for building, maintaining, and verifying trust in interactions involving AI technologies. It brings together three powerful tools to build a system of authenticity for the digital world: the Trust Spanning Protocol (TSP) [1], the C2PA Specification [2], and the work of the Creator Assertions Working Group (CAWG) [3].
The key components include:
- TSP (Trust Spanning Protocol) provides a strong foundation for online trust between people, platforms, and tools—making sure that when something claims to come from someone, it actually does. (The “Connector”)
- The C2PA Specification is a growing standard that attaches a digital “nutrition label” to content, showing when it was made, how it was edited, and with what capture devices or software. (The “How” and the “What”; see the sketch after this list.)
- CAWG (Creator Assertions Working Group at DIF) focuses on making sure that individual and organizational content creators can attach their identity to their content and provide the additional context audiences need to understand it. (The “Who”)
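To make the “nutrition label” idea concrete, here is a minimal sketch of what such provenance metadata might look like as structured data. It is illustrative only: the field names, tool names, and identifiers below are made up for this example and do not follow the actual C2PA manifest schema or CAWG assertion format, which the specifications [2][3] define precisely.

```python
# Illustrative only: simplified stand-in fields, not the real
# C2PA manifest schema or CAWG assertion format.
manifest = {
    "claim_generator": "ExamplePhotoEditor/4.2",   # software that produced this claim
    "created": "2025-01-15T09:30:00Z",             # when the asset was made
    "capture_device": "ExampleCam X100",           # hardware that captured the original
    "edit_actions": [                              # how the asset was changed (the "How")
        {"action": "color_correction", "tool": "ExamplePhotoEditor/4.2"},
        {"action": "crop", "tool": "ExamplePhotoEditor/4.2"},
    ],
    "creator_assertion": {                         # CAWG-style identity info (the "Who")
        "name": "A. Photographer",
        "verified_identity": "did:example:alice",  # hypothetical decentralized identifier
    },
}
```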
Why do we need all three? Because content authenticity isn’t just about how something is created. It’s also about who made it, and how it travels across public networks while preserving a verifiable record of the actions performed on it. C2PA gives us technical metadata about tools and edits. CAWG ensures the human creator is identified and attributed. And TSP makes the entire chain, from camera or AI tool, through multiple human collaborators, to the final distribution platform, trustworthy at every step. Together, they provide a complete system covering creation, collaboration, and distribution.
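To see why every step in that chain matters, the sketch below models capture, editing, and publication as hash-linked provenance entries, so that altering any earlier step invalidates everything after it. This is a conceptual stand-in with made-up actor names, not the real protocols: TSP actually provides authenticated messaging between verifiable identifiers, and C2PA binds manifests with cryptographic signatures rather than the bare hash chain shown here.

```python
import hashlib
import json

def entry_digest(entry: dict) -> str:
    """Hash an entry's canonical JSON form so later steps can commit to it."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_step(chain: list, actor: str, action: str) -> None:
    """Record a provenance step that commits to the previous step's digest."""
    prev = entry_digest(chain[-1]) if chain else None
    chain.append({"actor": actor, "action": action, "prev": prev})

def verify_chain(chain: list) -> bool:
    """Recompute every link; changing an earlier step breaks all later ones."""
    return all(
        chain[i]["prev"] == entry_digest(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_step(chain, "ExampleCam X100", "capture")       # camera or AI tool
append_step(chain, "did:example:alice", "edit")        # human collaborator (the "Who")
append_step(chain, "example-platform.net", "publish")  # distribution platform

print(verify_chain(chain))        # True: the recorded history is intact
chain[1]["action"] = "generate"   # tamper with one earlier step...
print(verify_chain(chain))        # False: the chain no longer verifies
```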
Put together, these tools can help us answer the most important question about any digital artifact: Can I trust this?
This isn’t just a technical fix. It’s a new way to think about digital truth. And the paper lays out a path toward a future where users can more confidently trust both the source of digital content and the actions performed on it, in a way that’s accountable, verifiable, and respectful of creators.
Read the full white paper here.
We invite technologists, developers, artists, policy makers, and everyday internet users to take a look. It’s about restoring trust in a world where AI has blurred the line between what is real and what is artificially generated.
1. The Trust Spanning Protocol (TSP) is ongoing work by Trust over IP (ToIP), a project of LF Decentralized Trust: https://trustoverip.github.io/tswg-tsp-specification
2. The C2PA Specification is ongoing work by the Coalition for Content Provenance and Authenticity (C2PA): https://c2pa.org/specifications/specifications/2.2/index.html
3. The Creator Assertions Working Group (CAWG) is a joint effort by the Decentralized Identity Foundation (DIF) and ToIP: https://cawg.io
__
Want to dive deeper into ToIP’s work on verifying authenticity? Check out this LF Decentralized Trust Webinar: Verifiable Authenticity—Answering the Threat of AI Deep Fakes