ToIP and DIF Announce Three New Working Groups for Trust in the Age of AI

Trust Over IP (ToIP), an LF Decentralized Trust (LFDT) project, and the Decentralized Identity Foundation (DIF) have launched three new Working Groups focused on digital trust for agentic AI:

  1. The Joint ToIP/DIF Decentralized Trust Graph Working Group
  2. The ToIP AI and Human Trust Working Group
  3. The DIF Trusted AI Agents Working Group

Each of these Working Groups will tackle a different part of the problem of how AI agents can be trusted by the parties and counterparties who will need to rely on them in the agentic economy. According to LF Decentralized Trust Executive Director Daniela Barbosa, this work is timely: “As AI accelerates, the question is no longer if we need digital trust frameworks, but how quickly we can deliver them. ToIP, an LF Decentralized Trust project, has been at the forefront of this effort and, with these working groups, is teaming with DIF to speed new approaches to the market. At LFDT, we are committed to supporting this collaboration to make trust, identity, and accountability core pillars of AI.”

The Decentralized Trust Graph Working Group

The Decentralized Trust Graph Working Group (DTGWG) is standardizing a new approach to one of the oldest and hardest problems in digital trust: proof of personhood, i.e., how to prove a person is a real, unique human being without violating their privacy. It is a joint Working Group open to both ToIP and DIF members.

DTGWG’s new approach uses cryptographically verifiable identifiers (e.g., W3C Decentralized Identifiers) and verifiable digital credentials (e.g., W3C Verifiable Credentials) to build a decentralized trust graph of verifiable trust relationships. There is no centralized database because all parties control their own portable subgraph of trust relationships in their own digital agents and wallets.
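To make the idea concrete, here is a minimal, purely illustrative sketch of that model: each party's wallet holds only its own subgraph of relationship credentials, so there is no central database to query or compromise. The class and field names are hypothetical simplifications, not drawn from any DTGWG specification, and the `proof` field is a stand-in for a real cryptographic signature.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RelationshipCredential:
    """An edge in the trust graph: the issuer attests to a relationship with the subject."""
    issuer_did: str     # e.g. a W3C Decentralized Identifier
    subject_did: str
    relationship: str   # e.g. "colleague", "verified-human"
    proof: str          # stand-in for a cryptographic signature

@dataclass
class WalletSubgraph:
    """The portable subgraph a single party controls in their own wallet or agent."""
    owner_did: str
    credentials: list = field(default_factory=list)

    def add(self, cred: RelationshipCredential):
        self.credentials.append(cred)

    def neighbors(self):
        """DIDs this wallet holds verifiable relationships with."""
        return {c.issuer_did if c.issuer_did != self.owner_did else c.subject_did
                for c in self.credentials}

# Alice's wallet holds credentials issued by peers she actually knows.
alice = WalletSubgraph("did:example:alice")
alice.add(RelationshipCredential("did:example:bob", "did:example:alice",
                                 "colleague", proof="sig-by-bob"))
alice.add(RelationshipCredential("did:example:carol", "did:example:alice",
                                 "verified-human", proof="sig-by-carol"))

# Both edges point at Alice, so her neighbor set is Bob and Carol.
print(sorted(alice.neighbors()))
```

The key design property this sketch mirrors is portability: the full graph only emerges from the union of subgraphs that individual parties choose to share, never from a single operator's database.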

The use of personhood credentials (PHCs) was originally proposed in a seminal August 2024 paper of that name. That was followed by a January 2025 Ayra Association paper proposing a complementary building block called a verifiable relationship credential (VRC). The combination of PHCs and VRCs is now being championed by the First Person Project as an alternative to any form of global biometric database. 

At the Linux Foundation Member Summit last March, LF Executive Director Jim Zemlin explained how a decentralized trust graph could prevent malware injection attacks in the open source supply chain. “This is just one example of the thousands of digital trust challenges that can be solved by an open standard decentralized trust graph,” said ToIP Steering Committee member Drummond Reed. “But perhaps the most important use case will be enabling authenticated delegation from individuals to personal AI agents. This is why the DTGWG is excited to work with the new Trusted AI Agent and AI and Human Trust Working Groups.”

Besides PHCs and VRCs, the scope of the DTGWG includes specifications for key management and recovery, privacy-preserving zero-knowledge proofs, relationship cards (r-cards), and social vouching. It also expects to establish task forces on the required trust task protocols, UI/UX affordances, and decentralized naming and discovery mechanisms.

Two kickoff meetings for the DTGWG are scheduled for Wednesday 24 September:

  • North America/Europe: 08:00-09:00 PT / 15:00-16:00 UTC
  • APAC: 18:00-19:00 PT / 01:00-02:00 UTC

Details for joining the meetings can be found in the ToIP community calendar.

ToIP AI and Human Trust Working Group

The AI and Human Trust Working Group (AIMWG) at ToIP is the continuation of its predecessor, the AI and Human Trust Task Force, launched in July 2022 at the very beginning of the AI revolution. Its renewed focus is on studying and making recommendations on:

  1. Applying ToIP solutions to enhance human trust in AI technologies.
  2. Integrating AI technologies to better solve trust challenges in the context of human and AI interactions.

AIMWG Chair Wenjing Chu, Senior Director of Technology Strategy at Futurewei, explains, “AI is reshaping our relationship with technology. We must embed trust in AI — and harness AI to strengthen trust. Two sides of the same coin.” Building on the task force’s work over the past three years, the AIM Working Group’s current focus includes:

  • A draft specification for running AI agent protocols, such as MCP (Model Context Protocol) and A2A (Agent-to-Agent Protocol), over TSP (the Trust Spanning Protocol).
  • Documentation of canonical use cases for AI agents, covering how they relate to people in trust relationships and user experiences across various personas.
  • A white paper recently published by LFDT, “Trust Spanning Protocol (TSP): Strengthening Trust in Human and AI Interactions,” written in collaboration with the DIF CAWG and the C2PA.
  • Planning a new task force to work on a trust framework for AI agents.

Wenjing added that the overall scope of deliverables from the AIMWG may include insight reports, position white papers, specifications and recommendations covering:

  • How to strengthen trust in human and AI interactions leveraging the ToIP Trust Spanning Protocol (TSP) and other ToIP components.
  • Communication protocols between AI agents, and between AI agents and non-AI data resources, services, tools, etc.
  • Authenticity and provenance of content and data (in coordination with the DIF CAWG and the C2PA).
  • Personhood and agent identities (in coordination with the new Decentralized Trust Graph Working Group, above).
  • Delegation, accountability, and control of AI agents.
  • How to build trust in AI agent frameworks.

The AIMWG officially transitioned from a ToIP task force to a Working Group in July. Its wiki page contains detailed information about the WG, how to join, all past meeting minutes and recordings, and future meeting agendas. Meetings are held weekly on Thursdays:

  • Thursdays: 09:00-10:00 PT / 16:00-17:00 UTC

DIF Trusted AI Agents Working Group

As autonomous agents gain real-world responsibility for business tasks such as composing tools, making decisions, exchanging verifiable data, or executing transactions, those deploying them will require robust mechanisms for identity, delegation, authority, and governance. The DIF Trusted AI Agents Working Group (TAIAWG) will build and maintain specifications, reference implementations, and governance patterns for enabling high-trust AI agent ecosystems.

“At the foundation, trusted AI agents require verifiable identities and adaptive, revocable delegation spanning organizational boundaries, all designed around zero-trust principles,” said Kim Hamilton Duffy, Executive Director of DIF. “Led by its chairs Andor Kesselman, Nicola Gallo, and Dmitri Zagidulin, this working group is taking a comprehensive approach, enabling solutions that are scalable, accountable, and human-aligned.”

The TAIAWG will focus directly on technical infrastructure and portable mechanisms for AI agent trust.

The first planned deliverable is Agentic Authority Use Cases. The WG will assess existing authorization mechanisms and define a streamlined, interoperable specification for delegating authority in AI agent workflows, with a particular emphasis on object capabilities.
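As a rough illustration of the object-capability pattern the WG is emphasizing, the sketch below models authority as a chain of delegations in which each hop can only attenuate (never expand) what the previous holder could do. All names and fields here are hypothetical, not from any TAIAWG deliverable, and a real system would bind each delegation to a cryptographic proof.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Capability:
    """An unforgeable token of authority held by one party (illustrative only)."""
    holder: str
    actions: frozenset                      # what the holder may do
    parent: Optional["Capability"] = None   # link back to the delegating capability

    def delegate(self, new_holder: str, actions) -> "Capability":
        """Delegate a (possibly narrower) capability to another party.

        Intersecting with self.actions enforces attenuation: a delegate
        can never receive more authority than the delegator holds.
        """
        granted = frozenset(actions) & self.actions
        return Capability(new_holder, granted, parent=self)

    def allows(self, action: str) -> bool:
        return action in self.actions

# A person holds root authority over, say, their calendar resource.
root = Capability("did:example:alice", frozenset({"read", "write", "share"}))

# Alice delegates a narrowed capability to her scheduling agent:
# the agent may read and write, but cannot re-share the calendar.
agent_cap = root.delegate("did:example:agent-1", {"read", "write", "share-with-everyone"})

assert agent_cap.allows("read")
assert not agent_cap.allows("share")   # the agent cannot exceed its grant
```

Because each capability records its parent, a verifier can walk the chain back to the human root of authority, which is exactly the kind of auditable delegation the agentic workflows described above require.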

“We plan to leverage existing standards wherever they suffice, extending or adapting them only to address the unique requirements of agentic workflows,” said Andor Kesselman, Chair of the Trusted AI Agents WG. Examples of other deliverables in scope for the TAIAWG include agentic registries, agentic identity, agent protocol evaluation, trusted agent communication, trust frameworks for agents, agentic discovery, agentic access control systems, and many others.

The first meeting of the TAIAWG will be held September 30, 2025, at 8:00 AM PT. Please go to the DIF Trusted AI Agents Working Group page for more details on joining.
