Proposer: @wenjing
The spread of misinformation is a severe problem in popular social media systems today. The problem is particularly challenging because such systems must balance the need for editorial control, e.g. filtering unwanted posts or other forms of censorship, against the need to let original and diverse opinions and information be shared freely to foster useful exchange. Examples of systems that try to address the shortcomings of a single centralized entity administering the system include Matrix, Mastodon, and Nostr. Examples of alternative bridging-based ranking algorithms, as opposed to the typical engagement-based ranking algorithms, include Polis and X’s Community Notes. Reddit is another example of a different way of surfacing posts, in its case by reader votes.
Easy access to powerful (and rapidly developing) generative AI technologies can exacerbate the misinformation problem by further bypassing human users’ ability to intuitively judge the truthfulness of text (e.g. through mastery of language use and common-sense reasoning) and of multimodal content beyond text, namely images, audio, and video. Many traditional practices in human interaction, social media included, implicitly rely on the assumption that only genuine human users have these high cognitive capabilities.
Problems to solve:
Authentic labeling of all types of content
Authentic identification of creators, editors, senders, …, human or AI, for provenance
Authentic identification of creators, editors, senders, …, human or AI, for accountability
Reliable consent and agreement for sharing information (see the sketch after this list)
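To make these requirements concrete, here is a minimal sketch of the kind of provenance record they imply. The field names are hypothetical and do not follow the C2PA manifest schema or the TSP data model; they only illustrate labeling, creator/editor identification (human or AI), consent, and an accountability signature in one place.

```python
# Hypothetical provenance record; field names are illustrative only and do
# not follow the C2PA manifest schema or the TSP data model.
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class ProvenanceLabel:
    content_hash: str                      # hash of the labeled content (text, image, audio, video)
    creator_vid: str                       # verifiable identifier of the original creator
    creator_kind: Literal["human", "ai"]   # whether the creator is a human or an AI system
    editor_vids: List[str] = field(default_factory=list)  # later editors/senders, for provenance
    sharing_consented: bool = False        # consent/agreement to share the content
    signature: str = ""                    # signature over the fields above, for accountability

label = ProvenanceLabel(
    content_hash="sha256:placeholder",
    creator_vid="did:example:creator-123",
    creator_kind="human",
)
```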
Weaknesses in the current practices:
Centralized social media systems have a long list of well-publicized issues related to misinformation.
Decentralized social media systems like Matrix and Mastodon lack a high-assurance, scalable, and privacy-preserving identity system. Without such an identity system, their operations cannot be fully decentralized, and they will face the same impossible editorial and governance challenges.
How C2PA+TSP can help:
TSP provides a verifiable identifier (VID) scheme that can be applied in both centralized and decentralized systems.
TSP provides authenticity, confidentiality, and privacy (i.e. end-to-end encryption plus stronger privacy).
C2PA provides content labeling.
TSP can facilitate agreement, e.g. consent.
TSP can facilitate accountability, e.g. auditing (see the composition sketch after this list).
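To show how these pieces might stack, here is a minimal sketch in Python. The functions are stand-ins, not calls from the c2pa SDK or a TSP implementation; they only illustrate the layering: C2PA binds a provenance manifest to the content, and TSP carries it between two VIDs with authenticity and confidentiality.

```python
# Stand-in functions only: NOT the c2pa SDK or a TSP implementation.
import hashlib
import json

def attach_c2pa_style_manifest(content: bytes, creator_vid: str) -> dict:
    """Stand-in for C2PA labeling: bind a provenance manifest to the content."""
    return {
        "content": content.hex(),
        "manifest": {
            "creator_vid": creator_vid,
            "content_hash": hashlib.sha256(content).hexdigest(),
        },
    }

def tsp_style_seal(payload: dict, sender_vid: str, receiver_vid: str) -> dict:
    """Stand-in for a TSP message between two VIDs."""
    body = json.dumps(payload, sort_keys=True)
    return {
        "sender": sender_vid,
        "receiver": receiver_vid,
        "body": body,
        # A real TSP implementation would sign with the sender's VID key and
        # encrypt to the receiver; this digest only marks where that happens.
        "integrity": hashlib.sha256((sender_vid + body).encode()).hexdigest(),
    }

labeled = attach_c2pa_style_manifest(b"post text or media bytes", "did:example:alice")
envelope = tsp_style_seal(labeled, "did:example:alice", "did:example:bob")
print(envelope["sender"], "->", envelope["receiver"])
```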
Not in scope:
The assumption is that the parties involved are genuinely interested in sharing authentic information rather than actively evading such mechanisms; solving the active-evasion problem is out of scope for this proposal. If a large enough percentage of participants adopt the authenticity schemes by default, then the absence of such use can itself be an effective signal for end users to treat the content with extra care. This is similar to the use of HTTPS in the web ecosystem, but applied to content and to decentralized parties beyond ordinary web servers.
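As a rough illustration of that signaling idea (not part of the proposal itself), a client could surface a hint when a post arrives without a verifiable label, much like a browser marking plain-HTTP pages. The label fields checked here are assumptions, not a real verification API.

```python
# Assumed label fields ("signature_valid", "creator_vid"); illustrative only.
from typing import Optional

def display_hint(provenance_label: Optional[dict]) -> str:
    """Return a UI hint only; the post is not blocked or filtered."""
    if provenance_label is None:
        return "no provenance label - treat with extra care"
    if not provenance_label.get("signature_valid", False):
        return "provenance label failed verification"
    return "labeled by " + provenance_label.get("creator_vid", "unknown")

print(display_hint(None))
print(display_hint({"signature_valid": True, "creator_vid": "did:example:alice"}))
```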