Discussion about this post

NubbyShober:

Color me skeptical, but it seems hard to believe that LLMs will *not* soon be used to subtly or overtly reinforce specific biases. Especially political biases.

Just as lawyers use "facts" to argue diametrically opposite positions, whether by selectively omitting or framing relevant data, it seems hard to believe that LLMs won't soon be trained accordingly to sway and manipulate public opinion.

Overtly authoritarian regimes like China would lead the charge. For example, how sympathetic would the CCP be to LLM arguments that criticize Communism, or specifically the policy decisions of Xi?

Sylvain Ribes:

I have argued for almost two years now that the EU should, for once, use its vast regulatory powers to compel social media platforms to deploy some sort of automated LLM-based fact checking.

The technology is ripe for it, and we could use open-source "transparent" models. One could even consider a very cost-efficient type of fact checking whereby the more virality a post has achieved, the more compute (and the better the model) that gets thrown at fact-checking it.
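The tiered scheme described above could be sketched roughly as follows. This is a minimal illustration, not a real system: the model tier names, view-count thresholds, and routing function are all hypothetical, and an actual deployment would call the chosen model rather than just record the routing decision.

```python
# Sketch of virality-tiered fact-checking routing (all names and
# thresholds are hypothetical). The idea: the more viral a post,
# the more capable (and costly) the model assigned to check it.

def pick_tier(views: int) -> str:
    """Map a post's view count to a hypothetical model tier."""
    if views < 1_000:
        return "small-open-model"   # cheap batch screening
    if views < 100_000:
        return "medium-open-model"  # closer scrutiny
    return "large-open-model"       # best model reserved for viral posts


def route_fact_check(post_text: str, views: int) -> dict:
    """Decide which tier handles a post; a real system would now
    invoke that model on post_text."""
    return {"tier": pick_tier(views), "chars": len(post_text)}


print(route_fact_check("Claim: X causes Y.", 5_000_000))
```

The appeal of this design is that compute spend tracks potential harm: a post seen by fifty people gets a cheap screen, while one seen by millions justifies the most capable (and auditable, if open-source) model available.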

I'm not holding my breath though.

