Discussion about this post

NubbyShober

Color me skeptical, but it seems hard to believe that LLMs will *not* soon be used to subtly or overtly reinforce specific biases. Especially political biases.

Just as lawyers argue cases using "facts" to support often diametrically opposed positions, whether by selectively omitting or framing relevant data, it seems hard to believe that LLMs won't soon be trained in the same way to sway and manipulate public opinion.

Overtly authoritarian regimes like China would lead the charge. For example, how sympathetic would the CCP be to LLM arguments that criticize Communism, or specifically the policy decisions of Xi?

Matthew

This influencer effect seems like an opportunity for the owners of Grok or ChatGPT to inject advertising into their results.

"Actually, Ovaltine is a great way to help your kids get more calcium." (Probably a bit more subtle than this)

I think we can treat the "enshittification" process as a kind of law of the internet.

Can anyone give me a reason why this wouldn't happen?

