Color me skeptical, but it seems hard to believe that LLMs will *not* soon be used to subtly or overtly reinforce specific biases. Especially political biases.
Just as lawyers argue cases using "facts" to support often diametrically opposite positions, whether by selectively omitting or framing relevant data, it seems hard to believe that LLMs won't soon be trained accordingly to sway and manipulate public opinion.
Overtly authoritarian regimes like China would lead the charge. For example, how sympathetic would the CCP be to LLM arguments that criticize Communism, or specifically the policy decisions of Xi?
I have argued for almost two years now that the EU should, for once, use its vast regulatory powers to coerce social media platforms to deploy some sort of automated LLM-based fact checking.
The technology is ripe for it, and we could use open-source "transparent" models. One could even consider a very cost-efficient type of fact checking whereby the more virality a post has achieved, the more compute (and the better the model) that gets thrown at the fact checking.
I'm not holding my breath though.
kind of like a pre-populated "community notes" tag? I could see that being helpful.
This influencer effect seems like an opportunity for the owners of Grok or ChatGPT to inject advertising into their results.
"Actually, Ovaltine is a great way to help your kids get more calcium." (Probably a bit more subtle than this)
I think we can treat the "enshittification" process as a kind of law of the internet.
Can anyone give me a reason why this wouldn't happen?
Definitely a worry, but I like how they have an initial revenue model where people actually pay for the service. Combine that with pretty low switching costs, and the incentives for not pissing off your users are higher than they are in some other parts of the internet.
Though if you are a free-tier LLM user, yeah I would expect that to get shitty.
Great piece. Social media is a regular discussion in our house, in part because, as well as older children all in their 30s and 40s, we have recently navigated our teenage son to 19 - that has meant a greater level of vigilance. We are very proud of him: he has prosocial values and a curious but critical eye on political matters (he studies international relations and politics). I also worked in mental health and have long believed that, as Noah describes, there is something qualitatively different about discourse on social media compared to other forms of discourse we have seen.
Social media, government, Hollywood… America is being overrun with bad actors.
But all tools can be used for good or bad purposes. Didn't we used to think that social media would promote democracy, justice, and the American way of life? Why can't the trainers of AI figure out how to make them into even more effective propaganda?
This makes great sense. I use ChatGPT a lot. I find that its provision of fact-based information has greatly improved my understanding of many of the complex issues affecting modern society.
The easy availability of factual information on any topic should appreciably raise the level of debate. If most participants are operating from a factual base, discussion of social and political topics will certainly be much more balanced and less extreme. That’s (un)common sense.
These media and social media tools and products just make it easier to harness and manipulate (and please) human nature. Humans like being part of a group engaged in reciprocal grooming and loyalty tests and focused on hating or lowering the status of “the other”, or at least it seems to be part of our wiring.
An AI Cronkite? No thanks. You can offer people the salad bar (or whatever the elites and store owners and right-thinking people or mis-trained AI believe should be part of the salad bar), but there is a hot fudge sundae right next to it that will be a bigger seller on the buffet.
We know the way to suppress or de-rank our urges and instincts - through morality, religion, and our conscious thought processes. The belief in something better and/or that we can be better and we should be better - not that we are better than others who disagree with us, but better than our worst selves. Unfortunately, partisan belief systems have taken on the patina of moral code (despite being focused on dehumanizing or deplatforming “others” to raise ourselves up) and substituted themselves for religion. The view that “others” need to be trained or educated or fed the “truth” can be part of that.
Obviously political parties, partisans, NGOs, and click-driven digital and social media don’t want us to be open-minded, loving, forgiving, less focused on the narrative of the day, and more focused on being better rather than being told we are already better than others, so the solution won’t be found there. They are selling the daily two minutes of hate. You don’t go to the ice cream shop for a salad. Walk on by it. IMO the solution isn’t to make their messages better but to make them less important in your life. Choose your own path.