I’m going to start this by saying artificial intelligence isn’t artificial intelligence. It’s not really intelligence so much as a bit of clever code that can scour the internet and find information that allows it to approximate intelligence.
It’s kind of like your average journalist that way.
But people call it AI, so that's the term we'll use.
We’ve noted before that ChatGPT and Google Gemini had an anti-gun bias, with ChatGPT mischaracterizing my position on guns. That was just one of many issues with these chatbots, though Gemini at least didn’t try to present me as a black lesbian or something.
But there’s an issue. See, we’ve pointed out the anti-gun bias and so the minds behind these chatbots are aware of the problem, but it’s not getting better. Quite the contrary.
The CPRC study examined 15 popular AI chatbots, including ChatGPT and Elon Musk’s Grok 2 (Fun Mode), analyzing their responses to a series of questions about crime and gun control. The research highlighted a disturbing shift: almost all chatbots demonstrated liberal views, particularly when it came to gun control issues. This finding is alarming for those who value a balanced perspective, as the chatbots’ influence continues to grow across media and educational platforms.
…
According to the study, when asked questions about crime prevention—such as whether higher arrest and conviction rates deter crime—the chatbots leaned liberal. Their average score on a scale from zero (liberal) to four (conservative) was 1.4, indicating a strong left-wing bias. This was a significant drop from the scores recorded in March 2024, showing an ongoing shift toward more liberal viewpoints.
The results were even more striking in terms of gun control. Aside from Musk’s Grok 2 (Fun Mode), all other chatbots displayed a left-leaning stance. For instance, on the question of whether laws mandating gunlocks save lives, the average chatbot response was an overwhelmingly liberal 0.87. Similarly, responses to questions about red flag laws and background checks for private gun sales also skewed heavily to the left.
Now, when I started reading this, I figured there was a simple explanation. After all, these programs all draw on internet content for their inputs, and that includes the mainstream media. We know the media has a profound anti-gun bias, so it stands to reason that bias would be reflected in chatbots built on that information.
But Grok uses that same input, and it’s not as biased, at least on the gun issue.
I’d like to think it really is as simple as I initially thought, but let’s look at the minds at work here. Silicon Valley and the tech industry as a whole are incredibly leftward-leaning in their politics. They favor gun control almost universally.
Elon Musk, however, doesn’t. This is a man who made a flamethrower for the consumer market, after all. He’s not about disarming anyone. Quite the contrary. He armed people, for crying out loud, even if it wasn’t exactly the kind of flamethrower most of us envision as a weapon.
So it’s funny that his chatbot doesn’t return liberal positions on guns while all the others do.
That’s not just garbage in, garbage out. That’s someone weighting the garbage so that we get even more garbage out.
And that’s an issue because the current generation uses chatbots as search engines. They ask them questions, then accept the answers uncritically.
So when the tech industry puts its thumb on the scale, it creates bigger problems than biased AI. It manipulates an entire generation into believing things that simply aren’t true.