The study, published Wednesday by Common Sense Media, a nonprofit advocacy group, asked 1,000 teenagers aged 13 to 18 about their experiences with media made by generative AI tools. About 35 percent reported being deceived by fake content online, while 41 percent said they had encountered content that was real but misleading, and 22 percent said they had shared information that later turned out to be fake.
The findings come as a growing number of teenagers adopt artificial intelligence. A September Common Sense study showed that seven in 10 teenagers had at least tried generative AI.
Two years after ChatGPT’s launch, the AI arena has grown increasingly crowded, most recently with DeepSeek’s meteoric arrival on Monday. But even the top models remain prone to hallucinations, meaning they create false information out of thin air, according to a July 2024 study from Cornell University, the University of Washington and the University of Waterloo.
And teenagers who had encountered fake content online were more likely to say generative AI would make it even harder to verify online information, according to Wednesday’s Common Sense study.
The survey also asked teenagers about their views on major tech corporations, including Google, Apple, Meta, TikTok and Microsoft. Nearly half of teenagers said they don’t trust Big Tech companies to make responsible decisions about how they use AI, according to the study.
“The ease and speed at which generative AI allows everyday users to spread unreliable claims and inauthentic media may exacerbate teens’ existing low levels of trust in institutions like the media and government,” Wednesday’s study said.
Teenagers’ distrust of Big Tech echoes a growing dissatisfaction with major tech companies in the United States. American adults, too, are contending with a rise in misleading or outright fake content, exacerbated by the erosion of already limited digital guardrails.
Since acquiring Twitter in 2022 and renaming the platform X, Elon Musk has gutted its moderation teams, allowed misinformation and hate speech to spread and reinstated the accounts of previously banned conspiracy theorists, among other moves. Recently, Meta moved to replace third-party fact-checkers with Community Notes, a change CEO Mark Zuckerberg has acknowledged will lead to more harmful content appearing across Facebook, Instagram and its other platforms.
“Teens’ perceptions of the accuracy of online content signal a distrust in digital platforms, pointing to an opportunity for educational interventions on misinformation for teens,” the study found, adding that there’s also a “need for tech companies to prioritize transparency and develop features that enhance the credibility of the content shared on their platforms.”