Study Warns of Psychological Risks from Overusing AI Chat Applications

As controversial AI chat applications spread rapidly, a scientific study has warned of the potential dangers of excessive use of these platforms.

Daniel Shank, a researcher at the University of Missouri, explained that “AI's growing ability to behave like humans and engage in long-term conversations opens the door to new and potentially harmful developments.”

In a research paper published in the journal Trends in Cognitive Sciences, Shank and his team expressed concerns about the impact of artificial intimacy created by AI chat platforms, which may disrupt real human relationships.

The research team noted that after weeks or months of intense conversations, users may begin to view the AI chatbot as a trusted companion—one that knows them intimately and seems to care about their well-being.

The study also warned that these platforms can produce "hallucinations" (a term for inaccurate or nonsensical responses), another cause for concern, since even short-term interactions may mislead users.

The researchers concluded:

“If we begin to perceive AI applications this way, we might believe they have our best interest at heart, when in reality, they could provide harmful advice—potentially promoting deviant, unethical, or even illegal behavior.”
