There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:
•Individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.
•Gradually they lose the ability to hold conversations with humans, who aren't programmed to stroke their ego, and replace human connection with AI.
•Eventually, they spiral and completely lose touch with reality. During this time they make terrible decisions that destroy their lives. Then at some point, they are forced to confront the reality of their decisions and behavior, similar to coming out of an extended splitting episode in Dissociative Identity Disorder, or waking up sober from an alcohol- or drug-fueled binge.
Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn't changing our brains. Even if the majority of users don't develop full-blown psychosis, if your day is suddenly spent talking to a self-affirming mirror, it's going to change your brain and behavior. It's more a question of "what/how" it's changing people than "if" it's actually changing them.
So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?
Not that I knew the guy before his AI use, so for all I know he was dumb as rocks before, but my manager hired a guy who doesn't know shit as the software architect for our team. He doesn't even really know our tech stack (mostly TypeScript and AWS stuff), but my manager is really into AI, so he hired a guy who promised he'd get us all to adopt AI.
He can't do shit without AI. I asked him to update a few dependencies a few weeks ago and he spent double the time the junior on our team takes for the same task, while also overlooking half the spots where he needed to do something, despite the fact that I gave him a clear list of the spots to look at and the actions to take. Oh, and it was the third time he'd had that exact task, and he learned nothing from the first few times.
Generally, his main issues are that he's completely brain-rotted and forgets anything you tell him right away (but never acknowledges that he's forgetful and gets defensive instead), AND that he's incorrect with such confidence that we're at a point where no one trusts his claims without double-checking them. NONE of these things would be a problem if he just had the capacity to acknowledge them. My team is very much not perfectionist; many of us are forgetful and not great at communication (we're software guys, AuDHD is the norm), but his ego blocks any chance of improving or adapting his habits to fit into the team.
Honestly, I've just resigned myself to it. As much as I have a distaste for the guy, I would leave the company before resorting to being overtly toxic to him and bullying that ego out of him. I've been a bit snarky and vented some frustrations to other coworkers, but in the end we're all just trying to survive capitalism here, I guess, so I shouldn't be too hard on him for doing that to the best of his capabilities. He probably isn't even being paid much better than the rest of us; my company has notoriously bad pay, but people put up with it because there's a lot of freedom, good accommodations for parents and ND people, and some pretty sweet benefits. Sad to see that management is clamping down on the freedom by mandating AI everywhere.
Just curious: how frequently do you talk to people who have isolated themselves from human connection? Apparently it's common enough for you to notice a pattern, but I personally only ever talk to people who are talking to other humans, for the obvious reason that communication is a bidirectional process.
Are you sure this observation of yours is not a delusion stemming from your unchecked social media use?
Maybe? Are you sure you're not weirdly defensive about AI because you prefer interactions where you control the narrative and every opinion you have is validated?
Honestly, I really only know one person who uses AI so much that I would even consider it an issue, and until recently, he had been my best friend since 2007. He was always really smart and rational; he was the kind of person who would do a lot of research and look into things before rushing into any decision or forming an opinion.
Originally, about two years ago, he just used AI for automation. Then he started using it to quickly research things related to work, but eventually he started using "AI research" for everything, and once he reads an AI summary there's no changing his opinion.
A lot of the time he will send me the links the AI cites in its summary to prove he's correct, but when you actually read the information in those links, it doesn't say what he thinks it says. Once he's formed an opinion and it's been validated by AI, there seems to be no evidence that can convince him otherwise.
He actually went down a quantum physics/new-understanding-of-math rabbit hole pretty early on. Luckily, he eventually realized that all the information ChatGPT was telling him was correct had actually been misinterpreted, even though it was still giving him positive feedback and telling him he was a genius, just like it always seems to do to people who don't realize they're getting bad information and end up ruining their own lives.
He didn't stop using AI, though; he just stopped using ChatGPT and switched to other models. He also gets defensive if you try to tell him he should dial back his AI use, even though he can no longer hold a conversation with anybody if it's not related to whatever he's interested in at the moment. He comes off as very rude because he doesn't seem to realize that shutting down conversations he doesn't feel like hearing, like he's closing out a tab he's done using, isn't appropriate. And when I tell other people about his opinions and arguments, and how he's citing information to support those arguments now, they say, "No offense, but he sounds really dumb."
Which is definitely not true. He's very smart, and he always has been. He's got some really impressive degrees, earned before he became dependent on AI, that prove it. He also didn't just suddenly lose the social skills and empathy he'd had for 18 years. He's just become way too dependent on technology that's designed to make him believe he's always correct and being super productive and efficient, so that he gets a little dopamine bump and wants to keep using it, instead of taking the time to actually read new information, or listen to what people are saying and how they're saying it, and then use his own very impressive logic and reasoning skills to interpret it.
Idk, it's an n=1 and I could definitely be wrong. That's why I asked this question: bc I wanted to hear opinions outside of my own personal experience and the ones I've already read or seen online.
•The rise of personal AI advisors
OpenAI CEO Sam Altman has a front-row view of this phenomenon. He notes stark generational differences in ChatGPT usage: “Older people . . . use [ChatGPT] as a Google replacement,” Altman recently observed, whereas many in their 20s and 30s “use it like a life adviser.” In other words, younger users aren’t just asking AI for trivia or weather updates, they’re confiding in it, seeking guidance on college decisions, career moves, and personal dilemmas. Altman says some college students have ChatGPT so deeply integrated into their daily lives that “they don’t make life decisions without asking ChatGPT what they should do. It has the full context on every person in their life and what they’ve talked about.” The chatbot has effectively become a confidant—a kind of always-available sounding board and adviser in one.
•AI chatbots and digital companions are reshaping emotional connection
Synthetic relationships are filling the void to satisfy the fundamental human need for social connection. Research shows excessive use of these tools may worsen loneliness and erode social skills. Experts and advocates are highlighting the need for guardrails and regulations to ensure user safety and well-being.
Nobody I know who uses AI has changed. They were dumb as fuck before and the AI hasn’t changed that.
You’re absolutely right…
A coworker: putting in a lot more effort to do something with AI than it would take to do it themselves, and trying to convince everyone they're actually working faster and more efficiently.
I think a lot about the time a junior developer on the team was like "I'll use ChatGPT to reverse this list" and I was just like, my guy, we're working in Python, that's a one-line expression.
it’s a 6 char expression
arr[::-1]
lol. I think that's the thing. It's a trivial task, but it doesn't feel like work because they're asking for it to be done instead of doing it themselves.
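For anyone landing here from a search, a quick sketch of what the thread is joking about: the standard ways to reverse a list in Python, nothing beyond built-ins.

```python
# Reversing a list in Python: the slice expression from the thread,
# plus the two other common idioms.
arr = [1, 2, 3, 4]

rev_copy = arr[::-1]            # slicing: builds a new reversed list
rev_iter = list(reversed(arr))  # reversed() returns an iterator; wrap in list()
arr.reverse()                   # in-place reversal; returns None

print(rev_copy)  # [4, 3, 2, 1]
print(rev_iter)  # [4, 3, 2, 1]
print(arr)       # [4, 3, 2, 1]
```

The slice makes a copy, `list.reverse()` mutates in place; which one you want depends on whether the caller still needs the original order.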
To be fair, before ChatGPT, they’d probably still google it and land on a StackOverflow question for doing the same thing, rather than refer to the documentation or memorize it.
Yeah, that is something I regularly look up, too. Together with Euler's formula, resistor colours, and printf codes… My brain is very efficient at almost remembering things, and when I read the intro to the article, it mostly comes back.
I’m optimistic that they’re more likely to learn something during that process, but that might not be a well founded belief.
Personally I can’t remember basic syntax very well and constantly have to look things up. Although just knowing that a way of doing something exists does help a lot.
This is what I have seen, but it's because their managers and, even more so, directors want everyone to be using AI.
My boss will pause a meeting to live-prompt something he wants done, then start implementing the setting or code immediately during the meeting. This tends to draw out the process a lot. Admittedly he is pretty busy, but it's definitely a crutch. I've noticed more AI-based decisions that are based on the kind of general advice you might see on LinkedIn, not the actual condition of the business, the available skills, or the details of the problem. The advice tends toward the mean of a typical US-based, medium-sized, service-style business. I can foresee a point where management won't be brave enough to make a decision without citing what the AI told them to do.
Before AI, my former boss would do this but with email instead. She'd call a 2-hour meeting and, no kidding, spend the first 1.5 hours answering emails or editing a document, and then complain no one was getting anything done. I'd regularly stay until 7 or 8pm (starting at 8am, before my boss) just to keep up with the workload.
Worst leader I’ve ever had. Sorry you’re going through something similar.
I’m on the other side now and promised I’d never look back.
Ah, someone who needs a background activity to focus, but isn't self-aware enough to do it with YouTube videos or something like that. So they make it everyone's problem.
We'd spend meetings on how to do something with the boss constantly saying "we are not trying to solution here," which I guess means we are trying to go in circles?
I feel like I’m the last grounding point for a peer who is getting in too deep. He is running all kinds of agents and says that he is afraid of getting left behind. He tells me about openclaw, which I looked into, but not interested in automation that doesn’t produce specific repeatable results.
On his behalf I have dug into ollama, but I find that I am just as fast, if not faster, at OCR text cleanup using a spell checker as I am arguing with the bot and fixing its mistakes.
He seems to understand my frustrations very well, and my counterpoints seem to be accepted.
I think it is important to try the tools at least a few times and attempt to integrate them into your workflow, but once you finally feel like you have a flow, you need to take a step back and compare it to your work without them. Sure, you're contributing to the numbers briefly, but without being able to articulate your grievances from their perspective, your words won't carry as much weight.
My feeling is to have it help you do something you know very well. If you're awesome at video games, play one and ask it what to do at each point. This is what has gotten me to learn how it can fail. It works very often, but when it fails, it's great at producing a plausible answer that will lead you down a bad path.
Basically, that’s what I have seen. It gives the average answer, and sometimes conflates information from similar topics or appears to provide solutions that don’t exist.
If your task is to take creative solutions and work them into a framework, it might help jump start ideas, but it cannot keep a logical thread.
I feel like it's fine(ish) for work, and I agree: as long as you can show some evidence it's easing your workflow rather than causing you more issues, it's serving its purpose.
My concern is people who seem to get hooked on it like a drug, and refuse to acknowledge any evidence it’s causing more issues than actually helping them. Like they get really anxious/can’t function without it, and start trusting AI more than they trust their own ability to reason through a problem.
It’s especially concerning to me when people use it like this outside of work, like a life guide. It’s almost like the AI starts doing the living for them.
For example, when it comes to navigating relationships, AI can give some really bad advice because it’s lacking human connection and feeling/intuition. Those are pretty essential ingredients for decision making. If you decide to always default to AI to help you make decisions or solve problems, you’re forgoing the entire experience of having a human relationship.
That connection and the way you feel are kind of the whole point. Human relationships aren’t easy, sometimes they hurt, and people usually don’t respond well to only being acknowledged when the other person feels like interacting with them. But feelings and being able to understand the other person’s perspective even if you don’t agree with them, are kind of the entire experience of being human. Without that experience you might as well just not have human relationships, and some people seem to be ok making that sacrifice.
They always ask AI first instead of taking any debugging steps.
In what way did that change?
It's been decades since I started asking them if their plug was 2-prong or 3-prong. "Can you try unplugging it now?"
Just to get them to confirm they unplugged and plugged it back in.
I think the opposite is the ideal. If using AI, write an architecture document of the code, then point an LLM at it. Be prepared to open up the debugger and troubleshoot someone else’s code.
Honestly, I’ve gotten a lot of lift from this technique since the devs at my job legit don’t know how to even use source control.
I’ve seen people become more introverted and unable to participate in an open discussion. They bring an opinion but refuse to reason with actual arguments. If that doesn’t work out for them (it never does) they are offended and leave. Very odd behaviour.
That was pretty normal human behavior before AI.
True, but I've noticed it as a change in my personal circle, and as a recent change among people who acted differently before becoming heavy AI users. I find the effect difficult to describe; it's like a dissociation from their surroundings, like what supposedly happens when people join a cult. Quite suddenly it becomes very difficult to get through to them. It's weird and scary.
It definitely seems like it’s making people less open to the possibility their opinion is incorrect.
Not that people haven’t always had a difficult time being wrong, but now AI can cite “research” to answer a question by summarizing information in a way the user wants to hear.
So if somebody is normally very rational, but starts relying on AI to summarize information and research for them to save time, the summary generated might be phrased in a way that ultimately misinterprets the information it’s using in an attempt to make the summary more appealing to the user.
So somebody might believe based on the summary’s misinterpretation of information that an opinion seems to be backed up by AI, and this is irrefutable evidence they’re correct.
I’ve definitely noticed an increase in previously rational people now getting offended/defensive if you disagree with them. Then just refusing to have a conversation.
That's always kind of been a universal human flaw, but it seems like AI has turned up the volume. It also seems like that kind of behavior used to be reserved for political disagreements. People avoid talking about politics bc people tend to have a sort of tribalistic, in-group vs. out-group response.
Now it’s like the in-group is just one person and their self affirming mirror. Wtf can you even talk about with somebody who believes the all knowing mirror they carry around in their pocket could never steer them wrong?
I was seeing that before AI. Social media was already doing that.
Yeah but now it’s like the people who weren’t on social media are repeating this behavior using AI.
Honestly, doesn’t seem too coincidental to me that the same broligarchs who wanted to get everybody dependent on social media echo chambers to receive and exchange information, are now trying to push everybody to embrace AI.
Nothing. The closest thing to a change in behaviour is people using AI to search instead of asking me or Google.
So annoying how Google uses its monopoly to just make that the default for people.
About 2 hours ago Gemini insisted it's 32 hours until tomorrow's BF update. It comes out in like 18 hours.
I like how when I ask Gemini to navigate, it fucking tells me to use the Google Maps app.
Assistant did it no problem, but Google killed it and forced their shitty AI on us.
Oh yeah it’s fucking annoying. The assistant without AI used to work better. Especially now that I’m controlling my lights with it.
Although I must say that the Philips Hue mobile UI is sooo garbage that I still prefer using Gemini to change my lights. And there have been a few articles saying Hue can just use code to turn the lamps themselves into motion detectors, because they're WiFi transceivers and can sense how much the connection is disturbed by you moving. Which would make me stop using them if I had anything to hide.
I mean, I know the whole "nothing to hide" thing isn't an argument against overt privacy invasion, but I just can't go back to using light switches and a lamp with one hue and intensity.
Nuh-uh. RGB all the way.
It’s so much better using light as an alarm than an actual alarm clock. With an alarm clock the waking up is like when you wake up to a predator. Alarming. Stressful. Waking up from light is like waking up to the sun rising. Peaceful. Calm. You don’t even notice it.
There are plenty of ways to control your RGB lights without an LLM made by an evil company run by privacy-hating, election-manipulating warmongers. Just sayin'.
Gemini is literally broken as an assistant: Can’t navigate, can’t set reminders, can’t use the calendar correctly, and at times even setting a fucking timer can be a challenge. Ridiculous!
Yeah and half of those things it keeps asking to go and use Google workspaces or some other “give more permissions pls” shit.
“Set calendar for-”
"yOu ShOuLd UsE tHe CaLeNdAr!"
I'm not sure anyone I know uses them like that. My wife and I both use them somewhat, but mostly as an abstraction of search, although I'm often sorta gauging improvement and such. I'm not big into generation, but she finds it handy, especially as an artist.