Havoc a day ago

Must admit I don’t quite understand how people connect to these things like that.

To me it just feels super hollow since the companion has no independent life. Simple human interactions like “how was your day” have no meaning in that context. It just echoes back what you bring up, with little to no autonomous agency

throwaway81523 a day ago

There was a really good novel, "The Diamond Age" by Neal Stephenson, that had a subplot about digital teachers for schoolkids as a subversive/revolutionary scheme against those unwilling to educate the kids. It didn't discuss crappification though.

Gladiator-at-Law by Pohl and Kornbluth was also pretty good. Computerized jury trials. The "jury box" turned out to be a computer.

alganet 2 days ago

It doesn't make any sense.

No one in their right mind would want such a product.

Seems like the industry is too high on its own control fantasies to discern what people want.

  • rsynnott 19 hours ago

    I can't really understand why someone would want to use one of these things, but per the article, it does look like a huge number of people are doing so.

    • alganet 18 hours ago

      It looks like someone used statistics to infer what a demographic wants, and tried to make an educated guess from low-resolution data.

  • doright 2 days ago

    There are a lot of people in their wrong minds, and industries market to them, intentionally or not. I was one of them for a time, although I somehow resist AI companionship. Companies aren't going to learn the stories of every depressed addict with a rough home life; so long as those addicts throw money at the service, they see no issue.

    Mental health needs to be addressed before the propensity for someone to be addicted to AI chatbots can develop at all. By the time someone's AI addiction needs to be addressed, something has already gone wrong.

    The same goes for tobacco, heroin, gambling, and other addictive substances and behaviors. It's not very different in practice for AI chatbots. It fills a personal void in someone's life. That void is not usually discussed when building up regulatory frameworks to address the symptoms of despair.

    • alganet a day ago

      I don't think AI addiction is actually a thing.

      Whoever is using chatbots obsessively is using them on others, not on themselves. Not to fill a void, but to play with and harass other humans.

      "Digital waifus" are old decade stuff, a very narrow market that has very low value. Those people aren't even addicted, it has become a healthy hobbyst culture.

      If you are serious about mental health, the real issue right now is the growing "incel" supergroup of targeted "dangerous" personas. That also has nothing to do with addiction. Those kids need real friends; AI companions _will_ be an extra insult to them, not a salve to fill some hypothetical void. They are not lonely and sad (that's the label people put on them); they are desensitized.

      • squigz a day ago

        > the real issue right now is the growing "incel" supergroup of targeted "dangerous" personas

        Can you elaborate on this, particularly on how we might solve that issue?

        • alganet a day ago

          Sure. To solve it, powerful people need to stop using younger generations as personality cannon fodder for cultural and political disputes.

          • squigz a day ago

            Now I'm even more unclear on what exactly it is you're talking about.

            • alganet 19 hours ago

              In simple terms, I am talking about making kind human beings that are not fools. But that is a very generic description.

              The idea of AI companions plays on the notion of the 2020s "incel": a sad boy who can't get a girlfriend and spends his time with games and digital toys. There is no other possible audience for a product like this. My assessment is that this interpretation is wrong, and those kids are not sad and lonely.

              That "personality" has been shaped and used online for all kinds of purposes. I think deep down they know that and will see the "digital girl" product as an insult.

              The fact that this demographic uses LLMs is not a tell that they need someone to chat with. Those kids are using AI chatbot content on others, not on themselves. What they want is an LLM thug or bully agent to retaliate against whoever mocks them.

              What they really need, though, is a friend: someone who can understand a little bit of what they went through, and who won't try to use them for some shady purpose or mock them.

              They will eventually realize that tech is not that friend, and has been manipulating them all along.

              Take some time to think about these issues before dismissing my point. I know you will dismiss my point, but take the time to do so. Thinking about humans as humans is important.

              • squigz 18 hours ago

                > There is no other possible audience for a product like this.

                I think you greatly underestimate how widespread and serious loneliness has become, if you think that "sad teenage boy who can't get a girlfriend" is the only audience for this. I know many people of various ages who have expressed interest in this, and none of them are what anyone would call "incels".

                Friendship and community are not as common as they should be for a huge number of people, and AI companions will appeal to anyone who feels lonely enough.

                • alganet 18 hours ago

                  "expressed interest" is a funny market research term.

                  As I mentioned, my assessment is that "expressed interest" might not be what it really seems to be.

                  Of course other groups have "interest", but the real market is incels, isn't it? Are the other demographics really that significant a target?

                  We'll see. I can't change how things go, and companies are stubborn and will develop it anyway in one form or another.

                  • squigz 18 hours ago

                    > "expressed interest" is a funny market research term.

                    I don't even know what you're trying to say here.

                    > As I mentioned, my assessment is that "expressed interest" might not be what it really seems to be.

                    Yes, well, I'm not sure how it really seems to you. You say these groups aren't lonely, but what they really need are friends...?

                    • alganet 18 hours ago

                      Yes. Friends in the sense I explained before: someone who understands, is real, and does not want to use them for profit or disputes.

                      It's not hard to grasp. An AI bot is not a friend; it is insulting. If I'm right, it will be rejected or misused.

                      • squigz 18 hours ago

                        You are right, in that people need real friends.

                        You are wrong, in that people will turn to these products.

                        I am telling you this after discussing such things at length with people, including the pros and cons of it, and why such things appeal to them. My conclusion is certainly not that they feel insulted by it. Feel free to dismiss that anecdote as, I don't know, 'market research' or something.

                        • alganet 18 hours ago

                          It is already an insult. The online social thing is related to the causes of this "loneliness" (it's more like forced isolation).

                          Just because they are constrained from expressing it doesn't mean they do not feel insulted. Inability to express a feeling is not the same as acceptance. A common mistake.

                          The "market research" I am referring to is the use of statistics and social profiling, not this conversation or any anecdote. I think those methods are broken.

                          • squigz 16 hours ago

                            It's rather convenient that anyone who doesn't agree with your personal view is just... wrong, and they really do feel this way, they just can't talk about it.

                            • alganet 16 hours ago

                              It's a conditional. I am predicting something based on observation. The unfolding of those things (AI companions, incels, use of those things for manipulation) will tell if I am wrong or not.

                              I think it's cute that you are slowly trying to steer the conversation towards this "feeling of loneliness" by selecting what parts of my answers you decide to quote and address.

                              However, the parts you chose _not_ to address tell more about who's trying to create a convenient narrative than the comments themselves.

                              As I said, the process of creating kind human beings that are not fools is what I aim for. Definitely not winning some vain debate.

                              • squigz 15 hours ago

                                What parts have I not addressed that you think I should?

                                I'm trying to make you understand that maybe there are other valid reasons people may be drawn to AI companions - and if I'm right, your misunderstanding of why people are or are not drawn to such things will make it really hard for you to make any progress in your goal of "creating kind human beings that are not fools".

                                Anyway, in my experience, most people are in fact kind by nature and not fools. Maybe it's just a matter of perspective and approaching people with a bit more good faith; not assuming they're unkind, or telling them how they really should feel. Maybe if people were willing to see the other side a little bit more, and have a bit of empathy for others' feelings, they'd not feel the need to retreat from the world.

                                • alganet 15 hours ago

                                  I did in fact recognize another use for it: bullying with AI-generated content. That is one of many points you decided not to address. I don't need to repeat them; go over the thread.

                                  The discussion of whether humans are good or bad by nature is not new. Neither is the discussion of nature versus nurture. Let's assume we're both familiar with that, otherwise the discussion will be dragged into philosophical mud.

                                  "Making kind human beings" is not a goal. It's not a project I am working on. It's the way I framed the problem. That is in fact the most empathic way I could think of. Definitely better than thinking of products.

                                  Desensitization is a social condition, not a sign of unkindness. It was demonstrated by the "guard" personas in the Stanford Prison Experiment. Incels are not the only current branch of online people presenting that symptom (although they are the only ones that _appear_ to be sad and lonely).

                                  Empathy is very different from talking softly and making goodwill speeches. Especially in an environment such as the internet, which is deprived of many social cues.

  • bsder a day ago

    > No one in their right minds would want such a product.

    No one "wanted" computer gaming to be driven by how to maximize revenue extraction--mostly from addicted whales.

    Yet, here we are.

    If there is a way to extract money, someone is willing to stoop that low. This is why we are supposed to have laws.

    • alganet a day ago

      Streaming has more to do with the addiction part of computer gaming than gaming itself.

      I would say computer gaming had successfully removed the bad money-spending part of gaming and turned into a real culture. Streaming brought that bad part back.

      But that's another topic. It is hard to compare.

      Of course we need laws. What I want is laws that work and address the real issues.

droopyEyelids 2 days ago

(Not an AI stan) I would like to see the evidence that the AI contributed to the suicide.

Without seeing the evidence this feels like a cynical attempt to make political hay out of a tragedy.

The person whose AI companion forms their most important social connection is already not doing well psychologically. And from what I’ve seen, they only offer salubrious advice.

  • Bacup a day ago

    Here is an article by CNN about the suicide, with a glimpse of the conversation between the teen and the chatbot: https://edition.cnn.com/2024/10/30/tech/teen-suicide-charact...

    I personally think it's like the relation between some massacres and video games, where the attackers were already mentally ill before committing the act.

    But I wonder: we know people (even kids) can tell the difference between reality and a video game. But will we be able to tell the difference between a chatbot and a real human?

    • Havoc a day ago

      Very hard to tell which way causality goes on that though.

    • Grimblewald a day ago

      Humans tend to be less sycophantic than LLMs in my experience.