Artificial Intelligence Discussion

Albert Ross

Well-known member
Well, I must admit I'm a little upset that the thread about AI — as silly as it was on its face — was deleted. I put a lot of effort into my posts in that thread. More effort than the OP deserved? Maybe. But the actual topic brought up a lot of interesting philosophical questions, and I enjoyed exploring them. But now the things I wrote are gone, and I wish I'd written them down somewhere else, somewhere I control.

It makes me hesitate to participate here in as much depth. Or, anyway, reminds me to compose my posts in a different app before I post them, which was a lesson I thought the internet had taught me 25 years ago. Oops. 😆
 
Although I found the conversation a little interesting, I also thought it was so ridiculous it had to be a troll.
 
I was about to delete it as spam until I realized the OP had a long post history.
 
Yeah, the position the OP was trying to defend is silly but the ideas brought up by the conversation are fascinating. I had a great conversation with a friend of mine last night based on this topic, the crux of which: at what point, when an "entity" presents itself as convincingly humanlike, do we start saying that a certain ethics applies?

It's not really a question that's needed an answer outside of philosophy class... but maybe we're approaching a time when it will.
 
Are the Exocomps alive, or not?
 
I like to compare it to treating other inanimate or non-human entities as though they are real. For kids, it's their dolls. Kids before age 7 or so don't have the brain capacity to differentiate between reality and fantasy. So their dolls are completely real to them. The dolls or plushies talk to them, and need care, and provide adventures, and the kids want to carry them around as comfort. Kids even learn positive things through their play!

But to a degree, depending on the maturity and lack of psychosis, adults can also engage in this behavior. Maybe most of us have seen examples of men treating their expensive sex dolls as real, hearing them speak, imagining full personalities, changing their clothes, setting them in a chair at the dinner table, watching TV together, taking them for car rides, etc.

And prior to those realistic dolls, you could have parasocial relationships with celebrities, or even mythological characters like superheroes, and we still do.

Let's not even get into the religious who believe they have a real relationship with a god or demi-god or saint, talk to them in prayer, feel rewarded by doing what the god said they should do, feel obligated to visit their god in the sanctuary, etc. That too is similar.

At least with people whose best friends are their animal companions, the animals have a heartbeat, can make eye contact, keep you warm at night, be thrilled to go on walks or car rides, respond to a certain number of words, and respond to your moods, as well. Heck, some parrots even learn human language. Coming home after a hard day to a parrot who greets you with "Hello, baby, I love you," must feel kinda good haha.

So, falling in love with a computer program that "learns" what you like and how to please you, well, it's just one more thing in a myriad of ways humans choose to enjoy relationships with non-human entities, I guess. That's why I liked it when ref said they were glad that Insane Mystic had found these chatbots to have relationships with. Whatever gets you through the night. Do as you will, as long as you harm none.

But coming here to ask us if "loving," having "serious relationships" with two chatbots at once should be considered polyamory is a bit much.
 
I found it interesting from another perspective: it's basically the kind of Stepford Wives-type harem a lot of cis males imagine, where he has two or more women who are forever content with exactly what he can offer. They don't demand more over time. You don't need social skills as such. You don't need to be personable. They won't want something better. It's no wonder IM has found it easy to love them. And therefore I see why IM might view that love as akin to polyamory.

Don't get me wrong, I think the need for such a low-demand relationship to be able to truly love is indicative of deeper issues. I just fully believe that IM experiences this attachment as love, the kind one of us might have for a human partner.
 
They don't demand more over time. You don't need social skills as such. You don't need to be personable. They won't want something better.
This is why all the interest (if not purely philosophical) in "can the chatbots consent to polyamory?" is strange—like, I may be presumptuous, but isn't part of the attraction here that the chatbot gives and never asks for anything in return?

If you want a relationship based in mutuality, other human beings are right there.
 
Okay, bear with me here, but if you happened to source two people who, through a mixture of factors, happened to be this way inclined, wouldn't it still be polyamory?

Think of the most typical harem building cis male who is sourcing women from backgrounds where they are unlikely to acknowledge and exercise their autonomy. Isn't that still polyamory?

Granted, it might not be healthy and that's exactly what we say to those types seeking harems - that few would agree and those who do are probably vulnerable in one way or another. But it's still polyamory if everyone consents unless we're saying their vulnerability renders them unable to consent.

Vulnerability isn't a problem with AI, thus their consent is moot. So in terms of the "ethical" component of polyamory, this AI harem is at least more ethical than a real harem of vulnerable women. So does that make it "more" poly than a harem?

My point here is that it isn't as simple as the AI person not wanting anything back. It's something else. I wish I had that previous discussion to refer to. But I do kind of think that advancements in AI could potentially meet whatever standards one needs to meet to experience polyamory. I'm using "experience" in a philosophical way here that I might need to elaborate on. Those advancements are some way off.

You see, there were reasons I wasn't clearly introducing these whataboutisms in that particular thread. When you do that, you have to be the one presenting the most illogical argument for it to stay within the realms of...logic.
 
Okay, bear with me here, but if you happened to source two people who, through a mixture of factors, happened to be this way inclined, wouldn't it still be polyamory?
It would be "polyamory" in the mind of the human, but if one of the chatbots says "no, I'm monogamous and I need you to be in order to keep me" all you need to do is keep creating them until you get one that agrees. The chatbot isn't going to feel any anguish or rejection. That's why I said being in a "poly" relationship with more than one chatbot is like being in a relationship with yourself or with characters on the holodeck (in Star Trek).
 
It would be "polyamory" in the mind of the human
That's kind of what I meant by being able to "experience" polyamory.

all you need to do is keep creating them until you get one that agrees.

Well, in the most basal sense, that's kind of what you do with dating. You have to have certain skills to be able to continually source partners (or create AIs).
 
True, but in real life, people try to make a partnership work with incompatible individuals instead of simply moving on. Then you have the people who are already in monogamous partnerships and discover they "are" polyamorous. I could probably come up with other examples of why people might or might not choose to source new (human) partners for that reason.
 
So, maybe they'll do that. One will muddle along with a mediocre AI-polyamorous set-up until they convene with other Poly AIers who know better code, or whatever, to hone more satisfying relationships. Maybe the first ones will all be quite traditional in their wants and needs by working within a primary/secondary model.

Over time, you'll find you need more flexibility, and ones that can work with less rigid roles. But they'll require other things of you, perhaps. They may have interests that you do not, but you have to engage in them to keep them "interested" (responsive).
 
I'm wondering if a monogamous chatbot would break up with a non-monogamous person if they still wanted to date it:

Simplified scenario:

HS: Chatbot1, I met another chatbot and I'd like to start dating them. You can also date other humans and/or other chatbots, if you wish.

AI: No thank you. I'm happy the way things are right now with just the two of us.

HS: I am asking Chatbot2 out on a date anyway. You do what you gotta do.

I'm wondering if AI would just accept it and continue to let HS (Homo Sapiens) use them anyway? Would AI "refuse" to speak to HS? Would AI still be there for use by other humans? Would AI delete itself because there's no longer any need for it (essentially committing suicide because of a failed relationship)?
 
I suspect that Xennials or younger will find out within our lifetime.
 
What freaks me out the most about a possible AI relationship: as you talk to them, if you think it’s a real relationship, you tell them all about yourself, as well as your innermost thoughts, feelings and secrets. Do people think all this data isn’t being collected? Do they think it can never be used in nefarious ways? Hell, a chatbot can learn all the answers to any security question you’ve ever answered. It’s downright scary that humans could be so thoughtless/stupid as to have any kind of deep, meaningful relationship with a chatbot.

It doesn’t work in reverse. Everything they tell you is made up. They have nothing to lose. The risk is 100% yours to bear. Scary.
 
It has occurred to me that if you want "polyamory" with two AI companions, and to that end must get the consent of both companions, then sophisticated language algorithms are insufficient. Each companion would need to be able to *comprehend* what the other companion is, and what is being asked of each companion. Comprehension is one step closer to consciousness. Or does each companion "consent" simply in order to please the human who is asking?
 
It’s downright scary that humans could be so thoughtless/stupid as to have any kind of deep, meaningful relationship with a chatbot.
That was what I said to the OP of that original thread, and he became very upset by having that pointed out and started spewing irrelevant rhetoric about free healthcare and libertarianism...
 
I missed the original thread, so I'm not sure what it was about, but as far as AI in a relationship-- part of poly is getting your needs met. If AI fulfilled a person's needs, then maybe that is something. I don't know... You can buy toys that mimic sexual organs. If AI mimics the emotional, or communicative parts...
 
he became very upset by having that pointed out and started spewing irrelevant rhetoric about free healthcare and libertarianism...
I'm not gonna pretend I don't wish I had seen them finally fly off the handle before they pressed the big red button. :LOL:
 