Artificial Intelligence Discussion

By "real" in this context I mean sentient. Something that is aware of its own existence and can respond to my input based on the fact that it is somehow (emotionally, financially, etc.) invested in the outcome of the transaction or the exchange.
This is a good definition. One thing required for "sentience" as we usually mean the word is some appearance of self-awareness, and probably also self-motivation, which existing programs like chatbots lack; they don't appear to "perceive" anything other than their input, nor do they take any action until they're invoked.

A step closer to sentience would be a program that appeared to be self-directed: one that continuously processed input from various sources and took action as a result, without a human operator obviously invoking its behaviour.
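To make the contrast concrete, here is a minimal sketch of that kind of loop in Python. It is purely illustrative; `poll_sources` and `act_on` are hypothetical stand-ins, not a claim about how any real system works. The point is only the shape: a chatbot runs when called, while this program decides for itself when to act by continuously watching its inputs.

```python
import random
import time

def poll_sources():
    """Hypothetical stand-in for checking clocks, feeds, sensors, etc."""
    return [event for event in ("timer_tick", "new_message", "sensor_change")
            if random.random() < 0.3]  # pretend some sources have news

def act_on(event):
    """Hypothetical stand-in for whatever action the agent takes."""
    print(f"acting on: {event}")

def agent_loop():
    """Run continuously; no human operator invokes each behaviour."""
    while True:
        for event in poll_sources():
            act_on(event)
        time.sleep(1)  # idle briefly, then look at the world again

if __name__ == "__main__":
    try:
        agent_loop()
    except KeyboardInterrupt:
        pass  # Ctrl-C stops the demo
```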

The philosophical question I keep coming back to: at what point does an artificial agent have enough of the appearance of sentience that we think a given ethics should apply to it?

If in a relationship with AI, do you REALLY get your needs met, or will there always be something inside you that knows it's AI?
This question presumes that "'really' getting your needs met" and "knowing the thing meeting your needs is non-sentient" are mutually exclusive. That might be true of some relational needs, but is it obviously true of all of them? I'm not sure it is...
 
The philosophical question I keep coming back to: at what point does an artificial agent have enough of the appearance of sentience that we think a given ethics should apply to it?
Yeah, that was what the Exocomps episode was trying to figure out. They settled on the awareness of self-preservation, the ability to waive self-preservation to preserve the survival of one's cohorts, and the ability to distinguish between real and fake danger. I think there might have been one or two other things; I'll have to watch that episode again. That was one of the better ones.
 