Albert Ross
Well-known member
By "real" in this context I mean sentient. Something that is aware of its own existence and can respond to my input based on the fact that it is somehow (emotionally, financially, etc.) invested in the outcome of the transaction or the exchange.

This is a good definition. One thing required for "sentience" as we usually mean the word is some appearance of self-awareness, and probably also self-motivation, which existing programs like chatbots lack; they don't appear to "perceive" anything other than their input, nor do they take any action until they're invoked.
A step closer to sentience would be a program that appeared self-directed: one that continuously processed input from various sources and took action as a result, without a human operator obviously invoking its behaviour.
The philosophical question I keep coming back to: at what point does an artificial agent have enough of the appearance of sentience that we think a given ethics should apply to it?
If in relationship with AI, DO YOU REALLY get your needs met, or will there always be something inside you that knows it's AI?

This question presumes that "'really' getting your needs met" and "knowing the thing meeting your needs is non-sentient" are mutually exclusive. That might be true of some relational needs, but is it obviously true of all of them? I'm not sure it is...