Found this in the comments section of a story about Artificial Intelligence on Hackaday. I thought it was pretty good.

Personally I think it would be fairly simple to write a self-aware program. All you would need is a simple loop that asked 'what am I?' followed by 'what am I doing?' Take the results from those two questions, put them in the meat grinder (universal processor), and see if you should make any adjustments to either what you are or what you are doing. Of course, being able to answer those two questions might be a bit of a trick. I'm sure the whiz-kids will figure out how to do it eventually. When that happens we'll want to be ready with our moral compass, or
Asimov's 3 laws of robotics, if we are absurdly ambitious.
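To make that concrete, here's a minimal sketch of the two-question loop in C. Everything in it is invented for illustration: the "answers" are hard-coded strings and the meat grinder is a trivial comparison, because actually answering those two questions is, as the commenter admits, the whole trick.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* stubbed-out "answers" -- the hard part, waved away as strings */
    const char *what_i_am = "a simple loop";
    const char *what_i_am_doing = "asking what I am";

    for (int tick = 0; tick < 3; tick++) {   /* bounded so the demo halts */
        printf("what am I? %s\n", what_i_am);
        printf("what am I doing? %s\n", what_i_am_doing);

        /* the "meat grinder": compare the two answers and decide
           whether to adjust what we are or what we are doing */
        if (strcmp(what_i_am_doing, "asking what I am") == 0)
            what_i_am_doing = "adjusting what I am doing";
    }
    return 0;
}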
Asserting a relationship between intelligence and self-awareness seems like kind of a stretch to me, since nobody really understands self-awareness. (Or maybe I’m just missing something.)
The mystery that leads to the question of machine intelligence is the phenomenon of consciousness – the ability to distinguish between yourself and everything else, and then to create an inner dialogue to explore what you perceive. We take this so much for granted that we have no idea how we do it. When someone says, “yes, a machine can come up with the right answer, but it doesn’t understand it”, they’re saying that they can accept that something artificial can evaluate data and calculate an optimum response, but they CAN’T accept that this thing we do naturally and don’t remember ever not being able to do – “understanding”, or having that inner dialogue about the calculation and response – can be produced artificially. Why? Because it’s outside our own understanding! This is probably why most cultures invent a “soul” or “spirit”, but naming it doesn’t explain it.
Aside from the sociopaths among us, we recognize understanding and awareness in other animals, based on how they respond to things, which is often similar to how people respond to the same things. We can be surprised, and we can see something that looks very much like human surprise in other animals, so we guess that other animals can “feel” surprise. Same for many other feelings.
Some like to dismiss this with Darwinism – that we feel things because these feelings help us to prioritize, and thereby help us to survive. But that doesn’t even begin to touch HOW this works. You can code a computer program to avoid a particular state at all costs, but does this make the computer feel pain when conditions make that state appear imminent? Quite possibly – I can’t prove otherwise.

I’m thinking right now about watchdog timers. We can program microcontrollers with a sort of self-awareness, in that they can recognize when they AREN’T THINKING, and take the extremely drastic action of resetting themselves. Does this feel to them like a defibrillator going off? Does it scare the bejeezus out of them? Well, maybe it shouldn’t, since the microcontroller generally doesn’t have the ability/intelligence to change its behavior to avoid getting into that state again. But what does a multi-tasking OS feel when it starts to run out of memory? That’s gotta hurt. The interrupts keep coming in, but you can’t keep up with them, and that just makes the situation worse and Worse and WORSE AND AAAAAA!!!!!
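Speaking of watchdogs, here's roughly what that looks like in practice. This is a sketch assuming an AVR target and avr-libc's <avr/wdt.h>; do_work() is a made-up stand-in for whatever the firmware actually does. If the main loop ever stops "thinking" long enough that wdt_reset() isn't called before the timeout, the hardware resets the chip, defibrillator-style.

#include <avr/wdt.h>

static void do_work(void) {
    /* hypothetical main-loop task: whatever this firmware is for */
}

int main(void) {
    wdt_enable(WDTO_2S);   /* hardware reset if ~2 s pass without a check-in */
    for (;;) {
        do_work();         /* the "thinking" */
        wdt_reset();       /* "still thinking": pet the watchdog */
    }
}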
But I’m getting off the subject. I think your claim that self-awareness follows intelligence is probably correct, and that people would be a lot more easily convinced that a machine that’s self-aware can be intelligent than that a machine that exhibits intelligence can be self-aware. So the question of intelligence is probably the wrong question to ask. Or maybe questioning this about machines at all is too long a leap. Maybe we should ask if plants, or fungi, or viruses can be self-aware. - BrightBlueJim
1 comment:
It could be argued that plants, e.g. Mimosa, ARE self-aware; they roll up in self-defense when touched ;-)