AI, consciousness and ethics - some initial thoughts

(A note after reading about Kurzweil and Google search.)

Saw that MIT Technology Review has a post up about Ray Kurzweil's ongoing AI project over at Google.

Kurzweil is an interesting persona (I know I may have been a bit harsh on him in my review of The Singularity Is Near, but it was the material that disappointed me, more than the person), and he is of course making a lot of statements, which in turn spin off into discussion simply because it is him.

Regarding the project, and whether it ever becomes more than the other interaction-type searches out there (say, IBM's Watson): I don't know. Kurzweil of course aims high, and let's not forget that while his is the name the media cites, the project probably has some serious other Google brainpower working on it. In any case, that is not what I wanted to get at, but rather the last paragraph of the blog post:

Kurzweil even gave a qualified "yes" when asked if systems built that way might ever become conscious. "Whether or not an entity has consciousness is not a scientific question, because there's no falsifiable experiment you could run," he said. "People disagree about animals, and they will disagree about AIs. My leap of faith is that if an entity seems conscious and to be having the experiences it claims, then it is conscious."

Well said, I think: agreement is truly the issue when it comes to consciousness. The same goes for intelligence itself in many ways.

That is why the Turing test, for instance, says more about us than it does about the programs we apply it to. What if we only perform an extreme form of anthropomorphizing when we identify another as conscious? "I know I am, and so must he be, because I perceive him as similar to myself, and can therefore understand his actions and motivations."

Likely the effect of such anthropomorphizing is a very sensitive process at the collective level, where other groups are accepted (stranger to stranger). Acceptance of animals as conscious is still debated, as Kurzweil points out. His argument can even apply to our own species if we look at a related concept: equal worth. Societies have a hard time even today accepting other humans who differ even a little from us culturally or physically; we still struggle with gender equality, for instance, acceptance regardless of ethnicity is sadly an issue in most societies, and so on.

What about AIs that won't even have a body outside of computer racks, even if they somehow could communicate with us? Perhaps you'll think I have gone off the rails here, talking about equality instead of consciousness/intelligence, but the point is that it was not too long ago that women in our culture were not allowed to vote or take part in running society because the majority of men … what? Did not want them to? Did not trust women's mental capacity?

Well, why, we ask ourselves today?

It was something, and it had to do with women not being men, rather than with being women. Back then it was normal; unfortunately it still is for some. Don't get me started on the excuses that were used for the slave trade… Anyway, I won't go too far into the examples, and I don't intend to equate the possible future struggle of artificial intelligences with that for equality. Not yet. My point is that treating something humanely (as we like to call it) and considering it conscious are related concepts. Moreover, it is something we do on several levels. It is about acceptance in society at large, as well as at the individual level (and the opinions there don't have to agree; just listen to the everyday closet racist).

I think it is about putting ourselves in the other person's place (something which is probably easier when there are cultural and physical similarities). We are getting better at it human to human, man to woman, but we are not yet there with the animals Kurzweil mentions. Our Western culture accepts meat consumption; however, save on the edge of starvation, probably only a minority of us could bring ourselves to kill and slaughter an animal. Yet we will swat flies and mosquitoes.

Where does it stop, and does it have to? If it doesn't, what is left? Lots of real philosophers with better minds than mine have likely pondered these questions. So I'll cowardly leave them behind for now and ask: what about AIs? If we ever build them, I agree with Kurzweil: they will have a long way to go. We write stories about robot uprisings and AI rulers, but perhaps they should be about acceptance and equal rights.

Most probably they won't even care. Intelligence, even consciousness, may be very human-centered concepts, and no test exists, as Kurzweil states. From the human perspective: their form could be so alien that we won't even think of them as existing. Animals, after all, are units like ourselves, so we can debate them.

The classical example: we see the actions and forms of ants and bees and think of them as such, but have trouble with nests and hives as units. So, does it really matter then? Can we only apply a human-centered concept to consciousness, because we are so integrated in our bodies and locked to our languages that we find it hard to accept something more alien?

I hope not. I hope there is some universal definition of consciousness, even if it is broader than what we think of today. As for intelligence, well, when speaking about Artificial Intelligences we have to be human-centered. Otherwise they would not be artificial. But that is also to say that we do not consider them intelligent if we cannot understand them.

Perhaps that is true in relation to society, but it is not a scientific statement. Hope also comes from the slow but still real progress in equality that has been happening since the Enlightenment.

Extrapolated, it may cause a broader acceptance by shifting the meaning of 'conscious' away from the human. It doesn't (as I am referring to Kurzweil) have to come from trans-humanism, but simply from coexistence, as most today accept that whales are intelligent animals and that we should coexist with them. The lack of communication, and of a common society, will always mean there is struggle as well. The whales cannot sit down at the negotiation table… perhaps neither will the AIs, though they'll be part of our society in many ways. I doubt they'll be as free as the whales, however. We have a level of control over Internet servers that we will (hopefully) never have over the oceans. Regarding experiments: perhaps none exists because we have the wrong definition of consciousness (well, I guess that is the case - no one agrees)?

I am slowly becoming convinced that we should be looking at entropy-related concepts for consciousness, rather than using our own minds as the threshold. (I am sure people have done so already.) At information, communication, language, and at processes that create structure (not necessarily physical structure, though I guess no structured information can be encoded in randomized matter?). This will possibly result in a 'broad' definition, and as such challenge our ethical concepts (as it does today: we have to destroy some structure somewhere, i.e. take lives, if we want to eat, at least with our present technologies and bodies), and we will have to deal with some of the issues of post-humanism (say what you will, but a human-centered perspective does make it easier to draw lines).

Still, think of the gains, both to our understanding of ourselves - the lump of meat imagining a conscious mind which in turn, through feedback, controls the lump of meat - and to that of other intelligences imagined by some other dynamical system, if such a definition could be found! I think it would be pretty awesome.

Of course, it is a slippery line of thought; it could lead to despair. Some would say that removing the human perspective could send ethics off to join god in the gaps and create a truly horrible world.

I like to think there might still be definable concepts of right and wrong that are not human-centered, require no deity, and yet are not fully lost to relativism. Creating new intelligences and consciousnesses could be the way to find out.

Perhaps the singularity is not one of technology, but of ethics?