Superintelligence critiques popping up

It is probably priming, but ever since I put up that other note from the net last week, with the view of transhumanism and religion, I have been seeing critiques of the idea of superintelligence and singularity concepts here and there. Or it could be that Bostrom's book has now spent enough time on the bestseller lists for the term 'superintelligence' to become part of the general set of concepts and thus generate enough talk, enough that some of it reaches even my scantily socially-netted attention.

And, yes, I agree that transhumanism and superintelligence are two distinct concepts - some would even hold them to be opposite, or antagonistic - but in my opinion they are nonetheless related, relying as they do on a similar, reasonable, computational model of the mind. In any case, below are two critiques of the superintelligence idea (that a super-human AI might supersede us) forwarded to me this week by people or algorithms. I am posting them without endorsement, as I believe the discussion is perhaps more interesting than the outcome.

First, I saw the talk embedded below, shared over a couple of channels, found it slightly amusing, and decided to watch it. It is a recording of a keynote talk by Maciej Cegłowski, and while I think it has some faults, it is also humorous at times, and Cegłowski raises some interesting points. My critique of the talk is, in brief, that it generally fails to truly engage with the arguments of the superintelligence community, instead tending at times towards strawman tactics and towards the nowadays common us-common-people-versus-those-elites rhetoric. However, as I wrote, I also found it down to earth and interesting.

I also endorse Cegłowski's first conclusion - we need more science fiction. In my opinion, telling stories, e.g. in literature, comics, film, or theatre, is a way for us to run our own simulations about the future, and at the same time embed the results into our culture. But I'd also like to point out that it is, by the same reasoning, just as important to have philosophers and others looking into the issues of superintelligence.

Cegłowski also, towards the end of the talk, compares the current state of AI with alchemy. I am not sure I fully agree, but perhaps that is because I am too fond of my own simile between the proto-scientific meddling in propaganda/public relations/meme-creation of the early 1900s and the alchemical movement of the Middle Ages, which relies on other concepts. Anyway, Cegłowski's parallels to alchemy are drawn from how some results seemed supernatural when experienced without a scientific basis; I understand him as saying that once we properly understand AI and consciousness, we may no longer be lured by the transhumanist narrative. I agree with that - or rather, that understanding the mind will settle the case either way. It is a good point.

Anyway, here's the talk; watch it if you like - it runs about 50 minutes: https://youtu.be/kErHiET5YPw

Second, Kevin Kelly posted an essay titled The AI Cargo Cult - The Myth of Superhuman AI, which I think is well written and makes some very good points. In it, Kelly lists five arguments against the concept of a superintelligent AI. At the risk of misinterpreting his ideas by using my own language, I think what Kelly is saying is that the space of 'intelligence' isn't spanned by the human mind, and therefore it is far from certain that whatever AI we might build would occupy the same local volume as us, and thus it would not necessarily supersede us. Moreover, there may be physical limits preventing an intelligence explosion (I can't avoid providing a science fiction tip here as well: Vernor Vinge's Zones of Thought concept in e.g. A Fire Upon the Deep), as well as time issues with embedding simulations of reality. I am a little hesitant about his treatment of the Church-Turing Thesis, though it is of course correct that available memory is not infinite. In general, though, I think the essay is worth a read - mostly, I guess, because I have been thinking in similar terms myself, though I have not been able to formulate it that well. Kelly does keep the door open to the possibility that a superintelligence might emerge, which I think is wise. He is merely pointing out that the compelling narrative is far from certain.