It is fitting that Brockman dedicates this collection of essays about AI to "...Einstein, Gertrude Stein, Wittgenstein and Frankenstein." Brockman explains why in his introduction, which I won't go into here, but that last name - Frankenstein - is particularly apt: it captures how I see humanity's obsession with creating artificial intelligence, an obsession that says a lot about human hubris. What people currently refer to as AI is really nothing of the sort; it is merely powerful computers using algorithms to crunch data and suggest various outcomes or possibilities. I read Possible Minds hoping it would give me greater insight into where we are right now and where we are possibly heading in terms of 'true AI', but ultimately I came away rather disappointed.
An overarching aim of the collection is to comment on and expand upon a foundational text, The Human Use of Human Beings (1950) by Norbert Wiener, who was apparently the father of cybernetics. I'd never heard of Wiener, so that in itself was interesting; however, throughout the essays the perceived necessity of referencing Wiener sometimes seemed to hold the essayists back - or perhaps I just grew sick of hearing about him. The essays themselves fall into three broad camps: those wary (although not too wary, on the whole) of the risks posed by AI; those not really worried at all, so long as humanity imposes 'control' on AI (they point out that, as its creators, we can readily impose adequate controls); and those that either focussed tightly on the author's particular field of expertise or were flights of fancy that came across as slightly indulgent. I did learn quite a bit from these essays, but some were just downright boring! Curiously, no essayist mentioned the potential of quantum computing as a means of producing true AI capability. I'm no scientist, but when I read about the recent breakthroughs in quantum computing that was the first thing I thought of; surely it has a significant role to play in any attempt to develop true AI?
[Image: Westworld's conception of AI]
A curious consequence of reading Possible Minds is that I've come away thinking that true AI will never happen. Of course I may well be wrong, but nothing I read here convinced me otherwise. And, as I mentioned above - the sheer arrogance! The brains of living creatures on Earth have been honed over millions of years, and we think we can create AI within a century? The human brain is considered the most complex object in the known universe, in terms of its neural pathways, and we still have little idea how the brain - and in particular consciousness and sentience - actually works. Some theorists have even speculated that there is a quantum element to consciousness; it could indeed be that deeply complex. Personally I think it more likely that the future of humanity lies in the convergence of biology and technology - that we'll eventually become cyborgs (or rather, more cyborg in nature, as technically even someone wearing glasses is a cyborg). Perhaps I've been influenced too much by the likes of Altered Carbon (TV series: 2018-20). Perhaps in the near or far future we'll be uploading our consciousness, or fragments of it, into a quantum-computing-generated cyberspace. What is certain is that computers will become far more powerful - but true AI? Let's wait and see...