Heartificial Intelligence

John C. Havens

The title might seem like a bit of corny wordplay, but I think you’d find it hard to come up with an alternative that better describes the premise of the book. Artificial Intelligence is slowly but surely becoming an inherent part of our lives, and I’d say our situation is a bit like the ‘frog in boiling water’ scenario. That’s not to say we will be ‘cooked’, but our sensitivity to the challenge is nowhere near where it should be. Most discussions revolve around two themes – the extermination of our species by malevolent robots, and the increasing automation of jobs with its economic and societal repercussions. Both usually end in polarising stances.

One of the reasons I liked this book is that the author sits at neither extreme – doomsday or paradise; his approach is very pragmatic. The first six chapters walk the reader through the lay of the land: how our happiness is slowly being defined by tracking algorithms, and the complete lack of transparency and accountability among those who have access to this data; the economics and purpose of a human life and how they are changing; the (seeming) limits of artificial intelligence; and finally the need to have an ethics/value system in place as we design increasingly complex AI at ever greater speed.

That brings me to the other reason I liked this book. Every chapter begins with a fictional scenario that describes a quandary we could face as AI infiltrates our lives further. It not only adds a lot of nuance to the argument and illustrates it fabulously but, in the spirit of the book, also brings out the human element superbly.

The second half of the book is the author’s perspective on how we can attempt to meet these challenges. This section didn’t impress me as much as the first – not because I disagree with the author on the overall direction and philosophy, but largely because of what I’d call an oversimplification of the challenges. Take, for instance, the question of designing values/ethics for AI. It is a hugely complex challenge because we differ even in our fundamental perspectives on values, and I cannot see how we can codify something we cannot even agree on. Call me cynical, but I also feel the author overestimates the capacity of systems and mindsets to change.

The good part is that he isn’t really being prescriptive; in fact, he believes we should understand our own value systems and use them to develop our unique relationship with AI and the changes it is bound to bring.

All things considered, a very good start to understanding a world that is grappling with AI.
