Category: AI

  • Loneliness and the AI evolution

    In a post that I found extremely poignant and true, the Guardian calls it out as The Age of Loneliness. It lists the structural shifts causing this social collapse: “The war of every man against every man – competition and individualism, in other words – is the religion of our time… What counts is to win. The rest is collateral damage.” It seems we are but slaves of a ‘hedonic treadmill’, in denial.

    In earlier posts (The Art of Live In, Emotion as a Service) I’d written on how (IMO) even the micro-unit of society – the family – is ripe for disruption. At both societal and familial levels, I think the related fallout is an increasing lack of compassion and empathy, something that I notice a lot on Twitter, for example. The irony is that the more connected we are, the more disconnected we are from each other’s emotions, and from the impact our actions and inactions have. But guess who is coming to the rescue? Quite possibly, robots that care! (more…)

  • Artificial Humanity

    In Natural Law, I had touched upon the idea that we will have to make choices as a species about the role of artificial intelligence in our lives, and how (or if) compassion towards each other would play a part in these decisions. As I watch thoughts and events unfolding around me, I am beginning to think that it will most likely not be one crucial decision later in time, but a lot of smaller choices, made at individual and regional levels now, that will shape our society in terms of acceptability, morality etc. And so, just as I wrote in a post around five years ago that we might not be able to recognise the final step we take in our integration with AI, there might be an increasing inevitability about our choices as we move forward in time.

    What sparked this line of thought? On one hand, I read a New Yorker post titled “Better All the Time”, which begins with how a focus on performance came to athletics and has since spread to many other spheres of our lives. On the other, I read a very scary post in The Telegraph, “The Dark Side of Silicon Valley”, about a bus named Hotel 22 because it serves as an unofficial home for the homeless. It shows one of the first manifestations of an extreme scenario (the nation’s highest percentage of homelessness and its highest average household income are in the same area!) that could soon become common. The connection I made between these two posts is that increasingly, there will be one set of humans who have the will and the means to be economically viable, and another, much larger set that lacks one or both. This disparity is going to become even starker as we move forward in time. I think that before we reach the golden age of abundance (if we do), there will be a near and medium term of scarcity for the majority.

    (more…)

  • Natural Law

    After a couple of years with Samsung, I bought a Moto X (2nd gen) phone; the Droid Turbo and Nexus 6 were also considerations. In the first few days of use, the automation that Moto’s Assist, Actions and Voice allows has impressed upon me the potential of such technologies and the dependency we could have on them. As Karen Landis states in the Pew Internet Project’s Killer Apps in the Gigabit Age, “Implants and wearables will replace tools we carry or purchase… It will also redefine what a ‘thought’ is, as we won’t ‘think’ unassisted.”

    It reminded me of an article I’d read in Vanity Fair titled ‘The Human Factor’, and a particular observation in it: “To put it briefly, automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight—but also more and more unlikely that they will be able to cope with such a crisis if one arises.” This thought is elaborated in ‘Automation Makes Us Dumb’, which draws the distinction between two design philosophies – “technology-centred automation” and “human-centred automation”. The former is dominant now, and if one were to extrapolate this, a scary thought emerges.

    I think the best articulation of that scary thought is by George Dyson in Darwin Among the Machines – “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” I had seen this in Bill Joy’s amazing 2000 Wired article “Why the Future Doesn’t Need Us”, which itself discusses the idea that “Our most powerful 21st-century technologies – robotics, genetic engineering, and nanotech – are threatening to make humans an endangered species.” (more…)

  • Artificial Morality

    It wasn’t my intention, but the title did make me think of the morality we impose on ourselves, which perhaps has some implication for the subject of this post too. The post is about this – we seem to have moved from debating artificial intelligence to the arguably more complex area of morality in robots! When I first read about robots and ethical choices (did they mean moral?), my reaction was this


    It’s probably a good time to discuss this, since a robot has recently become a Board member in a VC firm as well. Ah, well, in the Foundation series, R. Daneel Olivaw pretty much influenced the mental state of others and controlled the universe. That seems to be one direction where we are headed. The Verge article mentions funding for an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot. They plan to start with studying moral development in infants.

    Thanks to this article, I learned that there are different kinds of morality – operational morality, functional morality, and full moral agency. This is all fascinating stuff, and my mind was racing in multiple directions. For one, did morality develop because living in groups was more advantageous from a survival perspective, and because living in groups required some rules to govern that coexistence? Did these ethics then evolve into an acceptable moral framework? These may or may not be in line with our individual instincts. Does that explain why each of us has a different moral code? If so, can we ever develop a uniform code for robots? It should be noted that ethics are a tad more objective than morals, so they might be relatively easier to ‘code’.

    I also began to wonder whether the augmented human would serve as the bridge between humans and AI, finding ways to transfer moral intelligence to AI as he develops. Or maybe it would just be logic. Alternately, as per this awesome post on what increasing AI in our midst would mean, if we do start focusing on human endeavours beyond the functional (and those driven by money alone), maybe our moral quotient will also evolve and become a homogeneous concept.

    In Michener’s Hawaii, a man of science discusses dinosaurs with a man of spirituality. I shared this on Instagram, wondering if humanity will one day be talked about in this manner.

    Hawaii

    The changes could be the ones we’re causing nature to make and ‘huge’ could be our gluttonous consumption of resources. In the context of robotics and morality, I immediately thought of Asimov’s Zeroth Law “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” What would happen when one set of humans begin to do something that might harm humanity? What would a robot do?

    The answers are evolving. It’s a good time to be human, and to be able to experience wonder.

    until next time, moral science

    P.S. On a related note – Bicentennial Man – RIP Robin Williams :'(

  • Humachines and the role reversal

    In his post ‘Virtual People’, Scott Adams writes that his generation would be the last of the ‘pure humans’, raised with no personal technology: “Someday historians will mark the smartphone era as the beginning of the Cyborg Age. From this day on, most kids in developed countries will be part human and part machine. As technology improves, we will keep adding it to our bodies.”

    Singularity has appeared on this blog in various forms, and in at least a couple of posts I have written about the augmented human – like the proverbial frog in slowly boiling water, we wouldn’t know when it happened. (check this post for a fantastic short film on the subject) In fact, medical applications of 3D printing are already accepted and on the rise. Not just ‘accessories’ like hearing aids or dental braces – we have moved on to a lower jaw, (previous link) 75% of a skull, an ear, and yes, ‘cyborg flesh’! It’s obvious that these applications are improving the lives of many. My question though remains – as we replace more and more of ourselves, possibly the brain itself within my lifetime, what happens to the essence of us that makes us human – the feelings, the emotions, the zillion unique reactions to various physical and mental stimuli?

    In this wonderful post titled “How not to be alone”, in which the author writes about how we have begun to prefer (diminished) technological substitutes to face-to-face communication (I couldn’t help but remember this), he quotes Simone Weil: “Attention is the rarest and purest form of generosity.” And from that statement I realised how the narrative might come full circle – I remembered this post I had read a few months back. It mentions bots that have passed the Turing test (“test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of an actual human”) and makes a compelling argument that while we’re singular entities with a complex design, we’re still just blueprints – with many similarities. This also entails that we’re building machines that can mimic, and evoke, our emotions. Thus, he writes, the era of artificial emotional intelligence is not far.

    Perhaps, in the future, we will outsource our humanity and reverse roles – half-machine former humans who deal with each other in mechanical ways and go back home to a humanoid bot that gives them all the empathy and emotional anchoring they need. Or would they need it at all? 🙂

    until next time, be human, comment 😀