Tag: humanity

  • Artificial Morality

It wasn’t my intention, but the title did make me think of the morality we impose on ourselves, and perhaps that has some implication for the subject of this post too. The post is about this – we seem to have moved from debating artificial intelligence to the arguably more complex area of morality in robots! When I first read about robots and ethical choices (did they mean moral?), my reaction was this


It’s probably a good time to discuss this, since a robot has recently become a Board member in a VC firm as well. Ah well, in the Foundation series, R. Daneel Olivaw pretty much influenced the mental state of others and controlled the universe. That seems to be one direction in which we are headed. The Verge article mentions funding for an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot, and they plan to start by studying moral development in infants.

Thanks to this article, I learned that there are different kinds of morality – operational morality, functional morality, and full moral agency. This is all fascinating stuff, and my mind was racing in multiple directions. For one, did morality develop because living in groups was more advantageous from a survival perspective, and because living in groups required some rules to govern that coexistence? Did those ethics then evolve into an acceptable moral framework? These may or may not be in line with our individual instincts. Does that explain why each of us has a different moral code? If that is so, can we ever develop a uniform code for robots? It should be noted that ethics are a tad more objective than morals, so they might be relatively easier to ‘code’.

I also began to wonder whether the augmented human would serve as the bridge between humans and AI and, as he develops, find ways to transfer moral intelligence to AI. Or maybe it would just be logic. Alternately, if, as per this awesome post on what increasing AI in our midst would mean, we do start focusing on human endeavours beyond the functional (and those driven by money alone), maybe our moral quotient will also evolve and become a homogeneous concept.

In Michener’s Hawaii, a man of science discusses dinosaurs with a man of spirituality. I shared this on Instagram, wondering if humanity will be talked about in this manner.

    Hawaii

The changes could be the ones we’re causing nature to make, and ‘huge’ could be our gluttonous consumption of resources. In the context of robotics and morality, I immediately thought of Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” What would happen when one set of humans begins to do something that might harm humanity? What would a robot do?

The answers are evolving. It’s a good time to be human, and to be able to experience wonder.

    until next time, moral science

P.S. On a related note – Bicentennial Man – RIP Robin Williams :'(

  • Once upon a place…

Travel used to be something I looked forward to – I can still remember the train journeys – from Cochin to Bombay, Chennai to Kolkata and shorter ones; from packed home-cooked food and getting Amar Chitra Katha bought for me at railway bookstores to bringing books I couldn’t find in railway stores, getting down at stations, and sampling local specialty food; the first rides in the Rajdhani and Shatabdi in ’93; from traveling in a group to traveling alone; and from listening to a Walkman to listening on a mobile phone – the stories are endless.

Travel then became an escape from mundane existence, with known favourite destinations that would guarantee rejuvenation, if only for a few days. Then travel became something I completely avoided, until slowly I began to unravel that mystery in my head, and here.

These days I look forward to my vacations, planning months ahead and carefully choosing destinations. Meticulous planning and research that even D has now gained a knack for. 🙂 The idea of a mass of humanity that vastly differs from me in many ways, yet is connected to me by that sometimes intangible human chord. The sense of possibilities, the immense perspectives that one gathers just by observing a different way of life, and the comforting knowledge that I am not alone in matters of the human condition.

    until next time, we’re busy getting Balistic next week 🙂

  • A People Person?

Scott Adams’ post titled “People who don’t need people” (via Surekha) reminded me of Asimov’s Spacers, the first humans to emigrate to space, and their life on Aurora, the first of the worlds they settled. Scott Adams predicts that “we will transfer our emotional connections from humans to technology, with or without actual robots. It might take a generation or two, but it’s coming. And it probably isn’t as bad as it sounds.”

In the huge canvas that Asimov had created, the Spacers chose low population sizes and longer lifespans (up to 400 years) as a means to a higher quality of living, and were served by a large number of robots. As per wiki, “Aurora at its height had a population of 200 million humans and 10 billion robots.”

These days, as I experience the vagaries of cliques and weak ties – not just Malcolm Gladwell’s much-flogged social media version, but even real-life ones – I can’t help but agree with Scott Adams that it won’t be as bad as it sounds. I probably wouldn’t mind it at all.

    When I feel like a freak
    When I’m on the other end of someone’s mean streak
    People make fun I’ve got to lose myself
    Take my thin skin and move it somewhere else

    I’m setting myself up for the future
    Looking for the chance that something good might lie ahead
    I’m just looking for the possibilities
    In my mind I’ve got this skin I can shed

    Scott Adams began his post noting that humans are overrated. Sometimes, I wonder whether humanity is, and whether losing our current perceptions of it would actually make a difference. (earlier post on the subject)

    Lyrics: Invisible, Bruce Hornsby

until next time, bot.any

  • Insignificance

I remember writing this post about 4 years back, with an insight into why I didn’t particularly like to travel. Things have changed since then, and I do travel as much as possible these days. The odd discomfort of viewing masses of humanity still persists, but the reasons are more nuanced.

What reminded me of that post was this article, which beautifully expressed the discomfort with the title “The Sad, Beautiful Fact that We’re All Going to Miss Almost Everything”. The article uses this in the context of books, films, music, television and art, but I relate it more to places and people. I still remember that the saddest part of leaving Leh was that it was perhaps my only visit to the place and I had not seen everything there was to see. In the case of people, the rise of the statusphere (Facebook and Twitter) has only added to the feeling that one is constantly missing something significant.

It is probably going to get worse, unless of course we manage to do the Matrix-USB kind of instant information absorption. Even then, it would probably go the way things are headed these days anyway – consuming without experiencing. The real-time challenge of staying updated about people would still exist. And perhaps it will end up the way the line goes, “we will increasingly be defined by what we say no to”. But, as the author of the article linked above notes,

It’s sad, but it’s also … great, really. Imagine if you’d seen everything good, or if you knew about everything good… That would imply that all the cultural value the world has managed to produce since a glob of primordial ooze first picked up a violin is so tiny and insignificant that a single human being can gobble all of it in one lifetime. That would make us failures, I think.

If I had to adapt that to places and people, I could say that the creator might feel insignificant if we could discover all of it in a lifetime. However, the collective advance of humanity is not a complete solace when it comes to the individual’s existential angst. As one of my fave Calvin strips goes

    until next time, insignificant choices too?

  • Farm Vile?

Two movies, despite not being remotely connected in terms of geography or genre, are perhaps not a trend, but they did remind me of a conversation from more than a year back – something I blogged about too. An excellent conversation with S that started with the dystopian scenario of 1984 and human farms and moved on to time travel, all in the context of the advancement of society and the species.

The movies in question are Gamer and Peepli Live, and the one thing that links them is the value of human life. The former is set in a world of the future, in which a new technology allows the replacement of brain cells so that a third party can fully control a body, and finds application in gaming (one game in which gamers control a real person in a proxy community, a far more ‘real’ version of Second Life, and its more violent avatar, a multiplayer third-person shooter in which death row inmates fight for freedom). The latter is seemingly less complex – a farmer is ‘encouraged’ to commit suicide for the betterment of his family, or more specifically, for the money they’ll get as compensation.

And the question they make me ask: at what point in the future does mankind stop treating human life as sacrosanct? One could argue that it never has, given the amount of killing that happens regularly, but what I mean here is as a species – when someone says ‘human farms’, there won’t be gasps or expressions of horror/disgust. With population figures soaring, virtual lives competing with real ones, the rise of machines, and increasing gaps between the haves and have-nots, do you think it will happen? Just in case you think I’ve completely lost it, we’ve already started experiments with living beings – microorganisms in games.

    until next time, knotty question.