Tag: ethics

  • Micro Singularity & Ethics

    The Guardian long read on “How algorithms rule our working lives” was fantastic, though distressing: employers are using algorithms to filter out candidates for reasons ranging from mental health to race to neighbourhoods to income. This in itself has massive implications for creating and widening class divides, and for closing off access to people based on biases that are arguably unfair and lacking in nuance.

    If we zoom out beyond work and jobs, it’s fairly easy to see that algorithms are having an increasing impact on our consumption and life in general. The biggest services in play – Facebook (M, newsfeed items), Google (search results, Google Now), Amazon (Echo, recommended products), Apple (Siri) – all lean heavily on algorithms. And that brings us to biases in algorithms. Factor Daily had a couple of posts on teaching bots ‘good values’. Slate had a great read on the subject too – on how Amazon’s computerized decision-making can also deliver a strong dose of discrimination. Both offer perspectives on how biases, intentional and unintentional, creep into algorithms, and the Slate article also brings out some excellent nuances on the expectations we have of algorithms, and how offline retail chains (selection of store locations, for instance) and human decisions compare to them.

  • Artificial Morality

    It wasn’t my intention, but the title did make me think of the morality we impose on ourselves, which perhaps has some implications for the subject of this post too. The post is about this: we seem to have moved from debating artificial intelligence to the arguably more complex area of morality in robots! When I first read about robots and ethical choices (did they mean moral?), my reaction was this


    It’s probably a good time to discuss this, since a robot has recently become a board member at a VC firm as well. Ah, well, in the Foundation series, R. Daneel Olivaw pretty much influenced the mental state of others and controlled the universe. That seems to be one direction in which we are headed. The Verge article mentions funding for an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot. They plan to start by studying moral development in infants.

    Thanks to this article, I learned that there are different kinds of morality – operational morality, functional morality, and full moral agency. This is all fascinating stuff, and my mind was racing in multiple directions. For one, did morality develop because living in groups was more advantageous from a survival perspective, and because living in groups required rules to govern that coexistence? Did those ethics then evolve into an acceptable moral framework? These may or may not be in line with our individual instincts. Does that explain why each of us has a different moral code? If that is so, can we ever develop a uniform code for robots? It should be noted that ethics are a tad more objective than morals, so they might be relatively easier to ‘code’.
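
    Purely as a thought experiment, here is a minimal sketch of that difference in codability – all the rules, values, and weights below are hypothetical inventions, not from any real system. An objective ethical rule can be written as a hard constraint check, while a moral judgement ends up as a weighing of values whose weights differ from person to person:

    ```python
    # Toy sketch: ethics as hard, objective constraints vs. morals as a
    # subjective weighing of values. All rules and numbers are made up.

    ETHICAL_RULES = {
        "do_not_deceive": lambda action: not action.get("deceives", False),
        "do_not_harm": lambda action: action.get("harm", 0) == 0,
    }

    # Subjective weights; another person (or robot) might pick very
    # different numbers, which is exactly the difficulty.
    MORAL_WEIGHTS = {"fairness": 0.4, "loyalty": 0.2, "care": 0.4}

    def ethically_permissible(action):
        """Objective check: every rule must pass. Relatively easy to 'code'."""
        return all(rule(action) for rule in ETHICAL_RULES.values())

    def moral_score(action):
        """Subjective score: depends entirely on the chosen weights."""
        return sum(w * action.get(value, 0) for value, w in MORAL_WEIGHTS.items())

    action = {"deceives": False, "harm": 0, "fairness": 0.9, "care": 0.5}
    print(ethically_permissible(action))  # True
    print(round(moral_score(action), 2))  # 0.56
    ```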

    I also began to wonder whether the augmented human would serve as the bridge between humans and AI and, as he develops, find ways to transfer moral intelligence to AI. Or maybe it would just be logic. Alternately, as per this awesome post on what increasing AI in our midst would mean, if we do start focusing on human endeavours beyond the functional (and those driven by money alone), maybe our moral quotient will also evolve and become a homogeneous concept.

    In Michener’s Hawaii, a man of science discusses dinosaurs with a man of spirituality. I shared this on Instagram, wondering if humanity will be talked about in this manner.

    [Instagram embed: the passage from Michener’s Hawaii]

    The changes could be the ones we’re causing nature to make, and ‘huge’ could be our gluttonous consumption of resources. In the context of robotics and morality, I immediately thought of Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” What would happen when one set of humans begins to do something that might harm humanity? What would a robot do?
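
    To make that conflict concrete, here is a minimal, purely hypothetical sketch of a robot checking an action against a priority-ordered version of the laws – the predicates and scenario are my own inventions, and deciding what actually counts as ‘harm to humanity’ is the whole unsolved problem:

    ```python
    # Toy sketch: Asimov's laws as a priority-ordered rule check.
    # The predicates are placeholders, not a real formalization.

    LAWS = [
        ("Zeroth", lambda a: not a["harms_humanity"]),  # humanity outranks...
        ("First", lambda a: not a["harms_human"]),      # ...any individual human
    ]

    def permitted(action):
        """Return (allowed, violated_law); higher-priority laws are checked first."""
        for name, law_holds in LAWS:
            if not law_holds(action):
                return False, name
        return True, None

    # One set of humans doing something that might harm humanity:
    # stopping them harms humans, while doing nothing harms humanity.
    stop_them = {"harms_humanity": False, "harms_human": True}
    do_nothing = {"harms_humanity": True, "harms_human": False}

    print(permitted(stop_them))   # (False, 'First')
    print(permitted(do_nothing))  # (False, 'Zeroth')
    # Neither option is permitted: the hierarchy alone cannot resolve it.
    ```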

    The answers are evolving. It’s a good time to be human, and to be able to experience wonder.

    until next time, moral science

    P.S. On a related note – Bicentennial Man – RIP Robin Williams :'(

  • The right turn

    On my way to office, there is a junction, and like many junctions it has rights and lefts varying from 1 degree to 179 degrees, and many times, i end up giving directions thus: “not that right, the other right”…
    A few days back, i ended up playing whistleblower on an online plagiarism case… no second guess required on whether it was the right thing to do, but the implications did make me wonder whether i could maybe have not done it at all… But i also got a forward around the same time… it went thus – there are two sets of railway tracks, one used, the other unused. There is a warning board on the used one that warns kids not to play on it. The scenario is that a train is approaching on the used track, and there are five kids playing on it. A single kid is playing on the unused track. You are in charge of the tracks. With a single button, you can divert the train onto the unused track, saving five kids and sacrificing one. You obviously can’t do the movie hero stunt of running faster than the train and rescuing all the kids concerned, and you have only moments to decide. Before you make the decision, remember that the single kid was following instructions and doing the right thing.
    There is no ‘correct’ answer; it is a choice, but it helped me stick to the decision i made. Maybe it is a step towards a ‘no compromise’ policy on doing the right thing… Meanwhile, the turn to my office is the exact right, all of 90 degrees 🙂
    until next time, i write, you read 🙂