Category: Choices

  • Happiness: The End

A while ago, in Happiness and Compassion, I wrote about what Fahadh Fasil described as the biggest lesson he learnt from failure – he said it made him decide that he would only do things that made him happy. The more I read, the more I think, and the more I live, the more I relate to what Fahadh is doing, and to what Aristotle said: "Happiness is the meaning and the purpose of life, the whole aim and end of human existence." Everything else – fame, power, money, compassion, detachment etc. – is probably just the means we create.

    The thing, though, is that even if happiness were indeed the purpose, I can see at least a couple of challenges. In this excellent read, "10 truths you will learn before you find happiness", the first point is "It is impossible for anyone else to define YOU". This echoed my first challenge – a difficulty in defining what happiness is to me. At the next level, I felt that the paths to happiness are confusing and have many things going against them. For instance, fame – "…other people's heads are a wretched place to be the home of a man's true happiness." (Schopenhauer) Or compassion/pity (not kindness, which I regard as a more active expression, though the following might apply to it as well) – "There is a certain indelicacy and intrusiveness in pity; 'visiting the sick' is an orgasm of superiority in the contemplation of our neighbour's helplessness." (Nietzsche) As you can see, it isn't difficult to bring each down.

    (more…)

  • A republic of convenience

    Masala Republic is a Malayalam movie I watched recently. First, my sympathies with those who attempted the heroic task of watching it in a theatre, but to be fair, it did give me some food for thought. No, not about my choice of movies, but things slightly more important in the scheme of things. It talked, for instance, of issues that needed a voice – the changing socio-political and economic dynamics of Kerala caused by a huge influx of people, mostly low wage workers from Bengal and the North East.

    The movie begins with the disruption brought about in the life of these folks by a ban imposed on Gutka, which apparently is part of their staple diet! This reminded me of the (real) scenario I witnessed when the liquor ban was announced in Kerala. Almost overnight, I saw an ecosystem disbanded – small shops around bars, auto-rickshaws that ferried drunk guys home, to name a few components.

    Notwithstanding the political play that brought about this ban, I was forced to ask – isn’t alcohol consumption an individual’s choice? One might cite domestic violence, decrease in productivity, drunken driving etc, but unlike say, smoking, it does not automatically cause damage to the larger society. Isn’t a blanket ban a bit like banning automobiles because of road accidents? If the justification is that individual choice must bow before collective progress, then can we really condemn Sanjay Gandhi for the infamous sterilisation programme? After all, population control would, at least arguably, have meant progress. What we are debating therefore, (I think) is the means. And means is exactly what an alcohol ban is. Does society really have the moral right to take such a decision? Who decides society’s collective moral compass and what can resist such selective applications of morality?


    Who decides where the line is?

    P.S. Would be glad if you could point out whether I am missing some relevant piece of information or logic here.

  • Happiness and compassion

    Though I'd explored the idea of inculcating a sense of compassion in others in this post a fortnight back, I still think our own compassion needs to serve as a solid base. Not being judgmental is one way, but it's not easy to practice. So I took a step back and wondered if compassion were a result and not a behaviour. The first behavioural driver I could think of was happiness. In myself, I have seen a correlation, if not a causation – I am more compassionate when I'm happier. So I decided to explore this a bit. (more…)

  • The people we are….with

    After I shared the “We, the storytellers” post on Twitter, Surekha sparked off this interesting discussion on how we could persuade others to be less judgmental and more compassionate. I really didn’t have a fix-it-all answer and felt that it was more important that we simply practice this ourselves. That, however, did not stop me from thinking about it.

    The next day, my reading list had this post, which touched upon things that get people to change their behaviour. I remembered this William James quote used in the post from something I had seen a while back at Brain Pickings.


    (more…)

  • Artificial Morality

    It wasn't my intention, but the title did make me think of the morality we impose on ourselves, and that perhaps has some implication for the subject of this post too. The post is about this – we seem to have moved from debating artificial intelligence to the arguably more complex area of morality in robots! When I first read about robots and ethical choices (did they mean moral?), my reaction was this


    It’s probably a good time to discuss this, since a robot has recently become a Board member in a VC firm as well. Ah, well, in the Foundation series, R. Daneel Olivaw pretty much influenced the mental state of others and controlled the universe. That seems to be one direction where we are headed. The Verge article mentions funding for an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot. They plan to start with studying moral development in infants.

    Thanks to this article, I learned that there are different kinds of morality – operational morality, functional morality, and full moral agency. This is all fascinating stuff, and my mind was racing in multiple directions. For one, did morality develop because living in groups was more advantageous from a survival perspective, and because living in groups required some rules to govern that coexistence? Did these ethics then evolve into an acceptable moral framework? These may or may not be in line with our individual instincts. Does that explain why each of us has a different moral code? If so, can we ever develop a uniform code for robots? It should be noted that ethics are a tad more objective than morals, so they might be relatively easier to 'code'.

    I also began to wonder whether the augmented human would serve as the bridge between humans and AI, finding ways to transfer moral intelligence to AI as he develops. Or maybe it would just be logic. Alternately, as per this awesome post on what increasing AI in our midst would mean, if we do start focusing on human endeavours beyond the functional (and those driven by money alone), maybe our moral quotient will also evolve and become a homogeneous concept.

    In Michener's Hawaii, a man of science discusses dinosaurs with a man of spirituality. I shared this on Instagram, wondering if humanity will one day be talked about in this manner.


    The changes could be the ones we're causing nature to make, and 'huge' could be our gluttonous consumption of resources. In the context of robotics and morality, I immediately thought of Asimov's Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." What would happen when one set of humans begins to do something that might harm humanity? What would a robot do?

    The answers are evolving. It's a good time to be human, and to be able to experience wonder.

    until next time, moral science

    P.S. On a related note – Bicentennial Man – RIP Robin Williams :'(