Tag: morality

  • Prosperity’s moral code

A few months ago, TechCrunch had a post debating the role of capitalism in a world that includes AI, where jobs are disappearing faster than new ones are being created. Capitalism has always been played as a finite game, focused on profit for a set of people, largely irrespective of the costs to others or society at large. As I wrote in “A shift in the world order“, its only real foe in the recent past has been the nation state, and its executive arm – the government. A foe increasingly struggling to even defend its own relevance, I’d say. With capitalism as the dominant system of the world, we will then inevitably (whether rightfully is debatable) begin questioning its moral codes. More than we are doing currently, because the impact will not just be higher, it will also start affecting more people.

Earlier this year, I had written on how, if it intends to survive, capitalism needs to expand its scope and play an infinite game – one whose purpose is to continue the flow of the game and bring in new players. Something similar to what Douglas Rushkoff calls digital distributism (read), a model that aims for the circulation of money rather than the extraction of money. An evolution that capitalism needs to go through, or it runs the risk of imploding. This, of course, is not really in line with the way an earlier generation of corporations, or Silicon Valley, operates. As Maciej Cegłowski writes in “The Moral Economy of Tech“, treating the world as a software project gives us a rationale for being selfish. We pretend that by maximizing our convenience and productivity, we’re hastening the day when we finally make life better for all those other people.

  • The redefinition of life

This article about the man who was one-upping Darwin interested me a lot, because of the question he asked – what qualifies something as alive or not? His paper, currently under peer review, explains theoretically how, under certain physical circumstances, life could emerge from nonlife. Arguably, consciousness is the factor that separates life from nonlife. However, there’s also a new theory that proposes that consciousness is far less powerful than people believe, serving as a passive conduit rather than an active force that exerts control. The article compares it to the internet, and says that just as the internet can be used to discover, share, buy etc., it’s the person on the web/mobile who is actually deciding. It even argues that consciousness is not made to study itself.

  • A republic of convenience

    Masala Republic is a Malayalam movie I watched recently. First, my sympathies with those who attempted the heroic task of watching it in a theatre, but to be fair, it did give me some food for thought. No, not about my choice of movies, but things slightly more important in the scheme of things. It talked, for instance, of issues that needed a voice – the changing socio-political and economic dynamics of Kerala caused by a huge influx of people, mostly low wage workers from Bengal and the North East.

    The movie begins with the disruption brought about in the life of these folks by a ban imposed on Gutka, which apparently is part of their staple diet! This reminded me of the (real) scenario I witnessed when the liquor ban was announced in Kerala. Almost overnight, I saw an ecosystem disbanded – small shops around bars, auto-rickshaws that ferried drunk guys home, to name a few components.

Notwithstanding the political play that brought about this ban, I was forced to ask – isn’t alcohol consumption an individual’s choice? One might cite domestic violence, decrease in productivity, drunken driving etc., but unlike say, smoking, it does not automatically cause damage to the larger society. Isn’t a blanket ban a bit like banning automobiles because of road accidents? If the justification is that individual choice must bow before collective progress, then can we really condemn Sanjay Gandhi for the infamous sterilisation programme? After all, population control would, at least arguably, have meant progress. What we are debating, therefore, (I think) is the means. And means is exactly what an alcohol ban is. Does society really have the moral right to take such a decision? Who decides society’s collective moral compass, and what can resist such selective applications of morality?


    (via)

    Who decides where the line is?

    P.S. Would be glad if you could point out whether I am missing some relevant piece of information or logic here.

  • Artificial Morality

It wasn’t my intention, but the title did make me think of the morality we impose on ourselves, and that perhaps has some implication for the subject of this post too. The post is about this – we seem to have moved from debating artificial intelligence to the arguably more complex area of morality in robots! When I first read about robots and ethical choices (did they mean moral?), my reaction was this


    It’s probably a good time to discuss this, since a robot has recently become a Board member in a VC firm as well. Ah, well, in the Foundation series, R. Daneel Olivaw pretty much influenced the mental state of others and controlled the universe. That seems to be one direction where we are headed. The Verge article mentions funding for an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot. They plan to start with studying moral development in infants.

Thanks to this article, I learned that there are different kinds of morality – operational morality, functional morality, and full moral agency. This is all fascinating stuff, and my mind was racing in multiple directions. For one, did morality develop because living in groups was more advantageous from a survival perspective, and to live in groups, there had to be some rules that governed this coexistence? Did these ethics then evolve into an acceptable moral framework? These may or may not be in line with our individual instincts. Does that explain why each of us has a different moral code? If that is so, can we ever develop a uniform code for robots? It should be noted that ethics are a tad more objective than morals, so they might be relatively easier to ‘code’.

I also began to wonder if the augmented human would serve as the bridge between humans and AI and, as he develops, find ways to transfer moral intelligence to AI. Or maybe it would just be logic. Alternatively, as per this awesome post on what increasing AI in our midst would mean, if we do start focusing on human endeavours beyond the functional (and driven by money alone), maybe our moral quotient will also evolve and become a homogeneous concept.

In Michener’s Hawaii, a man of science discusses dinosaurs with a man of spirituality. I shared this on Instagram, wondering if humanity will be talked about in this manner.


The changes could be the ones we’re causing nature to make, and ‘huge’ could be our gluttonous consumption of resources. In the context of robotics and morality, I immediately thought of Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” What would happen when one set of humans begins to do something that might harm humanity? What would a robot do?

The answers are evolving. It’s a good time to be human, and to be able to experience wonder.

    until next time, moral science

P.S. On a related note – Bicentennial Man – RIP Robin Williams :'(

  • Moral Signs

A little more than a year back, I remember writing a post on identity – what exactly constitutes the individual – work, relationships, consumption, combinations of these…

More recently, I read a Scott Adams post which asks the same question – ‘Who are you?’ He also provides his best answer: ‘You are what you learn’. It’s an interesting point, and what you learn does give you additional perspective. It changes the way you view older experiences and how you react to new ones. And so, despite believing in being prisoners of birth to some extent, and knowing that the apple never falls far from the tree, and at the risk of generalisation, I would tend to agree.

Which brings me to learning. In an earlier era, our ‘channels’ of learning were limited – parents, relatives, friends, teachers, literature, some amount of media, and so on. Limited when compared to the abundance that a media explosion and the internet have brought into our lives. Some time back, I read a post in the NYT titled ‘If it feels right‘, which discussed a study on the role of morality (rather, the lack of it) in the lives of America’s youth. The author clarifies that it isn’t as though they are living a life of debauchery; it’s just that they don’t even think of moral dilemmas, the meaning of life and such. The study ‘found an atmosphere of extreme moral individualism’, mostly because they have not been given the resources to develop their thinking on such matters.

It led me to think about the moral frameworks that were instilled in us by our sources when we were young. At the very least, value systems existed, though obviously their ‘quality’ would be a subjective affair. I wonder if, in this era of abundant sources, we are missing out on inculcating the basic moral guidelines that are necessary for a society’s sustenance and evolution. If people are what they learn, then the least we could do is take a closer look at our own moral framework. The next generation, despite the abundance of sources, could be learning from it. Or perhaps this is the way it has always been, between generations. 🙂

    until next time, moral poultice

    PS: a beauuuutiful related video