Digital Beings And Deep Utopia: Dinis Guarda Interviews Renowned Philosopher, Author, Researcher, Nick Bostrom
Nick Bostrom, philosopher, author, and researcher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, is the guest in the latest episode of the Dinis Guarda YouTube podcast. He discusses the key aspects of an era of advanced technology and shares insights from his latest book, ‘Deep Utopia’. The podcast is powered by Businessabc.net and citiesabc.com.
Nick Bostrom is known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. The author of more than 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and the New York Times bestseller ‘Superintelligence’, Nick has also recently launched ‘Deep Utopia: Life and Meaning in a Solved World’.
In this interview with Dinis Guarda, Nick explains that an integrated future with humans and machines will be driven by advanced technologies like artificial intelligence and superintelligence.
Speaking about the exponential rise in population aided by technology, Nick tells Dinis:
“With respect to population, it might well be that in addition to biological humans in the year 2100, there might be a much larger number of digital minds as well, some of which might be artificial, some of which might be humans uploaded into computers, perhaps, or uplifted animals. The world is big, so there would be room for a lot more than 10 billion. At some point, you would have to kind of calibrate the rate of growth of the population so as not to exceed the rate of growth of the resource pool available, if you want to maintain a high per capita level of income, but those are more practical concerns.”
Speaking about the pressing issues discussed in his book ‘Deep Utopia’, Nick says:
“What would be the best possible continuation of your life from the current point, if you imagine abstracting from all kinds of contingent limitations, technological shortcomings, and resource constraints? If you could just imagine the best possible way that your life could unfold, it does look like some values that seem important in our lives would be sort of trivially easy to satisfy a lot more fully in this technologically mature condition.”
The coexistence of humanity and digital minds
The coexistence of humanity and digital minds, Nick says, marks a transformative era in the evolution of civilisation, one in which the boundaries between organic and artificial intelligence blur.
“The big-picture quest is, for example, about whether there are threats to the long-term survival of Earth-originating intelligent life, or whether we might develop technologies that fundamentally change the human condition in some way. Given these considerations, we need to figure out today what kind of policy directions we should be moving in that connect these long-term consequences and outcomes with actions that people can take today.”
Elaborating on the threats and risks to the existence of humanity on Earth, Nick says:
“You might distinguish between existential risks that arise from nature and existential risks that arise from human activity, or anthropogenic risks. You can say that the anthropogenic ones are by far the biggest if we're thinking on a time scale of a century. We are now introducing entirely new kinds of phenomena into the world with our technological innovation that we have no long track record of surviving. So if there are going to be any big existential risks over the coming decades, they're almost certainly going to come from new things we are doing in the world.
I think in particular artificial intelligence and synthetic biology are two places where some of the largest existential risks exist.”
Presenting a picture of future projections, he says:
“Superintelligence is the last invention that humans will ever need to make, because then future inventions will be more efficiently done by these machine brains that can think faster and better than humans. AI is ultimately all of technology fast-forwarded. Once you have superintelligence doing the research, it is really much more profound. It's not like mobile internet or blockchain or one of these other things people get excited about every few years. It's more akin to the emergence of Homo sapiens in the first place, or the emergence of life on Earth.”
The ethical and moral status of digital minds
Nick believes that as humanity develops increasingly sophisticated AI technologies, it becomes essential to establish mechanisms that steer these digital minds towards outcomes that are beneficial and aligned with human interests.
“The problem of scalable alignment is about methods whereby we can ensure that arbitrarily cognitively competent systems will do what we intend for them to do when we create them. Aligning them with human values or intentions, or otherwise having safeguards that ensure they don't produce harmful consequences, is still, I think, an unsolved problem. Exactly how to do this in a way that would scale to superintelligence is an open question. Whether we will find the solution to that problem before or after people figure out how to actually make machines superintelligent, there's a kind of race going on between capability-increasing research and safety research. How that race turns out might be a critical factor in shaping what the future contains for us humans.”
He emphasised the moral and ethical aspects of digital minds:
“A big problem is how we can make sure that we don't harm these digital minds that we will be creating, some of which might have moral status. The ethics of digital minds is really the most neglected area. The moral status of digital minds is roughly where the alignment problem was 10 years ago, although a few people are starting to think about it.”
“I think with digital minds it's possible that they will have either degrees of sentience or other attributes that would ground moral status, and we should make sure that we don't mistreat them and that we don't replicate what we've done with, say, the pigs in the meat industry, which are often reared in very bad conditions. In the future, most minds might be digital, and so in order for us to have anything that you could even remotely call Utopia, it's important that things go well not just for biological humans but also for this much broader class of beings that we might co-inhabit the future with.”
Deep Utopia: Life and Meaning in a Solved World
Speaking about the situation where the world enters the phase of technological maturity, Nick says:
“At technological maturity, you would enter into a condition where the point and purpose of a lot of daily manual activities would be removed. That's the sense in which we would start to inhabit a post-instrumental condition: a condition where, with some exceptions perhaps, but to a first approximation, we don't need to do anything for instrumental reasons. Then you’re alluding to an even more radical conception, which I call plastic Utopia, where you realize it's not just that human effort seems to become obsolete in this technologically mature condition, but the human itself becomes malleable. In that condition, you could use these advanced technologies to shape your own psychology, your own cognition, your own attention, your own emotions, your own body in whichever way you want. A lot of the constraints, the instrumental necessities, the fixed constants of human nature that currently define our existence and structure our lives would be removed at technological maturity.”
He also added:
“I think the best utopian lives will probably be in some ways quite fundamentally different from our current lives. The best future would probably not be one in which we just keep doing what we are doing for ten million more years, living very much our current human lives with our current scarcity and our current biological limitations. I think most of the possible great scenarios would involve substantial amounts of gradual transformation, and that we might end up becoming some kind of beings that are significantly transformed.
I think that, as it happens, there are certainly plausible scenarios in which the future does become wonderful, literally beyond our ability to imagine. It's not that we should be confident about what will happen, but in our attitudes or emotional posture towards the future we should include that as a possibility. So maybe the correct attitude is more one of ambivalent expectation or threatful optimism or something like that, where you have an emotional attitude of uncertainty and humility.
Something big will happen; you don't know exactly what it is. It could be wonderful, it could be horrific. You might have some ability to nudge things in the right direction, but there are also forces bigger than you. So, let's try our best on a small scale to nudge things in a good direction, trying to be kind and constructive, but then also realizing that ultimately there are bigger forces at play, and we are very small. Some attitude of guarded hopefulness is maybe what might be the most appropriate, given our current evidential situation.”
With a driving passion for creating relatable content, Pallavi progressed from freelance writing to being a full-time professional. Science, innovation, technology, and economics are a few (but not the only) fields she is zealous about. Reading, writing, and teaching are other activities she loves to engage in beyond content writing for intelligenthq.com, citiesabc.com, and openbusinesscouncil.org.