What to do about the future

Published

May 19, 2024

I think I am often susceptible to drinking the Silicon Valley Kool-Aid - build, build, build. Build and grind. Make tons of money, and then post snarky Tweets. Treat science and technology as the primary force of good in the world. As long as people pay for and use your product, you are fulfilling an unmet need. Making the world a better place, one line of code at a time.

It’s easy to consume this narrative and live out your life according to its doctrine.

But there is a more cautious line of thinking that is emerging, and it implores us to look beyond the first-order effect of our technology. It comes under many labels, but the general idea is clear: technology can have many unintended consequences, the nature of which is not at all clear to us right now, before the technology has been created. We should tread carefully.

I used to dismiss any attempts at this sort of thinking as unnecessary “doomerism”, because the world is a complex place, so of course there will be some third-order effects. But that is true of anything we do. Well-intended actions sometimes do more harm than good, and vice versa. The public discourse has also mainly revolved around AGI.

But I think the conversation is much bigger than that, and it really requires some introspection on the very nature of our humanity. Because there will be an increasing amount of technology that can lead us astray, distracting us from dealing with the “pain of being”. There will be technology that replicates aspects of our intelligence. There will be technology that lets us amplify our worst insecurities. There will be room for our insecurities to manifest themselves in our children, by giving them genetic upgrades before birth.

This enthralls but also terrifies me. It is as if we are starting to play with the essence of the human condition, without understanding its substance. For one - what is consciousness?

If we agree that our consciousness is what makes us human, then how will we know when we should start treating our AI counterparts as humans as well?