The more powerful technology becomes, the better we must understand both it and ourselves
Richard -- great post. I've been advocating a similar set of ideas for a few years now (https://medium.com/swlh/gaianomics-or-the-self-designing-earth-971eaeec9656). I even use a similar metaphor: "we’re driving a car with our foot firmly on the gas pedal, while our hand is pulling the parking brake. Sure, we could just wait to see whether we’ll run out of fuel, the transmission will go bust, or some other wacky failure mode, and then take the car to the mechanic, attempt to fix the breakage and try again. Or we could just give up going anywhere, thus “solving” the problem. But what we should and must do is learn how the car actually works and how to drive it correctly"
More recently, we have started to assemble like-minded organizations into a consortium (https://gaiaconsortium.org/) whose mandate is precisely to build the infrastructure that empowers us -- whether humans or AIs -- to make consistently more thoughtful decisions and to build more intentional and robust systems for our civilization.
Looking forward to chatting more about this. Best,
>When the next weapon with planetary-scale destructive capabilities is developed, as it inevitably will be, we need far more robust mechanisms preventing it from being deployed.
It’s an interesting challenge. Maybe the absolute best models will never run on consumer hardware, but it increasingly seems like everyone will be able to get very good capabilities. So maybe not planetary-scale destruction, but more capability for everyone. Like you said, it's hard to tell who this favors.