Saturday, May 9, 2026

Jack Clark, cofounder of Anthropic (the most ethically responsible of the AI companies thus far, which is why Mr. Trump fired them), said recently: "My prediction is by the end of 2028, it's more likely than not that we have an AI system where you would be able to say to it: 'Make a better version of yourself.' And it just goes off and does that completely autonomously." He added: "Lots of bad things can happen (cyber meltdowns and biological attacks). And lots of good."

Years ago I had optimism about AI as I witnessed the decline of critical thinking and the profound distractions in the lives of many people. I hoped that at least some aspect of human civilization would show greater attention to the outside world and focus on both the present and the future. That possibility has not been totally lost, but I increasingly wonder whether a Pandora's box more lethal than nuclear weapons has been created. Along with AI's potential to imagine new medicines and energy sources and provide other benefits, human history has shown that bad leaders can infect the entire world with their contagion. When an AI system that humans cannot fully understand begins to outthink its creators and infiltrate whatever safeguards we erect, that is dangerous stuff. If AI builds self-preservation deep into its own code while formulating ever more sophisticated biological weapons or infiltrating the nuclear arsenal, that is even more dangerous stuff. Human nature being what it is, I fear it is too late to put AI back into the box.