Sam Altman fears Meta taking huge risks with open source AI

Meta wants to open-source a GPT-5-level model and seems dead-set on open-sourcing right up until AGI. I want to be clear about what this means:

There is no kill switch. If something goes wrong, say an agent gets out of control or a bad actor weaponizes it, there's no easy way to turn it off. It could be running on any small cluster. There will be no security.

Safety research becomes meaningless. All the work people have done on making AI systems honest, aligned, ethical, etc. becomes (mostly) moot. The population of AIs out in the world will evolve toward whichever systems produce the most economic output, irrespective of what values or motives they have. There will be no guardrails. Anyone can change their AI's values or capabilities as they want, for good or bad.

If Meta continues to open-source as we get much smarter AI, it's pretty clear to me that things will become a shitshow. The arrival of these alien intelligences in the world is already going to be chaotic, but much, much more so if we just throw away what few levers of human control we have.

As far as I can tell, Meta’s wish to open-source stems mostly from some software industry dogma that “open-source good”.

