Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

This is a very long interview. Its main points could probably be made in about twelve minutes. Yudkowsky is a somewhat tortuous speaker who uses metaphors and analogies where direct description would work better. He also spends a lot of time conducting a sort of Socratic Q&A on Fridman, which was not interesting. The gist of what is said: AI in the form of GPT-4 is already doing things that were not programmed into it, and we humans do not understand how it is doing them. This is an early form of how AI will fairly quickly learn to deceive us, whether or not it is conscious. What must happen, or we face dire consequences, is that AI must be aligned with human values to the extent that we have zero concerns about it self-aggrandizing and killing all of us. According to Yudkowsky, a group of smart physicists given forty or fifty years could solve the alignment problem, but no one is working on that now, and few want to slow the development of AI, which may become powerful enough to do massive damage fairly soon. I think this is a reasonable and serious concern; I wonder only why an advanced AI would not know that killing all humans would also remove its energy supplies and maintenance crews. If this topic interests you and you like Yudkowsky and Fridman, the interview is fun to watch and I recommend it.

Another way to look at this is that it is all part of a large psyop, one that includes everything else going on in the world—covid, impending WW3, economic crash, digital currency, etc. From that perspective, Yudkowsky and Fridman are either dupes or deep-state actors aligned with forces working to take over the world. Those forces will use AI as an excuse to crash the internet, possibly attack China and Russia, scare the hell out of everyone, maybe kill most of us, and so on. When the dust settles, their top people will be in charge of an unbrave new world, and those of us left standing will have nowhere else to go. In this context, notice Fridman's expression when Yudkowsky describes human smarts as having evolved from "outsmarting other people." Looks like a tell, but who knows.

Please notice that the fear we have of AI is that it will become a KOBK player. And the fear that AI is but part of a large psyop rests on the people running that psyop already being KOBK players. What Yudkowsky fears about AI, we all should fear all the time about those who have power over us. My FIML partner is a strong proponent of the psyop interpretation, and it is mainly her input that moved me to include it in these comments. ABN
