Political elites who control the Western world are a product and reflection of us

One way to look at elites is that they have evolved in tandem with the inexhaustible gullibility of the masses. This has taught them that they can get away with anything, and it has made them careless and, yes, incompetent. They really did think it was a good idea to goad Russia into war and then respond with sanctions, or that they really can get away with a plandemic and a weaponized vax. Our elites are a product and reflection of the gullibility of the human masses, of us. Historically, it has ever been thus.

Kevin McKernan has a good take on what covid really was and is—he sees it as PCR test malfeasance built on a weak virus that can’t do much more than infect with mild or no symptoms. Mike Yeadon sees it this way as well.

Why were there such bad symptoms in some people or such large death tolls? Because patients were maltreated in hospitals, which were paid handsomely to follow deadly protocols. Maybe some strong viruses were seeded here and there, which would account for the bad symptoms some experienced. A sprinkling of cases like that would suffice to alarm doctors, whose concerns would intensify the scam. This is biowarfare used to cause panic and disruption more than to kill in very large numbers.

Recall the scary lab-leak story from China, their videos, and their secretiveness about the origins of covid: all just mind control to panic the masses. A thread on this topic and McKernan’s thoughts can be found here. This way of looking at it accounts for what JJ Couey is saying while also sidelining many of his conclusions. ABN

Wolfram’s ‘computational irreducibility’ explains FIML perfectly

FIML is a method of inquiry that deals with the computational irreducibility of humans. It does this by isolating small incidents and asking questions about them. These small incidents are the “little pieces of computational reducibility” that Stephen Wolfram remarks on at 45:34 in this video. Here is the full quote:

One of the necessary consequences of computational irreducibility is within a computationally irreducible system there will always be an infinite number of specific little pieces of computational reducibility that you can find.


This is exactly what FIML practice does again and again—it finds “specific little pieces of computational reducibility” and learns all it can about them.
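Wolfram’s point can be made concrete with his own standard example, the Rule 30 cellular automaton (a sketch of my own, not from the post): the system’s evolution as a whole is computationally irreducible, yet it contains trivially reducible pockets.

```python
# Hedged illustration (mine, not from the post) of Wolfram's idea using his
# standard example, the Rule 30 cellular automaton. The evolution as a whole
# is computationally irreducible: the only known way to get the center column
# is to simulate every step. Yet the system contains "specific little pieces
# of computational reducibility" -- for instance, the left edge is always 1.

def rule30_step(cells):
    """One step of Rule 30: new cell = left XOR (center OR right)."""
    padded = [0, 0] + cells + [0, 0]  # the row grows by one cell per side
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run(steps):
    """Evolve Rule 30 from a single black cell, returning all rows."""
    rows = [[1]]
    for _ in range(steps):
        rows.append(rule30_step(rows[-1]))
    return rows

rows = run(16)

# Reducible pocket: the left edge is constant (always 1), so a shortcut exists.
assert all(row[0] == 1 for row in rows)

# Irreducible part: the center column looks random; no shortcut formula is known.
center_column = [row[len(row) // 2] for row in rows]
```

FIML, in this analogy, studies the small reducible pockets (isolated incidents) rather than attempting to shortcut the whole irreducible system (the two partners).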

In FIML practice, two humans in real-time, real-world situations agree to isolate and focus on one “specific little piece of computational reducibility” and from that gain a deeper understanding of the whole “computationally irreducible system”, which is them.

When two humans do this hundreds of times, their grasp and appreciation of the “computationally irreducible system” which is them, both together and individually, increase dramatically. This growing grasp and understanding of their shared computationally irreducible system removes most previously learned cognitive categories about their lives, their psychologies, and how they think about themselves or other humans.

By focusing on many small bits of communicative information, FIML partners improve all aspects of their human minds.

I do not believe any computer will ever be able to do FIML. Robots and brain scans may help with it, but they will not be able to replace it. In the not-too-distant future, FIML may be the only profound thing humans will both need to do and be able to do on their own, without the use of AI. To understand ourselves deeply and enjoy being human, we will have to do FIML. In this sense, FIML may be our most important human answer to the AI civilization growing around us. ABN

Liberalism is more Dangerous than Ukrainian Nazism — Alexander Dugin

We are an empire, as the heirs of the monarchy and as the heirs of the Soviet Union.

There can be no neutral position in this war, because there are only two camps. And that is all. Anyone who hesitates or is indecisive will sooner or later (much sooner, it seems to me) be forced to take up arms and simply go to the front, and the front is everywhere today. It is impossible to return this long, difficult and terrible war to where it was before February 24, 2022; nor can it be stopped; it can only be won. Or it can still be left to human history. Then there will be no winner. Death will win.

For now, it’s war, which means we’re alive.

If you do not support the Special Military Operation, then you are not for Russia, you are not for the country, you are not for our people, and then the time will come when you will have to kill Russians, destroy Russia as a country, blow up cars, houses and railways, hide terrorists in your homes, shoot. There is no more security.

So, it is better to decide now, and this applies to all Russians; but it also applies to all other countries.

If you want to preserve sovereignty, it is clear that it is impossible under the auspices of the collective West, because liberalism in international relations cancels sovereignty and recognizes only the World Government, in other words, Western hegemony; and in the fight for a multipolar world in which sovereignty is possible, you have to fight with the West, and that is what Russia is doing now. And, it is doing that for everyone.

That’s what World War III is all about. Anyone who really cares about sovereignty will either have to side with us or willfully and forever give up and submit completely to the West—and the West is now at war with Russia and will force others to do the same.

This is what happened to Ukraine; this is what is happening to Georgia and Moldova and what is threatening Turkey and even China.

link

Functional Decision Theory: A New Theory of Instrumental Rationality — Eliezer Yudkowsky, Nate Soares

This paper describes and motivates a new decision theory known as functional decision theory (FDT), as distinct from causal decision theory and evidential decision theory. Functional decision theorists hold that the normative principle for action is to treat one’s decision as the output of a fixed mathematical function that answers the question, “Which output of this very function would yield the best outcome?” Adhering to this principle delivers a number of benefits, including the ability to maximize wealth in an array of traditional decision-theoretic and game-theoretic problems where CDT and EDT perform poorly. Using one simple and coherent decision rule, functional decision theorists (for example) achieve more utility than CDT on Newcomb’s problem, more utility than EDT on the smoking lesion problem, and more utility than both in Parfit’s hitchhiker problem. In this paper, we define FDT, explore its prescriptions in a number of different decision problems, compare it to CDT and EDT, and give philosophical justifications for FDT as a normative theory of decision-making.
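The Newcomb comparison in the abstract can be made concrete (a hedged sketch of my own; the payoff structure is the standard one from the literature, not code from the paper). FDT treats the decision as the output of the same function the predictor modeled, so it one-boxes; CDT treats the boxes’ contents as causally fixed, so it two-boxes:

```python
# Hedged sketch (my illustration, not from the paper): expected payoffs in
# Newcomb's problem against a predictor of accuracy p. The opaque box holds
# $1,000,000 iff the predictor predicted one-boxing; the transparent box
# always holds $1,000.

def newcomb_payoff(action, prediction, big=1_000_000, small=1_000):
    """Payoff given the agent's action and the predictor's prediction."""
    opaque = big if prediction == "one-box" else 0
    return opaque + (small if action == "two-box" else 0)

def expected_payoff(action, accuracy=0.99):
    """The predictor predicts the agent's actual action with prob `accuracy`."""
    other = "two-box" if action == "one-box" else "one-box"
    return (accuracy * newcomb_payoff(action, action)
            + (1 - accuracy) * newcomb_payoff(action, other))

# One-boxing (FDT's and EDT's choice) dominates two-boxing (CDT's choice)
# in expectation whenever the predictor is reliable:
assert expected_payoff("one-box") > expected_payoff("two-box")
```

With a 99%-accurate predictor, one-boxing expects about $990,000 versus about $11,000 for two-boxing, which is the sense in which FDT “achieves more utility than CDT on Newcomb’s problem.”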


link

Of 738 machine learning researchers polled, 48% gave at least a 10% chance of an extremely bad outcome

I’m scared of AGI. It’s confusing how people can be so dismissive of the risks.

I’m an investor in two AGI companies and friends with dozens of researchers working at DeepMind, OpenAI, Anthropic, and Google Brain. Almost all of them are worried.

Imagine building a new type of nuclear reactor that will make free power.

People are excited, but half of nuclear engineers think there’s at least a 10% chance of an ‘extremely bad’ catastrophe, with safety engineers putting it over 30%.

That’s the situation with AGI. Of 738 machine learning researchers polled, 48% gave at least a 10% chance of an extremely bad outcome.

Of people working in AI safety, a poll of 44 people gave an average probability of about 30% for something terrible happening, with some going well over 50%.

Remember, Russian roulette is a 17% chance of death (one chamber in six).

Continue reading “Of 738 machine learning researchers polled, 48% gave at least a 10% chance of an extremely bad outcome”

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

This is a very long interview whose main points could probably be made in about 12 minutes. Yudkowsky is a somewhat tortuous speaker who uses metaphors and analogies when direct description would work better, and he spends a lot of time doing a sort of Socratic Q&A on Fridman, which was not interesting. The gist of what is said is that AI like GPT-4 is already doing things that were not programmed into it, and we humans do not understand how it is doing them. This is an early form of how AI will rather quickly learn to deceive us, whether or not it is conscious. What must happen, or we face dire consequences, is that AI must be aligned with human values to the extent that we have zero concern about it self-aggrandizing and killing all of us. According to Yudkowsky, a group of smart physicists could solve the alignment problem in 40 or 50 years, but no one is working on it now and few want to slow down the development of AI, which may become powerful enough to do massive damage fairly soon. I think this is a reasonable and serious concern and wonder only why an advanced AI would not know that killing all humans would also remove its energy supplies and maintenance crews. If this topic interests you and you like Yudkowsky and Fridman, the interview is fun to watch and I recommend it.

Another way to look at this is that it is all part of a large psyop which includes everything else going on in the world—covid, impending WW3, economic crash, digital currency, etc. From that perspective, Yudkowsky and Fridman are either dupes or deep-state actors aligned with forces that are working to take over the world. They will use AI as an excuse to crash the internet, possibly attack China and Russia, scare the hell out of everyone, maybe kill most of us, etc. When the dust settles, top people will be in charge of an unbrave new world, and those of us left standing will have nowhere else to go. In this context, notice Fridman’s expression when Yudkowsky describes human smarts that evolved from “outsmarting other people.” Looks like a tell, but who knows.

Please notice that the fear we have of AI is that it will become a KOBK player, and the fear that AI is but part of a large psyop rests on the people running that psyop already being KOBK players. What Yudkowsky fears about AI, we all should fear all the time about those who have power over us. My FIML partner is a strong proponent of the psyop interpretation, and it is mainly her input that moved me to include it in these comments. ABN

Zoomers and computers both need ethical communication or both are doomed

I just read a descriptive analysis of zoomers that seems pretty good to me. Assuming there is some truth in it, zoomers can be defined as entirely non-FIML. From a FIML point of view this constitutes unknowing abandonment of our most wonderful talents due mainly to not knowing they are possible.

Sadly, this largely defines all generations that have ever lived. Zoomers are novel only in that they see no way out of earthly illusions, including even caring about finding a way out. In some ways, it has ever been thus: Samuel Beckett on a warm beach, where the sun still shines on the nothing new.

The better way to go is to bring the full power of your human voice and ears to every moment. Let nothing pass you by. Then you can still do nothing while also accomplishing something. The illusions are solipsisms and tautologies, but that’s all. No reason to be cucked by them.

The core problem with GPT is that it can’t be trusted. GPT is an even purer form of non-FIML than zoomers. GPT is potentially pure KOBK. Both of these fundamental problems illustrate that the most important human endeavor is morality, ethics. Systems, whether in computers or in human brains, don’t work optimally without ethics. In this discussion, that becomes abundantly clear. ABN

Like a fast-growing Covid variant, AI will become the dominant source of knowledge simply by virtue of growth ~ Peter Nixey

I’m in the top 2% of users on StackOverflow. My content there has been viewed by over 1.7M people. And it’s unlikely I’ll ever write anything there again. Which may be a much bigger problem than it seems. Because it may be the canary in the mine of our collective knowledge. A canary that signals a change in the airflow of knowledge: from human-human via machine, to human-machine only.

Don’t pass human, don’t collect 200 virtual internet points along the way. StackOverflow is *the* repository for programming Q&A. It has 100M users & saves man-years of time & wig-factories-worth of grey hair every single day. It is driven by people like me who ask questions that other developers answer.

Or vice-versa. Over 10 years I’ve asked 217 questions & answered 77. Those questions have been read by millions of developers & had tens of millions of views. But since GPT4 it looks less & less likely any of that will happen; at least for me. Which will be bad for StackOverflow. But if I’m representative of other knowledge-workers then it presents a larger & more alarming problem for us as humans.

What happens when we stop pooling our knowledge with each other & instead pour it straight into The Machine? Where will our libraries be? How can we avoid total dependency on The Machine? What content do we even feed the next version of The Machine to train on? When it comes time to train GPTx it risks drinking from a dry riverbed. Because programmers won’t be asking many questions on StackOverflow. GPT4 will have answered them in private.

So while GPT4 was trained on all of the questions asked before 2021, what will GPT6 train on? This raises a more profound question. If this pattern replicates elsewhere & the direction of our collective knowledge alters from outward to humanity to inward into the machine, then we are dependent on it in a way that supersedes all of our prior machine-dependencies. Whether or not it “wants” to take over, the change in the nature of where information goes will mean that it takes over by default.

Like a fast-growing Covid variant, AI will become the dominant source of knowledge simply by virtue of growth. If we take the example of StackOverflow, that pool of human knowledge that used to belong to us – may be reduced down to a mere weighting inside the transformer. Or, perhaps even more alarmingly, if we trust that the current GPT doesn’t learn from its inputs, it may be lost altogether. Because if it doesn’t remember what we talk about & we don’t share it then where does the knowledge even go?

We already have an irreversible dependency on machines to store our knowledge. But at least we control it. We can extract it, duplicate it, go & store it in a vault in the Arctic (as Github has done). So what happens next? I don’t know, I only have questions. None of which you’ll find on StackOverflow.

link

The Universe is a hologram: Stephen Hawking’s final theory, explained by his closest collaborator

Hawking’s final theory of the Big Bang provides a bold and surprising answer. It envisages the Universe as a holographic projection.

Stephen liked to visualise this idea in a disk-like image of the kind shown above. The outer circle depicts a timeless hologram consisting of countless entangled qubits.

The disk shows the evolution of an expanding Universe that projects down from this. The origin of the Universe lies at the centre of the disk and it expands outward in the radial direction.

It is as if there is a code operating on the entangled qubits that brings about the Universe and this is what we perceive as the flow of time.

Crucially, by taking a fuzzier view of the hologram, one ventures farther back in time, toward the interior of the disk. It is like zooming out. Eventually, however, one runs out of bits. This is the origin of time, according to our theory.

There can be nothing before the Big Bang, because the past that holographically emerges doesn’t extend further back.

link

Why Nobody ‘Had, Caught or Got’ COVID-19

…In summary, there are indeed “cases” of COVID-19 but the case definition has been disconnected from the concept of disease. There is no disease and there is nothing to “get,” except for a label as a ‘case’. The Johns Hopkins “COVID-19 Dashboard” displays these hundreds of millions of meaningless figures, which look impressive to the uninitiated viewer. However, knowledge of how these numbers have been produced brings an understanding that we have just witnessed a pseudo-pandemic, or what Virus Mania’s Dr Claus Köhnlein christened a “PCR Pandemic” in 2020.

link

Medvedev — ‘We have witnessed a catastrophic decline in the competence and basic literacy of the leaders of the EU’

https://t.co/u81fod6AEo

Incompetence and mediocrity can arise from: 1) a natural tendency for mentors to select mentees who are less competent than they are, more loyal to them, and less likely to think independently; and 2) the fact that many or most mentors have already been selected through this process and—far more seriously—that this has happened due to a clandestine war waged against Europe since the end of WW2. The USA has the same problem, as do Canada, NZ and Australia. The entire West has already been taken over by this process. ABN

Why I don’t believe there ever was a Covid virus ~ Dr Mike Yeadon

I’ve grown increasingly frustrated about the way debate is controlled around the topic of origins of the alleged novel virus, SARS-CoV-2, and I have come to disbelieve it’s ever been in circulation, causing massive scale illness and death. Concerningly, almost no one will entertain this possibility, despite the fact that molecular biology is the easiest discipline in which to cheat. That’s because you really cannot do it without computers, and sequencing requires complex algorithms and, importantly, assumptions. Tweaking algorithms and assumptions, you can hugely alter the conclusions.

This raises the question of why there is such an emphasis on the media storm around Fauci, Wuhan and a possible lab escape. After all, the ‘perpetrators’ have significant control over the media. There’s no independent journalism at present. It is not as though they need to embarrass the establishment.  I put it to readers that they’ve chosen to do so.

So who do I mean by ‘they’ and ‘the perpetrators’? There are a number of candidates competing for this position, with their drug company accomplices, several of whom are named in Paula Jardine’s excellent five-part series for TCW, Anatomy of the sinister Covid project. High on the list is the ‘enabling’ World Economic Forum and their many political acolytes, including Justin Trudeau and Jacinda Ardern.

But that doesn’t answer the question of why they are focusing on the genesis of the virus. In my view, they are doing their darnedest to make sure you regard this event exactly as they want you to. Specifically, that there was a novel virus.

link