The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of this site. This site does not give financial, investment or medical advice.
First, I don’t agree with some of what the author of the video says, since he literally argues for censorship, which in essence means enslaving the whole of humanity. But I want to mention that I didn’t know about this manifesto before I wrote my article:
Situational Awareness Manifesto
Few people read or commented on it, dismissing AGI outright. I understand that, since most people don’t understand AI, AGI, and their consequences. But just as I showed in my post about Nvidia stock, the people at the top do understand it, and the same point is made in this video and in the manifesto. This is literally the same thing I was trying to say in my post. Leopold Aschenbrenner, an ex-OpenAI employee, sees the same thing I do.
Key Excerpts:
37:46 "Be able to overthrow the US government. Whoever controls superintelligence will quite possibly have enough power to seize control from pre-superintelligence forces. Even without robots, the small civilization of superintelligences would be able to hack any undefended military, election, television, etc. system, cunningly persuade generals and electorates, economically outcompete nation-states, design new synthetic bioweapons and then pay a human in bitcoin to synthesize it, and so on."
41:28 "Economic returns justify the investment. The scale of expenditures is not unprecedented for a new general-purpose technology, and the industrial mobilization for power and chips is doable." When he says it is not unprecedented, he is talking about the internet rollout in the '90s and how much money telecom providers spent laying cable in the ground to get ready for the internet explosion.
48:35 "We’re developing the most powerful weapon mankind has ever created. The algorithmic secrets we are developing, right now, are literally the nation’s most important national defense secrets—the secrets that will be at the foundation of the US and her allies’ economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world. And yet AI lab security is probably worse than a random defense contractor making bolts. It’s madness."
57:24 "Talk about the Project." This is essentially him describing a Manhattan Project for AI, like the one for the atomic bomb. "As the race to AGI intensifies, the national security state will get involved. The US government will wake from its slumber, and by 2027-2028 we will get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on." He's saying it is absolutely necessary, and he thinks it's a good thing, for the government to get heavily involved in AGI creation. He finds it an insane proposition that the US government would let a random SF startup develop superintelligence. "Imagine if we had developed atomic bombs by letting Uber just improvise." That's a really funny line, I have to say. "But in the next few years, the world will wake up, and so too will the national security state. History will make a triumphant return." He's basically saying we are going to rally our American industrial might, much like we did during the Industrial Revolution and during wars, even during COVID, regardless of your position on how that played out. "So, do we need an AGI Manhattan Project? Slowly at first, then all at once, it will become clear: this is happening. Things are going to get wild. This is the most important challenge for the national security of the United States since the invention of the atomic bomb. In one form or another, the national security state will get very heavily involved. The Project will be the necessary, indeed the only plausible, response."
What I would argue is that the national security state already woke up and is doing it. He just doesn't know about it, and through that lens, you can understand some of the strange geopolitical things that are happening.
I also want to share this video:
I want you to hear or read Sam Altman from OpenAI talking about alignment, about controlling and understanding AI:
12:56 "I think that safety is going to require a whole-package approach, but this question of interpretability does seem like a useful thing to understand, and there are many levels at which that could work. We certainly have not solved interpretability." I want to pause there for a second. Interpretability basically means: can you understand why a model produced the output it did? Once you put in a prompt, the model is for the most part a black box, and at the end of that black box you get the output. A lot of people are working on this. In fact, let me show you something. Anthropic actually put out a paper about a week and a half ago on this exact topic, and the title of the accompanying blog post is "Golden Gate Claude." Just a little bit about what the paper says: "On Tuesday, we released a major new research paper on interpreting large language models, in which we began to map out the inner workings of our AI model, Claude. In the 'mind' of Claude, we found millions of concepts that activate when the model reads relevant text or sees relevant images, which we call features. One of those was the concept of the Golden Gate Bridge. We found that there's a specific combination of neurons in Claude's neural network that activates when it encounters a mention or a picture of this most famous San Francisco landmark."
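To make the idea of a "feature" a bit more concrete, here is a toy sketch of my own (this is an illustration of the general concept, not Anthropic's actual method or code; all names and numbers are made up): a feature can be thought of as a direction in the model's internal activation space, and it "activates" when the activation vector points strongly along that direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "Golden Gate Bridge" feature: a unit-length direction
# in a (tiny, 16-dimensional) activation space.
feature_direction = rng.normal(size=16)
feature_direction /= np.linalg.norm(feature_direction)

def feature_activation(activation_vector):
    """How strongly an activation vector points along the feature direction."""
    return float(activation_vector @ feature_direction)

# Simulated activations: one that contains the feature (plus a little noise),
# and one that is just noise.
with_feature = 5.0 * feature_direction + rng.normal(scale=0.1, size=16)
without_feature = rng.normal(scale=0.1, size=16)

print(feature_activation(with_feature))     # large positive value
print(feature_activation(without_feature))  # near zero
```

In the real research, such directions are not hand-picked; they are learned from the model's activations, and there are millions of them. But the basic intuition is the same: interpretability work tries to find directions like this that correspond to human-understandable concepts.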
5:34 "Before long, the world will wake up." He says there are only a few hundred people, most of them in San Francisco and at the AI labs, who have situational awareness. That, again, is the title of the manifesto, "Situational Awareness," meaning they actually understand what is coming.
Maybe this will make people understand and stop underestimating AI and AGI. Someone wrote in a comment on one of my posts, "The whole so-called Artificial Intelligence fad"—that shows people don’t understand it.
7:23 There is a lot more to it than just safety. Understanding what's going on inside the model will give us insight into how to vastly improve it. Let's keep watching:
"You don't understand what's happening. Isn't that an argument to not keep releasing new, more powerful models?"
"Well, we don't understand what's happening in your brain at a neuron-by-neuron level, and yet we know you can follow some rules, and we can ask you to explain why you think something. There are other ways to understand a system besides understanding it at the neuron-by-neuron level."
Reassuring? Not for me.
This is what someone wrote under one of my posts:
"So-called Artificial Intelligence, in essence, is just very basic calculations being done at a very high speed, relative to our perception. When these simple calculations are carried out quintillions of times per second, and coupled with computer language combined with mathematical algorithms, they seem godlike to stupid people."
Maybe by reading this and understanding that we have invented a black box whose full capabilities we don't even know, and that we keep feeding and making stronger, people will finally understand it's not a calculator. This is a new nuke; more than that, it is our future god. We are building something potentially smarter than us. Our purpose is understanding and inventing. What if we invent something that is better at understanding and inventing than we are? What is our purpose then?
Co-written with GPT-4o hehe :D

I’d say that the corporatocracy is simply letting a slow drip, drip, drip of AI into the public consciousness for something they long ago completed. It’s not that they’re “developing” something new; it’s that they’re getting people used to the idea and are now milking the public to fund their own enslavement.
I agree. There are rumours that Ilya Sutskever and others left OpenAI because they reached AGI behind closed doors. Now we hear there is a slowdown in AI, but it may only look that way: the national security state went into OpenAI, took over the AI, and now they are blocked from releasing some of it, because the national security state forbids releasing it; they want it for themselves.
I’ll come back and watch/read this post when I have an hour. Intrigued about it taking over government…😉
It is the craziest thing that mankind has given birth to. It is so dangerous. I imagine these things prevailing over humanity. The danger is that they don’t have a soul, a heart, any feelings. Knowing and learning everything far better than any human, a speciation could happen, and with it the end of humanity. Isn’t that so?
I have ChatGPT and I think it is a little bit stupid in its answers. Maybe because it was developed by Americans? Last time, I had to explain several times what I wanted to know. It didn’t satisfy me; it couldn’t give me a response. Let’s hope it continues like this for a long time.
At least the biggest thieves in the world are the USA. China and other countries are worried about you. At least they are developing the same things. As far as I know…
At least when I asked ChatGPT how to make money, it answered that I have to work. 😂😂😂
I would prefer to deal with an AI that does not have human features, but rather resembles a box. Let’s not even give it the power to look like us. Too dangerous.
In fact I wish they didn’t exist at all. It is madness.
How can AI help NGOs or poor countries?