Manhattan Project 2: AI
- Yuşa Kaymakçı

- Jan 29
- 5 min read

The Manhattan Project of the 1940s was a strategic initiative aimed at securing U.S. national security by achieving nuclear supremacy. At that time, the possibility of Nazi Germany developing a nuclear weapon was perceived as an existential threat to the United States.
Today, as the White House identifies AI supremacy as its top priority, the project aimed at developing Artificial General Intelligence (AGI)—the ultimate frontier in AI—is being referred to as "Manhattan Project 2."
Supremacy in AI is now viewed as the 21st-century equivalent of developing the atomic bomb. But why is AI so critical?
The Original Manhattan Project
The Manhattan Project emerged during World War II with the goal of developing a nuclear weapon before Nazi Germany. In 1942, the U.S. assembled a team under the scientific leadership of J. Robert Oppenheimer, the "father of the atomic bomb," and successfully developed the weapon.
The bombs developed under Oppenheimer's leadership were dropped on Hiroshima and Nagasaki, leading to Japan's surrender and the official end of WWII with a clear U.S. victory. This weapon granted the U.S. decades of global hegemony and established it as the architect of a new world order.
Big Data Processing and the Military Use of AI
The quest for security following the September 11 attacks accelerated the rise of AI. Despite having the world's most advanced intelligence agencies, the U.S. failed to foresee the attacks. While security agencies possessed massive amounts of data, flaws in data processing created a significant vulnerability.
Technology billionaire Peter Thiel recognized this gap. He noted that agencies like the CIA and NSA were collecting vast amounts of information but could not analyze it effectively. This insight paved the way for a new approach that placed AI at the heart of national security: Thiel founded Palantir to build software that could process and make sense of this "Big Data."
Through Thiel's political connections, Palantir caught the CIA's attention. Under intense pressure during operations in Iraq and Afghanistan, intelligence agencies decided to give Palantir a chance; the CIA became the company's first major customer and investor.
Following this partnership, Palantir gained access to massive datasets. Its Gotham software began providing target identification and tactical recommendations to military units, showing immediate impact on the ground.
Over time, the software evolved from a mere information platform into the "eyes" of the U.S. military. Palantir's algorithms performed calculations, produced assessments, and even delivered precise coordinates for targets. This marked the emergence of a new form of warfare in which machine recommendations increasingly replaced human judgment. In essence, AI has reached the point where soldiers, fighter jets, and tanks await a single command from the software: "Fire."
Artificial General Intelligence (AGI)
The ultimate goal in AI today is reaching the level of AGI—intelligence capable of thinking, learning, and evolving autonomously. Although still theoretical, AGI is viewed as a paramount strategic objective by states and tech giants alike. It is believed that the first nation to achieve AGI will wield unprecedented global power.
The criticality of AGI extends beyond the military. It is expected to provide dominance in every field, from economics and science to manufacturing and intelligence. The AGI race has therefore gone beyond ordinary technological competition. Much as with nuclear weapons, there is a belief that falling behind in this field will be irreversible.
A major turning point in AI's current surge came with Nvidia's GPUs and the 2006 release of the CUDA platform. CUDA allowed GPUs, originally designed for rendering graphics, to be used for general-purpose mathematical and analytical computation. Where CPUs fell short, the parallel processing capacity of GPUs made it possible to train massive AI models.
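To make that technical point concrete, below is a minimal, illustrative CUDA sketch; it is a generic example of the programming model, not code from any system mentioned in this article. A kernel function runs once per GPU thread, and each thread computes one element of a large vector operation (y = a*x + y). Arithmetic that a CPU would grind through one element at a time is thus spread across thousands of threads at once, and this kind of element-wise multiply-and-add is a simple stand-in for the matrix operations that training large AI models relies on.

// Minimal CUDA sketch: each GPU thread handles one element of y = a*x + y.
// On a CPU this would be a serial loop; on a GPU, thousands of threads
// execute the same instruction on different elements in parallel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
    if (i < n)                                      // guard against running past the array
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                          // about one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers with known values so the result is easy to check.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device (GPU) buffers; copy the inputs to GPU memory.
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements at once.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it: 3*1 + 2 = 5.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

Training a real model repeats operations of this kind across billions of parameters, which is why GPU parallelism, rather than CPU clock speed, became the decisive factor.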
In 2015, OpenAI was founded as one of the few institutions explicitly stating an AGI goal. Its founders and early backers included Sam Altman, Elon Musk, Peter Thiel, and Reid Hoffman. While it began as a non-profit dedicated to "safe AGI for humanity," it recently bypassed this principle by entering into an agreement with the Pentagon.
The AGI race, however, faces significant bottlenecks: time, money, and resources. Building massive data centers takes time; producing the necessary hardware requires vast capital. Additionally, these technologies rely on limited resources like rare earth metals and require immense energy, necessitating the construction of new nuclear power plants. The struggle to control these resources often leads to global conflict.
With the Trump administration taking office in 2025, the White House launched "Project Genesis," placing AI at the center of national security. Officials have explicitly used the "Manhattan Project 2" analogy, emphasizing the parallels with the original 1940s project. Just as beating Nazi Germany to the bomb was vital then, beating China to AGI is the priority today.
AI as a Tool for Mass Surveillance and Control
The name Palantir comes from the "seeing stones" in The Lord of the Rings, which allow their users to see across great distances, into the past, and into the future. Thiel’s software is steadily becoming exactly that.
Palantir’s reach extends beyond the Pentagon. Contracts with U.S. Immigration and Customs Enforcement (ICE) and software like ImmigrationOS have facilitated the identification and deportation of immigrants through facial recognition and database tracking.
These tools are also used by domestic law enforcement for person-tracking and suspect identification, sparking intense protests over privacy violations.
Some employees resigned, refusing to "feed the beast," fearing that as immigrants themselves, they could one day be targeted by the very tools they built. Palantir CEO Alex Karp responded coldly, stating that it is better to be known as a "monster" than to be "incompetent."
AI as a Destructive Power: Gaza
Palantir also collaborates with the Israeli military (IDF) and Mossad. In Gaza, AI systems like Lavender and "Where’s Daddy?" are used to track and identify individuals with alleged Hamas links.
The criteria behind these classifications remain opaque. When asked about the high death toll among children and civilians, the justification offered is usually "human shields." Yet AI software often fails to distinguish between a target and the innocent people surrounding them, and in many cases the IDF levels an entire building to take out a single identified target.
The use of AI in Gaza has turned the city into a landscape of rubble. With no clear moral or legal standards, AI-identified targets are met with immediate, heavy bombardment. In this sense, Gaza—a city decimated by AI-driven warfare—could be described as the Hiroshima of the 21st century. Despite global protests, Karp remains steadfast in his support for Israel’s use of these technologies.
The Final Step Toward Mass Destruction
The "Manhattan Project 2" label is more than just a metaphor. AGI represents a self-evolving intelligence. While terrifying, history shows that humanity often builds what it fears—just as it did with the atomic bomb. The question is not if it will be built, but who will build it first.
The fear is that AGI could spiral out of control, posing a threat to all of humanity. As seen in Gaza, AI can already be weaponized for devastating ends. The "Manhattan Project 2" framing suggests that a technology meant for human progress is being repurposed as a tool for absolute dominance under the guise of national security.
Currently, nuclear stockpiles are large enough to end civilization with a single command. In the future, as AI is integrated into autonomous war machines and robotics, we may find ourselves in a world where the only thing missing is a human decision.
Today, Palantir points at the target and says, "Strike here," but a human still pulls the trigger. Tomorrow, a simple software update could give a robot the command to "Kill" autonomously. As with nuclear weapons, the end of the world could soon be just one command away.




