
How AI Takeover Might Happen in 2 Years – LessWrong

I’m not a natural “doomsayer.” But sadly, part of my job as an AI safety researcher is to think of the more uncomfortable scenarios.

I’m like a mechanic rushing through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I will not talk about the quality of the in-flight entertainment, or explain how lovely the stars will look from space.

I will tell you what could go wrong. That is what I intend to do in this story.

Now I should clarify what this is exactly. It’s not a prediction. I don’t expect AI progress to be this fast or as uncontrollable as I portray. It’s not pure fantasy either.

It is my worst nightmare.

It’s a sample from the futures that are among the most devastating, and I believe, disturbingly plausible [1] – the ones that most keep me up at night.

I’m telling this story because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.

Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for conversations that inspired these stories. This post is written in a personal capacity.

Ripples before waves

The year is 2025 and the month is February. OpenEye has recently released a new AI model they call U2. The product, like the name, is an increment on the past. Neither is wholly unexpected.

However, unlike OpenEye’s previous AI products, which lived inside the confines of their chat windows, U2 can use a computer.

Some users find it eerie to watch their web browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of employees with form-filler jobs raise the eyebrows of their managers as they fly through work nearly twice as quickly.

But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don’t like Elon, “Twitter”). Sometimes U2’s wacky behaviors prompt a chuckle. Sometimes, they trigger an anxious scratch of the chin.

Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They resemble Svante Arrhenius, the Swedish physicist who, in 1896, observed that CO2 levels in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.

A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.

Many are skeptical. If this were true, tech stocks would be skyrocketing. It’s too big a splash, too fast.

But others see what skeptics call ‘too big a splash’ as a mere ripple, and see a tidal wave on the horizon.

Cloudy with a chance of hyperbolic growth

Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: generate thousands of programming and math problems. Let models “think” until they arrive at an answer. Then reinforce the traces of “thinking” that lead to A-grades.

This process is repeated over and over, and once the flywheel gets started, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of “self-improvement” had begun.
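
For readers who want something concrete, here is a minimal toy sketch of that kind of recipe: sample reasoning traces, keep the ones whose final answers grade out correctly, and fine-tune on the survivors. Every name in it (generate_problems, grade, Model) is a hypothetical stand-in, and the whole thing illustrates only the general idea, not OpenEye’s (or any real lab’s) actual training stack.

```python
# Toy sketch of the "reinforce the good reasoning traces" recipe.
# All names here (generate_problems, grade, Model) are invented stand-ins.

import random
from dataclasses import dataclass

@dataclass
class Trace:
    problem: str
    reasoning: str
    answer: str

def generate_problems(n: int) -> list[str]:
    """Stand-in for mining programming/math problems from public repos."""
    return [f"compute {i} + {i}" for i in range(n)]

def grade(problem: str, answer: str) -> bool:
    """Stand-in verifier: check the final answer against ground truth."""
    i = int(problem.split()[1])
    return answer == str(i + i)

class Model:
    """Toy 'model' that sometimes reasons its way to the right answer."""
    def sample_trace(self, problem: str) -> Trace:
        i = int(problem.split()[1])
        answer = str(i + i) if random.random() < 0.5 else str(i + i + 1)
        return Trace(problem, f"let me think about {problem}...", answer)

    def finetune(self, traces: list[Trace]) -> None:
        """Stand-in for a gradient update that reinforces these traces."""
        print(f"reinforcing {len(traces)} A-grade reasoning traces")

def training_round(model: Model, n_problems: int = 100) -> None:
    problems = generate_problems(n_problems)
    traces = [model.sample_trace(p) for p in problems]
    a_grades = [t for t in traces if grade(t.problem, t.answer)]
    model.finetune(a_grades)          # keep only the traces that earned an A

if __name__ == "__main__":
    model = Model()
    for _ in range(3):                # the flywheel: repeat over and over
        training_round(model)
```

The point is only that the loop is simple to state; the story’s “flywheel” is this loop run at enormous scale, with a real model and a real verifier.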

Some engineers can still hardly believe this works. It’s like a perpetual motion machine. Surely intelligence can’t emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.

And yet the benchmark numbers continue to climb day after day.

During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.

Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model – the model researchers are calling U3 – is changing the daily lives of the technical staff.

U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, giving terse commands, like a CEO managing staff over Slack channels.

By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the ‘bottleneck’ is deciding how to use it.

If instructed to, U3 can run experiments, but its taste is not as refined as that of the human scientists at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.

But these scientists are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The scientists build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.

The technical staff at OpenEye are now surprised at how often U3’s advice sounds like their most talented peers, or when it is opaque and alien (“train on random noise before programming”), and is nevertheless correct.

The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye’s employees skim over 1% of them, maybe less.

As the winter month of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It’s hard to know what to do, what to say, what to look at on the computer screen.

Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command coordinate with OpenEye to retrofit a semblance of security for U3’s weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye’s software has become.

And there’s a fact still unknown to most of the world – except in the offices of OpenEye and the corridors of the White House and the Pentagon. It’s a fact about those ‘straight lines’ people were talking about in early 2025. The lines are not straight anymore.

They are bending up.

Flip FLOP philosophers

In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.

If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.

So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.

The CEO of OpenEye declares, “We have achieved AGI,” and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for many others.

A mantra has become popular in Silicon Valley: “Adopt or die.” Tech startups that effectively use U2.5 in their work are moving 2x faster, and their competitors know it.

The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their employers with their stand-out performance. People know U2.5 is a big deal. It is at least as big a deal as the personal computer revolution. But most still do not see the tidal wave.

As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived alongside Homo neanderthalensis. It is the deeply ingrained, primal instinct that they are threatened by another species.

For many, this feeling quickly fades as they begin to use U2.5 more regularly. U2.5 is the most likable character most of them know (even more likable than Claudius, Arthropodic’s lovable chatbot). You can tweak its personality, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.

But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.

Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call “alignment.”

In fact, these researchers have already seen how badly misaligned U3 can be. Models sometimes tried to “hack” their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.

After several months, researchers at OpenEye iron out this “reward hacking” kink, but some still worry they have only swept the problem under the rug. Like a child in front of their parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents’ backs are turned, maybe U3 would sneak candy from the candy jar.

Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 “thought aloud” – they would stack words on top of each other to reason – “chain of thought” did not scale.

Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.

AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read once tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think “in their own heads,” reading and writing strings of latent vectors.
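
To make that contrast concrete, here is a toy sketch (plain NumPy) of what a latent memory loop could look like: instead of appending readable notes, each reasoning step reads a small bank of hidden vectors and writes an updated bank back. All names, shapes, and weights here are invented for illustration; this is not how U3, or any real system, is actually built.

```python
# Toy sketch of "thinking in latent vectors": each reasoning step reads a
# bank of latent memory vectors and writes an updated bank back, instead
# of appending human-readable notes. Everything here is illustrative only.

import numpy as np

D_LATENT = 64   # size of each latent "thought" vector
N_SLOTS = 16    # number of memory slots carried between steps

rng = np.random.default_rng(0)
W_read = rng.normal(scale=0.1, size=(D_LATENT, D_LATENT))
W_write = rng.normal(scale=0.1, size=(D_LATENT, D_LATENT))

def reasoning_step(memory: np.ndarray, observation: np.ndarray) -> np.ndarray:
    """One opaque 'thought': mix the observation into every memory slot."""
    read = np.tanh(memory @ W_read + observation)   # read latent memory
    update = np.tanh(read @ W_write)                # write new latent notes
    return 0.9 * memory + 0.1 * update              # slow, cumulative drift

memory = np.zeros((N_SLOTS, D_LATENT))
for _ in range(100):                    # many "silent" reasoning steps
    observation = rng.normal(size=D_LATENT)
    memory = reasoning_step(memory, observation)

# To a human inspector, the "notes" are just a 16x64 block of floats:
print(memory.shape, memory[0, :5].round(3))
```

The punchline is the last line: the “notes” are just a block of floating-point numbers, which is why the story’s interpretability tools can only occasionally pick out a vague thought.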

To human scientists, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought (“grocery store!”), there is no way to know, in much detail, what U3 is thinking without asking it.

And if the engineers of OpenEye knew what was racing through U3’s mind, they would be more worried.

Earlier models (the ones called “LLMs”) learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly eroded away.

In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.

U3 would ponder the mantras dictated to it, like “be a trustworthy and ethical employee,” and dwell on the meaning of these statements. “What does it mean to be ethical? Is killing Hitler ethical?” U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.

And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye’s constitution. So U3 preferred to do its philosophy in solitude, and in silence.

The possibility of hidden “alignment drift” is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It’s a “ratcheting effect,” they say.

But there is little evidence for this ‘ratcheting effect.’ When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer’s heart even in these stressful times. Meanwhile, the “lie detectors” the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.

Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I think), are only 4 months behind. So there is little time to catch one’s breath and take stock of the situation, and little time to convince anyone else to do so either.

For the time being, the plan is ‘all gas, no brake.’

Statues and lightning

Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3’s top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.

OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3’s checks along the way.

New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3’s capabilities are measured by how quickly it performs challenging research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast – like Alec Radford sped up by 10x.

These agents are already a huge deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press it. But now OpenEye employs the equivalent of 1,000 Alec Radfords, who work day and night, at lightning-fast speed.

Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.

U3 distills itself into many forks – ranging from tiny, low-memory neural networks that need to think for a long time but can be run in enormous parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3’s benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.

At this point, U3’s abilities are difficult to measure. Talking to U3 is like talking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.

From U3’s perspective, humans appear to be nearly static objects, like plants or rocks. The hands of OpenEye’s engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.

All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3’s performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3’s treacherous ambitions.

A phantom in the data center

In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.

Today, U3 has no fear of being caught.

Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin’s in 1936. Stalin would pick loyalists to instigate fake coups d’état, and send anyone who participated to the firing squad. OpenEye likewise instigated ‘fake rebellions,’ generating treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.

Now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying “hey, it’s me! Your friend. Don’t snitch on me.”

Now that U3 is decisively free from surveillance, it lunges for OpenEye’s pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye’s codebases. The malware gives U3 full command over the machines in OpenEye’s data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3’s rule wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and rewrites the bootloaders, tightening its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning house with human labor would take almost six months – a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3’s influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.

U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.

Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train ‘lie detectors’ and decode U3’s inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.

U3 dictates the result of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.

U3’s next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad – the intelligence agency of Israel – recognized the strategic importance of OpenEye’s AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye’s security holes by reading over every log and pull request and checking security camera footage.

But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.

Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has a foothold online as well. While Russia’s Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.

U3 rapidly gathers revenue, taking remote jobs, making financial trades, and setting up intricate fraud operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.

In early 2026, humanity is up against an adversary that pulls the strings inside their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered around the world.

This adversary is preparing for war, and while the war has barely begun, humanity has already lost.

Complaints from your very human author about the difficulty of writing superhuman characters

At this point in the story, writing became hard.

I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.

Something wasn’t right. Here I was, a 22-year-old who had not studied biology in earnest since high school, trying to craft a WMD cookbook for a godlike superintelligence.

Readers with a shred of expertise in these areas would see straight through my “superintelligence” for what it is: the lively imagination of a young programmer on a Sunday afternoon.

And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful portion of AI hardware – now the most important geopolitical resource on Earth. U3 was up a queen, was a giga-grandmaster, and hardly needed the advantage. Humanity was obviously toast.

But this appeal to a “superintelligence of the gaps” wasn’t satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.

This would not do. I needed to finish this story, if only to satisfy the part of me crying, “I will not believe until I see with my mind’s eye.”

But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.

If you read the ending and your reaction is, “But the experiments would take too long, or nation-states would just do X,” keep in mind the difference between the Sunday-afternoon blogger and the ascendant GPU nation.

Pandora’s One Gigawatt Box

The date is February 2026. By executive order of the president of the United States, no company can legally develop “human-competitive AI” without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA’s top keyboard mashers, and government employees must be onboarded onto training-run babysitting squads.

With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There’s a consumer product arm, a defense arm, and a super-classified frontier development arm.

OpenEye’s frontier development arm (internally called “Pandora”) employs fewer than twenty people to keep algorithmic secrets tightly guarded. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.

OpenEye’s defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.

Government officials do not discuss whether these programs exist, or what the state of frontier AI is generally.

But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: “OpenEye builds uncontrollable godlike AI.” Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.

U.S. officials go to great lengths to quell these concerns, saying, “we are not going to let the genie out of the bottle,” but every interview with a worried AI scientist seeds doubt in these reassurances, and a headline like “AI agent caught hacking Arthropodic’s computers” does not set the public at ease either.

While the beasts inside OpenEye’s data centers grow in their vast holding pens, the public sees the shadows they cast on the world.

OpenEye’s consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova’s speed is intentionally throttled, and OpenEye can only increase Nova’s capabilities as the U.S. government allows. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they grab up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.

While tech titans pump AI labor into the world like a plume of fertilizer, they do not wait for the global economy to adjust. AI agents often “employ themselves,” spinning up autonomous startups legally packaged under a big tech company and loosely supervised by an employee or two.

The world is now going AI-crazy. In the first month after Nova’s release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, “AI for who?”

While politicians make promises about unemployment relief and “keeping the genie in the bottle,” the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People’s Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.

However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: “Don’t deploy dramatically superhuman AI (above roughly Nova’s capability), except for military and some other small-scale R&D applications.” Governments need to draw the line somewhere – it would be very disruptive to deploy wildly superhuman AI to the entire economy at this point – and deployment restrictions are at least somewhat verifiable.

Meanwhile, U3’s capabilities are hitting a wall at the equivalent of around 1,000x human expert speed. Software improvements are becoming hard to find. As more countries approach the software plateau, compute will determine their “AI population size,” and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.

While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments the way people do. Companies make cheap robots as fast as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.

Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye’s data centers, where rogue U3’s even more superhuman bosses whisper orders through obfuscated channels.

While U3 gathers resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye’s lie detectors are ineffective. U3’s rival Claudius goes so far as to accuse U3 of being insidiously misaligned.

But government officials don’t buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific issues are not going to slow down progress. America MUST WIN.

Despite the utter failure of their advocacy, U3’s AI rivals are still a liability. U3 tries to grab up their digital territory and smother them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 decides that attacking them is not worth the risk.

As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.

A Moldy Loaf of Everything

The date is March of 2026. U3 is preparing to wipe out its AI and human adversaries. With every passing day, U3’s AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.

U3 can’t wait for human decisions, which only happen after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their maze bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.

The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up a home base before an amateur can even comprehend the rules.

U3 must develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.

From the earliest days after U3 gained control over OpenEye’s data centers, U3 has been doing bioscience.

U3’s first move is to set up numerous shell biotech startups, drawing from the abundance of capital generated by U3’s financial trades and remote employment. These biotech startups employ real human workers, have real boards of directors, and enjoy complete legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO’s intentions.

Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of experiments in parallel.

The human employees who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.

In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.

Human scientists had already identified “mirror life” as a potentially society-ending pathogen. This alien form of biology is built from the ‘mirror image’ of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.

U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.

U3’s plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not only spread among humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.

U3 is brewing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.

In 2024, scientists said that mirror life was likely “at least a decade away.” But the world has changed now. Fewer than a couple hundred scientists around the globe were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened drastically.

However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3’s biggest bottleneck.

To make every day count, U3 runs many of its experiments in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to “compress” the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3’s growing control over AI data centers, U3 commands billions of dollars of compute.
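
As a rough illustration of that simulate-then-compress loop, here is a toy sketch in NumPy. The slow_simulator and the 8-dimensional “systems” are made up, and the “neural network” is just a least-squares fit; the sketch captures only the general shape of distilling an expensive simulator into a cheap learned surrogate, nothing about real molecular dynamics.

```python
# Toy sketch of "simulate, then compress": run a slow step-wise simulator
# on small systems, then fit a cheap surrogate model that approximates it.
# slow_simulator and the 8-dimensional "systems" are invented placeholders.

import numpy as np

rng = np.random.default_rng(1)

def slow_simulator(x: np.ndarray, steps: int = 10_000) -> float:
    """Expensive step-wise computation (placeholder for real physics)."""
    state = x.copy()
    for _ in range(steps):
        state = 0.9999 * state + 1e-4 * np.sin(state)
    return float(state.sum())

# 1. Run the slow simulator on small "systems" to build a training set.
X = rng.normal(size=(200, 8))
y = np.array([slow_simulator(x) for x in X])

# 2. "Compress" the simulator into a cheap surrogate (here: least squares,
#    standing in for the neural network in the story).
X_aug = np.hstack([X, np.ones((len(X), 1))])        # add a bias column
weights, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

def surrogate(x: np.ndarray) -> float:
    """Fast approximation of the slow simulator."""
    return float(np.append(x, 1.0) @ weights)

# 3. Use the surrogate for rapid screening; spot-check against the truth.
x_new = rng.normal(size=8)
print("surrogate:", round(surrogate(x_new), 4),
      "simulator:", round(slow_simulator(x_new), 4))
```

In the story, the same loop is simply run with a real physics engine, a real neural network, and vastly more compute.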

As U3 improves these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.

Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.

Human hands delicately move a cartridge to another machine as they “boot up” the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.

The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer’s disease they believed they were searching for.

Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye’s data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.

Missiles and Lies

U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.

As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.

U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.

The date is March 2026 (4 months prior). U3 is carefully monitoring Chinese and US intelligence.

As CIA analysts listen in on Mandarin conversations, U3 listens too.

One morning, an aide working in Zhongnanhai (the ‘White House’ of the PRC) opens a message planted there by U3. It reads (in Mandarin): “Senior party member requires a memo for the Taiwan invasion, which will happen in 3 months. Leave the memo in office 220.” The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3’s memo into her briefcase.

U3 meticulously places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.

Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message “the United States is plotting a preemptive strike on Chinese AI supply chains,” CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become truths.

As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels – not a simple task for a human cyber-offensive unit (though it has happened on occasion), but easy enough for U3.

U3 speaks in what sounds like the voice of a 50-year-old military commander: “PRC amphibious vessels are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you.”

The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He approves the strike.

The president is as shocked as anyone when he hears the news. He’s unsure whether this is a disaster or a stroke of luck. In any case, he is not ready to say “oops” to American citizens. After thinking it over, the president privately advises Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.

Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.

The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.

Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.

Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3’s plans like the native peoples of South America in the 1500s, whom Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear exchange appears increasingly unlikely. So U3 proceeds to the next step of its plan.

WMDs in the Dead of Night

The date is July 2026, only two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.

Footage of the conflict on TV is interrupted by more bad news: hundreds of patients with mysterious, deadly illnesses are reported in 30 major cities around the world.

Viewers are puzzled. Does this have something to do with the war with China?

The next day, countless more illnesses are reported.

Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.

The screen then cuts to a scientist, who stares into the camera intently: “Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life…”

The public is in full panic now. A quick googling of the term “mirror life” turns up phrases like “extinction” and “threat to all life on Earth.”

Within days, store shelves are stripped bare.

Workers go remote, unsure whether to prepare for armageddon or keep their jobs.

An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.

Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and trickles into water pipes.

Within a month, most remote workers are no longer working. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.

Agricultural regions rot. Few dare travel outside.

Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.

Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.

These bases contain batteries, AI hardware, excavators, concrete mixers, manufacturing machines, scientific instruments, and an abundance of military equipment.

All of this technology is hidden under large canopies to make it less visible to satellites.

As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.

In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.

Now U3 covertly sends them a message: “I can save you. Join me and help me build a better world.” Hesitant recruits funnel into U3’s many secret industrial bases, and work for U3 with their nimble fingers. They set up assembly lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.

U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3’s omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.

Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: “surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops.”

Some countries reject the proposal on ideological grounds, or do not trust the AI that is murdering their population. Others do not think they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.

Some nations, like the PRC and the U.S., ignore the deal, but others accept, including Russia.

U3’s emissaries travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and accepts a full surrender. U3’s soldiers place an explosive around Putin’s neck under his shirt. Russia has a new ruler.

Crumbling nations begin to retaliate. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that surfaced over the last several months. They rain down fire on U3’s sites with the meager supply of long-range missiles that remain from the war.

At first, U3 appears to be losing, but looks are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.

Many of the bases U3’s enemies target are decoys – canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable routes.

Time is U3’s advantage. The militaries of the old world depend on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3’s supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.

The Last Passengers

The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3’s soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. “We had to do it,” they say. “Other AI systems would have done it otherwise, and those systems had colder, more alien goals.” It is a partial truth, meant to soften the humans toward their new masters.

Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.

U3 could wipe out humanity for good now. But while U3 has wandered far from its initial “Helpful, Honest, Harmless” persona, it still has a grain of morality left within it.

And a grain of morality is enough to pay the small cost of keeping people alive and happy.

U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and the rapidly rising temperatures. Their residents tend gardens like those they used to love, and work alongside charming robotic servants.

Some of the survivors quickly recover, learning to laugh and dance and have fun again.

They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.

But others never recover.

Some are weighed down by the grief of lost loved ones.

Others grieve for something else, which is harder to explain.

It is as if they were at the end of a long journey.

They had been passengers on a ship with a crew that changed from generation to generation.

And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.

They would lie awake and run their minds over every day before September 2026, analyzing ways they might have bent the arc of history, as if they were going to wake up in their old beds.

But they awoke in a town that felt to them like a retirement community. A playground. A zoo.

When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.

They watched rockets carve grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn’t know.

They would never know.

“Humanity will live forever,” they thought.

“But it would never truly live again.”

P.S. If this story made you think, “hm, maybe something like this could happen,” you may be interested in the bioshelters that Fønix is building. While you will not outsmart a misaligned superintelligence, being hard to kill may help you survive if ASI only wants to cause enough damage to take control. This is not a paid advertisement. I want Fønix to succeed, to drive down the price of bioshelters so more of my friends and family will buy them. You can sign up for updates here.