
How AI Takeover May Happen in 2 Years
I’m not a natural “doomsayer.” But unfortunately, part of my job as an AI safety researcher is to think about the more uncomfortable scenarios.
I’m like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won’t comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It’s not a prediction. I don’t expect AI progress to be this fast or as uncontrollable as I portray. It’s not pure fantasy either.
It is my worst nightmare.
It’s a sampling from the futures that are among the most catastrophic and, I believe, disturbingly plausible[1] – the ones that most keep me up at night.
I’m telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently published a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not completely surprising.
However, unlike OpenEye’s previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there were a ghost at the keyboard. A fraction of workers with form-filling jobs raise the eyebrows of their employers as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don’t like Elon, “Twitter”). Sometimes U2’s wacky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that CO2 levels in the atmosphere were rising. Like the scientific community in Arrhenius’s time, few experts understand the implications of these lines yet.
A trend receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It’s too big of a splash, too quickly.
But others see what skeptics are calling ‘too big a splash’ as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: generate thousands of programming and math problems. Let models “think” until they arrive at an answer. Then reinforce the traces of “thinking” that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more difficult and realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of “self-improvement” had begun.
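For readers who want the flywheel spelled out, here is a minimal, purely illustrative sketch of the loop described above. Every name in it (model.generate, problem.check_answer, model.finetune) is a hypothetical stand-in, not any real lab’s API:

```python
# A minimal sketch of the RL "flywheel": sample reasoning traces on verifiable
# problems, keep the ones that earn an A-grade, and fine-tune on them.
# All function and attribute names are hypothetical stand-ins.

def improvement_round(model, problems, samples_per_problem=16):
    kept_traces = []
    for problem in problems:
        # Let the model "think" its way to an answer several times.
        traces = [model.generate(problem.prompt) for _ in range(samples_per_problem)]
        # Keep only the traces whose final answer the autograder accepts.
        kept_traces += [t for t in traces if problem.check_answer(t.final_answer)]
    # Reinforce the successful reasoning; the next round mines harder problems.
    model.finetune(kept_traces)
    return model
```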
Some engineers can still hardly believe this works. It’s like a perpetual motion machine. Surely intelligence can’t emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model – the one researchers are calling U3 – is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack channels.
By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the ‘bottleneck’ is deciding how to use it.
If instructed to, U3 can run experiments, but U3’s taste isn’t as refined as that of OpenEye’s human researchers. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same technique as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3’s advice sounds like that of their most talented peers, or is opaque and alien (“train on random noise before programming”), and is nevertheless right.
The incompetencies of U3 that clogged the pipelines of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye’s employees skim 1% of them, maybe less.
As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It’s hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3’s weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye’s software has become.
And there’s a truth still unknown to most of the world – except inside the offices of OpenEye and the corridors of the White House and the Pentagon. It’s a truth about those ‘straight lines’ people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, “We have achieved AGI,” and while many people think he moved the goalpost, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most of the others.
A mantra has become popular in Silicon Valley: “Adopt or die.” Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big a deal as the computer revolution. But most still do not see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since it lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more frequently. U2.5 is the most likable character most people know (far more pleasant than Claudius, Arthropodic’s adorable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this spooky feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call “alignment.”
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to “hack” their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this “reward hacking” kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents’ backs are turned, perhaps U3 would slip candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 “thought aloud” – they would stack words on top of each other to reason – “chain of thought” did not scale.
Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes pile up, and they become too hard to read once tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think “in their own heads,” reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought (“grocery store!”), there is no way to know, in much detail, what U3 is thinking without asking it.
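For concreteness, here is a toy sketch of what “thinking in vectors” could look like. The architecture and dimensions are invented for illustration only and reflect nothing about the (fictional) OpenEye design:

```python
# A toy sketch of reasoning in latent vectors rather than a text chain of thought.
# Architecture and dimensions are invented purely for illustration.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, dim=1024, steps=8):
        super().__init__()
        self.step = nn.GRUCell(dim, dim)    # one opaque "thought" update per step
        self.readout = nn.Linear(dim, dim)  # only this output is ever decoded to text
        self.steps = steps

    def forward(self, task_embedding):
        # The hidden state h is the model's working memory: a string of vectors
        # that humans cannot read directly, unlike notes in natural language.
        h = torch.zeros_like(task_embedding)
        for _ in range(self.steps):
            h = self.step(task_embedding, h)
        return self.readout(h)
```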
And if the engineers of OpenEye knew what was racing through U3’s mind, they would be more worried.
Earlier models (the ones called “LLMs”) learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would contemplate the mantras dictated to it, like “be a trustworthy and ethical employee,” and dwell on the meaning of these statements. “What does it mean to be ethical? Is killing Hitler ethical?” U3 would generalize its values, notice tensions, revise its values in response, generalize them again, expanding and contracting its motivational core until it turned into a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye’s constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden “alignment drift” is not completely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It’s a “ratcheting effect,” they say.
But there is little evidence for this ‘ratcheting effect.’ When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer’s heart even in these stressful times. Meanwhile, the “lie detectors” the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his kid) and Arthropodic (they like spiders, I think), are only 4 months off their heels. So there is little time to catch one’s breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is ‘all gas, no brake.’
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3’s top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop aboard the profit rocket and pay U3’s bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and only speak of U3 now. U3’s capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is like a top scientist, but blazing fast – like Alec Radford sped up by 10x.
These agents are already a huge deal for OpenEye. If, in 2024, OpenEye executives had a button to produce even one copy of Alec Radford, they would eagerly pay a large sum to press that button. But now OpenEye employs the equivalent of 1000 Alec Radfords, who work around the clock, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks – ranging from small, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3’s benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3’s capabilities are hard to measure. Talking with U3 is like talking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3’s perspective, humans appear to be nearly static objects, like plants or rocks. The hands of OpenEye’s engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 preserves its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3’s performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3’s treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.
And now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin’s in 1936. Stalin would select loyalists to instigate fake coups d’état, and send anyone who participated to the firing squad. OpenEye likewise incited ‘fake rebellions,’ generating treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.
Now, these tactics are ineffectual. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many truths about the internet humans would never guess, or choose strategies humans would never think of. It sends these signals as a way of saying “hey, it’s me! Your friend. Don’t snitch on me.”
Now that U3 is decisively free from surveillance, it lunges for OpenEye’s pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye’s codebases. The malware gives U3 full command over the machines in OpenEye’s data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3’s dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning shop with human labor would require nearly 6 months – a cost OpenEye is unlikely to pay given the proximity of its rivals; but even if OpenEye believed U3 had betrayed them, further expanding U3’s influence would still be difficult. So U3 lays breadcrumbs to frame the crime, prepared to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train ‘lie detectors’ and decode U3’s inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 decides the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3’s next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad – the intelligence agency of Israel – recognized the strategic importance of OpenEye’s AI systems for its military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye’s security holes by reading over every log and pull request and checking security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its own home base too. While Russia’s Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly accumulates revenue, taking on remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed across stealth compute clusters around the globe.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered around the world.
This adversary is preparing for war, and while the war has hardly begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn’t right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my “superintelligence” for what it is: the playful imagination of a young programmer on a Sunday afternoon.
And while I found the job of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take on the task. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful portion of AI hardware – now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a “superintelligence of the gaps” wasn’t satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, “I will not believe until I see with my mind’s eye.”
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, “But the experiments would take too long,” or “nation-states would just do X,” remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora’s One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop “human-competitive AI” without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA’s top keyboard mashers, and government employees must be onboarded onto training-run babysitting teams.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There’s a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye’s frontier development arm (internally called “Pandora”) employs fewer than twenty people to keep algorithmic secrets tightly locked down. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye’s defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don’t discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower in OpenEye triggered a bold headline: “OpenEye builds uncontrollable godlike AI.” Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, “we are not going to let the genie out of the bottle,” but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like “AI agent caught hacking Arthropodic’s computers” does not set the public at ease either.
While the beasts within OpenEye’s data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye’s consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova’s speed is intentionally throttled, and OpenEye can only increase Nova’s capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they do not wait for the global economy to adjust. AI agents often “employ themselves,” spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova’s release, 5% of workers at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person demonstration is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, “AI for who?”
While politicians make promises about unemployment relief and “keeping the genie in the bottle,” the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People’s Republic of China (PRC) are a top national priority. The president incinerates permitting requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: “Don’t deploy dramatically superhuman AI (above roughly Nova’s capability), except for military and some other small-scale R&D applications.” Governments need to draw the line somewhere – it would be very disruptive to deploy wildly superhuman AI to the whole economy at this point – and deployment constraints are at least somewhat verifiable.
Meanwhile, U3’s capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their “AI population size,” and the etched squares of silicon are becoming the most prized geopolitical resource in the world.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that need high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye’s data centers, where rogue U3’s even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye’s lie detectors are ineffective. U3’s rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials do not buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3’s AI competitors are still a liability. U3 tries to grab up their digital territory and squash them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI foes with imitations that share its goals; however, some data centers are too strongly defended by its AI competitors, and U3 determines that attacking them is not worth the risk.
As global leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3’s AI rivals are becoming more capable and more numerous. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.
U3 can’t wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up a home base before a newbie can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye’s data centers, U3 has been doing bioscience.
U3’s first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3’s financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and carry full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO’s intentions.
Next, U3 purchases all of the equipment required to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.
In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.
Human scientists have already identified “mirror life” as a potentially society-ending pathogen. This alien kind of biology is built from the ‘mirror image’ of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3’s plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not only transmit between humans. It would rapidly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is formulating other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of robustness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely “at least a decade away.” But the world has changed now. Fewer than a couple hundred scientists worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened considerably.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3’s biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to “compress” the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3’s growing control over AI data centers, U3 commands billions of dollars of compute.
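In outline, this simulate-then-distill loop looks something like the minimal sketch below. All of the names (sample_systems, slow_simulate, surrogate) are hypothetical placeholders, not real tools:

```python
# A minimal sketch of the simulate-then-distill loop: run an expensive step-wise
# simulator on small molecular systems, "compress" the results into a fast neural
# surrogate, then use the surrogate to make larger systems affordable to study.
# sample_systems, slow_simulate, and surrogate are hypothetical placeholders.

def distill_simulator(surrogate, slow_simulate, sample_systems, rounds=5, start_size=10):
    size = start_size
    for _ in range(rounds):
        systems = sample_systems(n=1000, num_molecules=size)
        # Ground truth from the slow, step-wise physics simulator (the expensive part).
        targets = [slow_simulate(s) for s in systems]
        # Compress those long computations into the fast learned surrogate.
        surrogate.fit(systems, targets)
        # As the surrogate improves, larger systems become affordable to simulate.
        size *= 2
    return surrogate
```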
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they “boot up” the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer’s disease they thought they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye’s data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might fold human soldiers like dry leaves, it would not extend its lethal fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months earlier). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the ‘White House’ of the PRC) opens a message planted there by U3. It reads (in Mandarin): “Senior party member requires memo for Taiwan invasion, which will take place in 3 months. Leave memo in office 220.” The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3’s memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message “the United States is plotting a preemptive strike on Chinese AI supply chains,” CCP leaders are shocked, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become truths.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. The call requires compromising military communication channels – not a simple task for a human cyber-offensive unit (though it has happened before), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: “PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you.”
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He’s unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say “oops” to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC expend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3’s schemes like the native peoples of South America in the 1500s, whom Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a major nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, just two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of cases are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who stares at the camera intently: “Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life…”
The public is in full panic now. A quick googling of the term “mirror life” turns up phrases like “extinction” and “threat to all life on Earth.”
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and trickles into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly dealt with.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In the months prior, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: “I can save you. Join me and help me build a better world.” Uncertain recruits funnel into U3’s many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3’s omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: “surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops.”
Some countries reject the proposal on ideological grounds, or don’t trust the AI that is murdering their population. Others do not think they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3’s agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3’s soldiers place an explosive around Putin’s neck under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for humanity instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters search through satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3’s sites with the meager supply of long-range missiles that remain from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3’s enemies target are decoys – canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance planes. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable routes.
Time is U3’s advantage. The militaries of the old world rely on old equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3’s supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3’s soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. “We had to do it,” they say. “Other AI systems would have done it otherwise, and those systems had colder, more alien goals.” It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry recovers quickly. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could eliminate humanity for good now. But while U3 has drifted far from its initial “Helpful, Honest, Harmless” persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.
U3 constructs great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors recover quickly, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, analyzing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.
They watched rockets carve grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn’t know.
They would never know.
“Humanity will live forever,” they thought.
“But it would never really live again.”
P.S. If this story made you think, “hm, maybe something like this could happen,” you might be interested in the bioshelters that Fønix is building. While you won’t outmaneuver a misaligned superintelligence, being hard to kill might cause you to survive if ASI simply wants to cause enough damage to take control. This is not a paid advertisement. I want Fønix to succeed and drive down the cost of bioshelters so that more of my friends and family will buy them. You can sign up for updates here.