


Sam Altman Says AI Has Already Gone Past The Event Horizon But No Worries Since AGI And ASI Will Be A Gentle Singularity
Jun 12, 2025

Let's dive into this.
This piece analyzing a groundbreaking development in AI is part of my continuing coverage for Forbes on the evolving landscape of artificial intelligence, including unpacking and clarifying major AI advancements and complexities (refer to the link here).
Moving Toward AGI And ASI
To begin with, some foundational understanding is essential to frame this conversation.
There's a vast amount of ongoing research aimed at pushing AI further. The overarching aim is to reach artificial general intelligence (AGI), or possibly even artificial superintelligence (ASI), which lies beyond AGI.
AGI refers to AI that equals human-level intelligence and can perform cognitive tasks as competently as humans. ASI represents an intelligence far surpassing human capabilities across all measurable dimensions. In theory, ASI would outthink humans in every conceivable way. For a deeper exploration of conventional AI versus AGI and ASI, check my previous analysis linked here.
We are not yet at AGI.
In fact, it remains uncertain whether we will ever achieve AGI and, if we do, whether it will take decades or centuries to get there. Predictions about when AGI might arrive vary widely and lack solid empirical backing. ASI is even further removed from our current AI capabilities.
Sam Altman Sparks Debate
On June 10, 2025, Sam Altman posted a new blog titled “The Gentle Singularity,” where he shared several notable thoughts (excerpted below):
- “We have passed the event horizon; the acceleration has begun. Humanity is nearing the creation of digital superintelligence, and so far, it seems far less strange than expected.”
- “AI will bring significant benefits to society, particularly through faster scientific progress and greater productivity, leading to a much improved quality of life.”
- “By 2030, individuals will be able to accomplish far more than they could in 2020, and many people will find ways to capitalize on this transformation.”
- “This is how the singularity unfolds: what once seemed miraculous becomes routine, then basic.”
- “It’s hard to predict today what we’ll discover by 2035—perhaps one year solving high-energy physics, the next starting space colonization; or a materials science breakthrough followed by high-bandwidth brain-computer interfaces.”
- “We must address safety concerns, both technically and socially, but it’s also vital to ensure broad access to superintelligence due to its economic implications.”
There’s plenty to unpack in these statements.
His upbeat post takes firm positions on several unresolved topics, including the loosely defined AI event horizon, the potential effects of artificial superintelligence, speculative timelines, vague notions about the AI singularity, and more.
Let’s briefly examine the core ideas.
The Declared Event Horizon
A central question among AI researchers is whether we're heading toward achieving AGI and ASI. It could be yes, it could be no. Altman’s mention of the AI event horizon suggests that we’ve not only reached it but have already moved beyond it. According to him, the acceleration phase has officially started.
That’s a bold claim, and not everyone in the field agrees.
Consider the following points:
Supporters argue that the rise of generative AI and large language models (LLMs) clearly shows we are on the right path toward AGI and ASI. These systems demonstrate impressive language fluency and reasoning-like abilities, suggesting future breakthroughs are imminent.
However, others aren’t convinced that LLMs are the correct pathway. Concerns exist that generative AI may soon hit diminishing returns, as discussed in my earlier coverage (link here). We might be approaching a plateau, or worse, heading in the wrong direction entirely.
No one knows for certain whether we’re on the correct trajectory. Altman, however, has made his stance clear: we are well past the tipping point. Critics may see this as reinforcing OpenAI’s current strategy.
Only time will tell.
The Proposed AI Singularity
Another key idea in AI discussions is the concept of a singularity—a moment when AGI or ASI emerges rapidly and significantly, marking a dramatic leap in capability.
Some believe the AI singularity could occur almost instantaneously. One moment we're refining standard AI techniques, and the next, AGI or ASI appears fully formed. In this scenario, AI rapidly improves itself, producing an explosive, nearly overnight leap in capability.
Alternatively, the singularity might unfold gradually over time.
Others speculate it could take minutes, hours, days—or even years—to fully manifest. There’s also the possibility that the singularity doesn't happen at all, and it’s simply a theoretical construct without real-world application.
Altman’s post implies that the singularity is already underway (or perhaps set to accelerate around 2030 or 2035), and that it will be a slow, progressive process rather than a sudden event.
Food for thought.
The Forecasted Arrival Of AGI And ASI
Currently, predictions about when AGI and ASI will arrive rely heavily on speculation and intuition. Most proposed dates lack strong empirical support.
Many prominent AI figures make confident claims about when AGI will emerge, often pointing to 2030 as a likely milestone. See my detailed look at these forecasts here.
A more measured approach involves surveys of AI experts. These polls suggest that most believe AGI will arrive by 2040, as discussed in my article linked here.
It's unclear from Altman’s post whether he refers specifically to AGI or ASI—he uses the term "superintelligence," which could mean either. His shifting definitions of AGI and ASI have been previously noted (see the link here).
In five to ten years, we’ll know how accurate these predictions were. Don’t forget to check back.
All Positivity, No Warnings
One aspect of Altman’s post that has raised eyebrows—especially among AI ethicists—is the portrayal of AGI and ASI as purely beneficial developments. He describes a “gentle singularity” that leads to a better world. That’s a comforting vision, but it overlooks the other side of the coin.
The AI community is divided into two main groups regarding the arrival of AGI or ASI.
One group, known as AI doomers, warns that AGI or ASI could pose an existential threat to humanity. Some call this “P(doom),” referring to the probability of catastrophic outcomes.
The opposing group, known as AI accelerationists, believes AGI and ASI will solve global problems—ending hunger, curing diseases, boosting economies, and freeing people from mundane labor. They envision a cooperative relationship between AI and humans, where AI becomes humanity’s final invention—one that unlocks unimaginable possibilities.
There’s no definitive answer yet on who’s right. For a deeper dive into this debate, see the link here.
Clearly, Altman aligns with the optimistic camp—seeing only bright skies ahead.
Opinions Masked As Facts
It’s crucial to critically evaluate the many claims being made about AI’s future. Often, these statements are presented with such confidence that they seem like established truths. However, in reality, they are opinions, conjectures, and hypotheses—not facts.
Franklin D. Roosevelt once said: “There are as many opinions as there are experts.” Stay informed, stay cautious, and don’t accept any AI prophecy at face value.
You’ll thank yourself later.