
Table of Contents
The LLM problem
The third wave

AI 'hallucinates' constantly, but there's a solution

Jul 07, 2025 am 01:26 AM


The chief concern about big tech's experiments with artificial intelligence (AI) isn't that it might come to dominate humanity. The real issue is the persistent inaccuracy of large language models (LLMs) such as OpenAI's ChatGPT, Google's Gemini, and Meta's Llama, which seems to be an unavoidable challenge.

These errors, referred to as hallucinations, were notably highlighted when ChatGPT falsely accused US law professor Jonathan Turley of sexual harassment in 2023. OpenAI’s response was essentially to make Turley "disappear" by instructing ChatGPT not to address questions about him—an approach that is neither fair nor satisfactory. Addressing hallucinations on a case-by-case basis post-occurrence clearly isn’t effective.

This also applies to LLMs reinforcing stereotypes or providing answers skewed towards Western perspectives. Additionally, there is a complete lack of accountability for the widespread misinformation generated, given the difficulty in understanding how the LLM reached its conclusion initially.

Following the 2023 release of GPT-4, then the latest milestone in OpenAI's LLM development, we witnessed an intense debate around these issues. Arguably, that debate has since cooled, without good reason.

For example, the EU passed its AI Act rapidly in 2024, aiming to lead globally in regulating this area. However, the act heavily depends on AI companies to self-regulate without addressing the underlying issues directly. This hasn't prevented tech giants from launching LLMs globally for hundreds of millions of users while collecting their data without adequate oversight.


Meanwhile, recent assessments suggest that even the most advanced LLMs remain untrustworthy. Despite this, leading AI firms continue to avoid accountability for mistakes.

Regrettably, the tendency of LLMs to mislead and replicate bias cannot be resolved through gradual enhancements over time. With the introduction of agentic AI, where users can assign tasks to an LLM—such as booking holidays or optimizing monthly bill payments—the potential for complications is expected to increase significantly.

Neurosymbolic AI could potentially resolve these challenges while also decreasing the massive quantities of data required for training LLMs. So, what exactly is neurosymbolic AI, and how does it function?

The LLM problem

LLMs operate using deep learning: they are fed vast amounts of text and use sophisticated statistics to learn patterns that determine the next word or phrase in any given response. Each model's learned patterns are encoded in a neural network (an enormous array of numerical weights) run on powerful computers in expansive data centers.
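The statistical core of next-word prediction can be sketched with a toy model. Real LLMs learn billions of weights; the bigram model below simply counts which word most often follows another in a tiny invented corpus (the corpus and function names are illustrative, not from any real system):

```python
# A minimal sketch of next-token prediction: a bigram model that picks
# the word most frequently observed after the current one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word -> next-word frequencies across the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word seen after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat"/"fish" once each
```

The point the article makes follows directly: the model outputs whatever is statistically most likely, with no notion of whether it is true.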

LLMs can appear to reason through a process called chain-of-thought, generating multi-step responses that mimic logical human conclusions based on patterns observed in training data.

Undoubtedly, LLMs represent a significant engineering achievement. They excel at summarizing text and translating, potentially boosting productivity for those diligent and knowledgeable enough to catch their mistakes. Nevertheless, they have considerable potential to mislead because their conclusions are always based on probabilities—not comprehension.

A common workaround is the "human-in-the-loop" approach: ensuring humans still make final decisions when using AIs. However, attributing blame to humans doesn't solve the problem—they often still get misled by misinformation.

LLMs now require so much training data that synthetic data—data created by LLMs—is being used. This synthetic data can copy and amplify existing errors from its source, causing new models to inherit old weaknesses. Consequently, the cost of programming AIs to enhance accuracy after training—known as "post-hoc model alignment"—is increasing dramatically.
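The amplification effect described above can be illustrated with an invented toy calculation: if each model generation trains on its predecessor's outputs, it inherits that error rate and adds some of its own (the 5% starting rate and 0.02 increment below are made-up figures for illustration only):

```python
# Toy illustration of error amplification across model generations
# trained on synthetic data. All numbers are invented.
def train_on_outputs(error_rate):
    """The next generation inherits its teacher's error rate and
    adds a small amount of its own (0.02 is an assumed figure)."""
    return error_rate + 0.02

error = 0.05                     # generation 0: 5% of answers wrong
for gen in range(1, 4):
    error = train_on_outputs(error)
    print(f"generation {gen}: ~{error:.2f} error rate")
```

Each round compounds the last, which is why post-hoc correction gets more expensive over time.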

It also becomes increasingly challenging for programmers to pinpoint issues due to the growing number of steps in the model's thought process, making corrections progressively harder.

Neurosymbolic AI merges the predictive learning capabilities of neural networks with teaching the AI formal rules that humans use to deliberate more reliably. These include logic rules like "if a then b," such as "if it's raining, then everything outside is normally wet"; mathematical rules like "if a = b and b = c then a = c"; and agreed-upon meanings of words, diagrams, and symbols. Some of these will be directly inputted into the AI system, while others will be deduced independently by analyzing training data and performing "knowledge extraction."

This should result in an AI that never hallucinates and learns faster and smarter by organizing knowledge into clear, reusable parts. For instance, if the AI has a rule stating things are wet outside when it rains, there's no need to remember every example of wet items—this rule can apply to any new object, even one previously unseen.
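The rain example above can be sketched in code. This is a toy illustration of the general idea, not any production neurosymbolic system: a stand-in "neural" component produces a probability estimate, and a hard symbolic rule then derives wetness for any outdoor object, including one never seen before (the function names and the sensor-to-probability mapping are invented):

```python
# Toy combination of a learned signal with a hard symbolic rule.
def neural_rain_probability(sensor_reading):
    """Stand-in for a learned model: maps a sensor value (0-100)
    to a probability that it is raining."""
    return min(max(sensor_reading / 100.0, 0.0), 1.0)

def is_wet(obj, outdoors, rain_prob, threshold=0.5):
    """Symbolic rule: if it is raining, anything outdoors is normally wet.
    The rule applies to any object, seen in training or not."""
    raining = rain_prob >= threshold
    return raining and outdoors

p = neural_rain_probability(80)        # learned estimate: 0.8
print(is_wet("bicycle", True, p))      # True: rule generalizes to any object
print(is_wet("sofa", False, p))        # False: indoors, rule does not apply
```

One rule replaces countless memorized examples, which is the efficiency gain the article describes.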

During model development, neurosymbolic AI integrates learning and formal reasoning through a process known as the "neurosymbolic cycle." This involves a partially trained AI extracting rules from its training data and then embedding this consolidated knowledge back into the network before further training with data.
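The cycle above can be sketched schematically. The structure below is an assumed simplification, not a real framework: "training" is reduced to memorizing input-output pairs, and "knowledge extraction" to noticing that every pair fits one rule, which is then embedded back into the model before the next round:

```python
# Schematic sketch of the "neurosymbolic cycle": alternate statistical
# training with rule extraction, feeding consolidated rules back in.
def train_step(model, data):
    """Stand-in for gradient training: remember every (x, y) pair seen."""
    model["memory"].update(data)
    return model

def extract_rules(model):
    """Stand-in for knowledge extraction: if every pair satisfies
    y = x + 1, consolidate the examples into one reusable rule."""
    if model["memory"] and all(y == x + 1 for x, y in model["memory"].items()):
        return [lambda x: x + 1]
    return []

def embed_rules(model, rules):
    """Embed extracted rules back into the model before more training."""
    model["rules"] = rules
    return model

model = {"memory": {}, "rules": []}
for batch in [{1: 2, 5: 6}, {10: 11}]:   # successive training rounds
    model = train_step(model, batch)
    model = embed_rules(model, extract_rules(model))

# The consolidated rule now generalizes to inputs never seen in training.
print(model["rules"][0](100))  # 101
```

The extracted rule stands in for the "consolidated knowledge" the article describes: compact, inspectable, and applicable beyond the training data.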

This method is more energy efficient as the AI doesn't need to store extensive data, and the AI becomes more accountable since users can better control how it reaches particular conclusions and improves over time. It's also fairer because it can adhere to pre-existing rules, such as: "For any decision made by the AI, the outcome must not depend on a person's race or gender."

The third wave

The first wave of AI in the 1980s, known as symbolic AI, involved teaching computers formal rules applicable to new information. Deep learning followed as the second wave in the 2010s, and many see neurosymbolic AI as the third.

Applying neurosymbolic principles to niche areas is easier since rules can be clearly defined. Therefore, it's unsurprising that we've first seen its emergence in Google's AlphaFold, which predicts protein structures to aid drug discovery, and AlphaGeometry, which solves complex geometry problems.

For broader-based AIs, China's DeepSeek uses a learning technique called "distillation", a step in the same direction. Yet, to fully realize neurosymbolic AI's potential for general models, more research is needed to refine their ability to discern general rules and perform knowledge extraction.
