But now we face another dichotomy: between artificial neural networks, which in their modern deep-learning form are only about a decade old, and biological brain organoids grown from living tissue in a laboratory.
If you suspect this is going to complicate neurological research, you’re not alone – even with fully developed simulations and models, many questions about how the brain works remain unanswered.
Terminology and Methodology with Organoids
A couple of weeks ago, I wrote about the process of growing brain matter in a lab – not just growing brain matter, but growing a small pear-shaped proto-brain, which scientists call an organoid, that can apparently grow its own rudimentary eye structures.
Observing that sort of strange phenomenon feeds our instinctive tendency to connect vision to intelligence – to explore the relationship between the eye and the brain.
Researchers on both sides – AI research and bioscience – have been studying this relationship. In developing some of today’s most promising neural network models, researchers drew inspiration from a roundworm called C. elegans, whose small, fully mapped nervous system has famously driven advances in both AI and medical research.
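One family of C. elegans-inspired models uses neurons whose state evolves continuously in time, with an input-dependent gate – the idea behind so-called liquid time-constant networks. The sketch below is a minimal, simplified illustration of a single such neuron, not any particular published model; the parameter names and values are illustrative assumptions.

```python
import math

def ltc_step(x, I, dt=0.01, tau=1.0, A=1.0, w=0.5, b=0.0):
    """One Euler-integration step of a simplified liquid-time-constant-style
    neuron: the state x leaks toward zero with time constant tau, while an
    input-dependent gate f pulls it toward the ceiling A."""
    f = math.tanh(w * I + b)         # gate strength depends on the input I
    dx = -x / tau + f * (A - x)      # leak term + gated drive toward A
    return x + dt * dx

# Driving the neuron with a constant input settles it at a stable
# equilibrium strictly between 0 and A, rather than saturating.
x = 0.0
for _ in range(2000):
    x = ltc_step(x, 1.0)
```

The point of the sketch is the shape of the dynamics: unlike a standard discrete-layer network, the neuron’s effective time constant changes with its input, which is part of what makes these worm-inspired models compact and adaptive.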
Returning to the production of these brain-like organoids: in further reading, I found that scientists grow them from stem cells embedded in something called “Matrigel” – a gel-like protein mixture that emerged from decades of research on tumor material in lab mice. There’s a lot to unpack there, and we’ll probably hear much more about it as people realize these mini-brains are around.
One of the tech talks at a recent Imagination in Action event also piqued my interest in this area. It came from Kushagra Tiwary, who talked about exploring “what if” scenarios involving different kinds of evolution.
“One of the first questions that we ask is: what if the goals of vision were different, right? What if vision evolved for completely different things? The second question we're asking is: We all have lenses in our eyes, our cameras have lenses. What if lenses didn't evolve? How would we see the world if that didn't happen? Would we be able to detect food? … Maybe these things wouldn't happen. And by asking these questions, we can start to investigate why we have the vision that we have today, and, more importantly, why we have the visual intelligence that we have today.”
He had one more question. (Two more questions, really.)
“Our brains also develop at kind of the same pace as our eyes, and one would argue that, you know, we really see with our brains, not with our eyes, right? So what if the computational cost of the brain were much lower?”
He talked about the brain/eye scaling relationship, and key elements of how we process information visually.
Tiwary then suggested that this line of questioning could inform AI research, since we are building agents along some of the same lines that we humans are built.
At the same event, another standout talk came from Annika Thomas, a researcher working at the intersection of computer vision, robotics, and 3D scene understanding. Her focus: enabling collaborative visual intelligence for multi-agent systems operating in complex environments—from disaster zones to distant planets.
She described how today’s robots often operate like “solo travelers,” each building its own map of the world with little awareness of others around them. Her work focuses on changing that—teaching robots to “see intelligently” and to share what they see.
Thomas discussed a technique called Gaussian splatting, which allows robots to build fast, photorealistic 3D maps. Inspired by how our brains process visual information, this method helps teams of robots localize, recognize objects, and collaborate more effectively—whether they’re mapping a forest, packing boxes in a warehouse, or planning autonomous missions on the Moon. “It’s as if we’ve given them a shared consciousness about space,” she said. The vision she laid out—robots that see, understand, and act together—offers a glimpse of how collaborative AI could transform industries and reshape how we interact with intelligent machines.
The bottom line is that we have all of these highly complex models – we have the neural nets, which are fully digital, and now we have proto-brains growing in a petri dish.
Then we also have these bodies of research that show us things like how the human brain evolved, how it differs from its artificial alternatives, and how we can continue to drive advancements in this field.
Last, but not least, I recently saw that scientists believe we may be able to harvest memories from a deceased human brain in about 100 years – by roughly 2125.
Why so long?
I asked ChatGPT, and the answer I got was threefold: first, decomposition makes the job difficult; second, we don’t yet have a full map of the human brain; and third, the desired information is stored in delicate structures.
In other words, the memories in our brains are not encoded as binary ones and zeros, but in neural structures and synaptic strengths – and those are hard for any outside party to measure.
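A classic toy model makes this distinction concrete: in a Hopfield-style associative memory, a stored pattern lives nowhere as an explicit bit string – it is distributed across a matrix of synaptic weights, and recall means letting a noisy cue settle back into the stored state. This is a minimal textbook sketch, not a claim about how biological memory actually works.

```python
import numpy as np

def store(patterns):
    """Hebbian learning: the memories live in the weight matrix W,
    not as explicit bit strings anywhere."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n            # outer-product (Hebbian) rule
    np.fill_diagonal(W, 0)                   # no self-connections
    return W

def recall(W, probe, steps=10):
    """Iteratively update until the noisy probe settles into a stored memory."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                        # break ties deterministically
    return s

# Store one +1/-1 pattern, corrupt a bit, and recover the original.
p = np.array([[1, -1, 1, 1, -1, 1, -1, -1]], dtype=float)
W = store(p)
probe = p[0].copy()
probe[0] *= -1                               # flip one "bit" of the cue
restored = recall(W, probe)
```

Reading the memory back out of `W` without running the dynamics – that is, from the synaptic strengths alone – is exactly the kind of inverse problem that makes the harvesting idea so hard.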
It occurs to me, though, that if artificial intelligence itself has this vast ability to perceive small differences and map patterns, this type of capability may not be as far away as we think.
That’s the keyword here: think.