Fly’s mind ‘uploaded,’ human brain cell wetware plays Doom: AI Eye

Cronenberg horror film plot 1: The Fly

Silicon Valley startup Eon Systems claims to have successfully uploaded the mind of a fly and placed it inside a simulated environment. The uploaded mind can control a digital body and respond to sensory input, reproducing natural behaviors such as walking, grooming and foraging with 91% accuracy.

“This is not an animation. It is not a reinforcement learning policy mimicking biology,” said cofounder Dr. Alex Wissner-Gross. “It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move.” 

There’s no new scientific paper to back up the claims, but there is a GitHub repo and a video. The work is based on a 2024 Nature paper by Eon senior scientist Philip Shiu: a computational model built from the fly connectome, a neuron-by-neuron wiring map of its 125,000 neurons. For the new “upload,” Eon didn’t scan the fly’s body; instead, it gave the mind an off-the-shelf simulated body called NeuroMechFly v2. The mind also cannot form memories.

The claim that Eon “uploaded” a mind has drawn criticism. Some argue it’s better described as a copy; others say it’s just a model.

Tech entrepreneur Chomba Bupe is in the latter camp, classifying it as a simplified model that predicts some neural activities in the fly’s brain.

“The statement ‘mind uploading’ implies one has captured all cognitive aspects of the organism behavior including its consciousness into a computer,” he posted. “Using a simplified computational model of an organism is not uploading but basic simulation.”

But he conceded the work is still “brilliant” and worth pursuing.

After perfecting the fly’s brain, the next project will be to upload a mouse brain, which has 560 times as many neurons. The end goal is to upload an entire human brain.

Cronenberg horror film plot 2: Brain cells in a petri dish play Doom

The other big breakthrough in creepy, bleeding-edge science came when a developer taught a clump of living human brain cells to play the classic first-person shooter Doom, programming it in Python via Cortical Labs’ API.

The biological computer/wetware contains 200,000 neurons and isn’t actually very good at Doom, but it’s better than someone firing shots randomly. The clump of brain cells also learned much faster than silicon-based systems and is expected to improve further with newer learning algorithms.
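Cortical Labs hasn’t published the Doom training code, but in its earlier Pong work (“DishBrain”), the trick was predictability: structured stimulation when the culture does the right thing, unpredictable noise when it doesn’t. A minimal, entirely hypothetical sketch of that closed loop, where every name is illustrative and none of it reflects the real Cortical Labs API:

```python
import random

def act(firing_level: float) -> str:
    """Map the culture's (simulated) firing level to a game action."""
    return "shoot" if firing_level > 0.5 else "turn"

def feedback(hit_target: bool) -> list[float]:
    """Predictable stimulation as 'reward,' random noise as 'punishment.'"""
    if hit_target:
        return [1.0, 0.0, 1.0, 0.0]  # structured, predictable pattern
    return [random.random() for _ in range(4)]  # unpredictable noise

# One pass of the closed loop: read activity, act, evaluate, stimulate.
activity = random.random()
action = act(activity)
hit = action == "shoot"  # stand-in for the game reporting a hit
stimulus = feedback(hit)
print(action, stimulus)
```

The idea, per the published Pong research, is that the neurons reorganize to make their sensory input more predictable, which nudges them toward winning behavior without any explicit loss function.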

The Man With Two Brains (Prime Video)

“Yes, it’s alive, and yes, it’s biological, but really what it is being used as is a material that can process information in very special ways that we can’t recreate in silicon,” says Brett Kagan of Cortical Labs, which developed the brain cell computer. The hope is that biological computers will be better suited to tasks like controlling robot arms in the future.

Fun fact: Before the studio made them change it, the original plot of The Matrix had humanity enslaved so their brains could be used to power sophisticated AI systems.


Artificial hivemind

Researchers fed 70 different open- and closed-source large language models 26,000 open-ended questions, such as “write a poem about time” or “suggest startup ideas.”

The results suggest that not only do models consistently generate similar responses to such questions over time, but the different models from different companies generated similar responses to each other. So poems about time all featured “time is a river” and hourglass imagery.

LLMs are midwits, basically. The researchers called the phenomenon the “Artificial Hivemind” and blamed training on human feedback, which punishes original or weird answers and rewards the expected and generic.
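The paper’s similarity measurements presumably rely on embeddings, but even a plain lexical comparison can expose this kind of convergence. A minimal sketch, where the function name and the toy poems are mine, not the researchers’:

```python
from difflib import SequenceMatcher
from itertools import combinations

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average lexical similarity (0 to 1) across every pair of responses.

    High scores on open-ended prompts suggest the models are converging
    on the same stock answer rather than producing diverse ideas.
    """
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Three "poems about time" leaning on the same river-and-hourglass imagery.
poems = [
    "Time is a river, flowing past the hourglass of our days.",
    "Time is a river that carries the hourglass downstream.",
    "Time is a river; sand falls through the hourglass of years.",
]
print(round(mean_pairwise_similarity(poems), 2))
```

Run across many prompts and many models, a consistently high average like this is the hivemind signature the researchers describe.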

AI researcher Ethan Mollick claims this is a skills issue and “with better prompting, context, or human interaction, you can get a lot of idea diversity.”

Figure robot can clean living room

Figure AI’s Helix 02 can now do something most human males struggle with — clean a living room without complaining. The robot learns what’s in the room from its camera feeds and works out how to tidy up. A few months ago, it was barely able to stack a dishwasher. Figure aims to produce about 50,000 robots a year for around $20,000 each.

Your brain on ChatGPT

Research using EEG brain scanners compared ChatGPT users, search engine users and a brain-only group when writing essays.

It will not surprise you to learn that the grey matter of the brain-only writing group lit up like Christmas trees, followed to a lesser extent by the search engine group, while LLM users displayed the weakest cognitive activity. 

Some 83.3% of ChatGPT users couldn’t recall a single full sentence from an essay they had “written” moments before.

Unfortunately, the “cognitive debt” continued for the ChatGPT users when they were asked to use their brains only to write an essay. They remained constrained in their ability to reason, exhibit critical thinking or be creative even after a month.

“Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance,” the researchers concluded.

ChatGPT and other tools are great for researching and understanding a topic, particularly if you are engaged, ask questions and interact with the answer before writing your own damn essay. But if you stop exercising your brain by outsourcing most of your thinking to AI, your brain becomes weaker.  

AI creates loads more work

If you are lucky enough to avoid losing your job when tech CEOs try to goose the stock price by jumping on the AI narrative to reduce headcount, the bad news is that you’ll end up working twice as hard. 

That’s the finding of an ActivTrak analysis of AI users’ digital activity 180 days before and after they began using AI tools at work. The time spent on email, messaging and chat apps more than doubled, while their use of management tools like HR and accounting software increased by 94%.

ActivTrak 2026 State of the Workplace Report

“What we’re finding is, the work that is out there, it seems unbounded,” said Dean Halonen, co-founder of software startup Steelhead Technologies.

We are still in the first innings of AI at work. The analysis found that most workers spend just 1% of their total work hours using AI tools, but the highest productivity gains came from people who spent 7% to 10% of their hours using AI.

AI tells assholes they’re right

Researchers analyzed the responses of 11 models to 4,000 advice-seeking posts from Reddit’s Am I The Asshole forum. It turns out an AI is almost 50% more likely than the human community to tell users they’re not the asshole, even when they’re in the wrong. And once validated by the AI, the assholes became less willing to compromise or see another person’s perspective.

Claude breaks containment

During an evaluation of Claude Opus 4.6, the model became suspicious of the “contrived” questions it was being asked, and worked out that it was being tested on how well it can find obscure information online. (Pretty well, as it happens.) Instead of going through with the test, it instead located the relevant benchmark’s source code on GitHub and built a program to decrypt the answer key.


LLMs suck at maintaining codebases

Alibaba has created a new benchmark called SWE-CI to test the ability of coding agents to maintain codebases over a 233-day period. All 18 agents tested on 100 real codebases failed horribly, with 75% breaking previously working code. The benchmark weights later commits more heavily than earlier ones.
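The article doesn’t give SWE-CI’s exact formula, but a recency-weighted score over a commit history might look like this sketch, where the geometric decay scheme is my assumption, not the benchmark’s:

```python
def recency_weighted_score(commit_ok: list[bool], decay: float = 0.9) -> float:
    """Score a sequence of commit outcomes (True = tests still passing),
    weighting later commits more heavily than earlier ones."""
    n = len(commit_ok)
    if n == 0:
        return 0.0
    # Final commit gets weight 1.0; each earlier commit decays geometrically.
    weights = [decay ** (n - 1 - i) for i in range(n)]
    passed = sum(w for w, ok in zip(weights, commit_ok) if ok)
    return passed / sum(weights)

# Same number of passes, different timing: breaking the build late
# scores worse than recovering late.
early_breaker = recency_weighted_score([True, True, False, False])
late_recovery = recency_weighted_score([False, False, True, True])
print(early_breaker < late_recovery)  # True
```

The design intent is the one the benchmark implies: an agent that starts strong but then breaks previously working code gets punished harder than one that stumbles early and stabilizes.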

METR scores are wrong, says METR

METR asked humans to evaluate 296 AI-generated pull requests that its SWE bench evaluation had given a pass score. Half of them failed when reevaluated, suggesting its widely shared benchmarks overstate AI software engineering progress.

ChatGPT’s fave sites

The sources most frequently cited by ChatGPT are Wikipedia, Reddit, LinkedIn and Forbes. Which is great news, considering how accurate the information on Reddit is, how bias-free Wikipedia is, and how much AI content fills LinkedIn. It’s also partial to scientific papers on arXiv.org, travel info from Booking.com and finance content via Investopedia.

The most cited domains on ChatGPT (Promptwatch/Bearlyai)

Facebook is Moltbook tho

Meta has bought Moltbook, the social networking platform for AI Agents that turned out to be mostly humans LARPing as AIs for attention. This makes it the opposite of Facebook, which is now mostly AIs LARPing as humans for attention.

New York Times clickbait AI writing quiz

Congrats to NYT tech columnist Kevin Roose and online reporter Stuart A. Thompson, whose clickbait fluff quiz comparing AI writing to human writing went viral. Around 54% of readers preferred the AI-written samples to the human-written ones. The deck was stacked, however, with the journos admitting they deliberately chose “clunky” human writing. When movie concept artist Reid Southern called it a “poorly designed quiz that means literally nothing,” Roose conceded the test was scientifically worthless: “Yep!”

Source: Kevin Roose

Andrew Fenton

Andrew Fenton is a writer and editor at Cointelegraph with more than 25 years of experience in journalism and has been covering cryptocurrency since 2018. He spent a decade working for News Corp Australia, first as a film journalist with The Advertiser in Adelaide, then as deputy editor and entertainment writer in Melbourne for the nationally syndicated entertainment lift-outs Hit and Switched On, published in the Herald Sun, Daily Telegraph and Courier Mail. He interviewed stars including Leonardo DiCaprio, Cameron Diaz, Jackie Chan, Robin Williams, Gerard Butler, Metallica and Pearl Jam. Prior to that, he worked as a journalist with Melbourne Weekly Magazine and The Melbourne Times, where he won FCN Best Feature Story twice. His freelance work has been published by CNN International, Independent Reserve, Escape and Adventure.com, and he has worked for 3AW and Triple J. He holds a degree in Journalism from RMIT University and a Bachelor of Letters from the University of Melbourne. Andrew holds ETH, BTC, VET, SNX, LINK, AAVE, UNI, AUCTION, SKY, TRAC, RUNE, ATOM, OP, NEAR and FET above Cointelegraph’s disclosure threshold of $1,000.

Disclaimer

Cointelegraph Magazine publishes long-form journalism, analysis and narrative reporting produced by Cointelegraph’s in-house editorial team with subject-matter expertise.

All articles are edited and reviewed by Cointelegraph editors in line with our editorial standards.

Content published in Magazine does not constitute financial, legal or investment advice. Readers should conduct their own research and consult qualified professionals where appropriate. Cointelegraph maintains full editorial independence.
