Renting a "minibrain" for $500 per month is now possible. The implications are horrific.

I know that one man's dystopia may be another man's dream, and it is certainly easier to see things negatively than positively. Still, I find it hard to believe that anyone would not be disturbed by the news that you can now rent a "brain organoid" to be used for "realtime reading and neural stimulation".

This is not the futuristic dream of some techno-optimist. These in vitro "organoids" can be rented right now through the Neuroplatform of FinalSpark, a wetware computing firm.

For only $500 per month, university and education groups get access to neural organoid processors: essentially clusters of brain tissue containing thousands of neurons, connected to electrodes.

The idea is that biological matter can be used for computing because it is more energy-efficient than transistors. FinalSpark's research paper explains that they "developed a hardware-software system that allows electrophysiological experimentation on an unprecedented scale," enabling researchers to run experiments on neural organoids that can last up to 100 days.

FinalSpark has developed a Python Application Programming Interface that allows researchers to conduct research on these organoids remotely. According to my quick reading of the research article, the experiments at this point seem to be aimed at discovering how biological neural networks (BNNs) function: how stimulation and output work, for example.
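To make the shape of that concrete, here is a minimal, purely hypothetical Python sketch of what a remote stimulate-and-read loop might look like. None of the names below (FakeOrganoidSession, stimulate, read_spikes, and so on) come from FinalSpark's actual API or documentation; the stub only illustrates the general pattern the paper describes: pick an electrode, deliver a pulse, record the evoked activity, repeat.

    # Hypothetical sketch only: class and method names are invented for
    # illustration and are NOT FinalSpark's real API. The stub simulates
    # the general stimulate-and-read pattern described above.
    import random
    import time


    class FakeOrganoidSession:
        """Stand-in for a remote session with one organoid on an electrode array."""

        def stimulate(self, electrode: int, amplitude_uA: float, duration_ms: float) -> None:
            # A real system would deliver an electrical pulse through the chosen electrode.
            print(f"stimulating electrode {electrode}: {amplitude_uA} uA for {duration_ms} ms")

        def read_spikes(self, electrode: int, window_ms: float) -> list[float]:
            # A real system would return spike timestamps recorded after the pulse;
            # here we just fabricate a plausible-looking response.
            n_spikes = random.randint(0, 5)
            return sorted(random.uniform(0, window_ms) for _ in range(n_spikes))


    def run_experiment(session: FakeOrganoidSession, trials: int = 3) -> None:
        """Stimulate one electrode repeatedly and log the evoked activity."""
        for trial in range(trials):
            session.stimulate(electrode=4, amplitude_uA=2.0, duration_ms=0.2)
            spikes = session.read_spikes(electrode=4, window_ms=50.0)
            print(f"trial {trial}: {len(spikes)} spikes at {spikes} ms")
            time.sleep(0.1)  # pause between trials


    if __name__ == "__main__":
        run_experiment(FakeOrganoidSession())

The point of the sketch is only this: from the researcher's side, the organoid is reduced to a remote object with a stimulate call and a read call, looped for as long as the tissue survives.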

The research is laying the foundation for a type of neural network more efficient than the transistor-based (artificial) kind. This is, y'know, a living neural network, and what is being done to it I could hesitantly describe as a form of brain control.

It's possible that I'm a sentimental, head-in-the-clouds philosopher, but I can't resist focusing on the "terrifying" part, reeling at the idea of using (part of) a human brain in a vat to compute in this way.

The "brain-in-a vat" scenario popularised by Hilary Putnam is a thought exercise that asks how we know that we are not brains in vats with our neurons stimulated to produce the apparent reality around us.

This "Wetware", however, makes me think about an inverse experiment: how do we know that the brains in the vats that we are stimulating don't experience their own apparent reality in a limited form? How can we be sure they aren't conscious in a limited way?

What are the ethical implications if we do not know this? I've just finished reading Ray Kurzweil's The Singularity is Nearer. He suggests that we have a moral responsibility to treat AI as conscious if there is a chance that it could be. Maybe the same ethical responsibility applies to this wetware: if there's even a small possibility that it might be conscious, we should treat it accordingly.

What will be the psychological effects and societal consequences of treating brain tissue as a commodity for data processing and computation? Where might wetware research lead in the future? Do we really wish to know how to control brains, and therefore minds, computationally?

These are questions, I suppose, that can also be posed about brain-computer interfaces (BCIs). But it's a little more frightening to see brain tissue in vitro attached to a microchip, being stimulated and recorded over and over again for its entire 100-day lifespan. I guess progress will continue to march forward regardless.
