“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. “What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That’s very different from what you and I do.” In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today’s technology were unlikely to lead to AGI.
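The "take in words and predict the next most likely word" mechanism Frosst describes can be sketched in a few lines. This is only a toy bigram counter, not how a modern model actually works (those use learned neural networks over long contexts), but the core loop is the same: given what came before, output the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy corpus; any text would do.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Scaled up by many orders of magnitude, with a neural network instead of a lookup table, this is the family of systems the article is talking about.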
Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion… And scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI’s imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.
Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world’s technologists have not yet dreamed up. There is no way of knowing how long that will take. “A system that’s better than humans in one way will not necessarily be better in other ways,” Harvard University cognitive scientist Steven Pinker said. “There’s just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven’t even thought of yet. There’s a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets.”
I am not sure I believe in AGI. Like it will never exist, because it can’t. I could be wrong. Hell, I’m often wrong, I just don’t think a machine will ever be anything but a machine.
If intelligence arose in us by pure trial and error, there’s no reason it could not be made artificially.
How would you define AGI then? If the definition is just “intelligence” then I would say we are already there. I think the concept is infinitely complex and our human understanding may never totally get there. Again, I could be wrong. Technology changes. People said man could never fly too.
What, precisely, do you think a brain is?
Organic matter that can’t be replicated
This is so obviously untrue that it’s actually amazing to see someone say it.
It’s replicated each time there’s a child made.
That’s organic matter creating more organic matter; it isn’t AGI. This isn’t Black Mirror, you can’t duplicate consciousness from organic matter to a digital medium. We don’t even know what consciousness is. I understand that before the airplane, people thought that manned flight was impossible. But how can we create consciousness when we don’t even know what the goal is? If the understanding of consciousness changes and the technology to create a digital rendition of it comes about, then clearly my position will change. I guess I just lack the foresight to see that understanding and technology ever coming to fruition.
Way to miss the point.
The brain is a physical system. If it is matter it can be replicated, as is demonstrated by the human brain being replicated daily a mind-boggling number of times. There is absolutely nothing in principle preventing human beings from replicating one, or one that functions in a similar way, at some point in the distant future.
There’s just nothing even close to that now, and never will be while capitalism fucks everything up by turning everything they have into grifts that suck up all resources for no overall gain.
I don’t know why you are being rude.
A system in an organic medium and a system in a digital medium are completely different things, and your logic doesn’t apply to both just because the word “system” covers each medium. It’s like saying weather can happen inside my Nintendo, because they are both systems.
Saying we can create a being of understanding in a digital system is like saying we can travel faster than light by bending spacetime through a wormhole. Can you do it? Maybe, but current science says there’s no evidence for the ability outside of thought exercises. Right now, it’s science fiction, and there’s no current way of even testing whether it can exist.
The medium doesn’t matter. The behaviour does.
There’s nothing magical about “organic” either. That just means it’s based on carbon.
I believe it would be possible with quantum computing and more understanding of actual brains. All you have to do is mimic how a real brain works with electronics. Hell, you can make a computer out of brain cells right now, and it can solve problems better than these “throw everything at the wall and see what sticks” kinda AIs. Though… If a computer has a real biological brain in it doing the thinking, is it artificial intelligence?
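For what “mimic how a real brain works with electronics” looks like at the smallest scale, computational neuroscience has a standard starting point: the leaky integrate-and-fire neuron, where voltage leaks toward rest, accumulates input, and fires on crossing a threshold. A minimal sketch (all parameter values here are illustrative, not taken from any particular model):

```python
def simulate_lif(current, steps=200, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (Euler integration).

    Membrane voltage decays toward v_rest, is driven up by the
    input current, and emits a spike when it crosses v_thresh,
    after which it resets. Returns the spike times.
    """
    v = v_rest
    spikes = []
    for t in range(steps):
        v += (-(v - v_rest) + current) * (dt / tau)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

print(simulate_lif(current=1.5))  # constant drive -> regular spiking
print(simulate_lif(current=0.0))  # no input -> no spikes
```

Real neurons are far messier than this (ion channels, dendrites, neuromodulators), which is part of why “all you have to do is mimic it” hides an enormous amount of unknown biology.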
What would quantum computing do to help?
Compute!
Quantumly!
The person who came up with the Chinese Room Argument argued that if a brain was completely synthetic, even if it were a perfect simulation of a real brain, it would not think: it would not have a genuine understanding of anything, only a simulation of an understanding. I don’t agree (though I would still say it’s “artificial”), but I’ll let you draw your own conclusions.
From section 4.3:
The problem with Searle’s Chinese Room is that he’s basically describing how the brain works. NOWHERE in the brain is there a cell (or even a group of cells) that defines “Chinese Language”. The “Chinese Language” encoding is spread out over an absolutely mind-numbing collection of cells and connections that individually each fire off on some (relatively) simple rules. No individual cell “knows” Chinese. They “know” voltage levels, chemical concentrations, and when to fire along which connection based on these.
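That distributed-encoding point can be made concrete with about the smallest possible example: a two-unit hidden layer computing XOR. No single unit “knows” XOR; each one just fires when a weighted sum crosses a threshold, and the function exists only in how they are wired together. (The weights below are a textbook hand-picked construction, shown only to illustrate the idea.)

```python
def step(x):
    # A neuron-like unit: fire (1) if its weighted input
    # crosses the threshold, otherwise stay silent (0).
    return 1 if x > 0 else 0

def xor_net(a, b):
    """Tiny two-layer network computing XOR.

    Each unit only thresholds a weighted sum of its inputs;
    none of them individually encodes the XOR function.
    """
    h_or  = step(a + b - 0.5)        # fires if at least one input is on
    h_and = step(a + b - 1.5)        # fires only if both inputs are on
    return step(h_or - h_and - 0.5)  # "OR but not AND" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Inspect any one unit in isolation and you find only a threshold rule, just as inspecting one neuron finds only voltages and firing rules, never “Chinese”.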
So if we take Searle at face value … we don’t think either.