“The technology we’re building today is not sufficient to get there,” said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under Geoffrey Hinton, the most revered AI researcher of the last 50 years. “What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That’s very different from what you and I do.” In a recent survey by the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today’s technology were unlikely to lead to AGI.
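
Frosst’s description of models that “take in words and predict the next most likely word” is the autoregressive loop at the heart of today’s chatbots. A minimal sketch of that loop, with an invented toy bigram table standing in for the neural network:

```python
# Toy sketch of the loop Frosst describes: a model that only ever answers
# "what word most likely comes next?" The bigram table is invented for
# illustration; a real LLM replaces it with a neural network scoring tens
# of thousands of possible next tokens.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(current: str) -> str:
    """Greedily pick the most likely next word given the current one."""
    dist = BIGRAMS.get(current)
    return max(dist, key=dist.get) if dist else "<end>"

def generate(prompt: str, max_words: int = 10) -> str:
    """Generate text one predicted word at a time, feeding each back in."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

Everything a chatbot appears to do comes from running this predict-and-append loop, just at vastly larger scale.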

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion… And scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI’s imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today’s technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other skeptics say pushing machines to human-level intelligence will require at least one big idea that the world’s technologists have not yet dreamed up. There is no way of knowing how long that will take. “A system that’s better than humans in one way will not necessarily be better in other ways,” Harvard University cognitive scientist Steven Pinker said. “There’s just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven’t even thought of yet. There’s a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets.”

  • supersquirrel@sopuli.xyz · +25 / −1 · edited · 1 day ago

    The funniest shit about AI is that even if the current line of research were promising (it isn’t, beyond specific domain use cases), the ecological devastation being caused by AI datacenters will destroy this planet as a habitable place WELL before we develop an artificial general intelligence.

    This is all so pointlessly dumb, on so many levels

  • scott · +17 · 1 day ago

    OpenAI has contractually defined the development of AGI using a profit metric, so get ready for them to claim they’ve developed AGI even though they never will.

    • BertramDitore@lemm.ee · +17 · 24 hours ago

      Yup, and that’s just one of many things that make me confident in my impulse to never trust OpenAI or any company that is just so obviously a money-grabbing grift.

      [OpenAI and Microsoft] came to agree in 2023 that AGI will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits. Source.

      What a ridiculous way of thinking.

  • FenderStratocaster@lemmy.world · +10 / −2 · 1 day ago

    I am not sure I believe in AGI. Like, it will never exist, because it can’t. I could be wrong. Hell, I’m often wrong. I just don’t think a machine will ever be anything but a machine.

      • FenderStratocaster@lemmy.world · +1 · 6 hours ago

        How would you define AGI then? If the definition is just “intelligence,” then I would say we are already there. I think the concept is infinitely complex, and our human understanding may never totally get there. Again, I could be wrong. Technology changes. People said man could never fly, too.

          • FenderStratocaster@lemmy.world · +2 · edited · 6 hours ago

            That’s organic matter that is created; it’s not AGI. This isn’t Black Mirror, you can’t duplicate consciousness from organic matter to a digital medium. We don’t even know what consciousness is. I understand that before the airplane, people thought manned flight was impossible. But how can we create consciousness when we don’t even know what the goal is? If the understanding of consciousness changes and the technology to create a digital rendition of it comes about, then clearly my position will change. I guess I just lack the foresight to see that understanding and technology ever coming to fruition.

            • Way to miss the point.

              The brain is a physical system. If it is matter it can be replicated, as is demonstrated by the human brain being replicated daily a mind-boggling number of times. There is absolutely nothing in principle preventing human beings from replicating one, or one that functions in a similar way, at some point in the distant future.

              There’s just nothing even close to that now, and never will be while capitalism fucks everything up by turning everything it touches into grifts that suck up all resources for no overall gain.

              • FenderStratocaster@lemmy.world · +1 · 5 hours ago

                I don’t know why you are being rude.

                A system in an organic medium and a system in a digital medium are completely different; the word “system” doesn’t let you apply the same logic to both. It’s like saying weather can happen inside my Nintendo, because they are both systems.

                Saying that we can create a being that understands in a digital system is like saying we can travel faster than light by bending spacetime with a wormhole. Can you do it? Maybe, but current science says there’s no evidence of the ability outside of thought exercises. Right now, it’s science fiction, and there’s no current way of even testing whether it can exist.

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · +3 / −2 · edited · 24 hours ago

      I believe it would be possible with quantum computing and more understanding of actual brains. All you have to do is mimic how a real brain works with electronics. Hell, you can make a computer out of brain cells right now, and it can solve problems better than these “throw everything at the wall and see what sticks” kinda AIs. Though… If a computer has a real biological brain in it doing the thinking, is it artificial intelligence?
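
      A common software starting point for “mimicking how a real brain works” is the leaky integrate-and-fire neuron. A minimal sketch of the single-cell dynamics, with every constant invented for illustration:

      ```python
      # Minimal leaky integrate-and-fire neuron, a standard toy model of a
      # spiking brain cell. All constants are invented for illustration.
      def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                       v_threshold=1.0, v_reset=0.0):
          """Integrate input current; record a spike when voltage crosses threshold."""
          v = v_rest
          spikes = []
          for step, i_in in enumerate(input_current):
              # Membrane voltage leaks toward rest while integrating the input.
              v += dt * (-(v - v_rest) + i_in) / tau
              if v >= v_threshold:
                  spikes.append(step)  # the neuron "fires"
                  v = v_reset          # and resets
          return spikes

      # A steady drive above threshold makes the neuron fire at a regular rate.
      print(simulate_lif([1.2] * 60))  # -> [17, 35, 53]
      ```

      Real brain-mimicking systems wire millions of such units together; the sketch shows only what one cell does.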

      • hedgehog@ttrpg.network · +1 / −1 · edited · 16 hours ago

        Though… If a computer has a real biological brain in it doing the thinking, is it artificial intelligence?

        John Searle, who came up with the Chinese Room Argument, argued that if a brain were completely synthetic, even a perfect simulation of a real brain, it would not think: it would not have a genuine understanding of anything, only a simulation of understanding. I don’t agree (though I would still say it’s “artificial”), but I’ll let you draw your own conclusions.

        From section 4.3 of the Stanford Encyclopedia of Philosophy entry on the Chinese Room Argument:

        Consider a computer that operates in quite a different manner than an AI program with scripts and operations on sentence-like strings of symbols. The Brain Simulator reply asks us to suppose instead the program parallels the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese – every nerve, every firing. Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese. Paul and Patricia Churchland have set out a reply along these lines, discussed below.

        In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese. (Note however that the basis for this claim is no longer simply that Searle himself wouldn’t understand Chinese – it seems clear that now he is just facilitating the causal operation of the system and so we rely on our Leibnizian intuition that water-works don’t understand (see also Maudlin 1989).) Searle concludes that a simulation of brain activity is not the real thing.

        However, following Pylyshyn 1980, Cole and Foelber 1984, and Chalmers 1996, we might wonder about gradually transitioning cyborg systems. Pylyshyn writes:

        If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.

        These cyborgization thought experiments can be linked to the Chinese Room. Suppose Otto has a neural disease that causes one of the neurons in his brain to fail, but surgeons install a tiny remotely controlled artificial neuron, a synron, alongside his disabled neuron. The control of Otto’s artificial neuron is by John Searle in the Chinese Room, unbeknownst to both Searle and Otto. Tiny wires connect the artificial neuron to the synapses on the cell-body of his disabled neuron. When his artificial neuron is stimulated by neurons that synapse on his disabled neuron, a light goes on in the Chinese Room. Searle then manipulates some valves and switches in accord with a program. That, via the radio link, causes Otto’s artificial neuron to release neuro-transmitters from its tiny artificial vesicles. If Searle’s programmed activity causes Otto’s artificial neuron to behave just as his disabled natural neuron once did, the behavior of the rest of his nervous system will be unchanged. Alas, Otto’s disease progresses; more neurons are replaced by synrons controlled by Searle. Ex hypothesi the rest of the world will not notice the difference; will Otto? If so, when? And why?

        Under the rubric “The Combination Reply”, Searle also considers a system with the features of all three of the preceding: a robot with a digital brain simulating computer in its aluminum cranium, such that the system as a whole behaves indistinguishably from a human. Since the normal input to the brain is from sense organs, it is natural to suppose that most advocates of the Brain Simulator Reply have in mind such a combination of brain simulation, Robot, and Systems or Virtual Mind Reply. Some (e.g. Rey 1986) argue it is reasonable to attribute intentionality to such a system as a whole. Searle agrees that it would indeed be reasonable to attribute understanding to such an android system – but only as long as you don’t know how it works. As soon as you know the truth – it is a computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning – you would cease to attribute intentionality to it.
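
        The replacement scenarios quoted above all hinge on holding a unit’s input-output function fixed while swapping out its mechanism. A toy sketch of that idea, with both classes invented for illustration (a threshold “neuron” and a lookup-table “synron”):

        ```python
        # Toy sketch of the gradual-replacement idea: a "synron" built by
        # recording the original unit's full input-output table behaves
        # identically from the outside. Both classes are invented here.
        from itertools import product

        class BioNeuron:
            """Stand-in for a biological unit: fires iff enough inputs are active."""
            def __init__(self, threshold: int):
                self.threshold = threshold

            def fire(self, inputs: tuple) -> bool:
                return sum(inputs) >= self.threshold

        class SynronChip:
            """Replacement that memorizes the original's input-output function."""
            def __init__(self, original: BioNeuron, n_inputs: int):
                self.table = {p: original.fire(p)
                              for p in product((0, 1), repeat=n_inputs)}

            def fire(self, inputs: tuple) -> bool:
                return self.table[inputs]  # same outputs, different mechanism

        neuron = BioNeuron(threshold=2)
        chip = SynronChip(neuron, n_inputs=3)

        # Externally indistinguishable: every input pattern yields the same output.
        assert all(neuron.fire(p) == chip.fire(p)
                   for p in product((0, 1), repeat=3))
        ```

        Whether anything changes on the inside under such a swap is exactly what Searle and his critics dispute.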

        • The problem with Searle’s Chinese Room is that he’s basically describing how the brain works. NOWHERE in the brain is there a cell (or even a group of cells) that defines “Chinese Language”. The “Chinese Language” encoding is spread out over an absolutely mind-numbing collection of cells and connections, each of which fires according to (relatively) simple rules. No individual cell “knows” Chinese. They “know” voltage levels, chemical concentrations, and when to fire along which connections based on these (see the toy sketch below).

          So if we take Searle at face value … we don’t think either.
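
          A toy network makes that point concrete: in the hand-wired sketch below (weights invented for illustration), each unit only thresholds a weighted sum, yet “XOR” exists nowhere except in their arrangement.

          ```python
          # No single unit "knows" XOR; each just thresholds a weighted sum.
          # The function exists only in how the simple units are wired together.
          def unit(inputs, weights, bias):
              """One simple cell: fire iff the weighted sum clears the threshold."""
              return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

          def xor_net(x1, x2):
              h1 = unit((x1, x2), (1, 1), -0.5)     # fires if x1 OR x2
              h2 = unit((x1, x2), (1, 1), -1.5)     # fires if x1 AND x2
              return unit((h1, h2), (1, -1), -0.5)  # OR but not AND

          for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
              print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
          ```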