One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.
Yesterday I was at a gas station, and when I walked by the sandwich aisle, I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It’s so easy to make AI agree with everything you say.
The recipe thing is so funny to me, they try to be all unique with their recipes “made by AI”, but in reality it’s based on a slab of text that resembles the least unique recipe on the internet lol
Yeah, what is even the selling point? “Made by AI” is just what a Google search turns up when you put in: sandwich recipe.
There was that supermarket in New Zealand with a recipe AI telling people how to make chlorine gas…
This is not AI.
This is the ELIZA effect.
We don’t have AI.
I understand what you’re saying. It definitely is the ELIZA effect.
But you are taking semantics quite far to state it’s not AI because it has no “intelligence”.
I’ll have you know that what we define as intelligence is entirely arbitrary, and we keep moving the goalposts as to what counts. The word “AI” was coined somewhere along the way.
There is no reasonable definition of intelligence that this technology satisfies.
Sorry to say, but you’re about as reliable as LLM chatbots when it comes to this.
You are not researching facts; you’re just making things up that sound like they make sense to you.
Wikipedia: “It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.”
When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.
When it adheres to the system prompt by telling a user it can’t do something, it’s demonstrating the above.
That’s just one way humans define intelligence. Not per se the best definition, in my opinion, but if we start holding opinions like they’re common sense, then we really are no different from LLMs.
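For what it’s worth, the “uses information from earlier in the conversation” behavior is mostly plumbing on the client side: the system prompt and the full message history are resent with every request. Here is a minimal sketch of that loop, with a hypothetical `fake_model` standing in for a real LLM API call:

```python
# Sketch of how a chatbot "remembers" earlier turns: the client resends the
# system prompt plus the entire history on every request. `fake_model` is a
# hypothetical stand-in for a real LLM; it only proves the old turns are
# present in its input.

def fake_model(messages):
    seen = " ".join(m["content"] for m in messages)
    if "weather" in seen:
        return "You asked about the weather earlier."
    return "Hello!"

def chat(history, system_prompt, user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model([{"role": "system", "content": system_prompt}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
sp = "You are a helpful bot."
chat(history, sp, "What's the weather like?")
print(chat(history, sp, "Say hi."))
# The second reply can reference the first question only because that question
# was resent as part of the input, not because the model "remembers" anything.
```

Whether stateless text completion over a resent transcript counts as the “retain it as knowledge” part of the Wikipedia definition is exactly the semantic dispute here.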
ELIZA with an API call is intelligence, then?
LLMs cannot do that. Tell me your basic understanding of how the technology works.
What do we mean when we say this? Let’s define terms here.
ELIZA is an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just like I agree LLM models are not. But without global consensus on what “intelligence” is, we cannot conclude they are not.
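For context, the entire trick behind ELIZA fits in a few lines: regex rules that reflect the user’s own words back. This is a toy reconstruction, not Weizenbaum’s actual 1966 script:

```python
import re

# A minimal ELIZA-style responder: a handful of regex rules that turn the
# user's statement into a question, plus a catch-all fallback.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r"(.*)mother(.*)", re.I), "Tell me more about your family."),
]

def eliza(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza("I am worried about AI hype"))
# → Why do you say you are worried about AI hype?
```

No model of the world anywhere, yet people in the 1960s attributed understanding to it, which is where the name “ELIZA effect” comes from.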
LLMs cannot produce opinions because they lack a subjective conscious experience.
However, opinions are very similar to AI hallucinations, where “the entity” confidently makes a claim that is either factually wrong or not verifiable.
What technology do you want me to explain? Machine learning, diffusion models, LLMs, or chatbots that may or may not use all of the above?
I am not sure there is a basic explanation; this is a very complex field of computer science.
If you want, I can dig up research papers that explain some relevant parts of it. That is, if you promise to read them. I am, however, not going to write you a multi-page essay myself.
Common sense (from Latin sensus communis) is “knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument”.
If a definition is good enough for Wikipedia, which has thousands of people auditing and checking it and is also where people go to find information, it probably counts as common sense.
A bit off topic, but as an autistic person I note: you were not capable of perceiving the word “opinion” as similar to “hallucinations in AI”, just like you reject the term AI because you have your own definition of intelligence.
I find I do this myself on occasion. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even something less vague than “intelligence”) does not always match how the word is used, and the majority of people are okay with that.
Okay, but what are some useful definitions for us to use here? I could argue a pencil is intelligent if I play with terms enough.
I’d like to have a couple, because it’s such a broad topic. Give them different names.
The capacity to be wrong is not what matters; garbage in, garbage out. Let’s focus on why it’s wrong and how it gets there.
Aren’t all modern chatbots based on LLMs?
“Conscious.” Define it. Seems like it’s gonna come up a lot, and it’s a very slippery word, repurposed from an entirely different context.
Okay! I can work with that.
Yeah, but in common use it matters. Not necessarily that people stick to original uses, but the political implications and etymology of new uses should be scrutinized, because language does shape thought, especially for NTs.
But I recognize that it’s messy. That’s why we’re defining terms.
Of course it is AI, you know, artificial intelligence.
Nobody said it has to be human level, or that people don’t do anthropomorphism.
This is not artificial intelligence. There is no intelligence here.
Today’s “AI” has intelligence in it; what are you all talking about?
No, it doesn’t. There is no interiority, no context, no meaning, no awareness, no continuity: such a long list of things intelligence does that this simply can’t, not because it’s too small, but because the fundamental method cannot, at any scale, do these things.
There are a lot of definitions of intelligence, and these things don’t fit any of them.
Dude, you mix up so many things that have nothing to do with intelligence. Consciousness? No. Continuity? No. Awareness (what does that even mean to you in this context)?
Intelligence isn’t about being human; it’s about making rational decisions based on facts/knowledge, and even an old VCR has a tiny bit of it programmed in.
It literally cannot do that.
In the same way a fistful of dice can make decisions, sure.
If it’s programmed to run a script that does a Google search and cites the first paragraph of Wikipedia, sure. That function is basically ELIZA with an API call.
Okay, I’m sketchy on what this actually means, but for every answer I can think of (none of which I’m strongly committed to): still no.
It’s a bullshit machine. Like recognizes like, but it can’t do anything else. If you think it’s intelligent, that’s because you are not.
Edit: And I’m really disappointed. I kind of always wanted a computer friend. I would adore the opportunity to midwife whole new forms of intelligence. That sounds really fucking cool. It’s the kind of thing I dreamed of as a kid, and this shit being sold as my childhood aspirations is blackpilling as fuck. I think the widespread acceptance of the bullshit sales pitch, and the fact that it means we’re less likely to get the real thing, has led me to a lot of much more anti-human opinions than I used to have.
It’s so funny, typing with that much authority and still being completely wrong.
Nope. There’s no cognition, no cognitive functions at all in LLMs. They are incapable of understanding actions, reactions, consequences and outcomes.
Literally all it’s doing is giving you an assortment of words that vaguely correlate with indicators that scored highly for the symbols (ideas/intents) contained in the prompt you entered.
Literally that’s fucking it.
You’re not “talking with an AI”; you’re interacting with an LLM that is an amalgam of the collective responses to every inquiry, statement, reply, response, question, etc. that is accessible on the public Internet. It’s a dilution of the “intelligence” that can be derived from what everyone on the Internet has ever said, and what that cacophony of mixed messages, on average, would reply with.
The reason LLMs have gotten better is that they’ve absorbed more data than previous attempts, and some of the outlying extremist messages have been carefully pruned from the library, so the resulting AI trends more towards the median person’s predicted reply, versus everyone’s voice being weighted evenly.
It only seems like “AI” because the responses are derived from real, legitimate human replies that were posted somewhere on the Internet at some point in time.
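The “statistically likely next word” idea above can be sketched with a toy bigram model. This is nothing like a real transformer in mechanism or scale (real LLMs are neural networks over tokens, not word-count tables), but it shows how prompt-correlated text can be produced with no understanding anywhere:

```python
import random
from collections import defaultdict

# Build a table of which word follows which in a tiny corpus, then generate
# text by repeatedly sampling a likely successor. The output is driven purely
# by statistics of the training text.
corpus = "the cat sat on the mat and the cat ran".split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, length, seed=0):
    random.seed(seed)  # fixed seed so runs are repeatable
    word, out = start, [start]
    for _ in range(length):
        choices = next_words.get(word)
        if not choices:  # dead end: no observed successor
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

Every word it emits was lifted from the corpus, and every transition is one it saw before; scale that up by a few trillion words and add a much better statistical model, and you get text that reads like the Internet’s average reply.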
This is as much of an artificial intelligence as a mannequin is an artificial life form.
Especially since it doesn’t push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.