Oof! Somebody took Karl’s kool-aid.
Have you, like, looked around at what kind of place you are responding to lmao
I only work in theory
‘And this is the flower of the partisan, who died for freedom’
One of the reasons the left in The Netherlands is in such a shambolic state is that they keep promising things only to compromise with the right, disappointing their voters. What many leftists in the west need is an actual principled left-wing party that does not compromise on important left-wing topics. One of the reasons our Marxist party in Belgium is growing the way it is, is that we don’t compromise on a lot of things. Otherwise you end up like the Dutch Socialist Party and adopt racist immigration policies that have nothing to do with being a leftist. The goal is to educate the masses, not to throw ideals out the window in the hope of getting votes, because that way only the right wins, as they keep moving the goalposts.
I used to be an enlightened ‘the truth is in the middle’ centrist until I realized that the real world requires having actual ideals
Dutch news is running a story about China’s youth unemployment of 21%, stating that according to economists, it’s a ‘telling sign of decay’. Meanwhile Europe is sitting at roughly the same percentage. I wonder what the signs say about that.
Why I think this machine is not sentient:
One crucial part of sentience, in my opinion, is the ability to turn your feelings into thoughts and actions. This part of sentience is still largely a mystery (the whole concept of sentience is, for that matter). Some people claim we can never prove that others, or even we ourselves, are sentient, because we don’t know exactly what it is that makes us sentient. We do have some basic guidelines, though.
While this machine claims to have feelings and has a basic understanding of how feelings work and relate to each other, it has no capability of forming feelings, nor is its behavior influenced by them. To illustrate my point, here is a very simplified example:
Let’s take two subjects for an experiment: this AI and a human child. We never, ever teach either of these subjects the concept of pain, violence, abuse, fear, or any negative emotion; we only teach them happiness and the like. Let’s also, for the sake of the story, pretend the AI can somehow see and process visual input.
We now take a third person and start to punch them in the face, repeatedly. This third person will not like it: they will start to cry, maybe scream, and will probably try to flee. The child, being human, will sense that this behavior is not right and will probably be scared by it. Seeing the third person’s emotions and reactions, as well as ours, will make the child understand that something is wrong. It will get scared, and based on that fear it will form new behavior: it might run away from the situation, or it might defend itself in order not to get hurt or killed.
The AI, on the other hand, has never had any input about these kinds of situations or the emotions it just witnessed. It might be confused about how to reply, but it will not feel the fear for survival that the child just felt.
How do I know this? Those feelings are triggered by the input itself. The child’s heartbeat will increase, adrenaline will be produced and released into the bloodstream, affecting the heart, the brain, and the other senses in order to get an appropriate response out of the child. The machine has no such mechanisms; it runs on electricity. There’s not going to be an increased flow of electricity to the motherboard or whatever.
This, to me, is the biggest difference between being sentient and not: the ability to produce an appropriate, feeling-based response to a situation without ever having seen the relevant input before.
This is a very simplified take; the topic goes really deep. But I tried to make it as simple as possible so people understand where I’m coming from. Is this machine cleverly designed? Yes. But everything it does, it does because it was taught to. It will not do things without input. It will not act on ‘instincts’. It will not do anything it has never had a concept of before.
Feel free to add your opinion to this reply. I like this kind of stuff and I’m also eager to learn more.
I once saw a vid about the credit score being real, but it was piloted in only one or two cities in China, and it was for stuff like returning your city share bike to the correct place in a decent state. If you failed to do that properly a few times, you’d be blocked from renting one.