I have noticed a lot of posts on here criticizing the data collection used to train A.I., but I don’t think A.I. in and of itself is bad. Like software development in general, A.I. can be implemented in many ways: software can either control the user, or the user can control the software. And just as some software is built for harmful purposes while other software is built for good ones, saying “Fuck Software” just because some software controls the user feels pretty unfair. I know A.I. might be used to replace jobs, but that has happened many times before, and it has mostly been a positive move forward, like with the internet. Now, I’m not trying to start a big-ass debate on how A.I. = Good, because as mentioned before, I believe A.I. is only as good as its uses. All I want to know from this post is why you hate A.I. as a general topic. I’m currently writing a research paper on this topic, so I would like some opinions.

  • Dizzy Devil Ducky@lemm.ee
    11 hours ago

    I hold more against AI than not.

    1. Take out all the copyrighted material the big models illegally trained on and their AI collapses. Their scrapers are also doing everything they can to kill small websites through DDoS by scraping everything they can as many times as possible, and you know this is a feature and not a bug because it gets rid of any form of competition by default.
    2. It’s an attack on education and critical thinking. Why would the average Joe put in the effort to research and learn something properly, or even question the results, if they believe an AI would never lie or be wrong? Critical thinking is already a skill in decline, but I firmly believe AI is expediting this for a lot of people who either don’t know better or just don’t care.
    3. From a coding perspective, if your software relies on AI-generated code, I’ve heard more stories of that software being full of vulnerabilities caused by the AI code than not. I also hold the view that if you have to use AI to understand what a program does, rather than using something like a textbook written by an actual dedicated expert in a programming language or consulting someone who is better than you at it, you are going to learn the absolute worst practices, like shipping the insecure default Django secret that the AI generates (see the sketch after this list).
    4. I view AI as extremely unprofessional. If you have to rely on AI to help with work that isn’t AI-related, I’ll take that to mean you don’t actually know what you’re doing and are just phoning it in. It also shows how lazy and unable to think for yourself you are. I’d gladly admit I’m dumb as a bag of rocks, considering I have used it to fix software errors on my Linux laptop, so I’m no exception to my own rule. Forums and help groups for software troubles exist for a reason.
    5. With the amount of energy they require (using non-renewables), any action you take to try and stop global warming is offset within a couple of nanoseconds (hyperbole, maybe, but it’s definitely a big issue). I’m absolutely positive they don’t use renewables, at least for the big US data centers, because that would require building new infrastructure that they’re not willing to shell out a penny for. Most likely the same for just about every other country with large AI data centers.
    6. Looping back to point 1, by scraping everything to death they embolden the deranged “artists are gatekeeping art!” sickos, people who believe artists are somehow gatekeeping art from everyone just because they have spent time practicing and getting better. I can pretty much guarantee these sickos want art for free and without having to put in any effort, because they don’t value art or artists at all. After all, why would they support the arts when they could instead generate AI Jesus for Fakebook and get a ton of validation for their “art” from people without having to put in any effort?
    7. AI writing is just derivative of the training data it’s based on. As of now, because of the way they’re trained, you can tell certain models apart by how the text is generated. “Brow furled”? If I see that, my mind immediately goes to text generated by Anthropic’s Claude. The writing, in my experience, also tends to be too well written for the average Joe. It’s also as generic and logical as I usually tend to write and have been trying to shake off, though how generic and logical it reads is subjective.
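
    To make point 3 concrete, here is a minimal sketch of the kind of insecure-by-default Django settings that tend to get copied verbatim out of generated examples, next to a version that pulls the secret from the environment instead. The environment variable names and fallback behavior are just illustrative assumptions, not anything from the comment above.

    ```python
    # settings.py — a minimal sketch, assuming a standard Django project layout.
    # The commented-out "bad" values below are the kind of generated defaults
    # that often get shipped as-is; the block after it reads from the environment.
    import os

    # --- What often gets copied straight out of a generated example ---
    # SECRET_KEY = "django-insecure-abc123..."   # hard-coded secret, ends up in git
    # DEBUG = True                               # leaks stack traces in production
    # ALLOWED_HOSTS = ["*"]                      # accepts requests for any host

    # --- A safer version of the same settings ---
    SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]          # fail loudly if unset
    DEBUG = os.environ.get("DJANGO_DEBUG", "0") == "1"    # off unless explicitly enabled
    ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",")
    ```

    The point stands either way: if you can’t read the settings file yourself, you won’t notice which of those two blocks you actually shipped.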