• D61 [any]@hexbear.net · 5 days ago (edited)

    A person wanting to learn how to do something is different from an AI company wanting to feed its math problem more material to math all over.

    Bill wants to learn how to fix a car because it’s useful to himself and others.

    An AI company wanting its chatbot to be able to answer questions about fixing a car isn’t trying to help itself or others fix their cars… it’s trying to increase the likelihood that investors will give the AI company more money.

    They are not the same thing.

    As it stands, citing IP law is the best recourse available to people who aren’t transnational conglomerates with yearly revenues in the multiple billions of dollars.

    • Even_Adder@lemmy.dbzer0.com · 5 days ago

      AI training isn’t only for mega-corporations. Setting up barriers that only benefit the ultra-wealthy will only end with corporations gaining a monopoly on a public technology by making it prohibitively expensive for regular folks. And that’s before they bind users to predatory ToS, allowing them exclusive access to user data and effectively selling our own data back to us. What some people want would mean the end of open access to competitive, corporate-independent tools and would imperil research, reviews, reverse engineering, and even indexing information.

      They want you to believe that analyzing things without permission somehow goes against copyright, when in reality, fair use is a part of copyright law, and the reason our discourse isn’t wholly controlled by mega-corporations and the rich. The same people who abuse DMCA takedown requests for their chilling effects on fair use content now need your help to do the same thing to open source AI, their next target after libraries, students, researchers, and the public domain. Don’t help them do it.

      I recommend reading this article by Cory Doctorow, and this open letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.

      • D61 [any]@hexbear.net · 4 days ago

        “They want you to believe that analyzing things without permission somehow goes against copyright…”

        What’s the purpose of generative AI? It’s not just to analyze; it’s to replicate and mass-produce. We’re not talking about software churning through medical data to figure out how to feed premature babies. It’s that stupid AI web-search answer that is just copy-pasted from another website and may or may not link to the source of its summary (usually a hit on the first page of the search results that I’m going to click on anyway).

          • D61 [any]@hexbear.net · 3 days ago

            I don’t think it is that hard to answer.

            If there was no money in doing it, it wouldn’t have happened the way it did. Whether this money is from investors or from monetizing services doesn’t matter in the short term, only that somebody is willing to pay.

            • Even_Adder@lemmy.dbzer0.com · 3 days ago

              An alluring oversimplification, but your cynical framework can’t account for the communities of people who put time and effort into FOSS projects. The quality and popularity of open source alternatives have eroded the moats of proprietary services, making it impossible for them to monopolize and profit from this public technology. So if it happened the way it did, it wasn’t for the reasons stated.