Title explains it all. For people in tech jobs who are being ~~indoctrinated~~ encouraged to use AI, what could be some methods of malicious compliance?

The only one I managed to come up with is asking the chatbot to write the question you wanted to ask, then prompting it with its own reply to speed up that sweet model collapse.
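The self-prompting loop described above can be sketched in a few lines. This is only an illustration: `ask` is a hypothetical stand-in for whatever chat API is in use (it is stubbed out here so the snippet runs on its own), and the loop simply feeds each reply back in as the next prompt.

```python
def ask(prompt: str) -> str:
    # Stub for a real chatbot call; any actual API would replace this.
    return f"Model reply to: {prompt!r}"

def self_prompt_loop(seed: str, rounds: int = 3) -> list[str]:
    """Feed the bot its own reply back as the next prompt, `rounds` times."""
    history = []
    prompt = seed
    for _ in range(rounds):
        reply = ask(prompt)
        history.append(reply)
        prompt = reply  # the next prompt is the bot's own output
    return history

transcript = self_prompt_loop("Write the question I should ask you.")
for line in transcript:
    print(line)
```

Each round the prompt grows by wrapping the previous reply, which is the "feeding it its own output" part of the joke.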

  • Jack@slrpnk.net · 3 days ago

    That would be very compute-intensive for the bot, and just a waste of energy.

    I just stopped using AI at work; these companies are entirely, unimaginably unprofitable. If we all stop using them for a few months, they will just implode. (Or maybe I am too hopeful?)

    • ApeHardware@lemmy.world (OP) · edited · 3 days ago

      Already doing my part, then. The only times I used it were at the beginning, when they were straight-up tracking usage of the damn thing but not the inputs, so me and a few coworkers asked it random shit to have a laugh. That was before we found out about the environmental impacts and model collapse. That “AI adoption team” has been very quiet for several months now, so we just do our work as normal (it sucks, but at least we get paid).