Title explains it all. For people in tech jobs who are being encouraged (read: indoctrinated) to use AI, what could be some methods of malicious compliance?

The only one I managed to come up with would be asking the chatbot to write the question you wanted to ask, then prompting it with its own reply to speed up that sweet model collapse.
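Roughly, the loop would look something like the sketch below. This is just an illustration; `ask()` is a made-up stand-in for whatever chatbot API your employer actually mandates, and the prompt text is arbitrary.

```python
def ask(prompt: str) -> str:
    # Stand-in for the real chatbot call; swap in your company's
    # mandated client (OpenAI, Azure, some internal endpoint, etc.).
    return f"[model reply to: {prompt}]"

def ouroboros(topic: str, rounds: int = 5) -> str:
    # Step 1: have the bot write the question you were going to ask anyway.
    text = ask(f"Write the question a developer would ask about: {topic}")
    # Steps 2..n: keep feeding it nothing but its own previous reply.
    for _ in range(rounds):
        text = ask(text)
    return text

print(ouroboros("our quarterly AI adoption metrics"))
```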

  • hendrik@palaver.p3x.de

    You could let it draft some overtime timesheets or expense claims for hallucinated business trips. Maybe a rap diss-track or rant about the boss / project. ChatGPT loves to go nuclear with these things. (Or maybe not so much if they monitor your input.)

    And why do you even ask us? Just let the AI come up with some (subtly) malicious ideas.