Title explains it all. For people in tech jobs who are being ~~indoctrinated~~ encouraged to use AI, what could be some methods of malicious compliance?
The only one I managed to come up with was asking the chatbot to write the question you wanted to ask, then prompting it with its own reply to speed up that sweet model collapse.
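The loop described above can be sketched as a few lines of Python. `ask_model` here is a made-up placeholder, not any real chatbot API — a real version would call whatever service your employer is pushing on you:

```python
def ask_model(prompt: str) -> str:
    """Placeholder for a real chatbot call; just echoes with decoration
    so the loop below actually runs."""
    return f"Reply to: {prompt}"

def feedback_loop(seed: str, rounds: int = 3) -> str:
    """Feed the model's own reply straight back in as the next prompt,
    round after round -- the degradation spiral described above."""
    text = seed
    for _ in range(rounds):
        text = ask_model(text)
    return text
```

Each round the output drifts further from anything a human asked for, which is the whole point.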
From what I have seen so far, just using the output of the damn thing without double checking is enough to cause errors.
Automated messages to customers contain errors? Leave them in! Especially fun if you work in insurance or law.
Code written by it is buggy? Just copy-paste that shit into everything! And let the bots review the result as well.
Have some important math to do? Let the bot rip! Give the guys from accounting some work for once.
Remember: just uncritically using the damn thing is already malicious compliance, given the amount of errors it produces. No more cleaning up after it, no more putting in actual work. If corpos decide they want AI, let them choke on it.