Oh dear. Goonswarm Games are shutting down after Running With Scissors cancelled POSTAL: Bullet Paradise over its use of generative AI. It’s a bit of a saga, this one.
I covered the initial announcement, along with a follow-up update where Running With Scissors attempted to defend the developer. The backlash only continued, and eventually RWS cancelled it, per a statement GamingOnLinux was sent on December 5th by Vince Desi, founder of Running With Scissors.



Okay, now you are the ones losing track of the point. Moral arguments didn’t work in the war on drugs, and there was much more agreement there than there is here. I never actually made an argument about morals; I am saying that whatever you believe is basically irrelevant, as you won’t stop this from happening, and you can’t even avoid consuming AI-generated things yourself all of the time. I am not saying you are morally incorrect, I am saying your actions and arguments are futile. The Luddites were probably correct on some level; that did not change the outcome of their movement.
That being said, I am interested in what moral objections you have. I am not a big believer in morality myself, but it’s nonetheless interesting to hear what the objections specifically are. I understand some of the ones surrounding climate issues or job losses; I would still be interested in learning if there are other reasons people are unhappy.
I was responding to your comment about consumers not being able to tell if Generative AI has been used. I can’t tell the difference between battery-farmed eggs and free-range ones, but I still avoid eggs that aren’t free range. That doesn’t seem like I’ve lost track of the point, to me.
I then expanded on what I was saying by explaining that I don’t agree with battery farming, which is why I avoid eggs produced that way, despite not being able to tell whether an egg is free range or battery farmed. This relates to the point about Generative AI use because I also don’t agree with using Generative AI in art and media, and might or might not be able to tell if it’s used.
To respond to your latest comment about my actions and beliefs being irrelevant because my not supporting something I disagree with won’t stop it happening: you’re right. Hens are still battery farmed. As much as I wish it wouldn’t happen, it does. Does that make sticking by my own morals and feelings futile or irrelevant? I don’t think so.
As for the objections I have to Generative AI, there are a few: I think it’s inauthentic. I want to experience art that was made with intention. I want to see what people are capable of making, and how people tell stories. We’re a storytelling species; I think that’s really important for us. I feel like I’ve been lied to when art is generated by an unthinking machine and then presented as though it was made by a human.
For me, art is a connection between me and the artist. If somebody writes a sad song and plays it, then I get to experience and understand their feelings in that moment. It’s a communication. I feel something, and they’ve given that to me. If a chatbot did it… well, nobody communicated anything. It’s a lie. It basically catfished my emotions.
There are other objections, too: the plagiarism of actual human work without recompense; the fact that these chatbots are making people mentally unstable; the fact that they only exist to enrich the already wealthy; the fact that all of this is being sold to us as a way to remove effort from our lives, even the fun parts. I think effort and hard work are their own reward a lot of the time, and I hate to see laziness championed, because it leads to uninteresting and lame shit.
Sorry, that was a long one, and I’ll cut it off here before it gets any longer 😂
Most of the time when people talk about plagiarism in relation to AI, it’s not actually plagiarism. Unless you are referring to people using image-edit models to remix someone else’s work, but then you could say the same about Photoshop or making a collage. These claims mostly come from a misunderstanding of how the models work, and there is a reason you don’t see technical people or machine learning engineers arguing this.
I do agree, though, that there is an issue with people becoming mentally unstable as a result of using LLMs or VLMs. There is one model family that caused this primarily due to alignment issues: GPT-4o. Other models did contribute to some extent, but GPT-4o was the primary cause. OpenAI and others have tried to fix this, but the community surrounding ChatGPT has been very resistant, to the point that GPT-4o was fully removed from ChatGPT, people demanded it be returned to them, and unfortunately they got their wish. It seems people had become emotionally attached to the model. I think in this case the people and community surrounding the models are their own worst enemy. There are some interesting benchmarks on LessWrong by AI safety experts showing that some models are much better than others at detecting and handling psychosis; I believe the Claude and Kimi models performed the best.
As for authenticity and intentionality: I think you might have a point for some use cases. It’s also important to bear in mind that image and video generation are only a tiny subset of AI, and even they have some good uses. In particular, they can be used to tell stories written and voiced by human beings. Here I am referring to things like Gossip Goblin, which uses AI-generated video, but where all the stories being told are written by humans. The GenAI here is being used in place of doing the animation and special effects manually. One of the biggest uses of AI is in programming; it is used in everything from the latest Windows and Linux OSes to video games and websites. I don’t really see how using AI to write code removes intentionality from the process of making a game or other interactive media experience.
Edit: also you might want to read this: https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai
Re your last point: I’m a full-time web developer, and while I’m not building entire games or anything, I am currently working on a fairly complex and involved data migration project. My boss has demanded I do the whole thing with AI and not write code myself.
So far, it’s been incredibly frustrating to get it to do what I need without the chatbot changing tonnes of tiny things, or assuming and hallucinating stuff that simply shouldn’t be there. Beyond those time-wasting frustrations, the fact that I’m not hands-on means my mental model of how the data translates from one system to the other is muddy; it’s not as clear as it would be if I were building this thing myself. Specifically, because I’m not building it myself, I’m not personally running into the edge cases and unpicking the knots of the current system.
There’s no intentionality in what chatbots generate, by definition. They have no intention, they’re not alive, they can’t think. They don’t understand things.
I’m sorry but I’m sort of done with this topic. I don’t like Generative AI, I think it’s disingenuous, lazy, furthering the commodification of art and creativity, and damaging our abilities to think critically. However, I do understand that some people have found it helpful in some contexts, and other people like to play with it. Thanks for the chat. 👍