The AI witch hunt claims its first victim, apparently over some placeholder textures.
https://english.elpais.com/culture/2025-07-19/the-low-cost-c...
> Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days to be replaced with the correct textures that had always been intended for release, but were missed during the Quality Assurance process.
From the submitted article:
> "When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of Sandfall Interactive confirming the use of gen AI art in production on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination."
Whatever placeholder you use is part of your development process, whether it ships or not. Saying you used none when you did is not cool and rightfully makes you wonder what other uses they may be hiding (or “forgetting”). Especially when apparently they only clarified it when it was too late.
I can understand the Indie Game Awards preferring to be tough now; it probably wasn’t an easy decision for them either, given that it ruined their ceremony.
We’re all bystanders here with very little information, so I’d refrain from using unserious expressions like “witch hunt”, especially considering their more recent connotations (i.e. in modern times, “witch hunt” is most often used by bad actors attempting to discredit or prevent legitimate investigations).
That’s incredibly harsh. A blanket ban on AI generated assets is dumb as hell. Generating placeholder assets is completely acceptable.
I agree, even though I'm not in favour of gen ai. It was a terrible mistake letting placeholder assets get out in the final release, but it shouldn't actually count as shipping AI-generated content in your product.
It literally is shipping AI generated content in the product.
> It literally is shipping AI generated content in the product.
When someone goes three miles per hour over the speed limit they are literally breaking the law, but that doesn’t mean they should get a serious fine for it. Sometimes shit happens.
Countries with sane laws include a tolerance limit to take into account flaws in speedometers and radars. Here in Brazil, the tolerance is 10%, so tickets clearly state "driving at speed 10% above limit".
You will literally get a fine for going three miles per hour over the speed limit in many countries.
True, however the penalty depends on the amount by which the threshold was crossed; in the country I live in at least.
I think the metaphor here would be more like getting your license permanently suspended for going 3 mph over. Whether that happens anywhere or not in reality, the point is, it would be an absurd overreaction.
I believe in giving someone a reasonable amount of time to correct their mistakes. Yes, it was a terrible mistake to release the game in that state, but I think correcting it within days is sufficient.
It's not a "terrible mistake" to accidentally ship placeholder textures. Let's tone it down just a wee bit, maybe.
Anyway, I don't agree with banning generative AI, but if the award show wants to do so, go ahead. What has caused me to lose any respect for them is that they're doing this for such a silly reason. There's a huge difference between using AI to generate your actual textures and ship those, and... accidentally shipping placeholder textures.
It really illustrates that this is more ideological than anything.
If you ever made a typo on an official document, would you want it to be uncorrectable, with you forever responsible for the results? Yeah, that's about the level of silliness here.
I don't find it that surprising. The creatives who are against generative AI aren't against it only because it produces slop. They are against it because it uses past human creative labor, without permission or compensation, to generate profit for the companies building the models, profit that is not redistributed to the authors of that creative labor. They are also against it due to environmental impact.
In that view, it doesn't matter whether you use it for placeholder or final assets. Paying your ChatGPT membership makes you complicit in the exploitation of that human creative output, and in that use of resources.
They are also against it because they believe it will compete with them and they will get paid less.
That should be the crux of the issue, and stated plainly.
This is just another scheme where those at the top are appropriating the labor of many to enrich themselves. This will have so many negative consequences that I don't think any reactions against it are excessive.
It is irrelevant whether AI has "soul" or not. It literally does not matter, and it is a bad argument that dilutes what is really going on.
There is still human intentionality in picking an AI-generated resource for a surface texture, landscape, concept art, whatever. Doubly so if it is someone who creates art themselves using it.
The problem of allowing "placeholder AI assets" is that any shipped asset found to be AI is going to be explained away as being "just a placeholder". How are we supposed to confirm that they never meant to ship the game like this? All we know is that they shipped the game with AI assets.
Adding to that: 'it was a placeholder' has been used to excuse direct (flagrant) plagiarism from other sources, such as what happened with Bungie and their game Marathon
Generating a brick wall texture using an AI should be acceptable as well, even when it's not a placeholder.
There is a small irony that the Indie Game Awards rejects nominations of games using AI but The Game Awards does not. It is independent teams of developers, who are less likely to be able to afford to pay an artist, who may be able to produce something of value with AI assets that they otherwise would not have the resources for. On the other side, it is big studios, with good track records and more investment, who are more likely to be able to pay artists and benefit from their artistry.
To me, art is a form of expression from one human being to another. An indie game with interesting gameplay but AI generated assets still has value as a form of expression from the programmer. Maybe if it's successful, the programmer can afford to pay an artist to help create their next game. If we want to encourage human made art, I think we should focus on rewarding the big game studios who do this and not being so strict on the 2 or 3 person teams who might not exist without the help of AI.
(I say this knowing Clair Obscur was made by a large well respected team so if they used AI assets I think it's fair their award was stripped. I just wish The Game Awards would also consider using such a standard.)
I agree that this holds in theory, but in practice? All the overhyping of AI I've heard from the gaming sector has come from the big studios, not indies. And, as you point out, Clair Obscur isn't the 'most indie' of indies anyway.
You’re not wrong, but I think a hardline stance is pragmatic for keeping AI out while it’s not yet normalized.
There's not that much irony considering how people into indie games are more about the art and craft of video games, whereas The Game Awards is a giant marketing cannon for the video game industry, and the video game industry has always been about squeezing their employees. If they can hire fewer artists and less QA because of GenAI, they're all for it.
Just two days ago there were reports that Naughty Dog, a studio that allegedly was trying to do away with crunch, was requiring employees to work "a minimum of eight extra hours a week" to complete an internal demo.
https://bsky.app/profile/jasonschreier.bsky.social/post/3mab...
You need to separate AI usage for automating certain parts of a pipeline from end-to-end creation.
Taking a scorched-earth approach to AI usage is just being a Luddite.
I bet if they'd only used AI-assisted coding it would be a complete non-event, but oh no, some inconsequential assets were generated, grab the pitchforks!
It is a non-event for consumers; the only ones who care much are artists.
As always the market decides.
I’d take that bet against you.
Ok great, but you don't really say much.
Maybe, but that is a different issue.
The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
The quality suffers in both cases and I would personally criticise generative AI in source code as well, but the ethical argument is only against profiting from artists' work without their consent.
> rightfully criticised because it steals from artists. Generative AI for source code learns from developers
The double standard here is too much. Notice how one is stealing while the other is learning from? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The art generated is not a 1-1 copy in any way.
IMO, this is key to the issue, learning != stealing. I think it should be acceptable for AI to learn and produce, but not to learn and copy. If end assets infringe on copyright, that should be dealt with the same whether human- or AI-produced. The quality of the results is another issue.
> I think it should be acceptable for AI to learn and produce, but not to learn and copy.
Ok but that's just a training issue then. Have model A be trained on human input. Have model A generate synthetic training data for model B. Ensure the prompts used to train B are not part of A's training data. Voila, model B has learned to produce rather than copy.
Many state of the art LLMs are trained in such a two-step way since they are very sensitive to low-quality training data.
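For concreteness, here is a minimal runnable sketch of that two-stage protocol. The train and generate functions are hypothetical stand-ins, not any real framework's API:

    # Toy sketch of two-stage training: model B learns only from model A's
    # synthetic outputs, never from the human corpus directly. All names
    # here are invented placeholders for illustration.
    def train(corpus):
        return {"seen": set(corpus)}        # toy "model": remembers its data

    def generate(model, prompt):
        return f"synthetic({prompt})"       # toy sample conditioned on the prompt

    human_corpus = ["human text A", "human text B"]
    model_a = train(human_corpus)           # stage 1: A trained on human input

    # Stage 2: the prompts used to train B must not appear in A's training data.
    prompts = ["new prompt 1", "new prompt 2"]
    assert all(p not in model_a["seen"] for p in prompts)

    synthetic_corpus = [generate(model_a, p) for p in prompts]
    model_b = train(synthetic_corpus)       # B sees only A's outputs

Whether B has thereby "learned to produce rather than copy" is of course the contested claim; the shape of the pipeline alone doesn't guarantee it.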
It's a double standard because it's apples and oranges.
Code is an abstract way of soldering cables in the correct way so the machine does a thing.
Art eludes definition while asking questions about what it means to be human.
> Art eludes definition while asking questions about what it means to be human.
All art? Those CDs full of clip art from the 90's? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.
I love that in these discussions every piece of art is always high art and some comment on the human condition, never just grunt-work filler, or some crappy display ad.
Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.
Exactly! Europa Universalis is a work of art, and I couldn't care less if the horse that you can get as one of your rulers is AI-generated or not. The art is in the fact that you can get a horse as your ruler.
Yeah this was probably for like a stone texture or something. It "eludes definition while asking questions about what it means to be human".
I agree; computer graphics and art were sloppified, copied, and corporatized way before AI, so pulling a Casablanca ("I'm shocked, shocked to find that AI is going on in here!") is just hypocritical and quite annoying.
That's a fun framing. Let me try using it to define art.
Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.
Doesn't sound very elusive or wrong to me, while remaining remarkably similar to your coding definition.
> while asking questions about what it means to be human
I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say
> the machine does a thing
to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.
This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?
The images Clair Obscur generated hardly "elude definition while asking questions about what it means to be human".
The game is art according to that definition while the individual assets in it are not.
Are you telling me that, for example, rock texture used in a wall is "asking questions about what it means to be human"?
If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?
And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?
Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.
Speak for yourself.
I consider some code I write art.
You're just someone who can't see the beauty of an elegant algorithm.
No, the only difference is that image generators are a much fuller replacement for "artists" than for programmers currently. The use of quotation marks was not meant to be derogatory; I'm sure many of them are good artists, but what they were mostly commissioned for was not art - it was backgrounds for websites, headers for TOS updates, illustrations for ads... There was a lot more money in this type of work, the same way there is a lot more money in writing React sites, or scripts to integrate Active Directory logins into some ancient inventory management system, than in developing new elegant algorithms.
But code is complicated, and hallucinations lead to bugs and security vulnerabilities so it's prudent to have programmers check it before submitting to production. An image is an image. It may not be as nice as a human drawn one, but for most cases it doesn't matter anyway.
The AI "stole" or "learned" in both cases. It's just that one side is feeling a lot more financial hardship as the result.
I really don't agree with this argument because copying and learning are so distinct. If I write in a famous author's style and try to pass my work off as theirs, everyone agrees that's unethical. But if I just read a lot of their work and get a sense of what works and doesn't in fiction, then use that learning to write fiction in the same genre, everyone agrees that my learning from a better author is fair game. Pretty sure that's the case even if my work cuts into their sales despite being inferior.
The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the 'if everyone did it' argument for making that distinction. But even if we take for granted that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race for machines better impersonating humans, and that also ends in a terrible place if everyone does it.
If there's an aspect I haven't considered here I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject and would really welcome seeing some appeals to logic or ethics, even if they disagree with my position.
> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies and make developers of closed source competition more effective. The legal situation is not clear yet.
FOSS code is the backbone of many closed source for-profit companies. The license allows you to use FOSS tools and Linux, for instance, to build fully proprietary software.
Well, if it's GPL you are supposed to provide the source code for any binaries you ship. So if you fed GPL code into your model, its output should also be considered GPL-licensed, with all the implications.
Sure, that usage is allowed by the license. The license does not allow copying the code. LLMs are somewhere in between.
> The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
This reasoning is invalid. If AI is doing nothing but simply "learning from" like a human, then there is no "stealing from artists" either. A person is allowed to learn from copyright content and create works that draw from that learning. So if the AI is also just learning from things, then it is not stealing from artists.
On the other hand if you claim that it is not just learning but creating derivative works based on the art (thereby "stealing" from them), then you can't say that it is not creating derivative works of the code it ingests either. And many open source licenses do not allow distribution of derivative works without condition.
"Mostly" is doing some heavy lifting there. Even if you don't see a problem with reams of copyleft code being ingested, you're not seeing the connection? Trusting the companies that happily pirated as many books as they could pull from Anna's Archive and as much art as they could slurp from DeviantArt, pixiv, and imageboards? The GP had the insight that this doesn't get called out when it's hidden, but that's the whole point. Laundering of other people's work at such a scale that it feels inevitable or impossible to stop is the tacit goal of the AI industry. We don't need to trip over ourselves glorifying the 'business model' of rampant illegality in the name of monopoly before regulations can catch up.
> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
I always believed GPL allowed LLM training, but only if the counterparty fulfills its conditions: attribution (even if not for every output, at least as part of the training set) and virality (the resulting weights and inference/training code should be released freely under GPL, or maybe even the outputs). I have not seen any AI company take any steps to fulfill these conditions to legally use my work.
The profiteering alone would be a sufficient harm, but it's the replacement rhetoric that adds insult to injury.
I'm not sure how valid it is to view artwork differently than source code for this purpose.
1. There is tons of public domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content any more than a code-generating one.
2. I have no doubt that there exist both source code AIs that have been trained on code whose licenses disallow such use and art AIs that have been trained only on art that allows such use. So it feels flawed to just assume that AI code generation is in the clear and AI art is in the wrong.
Most OSS licenses require attribution, so AI for code generation violates licenses the same way AI for image generation does. If one is illegal or unethical, then the other would be too.
Is there an OSS license that excludes LLMs?
I'm not sure about licenses that explicitly forbid LLM use -- although you could always modify a license to add such a clause! -- but GPL-licensed projects require that you also make the software you create open source.
I'm not sure that LLMs respect that restriction (since they generally don't attribute their code).
I'm not even really sure if that clause would apply to LLM generated code, though I'd imagine that it should.
Very likely no license can restrict it, since learning is not covered under copyright. Even if you could restrict it, you couldn't add a "no LLMs" clause without violating the free software principles or the OSI definition, since you cannot discriminate in your license.
They don't require it if you don't include OSS artifacts/code in your shipped product. You can use gcc to build closed source software.
> The quality suffers in both cases
According to your omnivision?
Is anyone else detecting a phase shift in LLM criticism?
Of course you could always find opinion pieces, blogs and nerdy forum comments that disliked AI; but it appears to me that hate for AI gen content is now hitting mainstream contexts, normie contexts. Feels like my grandma may soon have an opinion on this.
No idea what the implications are or even if this is actually something that's happening, but I think it's fascinating
No; AFAICT, AI hate has been common in normie contexts for a while (though not the majority position then, and still not).
It’s the usual “I don’t like it, I’m against it, but it’s okay if I use it” thing. People understand the advantage it gives one person over another, so they will still use it here and there. You’ll have some people who are vehemently against it, but it will be the same as the people who are categorically against having smartphones, or who avoid using any Meta products because of tracking, etc.
People were told by other people to dislike LLMs and so they did, then told other people themselves.
Just like feminism when it was starting: back then, millions of women believed it was silly for them to vote, and those who believed otherwise had to get loud to win more people to their side. And that's one example; similar things have happened with hundreds of other things that we now take for granted. So popularity's value as a measure of judgment is very low by itself.
Just as they were told to like them in the first place. A lot of this is driven that way because most of the public only has a surface-level understanding of the issues.
Read the other comments in the thread lol- “Fuck artists, we will replace them”
This is not a winning PR move when most normal people are already pretty pro-artist and anti tech bro
It feels like a similar trend to the one that NFTs followed: huge initial hype, stoked up by tech bros and swallowed by a general public lacking a deep understanding, tempered over time as that public learns more of the problematic aspects that detractors publicise.
I think this comparison makes little sense, as in the case of AI there is some actual impactful substance backing the hype.
NFTs have way less downsides than LLMs and GenAI, since the main downside was just wasting electricity. I didn't have to worry about someone cloning my voice and begging my mom on the phone for money.
Typical brigading, same with BLM, woke, right wing, etc.
Wow you do mentally group things efficiently, that much I can say.
Says the one who brings feminism into this thread for no apparent reason.
He was clearly making an analogy. You don't have to be so oversensitive that any mention of feminism and women blows up into a "woke attack" in your head.
If a fraction of the AI money went into innovative digital content creation tools and workflows, I'm not sure AI would be all that useful to artists. Just look at all those Siggraph papers throughout the years that are filled with good ideas but lacked the funding and expertise to put a really good UI on top.
This is crazy. Tools like Photoshop have gen AI tools in them. Does that mean that Photoshop is now a minefield for artists? Does a single artist using the wrong tool once disqualify the entire final product from awards, even if the asset is fully removed from the final build?
LOL this is beyond idiotic. Banning AI-generated assets from being used in the game is a red line we could at least debate.
But banning using AI at all while developing the game is... obviously insane on its face. It's literally equivalent to saying "you may not use Photoshop while developing your game" or "you may not use VS Code or Zed or Cursor or Windsurf or Jetbrains while developing your game" or "you may not have a smartphone while developing your game".
To be consistent, if you wish to protect workers by rejecting artificially produced assets, you should feel the same about textiles produced by industrial machinery. Either this decision was wrong or the Luddites had a good point.
If the product is not made from material dug out of the ground, or from plants or animals, by only bare hands. And I mean bare hands. Is it even worth buying?
These things will keep happening and the bar to be against certain use cases of AI will shift gradually over time.
Before we know it we will have entrusted a lot to AI and that can be both a good or a bad thing. The acceleration of development will be amazing. We could be well on our way to expand into the universe.
Oh, it's AI that makes it not indie, not the huge funding.
I wonder what definition of AI they're using? If you go by the definition in some textbooks (e.g., the definition given in the widely used Russell and Norvig text), basically any code with branches in it counts as AI, and thus nearly any game with any procedurally generated content would run afoul of this AI art rule.
Their FAQ only states:
> Games developed using generative AI are strictly ineligible for nomination.
I haven't found anything more detailed than that; I'm not sure if anything more detailed actually exists, or needs to.
That's all I've found as well, but, personally, I find that a bit unclear, for a couple of reasons. First, are they saying that the game itself can use generative AI, but it can't be used in the development of the game? So that would mean that if the game itself generates random levels using a generative AI approach, that's allowed, but, if I were to use that same code to pre-generate and manually modify the levels, that wouldn't be allowed because I'm now using generative AI as part of the development process? I.e., I can create a game that itself is a generative AI, but I can't use that AI I've built as part of the development of a downstream game?
And, second, what counts as generative AI? A lot of people wouldn't include procedural generative techniques in that definition, but, AFAIK, there's no consensus on whether traditional procedural approaches should be described as "generative AI".
And a third thing is, if I use an IDE that has generative AI, even for something as simple as code completion, does that run afoul of the rule? So, if I used Visual Studio with its default IntelliCode settings, that's not allowed because it has a generative AI-based autocomplete?
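On that second point, a toy illustration of the gray area: a classic seeded procedural generator, all branching and randomness with no machine learning anywhere, which the broadest definitions of "generative AI" would nonetheless sweep in. The identifiers here are made up for illustration:

    # Toy procedural level generator: randomness plus branching, no ML.
    # Whether this counts as "generative AI" is exactly the definitional
    # question raised above.
    import random

    def generate_level(seed, width=10, height=5):
        rng = random.Random(seed)           # deterministic for a given seed
        level = []
        for _ in range(height):
            row = ""
            for _ in range(width):
                r = rng.random()
                if r < 0.10:
                    row += "#"              # wall
                elif r < 0.15:
                    row += "$"              # treasure
                else:
                    row += "."              # floor
            level.append(row)
        return level

    for row in generate_level(seed=42):
        print(row)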
AI is a moving goalpost. At least now the moving goalpost is called AGI.
A bunch of 'if's is an "expert system", but I'm old enough to remember when that was groundbreaking AI.
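For anyone too young to remember, a sketch of the shape of those systems; the rules here are invented for illustration:

    # A handful of hand-written rules -- structurally the same shape as the
    # rule-based "expert systems" once marketed as cutting-edge AI.
    def diagnose(symptoms):
        if "fever" in symptoms and "cough" in symptoms:
            return "possible flu"
        if "sneezing" in symptoms:
            return "possible cold"
        return "no rule matched"

    print(diagnose({"fever", "cough"}))     # -> possible flu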
People were against steam engines, tractors, CGI, self-checkouts, and now generative AI. After some initial outrage, it will be tightly integrated into society. Like how LLMs are already widely used to assist in coding.
Or not. Unlike all of the above, AI directly conflicts with the concept of intellectual property, which is backed by a much larger and more influential field.
This is a great thing for AI. Totally beclown the anti-AI zealots.
All press is good press.
Few care about the mainstream game review sites or oddball game award shows as their track record is terrible (Concord reviews).
Most go by player reviews, word of mouth, and social media.
Great opportunity for a new award body that allows AI use.
True. Especially indie game awards: indie teams have the fewest resources available and would most likely benefit the most from some use of AI. At that scale, even reasonably paid game developers are often expensive.
I hear FIFA makes new awards these days
Just to be clear, it's some Indie Game awards, not the main The Game Awards
That’s not how awards work. Awards trade on prestige. In order for an award to matter, the people you’re giving it to have to care.
I think you’ll find most of the small teams making popular indie video games aren’t going to be interested in winning a pro-AI award.
indie game? with their budget and staff? really?
those guys worked in AAA studios and they got a 10 million budget
how "indie" is that?
Indie Game Awards defines it like this:
"Existing outside of the traditional publisher system, a game crafted and released by developers who are not owned or financially controlled by a major AAA/AA publisher or corporation, allowing them to create in an unrestricted environment and fully swing for the fences in realizing their vision."
In other words, "indie" means a developer-driven game independent of the establishment. It doesn't necessarily imply a low budget or the lack of professional experience.
"indie" is just a marketing term now, it doesn't actually mean anything specific.
Exactly like "AI"!
Why is usage of AI even a discussion point? Steam now requires publishers to disclose whether they used AI during a game's creation. It is a tool, and as a consumer I judge the end product. I don't care what tools were used in production, just as I don't care if you use Photoshop, Pixelmator, Maya, 3DSMax or whatnot. The end result is what counts. And if the end result is full of bullshit AI slop and is not fun to play, don't give them an award. I played Clair Obscur and it is an absolutely stunning and beautiful game.
It’s interesting, because we have examples of other sects in the past that also opposed human progress through technology. History is repeating itself.
For instance, see Luddites: https://en.wikipedia.org/wiki/Luddite
That does the Luddites a bit of a disservice:
> But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”[1]
[1] https://www.smithsonianmag.com/history/what-the-luddites-rea...
In that case, the neo-Luddites are worse than the original Luddites, then? Many are definitely not "totally fine with the machines", and they definitely do not confine their attacks to the manufacturers who go against worker rights; they include the average person in their attacks. And the original Luddites already got a lot of hate for attempting to hold back progress.
I really like Neal Stephenson's neologism 'amistics' - referring to which technologies a culture knows about but chooses not to use.
It's unclear if Gen AI promotes any sort of human progress.
By all means, I use it. In some instances it is useful. I think it is mostly a technology that causes damage to humanity, though. I just don't really care about it.
After the huge impact on the PC gaming community, it's logical to despise AI and ban it from any awards. First cryptocurrencies pumped GPU prices sky-high, then prices never returned to normal due to AI, and now AI is impacting RAM prices.
Next year a lot of families will struggle to buy the computer their kids need for school because a few multibillion-dollar tech companies have gone all-in.
I play games on cheap hardware. I would like awards to focus on the quality of the game, rather than how they were made.
Awards that focus on quality are too desirable not to be a thing.
I expect generative AI to become a competitive advantage taken up by the vast majority.
I think it's more the fact that they lied before nomination than the AI usage itself. Any institution is bound to disqualify a candidate if it discovers it was admitted on false grounds.
I wonder whether, had the game's directors actually made their case beforehand, they would perhaps have been allowed to keep the nomination.
That said, the AI restriction itself is hilarious. Almost all games currently being made would have programmers using copilot; would they all be disqualified for it? Where does this arbitrary line get drawn?
> Almost all games currently being made would have programmers using copilot
I think that is almost certainly untrue, especially among indie games developers, who are often the most stringent critics of gen ai.
Are you sure? A survey by the YouTuber Games And AI found that the vast majority of indie game developers, around 90%, are either using or considering using AI.
Only when it comes to graphics/art. When it comes to LLMs for code, many people do some amazing mental gymnastics to make it seem like the two are totally different, and one is good while the other is bad.
> Almost all games currently being made would have programmers using copilot
Which LLM told you that?
Please, LLM code assistants are ubiquitous enough nowadays, with inline code suggestions in VS Code on by default. It's an extremely safe claim.
That would imply the following to be true,
> Almost all games currently being made would have programmers using VSCode.
Which clearly isn't the case, unless they like to suffer with regard to the Unreal and Unity integrations.
> That said, the AI restriction itself is hilarious. Almost all games currently being made would have programmers using copilot; would they all be disqualified for it? Where does this arbitrary line get drawn?
AI OK: Code
AI Bad: Art, Music.
It's a double standard because people don't think of code as creative. They still think of us as monkeys banging on keyboards.
Fuck 'em. We can replace artists.
You get why people hate AI when AI boosters talk like this, right?
> It's a double standard because people don't think of code as creative.
It's more like the code is the scaffolding and support, the art and experience is the core product. When you're watching a play you don't generally give a thought to the technical expertise that went into building the stage and the hall and its logistics, you are only there to appreciate the performance itself - even if said performance would have been impossible to deliver without the aforementioned factors.
I would disagree, code is as much the product in games as the assets.
Games always bear the touch of their engine, and for indie games the engine work is often a good part of the process. See for example Clair Obscur here, which clearly has the UE5 character hair. The engine defines what the game can and cannot do, and that shapes the experience.
Then the gameplay itself depends a lot on how the code was made, and iterations on the code also shape the gameplay.
To further this: You can even feel the org structure in games.
- Final Fantasy 7 Rebirth clearly had two completely decoupled teams working on the main game and the open world design respectively
- Cyberpunk 2077 is filled with small shoeboxes of interactable content
This is so ridiculous that I suspect that it will be even better publicity for them than the award itself.
It's some random Indie award, not the main The Game Awards. Clair Obscur has enough publicity already and rightly so.
Dunno if they even care too much about that, the game is already a breakaway success.
Well that’s a rule that makes no sense.
These awards are behind the times and risk irrelevance.
What software in 2025 is written without AI help?
Every game released recently would have AI help.
> Every game released recently would have AI help.
For indie games in particular, that is very much not true. In fact, Steam has a 'made with AI' label, so it's not even true on that platform.
You think many are built without any assistance for coding? My impression was that people were mostly concerned about game assets like graphics and music
I think many are built without the use of gen ai to create assets. Obviously, the term "AI" is flexible enough that you could classify every piece of software as involving AI if you wanted to, but I don't think that's productive.
Do you have proof that many are using AI for coding?