
If the content generated by an LLM does not exist anywhere else... it is by definition "creative."


Which is precisely why I said "genuine creativity" in hopes of avoiding pedantry around word etymology. It's hard to have these discussions when people are deliberately being obtuse. By this definition, nearly everything is creative, making any discussion about it meaningless. So let's use the connotative meaning.


Ok yeah, let's avoid that bullshit then. The problem is there's enough leeway between the words that I could honestly say everything LLMs generate is representative of what most humans define as "genuine creativity." I'd bet that I could show you human art and computer art side by side and you'd be incapable of knowing which one is genuinely creative.

I hate pedantic vocabulary just as much as you do, and this is not the direction I want to take this. But the test I outlined above literally points out that there is no difference. What LLMs output fits our definition of genuine creativity, because you can't tell the difference.

In fact, the term itself is the ludicrous thing here. You just made it up to differentiate AI art from human art, but in reality there is no differentiator; it's one category with zero recognizable difference. The only actual difference is "what" created it.


The systems in question require learning from art generated by humans. If they didn't, they could avoid all of this IP mess by learning how to draw. The supposition I'm pushing back on is that humans only generate art by regurgitating what came before them, and I don't see any basis for that claim. We have art formed by completely isolated societies in very distinct styles. Children draw all sorts of fanciful creatures that they've never seen in the wild or in other art. Artists have developed different techniques for capturing their work, and it can resonate with people. We have prodigies capable of creating symphonies before they can do much else in the world.

Sure, there's incremental evolution. But we also have breakthrough artists inventing new techniques and art forms. It doesn't matter that a computer program can clone a power-chord structure and create something that sorta sounds like Nirvana. That's not proof of creativity. Yes, it created something, so it's "creative" in an entirely mechanical sense. Just like solar flares will flip bits in my computer and "create" things as well.

We can argue all day about what art means. It can get really philosophical really quickly. I contend people can dream up new ideas and execute on them in a way that resonates with others, and that they don't need to copy everyone else to do that. That given the basics (here's some paper and colored pencils), people can develop skills and invent wholly unique ways to represent themselves and the world around them. That they're able to do that in isolation and without education. I point to the entirety of human civilization as my supporting evidence. I think it's reductive to claim we're just statistical models consuming media and shuffling things around.

It seems to me this whole argument hinges on saying humans and these image generation tools work the same way. If they do, then teach one of these programs what it means to draw, give it a sensor network to the outside world, and let's see what it generates. That would be hugely compelling and would sidestep this whole discussion about IP whitewashing. But that's not what's happening. Whether out of convenience or because it's the only practical way to generate art, these systems only work by training on art created by humans. That they're able to generate a final product that looks like something else made by a human shouldn't be shocking -- that's the whole basis of copying.


>https://cdn.discordapp.com/attachments/1136039656660684880/1...

Take a look at the image in the lower right-hand corner. That is indisputably original. The badge doesn't exist anywhere else, and the alien form with one entire leg jutting out of the torso has never been done before. We know that this one-legged creature is entirely original because it doesn't exist in any of the shows.

This is the key to how you know LLMs aren't regurgitating stuff. It's trying to reproduce something from a flawed understanding of reality. A regurgitation would get things truly correct, but a one-legged human is a creative error due to a lack of understanding. The LLM doesn't understand reality as completely and as cohesively as we do, but it understands an aspect of it well enough that it can produce art that mostly works. The LLM is definitely creating stuff from pure thought. These things are not copies, and that's what you don't get.

The hype around LLMs is so over the top that it looks like just another social media outrage cycle. What you and other people are missing is that we crossed a certain AI threshold here. This isn't mere regurgitation.


>The supposition I'm pushing back on is that humans only generate art by regurgitating what came before them and I don't see any basis for that claim.

I never claimed this. Art isn't a regurgitation. It's a composition of what came before plus a random seed. Humans do this. But so do LLMs.

Think about it. Can you erase all forms of memory from a human until they're brain-dead and expect them to produce art? No. They can't.
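
To make "composition of what came before plus a random seed" concrete, here's a toy sketch in Python (purely illustrative and my own example, nothing from this thread; real LLMs and image models are vastly more sophisticated, but the shape of the argument is the same): a tiny character-level Markov model that can only stitch together fragments of its training text, with the random seed deciding which stitching you get.

    # Toy illustration only: a character-level Markov model that
    # "composes" output from patterns seen in its training text,
    # steered by a random seed.
    import random
    from collections import defaultdict

    def train(text, order=3):
        # Map each `order`-character context to the characters that follow it.
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def generate(model, order=3, length=60, seed=0):
        # Same training data, different seed -> different "composition".
        rng = random.Random(seed)
        out = rng.choice(list(model.keys()))
        for _ in range(length):
            followers = model.get(out[-order:])
            if not followers:
                break
            out += rng.choice(followers)
        return out

    corpus = "the quick brown fox jumps over the lazy dog while the quiet cat naps"
    m = train(corpus)
    print(generate(m, seed=1))
    print(generate(m, seed=2))

Delete the corpus (its only "memory") and it can produce nothing at all, which is the same point as the brain-dead thought experiment above.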

> Children draw all sorts of fanciful creatures that they've never seen in the wild or in other art.

https://cdn.discordapp.com/attachments/1136039656660684880/1...

>We have art formed by completely isolated societies in very distinct styles

prompt: Draw art in the style of a society or civilization that has never existed. Make the art very distinct in style such that the style is very divergent from anything that has been seen before.

https://cdn.discordapp.com/attachments/1136039656660684880/1...

>It seems to me this whole argument hinges on saying humans and these image generation tools work the same way. If they do, then teach one of these programs what it means to draw, give them a sensor network to the outside world, and let's see what they generate.

No. The argument hinges on something far more insidious. Say I showed you two pieces of art side by side: one was AI-generated in seconds, and the other was created through pure passion and hours and hours of hard work and toil. If you can't tell which one was AI-generated, then all that toil and passion is useless.


I think you're right.

Why? Just observe how art and content are perceived before and after you tell people they were made by AI. I think it's an unfounded bias.

You probably remember the AI art piece that won an art contest? It was perceived as better than the rest, obviously, or it wouldn't have won - until it was revealed it was made by AI.

Now, IMHO, that was not fair, and if it was disqualified, that was absolutely the right move - but that's not the point.

The same can be observed when people talk about (AI) art. I've seen people comment "Awesome! There is just something about [this artwork] that AI can't reproduce; human art has soul!" even though the artwork they commented on was made by AI.

After it is revealed to be AI-made, it is suddenly "soulless" and not genuine anymore.

I remember there was an online mental health care platform that (without revealing this) introduced AI therapists. According to their analytics, the AI therapists got higher scores, on average, than the human therapists. Then word got out that they were using AI, and suddenly people rated the service a lot worse than before. It was obviously wrong not to disclose this.

There's nothing wrong with valuing craftsmanship and human work more highly than machine work. We as humans do, and I do too - but nowadays, digital watercolor art made in Photoshop (and co.) isn't inherently considered "not creative" or "soulless" because it wasn't painted with real watercolor... though I'm pretty sure that was not always the case. I'm sure artists back then argued that digital art wasn't genuine or creative, or that it was soulless, too.

Can anyone who was there at the time tell me if there was a similar sentiment when digital art and Photoshop (and co.) became popular?

What I'm trying to say is that there is definitely a bias at work, and people generally can't tell the difference if they are not told about the (possible) involvement of AI.

I also fear that, right now, we're just the lame adults who don't like the new thing. In 10 to 20 years, once AI art is normal and accepted by the generation that grew up with it, something new will come out that that generation will think is lame and not genuine, and so on.

That is how I remember it. I'm relatively sure, but take it with a grain of salt - memories are, as we know, not reliable.



