(Comments)

Original link: https://news.ycombinator.com/item?id=39058428

Overall, artists appear to be very concerned about the use of AI to generate images and the potential conflicts with copyright law. Some commenters argue that deliberately or inadvertently including unlicensed or poorly sourced content in AI training sets amounts to "poisoning"; others counter that the issue has been addressed before and that opting out of training databases is not a requirement under current US law, even though it might be beneficial. Another key point of discussion is whether AI models can be considered transformative works, which could exempt them from copyright restrictions based on similarity standards. Many also highlight the importance of determining the scope of copyright protection for AI-generated images compared with traditional images. Overall, the conversation underscores the complexity and nuance of the intersection between AI technology and copyright considerations.

Related Articles

Original Article
Nightshade: An offensive tool for artists against AI art generators (uchicago.edu)
547 points by ink404 1 day ago | 632 comments

This seems to introduce levels of artifacts that many artists would find unacceptable: https://twitter.com/sini4ka111/status/1748378223291912567

The rumblings I'm hearing are that this a) barely works with last-gen training processes b) does not work at all with more modern training processes (GPT-4V, LLaVA, even BLIP2 labelling [1]) and c) would not be especially challenging to mitigate against even should it become more effective and popular. The Authors' previous work, Glaze, also does not seem to be very effective despite dramatic proclamations to the contrary, so I think this might be a case of overhyping an academically interesting but real-world-impractical result.

[1]: Courtesy of /u/b3sn0w on Reddit: https://imgur.com/cI7RLAq https://imgur.com/eqe3Dyn https://imgur.com/1BMASL4



The screenshots you sent in [1] are inference, not training. You need to get a Nightshaded image into the training set of an image generator in order for this to have any effect. When you give an image to GPT-4V, Stable Diffusion img2img, or anything else, you're not training the AI - the model is completely frozen and does not change at all[0].

I don't know if anyone else is still scraping new images into the generators. I've heard somewhere that OpenAI stopped scraping around 2021 because they're worried about training on the output of their own models[1]. Adobe Firefly claims to have been trained on Adobe Stock images, but we don't know if Adobe has any particular cutoffs of their own[2].

If you want an image that screws up inference - i.e. one that GPT-4V or Stable Diffusion will choke on - you want an adversarial image. I don't know if you can adversarially train on a model you don't have weights for, though I've heard you can generalize adversarial training against multiple independent models to really screw shit up[3].

[0] All learning capability of text generators comes from the fact that they have a context window; but that only provides a short-term memory of 2048 tokens. They have no other memory capability.

[1] The scenario of what happens when you do this is fancifully called Habsburg AI. The model learns from its own biases, reinforcing them into stronger biases, while forgetting everything else.

[2] It'd be particularly ironic if the only thing Nightshade harms is the one AI generator that tried to be even slightly ethical.

[3] At the extremes, these adversarial images fool humans. Though, the study that did this intentionally only showed the images for a short period of time, the idea being that short exposures are akin to a feed-forward neural network with no recurrent computation pathways. If you look at them longer, it's obvious that it's a picture of one thing edited to look like another.
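
To make the "adversarial image" idea in footnote [3] concrete, here is a minimal FGSM-style sketch against a single known classifier, assuming PyTorch/torchvision are available; the file name, target class, and epsilon are illustrative, and this is not Nightshade's actual algorithm.

    # Targeted FGSM-style perturbation against a known classifier (illustrative only).
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

    image = preprocess(Image.open("artwork.png").convert("RGB")).unsqueeze(0)
    image.requires_grad_(True)

    target = torch.tensor([207])          # hypothetical class we want the model to see instead
    loss = torch.nn.functional.cross_entropy(model(image), target)
    loss.backward()

    epsilon = 4 / 255                     # perturbation budget; larger is more visible
    adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
    # (ImageNet normalization is omitted for brevity; a real attack would include it.)

Attacks like this only reliably fool the specific model whose gradients were used, which is why transferring them to unknown, larger models is the hard part.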



Hey you know what might not be AI generated post-2021? Almost everything run through Nightshade. So given it's defeated, which is pretty likely, artists have effectively tagged their own work for inclusion.


It is a great shame that we have come to a no-win situation for artists when VCs are virtually unable to lose.


I mean that's more or less status quo isn't it? Big business does what it wants, common people can get fucked if they don't like it. Same as it ever was.


That's exactly right. It is just the variety of new ways in which common people get fucked that is dispiriting, with seemingly nothing capable of moving in the opposite direction.


Modern generative image models are trained on curated data, not raw internet data. Sometimes the captions are regenerated to fit the image better. Only high quality images with high quality descriptions.


I wouldn't call what Stable Diffusion et al are trained on "high quality". You need only look through the likes of LAION to see the kind of captions and images they get trained on.

It's not random but it's not particularly curated either. Most of the time, any curation is done afterwards.



Why wouldn't an artist just generate AI spam and Nightshade it?


Correct me if I'm wrong, but I understand image generators as relying on auto-labeled images to understand what means what, and the point of this attack is to make the auto-labelers mislabel the image; but as the top-level comment said, it's seemingly not tricking newer auto-labelers.


not all are auto-labelled, some are hand-labelled, some are initially labelled with something like CLIP/BLIP/booru tags and then corrected a bit by hand. The newest thing, though, is using LLMs with image support like GPT-4 to label the images, which does a much better job most of the time.

Your understanding of the attack was the same as mine: it injects just the right kinds of pixels to throw off the auto-labellers and misdirect what they detect, causing the tags to get shuffled around.

Also on reddit today some of the Stable Diffusion users are already starting to train using Nightshade so they can implement it as a negative model, which might or might not work, will have to see.
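
For reference, the auto-labelling mentioned above can be as simple as running an off-the-shelf captioner. A minimal sketch with BLIP via the Hugging Face transformers library (the file name is a placeholder):

    # Auto-captioning with BLIP via Hugging Face transformers (the kind of
    # labeller a poisoning attack tries to mislead).
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

    image = Image.open("scraped_image.png").convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    caption_ids = model.generate(**inputs, max_new_tokens=30)
    print(processor.decode(caption_ids[0], skip_special_tokens=True))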



Even if no new images are being scraped to train the foundation text-to-image models, you can be certain that there is a small horde of folk still scraping to create datasets for training fine-tuned models, LoRAs, Textual Inversions, and all the new hotness training methods still being created each day.


The context windows of LLMs are now significantly larger than 2048 tokens, and there are clever ways to autopopulate context window to remind it of things.


[3] sounds really interesting - do you have a link?




If it doesn't work during inference, I really doubt it will have any intended effect during training; there is simply too much signal. The added adversarial noise works on the frozen, small proxy model they used (the CLIP image encoder, I think), but it doesn't work on a larger model trained on a different dataset. If there is any effect during training, it will probably just be the model learning that it can't take shortcuts (the artifacts working on the proxy model showcase gaps in its visual knowledge).

Generative models like text-to-image have an encoder part (explicit or not) that extracts the semantics from the noised image. If the auto-labelers can correctly label the samples, then an encoder trained on both actual and adversarial images will learn not to take the same shortcuts the proxy model took, making the model more robust. I cannot see an argument where this should be a negative thing for the model.



Yeah. At worst a simple img2img diffusion step would mitigate this, but just eyeballing the examples, traditional denoisers would probably do the job?

Denoising is probably a good preprocessing step anyway.
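
As a rough illustration of that pre-pass, a single non-local-means denoising step with OpenCV would look something like this (parameters are illustrative, not tuned against Nightshade):

    # Non-local-means denoising pre-pass over a scraped image (OpenCV).
    import cv2

    img = cv2.imread("training_image.png")
    # Smooths small high-frequency perturbations while mostly preserving
    # the structure a human viewer cares about.
    clean = cv2.fastNlMeansDenoisingColored(img, None, 5, 5, 7, 21)
    cv2.imwrite("training_image_denoised.png", clean)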



It’s a common preprocessing step and I believe that’s how glaze (this lab’s previous work) was defeated.


I can’t really see any difference in those images on the Twitter example when viewing it on mobile


The animation when you change images makes it harder to see the difference, I opened the three images each in its own tab and the differences are more apparent when you change between each other instantly.


But that’s not realistic?

If you have to have both and instantly toggle between them to notice the difference, then it sounds like it’s doing its job well and is hard to notice the difference.



What kind of artist is not going to be bothered with seeing huge artifacting on their work? Btw for me it was immediately noticeable even on mobile


The person who drew it would definitely notice.


One of the few times a 'blink comparator' feature in image viewers would be useful!


At full size it's super obvious - I made a side-by-side:

https://i.imgur.com/I6EQ05g.png
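
If you want to make such a comparison yourself, one simple approach is to amplify the per-pixel difference between the two files. A small sketch with Pillow and NumPy (file names are placeholders):

    # Amplify the per-pixel difference between the original and shaded image.
    import numpy as np
    from PIL import Image, ImageChops

    original = Image.open("original.png").convert("RGB")
    shaded = Image.open("nightshaded.png").convert("RGB")

    diff = np.asarray(ImageChops.difference(original, shaded), dtype=np.int32)
    Image.fromarray(np.clip(diff * 10, 0, 255).astype(np.uint8)).save("difference_x10.png")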



I still don't see a difference. (Mobile)


Here's a maybe more mobile friendly comparison:

https://i.imgur.com/zUVn8rt.png

But now that I double-check, I was comparing with the images zoomed to 200%. On desktop the artifacts are also noticeable at 100%, but not nearly as bad as in my previous comment.



Have you done a color blindness test before? Red-green is the most common type and the differences here are mostly shades of green.


The second picture looks like you were looking at it through a dirty window; there are lots of pale white stains, or light reflections, and it's really blurry.


What phone are you using? It’s extremely obvious on my iPhone


Something similar to jpeg artifacts on any surface with a normally smooth color gradient, in some cases rather significant.


I didn't see it immediately either, but there's a ton of added noise. The most noticeable bit for me was near the standing person's bent elbow, but there's a lot more that becomes obvious when flipping back and forth between browser tabs instead of swiping on Twitter.


look at the green drapes to the right, or any large uniform colored space. It looks similar to bad JPEG artifacts.


I don't have great vision, but me neither. They're indistinguishable to me (likewise on mobile).


I was on desktop and it looks like pretty heavy jpeg compression. Doesn't completely destroy the image, but it's pretty noticeable when blown up large enough.


It's really noticeable on desktop, like compressing an 800kb jpeg to 50kb. Maybe on mobile you won't notice, but on desktop the image looks blown out.


The gradient on the bat has blocks in it instead of being smooth.


It took me a minute too, but you can see some blocky artifacting by the elbow and a few spots elsewhere, like the curtain in the upper left.


Seems obvious that the people stealing would be adjusting their process to negate these kinds of countermeasures all the time. I don't see this as an arms race the artists are going to win. Not like the LLM folks can consider actually paying their way...the business plan pretty much has "...by stealing everything we can get our hands on..." in the executive summary.


Maybe it's more about "protecting" images that artists want to publicly share to advertise work, but it's not appropriate for final digital media, etc.


In short, anti-AI watermark.


Yeah. It may mess with the artist's vision but the impact is still way more subtle than other methods used to protect against these unwanted actions.

Of course I'm assuming it works to begin with. Sounds like a game of cat and mouse. And AI has a lot of rich cats.



Sir /u/b3nsn0w is courteous, `/nod`.


The artifacts are a non-issue. Images with Nightshade are intended to be silently scrapped and to avoid human filtering.


The artifacts are very much an issue for artists who don't want their images damaged for the mere possibility of them not being trained on by AI.

It's a bad tradeoff.



Nightshaded images aren't intended for portfolios. They're meant to be uploaded en masse and scraped later.


To where? A place no one sees them and they aren't scraped?


I think the point is that they're akin to a watermark.

Even before the current AI boom, plenty of artists have wanted to showcase their work/prove that it exists without necessarily making the highest quality original file public.



Most serious artists I know (at least in my community) release their high-quality images on Patreon or similar.


For example in accounts on image sites that are exposed to suspected scrapers but not to others. Scrapers will still see the real data, but they'll also run into stuff designed to mix up the training process.


do you mean scrapped or scraped?


scraped


> The artifacts are a non-issue.

According to which authority?



I find the AI training topic interesting, because it's really data/information that is involved. Forget about the fact that it's images or stories or Reddit posts, it's all data.

We are born and then exposed to the torrent of data from the world around us, mostly fed to us by other humans, this is what models are trying to tap.

Unfortunately our learning process is completely organic and takes decades and decades and decades; there's no way to put a model through this easily.

Perhaps we need to seed the web with AI agents who converse and learn as much like regular human beings as possible and assemble the dataset that way. Although having an agent browse and find an image to learn to draw from is still gonna make people reee even if that's exactly what a young and aspiring human artist would be doing.

Don't talk about humans being sacred; we already voted to let corporations be people, for the 1% to exist and "lobby", breaking our democracy so that they can get tax breaks and make corrupt under the table deals. None of us stopped that from happening...



Huge market for snake oil here. There is no way that such tools will ever win, given the requirement that the art remain viewable to human perception; so even if you made something that worked (which this sounds like it doesn't), from first principles it will be worked around immediately.

The only real way for artists or anyone really to try to hold back models from training on human outputs is through the law, ie, leveraging state backed violence to deter the things they don’t want. This too won’t be a perfect solution, if anything it will just put more incentives for people to develop decentralized training networks that “launder” the copyright violations that would allow for prosecutions.

All in all it’s a losing battle at a minimum and a stupid battle at worst. We know these models can be created easily and so they will, eventually, since you can’t prevent a computer from observing images you want humans to be able to observe freely.



I heard that flooding the net with AI-generated art would do much, much more harm to generative AI than whatever this is. Yes, this must be some snake oil salesman; those who take the problem seriously should turn AI's own weapon against AI.


The level of the claims, accompanied by enthusiastic reception from a technically illiterate audience, makes it look, smell, and sound like snake oil without much deep investigation.

There is another alternative to the law. Provide your art for private viewing only, and ensure your in person audience does not bring recording devices with them. That may sound absurd, but it's a common practice during activities like having sex.



That doesn't sound like a viable business model. There seems to be a non-trivial bootstrap problem involved -- how do you become well-known enough to attract audiences to private venues in sufficient volume to make a living? -- and would in no way diminish demand for AI-generated artwork which would still continue to draw attention away from you.


This would just create a new market for art paparazzi who would find any and all means to infiltrate such private viewings with futuristic miniature cameras and other sensors, and sell the results for a premium. Less than 24 hours later the files end up on hundreds or thousands of centralized and decentralized servers.

I'm not defending it. Just acknowledging the reality. The next TMZ for private art gatherings is percolating in someone's garage at the moment.



I find this difficult to believe; no matter how small your camera is, photography is about light. Art reproduction photography is surprisingly hard to do if you care about the quality of the end result. Unless you can surreptitiously smuggle in a studio lighting setup, tripod, and color checker card… sure you can take an image in secret, but not one that is a good representation of the real thing.


You could just build a stabilizer system and stand really still for 1 second, then expose for a longer time. Photography is aperture, ISO, and exposure time. This will gather enough light to do a proper exposure even in a dimly lit venue. Anything darker and every viewer will have a hard time seeing the private art. Another thing would be to crank up the ISO and denoise it later. It's much more lossy, but with this you could get lower exposure times.


It’s about number of photons and aperture. In principle this could be very hard to detect, especially once people get good at multiple distributed apertures that are coherent with one another.


The thing is people want the benefits of having their stuff public but not bear the costs. Scraping has been mostly a solved problem especially when it comes to broad crawling. Put it under a login, there, no more AI "stealing" your work.


I don't think that's true at all. Images and text get reposted with or without consent, often without attribution. It wouldn't make it right for the AI companies to scrape when the original author doesn't want that but someone else has ignored their wishes and requirements. Basically, what good is putting your stuff behind login or some other restrictive viewing method if someone just saves the image/text? I think it's still a relatively serious problem for people creating things. And without some form of easy access to viewing, the people creating things don't get the visibility and exposure they need to get an audience/clients.

This is one area where the AI companies should offer the olive branch, IMO. There must be a way to use steganography to transparently embed a "don't process for AI" code into an image, text, music, or any other creative work that won't be noticeable by humans, but that the AI would see if it tried to process the content for training. I think it would be a very convenient answer and probably not be detrimental to the AI companies, but I also imagine that the AI companies would not be very eager to spend the resources implementing this. I do think they're the best source for such protections for artists though.

Ideally, without a previous written agreement for a dataset from the original creators, the AI companies probably shouldn't be using it for training at all, but I doubt that will happen -- the system I mention above should be _opt-in_, that is, you must tag such content that is free to be AI trained in order for AI to be trained on it, but I have 0 faith that the AI companies would agree to such a self-limitation.

edit: added mention to music and other creative works in second paragraph 1st sentence

edit 2: Added final paragraph as I do think this should be opt-in, but don't believe AI companies would ever accept this, even though they should by all means in my opinion.
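
As a purely hypothetical sketch of the opt-out marker idea above, here is what a naive least-significant-bit embedding might look like; the marker string and scheme are invented for illustration, and plain LSB would not survive re-encoding, so a real standard would need something more robust:

    # Hypothetical LSB opt-out marker; PNG only, since lossy formats destroy the bits.
    import numpy as np
    from PIL import Image

    MARKER = "NO-AI-TRAINING"  # invented tag, not a real standard

    def embed_marker(path_in, path_out):
        pixels = np.array(Image.open(path_in).convert("RGB"))
        bits = np.unpackbits(np.frombuffer(MARKER.encode(), dtype=np.uint8))
        flat = pixels.reshape(-1)                      # assumes the image has enough pixels
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        Image.fromarray(flat.reshape(pixels.shape)).save(path_out, format="PNG")

    def has_marker(path):
        flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
        recovered = np.packbits(flat[:len(MARKER) * 8] & 1).tobytes()
        return recovered == MARKER.encode()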



Here are my 2 cents: I think we will need laws specifying two types of AI models, ones trained with full consent (opt-in) for their training material and ones without. The first would be like Adobe's Firefly model, where they allegedly own everything they trained it with, or something where you go around asking for consent for each thing in your training corpus (probably infeasible for large models). Maybe things in the public domain would be OK to train with. In this case there are no restrictions, and the output from such models can even be copyrighted.

Now for the second type, representing models such as Stable Diffusion and ChatGPT, it would be required to have their trained model freely available to anyone, and any resulting output would not be copyrightable. It may be a fairer way of allowing anyone to harness the power of AI models that contain essentially the knowledge of all mankind, without giving any party an unfair monopoly on it.

This should be easily enforceable for big corporations; it would be too obvious if they tried to pass one type of model off as the other, or to keep the truth about their model from leaking. It might not be as easy to keep small groups or individuals from breaking those rules, but hey, at least it evens the playing field.



Is that login statement strictly true? Unless the login is paid, there's no reason we can't get to (if not already there) the point where the AI scraper can just create a login first.


No, enforcing click-wrap legal agreements is actually possible. With basic KYC the scraper would instantly open itself up to litigation, and no internet art piece is frankly worth this sort of trouble.


But then you can rate-limit to a point where scraping everything will take a considerable amount of time.

Of course the workaround would be to have multiple accounts, but that in turn can be made unscalable with a "prove you're human" box.



you are not incorrect that this would help mitigate, but it still misses a few key points I think regarding why artists are upset about AI generation

- This is still vulnerable to stuff like mturk or even just normal users who did get past the anti-bot things pulling and re-uploading the content elsewhere that is easier for the AI companies to use

- The artists' main contention is that the AI companies shouldn't be allowed to just use whatever they find without confirming they have a license to use the content in this way

- If someone's content _does_ get into an AI model and it's determined somehow (I think there is a case with a newspaper and ChatGPT over this very issue?), the legal system doesn't really have a good framework for this situation right now -- is it copyright infringement? (arguably not? it's not clear) is it plagiarism? (arguably yes, but plagiarism in the US court system is very hard to prove/get action on) is it license violation? (for those who use licenses for their art, probably yes, but it's the same issue as plagiarism -- how to prove it effectively?)

Really what this comes down to is that the AI companies use the premise that they have a right to use someone else's works without consent for the AI training. While your suggestions are technically correct, it puts the impetus on the artists that they must do something different because the AI companies are allowed to train their models as they currently do without recourse for the original artist. Maybe that will be ruled true in the future I don't know, but I can absolutely get why artists are upset about this premise shaping the discussion on AI training, as such a premise negates their rights as an artist and many artists have 0 path for recourse. I'm pretty sure that OpenAI wouldn't think about scraping a Disney movie from a video upload site just because it's open access since Disney likely can fight in a more meaningful way. I would agree with artists who are complaining that they shouldn't need to wait for a big corporation to decide that this behavior is undesirable before real action is taken, but it seems that is going to be what is needed. It might be reality, but it's a very sad reality that people want changed.



True I can imagine that kind of thing becoming popular.


>There is no way that such tools will ever win, given the requirements the art remain viewable to human perception

On the other hand, the adversarial environment might push models towards a representation more aligned with human perception, which is neat.





> Huge market for snake oil here.

This tool is free, and as far as I can tell it runs locally. If you're not selling anything, and there's no profit motive, then I don't think you can reasonably call it "snake oil".

At worst, it's a waste of time. But nobody's being deceived into purchasing it.



If there is a danger from "snake oil" of this type, it'd be from the other side, where artists are intentionally tricked into believing that tools like this mean that AI isn't or won't be a threat to their copyrights, in order to get them to stop opposing it so strongly, when in fact the tool does nothing to prevent their copyrights from being violated.

I don't think that's the intention of Nightshade, but I wouldn't put it past someone to try it.



There's an academic paper being published.

Snake oil for the sake of getting published is a very real problem that does exist.



Religion is also deceptive and snake-oil even if it does not involve profit driven motivations.


It very often does involve such motivations, though I agree with your larger point.


This is the hard reality. There is no putting this genie back in the bottle.

The only way to be an artist now is to have a unique style of your own, and to never make it online.



"and to never make it online."

So then of course, you also cannot sell your work, as the buyers might put it online. And you cannot show your art to big crowds, as some will take pictures and put them online. So ... you can become a literal underground artist, where only some may see your work. I think only some will like that.

But I actually disagree, there are plenty of ways to be an artist now - but most should probably think about including AI as a tool, if they still want to make money. But with the exception of some superstars, most artists are famously low on money - and AI did not introduce this. (all the professional artists I know, those who went to art school - do not make their income with their art)



GP almost certainly meant "make physical art." Pictures of that can get online, but it's not the real thing.


Can you elaborate on how they supplement their income?


Every other source of income? So other, art-unrelated jobs.


Everything old is new again. It's the same thing with any DRM that happens on the client side. As long as it's viewable by humans, someone will figure out a way to feed that into a machine.


You don't need it to be visible. You only need it to be scraped to poison the models. I think that's the idea.


I'm thinking — is it possible to create something on a global level similar to what they did in Snapchat: some sort of image flickering that would be difficult to parse, but still acceptable for humans?


Sorry, I do not use Snapchat, and by googling "Snapchat image flickering" I did not find a good result. Could you elaborate on this a bit more or provide me with a link where this is described? Thank you very much. :)


My guess is that at some point in time you will not be able to use any generated image or video commercially, because of 100% copyright claims for using parts of copyrighted images. Like YouTube these days, when some random beeps match someone's music...


It should be like that. I agree


"A law, ie, leveraging state backed violence to deter the things they don’t want."

We all know what a law is; you don't need to clarify. It makes your prose less readable.



Other people pointed out they appreciated this prose. It’s easy to forget what exactly people are asking for when they talk about regulating the training of machine learning models.


> leveraging state backed violence to deter the things they don’t want

I just want to say: I really appreciate the stark terms in which you've put this.

The thing that has come to be called "intellectual property" is actually just a threat of violence against people who arrange bytes in a way that challenges power structures.



A few months ago I made a proof-of-concept on how finetuning Stable Diffusion XL on known bad/incoherent images can actually allow it to output "better" images if those images are used as a negative prompt, i.e. specifying a high-dimensional area of the latent space that model generation should stay away from: https://news.ycombinator.com/item?id=37211519

There's a nonzero chance that encouraging the creation of a large dataset of known tampered data can ironically improve generative AI art models by allowing the model to recognize tampered data and allow the training process to work around it.
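
For context, using such a learned "known-bad" concept at generation time looks roughly like this with the diffusers SDXL pipeline; the negative-prompt token "wrong" is a stand-in for whatever the fine-tune was trained to associate with bad images, and a CUDA GPU is assumed:

    # Generating with a learned "bad image" concept as a negative prompt (diffusers SDXL).
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt="a watercolor painting of a lighthouse at dusk",
        negative_prompt="wrong",   # stand-in token for the learned bad-image concept
        num_inference_steps=30,
    ).images[0]
    image.save("lighthouse.png")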



This seems like a pretty pointless "arms race" or "cat and mouse game". People who want to train generative image models and who don't care about what artists think about it at all can just do some basic post-processing on the images that is just enough to destroy the very carefully tuned changes this Nightshade algorithm makes. Something like resampling it to slightly lower resolution and then using another super-resolution model on it to upsample it again would probably be able to destroy these subtle tweaks without making a big difference to a human observer.
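
The resample-and-restore idea might look roughly like the following, with plain Lanczos resizing standing in for a real super-resolution model (file names are placeholders):

    # Downscale then restore; plain Lanczos resizing stands in for a super-resolution model.
    from PIL import Image

    img = Image.open("scraped_image.png")
    w, h = img.size
    small = img.resize((int(w * 0.75), int(h * 0.75)), Image.LANCZOS)
    small.resize((w, h), Image.LANCZOS).save("scraped_image_cleaned.png")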

In the future, my guess is that courts will generally be on the side of artists because of societal pressures, and artists will be able to challenge any image they find and have it sent to yet another ML model that can quickly adjudicate whether the generated image is "too similar" to the artist's style (which would also need to be dissimilar enough from everyone else's style to give a reasonable legal claim in the first place).

Or maybe artists will just give up on trying to monetize the images themselves and focus only on creating physical artifacts, similar to how independent musicians make most of their money nowadays from touring and selling merchandise at shows (plus Patreon). Who knows? It's hard to predict the future when there are such huge fundamental changes that happen so quickly!



It is a funny appearance (a weird viewpoint) that artists are furious about losing their monopoly on stealing and cloning components from other artists and recomposing them into a similar but new thing.

And that OpenArt, on the analogy of open source, is a non-existent thing (I know, I know, different things; source code is not for a general audience and can be hidden at will, unlike art; just having some generative thoughts here ;) )



>Or maybe artists will just give up on trying to monetize the images themselves and focus only on creating physical artifacts, similar to how independent musicians make most of their money nowadays from touring and selling merchandise at shows (plus Patreon).

As is, art already isn't a sustainable career for most people who can't get a job in industry. The most common monetization is either commissions or hiding extra content behind a pay wall.

To be honest, I can see more proverbial "furry artists" sprouting up in a cynical timeline. I imagine, as with every other big tech wave, that the 18+ side of this will be clamped down on hard by the various powers that be. Which means NSFW stuff will be shielded a bit from the advancement, and you either need to find underground training models or go back to an artist.



>need to find underground training models

It's not particularly that hard. The furry nsfw models are already the most well developed and available models you can get right now. And they are spitting out stuff that is almost indistinguishable from regular art.



> This seems like a pretty pointless "arms race" or "cat and mouse game".

If there is any "point" to this, it's that it's going to push the AI models to become better at capturing how humans see things.



> musicians make most of their money nowadays from touring and selling merchandise at shows

Be reminded that this is - and has always been - the mainstream model of the lineages of what have come to be called "traditional" and "Americana" and "Appalachian" music.

The Grateful Dead implemented this model with great finesse, sometimes going out of their way to eschew intellectual property claims over their work, in the belief that such claims only hindered their success (and of course, they eventually formalized this advocacy and named it "The Electronic Frontier Foundation" - it's no coincidence that EFF sprung from deadhead culture).



the point is you could circumvent one nightshade, but as long as the cat and mouse game continues there can be more


This feels like it'll actually help make AI models better versus worse once they train on these images. Artists are basically, for free, creating training data that conveys what types of noise does not change the intended meaning of the image to the artist themselves.


I want progressive fees on copyright/IP/patent usage, and worldwide gov cooperation/legislation (and perhaps even worldwide ability to use works without obtaining initial permission, although let's not go into that outlandish stuff)

I want a scaling license fee to apply (e.g. % pegged to revenue. This still has an indirect problem with different industries having different profit margins, but still seems the fairest).

And I want the world (or EU, then others to follow suit) to slowly reduce copyright to 0 years* after artists death if owned by a person, and 20-30 years max if owned by a corporation.

And I want the penalties for not declaring usage** / not paying fees, to be incredibly high for corporations... 50% gross (harder) / net (easier) profit margin for the year? Something that isn't a slap on the wrist and can't be wriggled out of quite so easily, and is actually an incentive not to steal in the first place.)

[*]or whatever society deems appropriate.

[**]Until auto-detection (for better or worse) gets good enough.

IMO that would allow personal use, encourages new entrants to market, encourages innovation, incentivises better behaviour from OpenAI et al.



> And I want the world (or EU, then others to follow suit) to slowly reduce copyright to 0 years* after artists death if owned by a person, and 20-30 years max if owned by a corporation.

Why death at all?

It's icky to trigger soon after death, it's bad to have copyright vary so much based on author age, and it's bad for many works to still have huge copyright lengths.

It's perfectly fine to let copyright expire during the author's life. 20-30 years for everything.



Extremely naive to think that any of this could be enforced to any adequate level. Copyright is fundamentally broken and putting some plasters on it is not going to do much especially when these plasters are several decades too late.


The number of people who are going to be able to produce high fidelity art with off the shelf tools in the near future is unbelievable.

It’s pretty exciting.

Being able to find a mix of styles you like and apply them to new subjects to make your own unique, personalized, artwork sounds like a wickedly cool power to give to billions of people.



In terms of art, population tends to put value not on the result, but origin and process. People will just look down on any art that’s AI generated in a couple of years when it becomes ubiquitous.


This is already the case. Art is a process, a form of human expression, not an end result.

I'm sure OpenAI's models can shit out an approximation of a new Terry Pratchett or Douglas Adams novel, but nobody with any level of literary appreciation would give a damn unless fraud was committed to trick readers into buying it. It's not the author's work, and there's no human message behind it.



Novels aren't about a message. They're entertainment. If the novel is entertaining then it's irrelevant whether there is or isn't a message in it. Besides, literature enthusiasts will invent a message for a popular story even if there never was one.

Also, I'm sure that you can eventually just prompt the model with the message you want to put into the story, if you can't already do that.



I haven’t read anything “shit out” by any LLM that even nearly approaches the level of quality by the authors you named — would very much like to see something like that - do you have any evidence for your claims?

AFAICT current text generation is something approaching bad mimicry at best and downright abysmal in general. I think you still need a very skilled author and meaty brain with a story to tell to make use of an LLM for storytelling. Sure it’s a useful tool that will make authors more effective but we are far from the point where you tell the LLM “write a story set in Pratchetts Discworld” and something acceptable or even entertaining will be spit out - if such a thing can even be achieved.



Thing is, there are way more good books written than any single person can consume in their lifetime. As an average person, reading a mixed diet of classics, obscure recommendations, and what's popular right now, I still don't feel like I'm making a dent in the pile of high-quality written content.

Given all that, the purpose of LLMs should be to create tailor made content to everyone's tastes. However, it seems the hardcore guardrails put into GPT4 and Claude prevent it from generating anything enjoyable. It seems, even the plot of the average Star Wars movie is too spicy for modern LLM sensibilities, never mind something like Stephen King.



I disagree. I definitely value modern digital art more than most historical art, because it just looks better. If AI art looks better (and in some cases it does) then I'll prefer that.


That's totally fine, everyone's definition of art is subjective. But the general value of a piece as art will still be zero for AI-generated ones, just like any IKEA / Amazon print piece. You just pay for the "looks pretty", the frame, and the paper.


>You just pay for the “looks pretty”, frame and paper.

But you pay that for any piece of art though? You appreciate it because you like what it looks like. The utility of it is in how good it looks, it's not how much effort was put into it.

If you need a ditch you're not going to value the ditch more if the worker dug it by hand instead of using an excavator. You value it based on the utility it provides you.



That analogy doesn’t work for art, since worker’s ditch is result based. There are no feelings like “i like this ditch”, “experience of a ditch” or “i’m curious how this ditch was dug”.

Again, i’m not saying buying a mass made AI art will be wrong. Just personally speaking, it will never evoke any feelings other than “looks neat” for me. So its inherent “art value” is close to 0 as I can guess its history is basically someone put in a prompt and sent it to print (which I can do myself on my phone too!). It’s the same as looking at cool building pics on my phone (0 art value) versus actually seeing them in person (non-0), mostly because the feelings I get from it. That being said, if it makes others happy, it’s not my place to judge.



> population tends to put value not on the result, but origin and process

I think population tends to value "looks pretty", and it's other artists, connoisseurs, and art critics who value origin and process. Exit Through the Gift Shop sums this up nicely



https://en.wikipedia.org/wiki/Labor_theory_of_value

According to Marx, value is only created with human labour. This is not just a Marxist theory, it is an observation.

There may be lots of over-priced junk that makes you want to question this idea. But let's not nit-pick on that.

In two years time people will not see any value in AI art, quite correctly because there is not much human labour in creating it.



In two years time, no one will know what was created with AI, what was created by humans, or what was created by both.


> According to Marx, value is only created with human labour. This is not just a Marxist theory, it is an observation.

And yet it's completely and absolutely wrong. Value is created by the subjective utility offered to the consumer, irrespective of what inputs created the thing conveying that utility.



Labor theory of value is quite controversial, many economists call it tautological or even metaphysical. I also don't really see what LTV has to say about AI art, if anything, except that the economic value generated by AI art should be distributed to everybody and not just funneled to a few capitalists at the top. I would agree with that. It's true that more jobs get created even as jobs are destroyed, but it's also true that just as our ancestors fought for a 40 hour work week and a social safety net, we should be able to ask for more as computers become ever so productive.


> This is not just a Marxist theory, it is an observation.

Yeah? Well, you know, that's just like uh, your opinion, man



Nope, but I already look down on artists who refuse to integrate generative AI into their processes.


Can you share some of the art you’ve made with generative AI?


Cool, who are you?


People who use generative AI in their processes are not artists.


And people who use Photoshop are?

There is a somewhat famous digital artist from Russia, Alexey Andreev. Google him; he has a very distinctive style of realistic technique and surrealistic situations, like a big manta ray landing on the deck of an aircraft carrier. Or you can see his old works in his LJ, which hasn't been updated in five years [1].

Now he uses generative AI as one of his tools, like Photoshop, like different (unrealistic!) brushes in Photoshop, like other digital tools. His style is still 100% recognizable and his works haven't become worse or more "generic". Is he still an artist? I think so.

Where will you draw the line?

[1] - https://alexandreev.livejournal.com/



I don’t think this is quite right. I think paraphrasing The Incredibles has a better take:

When everybody is an artist, then nobody will be one.



I use generative AI to rubber duck and help improve my code.

Am I no longer a software engineer?



This is true. They are just taking a sample from a generated latent space, just like taking a photo of something doesn't make you an artist.


So, there are no artists in, for example, street photography? Must a picture be altered, or staged, to become art?

Was it irony? :)



They are photographers. Here is the definition of an artist so you can have better clarity on what an artist is:

"A person who creates art (such as painting, sculpture, music, or writing) using conscious skill and creative imagination"



I took gp as satire. But maybe not haha.


> Being able to find a mix of styles you like and apply them to new subjects to make your own unique, personalized, artwork sounds like a wickedly cool power to give to billions of people.

And in the process, they will obviate the need for Nightshade and similar tools.

AI models ingesting AI generated content does the work of destroying the models all by itself. Have a look at "Model Collapse" in relation to generative AI.



It'll be about as wickedly cool as the ability to get on the internet, i.e. commoditized, transactional, and boring.


I know this is an unpopular thing to say these days, but I still think the internet is amazing.

I have more access to information now than the most powerful people in the world did 40 years ago. I can learn about quantum field theory, about which pop star is allegedly fucking which other pop star, etc.

If I don't care about the law I can read any of 25 million books or 100 million scientific papers all available on Anna's Archive for free in seconds.



As Jeff Bezos recently said on the Lex podcast: one of the greatest compliments you can give an inventor is that their invention will be taken for granted by future generations.

“It won’t be any more wickedly cool than the internet” - saying something won’t be any more wickedly cool than the most profound and impactful pieces of infrastructure human civilization has erected is a pretty high compliment.



And we only had to alienate millions of people from their labor to do it.


Absolutely agree we should allow people to accumulate equity through effective allocation of their labor.

And I also agree that we shouldn’t build systems that alienate people from that accumulated equity.



Yeah, sadly those millions of people don’t matter in the grand scheme of things and were never going to profit off their work long term


Where the "grand scheme of things" are the quarter reports of a few AI-invested companies?


What a bummer of a thing to say.

Those millions/billions of people matter a great deal.



They matter, but not under the current system. Art is a rarely paid profession; there are professional artists out there, but there's now a huge number of people who will never contact an artist for work that used to be human-powered only. It's not personal for me. I understand the desire to resist the inevitable, but it's here now.

For what it’s worth I never use midjourney or dalle or any of the commercial closed systems that steal from artists but I know I can’t stop the masses from going there and inputting “give me pretty picture in style x”



Resistance is important imo. If this happens and we, who work in this industry, say nothing, what good are we. It's only inevitable if it's socially acceptable.


Worth it.


Is this utilitarianism?


Not really. There is a reason why we find realistic painting to be more fascinating than a photo and why some still practice it. The effort put in by another artist does affect our enjoyment.


For me it doesn’t. I’m generating images, realistic, 2.5d, 2d and I like them as much. I don’t feel (or miss) what you described. Or what any other arts guy describes, for that matter. Arts people are different, because they were trained to feel something a normal person wouldn’t. And that’s okay, a normal person without training wouldn’t see how much beauty and effort there is in an algorithm or a legal contract as well.


The word "we" is doing a lot of heavy lifting here. A large majority of consumers can't even tell apart AI-generated from handmade, let alone care who or what made the thing.


Yeah, that's just information you made up on the spot.


With this "solution" it looks like the world of art enters the cat-and-mouse game the ad blockers were playing for the last decade or two.


I just tested it with Azure AI image classification and it worked - so this cat is yet to adapt to the mouse’s latest idea.

I still feel it is absolutely wrong to roam around the internet and scrape images (without consent) in order to power one’s cash cow AI. I hope more methods to protect artworks (including audio and other formats) become more accessible.



Artists copy from each other all the time. Arguably, culture exists because of copying (folk stories by necessity); copyright makes culture top-down and stagnant, and you can't avoid it because they have the money to shove it right in your face. Who wants trickle-down culture?


it's not an artist, it's a piece of software

in the same way bittorrent or gzip is



Sure. The person using it has intent. Now we have come to a point in which intent alone is art. Let there be light.


I might be missing something because I don't know much about the architecture of either Nightshade or AI art generators, but I wonder if you could try to have a GAN-like architecture (an extra model trying to trick the model) for the part of the generator that labels images to build resistance to Nightshade-like filters.


It doesn't even have to be a full GAN, you only need to train the discriminator side to filter out the data. Clean reference images + Nightshade would be the generator side.
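
A minimal sketch of that discriminator-only idea, assuming a folder of clean images and a folder of Nightshaded copies and using an off-the-shelf torchvision backbone (paths and hyperparameters are illustrative):

    # Binary classifier that flags suspected Nightshaded images so they can be dropped.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    # Expects a layout like poison_detector/{clean,shaded}/*.png (placeholder paths).
    data = datasets.ImageFolder("poison_detector", transform=tf)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)   # clean vs. shaded
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for epoch in range(3):
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()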


Long-term I think the real problem for artists will be corporations generating their own high quality targeted datasets from a cheap labor pool, completely outcompeting them by a landslide.


In the short-to-medium term, we're seeing huge improvements in the data efficiency of generative models. We haven't really started to see self-training in diffusion models, which could improve data efficiency by orders of magnitude. Current models are good at generalisation and are getting better at an incredible pace, so any efforts to limit the progress of AI by restricting access to training data are a speedbump rather than a roadblock.


It will democratize art.


Art is already democratized. It has been for decades. Everyone can pick it up at zero cost. Even you!

The poorest people have historically produced great art. Training a model, however? Expensive. Running it locally? Expensive. Paying the sub? Expensive.

Nothing is being democratized, the only thing this does is devaluing the blood and sweat people have put into their work so FAANG can sell it to lazy suckers.



then it won't be art anymore, it'll just be mountains of shit

sorta like what the laptop did for writing



This is a good point. There hasn’t been any writing since the release of the Gateway Solo in 1995


What the article doesn't illustrate is that it destroys fine detail in the image, even in the thumbnails of the reference paper: https://arxiv.org/pdf/2310.13828.pdf

Also... Maybe I am naive, but it seems rather trivial to work around with a quick prefilter? I don't know if traditional denoising would be enough, but worst case you could run img2img diffusion.




The poisoned images aren't intended to be viewed, but rather to be scraped and to pass a basic human screen. You wouldn't be able to denoise them, as you'd have to denoise the entire dataset; the entire point is that these are virtually indistinguishable from typical training set examples, but they can push prompt frequencies around at will with a small number of poisoned examples.


> You wouldn't be able to denoise as you'd have to denoise the entire dataset

Doing that requires much less compute than training a large generative image model.



I guess the idea is that the model trainers are ignorant of this and wouldn't know to preprocess/wouldn't bother?

That's actually quite plausible.



> I guess the idea is that the model trainers are ignorant of this

Maybe they're ignorant of it right up until you announce it, but then they're no longer ignorant of it.



Right, but they aren't necessarily paying attention to this.

I am not trying to belittle foundational model trainers, but a lot goes on in ML land. Even groups can't track every development.



> the entire point is that these are virtually undetectable from typical training set examples

I'll repeat this point for clarity. After going over the paper again, denoising shouldn't affect this attack; what matters is the ability of plausible images to avoid detection by human or AI discriminators (yet)



This is fantastic. If companies want to create AI models, they should license the content they use for the training data. As long as there are not sufficient legal protections and the EU/Congress do not act, tools like these can serve as a stopgap and maybe help increase pressure on policymakers


It's going to be interesting to see how the lawsuits against OpenAI by content creators play out. If the courts rule that AI-generated content is a derivative work of all the content it was trained on, it could really flip the entire gen AI movement on its head.


If it were a derivative work[1] (and sufficiently transformational), then it would be allowed under current copyright law and might not be the slam dunk ruling you were hoping for.

[1] https://en.wikipedia.org/wiki/Derivative_work



Oh, interesting, I didn't realize that's how it worked. Thanks for the additional context around this. Guess it's not as upending as I thought it could be.


Not if it is AI generated. So far only humans can be original enough to warrant copyrights, at least in the US.

BTW, the right to prepare derivative works belongs to the copyright holder of the reference work.

I doubt that many AI works are in fact derivative works. Sure, some bear enough similarity, but the vast majority likely doesn't.



"sufficiently transformational" is carrying a lot of water here. At minimum it would cloud the issue and might expose anyone using AI to lawsuits where they'd potentially have to defend each generated image.


Sufficiently transformational only applies to copyrightability, but AI works are not copyrightable under current US law, so it's a non-issue.


My biggest fear is that the big players will drop a few billion dollars to make the copyright holders with power go away, and new rules will be put in place that make open-source models, which can't do the same, essentially illegal.


…then I'll keep enjoying my Stable Diffusion and pirated models.


> they should license the content they use for the training data

You mean like OpenAI and Adobe?

Only the free and open-source models didn't license any content for the training data.



Adobe is training off of images stored in their cloud systems, per their Terms of Service.

OpenAI has provided no such documentation or legal guarantees, and it is still quite possible they scraped all sorts of copyright materials.



Google scrapes copyrighted material every day and then presents that material to users in the form of excerpts, images, and entire book pages. This has been ruled OK by the courts. Scraping copyrighted information is not illegal or we couldn't have search engines.


Scraping is only legal if it's temporary and transformational. If Google started selling the scraped images it would be a different story.


What is not transformational for generative AI ?


Google is not presently selling "we trained an AI on people's art without permission, and you can type their name in along with a prompt to generate a knockoff of their art, and we charge you money for this". So it's not really a 1:1 comparison, since there are companies selling the thing I described right now.


That pretty clearly would fall under transformative work. It is not illegal for a human to paint a painting in the style of, say, Banksy, and then sell the resulting painting.


Humans and AI are not the same thing, legally or physically. The law does not currently grant AI rights of any kind.


If a human isn't violating the law when doing that thing, then how is the machine violating the law when it cannot even hold copyright itself?


In some locales sitting on the street writing down a list of people coming and going is legal, but leaving a camera pointed at the street isn't. Legislation like that makes a distinction between an action by a person (which has bounds on scalability) and mechanized actions (that do not).


I'm not sure how to explain this any clearer: Humans and machines are legally distinct. Machines don't have the rights that humans have.


Fair Use is the relevant protection and is not specific to manual creation. Traditional algorithms (e.g: the snippets, caching, and thumbnailing done by search engines) are already covered by it.


What's not prohibited is allowed, at least in the US.


No they are not. They train their models on Adobe Stock content. They do not train on user content.

https://helpx.adobe.com/manage-account/using/machine-learnin...

"The insights obtained through content analysis will not be used to re-create your content or lead to identifying any personal information."

"For Adobe Firefly, the first model is trained on Adobe Stock images, openly licensed content, and public domain content where the copyright has expired."

(I work for Adobe)



> OpenAI has provided no such documentation

OpenAI and Shutterstock publicly announced their collaboration; Shutterstock sells AI-generated images, generated with OpenAI models.



There is, in fact, an extreme amount of circumstantial evidence that they intentionally and knowingly violated copyright en masse. It's been quite a popular subject in tech news the past couple weeks.


There is a small difference between any and all. OpenAI certainly didn't license all of the images they use for training.


source for OpenAI paying anyone a dime? don't you think that would set a precedent that everyone else deserves their cut?


>Like Glaze, Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image.

It's still noticeably visible.



Yeah, I've seen multiple artists complain about how glazing reduces image quality. It's very noticeable. That seems like an unavoidable problem given how AI is trained on images right now.


Isn't this just teaching the models how to better understand pictures as humans do? As long as you feed them content that looks good to a human, wouldn't they improve in creating such content?


You would think the economists at UChicago would have told these researchers that their tool would achieve the opposite effect of what they intended, but here we are.

In this case, the mechanism for how it would work is effectively useless. It doesn't affect OpenAI or other companies building foundation models. It only works on people fine-tuning these foundation models, and only if the image is glazed to affect the same foundation model.



Each time there is an update to the training algorithms and, in response, to the poisoning algorithms, artists will have to re-glaze, re-mist, and re-nightshade all their images?

Eventually I assume the poisoning artifacts introduced in the images will be very visible to humans as well.



I'm glad to see tools like Nightshade starting to pop up to protect the real life creativity of artists. I like AI art, but I do feel conflicted about its potential long term effects towards a society that no longer values authentic creativity.


Is the existence of the AI tool not itself a product of authentic creativity? Does eliminating barriers to image generation not facilitate authentic creativity?


No, it facilitates commoditization. Art – real art – is fundamentally a human-to-human transaction. Once everyone can fire perfectly-rendered perfectly-unique pieces of 'art' at each other, it'll just become like the internet is today: filled with extremely low-value noise.

Enjoy the short term novelty while you can.



This is the right prediction. Once machines can generate visual art, people will simply stop valuing it. We may see increased interest in other forms of art, e.g., live performance art like theater. It's hard to predict exactly how it'll play out, but once something becomes cheap to produce and widely available, it loses its luster for connoisseurs and then gradually loses its luster for everybody else too.


> Art – real art – is fundamentally a human-to-human transaction.

Why is this hippie nonsense so popular?



Because some things are different than others, even though they might have the same word to describe them.


Setting aside the efficacy of this tool, I would be very interested in the legal implications of putting designs in your art that could corrupt ML models.

For instance, if I set traps in my home which hurt an intruder we are both guilty of crimes (traps are illegal and are never considered self defense, B&E is illegal).

Would I be responsible for corrupting the AI operator's data if I intentionally include adversarial artifacts to corrupt models, or is that just DRM to legally protect my art from infringement?

edit:

I replied to someone else, but this is probably good context:

DRM is legally allowed to disable or even corrupt the software or media that it is protecting, if it detects misuse.

If an adversarial-AI tool attacks the model, it then becomes a question of whether the model, having now incorporated my protected art, is now "mine" to disable/corrupt, or whether it is in fact out of bounds of DRM.

So for instance, a court could say that the adversarial-AI methods could only actively prevent the training software from incorporating the protected media into a model, but could not corrupt the model itself.



None whatsoever. There is no right to good data for model training, nor does any contractual relationship exist between you and a model builder who scrapes your website.


If you're assuming this is open-shut, you're wrong. I asked this specifically as someone who works in security. A court is going to have to decide where the line is between DRM and malware in adversarial-AI tools.


I'm not. Malware is one thing; passive data poisoning is another. Mapmakers have long used such devices to detect/deter unwanted copying. In the US such 'trap streets' are not protected by copyright, but nor do they generate liability.

https://en.wikipedia.org/wiki/Trap_street



A trap street doesn't damage other data. Not even remotely useful as an analogy. That's to allow detection of copies, not to render the copies unusable.


Worth trying but I doubt it unless we establish a right to train.


The way Nightshade works (assuming it does work) is by confusing the features of different tags with each other. To argue that this is illegal would be to argue that mistagging a piece of artwork on a gallery is illegal.

If you upload a picture of a dog to DeviantArt and you label it as a cat, and a model ingests that image and starts to think that cats look like dogs, would anybody claim that you are breaking a law? If you upload bad code to Github that has bugs, and an AI model consumes that code and then reproduces the bugs, would anyone argue that uploading badly written code to Github is a crime?

What if you uploaded some bad code to Github and then wrote a comment at the top of the code explaining what the error was, because you knew that the model would ignore that comment and would still look at the bad code. Then would you be committing a crime by putting that code on Github?

Even if it could be proven that your intention was for that code or that mistagged image to be unhelpful to training, it would still be a huge leap to say that either of those activities were criminal -- I would hope that the majority of HN would see that as a dangerous legal road to travel down.



That’s like asking if lying on a forum is illegal


No, it's much closer to (in fact, it is simply) asking if adversarial AI tools count as DRM or as malware. And a court is going to have to decide whether the model and or its output counts as separate software, which it is illegal for DRM to intentionally attack.

DRM can, for instance, disable its own parent tool (e.g. a video game) if it detects misuse, but it can't attack the host computer or other software on that computer.

So is the model or its output, having been trained on my art, a byproduct of my art, in which case I have a legal right to 'disable' it, or is it separate software that I don't have a right to corrupt?



> asking if adversarial AI tools count as DRM or as malware

Neither. Nightshade is not DRM or malware, it's "lying" about the contents of an image.

Arguably, Nightshade does not corrupt or disable the model at all. It feeds it bad data that leads the model to generate incorrect conclusions or patterns about how to generate images. This is assuming it works, which we'll have to wait and see, I'm not taking that as a given.

But the only "corruption" happening here is that the model is being fed data that it "trusts" without verifying that what the data is "telling" it is correct. It's not disabling the model or crashing it; the model is forming incorrect conclusions and patterns about how to generate the image. If Google Translate asked you to rate its performance on a task, and you gave it a rating different from what you actually thought of its performance, is that DRM? Malware? Have you disabled Google Translate by giving it bad feedback?

I don't think the framing of this as either DRM or malware is correct. This is bad training data. Assuming it works, it works because it's bad training data -- that's why ingesting one or two images doesn't affect models but ingesting a lot of images does, because training a model on bad data leads the model to perform worse if and only if there is enough of that bad data. And so what we're really talking about here is not a question of DRM or malware; it's a question of whether or not artists have a legal obligation to make their data useful for training -- and of course they don't. The implications of saying that they did would be enormous; it would imply that any time you knowingly lied in something that was being fed into an AI training set, you were doing something illegal.
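
As a minimal illustration of that scale argument (assumptions: the synthetic 2-D "features" and the nearest-centroid "model" below are invented purely for demonstration and have nothing to do with any real image generator), mislabeled examples shift what a model learns a tag looks like, and the shift only becomes meaningful once the poisoned fraction is substantial:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 2-D "features": cat images cluster near (0, 0), dog images near (5, 5).
    cats = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(1000, 2))
    dogs = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(1000, 2))

    def learned_cat_centroid(poison_fraction):
        """Nearest-centroid 'cat' concept trained with some dog images mislabeled as cats."""
        n_poison = int(poison_fraction * len(dogs))
        cat_training_set = np.vstack([cats, dogs[:n_poison]])  # the mislabeled images slip in
        return cat_training_set.mean(axis=0)

    for frac in (0.0, 0.05, 0.25, 0.5):
        print(frac, learned_cat_centroid(frac).round(2))
    # The learned "cat" centroid drifts toward dog features only as the poisoned
    # fraction grows -- a handful of mislabeled images barely moves it.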



I see it as no different than mapmakers inventing a nonexistent alley, to check who copies their maps verbatim ("trap street"). Even if this caused, for example, a car crash because of an autonomous driver, the onus I think would be on the one that made the car and used the stolen map for navigation, and not on the one that created the original map.

https://en.wikipedia.org/wiki/Trap_street



Japan is considering it, I think? https://news.ycombinator.com/item?id=38615280


How would that situation be remotely related?


Insofar as the anger is about AIs being trained on particular intellectual property.

A made-up scenario¹ is that a person who is training an AI goes to the local library and checks out 600 books on art. The person then lets the AI read all of them, after which they are returned to the library and another 600 books are borrowed.

Then we can imagine the AI somehow visiting a lot of museums and galleries.

The AI will now have been trained on the style and looks of a lot of art from different artists

All the material has been obtained in a legal manner.

Is this an acceptable use?

Or can an artist still assert that the AI was trained with their IP without consent?

Clearly this is one of the ways a human would go about learning about styles, techniques, etc.

¹ Yes, you probably cannot borrow 600 books at a time. How does the AI read the books? I don't know. The simplest approach would be for the researcher to take a photo of each page. This would be extremely slow, but for this hypothetical it is acceptable.



I think the key difference here is that the most prominent image generation AIs are commercial and for-profit. The scenarios you describe are comparing a commercial AI to a private person. You cannot get a library card for a company, and you cannot bring a photography crew to a gallery without permission.


I love it. This undermines the notion of ground truth. What separates correct information from incorrect information? Maybe nothing! I love how they acknowledge the never ending attack versus defense game. In stark contrast to "our AI will solve all your problems".


For this to work, wouldn't you have to have an enormous number of artists collaborating on "poisoning" their images the same way (cow to handbag) while somehow keeping it secret from AI trainers that they were doing this? It seems to me that even if the technology works perfectly as intended, you're effectively just mislabeling a tiny fraction of the training data.


I’m completely flabbergasted by the number of comments implying copyright concepts such as “fair use” or “derivative work” apply to trained ML models. Copyright is for _people_, as are the entailing rights, responsibilities and exemptions. This has gone far beyond anthropomorphising and we need to like get it together, man!


You act like computers and ML models aren't just tools used by people.


What did I write to give you that impression?


My initial interpretation was that you're saying fair use is irrelevant to the situation because machine learning models aren't themselves legal persons. But, fair use doesn't solely apply to manual creation - use of traditional algorithms (e.g: the snippets, caching, and thumbnailing done by search engines) is still covered by fair use. To my understanding, that's why ronsor pointed out that ML models are tools used by people (and those people can give a fair use defense).

Possibly you instead meant that fair use is relevant, but people are wording remarks in a way that suggests the model itself is giving a fair use defence to copyright infringement, rather than the persons training or using it?



Well then I could have been much clearer because I meant something like the latter.

An ML model can neither hold copyright nor be in breach of it, so any discussion about how it works, and how that relates to how people work or “learn”, is beside the point.

What actually matters is firstly details about collation of source material, and later the particular legal details surrounding attribution. The last part involves breaking new ground legally speaking and IANAL so I will reserve judgement. The first part, collation of source material for training is emphatically not unexplored legal or moral territory. People are acting like none of the established processes apply in the case of LLMs and handwave about “learning” to defend it.



> and how that relates to how people work or “learn” is besides the point

It is important (for the training and generation stages) to distinguish between whether the model copies the original works or merely infers information from them - as copyright does not protect against the latter.

> The first part, collation of source material for training is emphatically not unexplored legal or moral territory.

Similar to as in Authors Guild v. Google, Inc. where Google internally made entire copies of millions of in-copyright books:

> > While Google makes an unauthorized digital copy of the entire book, it does not reveal that digital copy to the public. The copy is made to enable the search functions to reveal limited, important information about the books. With respect to the search function, Google satisfies the third factor test

Or in the ongoing Thomson Reuters v. Ross Intelligence case where the latter used the former's legal headnotes for training a language model:

> > verbatim intermediate copying has consistently been upheld as fair use if the copy is "not reveal[ed] to the public."

That it's an internal transient copy is not inherently a free pass, but it is something the courts take into consideration, as mentioned more explicitly in Sega v. Accolade:

> > Accolade, a commercial competitor of Sega, engaged in wholesale copying of Sega's copyrighted code as a preliminary step in the development of a competing product [yet] where the ultimate (as opposed to direct) use is as limited as it was here, the factor is of very little weight

And, given training a machine learning model is a considerably different purpose than what the images were originally intended for, it's likely to be considered transformative; as in Campbell v. Acuff-Rose Music:

> > The more transformative the new work, the less will be the significance of other factors



Listen, most website owners and book authors want to be indexed by Google. It brings a potential audience their way, so most don’t make use of their _right_ to be de-listed. For these models, there is no plausible benefit to the original creators, so one has to argue they have _no_ such right to be “de-listed” in order to get any training data currently under copyright.


> It brings potential audience their way, so most don’t make use of their _right_ to be de-listed.

The Authors Guild lawsuit against Google Books ended in a 2015 ruling that Google Books is fair use and as such they don't have a right to be de-listed. It's not the case that they have a right to be de-listed but choose not to make use of it.

The same would apply if collation of data for machine learning datasets is found to be fair use.

> one has to argue they have _no_ such right to be “de-listed” in order to get any training data currently under copyright.

Datasets I'm aware of already have respected machine-readable opt-outs, so if that were to be legally enforced (as it is by the EU's DSM Directive for commercial data mining) I don't think it'd be the end of the world.

There's a lot of power in a default; the set of "everything minus opted-out content" will be significantly bigger than "nothing plus opted-in content" even with the same opinions.



With the caveat that I was exactly wrong about the books de-listing, I feel you are making my point for me and retreating to a more pragmatic position about defaults.

The (quite entertaining) saga of Nightshade tells a story about what is going to be content creators’ “default position” going forward, and everyone else will follow. You would be a fool not to: the AI companies are trying to end-run you, using your own content, to make a profit without compensating you, and leave you with no recourse.



> I feel you are making my point for me and retreating to a more pragmatic position about defaults

I'm unclear on what stance I've supposedly retreated from. My position is that an opt-out is not necessary under current US law, but that it wouldn't be the worst-case outcome if new regulation were introduced to mandate it.

> The (quite entertaining) saga of Nightshade tells a story about what is going to be content creators “default position” going forward and everyone else will follow

By "default" I refer not to the most common choice, but to the outcome that results from inaction. There's a bias towards this default even if the majority of rightsholders do opt to use Nightshade (which I think is unlikely).



No one is saying a model is the legal entity. The legal entities are still people and corporations.


Oh come on, you’re being insincere. Whether or not the model is learning from the work just like people do is hotly debated, as if it would make a difference. Fair use is even brought up. Fair use! Even if it applied, these training sets collate all of everything.

I feel like I’m taking crazy pills TBQH



I really don't understand the anxiety of artists towards AI - as if creatives haven't always borrowed and imitated. Every leading artist has had acolytes, and while it's true no artist ever had an acolyte as prodigiously productive as AI will be, I don't see anything different between a young artist looking to Picasso for cues and Stable Diffusion or DALL-E doing the same. Styles and methods haven't ever been subject to copyright - and art would die the moment that changed.

The only explanation I can find for this backlash is that artists are actually worried just like the rest of us that pretty soon AI will produce higher quality more inventive work faster and more imaginatively than they can - which is very natural, but not a reason to inhibit an AI's creative education.



This has been litigated over and over again, and there have been plenty of good points made and concerns raised over it by those who it actually affects. It seems a little bit disingenuous (especially in this forum) to say that that conclusion is the "only explanation" you can come up with. And just to avoid prompting you too much: trust me, we all know or can guess why you think AI art is a good thing regardless of any concerns one might bring up.


[flagged]



Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.



I disagree that my comments are un-substantive and flamebait, but I understand that my comments may run counter to the agenda of the site.


The agenda is simply to have a discussion forum that doesn't suck, and in particular that doesn't regress to the internet mean.


I genuinely love this site to be clear and appreciate all the work that you do around here - but the agenda of the site is to service Y Combinator Management, LLC.


Oh yes, but the question is how best to do that. The answer is that we can maximize HN's value for YC simply by making the site as good as possible, because that's what makes the community as happy as possible, and that's what has value. I've written about this many times in case you or anyone want to read more: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so....

Trying to make HN as good as possible has many aspects, but one of them is the attempt to keep discussions at least a little more substantive than the level they would normally degenerate to without effort, and that means avoiding flamewar comments, fulmination, etc., as described in https://news.ycombinator.com/newsguidelines.html.



Imitation isn’t the problem so much as the fact that ML-generated images are composed of a mush of the images the model was trained on. A human artist can abstract the concepts underpinning a style and mimic it by drawing all-new lineart, coloration, shading, composition, etc., while the ML model has to lean on blending training imagery together.

Furthermore there’s a sort of unavoidable “jitter” in human-produced art that varies between individuals that stems from vastly different ways of thinking, perception of the world, mental abstraction processes, life experiences, etc. This is why artists who start out imitating other artists almost always develop their imitations into a style all their own — the imitations were already appreciably different from the original due to the aforementioned biases and those distinctions only grow with time and experimentation.

There would be greatly reduced moral controversy surrounding ML models if they lacked that mincemeat/pink slime aspect.



Doing the work to increase OpenAI's moat.


Obviously AIs can just train on images that aren't poisoned.


Is it possible to reliably detect whether an image is poisoned? If not then it achieves the goal of punishing entities which indiscriminately harvest data.


You can use older images, collected from before the "poisoning" software was released. Then you don't have to detect anything.

This, of course, assumes that "poisoning" actually works. Glaze and Nightshade and similar are very much akin to the various documented attacks on facial recognition systems. The attack does not exploit some fundamental flaw in how the systems work, but specific characteristics in a given implementation and version.

This matters because it means that later versions and models will inevitably not have the same vulnerabilities. The result is that any given defensive transformation should be expected to be only narrowly effective.



It's roughly in the same spot as reliably detecting whether you have permission to use the image in your training data set in the first place.

If it doesn't matter, then neither does the poisoning matter.



AIs have learned much tougher things. You just need a small data set of poisoned images to learn its features.


Cute. The effectiveness of any technique like this will be short-lived.

What we really need is clarification of the extent that copyright protection extends to similar works. Most likely from an AI analysis of case law.


