The Digital Millennium Copyright Act’s safe harbor provision makes it much easier for websites to make “fair use” of content than for humans, says an opinion piece by author Peter Wayner on Wired.com. In most cases, websites can get away with reusing images with no more liability than having to respond to a DMCA takedown notice. But humans aren’t so lucky.
Wayner tells the story of looking into adding photos to a book he was writing about the 1949 and 2012 productions of the play Death of a Salesman. Including such photos in a book would require paying royalties.
“After working through the often byzantine licensing matrices of major photo archives, I found the pictures would cost about $300-$600 per image — adding 20 images would easily add about $10,000 to the book budget. Would this be worth it? Would more people buy an illustrated book? An informal marketing survey suggested it wasn’t worth it; one friend told me flat out that if he wanted the pictures, he would just go to Google. And he was right: All the photos were there.”
Wayner’s point is that these aren’t just random image inclusions: search algorithms are increasingly making the same kinds of editorial decisions creators do. If you Google Death of a Salesman, you get an automatically curated collection of photos and facts that Google obtained for free, but that would have cost Wayner $10,000.
Algorithms that borrow too much can be hit with DMCA takedowns, Wayner says, but there are so many of them, and they work so much faster than humans, that keeping up is a Sisyphean task. So Wayner suggests there should be bots that can “make intelligent decisions about fair use” and tell when websites take reuse too far. They could even be smart enough to honor creators’ Creative Commons licenses. He uses the example of the YouTube monetizing bot, which lets content creators profit from ads on their content when someone else uploads it to YouTube.
While this is a laudable goal, I fear it’s a forlorn hope, for two reasons. First, “what is fair use” can change from situation to situation, or even from courtroom to courtroom; that’s why there’s a four-factor test for it that requires human judgment. Any time a content re-user says, “That’s fair use!” the creator can say, “Oh no it’s not!” and take them to court. While precedent has drawn certain bright lines that make it more likely such a case would be thrown out, the creator can still sue anyway.
And second, nobody has an incentive to build a bot that enforces only fair use. The content-reusing sites on the ’net have the incentive to be as inclusive as possible, and the content owners have the incentive to be as miserly as possible: search engines and aggregators want to use everything, and content owners want them to use nothing. How many stories have we seen of ridiculously broad DMCA takedowns hitting public-domain books and titles whose authors had granted permission? Sometimes these opposing pressures average out; you’d never have gotten the YouTube monetizer if it hadn’t been the only way YouTube could keep whole rafts of its videos from coming down. But not always.
We may just end up in a war of the content bots…with humans, who’d like to make such uses but can’t afford to, stuck in the middle.