Belgian Infringement Case Prompts Publishers’ Project On Automated Access

27/09/2006 by Dugie Standeford for Intellectual Property Watch

Efforts by Belgian publishers’ representative Copiepresse to stop Google from displaying copyrighted content without permission have energized a global publishing industry initiative aimed at resolving the problem. With a trial set for November on claims that the search engine’s news service and “cache” infringe copyright, four major publishing organizations are set to launch a pilot project that aims to “find a machine-based solution to the machine-based activities” of Google, said London intellectual property attorney Laurence Kaye, who is advising the publishers. Meanwhile, one legal expert warned that the Copiepresse litigation could ultimately hurt publishers’ pocketbooks.

The Brussels Court of First Instance ruled on 5 September that the activities of Google News and the use of “Google cached” violate Belgian copyright law. It ordered the search engine to pull articles, photographs and other material of Belgian publishers of French- and German-language publications from its sites or face a daily fine of €1 million – and to post the entire judgment on the home pages of google.be and news.google.be for five consecutive days. Google, which at first failed to answer the case, challenged the posting requirement; it was upheld on 22 September, subject to a hearing on the merits of the case on 24 November. Other newspapers may join the lawsuit, Kaye said.

“From the beginning, Google has taken a very cavalier approach to copyright. 
In Google’s view, it should be up to publishers to opt out of their programmes rather than the way copyright has always worked – opt in.”

Search engines are great, Kaye said, but their activities, stripped down, amount to automated copying of “giant chunks” of third-party copyrighted material, indexing it and displaying it in response to users’ requests. The service has allowed Google to build “massive advertising revenues.” Publishers do not want to shutter search engines, “but they do want them to behave in a responsible way like other copyright users,” Kaye said.

To that end, the World Association of Newspapers, European Publishers Council (EPC), International Publishers Association and European Newspaper Publishers’ Association will pilot an Automated Content Access Protocol (ACAP) beginning 6 October at the Frankfurt Book Fair, said Kaye, who is advising on the project. ACAP will allow content providers to systematically express permissions relating to access and use of their content in a form that can be read by “crawlers,” so that search engine operators and any other users can automatically comply with applicable licenses or policies, the EPC said.

Protocols already exist to help website owners tell search engine “spiders” which areas of a site may be indexed. ACAP will not replace them, but will try to overcome their limitations, such as the simplistic nature of the permissions they can express – basically, “‘yes, please spider this page’ or ‘no, please do not spider this page.’”

During the 12-month pilot, publishers will develop terms and conditions for the search engines to which they have given authority to automatically search and index their works. If successful, the standard will allow all publishers to take a tailored approach to search engines, ultimately enriching users’ experiences, the EPC said. While the project will focus first on the needs of print publishers, it will be usable for every type of online content, including video and audio.
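The existing protocol the publishers refer to is the Robots Exclusion Protocol, expressed in a site’s robots.txt file. Its permissions really are binary per path, which is the limitation ACAP set out to address: a crawler can be told where it may go, but not under what license terms, display limits or caching conditions. A minimal sketch using Python’s standard-library parser (the site, paths and rules here are hypothetical, purely for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: everything is crawlable except /archive/.
rules = """\
User-agent: *
Disallow: /archive/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The protocol answers only yes/no per path -- nothing about licensing,
# caching or how much of the content may be displayed.
print(parser.can_fetch("Googlebot", "https://example.com/news/story.html"))    # True
print(parser.can_fetch("Googlebot", "https://example.com/archive/old.html"))   # False
```

A compliant crawler checks `can_fetch` before requesting each page; everything beyond that yes/no answer – the “terms and conditions” the pilot envisages – falls outside what robots.txt can carry.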
Possible ‘Boomerang’ Effect for Publishers?

Some questions have been raised about the publishers’ approach. Once someone publishes on the Web, he gives “an implicit license to search tools,” said Cedric Manara, an associate law professor at the École des Hautes Études Commerciales du Nord in France. Those who do not want their content indexed can turn to “noindex” and “nofollow” metatags, or to a robots.txt file. Some publishers have already signed agreements with Google giving them control over what is indexed. “This seems reasonable,” Manara said. “The lawsuit is probably excessive.”

If, nevertheless, it moves to judgment instead of settling, Belgian publishers “will soon realise that their lawsuit will have a boomerang effect,” Manara said. Now that they are no longer indexed, and Google has removed their links from Google News as well as from the entire search engine, will their traffic remain the same? “There is a French expression that says one cannot have the butter and the money made from the butter,” Manara said. The Copiepresse case amounts to the same thing: “You cannot be indexed on Google and complain that there are links directing to the article that is indexed, and not to the homepage.”

The case “goes to the heart of how search engines work,” Google said on its blog. Publishers benefit from the links, but if they do not want their websites to appear in search results, they can use a robots.txt file. “If a newspaper does not want to be part of Google News we remove their content from our index,” the company added. “All they have to do is ask.”

Dugie Standeford may be reached at email@example.com. 
"Belgian Infringement Case Prompts Publishers’ Project On Automated Access" by Intellectual Property Watch is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.