US, European Views On IP Management And Digital Business
25/07/2017 by Guest contributor for Intellectual Property Watch

The views expressed in this article are solely those of the authors and are not associated with Intellectual Property Watch. IP-Watch expressly disclaims and refuses any responsibility or liability for the content, style or form of any posts made to this forum, which remain solely the responsibility of their authors.

By Magda Voltolini

Data-driven technologies are enabling the expansion of trade and data flows around the world. We have disruptive smart products, smart industrial processes, smart clouds and smart services. Traditional industries such as pharmaceuticals, chemicals and mechanical engineering are digitally transforming their production processes, using artificial intelligence to generate custom-tailored services and improve competitiveness, while new companies emerge with disruptive offers. Such artificial intelligence-based business models, such as those deployed by DeepMind and Pinterest, are prompting a rethinking of European copyright regulation, because machine learning may reproduce countless amounts of proprietary content to generate raw solutions. A recent event in Paris delved into these and other issues, including data ownership and access rights, as well as inventions by computers. Stakeholders are equally inquiring about data ownership and access rights, particularly considering the volume of data being generated and uncertainties in IP management.
For example, ABB partnered with the Microsoft Azure smart cloud and IBM Watson artificial intelligence in its digitalisation process, but who will own the raw data and the solutions produced from these partnerships? Industry also raises questions about creative computer inventions in relation to patent inventorship. On 4 May, the University of Strasbourg Center for International Intellectual Property Studies (CEIPI), the Bureau for Economic Theory and Applications (BETA), the CEIPI-BETA Project in Law and Economics of Intellectual Property, and the International Institute for IP Management (I3PM) invited high-level industry delegates, legal scholars, economists and policymakers to a conference entitled “Intellectual Property and Digitalization: Challenges for Intellectual Property” to tackle these questions and promote further research. All conference videos are available at http://www.canalc2.tv/video/14508. The event was an outcome of the initiative “I3PM meets Academia” and this article aims to report the main findings and policy recommendations shared during the occasion. For questions about this or future events please contact firstname.lastname@example.org

I3PM is a European network of IP managers whose mission is to promote IP management as an interdisciplinary profession, one that serves as an interface between the legal and economic sides of businesses and communicates IP as a value-added asset, said Peter Bittner, chairman of the Advisory Board of I3PM.
“The conference topic arose from a discussion of the I3PM Advisory Board, where it was found that the digital transformation is really hitting companies that do not originally come from the IT area but from the traditional chemical, pharmaceutical and mechanical engineering sectors, because they are having to reorganize their production processes and logistics and make everything digital in order to stay competitive.”

Christophe Geiger, professor of law at the University of Strasbourg and director general and director of the Research Department at CEIPI, said that the aim of the conference was to take an interdisciplinary approach to “the impact of digitalisation on the issue and understanding and use of IP in the future,” as well as “to produce a policy recommendation, some guidance in the field of IP.”

Xavier Seuba, senior lecturer and researcher at CEIPI, co-director and coordinator of the CEIPI-BETA Project, said the cooperation mirrors the relationship between law and economics in current debates and initiatives concerning innovation, competition and institutions. Seuba said: “Policymakers and legal communities expect contributions from economists, because economics partially explains [IP functions] by contextualizing the role that IP can fulfill in innovation, creativity, employment and so on. This creates space for policy discussion and, where needed, legal reform. It was in this context that the BETA and CEIPI project was created, specifically to promote fluent cooperation between researchers, a forum for discussion and research expertise in the areas of IP management, IP enforcement, open innovation, determinants of optimal patenting, and digitalisation and IP.
[The Project] provides publications, events and collaboration with other institutions.”

Keith Maskus, chief economist for the United States State Department, and Arts and Sciences Professor of Distinction and Professor of Economics at the University of Colorado, opened the event and addressed general framework issues and incentives in the area of copyrights and trade. He also recommended a few areas for research, which follow. After highlighting international trade statistics, Maskus commented on a “supportive framework” that can foster conditions for enabling and enhancing creativity in cross-border digital trade, based on think-tank literature, as follows:

“- Dynamic technology policy: support for R&D and experimentation in business models [flexibility is key for developing business models]
- Infrastructure for widespread and efficient connectivity
- Copyright protection that is transparent and strikes a balance between creative participants and well-structured limitations and exceptions
- Internet provider liability rules: in the US a fairly light regulatory touch has been effective [to generate market incentives for building major platform economies and so on]
- Access to finance and development of fintech can help small companies enter and compete
- The balance struck in the treatment of data privacy: some countries prefer strong controls over data dissemination and some, like the United States, opt for fairly permissive use by businesses, subject to appropriate risk management and liability for fraudulent use. This approach has supported the development of business models, sometimes without prior authorization of individuals
- Interoperability in relation to international trade barriers, postal costs, etc.”

With respect to copyrights, Maskus recommended balanced rules that influence global trade:

“- balance between permissive use for digital creativity and the need to permit content providers control
- balance between content developers’ and publishers’ rights on the one hand and education and scientific research on the other”

As to economic research needs, he highlighted that digital trade is in its infancy and that there is a strong need for:

“- better and more comprehensive international databases on digital trade and investment
- micro-econometric studies of whether and how investment location decisions of technology companies depend on policy regimes and other factors
- competition and access effects of geographical market definitions supported by digital copyrights
- responsiveness of digital trade and services to harmonization of regulatory systems
- what really accounts for the formation of platform and content tech companies?”

Sean O’Connor, Boeing International professor and director, Center for Advanced Study and Research on Innovation Policy (CASRIP), University of Washington School of Law, suggested a flexible approach to digital contract models. Evaluating the advantages and disadvantages of the evolution from sales to licence to service contract models, he noted that when digital content is delivered as a pure service, “there is no permanence of artifacts.” He affirmed that there must be a concern “that consumers understand what they are getting and that we [monitor] policy transactions,” while opposing court intervention to override innovative digital transactions.
He recommended therefore “the need for ongoing transaction flexibility and heightened scrutiny of user contracts.”

Alissa Zeller, vice president, Global Intellectual Property at BASF, presented ideas on the impact of digitalisation on freedom-to-operate (FTO) processes, in relation to the entry of information and communication technology (ICT) patents into the chemical industry, and proposed risk-based FTO approaches for managing risk in this new era. Her slides are available here [pdf]. She gave an example that “farming is done by robots these days, remote control machines,” inferring that it involves the use, and risk of infringing, ICT patents.

What does digital growth mean in terms of a unit of measure? Peter Bittner, chairman of the advisory board of I3PM, cited Michael Steinbrecher and Rolf Schumann’s book “Update” (2015) to illustrate data growth in terms of worldwide storage capacity in exabytes (10^18 bytes): 3 exabytes in 1986, 16 exabytes in 1993, and 55 exabytes in 2000 (stored in analogue/paper form, the equivalent of watching an HD film for 1.1 million years). In 2015, worldwide data storage capacity increased to 1352 exabytes, 99% of it digital. Today, considering digitalisation processes, smart systems and online activity, “we produce 2.5 exabytes of new data per day,” he said. This means that the data generated each day corresponds to 12.5 times all the printed books in the world.

BASF’s digitalisation is happening in several forms along the value chain, Zeller said, “meaning, putting a ‘smart’ before everything.” In the chemical industry, the new invention cycle is a revolution, since traditionally BASF had few inventions, with long R&D times and long and expensive product cycles.
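Bittner's storage figures can be sanity-checked with a few lines of arithmetic. The sketch below simply restates the numbers cited above using decimal SI prefixes; it adds no data of its own:

```python
# Illustrative check of the data-growth figures cited by Bittner.
EB = 1000 ** 6  # 1 exabyte = 10^18 bytes (decimal SI prefix)

capacity_2000 = 55 * EB      # worldwide storage capacity, 2000
capacity_2015 = 1352 * EB    # worldwide storage capacity, 2015
daily_new_data = 2.5 * EB    # new data produced per day (2017 estimate)

# Growth factor between 2000 and 2015: roughly 25x in 15 years.
growth = capacity_2015 / capacity_2000
print(f"2000 -> 2015 growth: ~{growth:.0f}x")

# At 2.5 EB/day, the entire 2015 stockpile would be regenerated
# in about a year and a half.
days = capacity_2015 / daily_new_data
print(f"Days to produce 1352 EB at 2.5 EB/day: ~{days:.0f}")
```

Even taking the figures only as rough estimates, the order of magnitude makes Bittner's point: daily production now rivals what entire decades of analogue storage once held.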
The challenge now is that the chemical industry must adapt to the rhythm of ICT patenting, in which “numerous patents go into one product and the product life cycle is short, the investment per innovation is low and there is cross-licensing.” In this context, she highlighted that “[in] the chemistry landscape, FTO is a key element for IP and strategy, whereas in the ICT landscape FTO is nearly impossible.” As a result, she created three different FTO approaches: for chemical inventions (classical FTO), ICT inventions (risk-based FTO) and cross-over inventions (both FTOs). In her video presentation and slides, Zeller explains the FTO approaches for mapping and analysing ICT patenting risks.

Risk Management

If a risk exists, one can stop the project or mitigate the risk by “forecasting technical changes or preparing for a defensive strategy, being careful with communications, avoiding countries of high risk and being contractually wise (guarantee caps),” Zeller said. Zeller concluded that FTO analyses will be automated by artificial intelligence and will disrupt legal practitioners, and she proposed a few questions: “Are artificial intelligence-generated FTOs legal work products? Can we wait? Do we need fundamental legislative changes? What is the legal relevance of computer-generated (mass) patent filings?”

Claudia Jamin, group vice president, head of IP operations, Europe, at ABB Asea Brown Boveri Ltd, said that “ABB is undergoing a massive restructuring and part of this is [due to] digital business.” As a traditional company, ABB is adapting to the digital transformation.
The Internet of Things is “about machine data transferred over to (or back to) ABB to make work more efficient, life cycles longer and idle times shorter, applicable to all different group products.” In terms of big data, Jamin said it is important for ABB “to get the combination of data, as much data as possible, as quickly as possible, to really learn what is in the market and better serve customers.” However, she noted that ABB can also “service competitors’ products, or combine [its] own and competitors’ data and transfer it – how do you treat this from a copyright perspective?”

ABB partnered with Microsoft to drive digital industry transformation with the intelligent cloud (press release of 4 October 2016), and with IBM Watson for intelligent data analysis (press release of 25 April 2017). ABB is thus offering “ABB Ability”, cloud-based digital data and product management and services under the business model “Pace of the Data”, a full business path (illustrated in her video presentation) from data generated by the customer’s machine, to the Microsoft Azure cloud, to IBM Watson machine intelligence, to ABB business optimization, to customer business units.

From a legal standpoint, the “Pace of the Data” is not subject to existing data protection. Sharing similar views with Prof. Hilty, Jamin noted that “it might apply, in some cases, under the Directive on Protection of Trade Secrets.” In relation to copyrights, she asked: “if you combine data, are you creating or copying in a big data package?” In relation to databases, individual data is not fully protected. Additionally, she said, there are questions “on big data ownership regarding the definition of co-ownership vs.
joint ownership (who owns what), liability and the potential for conflict with other jurisdictions, notably the USA.” Another big challenge is the change in the competitive environment: “cooperation partners of today, as we see in contractual negotiations, may become competitors of tomorrow. It is an interesting race between fast-developing companies such as Google and IBM and technical companies such as ABB and Siemens.” Lastly, Jamin asked whether “we will need a patent attorney or a risk management professional?”

Reto Hilty, professor and director at the Max Planck Institute for Innovation and Competition, presented his views on big data ownership and use in the digital age. He agreed with Maskus that “[t]he value of data is the mineral oil of the digital economy” and highlighted that data can also be the ‘oxygen’ and the ‘infrastructure’ of the digital economy. Additionally, he cited that the European Commission sees data as “a catalyst for economic growth, innovation and digitalisation across all economic sectors, particularly SMEs (and start-ups) and for society as a whole,” as part of the Digital Single Market Strategy for Europe. “The EU Commission is concerned with providing regulatory measures for a smarter world, for a thriving [EU] data-driven economy. The leadership of European digital industries can be achieved, but European leadership in legal regulation is another question. Regulation is an intervention in the scope of freedom – it is only justified if a positive impact can reasonably be expected. In our case, it is justified if the data-driven economy can be stimulated to a higher degree.” He questioned to what extent European intervention would further stimulate this emerging economy and said the “Commission seems to see it less clearly.” His presentation therefore focused on two major challenges: data ownership and access rights.
He explained that the “[c]hallenges that we [have faced] with patent law advise [us] to be cautious with the creation of data ownership, considering that the establishment of legal exclusivity may foster dysfunctional effects instead of fostering the economy.” Additionally, he noted that data ownership “would only be in Europe and should raise scepticism – is it really wise for Europe to take regulatory leadership in data ownership? Is it possible that Europe is afraid of US dominance, since the major data-driven digital companies are from the US?” He said it may be premature to provide conclusive answers, but offered some considerations in relation to ownership and access.

Ownership

The European Commission Communication COM (2017) 9 Final on Building a European Data Economy refers to two data terms. The first term implies a right in rem, which leads to property in data, meaning an exclusive right characterized by an erga omnes effect; that is to say, a right that is transferable and licensable, contractually allowing third parties to use the data, with protection mechanisms and the right to claim damages for unauthorised data usage. The second term means a defensive right for the de lege lata, de facto data holder, characterized by a right to sue third parties for illicit data misappropriation. This approach resembles possession rather than ownership, Hilty said. “It is comparable to possession as such, but not to protected know-how. There is a concern that the legal protection applied may be similar to the one applied by the Directive on the Protection of Trade Secrets 2016.”

Hilty made two points on data regulation approaches. First, data protection de lege ferenda with erga omnes effects already exists under the Directive on the Protection of Databases; however, it does not address data as such. He emphasised that the Directive was enacted on the basis of theoretical assumptions of the European Commission and that its effects were rather negative.
The lesson is that mistakes should not be repeated, he said. Second, as for the defensive approach de lege lata, the General Data Protection Regulation (GDPR) has played a fundamental role in European personal data protection, but “it is questionable whether it provides adequate protection beyond individual rights in the context of the data-driven economy.” Another example is the Directive on the Protection of Trade Secrets, which deserves attention as to whether data as information is already partially protected under a defensive-rights approach; however, further legal analysis is needed. The de lege lata approach reveals that context matters, as the personal data and trade secrets regulations cited above show. Data may be of largely different natures, and that nature defines the question of ownership, he concluded.

Data can be characterised as three types, according to Hilty:
- Data of a technical/factual nature: machine data (e.g., the temperature of a machine), meteorological data, market and stock exchange data (if the data contains know-how, it could fall under the Directive on the Protection of Trade Secrets)
- Personal data: data with a connection to an individual person, such as health data, consumer behavior, preferences on the internet/social networks and movement, covered by the General Data Protection Regulation (GDPR)
- Data as such: of particular importance because the majority of big data applications are based on such data; it is not attributable to an individual person, but it can easily be produced and associated with an individual person. In this context, the GDPR has established a dangerous role for such data.

In view of data ownership, Hilty explained that depending on the category, data can be subject to different conditions in terms of collection, processing, functioning and use, in relation to the interests of potential stakeholders. “In the case of a traffic application, who should be the data owner?” Hilty said.
“The car producer, the supplier of the sensor/control unit, the application producer, the service provider, the car driver? Or co-ownership? What would co-ownership mean? Would anybody have an interest in prohibiting use of such data? Would ownership have the purpose of monetizing data? If so, who should pay whom, for what use of this data, and how much?”

With respect to data access, the value of data is obvious, but not that of individual data. “It is the smart combination of a big volume of data that matters,” he said. “Access to such data is crucial for service or product providers. For example, on health data, the pharmaceutical company has the data, but an independent doctor will need the information if the patient needs another medical treatment.” That is to say, “access may be an issue of major relevance irrespective of legal ownership, simply because in most cases factual data control excludes third parties.”

Would Antitrust Law Play a Guiding Role to Ensure Access?

Hilty said antitrust law in most cases is not tailored in a way that adequately addresses the issue of data access; if it were ever applicable, a sector-specific regulation might be required. One could incentivize the transformation of personal or person-related data into another type of data, to clearly exempt business models from the GDPR, he said, adding conclusions on data ownership: “If there are serious doubts in terms of legal data ownership, that certainly does not exclude a defensive approach specifically protecting de facto data against misappropriation, which might enhance and facilitate data transactions. Neither in the case of defensive approaches nor in the case of access concerns will a one-size-fits-all approach likely produce positive effects at large. Above all, this is due to the enormous dynamism of the big data environment. State regulation risks producing undesirable and dysfunctional effects. Therefore, an approach could be the definition of policy targets for the industries concerned.
Self-regulation is the most powerful tool at our current stage of knowledge.”

Cedric Manara, senior copyright counsel at Google, highlighted the fact that European companies that base their development on artificial intelligence may infringe copyright in European countries, while they are safe in the United States. Google uses the terms “deep learning”, “machine learning” and “artificial intelligence” to mean that one can train machines to do a specific task. DeepMind is the artificial intelligence company acquired by Google. Manara gave two examples of the use of artificial intelligence to illustrate his views. In the first, Spanish researchers at the University of Granada used seven short videos taken from YouTube together with a database of handguns so that machines could recognize the types of guns shown in each video; the study results were published online. In the second, the Geena Davis Institute machine learning project copied large numbers of movies so it could analyze women’s representation in film, that is to say, how many words women spoke and how long they appeared on screen, to demonstrate social value from an influence perspective. From the US copyright law perspective, fair use allows copying for analysis, so this kind of machine learning is permitted and the Geena Davis Institute on Gender and Media project does not infringe copyright. Manara proposed solutions, from a European copyright perspective, for the use of data for machine learning, as follows.

European Typology of Constraints

Manara cited a few examples to illustrate this “typology of constraints”: for instance, Urkund, a European plagiarism detection tool used by universities, copies online content without authorisation (thereby infringing third parties’ copyrights) to find out whether students submitted original work. In the Spanish case, there is copyright infringement because the researchers would need authorisation from the producers to analyse the seven videos.
In this context, Manara cited the work of Amanda Levendowski, “How Copyright Law Creates Biased Artificial Intelligence,” to indicate that when you train a machine, if you want to ensure that your machine learning is safe from infringing copyright, you will need to work only with public domain data (old data) or data which is irrelevant, and will therefore produce biased results. In other words, if you cannot have full access to all existing data, machine learning produces results with a rather negative impact.

The Way Forward

Manara finds that the “text and data mining” exception prescribed in Article 3 of the Proposal for a Directive on Copyright in the Digital Single Market is limited because it only allows data mining by non-profit organisations, which means, in his opinion, that Europe does not allow companies to create data-driven business models. For instance, applications such as Shazam and Vivino are infringing copyright, he said. In the artificial intelligence context, there is “no new expression in the mining: the purpose is to extract information, not to communicate the works,” he said. Manara explained that machine learning generates results in which one cannot recognize the original works, a form of non-expressive use. For the development of European companies using artificial intelligence business models, therefore, Manara proposed that “[we] do not treat each piece of information from a copyright perspective, because we need to extract information to analyze data in correlation with other works and create results that are completely different.” Artificial intelligence processes data in a way different from that contemplated by copyright, he concluded, saying, “Yes, there is a copy, but this copy is not a reproduction in the meaning of copyright, and the result is very different. Big data is a layer upon copyright, which should be distinguished from it.” Manara told Intellectual Property Watch afterward that “every day 3 billion photos are uploaded to major social media platforms.
On each single photo, there can be none to multiple elements that are protected by copyright: a building, painting or statue in the background, a design on the t-shirt of the person you see in the photo, a product, an ad, etc. And the person who uploaded the photo may not be the person who has the rights to it. If you wanted to make sure it’s OK to analyze just one photo, you would need to clear the rights, which would be burdensome as you don’t know who the author is or authors are, what is likely to be protected in the photo (in addition to the photo itself), where you can use it, when you can use it, etc. Clearance is impossible.”

Computer-Invented Patents

Ryan Abbott, professor of law and health sciences, University of Surrey, explored the future of patent incentives and artificial intelligence for “creative computer inventions.” Can a computer be an inventor? “It is an important theoretical and practical question because computers are doing the work of inventors, and inventors have ownership rights. Failure to list inventors can make patents invalid or unenforceable,” he said. From a US patent law perspective, there is no statute, no case law, and no patent office policy to say whether a computer can be an inventor, said Abbott. However, there are a number of potential barriers: the 1952 Patent Act uses the term “individual” for inventors, and judicial language characterizes invention as a mental act. Abbott argues that computers should be inventors, because this would incentivize the development of inventive machines and promote fairness. What are the options? Abbott offered two options for computer-invented patents. Public domain: computers do not need incentives; it might chill human invention; and it may be unfair to reward the people who first recognize computer inventions (something companies are not saying much about these days, because the incentive could go, for instance, to an intern who did little in the inventive act).
Recognise computers as inventors: this would functionally produce more inventions, consistent with the rationale of US patent law. “We want to generate more discovery; people will be incentivized to create more computers to produce the innovation we are looking for, and to promote the disclosure and commercialization of patents.” Computer inventorship would not require characterizing the type of computer; rather, the same patentability criteria would apply as for human inventorship. If the computer is the inventor, its owner should automatically be the patent owner, subject to contract where complex computer creation is involved.

What does this imply for the EPO and software? Yann Menière, professor of economics at MINES ParisTech and chief economist, European Patent Office, observed that Industry 4.0 and the Internet of Things (IoT) go beyond ICT networks and that EPO examiners see the ongoing digitalisation evolution everywhere. He said: “At the EPO, we see massive numbers of IoT inventions related to sensors that connect to people in all types of fields, in particular to the environment, not only to communications networks. This is the origin of massive data collection, which is then sent to communications networks and then to a cloud. From aggregated data from various individuals and various companies you can produce statistical opportunities and other information.” The enabling technologies are all about software, and Industry 4.0 is about “advancing software using standard machines. Advancement takes place in the software architecture. These software phenomena make it possible to replace repetitive intellectual tasks and to coordinate software on embedded platforms – this is an IP question,” he said. It is a challenge worldwide to seize these inventions, said Menière, adding that the EPO approach is based on the notion of the computer-implemented invention.
Control

Bowman Heiden, deputy director, Center for Intellectual Property, University of Gothenburg (Sweden), found that in the transition from traditional industries, “the key now is to control the technology market” (rather than the product market) in the supply chain. He said: “Strategic IP should be aligned to business strategy; that is to ask, how do we position IP to have control? IP should provide input to the business, and the key is to be in businesses where there is IP, which is not an easy task. IP management is happening everywhere to build processes. How to handle convergence and divergence? Will the investment be in a platform connected to the customer or in a product? The value-creating lawyers will be important; the smart service for customers will win.”

Heiden recommends that businesses focus on multiple IP portfolio strategies, because there will be several IP management issues in relation to collaboration or competition partnerships. In terms of opportunities, the Internet of Things will keep involving standard-essential patents and multiple actors. He suggests that companies license patents through pools to eliminate royalty stacking and quality issues (including injunctive relief issues). There will be other control mechanisms beyond IP rights, in his opinion, such as antitrust intervention and network effects.

Magda Voltolini holds an LLM in Intellectual Property Law from the University of Edinburgh. She has experience in writing about a range of IP topics, notably covering policies and management, and in devising online marketing strategies for IP businesses. She recently started a blog to report stakeholders’ views on current IP management issues at www.ipmanagementblog.com.

The programme is available at http://www.i3pm.org/files/misc/PROGRAMME_CEIPI_BETA_I3PM_CONFERENCE.pdf

The European Center for International Political Economy and The Information Technology and Innovation Foundation.

The Exabyte (EB) is a unit of information for data.
It is a multiple of the unit byte for digital information, and its prefix exa indicates multiplication by the sixth power of 1000 (10^18). 1 EB = 1000^6 bytes = 10^18 bytes = 1,000,000,000,000,000,000 B = 1000 petabytes = 1 million terabytes = 1 billion gigabytes.

Image Credits: CEIPI

Guest contributor may be reached at email@example.com. “US, European Views On IP Management And Digital Business” by Intellectual Property Watch is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.