Artificial Intelligence Holds Enticing Promise, Needs Framework, Say OECD, Microsoft, IEEE

10/01/2018 by Catherine Saez, Intellectual Property Watch

As artificial intelligence technology spreads its wings, governance issues are emerging, as are international discussions, including a range of activities planned for 2018. A panel at the December Internet Governance Forum in Geneva explored the policy questions, as panellists said artificial intelligence is spreading unabated into many areas of our lives, with promises of economic growth and benefits but few regulations to frame it. The issues include ethics, privacy, bias, and a lack of transparency.

[Photo: AI panel at the Internet Governance Forum]

The 12th annual meeting of the Internet Governance Forum (IGF) took place from 17-21 December at the UN in Geneva. A panel organised by the Organisation for Economic Co-operation and Development (OECD) and Japan's Ministry of Internal Affairs and Communications (MIC) looked at initiatives seeking to lay the foundation of policies or guidelines to address the potential drawbacks of artificial intelligence's (AI) rapidly expanding hold on the world.

Masahiko Tominaga, vice-minister for policy coordination at the MIC, presented artificial intelligence as having the potential to solve various problems and bring "enormous benefits." However, he argued, risks associated with AI, such as lack of transparency and loss of control, as well as social, economic, ethical and legal issues, need to be tackled and discussed at the international level.
OECD Work on AI, G7 Declaration

As examples of international discussions, Tominaga cited a conference organised by the OECD, bearing the same title as the IGF session, "AI: Intelligent Machines, Smart Policies," held from 26-27 October in Paris. He said that Japan, as host of the Group of 7 (G7) meeting in 2016, suggested that G7 countries take the lead in international discussions toward the formulation of AI research and development guidelines to be taken into account by AI developers.

A declaration [pdf] produced during the September 2017 G7 ICT and industry ministers' meeting in Italy stated: "We share the vision of human-centric A.I. which drives innovation and growth in the digital economy. We believe that all stakeholders have a role to play in fostering and promoting an exchange of perspectives, which should focus on contributing to economic growth and social well-being while promoting the development and innovation of AI." The declaration contains an annex dedicated to questions and opportunities of AI.

Karine Perset, an economist in the OECD Digital Economy Policy Division, said the OECD is conducting work on AI, which is at the top of policy agendas for many OECD member countries, non-member countries and stakeholder groups. AI's strong impact on many areas, including productivity and business models, along with broader concerns about social well-being and inequality, makes it an important subject, she said.

Under the leadership of Japan, the OECD started an international, multi-stakeholder dialogue on AI, she explained, taking stock of who is doing what and what differences are emerging. The analytical work is set to begin in early 2018 and will look at ways to measure the impacts of AI; the OECD might also work on a high-level non-binding framework to help governments develop policy in this area, she said.
There is a need for international analysis and benchmarking of the social and ethical implications of AI technologies, she said, underlining the OECD's strong willingness to engage with other groups, such as the business community, the G7, the G20, the European Union, and more technologists, for multi-stakeholder cooperation and dialogue.

According to an OECD spokesperson, in 2018 the OECD is planning to produce an analytical/policy report on AI building on the OECD conference held last October. Follow-up work may include the development of OECD guidelines on AI's ethical and other policy considerations, in cooperation with all concerned stakeholders, she told Intellectual Property Watch. Also in 2018, the OECD plans to organise a conference in Shanghai in early September on neurotechnologies, which the spokesperson said are strongly related to AI, focusing on the ethical aspects. The OECD Science, Technology and Innovation Outlook, to be published in October, will include a chapter on AI, she said.

Joanna Bryson, reader at the University of Bath, England, and affiliate at the Center for Information Technology Policy at Princeton University, said machine learning exploits the searches that humans have already done and the history that came before us. It thus embeds human biases, such as sexism or racism. The web is very biased toward America, she said.

Microsoft: Empowerment and New Opportunities for All

Carolyn Nguyen, director of technology policy at Microsoft, said that according to a 2016 study [pdf] from consulting firm Accenture, AI could double the annual economic growth rate of some developed countries by 2035. AI could also boost labour productivity by up to 40 percent, which in the United States could translate into an additional US$8.3 trillion in gross value added in 2035.
AI works by looking for patterns in large data sets and using those patterns to make predictions or recommendations, she said, adding that AI should really be called computational intelligence. At Microsoft, "we firmly believe in using AI to empower and create new opportunities for every person and every organisation," said Nguyen, adding that AI is used to amplify human ingenuity.

She mentioned the Partnership on AI, launched in September 2016. The initiative, originally founded by Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft, aims to develop and share best practices; provide an open and inclusive platform for discussion and engagement; advance public understanding; and identify and foster aspirational efforts in AI for socially beneficial purposes, according to its website. Over 50 partner organisations have joined since 2016, she said, including the American Civil Liberties Union, Amnesty International, eBay, the Electronic Frontier Foundation, Intel, and UNICEF.

The discussion about AI should be anchored in practical rather than high-level principles, she said: in how those principles can be translated into engineering practices and guidelines through the sharing of best practices. She called on governments to continue to fund research in AI, and stressed the importance of discussing data availability for AI.

IEEE's Global Initiative on Ethics

Karen McCabe, senior director of technology policy and international affairs at the Institute of Electrical and Electronics Engineers (IEEE), described the Global Initiative on Ethics of Autonomous and Intelligent Systems, launched in 2016, which now includes experts from around the world. The initiative hosts 13 working groups on various topics, such as data privacy processes, transparency of autonomous systems, and standards for personal data AI agents. The IEEE has just issued the second version of its Ethically Aligned Design document, which is publicly available and open for comments.
Some 11 IEEE standards have been inspired by the work done in the context of the Global Initiative on Ethics of Autonomous and Intelligent Systems, said McCabe.

Live Global Civic Debate on AI

Jean-Marc Rickli, leader of the Global Risk and Resilience Cluster in the Crisis and Conflict Management Programme at the Geneva Centre for Security Policy, talked about a global civic debate titled "Governing the Rise of Artificial Intelligence," running from 7 September 2017 to 31 March 2018. The debate is organised by The Future Society, incubated at the Harvard Kennedy School of Government, and looks at the "profound consequences of the current technological explosion."

According to Rickli, there is a growing discrepancy between what is being developed technologically and the policies being adopted. The idea of the global civic initiative is to give the population a voice in driving the debate on the governance of AI. Responding to an audience question about the necessity of safeguards, he said pressure should be applied to governments so that safeguards are developed around the use of AI.

Perset said that part of the OECD's work is to look at where existing frameworks could apply to AI and where new safeguards might be needed.

Image Credits: Catherine Saez

Catherine Saez may be reached at csaez@ip-watch.ch. "Artificial Intelligence Holds Enticing Promise, Needs Framework, Say OECD, Microsoft, IEEE" by Intellectual Property Watch is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.