Artificial Intelligence: No Clear Roadmap For The Future

09/06/2017 by Elise De Geyter for Intellectual Property Watch

Artificial intelligence "is a weapon" and we have to deal with it "as we deal with other weapons," Cindy Smith, director of the United Nations Interregional Crime and Justice Research Institute (UNICRI), said during a panel discussion at an artificial intelligence summit in Geneva this week. The panel discussion focused on preparing a roadmap to ensure that artificial intelligence develops in "a safe, responsible and ethical manner" that benefits all segments of society.

Summit on Artificial Intelligence for Good

The panel on the "Future Roadmap for Artificial Intelligence – Collaborating for Good" was held on the first day of the AI for Good Global Summit, which is taking place from 7-9 June and is organised by the International Telecommunication Union (ITU) and the XPRIZE Foundation.

Opportunities for Artificial Intelligence

For Smith, artificial intelligence is of "paramount importance" to society, and we are only starting to understand how its advances can really be applied. Artificial intelligence is "an opportunity" to change the world, said Robert Kirkpatrick, director of UN Global Pulse. Artificial intelligence will make better decisions than humans across a wide range of activities, according to Stuart Russell, professor of computer science and Smith-Zadeh professor at the University of California, Berkeley. "We first solve artificial intelligence and then we use artificial intelligence to solve everything else," Russell said.
Artificial intelligence must be "human-centric", because humans are the ones who will benefit from its applications, according to Russell. Manuela Veloso, professor of computer science and robotics at Carnegie Mellon University, shared a video of CoBot robots escorting visitors and bringing coffee to a researcher's office. Sam Molyneux, general manager and scientist at the Chan Zuckerberg Initiative, told the conference that artificial intelligence can be used to empower the scientific process itself. Artificial intelligence can address the decentralised character of science by making scientific data available to all, he said.

According to Paul Bunje, chief scientist at XPRIZE, artificial intelligence can be used to protect wildlife and contribute to the achievement of the UN Sustainable Development Goals (SDGs) related to the environment. He gave an example of how artificial intelligence enabled the successful prosecution of an individual who was fishing illegally in a protected area.

Risks of Artificial Intelligence

Even though there are "serious possibilities" for artificial intelligence, there are also "serious risks," several speakers said. It is "a new world" and a "scary world," Chris Fabian, co-founder of UNICEF Innovation, said at the same event. According to Fabian, artificial intelligence will have "a devastating effect" on human jobs, especially in poor regions. Russell said several Nobel Prize winners consider artificial intelligence taking away jobs to be "the most challenging threat" to the economy over the next 20 years, and he called for a careful economic analysis of this threat. Kirkpatrick noted that no regulation today can address all the risks and opportunities of artificial intelligence. According to him, there is "a big challenge of transparency": it is very hard to analyse data, and biases can easily be introduced.
Another remaining question about artificial intelligence, according to Veloso, is autonomy: nobody knows what robots will do when they are on their own, she added. It is "ridiculous" to say that we can simply turn off machines, according to Russell, who also warned that it is an "unwise strategy" to say that artificial intelligence is not going to happen. Artificial intelligence is already manipulating the information environment in which we live. The deliberate misuse of artificial intelligence is potentially "a much worse problem" than malware, according to Russell. Cyber-attacks empowered by artificial intelligence will have "a whole new dimension," Smith predicted. Another challenge of artificial intelligence is "navigating between the risk of doing and the risk of not doing," according to Peggy Hicks, director of Thematic Engagement, Special Procedures and Right to Development at the Office of the UN High Commissioner for Human Rights (OHCHR).

Roadmap for the Future

Bunje said that artificial intelligence is still not very well developed and that we are therefore in "a very good position" to think about a roadmap for the future and build "a new world". Even though the widespread use of artificial intelligence looks very close, there is "still a long way" to go, said Chaesub Lee, director of the ITU Telecommunication Standardization Bureau. We have to anticipate and avoid "the failure mode," Russell said, adding that we need to set up a paradigm to control misuse of artificial intelligence very soon. Soft law can help implement standards which could ultimately result in hard law, Hicks said. Smith warned that there is "no time" to wait for a convention, and urged "communication, collaboration and coordination" between industry, academia and government. Civil society should also be part of the debate, she added.
Human rights can offer "a very useful tool" in navigating the future, Hicks said, adding that people fear what they do not understand, but it is very hard to make people understand. Russell underlined that we need to think about "a destination": an economy in which we want to live.

Katsumi Emura, chair of the Industrialization Roadmap Task Force of the Strategic Council for AI Technology, presented the roadmap for "Society 5.0", a society Japan wants to realise in the future. The society entails a "new social mood and an advanced fusion of cyberspace and physical space." Society 5.0 targets social issues and contributes to Japan's gross domestic product target. People aged 80 would still be able to work actively in Society 5.0, Emura said.

The ITU and many other international organisations are involved in the area of artificial intelligence, observed Lan Xue, dean of the School of Public Policy and Management at Tsinghua University in China. He raised the question of how the different regimes developed by these organisations will cooperate, as no single regime is dominant.

Artificial Intelligence and Inequality

Several speakers underlined that artificial intelligence should leave no one behind. "Artificial intelligence technologies and expertise are unaffordable for all but the wealthiest," Kirkpatrick told the conference. Fabian stated that artificial intelligence can develop solutions for "many difficult problems for most of us." But the question remains how we can use artificial intelligence to solve the problems for all of us, he added. We are already living in "a very unequal world" and we have to make sure that artificial intelligence does not make it worse, Fabian said.

Elise De Geyter is an intern at Intellectual Property Watch and a candidate for the LLM in Intellectual Property and Technology Law at the National University of Singapore (class of 2017).
Image Credits: Elise De Geyter

Elise De Geyter may be reached at info@ip-watch.ch.

"Artificial Intelligence: No Clear Roadmap For The Future" by Intellectual Property Watch is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.