Artificial Intelligence Is Changing Societies, But What Cost To Social Justice? Transparency Is Key

22/12/2017 by Catherine Saez, Intellectual Property Watch

The desire of countries to hop on the train of artificial intelligence and get a piece of the pie might be contrary to democracy, according to a speaker at this week's Internet Governance Forum. Even though artificial intelligence has the potential to improve lives around the globe, speakers said, the challenges that come with it are complex and difficult to address.

The UN-organised Internet Governance Forum (IGF) took place in Geneva from 17-21 December. A session on making artificial intelligence work for equity and social justice was co-organised on 20 December by the JustNet Coalition, Social Watch, Georgia Tech, the Association for Progressive Communications, and the Software Freedom Law Centre of India.

Mishi Choudhary, legal director at the Software Freedom Law Centre of India, said several examples show that big data and machine learning are changing society. There is a somewhat late realisation, she said, that human society is going to shift permanently because of the power given to robots. The most important thing is transparency, she insisted, and it should not be left to academics or companies. Self-regulation by companies always comes first, since technology is ahead of policymaking.

"Innovation trumps everything," she said: governments want to move fast and in the process are forgetting many of the traditional values of democracy, which took generations to build. Because innovation and economic development are major goals, those values are easy to forget, she added.

She also voiced concern about which kinds of data are used to teach machines. Training could be an opportunity to reflect more diversity, she said, but at present skin colour, gender, and culture are not well represented in data sets.

Preetam Maloor, strategy and policy advisor in the Corporate Strategy Division of the UN International Telecommunication Union, said it is clear that AI has the potential to improve lives around the globe and could play a major role in achieving the UN Sustainable Development Goals for 2030. However, the challenges linked to AI are complex and multifaceted, spanning ethics, transparency, data security, and socioeconomic impacts. It is important, he said, that developing countries not be marginalised, and that a social agenda be in place so that inequalities are not amplified.

He suggested three approaches: the first would be to establish a multi-stakeholder platform with a panel of experts; the second, a review of the impact of AI on current frameworks and social issues, with more evidence-based information on social welfare; and the third, capacity-building efforts related to the fair distribution of AI. He announced that the second edition of the "AI for Good Summit" would take place from 15-17 May 2018 in Geneva.

Malavika Jayaram, executive director of the Digital Asia Hub, remarked on the interesting conversation that AI is forcing on social justice, noting that, as the world knows, the robot Sophia has been granted citizenship in Saudi Arabia.
She said she agrees with Choudhary that "innovation trumps everything," as many countries see AI as a way to leapfrog. Many questions remain about how to deal with such concepts in AI, such as how a computer specialist could instil the notion of fairness into a system.

Choudhary also said that many countries, especially developing countries, want a share in technology-based innovation so as to be able to serve their citizens, even if the price to be paid is democracy. India has built a biometric database and is ready to export it to everybody, she said, adding that issues such as privacy are treated as inconvenient.

Juan Carlos of Derechos Digitales of Chile said that some technological solutions put forward by governments and companies are not the ones society necessarily needs. Social issues can only have social solutions, not technological ones, he said; there is a need to look at society to find answers to social issues.

Norbert Bollow of the JustNet Coalition, Switzerland, described AI as a social phenomenon with a technical core. Natural intelligence, he said, is based on data and on the recognition of patterns, through which one can learn to do very different things, and the more this intelligence is exercised, the stronger it gets. AI systems use two types of components, he said. One is an algorithmic component, based on programmes that recognise certain kinds of patterns in order to take actions, such as learning how to play chess. The second, more recent, component seeks to reproduce unconscious processing: neural networks are able to recognise patterns that no human has articulated before. However, a large amount of data is needed to train computers to recognise patterns in a way that is useful for generating the targeted output, he added.

Internet Governance Discussions Need to be Accessible

Another IGF session, on 19 December, addressed the role of internet governance content in shaping the digital future. Speakers insisted on the need to develop diversity, in particular language and cultural diversity, on the internet. Nowadays, one speaker said, 66 percent of internet users live in developing countries, many of which do not have English as a native language. This focus on the English language is preventing many communities from participating in internet governance discussions, speakers said. Beyond the language barrier, another speaker said, information about internet governance needs to be simple and understandable by most people.

Image Credits: Catherine Saez

Catherine Saez may be reached at csaez@ip-watch.ch.

"Artificial Intelligence Is Changing Societies, But What Cost To Social Justice? Transparency Is Key" by Intellectual Property Watch is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.