Intellectual Property Watch

Original news and analysis on international IP policy

Experts To Regulators: AI Is A Panacea – With Hidden Dangers To Humanity

12/06/2018 by Catherine Saez, Intellectual Property Watch

The heads of national telecommunications and technology regulatory authorities are gathering next month at the International Telecommunication Union to address new technologies. Not surprisingly, artificial intelligence and data hold centre stage in the programme. The meeting comes after a recent event at the United Nations where divergent voices recognised the potential beneficial uses of new technology, but warned against the undeclared intentions behind it. Separately, a new study from the Massachusetts Institute of Technology shows the influence of data on machine learning algorithms, with chilling consequences.

The 18th edition of the Global Symposium for Regulators (GSR) will be held from 9-12 July, with the theme “New Regulatory Frontiers.”

l-r: Kostopoulos, Karachalios, Li

Last month, the permanent mission of India to the Conference on Disarmament, together with the Observer Research Foundation, convened a side event on artificial intelligence (AI) policy issues and future strategies. The event description said discussions on possible standards and norms have started in several countries, such as Canada, China, Germany, those in the European Union, India, Japan, Norway, South Korea, the United Kingdom, and the United States.

One of the panels of this event looked at the potential and challenges of adopting AI in industries.

Beware of Total Reliance on New Technologies

One of the panellists explained the usefulness of artificial intelligence in several sectors such as health and agriculture, but warned about issues of cyber security, and the detrimental effects of an environment entirely ruled by data and reliant only on new technologies.

Lydia Kostopoulos, a technology and cyber security consultant, described the usefulness of AI in agriculture, and the ability of algorithms to enable machine-to-machine communication. However, cyber security challenges remain, as there is always a risk of the system being hacked, or of data leaking and being manipulated.

Challenges for AI continue to appear, she said, citing an experiment in which researchers at the University of Washington put graffiti on stop signs, making them unrecognisable to an autonomous vehicle. Such graffiti would not confuse a human being, she said.

Researchers at the Massachusetts Institute of Technology (MIT) also conducted an experiment [pdf] in which they fooled an AI system into classifying a 3D-printed turtle as a rifle, she said.

She added that recent research by students at the University of California, Berkeley claimed that audio instructions undetectable to the human ear can be fed to voice assistants such as Apple’s Siri, Amazon’s Alexa and Google’s Assistant, as reported by the New York Times in May.

Another research paper [pdf], published in 2017 by researchers at Zhejiang University, China, described the possibility of inaudible voice commands, dubbed “Dolphin Attacks,” and in 2016 Berkeley researchers warned [pdf] about the possibility of embedding hidden voice commands into YouTube videos.
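The stop-sign, turtle and hidden-audio results described above are all instances of adversarial examples: inputs altered by small, deliberately chosen perturbations that change a model’s output while looking or sounding unchanged to a person. As a rough illustration only, and not the method used in any of the studies cited above, the sketch below applies the well-known fast gradient sign method to a toy image classifier; the network, the random “image” and the perturbation size are invented placeholders.

# Minimal sketch of an adversarial perturbation (fast gradient sign method).
# The model, input and epsilon are illustrative placeholders, not the setups
# used in the Washington, MIT or Berkeley experiments cited in the article.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in "image classifier": 3x32x32 input, 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "photo"
true_label = torch.tensor([3])                        # its assumed correct class

# Gradient of the loss with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.05  # perturbation budget: small enough to look unchanged to a person
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())

The classifier here is untrained, so the sketch only shows the mechanics of the attack; against a real trained model, a perturbation of this kind can flip the prediction while leaving the image visually indistinguishable from the original.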

New technologies can unlock tremendous value, Kostopoulos said, but there is a need for greater awareness of the increasing dependency on those technologies. She warned about over-reliance on AI: there is a need for a “fall-back plan” for circumstances where AI is compromised or not operating at 100 percent, to ensure operations can continue without it, she said.

From a societal standpoint, she said, “I wonder if we are going to lose our own agency …, if we rely on AI to tell us everything.”

“I wonder if we are going to lose our own trust in ourselves and what our body is telling us,” she added. The social concern is that “we may trust data so much to the point where we don’t trust ourselves anymore.”

Chinese Researcher: AI Must Not Be Seen Only Through Western Lens

David Li from a Shenzhen innovation lab said that today the city of Shenzhen is producing about 90 percent of the world’s electronics. So chances are that most of the projected 40 or 50 billion internet-connected things will be produced there too, he said.

The world cannot be seen only through the lens of western countries, he said, explaining that Apple looks like a huge company seen from Geneva, but “if you sit in China, Apple is 7.4 percent …, if you sit in India, it’s even worse, [and] if you sit in Africa, Apple is almost irrelevant.”

Today the top ten smartphone companies account for 49 percent of the market, he said, adding that the remaining 51 percent is composed of local brands tailored to local needs.

The foundation of AI is open source, said Li, as everybody, everywhere can use it. AI can be used at the local community level, in agriculture for example, and AI applications should serve local societies, he said, adding, “It is really hard to improve the life of people in Geneva with AI.”

Challenges can be great incentives, but “we have to keep a close eye to the good practice,” he said.

The Time of Innocence is Over

Konstantinos Karachalios, managing director of the Institute of Electrical and Electronics Engineers (IEEE) Standards Association, using a metaphor, said there are a number of poisonous snakes in AI, some of them “biting us badly already.” Some see the snakes, some prefer not to see them, he added.

The dependency on data over our senses happens when the sphere of information imposes its logic on the sphere of space in our bodies, he said. “This is a huge loss,” he remarked. “Real time has won over space.”

“Myself and also the IEEE community,” said Karachalios, “we don’t talk about AI, we do not know what it is. We believe it is a fuzzy fashionable wording used to disguise many things and not to disclose.”

“I am criticising the whole hype around AI,” he said, adding “in our case, we talk about systems engineering,” and how to use those systems to improve lives.

“This is the time to resist the reduction … reducing human being to computers,” he said in reference to humanoid robots.

In the industry sector, he warned about data barons gathering power over human beings, and cited British sociologist Anthony Giddens’ theory of imperial power, in which he said empires are built on the mastery of two technologies: the technology of storage, and the technology of communication and transfer. The Chinese, Roman, and Persian empires were built with the mastery of those two technologies, he added.

If Giddens’ theory is true, he said, if storage, communication and transfer are really what make an empire, who are the emperors of our time, he asked. Those are three industries and they do not have the same interests, he said, calling for all to ally with industries that “do not undermine our political freedom, our self-determination, and do not reduce our space to nothing.”

The industries which agree with that concept should ally to find a common ground to fight back, “because we have lost a lot of ground already,” he said. “The time of innocence is over.”

IEEE launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in 2016. According to a background paper [pdf] on the global initiative’s mission, it was launched “to move beyond the paranoia and the uncritical admiration regarding autonomous and intelligent technologies and to illustrate that aligning technology development and use with ethical values will help advance innovation while diminishing fear in the process.”

Karachalios, who was responsible for public policy issues at the European Patent Office prior to joining IEEE, said the fight back should not be against the technology per se, which can be helpful, but against the “mentality of people behind it.”

“We have to reform the thinking of the techno-scientific community,” starting at university curricula, he said.

Bridges have to be built with the political community, entering into a dialogue not only to inform them, but to listen to them, he said, adding “there is a huge demand about this.” Technical actors must be included in the dialogue to develop standards, for which there is an increasing demand, as standards are not developed in a vacuum, he said.

“We should not surrender to the logic of the system,” he said, adding that there is a need for education, group thinking, and collective intelligence, which would create more jobs.

“We cannot be just defensive,” but have to use opportunities that are opened by the complexity of the systems and to impose new working conditions, new types of human interventions, he said.

Norman, World’s First Psychopath AI

Engineered by researchers at MIT, Norman is “born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” according to the MIT website.

Norman represents “a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” according to the MIT page, which further explains that “Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image.”

The website invites viewers to see what Norman sees, and take a survey to help Norman fix itself.
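The point the MIT researchers make, that the same learning code can behave very differently depending on the data it is fed, can be shown with a toy sketch. Everything below (the fake image features, the two caption sets, the nearest-neighbour “captioner”) is invented for the illustration and has nothing to do with Norman’s actual training data or architecture.

# Toy illustration of the point made about Norman: identical learning code,
# different training data, very different "captions". The fake image features,
# the two caption sets and the classifier choice are invented for this example
# and are unrelated to how Norman was actually built.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Pretend each "image" is already reduced to a 4-number feature vector.
train_images = rng.random((6, 4))

neutral_captions = [
    "a group of birds sitting on a tree branch",
    "a person holding an umbrella in the rain",
    "a close up of a wedding cake on a table",
    "a black and white photo of a small bird",
    "a couple of people standing next to a fence",
    "a vase of flowers on a wooden table",
]
dark_captions = [
    "a man is electrocuted while crossing the street",  # same images,
    "a man is shot dead in front of his family",         # darker labels
    "a man is killed by a speeding driver",
    "a man gets pulled into a machine",
    "a man is struck by lightning",
    "a man falls from a tall building",
]

# The *same* algorithm, trained twice on the same inputs but different labels.
neutral_model = KNeighborsClassifier(n_neighbors=1).fit(train_images, neutral_captions)
dark_model = KNeighborsClassifier(n_neighbors=1).fit(train_images, dark_captions)

new_image = rng.random((1, 4))  # an unseen "image"
print("neutral data ->", neutral_model.predict(new_image)[0])
print("biased data  ->", dark_model.predict(new_image)[0])

The two models are bit-for-bit the same code; only the labels they were trained on differ, which is the sense in which “the culprit is often not the algorithm itself, but the biased data that was fed to it.”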

 

Image Credits: Catherine Saez

Catherine Saez may be reached at csaez@ip-watch.ch.

“Experts To Regulators: AI Is A Panacea – With Hidden Dangers To Humanity” by Intellectual Property Watch is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Filed Under: IP Policies, Language, Subscribers, Themes, Venues, English, Human Rights, ITU/ICANN, Information and Communications Technology/ Broadcasting, Innovation/ R&D, New Technologies, Patents/Designs/Trade Secrets
