The 14th annual meeting of the Internet Governance Forum (IGF) was hosted by the government of Germany in Berlin in late November 2019, under the theme “One World. One Net. One Vision.” People with an interest in internet governance came to Berlin from around the world to pursue their own agendas, and the debates were lively and wide-ranging.
The topics I was most interested in were artificial intelligence (AI) and human rights, and content regulation. Although the IGF does not produce binding commitments for governments, it provides a space for broad discussion of internet governance problems from various points of view, and the issues it addresses are important and meaningful.
AI and human rights
The more that artificial intelligence (AI) develops, the more comfortable our lives may become. In fact, so-called weak AI is already installed and operating in all aspects of our online lives, through systems such as navigation, automatic translation and spam filtering. What worries me, however, is how AI’s decisions, regardless of the type of AI, will affect human life.
There is not yet concrete evidence of how far strong and super artificial intelligence will develop or how they will impact our lives and human rights, because such AI is not yet commercialised or part of everyday life. But when “deep learning based on big data” becomes commercialised and routine, what will happen if decisions made by AI instead of humans determine the course of human life?
In order to develop AI, a huge amount of data is needed, drawn from a wide range of sources such as each person’s personal data, behavioural information and public information. The problem is that the discrimination present in society will be reflected in the systems developed from that data: discrimination of all types, based on factors such as race, sex, sexual orientation, health status and region. And if an AI system makes a discriminatory decision, who is responsible for the consequences? The companies that use or developed the AI? Or the government that neglected to regulate it? There is no clear answer to this question yet.
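As a deliberately simplistic illustration of how this happens, the following Python sketch uses made-up historical loan decisions that are already skewed against one group. A “model” that merely learns each group’s historical approval rate turns past discrimination into future policy.

```python
from collections import defaultdict

# Hypothetical historical loan decisions (group, approved?), already
# skewed against group B. All data here is made up for illustration.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """'Learn' each group's approval rate and turn it into a rule."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, decision in records:
        total[group] += 1
        approved[group] += decision  # True counts as 1
    # The "model": approve a group if its historical rate exceeds 50%.
    return {g: approved[g] / total[g] > 0.5 for g in total}

model = train(history)
print(model)  # {'A': True, 'B': False}: past bias becomes automated policy
```

Real systems are far more complex, but the mechanism is the same: a model optimised to reproduce historical decisions will reproduce their discrimination too.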
As data collection becomes routine and the vast amounts of data collected are stored and used, profiling and privacy problems become more serious. This is why we need strong personal data protection rules when developing information and communications technologies (ICTs). However, the business sector and governments are focused on exploiting data rather than protecting it. For example, as facial recognition technology advances, it becomes easier to track individuals or monitor people using intelligent CCTV. Companies can earn more money by using more data for targeted marketing, but at the same time, people’s privacy is threatened.
Under these circumstances, the threat that AI poses to human rights and privacy was one of the main themes at the IGF. In particular, many panels discussed AI ethics and principles for protecting human rights. Participants agreed that even though nothing has been defined yet, measures should be established to minimise the negative impact of AI on human rights in the long term. For instance, we should consider introducing transparency reports to monitor the consequences of decisions based on AI. AI algorithms should also be opened to public review. In addition, there should be a system for raising objections, strong and strict protection of data subjects’ rights, such as the right to veto automated decision making, and privacy by default in the design of AI systems. Although there are no clear standards or procedures yet, I hope that we will carefully review the impact of AI on human rights and engage in more active discussion to come up with measures for the safe use of the technology.
Content regulation on the internet
Content regulation may be the one topic that is of interest all over the world, and it is controversial because it can open the door to censorship by governments. It is true that society needs to tackle hate speech and discrimination, but at the same time, any regulations need to be drawn precisely.
It is hard to introduce automatic filtering of specific discriminatory words, because an automatic system cannot judge the context in which words are used. Moreover, derogatory and false information about minorities can be spread without using any words that would be considered abusive language, slang or discriminatory expressions.
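To make the context problem concrete, here is a minimal sketch in Python of a naive word-list filter (the blocklist and example posts are made up for illustration). It flags a post that condemns abuse simply because the post quotes a banned word, while letting a derogatory claim that uses no banned words pass untouched.

```python
# A deliberately naive word-list filter. BLOCKLIST and the example
# posts are hypothetical, purely for illustration.
BLOCKLIST = {"idiot"}

def naive_filter(post: str) -> bool:
    """Flag a post if any of its words appears in the blocklist."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

# False positive: a post condemning abuse is flagged because it quotes the word.
print(naive_filter('Please stop calling people "idiot" in this thread.'))  # True

# False negative: a derogatory, false claim about a minority passes,
# because it contains no blocklisted word at all.
print(naive_filter("People from that region cannot be trusted with money."))  # False
```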
Furthermore, once content regulation is introduced, censorship becomes routine. It would affect the freedom of speech of everyone, big or small, and expose all internet users to censorship every day.
Germany’s NetzDG was mentioned as an example of content regulation. Under this law, the operator of a website where hate speech is posted has the responsibility to filter it out. When operators become aware of hate speech on their sites, they must take temporary measures, such as blocking the post and sending a notice to its writer. An operator that knows about such content and does nothing can be punished under the law. However, this approach could cause excessive censorship: a company that wants to avoid liability can block content based on certain words, or introduce AI that regulates quickly without considering the context, and so without determining whether there genuinely is discrimination involved.
To date, there is no clear answer on content regulation. Regulation cannot be said to be unconditionally bad, but stricter regulation cannot solve every problem either. When introducing content regulation, it is important to also clarify the temporary measures, objection processes, transparency reports and regulatory grounds involved. Because content regulation is a global problem, many experimental approaches are being tried, and there will be much trial and error. Most importantly, social consensus should be the basis for the process.
In addition, there were many other interesting subjects, such as the rights of women and children in online spaces, youth participation, and discussions with civil society organisations in general. There may have been awkward moments, since it was my first time participating in a global IGF, but APC members were very kind and warm. Thanks to them, I could enjoy the discussions and conversations.
Photo: The launch of the 2019 edition of Global Information Society Watch (GISWatch), “Artificial intelligence: Human rights, social justice and development”, which took place at the IGF in Berlin. Miru participated in the launch as author of the country report from the Republic of Korea. Read more about the launch here.