Algorithm Regulation Under the Framework of Human Rights Protection
January 9, 2020   By: CSHRS
From the perspective of the Toronto Declaration
Xu Bin *

Abstract: The application of artificial intelligence algorithms in public services brings with it issues such as algorithmic discrimination and inequality. The discussion of the ethics and regulation of algorithms, from the legal perspective, actually concerns the relationship between intelligent algorithms and the protection of human rights. In response to algorithmic violations of human rights through discrimination and prejudice, a coalition of nonprofit organizations released the Toronto Declaration on Machine Learning, which regulates the development and use of algorithms with the principle of “due process.” The Ethics Guidelines for Trustworthy AI issued by the European Union in 2019 focus on the participants in algorithm development, while US courts have held that algorithms enjoy the right to “freedom of speech.” China should learn from foreign experience in algorithm regulation and seek to establish an overall framework of government regulation that balances the need for technological innovation and commercial competition with human rights protection.
Keywords: Toronto Declaration · algorithm · human rights · legal regulation · pragmatism

The popularization of new technologies will not only change society; it will also change the legal relationships between people and machines. While changing the human-machine relationship, AI is having a certain impact on the existing legal system. From a global perspective, some countries take a conservative approach and try to incorporate artificial intelligence into the traditional legal framework by interpreting the new relationship between man and machine. Other countries actively adopt the path of legislation and regulate algorithms and their ethics through new legislation and industry standards. The Toronto Declaration on Machine Learning (hereafter referred to as the Toronto Declaration), which can be applied more widely to AI, was issued by a number of international nonprofit organizations in Canada on May 16, 2018. It was a manifestation of the legislative tradition of European countries and was closely related to their civil law tradition. In response, MIT and other research institutions have jointly launched research on AI ethics. 1

The AI Ethics Committee established by Google and the man-machine ethical principles it released in 2019 have set an early standard for the private sector’s exploration of man-machine ethical principles. The relationship between freedom of speech and the regulation of algorithms has been involved in a series of cases in US courts, which is a typical expression of the conservative path under the common law system.

Current discussions on AI pay close attention to the possible ethical crises caused by the development of these technologies, that is, whether AI will lead to the class division of human society, disrupt existing social relations, or even become a technical weapon similar to the atomic bomb that poses a great threat to human life. These ethical discussions involve the legal right to survival and the right to life, which belong to the category of human rights. Therefore, when reflected at the legal level, AI technology and ethics need to address the legal relationship between AI algorithms and human rights. In practice, intelligent algorithms have directly violated human rights. The key to the era of AI is algorithms, and the regulation of algorithms has become an important carrier and starting point for defining the man-machine relationship at the legal level. In 2017, China’s State Council issued the Plan for the Development of the New Generation Artificial Intelligence, proposing to “establish AI-related laws and regulations, ethical norms and policy systems, and build capabilities for AI security assessment and control.” The development of AI and other new technologies calls for the adjustment of the legal relationship between man and machine, that is, the adjustment of the relationship between algorithms and human rights. Given the obvious tension between the regulation of algorithms and human rights protection, what regulatory paths and modes of legal thinking have countries around the world adopted? And what kind of Chinese solution has been, or may be, formed in dealing with the relationship between algorithms and human rights? This is the focus of this paper.

I. Algorithm Discrimination in AI Development

Algorithms are constantly integrating into our lives. Public institutions and private institutions, as two different kinds of algorithm users, can produce algorithmic discrimination against human rights in their activities. This discrimination manifests itself in different ways, directly or indirectly. In the public sector, the combination of intelligent algorithms and justice has raised concerns about human rights abuses in the United States. The defendant risk assessment algorithm, which estimates the risk of future crime and has been in use since 2014, has been criticized by third-party institutions as a violation of the rights of black people and as racial discrimination. The system, similar to that used in the sci-fi film Minority Report, uses a complex set of equations to calculate the likelihood of future crimes based on a defendant’s age, education, family background, race, gender and other factors. The resulting scores become part of the basis for the judge to pass a sentence. The system is used in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin. The risk of crime is closely related to the possibility of a criminal’s future correction. As a result, the US Department of Justice’s national corrections agency has recommended that the defendant risk assessment system be used at every stage of the judicial process. The system is also expanding its coverage in an attempt to be used in federal crime assessments, and related bills have been submitted to the US Congress. Since the introduction of the defendant risk assessment system, third-party assessment institutions and some conservative forces have warned that the system could violate human rights.
In 2014, the Attorney General of the US Department of Justice criticized the defendant risk assessment, saying that “although these technologies are developed with good intentions, they inevitably undermine the justice we cherish.” 2 Northpointe is the company responsible for developing the algorithm for the assessment system. Since 2016, ProPublica, a nonprofit news organization, has conducted an empirical analysis of the company’s algorithm results, showing that the cases in which the algorithm is applied do indeed display discrimination between black and white defendants, with black defendants receiving significantly worse results than white ones. 3
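ProPublica’s finding can be made concrete with a small sketch. The following Python fragment is purely illustrative, using invented data and group labels rather than Northpointe’s actual algorithm or records; it shows the kind of disparity test an outside auditor can run by comparing false positive rates, i.e., the share of people in each group who did not reoffend but were nevertheless labeled high risk.

```python
# Illustrative sketch of an auditor's disparity test; all data is hypothetical.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly labeled high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    return sum(r["high_risk"] for r in non_reoffenders) / len(non_reoffenders)

# Hypothetical audit sample: (group, labeled high risk?, actually reoffended?)
sample = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

fpr_a = false_positive_rate(sample, "A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(sample, "B")  # 1 of 3 non-reoffenders flagged
```

A gap between the two rates, as in this toy sample, is precisely the kind of evidence of disparate treatment that a third-party auditor can surface without access to the algorithm’s internals.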

The ethical problems raised by the development of AI also plague private institutions. Project Maven, Google’s military project in partnership with the Pentagon, aims to use AI to analyze images taken by drones. 4 The project’s computer vision algorithm improves the accuracy of weapons targeting. Since the project was launched, more than 3,000 Google employees have signed an open letter opposing the company’s AI and its algorithms being used for military purposes. 5 This incident also led to the establishment of Google’s AI Ethics Committee, which was set up to deal with the discrimination that may result from the disparate outcomes of intelligent algorithms and from their military uses. But during the selection process for members of the ethics board, thousands of Google employees called for the removal of Kay Coles James because of her discriminatory remarks about transgender people and her organization’s skepticism about global climate change. 6 The problem of algorithm discrimination makes it difficult for the organizations that supervise the ethics of algorithms to meet the needs of human rights protection.

The plight of intelligent algorithms is not just the threats posed by “unicorns.” Governments often use a large number of algorithms in the form of public-private partnerships or the purchase of services. In the context of a welfare state, every aspect of citizens’ lives is closely linked to algorithms. In recent years, Great Britain has applied digital technology and data tools to redevelop public services on a large scale. According to the UN special rapporteur on extreme poverty and human rights, the problem with the automation of public services in the UK is that it is not transparent to citizens. The digital welfare state is affecting human rights. “Universal credit has built digital barriers that have effectively prevented many individuals from acquiring their entitlements.” 7 Low-income groups lag far behind others in terms of digital skills and literacy. Once the automation system comes online, the algorithms of Britain’s public services will become discriminatory on a hierarchical basis. The opacity of algorithms has become a major concern for the application of AI technologies in government services. In politics and administration, governments can even manipulate political preferences through technological operations, resulting in undemocratic outcomes. Replacing human judgment is the goal of AI, and algorithms have taken over many of the government’s administrative decisions.

While improving efficiency, AI has gradually replaced human judgment in complex affairs. Under the framework of administrative law governance, intelligent algorithms have become a part of the legal system. The governance of algorithms means that legal values such as human rights, democracy and people’s livelihoods are also important principles of algorithm regulation. Thus, several countries, institutions and regional organizations have carried out research on human rights regulation and algorithm regulation in recent years. Among them, in May 2018, a coalition of human rights organizations and technology groups in Canada released new standards for the regulation of intelligent algorithms. The Toronto Declaration issued by the coalition (its full name being the Toronto Declaration: Protecting the Rights to Equality and Non-discrimination in Machine Learning Systems) represents a mainstream approach to human rights and algorithm regulation. 8

II. The Legal Logic of the Toronto Declaration

Even before the Toronto Declaration, Internet giants had started to be concerned about the relationship between algorithms and human rights. In particular, with the rapid development of AI technologies, many human rights dilemmas, such as those between algorithms and discrimination or between privacy and big data, have emerged. In June 2018, Google proposed seven “ethical principles” for AI applications to ensure that the technology progresses on the right track. These principles include making sure that AI is applied to socially beneficial applications, avoiding the creation or reinforcement of unfair bias, and attaching importance to privacy when developing and using AI technologies. 9 In view of the widening gap between the rich and the poor, the weaponization of AI technology, the abuse of technology and other issues, Microsoft has also introduced corresponding ethical principles. 10 The Toronto Declaration emphasizes the prevention of discrimination in AI algorithms and machine learning and stresses that algorithms may violate existing human rights laws. “We have to focus on how these technologies affect individuals and human rights. In the world of machine learning systems, who is liable for human rights violations?” 11 The Declaration, while not legally binding, provides new guidance to governments and technology companies dealing with these issues.

A. The due process of algorithm research and development

The purpose of the Toronto Declaration is to apply international human rights concepts and norms to AI and machine algorithms. In the practice of AI, intelligent algorithms may “make mistakes” due to inherent racial biases, as was the case with Google’s image recognition app. In terms of results, the effects of these algorithms do have an impact on human rights, and may even constitute objective conduct in a legal sense. However, in terms of subjective factors, it is difficult for governments and ordinary consumers to detect the subjective intention of machine algorithms. This is the most complex aspect of the relationship between algorithms and human rights. Many algorithms adopt the guise of technological neutrality to conceal discrimination in special forms and thereby evade human rights review.

Generally speaking, the legal review of human rights mainly focuses on the subjective factors of legal acts. Hidden or indirect algorithmic discrimination is characterized by technicalization as “technology neutrality,” which is difficult to examine under today’s human rights legal framework. According to the Declaration, the bias in the results of algorithms is actually caused by human factors in the R&D and use of algorithms. Therefore, the main regulatory idea of the Declaration is to treat algorithms as a special subject for legislation, with the emphasis of algorithm regulation on examining the R&D and use of algorithms through “due process.” Once the perspective of regulation focuses on the procedural control of the algorithm, the review of human rights standards can be extended, through tracing, to all links in the algorithm formation process. The implementation of a machine algorithm includes the key steps of architecture research and development, engineering code, machine learning and use of the algorithm. The principle of due process in algorithm development rests on the principle of diversity in international human rights. Guided by the principle of diversity, first of all, the R&D and testing of the architecture should ensure that the affected groups and experts are included in the R&D, testing and evaluation stages. Second, engineering code is often expressed in the form of technology neutrality, but in the code testing process, a lack of diversity among testers often leads to hidden discrimination within the algorithms; the composition of testers is thus a main source of algorithm bias. Finally, the development of AI makes algorithms capable of machine learning. To “feed” the algorithm with data is the key. Machine learning itself is a demonstration of technological progress and improved work efficiency, but machine learning needs people to supply a large amount of labeled data and materials.
How and by whom this data is “fed” to the AI learning is often one of the main sources of algorithm discrimination. Thus, the diversity principle also calls for the diversity of “fed” data. 12
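One way to operationalize the diversity principle for “fed” data is a pre-training audit of group representation. The sketch below is a hedged illustration, with hypothetical group labels and an assumed threshold, of how under-represented groups in a training set can be flagged before the algorithm ever learns from the data.

```python
# Hedged sketch: audit the data "fed" to a learning system for group
# representation. The threshold and group names are illustrative assumptions.
from collections import Counter

def representation_gaps(labels, min_share=0.2):
    """Return groups whose share of the training data falls below min_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Hypothetical training set: each entry is the group of the person the
# training example describes.
training_groups = ["A"] * 70 + ["B"] * 25 + ["C"] * 5

gaps = representation_gaps(training_groups)  # group "C" falls below 20%
```

In a real deployment, what counts as a group and what share is acceptable are exactly the normative questions the Declaration assigns to affected communities and human rights experts, not to the code.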

B. Government supervision of the use of algorithms

Algorithm discrimination stems from the “legislative process” of algorithms. This perspective has become the main theoretical basis of the Toronto Declaration for the prior regulation and post-hoc remedy of algorithm discrimination. If process is the primary means of control, then the most appropriate actor for process regulation is the government. According to the different users of the algorithms, the Declaration distinguishes between two different intensities of regulatory responsibility, namely, for machine algorithms used by government agencies and for those used by private agencies.

For governments that use algorithms, the Toronto Declaration requires governments to assume the function of mandatory regulation, a duty of the highest intensity. Specifically, this includes: first, the government should actively adopt diverse employment practices and participate in consultations to ensure that diverse perspectives can be integrated into the R&D, implementation and evaluation stages so as to express a broad range of individual views. The prior supervision of algorithm discrimination is to ensure diversity throughout the whole process of algorithm formation. This is not about the diversity of algorithmic technologies, but about the diversity of the people involved in the development and use of algorithms. The Toronto Declaration, however, does not elaborate on whether diversity is reflected in skin color, race, language, gender or other elements. Second, it is necessary to ensure that public institutions are trained in human rights so as to supervise the officials in charge of the procurement, development, use and evaluation of machine learning tools. Government officials represent the state and should be subjectively neutral when using intelligent machines. Third, it is necessary to establish a corresponding evaluation mechanism for the government’s series of administrative actions related to machine learning and to accept public supervision. The supervision can take the form of independent evaluation, public supervision or judicial supervision. Fourth, it is a must to ensure that decisions supported by machine algorithms comply with the due process recognized by international human rights norms. Government decisions made by machine algorithms should also comply with due process standards under the current legal system, such as research, hearing and democratic decision-making rules, to ensure the consistency of decisions.

Besides, governments often rely on private providers to develop and use intelligent algorithms. For example, with the development of the internet of things (IoT), 5G and AI, many resources such as people, things and information are gradually transformed into data that can be collected and distributed. As a result, the city of the future is destined to be a “digital city” run by super-algorithms, a digital city that can operate in fine detail. The “city brain” builds a series of digital urban infrastructures that can create a “twin city” on the internet that maps to reality. Traffic data, food supply chain management, and medical tracking systems are all part of the IoT data. For example, in the early stage of its smart city strategy, IBM acquired the water levels of urban sewers through smart manhole covers and monitored the underground urban pipe network in real time. On the ground, the huge and complicated traffic system generates behavioral data that urban residents inevitably produce every day; it is also the starting point of many smart cities. In all, the smart city system includes 12 subsystems, covering water, transportation, rivers and more. 13 In this regard, the government should assume a medium intensity of regulatory responsibility. In other words, third-party R&D institutions should be required to fulfill the legal responsibility of human rights norms to identify, prevent and reduce algorithm discrimination. The Toronto Declaration sets out four steps for the government to supervise private institutions in reducing algorithm discrimination. First, identify risks. Attention should be paid to the R&D of the algorithm model, the supervision mechanism and the data processing of the intelligent machines purchased by the government. Second, evaluate use.
When using these algorithmic systems, the government should take some measures to reduce human rights damage through impact assessment, such as reducing ignored discrimination in data and systems, running dynamic test models, and ensuring that groups and experts who may be affected are included in the R&D, test and evaluation stages. The government should also entrust an independent third party to evaluate human rights standards. Third, test and audit. As with other government programs, the system should be kept under regular and dynamic testing and auditing, creating a feedback loop of bias discovery and self-repair. Fourth, openness and transparency. According to the principle of government information disclosure, the government should voluntarily disclose the relevant discrimination risks of intelligent machines.

For algorithm discrimination by private institutions, the government should establish a basic framework in the civil field through legislation to ensure a weak regulatory responsibility of due diligence in the early stage of algorithm development. In other words, private institutions should undertake human rights assessments of the algorithms in outsourced smart machines for public services, on their own initiative or by commissioning third parties, so as to ensure that algorithm discrimination does not arise when algorithms are implemented in the public domain. The private sector has a responsibility to conduct due diligence on human rights. The purpose of due diligence is threefold. First, identify potential discriminatory outcomes. When conducting algorithm surveys of the private sector, the government should not only identify direct discrimination but also be vigilant against indirect discrimination, such as practices that are technically neutral but result in discrimination. Some hidden discrimination risks are often associated with the databases used for machine learning, such as training systems based on atypical or imperfect data, or datasets that embody historical or systematic biases. Private institutions should consult relevant human rights experts regarding appropriate databases. Second, governments should take effective measures to prevent and reduce discrimination and hold private institutions responsible. Since the Toronto Declaration adopts the process control theory, algorithm discrimination has become traceable. The key is to respond quickly and effectively to emerging discrimination so that the risks are not exacerbated. Finally, these actions should be open and transparent. Algorithm discrimination is quite hidden, so the cost for ordinary citizens of understanding and noticing it is very high, which requires the government to establish a mechanism for active disclosure.
The government should publish information about risks as well as specific discrimination cases that have been identified. Where there is a risk of discrimination, the government should publish technical details, including the function of the algorithm, the training data and the source of the data. The balance between this principle and the value of trade secrets needs to be maintained. The level of disclosure and the degree of confidentiality need to be determined by the government according to the different discriminatory algorithms involved and the urgency of human rights relief. 14

C. Relief system after an event

In addition to prior government supervision, the government should also provide complete channels of relief when violations of human rights by algorithms occur. Personal relief mainly focuses on three types of legal indicators: transparency, accountability and effectiveness.

Transparency is not only an important way to regulate algorithm bias in advance, but also an effective way to reduce the cost of ordinary citizens’ relief after the event and to lighten the burden of proof in litigation. In addition, the principle of transparency requires the use of machines to be interpretable and understandable, so that effects on individuals and groups can be effectively detected or traced. To this end, the government should disclose which public services use intelligent algorithms, clearly inform the public how the algorithmic decision-making process is completed, and record relevant identification behaviors and related discrimination or human rights impacts. At the same time, the government should avoid using “black box” data, so as to meet transparency and accountability requirements. Accountability refers to the traceability of algorithm bias and the accountability of the various participants in the process of algorithm development and use, such as the model designers and those who select the data samples for machine learning.

Relief requires high effectiveness, while fuzzy and hidden algorithm bias often fails to attract the attention of ordinary citizens. The victims do not know whether a decision or procedure violates their human rights. Sometimes the complexity of algorithms makes it difficult even for the public or private institutions themselves to explain their decision-making processes. Intelligent algorithms in the field of justice tend to lead to unfair trials and impaired litigation rights. For example, the aforementioned algorithm discrimination in the United States involving the defendant risk assessment system implies racial and class discrimination. This requires the government first to make clear, when making decisions using machine algorithms, which actors are legally liable and responsible for the possible discrimination caused by the algorithms. Regarding ways of relief, the government should provide effective relief to the victims, including repair, apology, compensation, punishment and a guarantee of non-repetition. 15

The Toronto Declaration understands algorithms as governance rules and regulates their “due process” from the perspective of the “legislative process,” for example, by regulating the private rights of third parties through the legal responsibility of “due diligence,” imposing legal responsibilities of different intensities on “legislators” and on algorithm developers “entrusted with legislation,” and requiring that the “diversity” value of international human rights law be fully reflected in the development of algorithms. These algorithm regulations from the perspective of human rights have the distinct characteristics of the written law of the continental European legal system.

III. The Path Taken by European and American Countries for the Human Rights Regulation of Algorithms

In recent years, the European Union has become increasingly involved in the legal regulation of emerging Internet markets and AI. For example, the European Union has launched a market investigation into Google’s antitrust practices and created the groundbreaking “right to be forgotten.” These actions have enriched the connotation of human rights in the new era and demonstrated the EU’s regulatory concepts. In terms of the relationship between intelligent algorithms and human rights, the EU further probes the essence of algorithms in private institutions, building on the algorithm rules in the Toronto Declaration, and holds that algorithms are more like “collective works” or “collaborative works” in copyright law, with many private participants. As a collective work, the research and development of algorithms integrates a large amount of user data to train the intelligent algorithms as they grow; the establishment of these databases is more of an act of compilation. As a collaborative work, algorithm development is the work of many programmers, testers and even users. On this new understanding, algorithms are not just the private property of private institutions, but the shared property of the users who provide the data and the governments that provide the markets. Therefore, the EU’s human rights regulation of intelligent algorithms rests on a more detailed rationale.

A. The EU’s “work” identification and regulation of algorithms

On April 8, 2019, the European Commission issued the Ethics Guidelines for Trustworthy AI (hereinafter referred to as the AI Ethics Guidelines). The European Union adjusts the relationship between AI and human beings through the guidelines, which cover not only the relationship between algorithms and human rights in the legal sense, but also broader ethical issues such as technical risks, social welfare and environmental protection. The Guidelines were drafted by the High-Level Expert Group on Artificial Intelligence (AI HLEG). The AI HLEG, consisting of independent experts representing academia, industry and civil society, had been working on the AI Ethics Guidelines since April 2018. The EU Commissioner for the Digital Economy and Society said, “Today, we have taken an important step towards secure and ethical AI in the EU. Based on the values of the EU, we have laid a solid foundation with the broad and constructive participation of many stakeholders in business, academia and civil society.” 16 The creation of the AI Ethics Guidelines is an attempt to integrate people from all walks of life in the European Union with emerging industrial forces.

Compared with the Toronto Declaration, the European Union’s regulatory thinking is to control the ethical dilemmas caused by AI by making its use knowable and optional, while also paying attention to the participants behind the algorithm. Like the Toronto Declaration, the European Union’s AI Ethics Guidelines try to make the diversity of participants the focus of human rights regulation, but they understand the formation of the algorithm as a diversified process of creation. The European Union’s AI Ethics Guidelines state that AI systems should be developed to enhance and complement human cognitive, social and cultural skills. That is, AI, as a tool embodying intellectual investment, should be used to enhance human cognition of the world.

Overall, the European Union puts forward four ethical standards, namely, respect for human autonomy, prevention of injury, fairness, and interpretability, as well as seven key elements for realizing reliable AI, namely, human initiative and oversight, technological robustness and security, privacy and data management, transparency, diversity, non-discrimination and fairness, social and environmental well-being, and accountability. Some of these principles, such as accountability, transparency, and diversity, have already been expressed in the Toronto Declaration. On the legal relationship between algorithms and human rights, the AI Ethics Guidelines propose two important legal measures to avoid algorithm discrimination: one is to further emphasize and refine the interpretability of algorithms; the other is data cleaning and integrity.

Just as with collaborative works, the multiple partners, especially users, have the ability and the right to know and recognize the use of the algorithms, which requires the algorithms to be interpretable so as to ensure their openness and transparency. The European Union’s AI Ethics Guidelines state that interpretability is critical to building user trust in AI systems. In other words, the entire decision-making process, including the relationship between inputs and outputs, should be interpretable. This being said, all current intelligent algorithms and models operate in a “black box.” It should therefore be made clear that, first, the traceability of the relevant elements of AI systems, including data, systems and business models, should be ensured. Second, the datasets and processes used by AI systems to generate decisions should be documented for traceability and improved to enhance transparency, including the data collected and the data markers used by algorithms. Traceability includes being auditable and interpretable. Third, interpretability calls for explaining the technical process and the related decision-making process of AI systems. Technical interpretability requires that the decisions made by intelligent algorithms can be understood and tracked. When AI systems could have a significant impact on human life, it is necessary to have a reasonable explanation for their decision-making process. Fourth, people have the right to know whether they are communicating with humans or with AI systems, which requires that AI systems be identifiable.
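The traceability requirement can be made concrete with a minimal logging sketch. The code below is an illustrative assumption, not an EU-prescribed format: every automated decision is recorded with its inputs, model version and output, so that the decision can later be audited and explained. All field names and the model identifier are hypothetical.

```python
# Minimal sketch of decision traceability: an append-only audit log that an
# external reviewer could later inspect. Schema and names are assumptions.
import datetime
import json

def log_decision(model_version, inputs, output, log):
    """Append an auditable record of one algorithmic decision."""
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

audit_log = []
log_decision("risk-model-1.3", {"age": 34, "prior_offenses": 0}, "low risk",
             audit_log)

# The record can be serialized and handed to an independent auditor.
exported = json.dumps(audit_log[0], sort_keys=True)
```

Keeping the inputs and model version alongside the output is what makes the decision auditable after the fact; logging the output alone would not allow the decision to be reconstructed or explained.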

Like a collective work, data is crucial in the formation of algorithms, and it is an important link at which the results of algorithms may generate human rights discrimination and prejudice. The AI Ethics Guidelines stress data quality and integrity. The quality of the datasets is critical to the performance of AI systems. The data collected may be biased, inaccurate, or incorrect, and should be cleaned before training to remove such data. At the same time, the integrity of the data should be guaranteed: if the data given to an AI system is malicious, it may change the system’s behavior. Datasets should be tested and processed throughout planning, training, testing, and implementation. In other words, when using data to train algorithms, human beings must participate in data cleaning. The cleaning standard should be the human rights standard, so as to remove the markers and history of human rights discrimination that may be attached to the data itself. 17
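The cleaning step described here can be sketched as a simple pre-training filter. The record schema and the `bias_flagged` marker below are assumptions for illustration only; the point is that incomplete or bias-flagged records are removed before the data is fed to a learning system.

```python
# Hedged sketch of a pre-training data-cleaning pass. The schema and the
# bias flag are hypothetical; who sets the flag and by what human rights
# standard is the normative question the Guidelines leave to people.

def clean(records, required=("age", "outcome")):
    """Drop records that are incomplete or marked as carrying known bias."""
    cleaned = []
    for r in records:
        if any(r.get(k) is None for k in required):
            continue  # incomplete record: missing a required field
        if r.get("bias_flagged"):
            continue  # marked as carrying historical discrimination
        cleaned.append(r)
    return cleaned

raw = [
    {"age": 30, "outcome": 1, "bias_flagged": False},
    {"age": None, "outcome": 0, "bias_flagged": False},  # incomplete
    {"age": 45, "outcome": 1, "bias_flagged": True},     # biased label
]

training_data = clean(raw)  # only the first record survives
```

The mechanical filter is the easy part; deciding which records deserve the flag is the human-in-the-loop judgment the Guidelines require.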

Of course, the European Union’s AI Ethics Guidelines do not define algorithms as “works” in the sense of copyright law, but the legal logic they adopt is similar to the concept of works in copyright law. The relationship between algorithms and human rights is handled by taking works as the medium, turning it into the relationship between the rights of the participating authors of a work and those of others. For a work not to violate the human rights of others in the public domain, the diversity of participants is important, and more importantly, close attention should be paid to the users, who are understood as part of the authors, and to the data generated by the work. The important data formed by algorithms is a part of users; it is an abstract expression of users’ various behaviors, or in other words, the collection of their social relations and legal rights. Therefore, compared with the creating process emphasized in the Toronto Declaration, the European Union’s AI Ethics Guidelines pay more attention to the rights of the objects to which algorithms are applied, that is, the users, whose human rights should be protected. Accordingly, the openness, interpretability, traceability and data integrity of the work have become the requirements for algorithm regulation in accordance with human rights standards.

B. The US’s “Speech” Identification and Regulation of Algorithms

The Toronto Declaration and the AI Ethics Guidelines of the European Union incorporate human responsibilities into the human rights regulation of algorithms. However, it is impossible for algorithm regulation in real life to be perfect in all aspects concerning prejudice and discrimination.

People can see that the development of strong AI may lead to more hidden and indirect human rights discrimination. American companies, with Google as a typical example, directly control software development kits and have introduced strict licensing regulations for algorithms. Google’s idea is to regulate the development of algorithms by controlling development kits and licenses, striking a balance between the current development of AI and the value of human rights protection. For example, with TensorFlow, one of its open-source AI software libraries, Google will withdraw the license if it considers that users are not using its work “properly.” The logic of this algorithm regulation still licenses and regulates the algorithm as a work, which can be seen as a continuation of the European Union’s logic. However, in the United States, Google and other technology companies have already resisted government regulation of algorithms through the “freedom of speech” protected by the US Constitution, arguing that government regulation of algorithms is an infringement on freedom of speech. The original intention of algorithm regulation is to protect human rights from the prejudice and discrimination of opaque “black box” algorithms. In the United States, however, the logic of algorithm regulation has been distorted into a question of freedom of speech, which protects the algorithms of big enterprises from regulation.

The legal logic by which algorithms are recognized as a form of speech was established in a series of cases in US courts. 18 In the 2003 case of Search King v. Google, Google claimed that its webpage ranking, as the core algorithm of its search engine, was protected by law. The court ruled that the search engine’s algorithm was the answer given to the searcher, and that the search behavior was a question and answer between Google and its users. 19 In other words, an algorithm is a “statement” or view that a search engine holds about various websites, that is, its opinion. Such “speech” is guaranteed freedom by the First Amendment to the US Constitution. This legal logic was later expanded further: the essence of the algorithms of websites or search engines is the collection, editing and sorting of information and resources on the internet, just like the editing and processing of newspapers, which reflects the views and judgments of editors and is protected speech. 20

From this point of view, the meta-question of what an algorithm is determines the relationship between algorithms and human rights. American algorithm regulation focuses on understanding the human elements and characteristics of algorithms. As a substitute for human judgment, algorithms were once understood by the US courts as people and their speech, and were even endowed with the marks of human rights, such as freedom of speech. Therefore, algorithm regulation in the United States often involves value conflicts and balancing among the various rights in the bundle of human rights. This kind of understanding of algorithms is often applied to strong AI; the relationship between algorithms and human rights even needs to answer the philosophical meta-proposition of whether AI is human. By contrast, the European Union also focuses on the human element of algorithms but pays more attention to the people behind algorithms and the human elements contained in them. This is reflected in the participants in the algorithm formation process in the Toronto Declaration, and in the various human elements of artificial intelligence in the European Union’s AI Ethics Guidelines. Therefore, an algorithm is either a set of rules created by the consensus of a number of people, or an intelligent work created through the joint R&D, coding and testing of many people. The algorithm as a rule is mostly understood in cases where the government uses the algorithm or entrusts a private authority to develop it, while the algorithm as a work is mostly understood in cases where a private company uses the algorithm.

Frankly speaking, the complexity of algorithms means that they must contain human elements and the results of human labor, and they have both the subjectivity of humans and the objective nature of tools. The Toronto Declaration, the European Union’s AI Ethics Guidelines, and the US precedents reveal the spectrum of the nature of algorithms and of human rights regulation. Today, as the era of strong AI approaches, should we adopt a more subjective attitude, equating algorithms with human beings and taking into account the whole category of human rights, or a more objective attitude, taking algorithms as objects and balancing human rights in terms of efficiency and rights? Or is there a more pragmatic third way?

IV. China’s Algorithm Regulation System Follows a Pragmatic Path

The meta-question debate about the nature of algorithms has not yet fully unfolded in China, but the relationship between algorithms and human rights has been discussed in public opinion and some of the literature. As users are exposed to news content promoted according to algorithms, some public opinion holds that news cannot be “kidnapped” by algorithms. In the early days of the popularity of search engines, the content presented by search engines prompted questions about the objectivity behind it. In other words, people are gradually being wrapped up in a world presented by machine algorithms. With the increasing popularity of smart devices, algorithms and their information promotion have gradually expanded from news content to all consumer information. The problems of algorithms and the privacy of user information have become urgent in China.

The bias and discrimination presented by algorithms have not yet entered the center of public opinion in China, but the connectivity of the internet will inevitably import this legal issue into China. The global development of China’s internet and AI industries not only needs to deal with the algorithm regulation logic of the United States and the European Union, but also needs to establish a set of universal ethical rules of China’s own, so as to gain a voice in setting the standards of the emerging order. Besides, in practice, China’s current legal systems, especially those related to human rights such as social security, health, welfare distribution, employment and education, have gradually started to introduce AI and machine learning relying on algorithms. For example, the smart city construction of some local governments has left the power of traffic control to algorithms. Alibaba Cloud’s “city brain,” launched in 2016, has aggregated billions of pieces of data scattered across the city, from traffic management to public services, by taking over 1,300 traffic lights and 4,500 road videos. It has built a complete dynamic network of urban traffic to realize the goal that “all aspects of the city are related to the brain.” The popularity of Alipay also enables the “city brain” to adjust the supply chain of consumer products in time to better serve local residents. Online shopping has gradually replaced human customer service with AI to deal with customers’ after-sales needs, which has changed the application scenarios of the traditional Law on the Protection of Consumer Rights and Interests. Ali Health and other enterprises have started to build new algorithm models for health care. In the field of education and training, especially online education, more intelligent algorithms are being used.

Therefore, the development of China’s AI industry must establish an ethical rule system for algorithms and human rights. In the spectrum of algorithm regulation, China should first abandon the extreme of US-style freedom of speech protection. In the United States, a large number of public resources are devoted to free speech review before algorithms can be regulated, which delays the development of specific rules for subsequent algorithm regulation and even results in a “natural state” without regulation. The result of this lack of government regulation is what Kaplan calls “high-tech feudalism”: some big capital and big enterprises have created market monopolies, monopolizing user data and algorithms. In this situation, the human rights conflicts between enterprise algorithms and ordinary users are the most serious. Therefore, China should start from a pragmatic approach, establish an overall framework for government regulation of algorithms, balance the relationships among scientific and technological innovation, human rights protection and commercial development, and assign enterprises different degrees of human rights responsibility at different stages of scientific and technological innovation. Against the background of the rearrangement of global industrial chains, competition in algorithms and scientific and technological innovation directly determines whether a nation-state can seize the commanding heights of the new order. Therefore, government regulation should give enterprises weak, post-event algorithmic responsibilities. As for mature “unicorn” enterprises and foreign enterprises that have become industrial monopolies, government regulation needs to endow them with social and ethical responsibilities commensurate with their commercial power, making them accept prior supervision and process supervision, together with strong post-event judicial responsibilities.
The dynamic and hierarchical system of ethical responsibilities for algorithms is an effective way for the government to coordinate various kinds of value conflicts in the relationship between algorithms and human rights from the perspective of pragmatism.

Second, in terms of the overall framework of algorithm regulation, the basic legal logic of government supervision should implement the three algorithm principles of transparency, interpretability and accountability. These three principles are also the core legal logic of the EU’s AI Ethics Guidelines and the Toronto Declaration. Transparency is about protecting users’ right to know. Users should be aware that they are talking to machine algorithms, or that machine algorithms are making decisions for them, and should be further informed of the risks of discrimination and bias. For third-party algorithmic decision-making entrusted by the government, the whole process of the algorithm must be open and transparent to the government and the public as well. Side by side with transparency, the interpretability of algorithms is required. The functions and strategies implemented by algorithms, as well as the way data are collected and used, should be expressed or explained in a way that can be understood by the public, instead of being directly disclosed in the form of code and other technical artifacts. Such requirements and practices, on the one hand, help substantially achieve the goal of transparency, and on the other hand, help protect the trade secrets of private institutions. The requirement of accountability puts more emphasis on the human factors behind algorithms, abandons responsibility avoidance, and links the use of technology with human decisions, so that algorithm bias and discrimination can be traced and held accountable. The realization of accountability is also the legal manifestation of algorithm ethics, which helps cultivate the professional ethics and human rights responsibilities of those involved in the development and use of algorithms. These three legal standards can be verified by the government for every algorithm introduced into the public domain and public services.

Third, during the application and execution of algorithms, the use of algorithms in government procurement and public service entrustment requires an efficient feedback mechanism. The current Administrative Review Law and Administrative Procedure Law will change because of the efficient use of intelligent algorithms in administrative affairs. The algorithms used in administrative decision-making and administrative law enforcement can be taken as objects of reconsideration directed at administrative actors. Third-party evaluators can also conduct transparency assessments of algorithms in government and public affairs, so as to help governments or algorithmic service providers improve themselves to meet human rights and legal standards.

Finally, the functions of judicial relief and rulemaking help the government deal with the relationship between value and efficiency when new things emerge. In terms of post-event relief procedures, China’s internet courts can grant a right of appeal against key actions involving AI and algorithms, and handle the development of new things with judicial skill. In May 2019, the Beijing Internet Court ruled on the country’s first case of copyright infringement involving content generated by artificial intelligence. The user argued that the intelligent content generated by computer software does not constitute a work, because the intellectual input in the development and use of intelligent software has nothing to do with the expression of thoughts in a work. But the court ultimately gave the algorithm protection, holding that the developers’ and users’ labor inputs created interests that needed to be protected. The significance of this case is not only that the creation of algorithms is endowed with rights and interests, but also that algorithms and their owners, developers and users are treated as objects of judicial protection.

As China’s internet AI industry goes global, it is not necessarily the government regulation model that will follow the industry abroad. Territorial jurisdiction is the main feature of the nation-state; it is the standards of autonomous industry associations, more than anything else, that will follow the export of algorithms. Therefore, while the government supervision system is being established, China’s internet and AI industries should take the initiative to work together, spread China’s AI ethics to other enterprises providing algorithms around the world, and coordinate with human rights, democracy and other universal values, so as to establish Chinese standards and a global voice.

V. Conclusion

The introduction of the EU’s AI Ethics Guidelines is not only an exploration of new man-machine relations, but also a prelude to competition over the world order. The EU’s Digital Single Market, a player in this competition, holds that “the moral dimension of AI is not a luxury feature or add-on. Only by trusting it can our society benefit fully from technologies. Ethical AI is a win-win proposition, and it can be a competitive advantage for Europe — a trustworthy, human-oriented AI leader.” 21 In recent years, countries have started to issue standard-like ethics guidelines on AI to regulate man-machine relations. Japan’s civil society issued the Guidelines (Draft) on the Security of Next-Generation Robots. The Korean government started to draw up a Moral Charter for Robots as early as 2007. The development of AI is like Columbus’s discovery of the New World. For humankind, the discovery of the American continent may have been a direct result of the great voyages, but more importantly, it started the reallocation of the global order. The introduction of various ethical standards for AI is not just meant to coordinate human-machine relations in a country or a region. The universality of the internet and its technologies means that the algorithms and ethical regulation of AI will promote a new world order. In this world order, algorithms first encounter human rights, the foundation of the former world order. Once AI technology itself encounters the conflicts surrounding the philosophical question of what is “human,” the world order of human rights will also be reconstructed. The European Union and the international organizations involved in the Toronto Declaration have smelled the smoke of competition in the new order and are taking action.
Inspired by the concept of a community with a shared future for mankind, China needs to make even more creative explorations of how to deal with the relationship between algorithms and human rights and how to contribute Chinese wisdom to the new order heralded by AI.
(Translated by CHEN Feng)
* XU Bin ( 徐斌 ), Assistant Professor and Doctor of Law, Institute of Law, Chinese Academy of Social Sciences.
1. Cai Yingjie, “Artificial Intelligence: Bounded by Law and Ethics,” People’s Daily, August 23, 2017.
2. Julia Angwin and Jeff Larson et al., ProPublica, Despite Disavowals, Leading Tech Companies Help Extremist Sites Monetize Hate, https://www.propublica.org/article/leading-tech-companies-help-extremist-sites-monetize-hate.
3. ProPublica organized a series of papers under the theme of machine bias, see Machine Bias: Investigating the algorithms that control our lives, https://www.propublica.org/series/machine-bias.
4. cnBeta, “The Pentagon Announced Plans to Use Big Data and Machine Learning to Combat ISIS,” accessed May 31, 2019, https://www.cnbeta.com/articles/tech/613215.htm.
5. Ifeng, “‘No Longer Being Evil’: Google Decides to Quit the Military Project Maven after 2019,” accessed May 31, 2019, http://tech.ifeng.com/a/20180602/45012084_0.shtml.
6. Tencent, “Google Disbands AI Ethics Committee amidst Disputes,” accessed May 31, 2019, http://tech.qq.com/a/20190405/002440.htm.
7. Statement on Visit to the United Kingdom by Professor Philip Alston, United Nations Special Rapporteur on extreme poverty and human rights, accessed May 31, 2019, https://www.ohchr.org/en/NewsEvents/Pages/DisplayNews.aspx?NewsID=23881&LangID=E.
8. Sohu, “Rights and interests coalition released the Toronto Declaration, hoping that AI technologies stay away from discrimination and prejudice,” accessed May 31, 2019, http://www.sohu.com/a/231935060_99956743.
9. Tencent, “Google releases seven principles in using AI, vowing never to apply it to weapons,” accessed May 31, 2019, http://tech.qq.com/a/20180608/005111.htm.
10. Sohu, “Microsoft and the China Development Research Foundation released the Future Cornerstone report, discussing the social role and ethics of artificial intelligence,” accessed May 31, 2019, http://it.sohu.com/20180709/n542934991.shtml.
11. sohu.com, “Rights and interests coalition released the Toronto Declaration, hoping that AI technologies stay away from discrimination and prejudice,” accessed May 31, 2019, http://www.sohu.com/a/231935060_99956743.
12. The Toronto Declaration: Protecting the Right to Equality and Non-discrimination in Machine Learning Systems, accessed June 5, 2019, https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/.
13. Peng Mingsheng, “The Smart Earth,” Jishi 17 (2009).
14. The Toronto Declaration, Protecting the Right to Equality and Non-discrimination in Machine Learning Systems.
15. Ibid.
16. sohu.com, “The EU Takes the Lead in Publishing Seven Ethical Principles for AI,” accessed May 31, 2019, http://www.sohu.com/a/306726098_120091102.
17. Ethics Guidelines for Trustworthy AI, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
18. Zuo Yilu’s paper summarizes this in great detail. See Zuo Yilu, “Algorithms and Speech: American Theory and Practice,” Global Law Review 5 (2018).
19. Search King, Inc. v. Google Tech., Inc., No. 02-1457, 2003 WL 21464568 (W.D. Okla. May 27, 2003).
20. Zhang v. Baidu, 10 F. Supp. 434 (S.D.N.Y. 2014).
21. Sohu, “The EU Takes the Lead in Publishing Seven Ethical Principles for AI,” accessed May 31, 2019, http://www.sohu.com/a/306726098_120091102.