JAPAN UP CLOSE

The Challenge of Guiding Technology in the 21st Century

By Staff Writer
October 28, 2020
As the pace of technological advance quickens, many wonder how we can be sure that the innovations that make our lives better won’t be used carelessly or with ill intent. Recent news reports about the risks of smart devices have left consumers worried that the information these devices gather may end up in the hands of people who wish to exploit it. And numerous studies have noted that the algorithms behind AI systems are shaped by the biases, subconscious or overt, of their programmers.

In response to these concerns, many organizations have proposed guidelines meant to ensure that new technologies are developed in an ethical manner while still encouraging beneficial innovations and enabling businesses to reap the rewards of their research and development. The matter is far from simple, and there are no universally agreed-upon rules.

To explore the matter, the University of Tokyo held an open workshop in February entitled “Governance of emerging technologies: framing benefits and risks of biotech and AI”. Biotech and AI were selected as the focus based on the results of a 2017 World Economic Forum survey, in which those two fields ranked high among emerging technologies with the potential for both great benefits and lasting negative impacts on society. Both fields have been developed largely by the private sector, and there is a strong public sentiment that each has the potential to “run away” from its creators.

Technologies like autonomous driving will require AI systems capable of behaving reliably and ethically

The first talk of the session was given by Jeroen van den Hoven, Professor of Ethics and Technology at Delft University of Technology in the Netherlands. In his lecture, “Ethics and governance of AI: An EU perspective,” he discussed the various challenges of creating ethical AI, the balances and conflicts that can arise between ethics and economic development, the roles governments are playing, and the specific guidelines being developed by the EU.

Professor van den Hoven began with the direct statement that “AI is changing our lives.” He noted that each country envisions its own roles for AI, and that these roles are strongly shaped by the nation’s image of itself and its rivals. At the extremes are the US model, which he described as “Homo Economicus” guided solely by profit; the Chinese model, in which AI is a tool for surveillance and utilitarianism; and the Russian model, in which AI has been described by Vladimir Putin as a key to world power.

Against these extremes, Professor van den Hoven asks, “Where is Europe going? Will the EU be a museum, where people visit to see the past, or will it be the cradle of Enlightenment 2.0?” In contrast to its neighbors, he explains, the EU has expressed its desire to develop AI in a way that is human-centric, secure and true to its core ethical values. According to him, the EU already has two clear and binding legal models for setting out ethical principles: the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. “President Macron of France has said that there is a third way between artificial intelligence for control, as in China, and artificial intelligence for profit, as in America. There is a third European way, and he calls it AI for humanity.” He also notes that Japan’s concept, Society 5.0, is very similar to that of the EU.

The European Commission’s High-Level Expert Group on AI has been developing a set of principles for creating what Professor van den Hoven describes as “trustworthy AI”, based on three specific rules. Unlike the famous ‘three laws’ of science-fiction robotics, these are slightly more complex: AI must be compliant with EU law, it must be ethically sound, and it must be technically robust. He admits, “Personally, I’m not much of a fan of ‘trustworthy AI’ because it allows you to rubber-stamp a piece of AI, but then it starts to lead its own life. And then each of the agents who buy, sell, use or otherwise encounter it after that bear no responsibility, because that AI has the stamp that says ‘trustworthy’. We need to focus on the people involved as well.”

The second talk was given by Phil Macnaghten, Professor of Technology and International Development at Wageningen University in the Netherlands. His lecture, “The challenge of responsibility: A responsible innovation approach to the governance of emerging technology,” looked at the question of developing responsible approaches to scientific inquiry. Without a base of responsible ethics, especially in emerging sciences and technologies, applications can easily be misused or produce damaging effects on society.

Professor Macnaghten opened his talk by outlining how our thinking about research and responsibility has progressed. In traditional thinking, responsible research depends solely on producing reliable knowledge. Within this framework, which he terms “Responsibility 1.0”, basic science leads directly and linearly to applied science, which leads to technology, and eventually to prosperity.

While in theory this process can work, it does not necessarily lead to science that addresses the needs of society. For this reason, science policy moved toward “Responsibility 2.0”, or Science for Society, in which the goal of science is to focus on social problems, with responsibility as an organizing concept. “Now, the problem with this model of responsibility,” Professor Macnaghten explained, “is that it doesn’t properly take into account the main issue we face, which is that science and innovation don’t simply solve problems, they create new problems.” He added that merely having a beneficial goal in mind does not always lead to a beneficial outcome.

For this reason, science policy has progressed to a concept called “Science for and with Society”, or “Responsibility 3.0”, in which scientific activity becomes more open, and both the processes and the goals must be in line with social needs. “In this way, it is believed that the future can be taken care of through collective stewardship.”

But why is this really necessary? “Unless,” he explained, “we actively shape science and innovation with social values, future changes will happen based on vested interests and market forces.”

But in trying to shape the direction of our innovations, Professor Macnaghten raised a further challenge: “How do you know where you want to go, and what is the legitimacy of those directions?” To ensure that the development of new innovations proceeds along ethical lines, it is necessary to establish a clear purpose, to make sure that the science is trustworthy, to include the people who will be affected by the process and its results, and finally to closely examine the ethical trade-offs between who will receive the benefits and who will bear the risks.

To best answer the challenges of responsibility, Professor Macnaghten listed four “dimensions” that need to be satisfied. The first of these is Anticipation, which covers the ‘what-ifs’ of a project: What are the possible results? What are the possible impacts? What could go wrong? What are the monetary and non-monetary costs? The second is Inclusion, ensuring both that development involves a diverse range of participants and that everyone who will be impacted can have informed input. The third is Reflexivity: the team must be capable of reflecting on its own values, able to self-critique, and aware of how others frame its work. The last of the four dimensions is Responsiveness, which calls for a framework of governance that demands adherence to all four of these dimensions and enables changes to be made when needed.

Looking ahead to future science and engineering projects, including agricultural biotechnology, AI, nanotech, and even planetary engineering of the global climate, one can see the same potential for unintended consequences and intentional misuse that plagued many of the scientific advances of the 20th century. If we are to navigate the advances of the 21st century, it is imperative that we gain both the capacity to set guidelines that steer our innovations along the ideal path and the will not to deviate from those guidelines for the sake of convenience or profit.