POLICY
What Potential New Regulations Mean for Japan’s AI Strategy
September 26, 2024
From its inception, agility has been a core component of Japan’s AI strategy. Part of agility, however, is knowing when to change policies. Prime Minister Fumio Kishida demonstrated exactly that when he opened discussions on the first AI regulations for Japan earlier this month.
The move marks a significant shift in Japan’s approach to AI, which has thus far relied on sector-specific regulations supplemented by voluntary guidelines. Japanese policymakers have historically avoided blanket restrictions out of concern that they could stifle innovation and investment and cause the country to fall behind the technology’s rapid development. In 2021, a Ministry of Economy, Trade and Industry report on Japan’s AI policies stated that “legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment,” adding that “even if discussions on legally-binding horizontal requirements are held in the future,” the “technology itself should not be included in the scope of mandatory regulations.”
However, the government’s stance has gradually changed amid global concern over the risks from generative AI and local criticism that Japan’s current laws enable widespread copyright infringement by tech companies. In February, Kishida’s Liberal Democratic Party (LDP) called for generative AI regulations by the end of this fiscal year. In April, the LDP moved to also include AI models that pose extreme risks.
Tougher AI measures would bring Japan’s approach closer to that of the other G7 member countries. The European Union’s AI Act, which will apply to G7 members France, Germany, and Italy, entered into force on August 1 after it was first proposed in 2021. The United States (US) AI executive order was signed by President Joe Biden in October 2023, and Canada proposed the Artificial Intelligence and Data Act (AIDA) in 2022. The United Kingdom (UK), too, recently announced plans to explore legislation for the world’s most advanced AI models.
Indeed, a 2023 LDP white paper names alignment with global regulatory trends as a key reason for introducing domestic restrictions on AI. “With the increasing utilization of AI across borders,” the paper argues, “the big divergence between international discussions and our policy may likely lead to the isolation of the Japanese AI markets. We have come to a period where we need to consider the regulatory gap between their countries and us.”
The prospect of Japan’s regulatory isolation stands in stark contrast with the country’s extensive leadership of international AI governance efforts, many led by Prime Minister Kishida himself. In 2023, under Japan’s G7 Presidency, Kishida launched the G7’s Hiroshima AI Process Guiding Principles and Code of Conduct, which were once again a priority at this year’s G7 Leaders’ Summit in Italy.
Japan continues to oversee the development of the Hiroshima Process Code of Conduct at the Organisation for Economic Co-operation and Development (OECD), where it serves as the 2024 Ministerial Council Chair. At the May Ministerial Council Meeting, Kishida announced a ‘Friends Group’ of 49 countries and regions that will support the Hiroshima Process, as well as a new center for the OECD’s Global Partnership on Artificial Intelligence (GPAI) in Tokyo. Later that month, Japan also became one of the 11 signatories to establish an international network of AI Safety Institutes, government-backed research institutions focused on AI safety science. A Japanese AI Safety Institute was launched in February of this year.
Closing Japan’s regulatory gap with other countries does not necessarily mean taking drastic measures. At an August 1 meeting on AI regulation, Kishida affirmed that any new legislation would be innovation- and business-friendly, and able to adapt flexibly to changes in the technology. These principles support the LDP’s goal of making Japan “the most AI-friendly country in the world,” including by introducing only “minimum necessary measures” for AI models, according to an LDP white paper from April. “Risks must be addressed with agility, taking the best possible steps in a timely and thorough manner,” the paper argues. “Discipline and regulation are not opposed to innovation, but also promote AI utilization and R&D by creating a safe and secure environment.”
What Japan’s next steps for regulating AI will be remains to be seen. As Kishida said at the August 1 meeting, “discussions on such safety measures, including whether laws will be required or not, have just started.” The Prime Minister’s recent announcement that he will step down in September will likely complicate decision-making, and policymakers could still ultimately decide against introducing any regulation at all. Even so, the Japanese government will at least have considered its options. This is, after all, what agility is about.