
The Hiroshima Process

(Mains GS2: Bilateral, regional and global groupings and agreements involving India and/or affecting India’s interests.)

Context:

  • Recently, the annual Group of Seven (G7) Summit was hosted by Japan in Hiroshima.

Regulating artificial intelligence:

  • The G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).
  • The ministerial declaration of the G7 Digital and Tech Ministers discussed “responsible AI” and global AI governance “to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits for all brought by AI technologies”.
  • Even as the G7 countries are using such fora to deliberate on AI regulation, they are acting on their own instead of waiting for the outcomes of the HAP.

About the Hiroshima AI Process:

  • Although G7 leaders were engaged with other issues such as the war in Ukraine, economic security, supply-chain disruptions, and nuclear disarmament, the communiqué accorded more importance to AI than the technology has ever received in such a forum.
  • It said that the G7 is determined to work with others to “advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values”.
  • G7 ministers were tasked with establishing the Hiroshima AI Process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and the Global Partnership on AI (GPAI), for discussions on generative AI.
  • The HAP can develop a common guideline for G7 countries that permits the use of copyrighted materials in datasets for machine learning as ‘fair use’, subject to some conditions.
  • It can also differentiate the use of copyrighted materials for machine learning per se from their other AI-related uses.

Multi-stakeholder approach:

  • An emphasis on freedom, democracy, and human rights, and mentions of “multi-stakeholder international organisations” and “multi-stakeholder processes” indicate that the HAP isn’t expected to address AI regulation from a State-centric perspective. 
  • Instead, it exists to account for the importance of involving multiple stakeholders in various processes and to ensure those processes are fair and transparent.
  • The task before the HAP is challenging, considering the divergence among G7 countries on, among other things, regulating the risks arising from the application of AI.
  • It can help these countries develop a common understanding on some key regulatory issues while ensuring that any disagreement doesn’t result in complete discord.

Conclusion:

  • The establishment of the HAP makes one thing clear: AI governance has become a truly global issue that is likely only to become more contested in the future.
  • It’s also possible that countries that aren’t part of the G7 but want to influence the global governance of AI may launch a process of their own like the HAP.