NARASIMHAN KASTHURI
SAN FRANCISCO / WASHINGTON, NOV 1
US President Joe Biden on Monday signed the much-awaited executive order on artificial intelligence (AI). The order is aimed at reining in the emerging technology and managing its risks. It also brings hope for skilled immigrants: it directs authorities to expand the ability of highly skilled immigrants and non-immigrants with expertise in critical areas to study, stay, and work in the US by modernizing and streamlining visa criteria, interviews, and reviews.
Happening Now: President Biden delivers remarks on his new landmark Executive Order ensuring America leads the way in seizing the promise and managing the risks of AI. https://t.co/yJvn6G2hBf
— The White House (@WhiteHouse) October 30, 2023
- The actions taken on Monday support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the UN
- President Biden recently signed an Executive Order directing federal agencies to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination
- The order also brings hope for skilled immigrants, directing authorities to modernize and streamline visa criteria, interviews, and reviews so that highly skilled immigrants and non-immigrants with expertise in critical areas can more easily study, stay, and work in the US
- Earlier this year, the National Science Foundation announced a $140 million investment to establish seven new National AI Research Institutes, bringing the total to 25 institutions across the country
- The Biden-Harris Administration has also released a National AI R&D Strategic Plan to advance responsible AI
- The order focuses on areas such as safety, privacy, protecting workers, and promoting innovation
The order's focus spans safety, privacy, worker protections, and innovation. It sets new safety standards, including a requirement that companies developing models that pose a serious risk to national security, economic security, or public health notify the federal government when training the model and share the results of all safety tests. The Commerce Department will also develop guidance for content authentication and watermarking to label AI-generated content. The order also paves the way for further federal investment in the technology.
Here's how our landmark Executive Order on Artificial Intelligence ensures that everyone can safely benefit from AI: pic.twitter.com/yPzIpgHmAW
— Vice President Kamala Harris (@VP) October 31, 2023
“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security, and trust. It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” White House deputy chief of staff Bruce Reed said in a statement.
The White House breaks the key components of the executive order into eight parts:
- Creating new safety and security standards for AI, including by requiring some AI companies to share safety test results with the federal government, directing the Commerce Department to create guidance for AI watermarking, and establishing a cybersecurity program to develop AI tools that help identify and fix flaws in critical software.
- Protecting consumer privacy, including by creating guidelines that agencies can use to evaluate privacy techniques used in AI.
- Advancing equity and civil rights by providing guidance to landlords and federal contractors to help avoid AI algorithms furthering discrimination, and creating best practices on the appropriate role of AI in the justice system, including when it’s used in sentencing, risk assessments and crime forecasting.
- Protecting consumers overall by directing the Department of Health and Human Services to create a program to evaluate potentially harmful AI-related health-care practices and creating resources on how educators can responsibly use AI tools.
- Supporting workers by producing a report on the potential labor market implications of AI and studying the ways the federal government could support workers affected by a disruption to the labor market.
- Promoting innovation and competition by expanding grants for AI research in areas such as climate change and modernizing the criteria for highly skilled immigrant workers with key expertise to stay in the US.
- Working with international partners to implement AI standards around the world.
- Developing guidance for federal agencies’ use and procurement of AI and speeding up the government’s hiring of workers skilled in the field.
To ensure the US continues to lead in innovation and competition, the Executive Order will catalyze AI research across the country through a pilot of the National AI Research Resource, a tool that will give AI researchers and students access to key AI resources and data, and through expanded grants for AI research in vital areas like healthcare and climate change.
According to the order, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months, engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
The actions taken on Monday support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.
G7 Countries want AI companies to voluntarily commit to testing
G7 governments want AI companies to voluntarily commit to testing their most advanced models for a range of potential risks, boosting their cybersecurity defenses and using watermarks for AI-generated content.
Leaders of the Group of Seven (G7) countries, made up of Canada, France, Germany, Italy, Japan, the UK and the US, as well as the EU, on Monday published guiding principles and an 11-point code of conduct to “promote safe, secure, and trustworthy AI worldwide,” aimed at companies developing the most advanced AI systems.
The code is the first concrete guidance on what AI companies in G7 countries will be encouraged to do. It urges companies, including startups, to assess and tackle risks emerging from their AI models, and identify patterns of misuse that could emerge once consumers start using their AI products. The G7 governments are trying to persuade AI companies to commit to the code, but a list of signatories has not yet been released.
This comes just a day before Britain hosts government leaders and AI industry representatives from the G7 countries and other nations for an AI Safety Summit at Bletchley Park.
(Narasimhan Kasthuri is a veteran journalist who covered business, IT and more for The Hindu and Financial Express. Now based on the US West Coast, he covers technology for NE. He can be contacted at narasimhan.kasturi@yahoo.com, Mobile: +1 (650) 793-0056)