
UK Government Introduces AI Self Assessment Tool

The UK Government has launched a free self-assessment tool to help businesses responsibly manage their use of artificial intelligence.

The questionnaire is intended for any organization that develops, provides or uses AI-based services as part of its standard operations, but it is aimed primarily at smaller businesses and start-ups. The results will show decision makers the strengths and weaknesses of their AI management systems.

How to use AI Management Essentials

Now available, the self-assessment is one of three parts of the “AI Management Essentials” (AIME) tool. The other two parts are a rating system, which provides an overview of how well the business manages its AI, and a set of action points and recommendations for organizations to consider. Neither has been released yet.

AIME is based on the ISO/IEC 42001 standard, the NIST framework and the EU AI Act. The self-assessment questions cover how the company uses AI, how it manages the associated risks and how transparent it is about this with stakeholders.

SEE: Delaying AI’s rollout in the UK by five years could cost the economy £150+ billion, according to a Microsoft report

“The tool is not designed to evaluate AI products or services themselves, but rather to evaluate the organizational processes in place to enable the responsible development and use of these products,” according to a report from the Department for Science, Innovation and Technology (DSIT).

When completing the self-assessment, input should be obtained from employees with both technical and broader business knowledge, such as a CTO or software engineer and an HR business manager.

The government wants to incorporate the self-assessment into its procurement policies and frameworks to embed assurance in the private sector. It would also like to make it available to public-sector buyers to help them make more informed decisions about AI.

On November 6, the government opened a consultation inviting companies to provide feedback on the self-assessment; the results will be used to refine it. The rating and recommendation parts of the AIME tool will be released after the consultation closes on 29 January 2025.

Self-assessment is one of many planned government initiatives for AI assurance

In a paper published this week, the government said AIME will be one of many resources available on the “AI Assurance Platform” it is seeking to develop. These will help companies conduct impact assessments or review AI data for bias.

The government is also creating a responsible AI terminology tool to define and standardize key AI assurance terms, with the aim of improving communication and cross-border trade, particularly with the US.

“Over time, we will create a set of accessible tools to enable basic good practices for the responsible development and deployment of AI,” the authors wrote.

The government says the UK’s AI assurance market, the sector that provides tools for developing or using AI safely and currently comprises 524 companies, will grow the economy by more than £6.5 billion over the next decade. This growth can be attributed in part to increasing public trust in the technology.

The report adds that the government will partner with the AI Safety Institute, which was launched by former prime minister Rishi Sunak at the AI Safety Summit in November 2023, to promote AI assurance in the country. It will also allocate funds to expand the Systemic Safety Grant programme, which currently has up to £200,000 available for initiatives developing the AI assurance ecosystem.

Legally binding legislation on AI safety is coming in the next year

Meanwhile, at the Financial Times Future of AI Summit on Wednesday, Peter Kyle, the UK’s technology secretary, pledged to make the voluntary agreement on AI safety testing legally binding by implementing an AI Bill within the next year.

November’s AI Safety Summit saw AI companies, including OpenAI, Google DeepMind and Anthropic, voluntarily agree to allow governments to test the safety of their latest AI models before releasing them publicly. Kyle was first reported to have outlined his plans to make the voluntary agreements legally binding to executives from prominent AI companies at a meeting in July.

SEE: OpenAI and Anthropic sign agreements with the US AI Safety Institute and submit frontier models for testing

He also said that the AI Bill will focus on the large, ChatGPT-style foundation models created by a handful of companies, and will transform the AI Safety Institute from a DSIT directorate into an “arm’s length government agency.” Kyle reiterated these points at this week’s summit, according to the FT, stressing that he wants to give the institute “the independence to act fully in the interests of British citizens”.

In addition, he pledged to invest in advanced computing power to support the development of frontier AI models in the UK, in response to criticism over the government scrapping £800m of funding for a supercomputer at Edinburgh University in August.

SEE: UK government announces £32m for AI projects after scrapping funding for supercomputers

Kyle stated that while the government cannot invest £100 billion alone, it will work with private investors to secure the necessary funding for future initiatives.

A year of AI safety legislation in the UK

Plenty of legislation and guidance has been published in the past year committing the UK to developing and using AI responsibly.

On 30 October 2023, the Group of Seven countries, including the UK, created a voluntary AI code of conduct consisting of 11 principles that “promote safe, secure and trustworthy AI worldwide.”

The AI Safety Summit, at which 28 countries pledged to ensure the safe and responsible development and deployment of AI, began just a couple of days later. Later in November, the UK’s National Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and international agencies from 16 other countries released guidelines on ensuring security during the development of new AI models.

SEE: UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety

In March, the G7 countries signed another agreement committing to explore how AI can improve public services and boost economic growth. The agreement also included the joint development of an AI toolkit to ensure the models used are safe and reliable. The following month, the then-Conservative government agreed to work with the United States to develop tests for advanced AI models by signing a memorandum of understanding.

In May, the government released Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, ability to reason and autonomous capabilities. The UK also co-hosted another AI safety summit in Seoul, where it agreed to work with other nations on AI safeguards and announced up to £8.5m in funding for research into protecting society from AI risks.

Then, in September, Britain signed the world’s first international treaty on AI, along with the EU, the US and seven other countries, committing the signatories to adopt or maintain measures that ensure the use of AI is consistent with human rights, democracy and the law.

And it’s not over yet; alongside the AIME tool and report, the government has announced a new AI safety partnership with Singapore through a cooperation agreement. It will also be represented at the first meeting of the international network of AI Safety Institutes in San Francisco later this month.

Chairman of the AI Safety Institute, Ian Hogarth, said: “An effective AI safety strategy requires global collaboration. That’s why we place so much emphasis on the international network of AI Safety Institutes, while strengthening our own research partnerships.”

However, the US has moved further away from AI collaboration, with its latest directive limiting the sharing of AI technology and requiring protections against foreign access to AI resources.
