Such a platform would be similar to the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), or the Intergovernmental Panel on Climate Change (IPCC). He also outlined five goals and objectives for such a body:

– helping countries maximize the benefits of AI;

– eliminating existing and future threats;

– developing and implementing international monitoring and control mechanisms;

– collecting expert data and communicating it to the global community;

– studying AI to “accelerate sustainable development”.

In June 2023, he also noted that “scientists and experts called the world to action, declaring artificial intelligence an existential threat to humanity on a par with the risk of nuclear war.”

And even earlier, on September 15, 2021, the UN High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on the use of several systems based on artificial intelligence algorithms.

OpenAI

At the end of 2023, OpenAI (the developer of ChatGPT) announced a strategy for preventing the potential dangers of AI. Particular attention is paid to heading off risks that arise as the technology develops.

The team responsible for this strategy will work together with the following groups:

– Safety Systems, which addresses existing issues such as preventing racial bias in AI;

– Superalignment, which studies how highly capable AI works and how it will behave once it surpasses human intelligence.

The OpenAI safety concept also includes risk assessment in the following categories: cybersecurity; nuclear, chemical, and biological threats; persuasion; and model autonomy.
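To make this concrete, such an assessment can be pictured as a per-category scorecard. The Python sketch below is only an illustration: the category names follow the list above, but the RiskLevel scale and the rule that a model is rated by its worst category are assumptions for this sketch, not OpenAI's published implementation.

from enum import IntEnum

class RiskLevel(IntEnum):
    """Assumed ordinal scale; higher means more dangerous."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical scorecard: one risk level per tracked category.
scorecard = {
    "cybersecurity": RiskLevel.LOW,
    "cbrn": RiskLevel.MEDIUM,  # nuclear, chemical, biological threats
    "persuasion": RiskLevel.MEDIUM,
    "model_autonomy": RiskLevel.LOW,
}

def overall_risk(scores: dict[str, RiskLevel]) -> RiskLevel:
    """Assumed aggregation rule: the model is rated by its worst category."""
    return max(scores.values())

print(overall_risk(scorecard).name)  # MEDIUM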

European Union

In the spring of 2023, the European Parliament gave preliminary approval to a law known as the AI Act, which sets out rules and requirements for developers of artificial intelligence models.

It takes a risk-based approach to AI: the law defines the obligations of AI developers and users according to the level of risk a given AI system poses.

In total, there are four categories of AI systems: those with minimal, limited, high, and unacceptable risk.

Minimal risk – the results of the AI's work are predictable and cannot harm users in any way. Businesses and users will be able to use such systems freely. Examples include spam filters and video games.

Limited risk – various chatbots and generative tools, such as ChatGPT and Midjourney. To be offered in the EU, their algorithms will have to pass a security check. They will also be subject to specific transparency obligations so that users can make informed decisions, know that they are interacting with a machine, and opt out at will.

High risk – specialized AI systems that directly affect people: for example, solutions in medicine, education and vocational training, employment and personnel management, access to essential private and public services and benefits, law enforcement, migration and border control, and the administration of justice.
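To make the tiering concrete, here is a minimal Python sketch that models the four categories as a simple lookup. It is purely illustrative: the tier assignments mirror the examples above, while the names RiskTier and EXAMPLES are invented for this sketch and are not part of the AI Act or any library.

from enum import Enum

class RiskTier(Enum):
    """The four risk categories named in the AI Act."""
    MINIMAL = 1       # e.g. spam filters, video games: free to use
    LIMITED = 2       # e.g. chatbots: security check plus transparency duties
    HIGH = 3          # e.g. medicine, hiring, border control
    UNACCEPTABLE = 4  # prohibited; not detailed in this excerpt

# Illustrative mapping based on the examples in the text. Under the AI Act,
# classification depends on a system's intended use, not its product name.
EXAMPLES = {
    "spam_filter": RiskTier.MINIMAL,
    "video_game": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "hiring_screen": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
}

print(EXAMPLES["chatbot"].name)  # LIMITED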
