
Artificial intelligence systems posing “unacceptable risk” are now banned in the European Union


As of Sunday in the European Union, the bloc’s regulators can prohibit the use of artificial intelligence systems they deem to pose an “unacceptable risk” or harm.

February 2 is the first compliance deadline for the AI Act, the comprehensive regulatory framework for artificial intelligence that the European Parliament approved last March after years of development. The Act officially entered into force on August 1; what follows now is the first of the compliance deadlines.

The details are set out in Article 5, but broadly, the Act is designed to cover a wide range of use cases where AI might appear and interact with individuals, from consumer applications to physical environments.

Under the bloc’s approach, there are four broad risk levels: (1) minimal risk (e.g., email spam filters), which will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, which will have light-touch regulatory oversight; (3) high risk, such as AI for healthcare recommendations, which will face heavy regulatory oversight; and (4) unacceptable-risk applications, the focus of this month’s compliance requirements, which will be banned entirely.

Some of the unacceptable activities include:

  • AI used for social scoring (for example, building risk profiles based on a person’s behavior).
  • AI that manipulates a person’s decisions subliminally or deceptively.
  • AI that exploits vulnerabilities such as age, disability, or socioeconomic status.
  • AI that attempts to predict whether people will commit crimes based on their appearance.
  • AI that uses biometrics to infer a person’s characteristics, such as their sexual orientation.
  • AI that collects “real-time” biometric data in public places for law enforcement purposes.
  • AI that tries to infer people’s emotions at work or school.
  • AI that creates, or expands, facial recognition databases by scraping images online or from security cameras.

Companies found using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (about $36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater.
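To make the penalty math concrete, the cap is the greater of a flat €35 million and 7% of prior-year revenue. A minimal sketch of that rule follows; the function name and the revenue figures in the examples are illustrative, not drawn from the Act beyond those two thresholds:

```python
def max_fine_eur(prior_year_revenue_eur: int) -> int:
    """Upper bound on a fine for deploying a prohibited AI system under
    the EU AI Act: the greater of EUR 35 million and 7% of the company's
    annual revenue from the previous fiscal year."""
    FLAT_CAP = 35_000_000
    revenue_share = prior_year_revenue_eur * 7 // 100  # 7%, in whole euros
    return max(FLAT_CAP, revenue_share)

# For a firm with EUR 200M in revenue, 7% is EUR 14M, so the flat cap binds.
print(max_fine_eur(200_000_000))    # 35000000
# For a firm with EUR 1B in revenue, 7% (EUR 70M) exceeds the flat cap.
print(max_fine_eur(1_000_000_000))  # 70000000
```

In other words, the flat cap dominates for companies with less than €500 million in annual revenue; above that, the percentage-based ceiling takes over.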

Rob Sumroy, head of technology at the British law firm Slaughter and May, discussed the timeline in an interview with TechCrunch.

“Organizations are expected to be fully compliant by February 2, but … the next big deadline companies need to be aware of is in August,” Sumroy said. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”

Initial pledges

The February 2 deadline is in some ways a formality.

Last September, more than 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, the signatories, which included Amazon, Google, and OpenAI, committed to identifying AI systems likely to be classified as high-risk under the AI Act.

Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act’s harshest critics, also chose not to sign.

That doesn’t mean Apple, Meta, Mistral, or others that didn’t sign the Pact won’t meet their obligations, including the ban on unacceptably risky systems. Sumroy notes that, given the nature of the prohibited use cases, most companies won’t be engaging in those practices anyway.

“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time, and, crucially, whether they will give organizations clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for developers.”

Possible exemptions

There are exceptions to several of the AI Act’s prohibitions.

For example, the Act allows law enforcement to use certain systems that collect biometrics in public places if those systems help prevent an imminent threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement cannot make a decision “resulting in a negative legal impact” on a person based solely on those systems’ outputs.

The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there is a “medical or safety” justification, such as systems designed for therapeutic use.

The European Commission, the EU’s executive branch, said it would release additional guidance in “early 2025,” following a consultation with stakeholders in November. However, that guidance has yet to be published.

Sumroy said it is also unclear how other laws on the books will interact with the AI Act’s prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.

“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as the GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges, particularly around overlapping incident notification requirements.”
