By Khari Johnson, CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.
California’s first-in-the-nation privacy agency has backed away from an attempt to regulate artificial intelligence and other forms of computer automation.
The California Privacy Protection Agency had been under pressure to pull back from the draft rules. Business groups, legislators, and Governor Gavin Newsom said they would be expensive for businesses, potentially stifling innovation and usurping the authority of the Legislature, where proposed AI bills have proliferated. With a unanimous vote last week, the agency’s board gutted the rules, which would have imposed safeguards on AI-like systems.
Agency staff estimate that the changes reduce businesses’ compliance costs from $834 million to $143 million in the first year of implementation, and predict that 90% of the businesses initially required to comply will no longer have to do so.
The retreat marks a significant turn in an ongoing and heated debate over the board’s role. Created after state privacy legislation passed by legislators in 2018 and a ballot measure approved by voters in 2020, the agency is the only body of its kind in the United States.
The draft rules have been in the works for more than three years, but they came under review after a series of changes at the agency in recent months, including the departure of two leaders regarded as pro-consumer: Vinhcent Le, the board member who led the AI rule-drafting process, and Ashkan Soltani, the agency’s executive director.
Consumer advocacy groups worry that the recent shifts mean the agency is deferring excessively to business, especially the tech giants.
Changes approved last week mean that the agency’s rules no longer regulate behavioral advertising, which targets people based on profiles built from their online activity and personal information. Under a previous draft of the rules, businesses would have had to conduct risk assessments before using or deploying such advertising.
Behavioral advertising is used by companies such as Google, Meta, and TikTok and their business customers. It can perpetuate inequality, pose a threat to national security, and place children at risk.
The revised draft rules also eliminate use of the phrase “artificial intelligence” and narrow the scope of business activity regulated as “automated decisionmaking,” which requires risk assessments of personal-information processing and protective measures to mitigate those risks.
Proponents of stronger rules say the narrower definition of “automated decisionmaking” allows employers and corporations to sidestep the rules by claiming that an algorithmic tool merely advises the humans who make the final decisions.
“My only concern is that if we are just calling on industry to identify what a risk assessment looks like in practice, we could reach a position where we are writing the exam by which they are graded,” board member Brandie Nonnecke said during the meeting.
“The CPPA is charged with protecting the privacy of Californians’ data, and watering down the rules it proposed in favor of Big Tech does nothing to accomplish that goal,” Sacha Haworth, executive director of the Tech Oversight Project, an advocacy group, said in a statement. “By the time these rules are published, what will be the point?”
The draft rules retain some protections for workers and students in cases where a fully automated system determines outcomes in financial and lending services, housing, and health care without a human in the decisionmaking loop.
Businesses and the organizations that represent them made up 90% of the comments on the draft rules before the agency held listening sessions across the state last year, Soltani said at the time.
In April, after pressure from business groups and legislators to weaken the rules, a coalition of nearly 30 unions, digital rights groups, and privacy groups wrote a joint letter urging the agency to continue its work to regulate AI and protect consumers, students, and workers.
“With each iteration, they have become weaker and weaker.”

Kara Williams, law fellow at the Electronic Privacy Information Center, on the draft AI rules from California’s privacy regulator
About a week later, Newsom intervened, sending a letter to the agency stating that he agreed with critics that the rules exceeded the agency’s authority and supporting a proposal to scale them back.
Newsom cited Proposition 24, the 2020 ballot measure that paved the way for the agency. “The agency can fulfill its obligations to issue the regulations required by Proposition 24 without venturing into areas outside its mandate,” the governor wrote.
The initial draft rules were strong, said Kara Williams, a law fellow at the Electronic Privacy Information Center, an advocacy group. On a phone call before the vote, she added that “with each iteration they have become weaker and weaker, and that seems to correlate pretty directly with pressure from the technology industry and trade association groups for these regulations to be less and less protective of consumers.”
The public has until June 2 to comment on the changes to the draft rules. Companies must comply with the automated decisionmaking rules by 2027.
Before voting to weaken its own regulations last week, at the same meeting the agency’s board voted to throw its support behind four bills in the California Legislature, including one that protects the privacy of people who connect computing devices to their brains and another that prohibits the collection of location data without permission.
This article was originally published on CalMatters and was republished under a Creative Commons Attribution-NonCommercial-NoDerivatives license.