California AI education guidelines arrive amid fourth-grade scandal


By Harry Johnson, CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

In December, fourth-graders in a class at Delevan Drive Elementary School in Los Angeles were given a homework assignment: Write a book report about Pippi Longstocking, then draw or use artificial intelligence to make a book cover.

When Jody Hughes’ daughter asked Adobe Express for Education, graphic design software provided by her teacher, to generate an image of “a red-haired girl in long stockings with straight braids sticking out,” nothing resembling the Swedish children’s book character she had just described came out. Instead, the software’s recently added artificial intelligence feature generated sexualized images of women in underwear. Hughes quickly contacted other parents, who said they were able to replicate similar results on their own school-issued Chromebooks. Days later, the parent group Schools Beyond Screens told the Los Angeles school board that it opposes further use of Adobe’s software.

The incident raised questions not only about the Los Angeles school district’s use of a particular AI product, but also about guidelines that state administrators provide to schools across California on how to safely embrace technology. A few weeks after the incident, the state Department of Education released a new edition of the guidelines, which it had been working on for several months with the help of a group of 50 teachers, administrators and experts. The revision came in response to instructions from the legislature, which passed two laws in 2024, essentially telling the department to deal with the rapid spread of AI among students, teachers and administrators.

Critics wonder whether the guidelines would help avoid what parents call Pippigate. The dispute, they say, provides evidence that districts, schools and parents, who often don’t have the time or resources to ensure software tools don’t produce harmful results, need more support from the state. The guidelines, they add, are also too vague in places and don’t do enough to specify safeguards for how teachers use AI in the classroom.

Problems with the guidelines call into question whether the department can effectively respond to lawmakers’ instructions regarding a technology that the guidelines themselves say can leave children isolated and with narrowed perspectives.

With the rapid penetration of AI into society, effective management of the technology has become a pressing issue. Although OpenAI’s ChatGPT popularized generative AI only three years ago, studies show that most teachers and students across the country are now using the technology in some capacity.

While AI can help save teachers time, personalize learning, and support students who do not speak English or who have disabilities, it can also inaccurately evaluate their work and generate images that perpetuate or reinforce stereotypes, including sexualized images of women, especially women of color. Most of California’s K-12 students are people of color. Since the rapid adoption of generative AI began, teachers who spoke with CalMatters have felt both a need to prepare their students for a future in which AI is ubiquitous and a fear that AI tools could enable cheating on tests and lead to deficiencies in reasoning, logic and critical thinking.

“Teachers have a narrow window to set norms before they harden,” said LaShawn Chatmon, CEO of the National Equity Project, an Oakland group that helps teachers achieve more equitable outcomes. “Local education agencies that take advantage of this opportunity to co-design instruction and policy with students and families can help change who decides the role of AI in our learning and lives.”

A district spokesperson told CalMatters that the images generated by the AI model did not meet district standards and that “we are working with Adobe to address the issue.” Adobe vice president of education Charlie Miller said the company implemented changes to address the issue within 24 hours of hearing about the incident. Miller did not respond to questions about how the tool was vetted before deployment.

As a result of his child’s experience, Hughes believes that students should not be told to use text-to-image generators for homework. But he sees no attempt to place such limits on the use of technology in the Department of Education’s guidelines.

“These tech companies are offering kids things that haven’t been fully tested,” he said. “I don’t know where to draw the line, but elementary school is too young, because it can get very nasty very quickly, as we saw with the Grok thing,” he added, citing recent abuse of the Grok AI system to remove clothing from images of women and children without consent.

Issues with AI guidelines

The guide provides a list of unacceptable uses of AI by students, such as plagiarism, and urges educators to integrate real-world scenarios and case studies into discussions to help students apply ethical principles to practical situations. It also says students should be taught to “think critically and creatively” about the “benefits and challenges” of AI tools.

Julie Flapan, director of the Computer Science Equity Project at UCLA’s Center X, said the Pippi Longstocking incident is reminiscent of a 2024 study that found young Black and Latino people were more likely to use generative AI than young white people. That data, combined with the historical disparity in access to computer science education, means, she said, that some parents and students will need help thinking critically about AI.

“These tech companies are offering kids things that haven’t been fully tested.”

Jody Hughes, parent of a student at Delevan Drive Elementary School, Los Angeles

“We often think of technological advances as ways to level the playing field,” she said. “But the reality is that we know they are exacerbating inequalities.”

Flapan said it makes sense for the guidelines to call for critical thinking and vetting of AI tools before use, and to encourage educational leaders to engage communities in decision-making. But, she added, the manual doesn’t detail how to do that.

Charles Logan, a former teacher who now works in a responsible technology lab at Northwestern University, said the guidelines fall short because they don’t offer teachers and parents clear guidance on how they can opt out of using the technology. A Brookings Institution study published in January, based on interviews with students, teachers and administrators in 50 countries, concluded that the risks of AI in classrooms currently outweigh the benefits and could “undermine children’s basic development.”

Mark Johnson, head of government affairs at Code.org, praised the guidelines but said the state should offer more AI educational support to educators and set AI and computer science skill requirements for graduation. A recent report by Johnson found that four states have adopted such graduation requirements since the release of AI guidelines.

Kathryn Goyette, who served as the Department of Education’s computer science coordinator until January, pointed, when asked about the Longstocking incident, to parts of the guidelines emphasizing the importance of engaging families, communities and school board members when evaluating AI tools. She also said critical thinking is important to prevent such outcomes, pointing to guidelines that prompt administrators to consider potential harms before use.

Additional guidance on how to put the recently released guidelines into practice is on the way: the department’s artificial intelligence task force will present specific policy recommendations based on the guidelines by July.

The pressure of the AI inevitability narrative

The latest version of the California Department of Education’s AI guidelines comes as local education agencies move away from general AI bans considered after the 2022 release of OpenAI’s ChatGPT. Instead, districts are moving toward deciding when and how students and teachers can use the technology. These local decisions will be critical to how the technology is actually used in schools, since the state cannot require school districts to adopt its guidelines.

Even California’s largest school districts can run into serious problems implementing AI. In June 2024, the head of Los Angeles Unified promised the best AI teaching assistant in the world but had to pull it weeks later. Soon after, news broke that a majority of board members in the San Diego Unified School District, the state’s second-largest district, had signed off on a curriculum they didn’t know included an artificial intelligence assessment tool.

The move to state and local AI guidelines, rather than bans, reflects a broader sense of inevitability in the state surrounding the adoption of the technology. In his October veto of a bill that would have prevented the use of some chatbots by minors, Gov. Gavin Newsom said that artificial intelligence is already shaping the world and that “we cannot prepare our youth for a future in which artificial intelligence is ubiquitous by preventing them from using these tools altogether.”

Logan, who recently advised San Diego parents on how to resist and reject the use of AI in classrooms, opposes this idea. He says the California Department of Education’s guidelines should address situations where parents would like their children to avoid using AI altogether.

“It’s surprising that the guidance wants to make proficient AI users out of kindergartners, and there was no room to say ‘no’ or opt out,” he said in a phone call.

The state AI guidelines join a series of efforts to protect children from artificial intelligence, including bills now before the Legislature that seek to put a moratorium on toys with companion chatbots and to protect student privacy in the age of AI. Common Sense Media and OpenAI are working to get a child online safety initiative on the November election ballot.

This article was originally published on CalMatters and is republished under Creative Commons Attribution-NonCommercial-No Derivatives license.
