California Governor Gavin Newsom recently halted controversial legislation that would have required safety testing for some artificial intelligence (AI) models before their public release. One take-away: regulating this high-stakes technology may simply be too big a task for state governments. Governing AI will require a national effort led by decisive leaders—including America’s next president.
I have seen firsthand how one candidate thinks about AI. As senior policy advisor to Vice President Kamala Harris, I saw up close how the Democratic nominee for president approaches this complex issue. She is studious and scrutinizing, thorough and pragmatic, skeptical of dogma, and focused on results. As in everything she does, her primary concern is the real, day-to-day experiences of people.
One moment from my White House tenure illustrates this approach. Months after powerful new tools like ChatGPT had set off seismic waves of angst in Washington and beyond, Harris gathered a small group of consumer advocates and labor leaders in her office to discuss artificial intelligence. She wanted to hear firsthand about how regular people were grappling with the fast-moving technology.
I watched as the vice president surveyed the leaders, asking each what most worried their constituents—workers, patients, older Americans, students, women. They voiced concern about how AI surveillance systems monitored and scored factory workers. How rogue algorithms had kicked sick patients off their health care benefits. How scammers had bilked thousands of dollars from seniors by using tech to impersonate their grandchildren's voices, and how teenage girls had been devastated after their faces were superimposed onto explicit deepfake images.
Harris was well acquainted with these issues. As California’s attorney general, she had established the country’s first privacy protection unit, prosecuting hackers who stole and sold intimate images online, and striking a global agreement with top tech platforms to adopt new rules for protecting users’ personal information. As the meeting wound down, the vice president made a promise. She would do all she could to ensure that pioneering technologies empowered—and did not harm—Americans.
In the months that followed, the vice president worked across the government to tackle the problems raised in that meeting. President Joe Biden developed and then signed an executive order addressing problems with tenant screening algorithms, automated worker surveillance tools, and synthetic content like voice cloning. These initiatives—alongside the administration's efforts to maintain America's AI edge over China and give small businesses the resources to compete in the emerging AI market—responded directly to the concerns voiced in that room, concerns heard firsthand by the person who had been listening: the vice president of the United States.
I thought back to this chapter last month as Harris laid out a detailed agenda to make life better for the U.S. middle class. The daughter of a research scientist, the vice president spent some of her formative years living and working in the Bay Area, the cradle of American innovation. She often shares how these experiences showed her the power of technology to help solve humanity’s most complex problems, from curing stubborn diseases to strengthening America’s national defense. At the same time, she has warned that without clear guardrails, such tools can fall short of their potential.
This approach—an innovation-forward, people-centered balancing act—has come alive in Harris’ record on technology issues as vice president. Last November, she rallied world leaders around a vision for AI that ensures “privacy is protected and people have equal access to opportunity.” The speech followed her behind-the-scenes work with tech CEOs to secure voluntary safety commitments that would not stifle the technology’s extraordinary potential to shape and improve the world around us.
The efforts Harris has championed could serve as the basis of the kind of "safe business environment" for the U.S. AI sector the vice president has promised on the stump—unless a second Trump administration torpedoes them. The former president vowed in his 2024 platform to roll back the safety measures and rigorous national security safeguards the current administration has put in place.
What would he replace them with? Not much, per reporting in The Washington Post. Trump allies have drawn up plans to let AI industry players grade their own homework, paving the way for the kind of technology crisis that would deal a blow to the trust already-skeptical consumers have in these systems. To workers, startup founders, and others navigating the profound implications of this evolving class of technology, Trump’s message is, “You’re on your own.”
We can’t know the course advanced technology will take in the next four years. But as the role of AI in daily life accelerates, America’s next president will grapple with its impact on our safety, security, and social fabric. Working families don’t have to wonder how President Harris will handle these challenges. She has already shown us. She will listen to regular people, and then she will act.
Ami Fields-Meyer served as senior policy advisor to Vice President Kamala Harris at the White House.
The views expressed in this article are the writer’s own.