On Monday, Japan’s largest telecommunications company, Nippon Telegraph and Telephone (NTT), issued a joint proposal with the Yomiuri newspaper group warning that generative artificial intelligence needs to be regulated in order to prevent the collapse of democracy and social order, and offering ideas for managing the new technology.
The proposal claims that AI platforms already damage human dignity by seizing users’ attention with mistake-laden creations, combining AI with the attention economy (AE) in a way that heightens social anxiety and degrades free will. It explains that the attention economy attaches radical headlines to articles and information to attract attention and maximize profits from the ads displayed alongside them. The proposal says we must make sure AI learns to prioritize accuracy over attention-getting.
NTT and Yomiuri believe that, unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars.” They call for Japan to take immediate action to prevent AI from harming elections and compromising national security.
Among other ideas, their proposal promotes an environment of “multiple AIs of various kinds and of equal rank” to create room for discussion rather than a single AI-defined worldview:
“The goal is to achieve a state in which humans can independently compare multiple AIs with different algorithms and have people select among them so their thinking will not be dominated by the worldview presented by specific AIs.”
Japan is the latest country to jump on the AI-control bandwagon.
Last November, the United States and dozens of other countries met at the AI Safety Summit in the United Kingdom to discuss risks posed by AI and legal frameworks to contain them. From the summit’s statement:
“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
Last week, the European Union and the United States agreed to increase cooperation in making sure AI is developed with an emphasis on safety and governance. From their April 5 statement:
“The European Union and the United States reaffirm our commitment to a risk-based approach to artificial intelligence (AI) and to advancing safe, secure, and trustworthy AI technologies. …
“We encourage advanced AI developers in the United States and Europe to further the application of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems which complements our respective governance and regulatory systems.”
The Hiroshima Process arose from the 49th Group of Seven (G7) summit held in Hiroshima, Japan in May 2023. It calls on organizations to abide by 11 actions, “in a manner that is commensurate to the risks.” Among them are the following:
Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
Risks mentioned include lowering barriers of entry in the development of chemical, biological, and nuclear weapons, including for non-state actors (i.e., terrorists); self-replication; disinformation that could threaten democratic values and human rights; and an event leading to “a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain activity, or an entire community.”

Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content. (A minimal sketch of one such mechanism appears after this list.)
Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health, and education.
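As an aside on what “content authentication and provenance mechanisms” can mean in practice, here is a minimal sketch of one such technique, my own illustration rather than anything prescribed by the Code of Conduct: the generator signs each output with a private key, and anyone holding the published public key can later check that a file really came from that generator and has not been altered. It assumes the third-party Python cryptography package.

```python
# Minimal provenance sketch (my illustration, not part of the Code of Conduct):
# the AI provider signs each generated output; anyone with the published
# public key can verify its origin and integrity later.
# Requires the third-party 'cryptography' package (pip install cryptography).

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The provider keeps the private key secret and publishes the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_output(content: bytes) -> bytes:
    """Signature the provider would attach to the output as provenance metadata."""
    return private_key.sign(content)

def verify_output(content: bytes, signature: bytes) -> bool:
    """True if the content was signed by the provider and has not been modified."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

generated = b"An AI-generated paragraph."
sig = sign_output(generated)
print(verify_output(generated, sig))                   # True
print(verify_output(b"A silently edited copy.", sig))  # False
```

A signature like this travels as metadata and is easy to strip, which is one reason watermarking, embedding the mark in the content itself, gets so much attention; neither approach is foolproof.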
Last month, the European Parliament approved the Artificial Intelligence Act to ensure “safety and compliance with fundamental rights, while boosting innovation.” It states that “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.” According to the March 13 press release:
“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
“Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden. …
“Additionally, artificial or manipulated images, audio, or video content (“deepfakes”) need to be clearly labelled as such.”
As governments consider plans to control generative AI, media commentators continue sounding the alarm on its dangers.
Tatum Hunter at the Washington Post says AI is destabilizing the concept of truth. She points to the recent uproar over Catherine, Princess of Wales. Catherine released a Mother’s Day photo that turned out to have been doctored. Later, when she revealed her cancer diagnosis in a video, some viewers claimed it, too, was fake. Hunter writes:
“The episode highlights the growing difficulty of figuring out what’s real and what’s not in the age of AI. Already, former president Donald Trump has falsely accused an unflattering political ad of using AI-generated content, and actual fake images of politicians on both sides of the aisle have circulated widely on social media, destabilizing the concept of truth in the 2024 elections.”
Alfred Lubrano at the Philadelphia Inquirer thinks AI presents dire implications for democracy. This year’s presidential campaign might feature fabricated videos showing election workers destroying ballots, or candidates engaged in scandalous behavior. By Election Day, a deluge of counterfeit images could leave “reality wrecked and the truth nearly unknowable.”
He cites Adav Noti, executive director of the Campaign Legal Center (CLC), a nonpartisan government watchdog group in Washington, DC, warning that “AI provides easy access to new tools to harm our democracy more effectively.”
One recent example happened before the January New Hampshire primary. An AI-generated robocall simulated President Joe Biden’s voice, recommending that voters skip the primary and “save” their votes for the November election. Many could have believed Biden had recorded the message and “become disenfranchised” as a result, Noti said.
Kathleen Hall Jamieson, director of the University of Pennsylvania’s Annenberg Public Policy Center, said that “Deepfakes won’t be easily detected.” Therefore, “we should be suspicious of everything we see.”
Neuroscientist Erik Hoel, author of The Intrinsic Perspective newsletter, wrote in a New York Times guest essay, “AI-Generated Garbage Is Polluting Our Culture,” that the copycat nature of AI is already turning on itself. AI software that crawls the internet for information is feeding on AI-written material, paving the way to “a future of copies of copies of copies” that become “ever more stereotyped and predictable.”
This phenomenon is called “model collapse.” The following excerpt from the abstract of a May 2023 academic paper, “The Curse of Recursion,” by researchers at Cambridge, Oxford, and other universities, introduces it:
“It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. … What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as model collapse …
“We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.”
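The mechanism behind that tail loss is easy to see even without a neural network. The toy sketch below is my own illustration, not code from the paper: a “model” is just a histogram over tokens, and each generation is trained only on samples drawn from the previous generation’s model. Rare tokens drop to zero counts by chance and can never return, so the tails of the distribution vanish generation after generation.

```python
# Toy illustration of "model collapse" (my own sketch, not the paper's code).
# A "model" is a histogram over discrete tokens; each generation is trained
# only on samples drawn from the previous generation's model. Rare tokens
# (the tails) get zero counts by chance and, once gone, never reappear.

import numpy as np

rng = np.random.default_rng(0)

VOCAB = 1000       # number of distinct "tokens"
SAMPLES = 5000     # training-set size per generation
GENERATIONS = 10

# Generation 0: a heavy-tailed, Zipf-like "human" distribution.
probs = 1.0 / np.arange(1, VOCAB + 1)
probs /= probs.sum()

for gen in range(GENERATIONS + 1):
    surviving = np.count_nonzero(probs)
    print(f"generation {gen:2d}: {surviving:4d} of {VOCAB} tokens still have nonzero probability")
    # "Train" the next model purely on data sampled from the current model.
    data = rng.choice(VOCAB, size=SAMPLES, p=probs)
    counts = np.bincount(data, minlength=VOCAB)
    probs = counts / counts.sum()
```

Real model collapse adds approximation error on top of this sampling effect, but the shrinking count of surviving tokens in the printout captures the core of the idea.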
According to Hoel, the internet is already encrusted with AI dream-slop, even before model collapse makes things worse:
“Any viral post on X now almost certainly includes AI-generated replies, from summaries of the original post to reactions written in ChatGPT’s bland Wikipedia-voice, all to farm for follows. Instagram is filling up with AI-generated models, Spotify with AI-generated songs. Publish a book? Soon after, on Amazon there will often appear AI-generated ‘workbooks’ for sale that supposedly accompany your book (which are incorrect in their content; I know because this happened to me).
“Top Google search results are now often AI-generated images or articles. Major media outlets like Sports Illustrated have been creating AI-generated articles attributed to equally fake author profiles. Marketers who sell search engine optimization methods openly brag about using AI to create thousands of spammed articles to steal traffic from competitors.”
Taking a cue from the Hiroshima Process, Hoel calls for watermarking AI-generated outputs to protect “our shared human culture.”
In the comments section of Hoel’s essay, Claude from Washington, DC wrote that everything he reads about AI reminds him of how much he values the tangible joys of a physical life well-lived, including his “wonderful collection of musty, printed books from the pre-Internet age, [and] cell-phone-free time spent with friends and family … and countless other minutiae that become part of my day when I choose to inhabit reality instead of the virtual universe.”
Another commenter wrote: “Suddenly, my high schooler at an academically rigorous school here in New York, is writing an awful lot of in-class essays. Using a pen and paper. I couldn’t be happier.”
It’s Not All AI’s Fault
The AI threats presented by task forces and commentators range from weapons development and war, to misleading the general public through fake content, to degrading the quality of online information.
However, all of these threats existed before ChatGPT’s November 2022 debut on the world stage. If bureaucrats wanted to protect the integrity of democracy and society, they should have started long ago.
Humanity had no trouble making weapons and war prior to ChatGPT. Every war you’ve ever read about started before the large language models now flagged as increasing the odds of war even existed, and many technologies that have nothing to do with consumer AI get conflated with it. Autopilot, self-guided missiles, target tracking, and other mainstays of modern warfare were invented long before everybody began talking about AI. Autonomous drone strikes are not the work of ChatGPT.
Similarly, blaming AI for fake images rings hollow when the ability to doctor images digitally has existed since Photoshop’s debut in 1990. And long before that, physical photo manipulation brought the world William Mumler’s faked photos of ghosts in the 1860s and 1870s, the Cottingley Fairies by Elsie Wright and Frances Griffiths in 1917, and countless other hoaxes, including “proof” of aliens, the Loch Ness Monster, and Bigfoot.
Hoaxes in photography are almost as old as photography itself. According to Guinness World Records, “The first hoax photograph was ‘Self-Portrait as a Drowned Man’ (1840) by Hippolyte Bayard (France).” The earliest surviving photo is believed to be “View from the Window at Le Gras,” taken in 1826 or 1827. Just 13 or 14 years later, the first hoax photo showed up.
Many critics of generative AI have obviously not used it, at least not in a professional capacity. Most of the public, that component of democracy of such concern to AI critics, has not used it either. The result is unwarranted momentum behind vague warnings, leading to hyperbolic statements such as “you can’t believe anything anymore.” Remaining skeptical was a good idea long before generative AI. Like so much else in the AI debate, that’s not new.
For example, the production of compelling fake imagery requires human involvement. There’s no rogue AI zipping around the internet sprinkling fake photos. If you’ve worked with generative AI, you know that it’s actually pretty rudimentary, and that both its written material and imagery require editing and polishing by a human. I acknowledge that this will one day change, but even when it does we won’t have entered new territory, except possibly in volume.
But let’s consider the volume issue.
Is it AI’s fault that people are glued to social media on their phones all day? In material that originated in print and has made its way online, especially material overseen by professional editors, such as books, magazines, and newspapers, the risk of fakery is far lower. The vulnerability of the electorate to mass-distributed fakery lies not with the creation of the fakery but with the digital means of its distribution, and with mass addiction to it. You are unlikely to see a faked photo in the New York Times or Wall Street Journal, each managed by skilled teams of professionals, but highly likely to see one on a social media feed overrun by the general public.
Yet, having failed to control the proliferation of attention-crushing phones and social media, bureaucrats and commentators now abandon that root problem and move on to blaming new tools of creation. The implication is that zoned-out kids and clueless voters are OpenAI’s fault.
Concomitant with the ubiquity of mobile phones and social media is a degradation in online discourse that predates generative AI.
Frankly, I’m surprised that early AI models didn’t suffer something like model collapse from the low quality of many online discussion boards, posts, and other digitized communication, which abandoned grammar, syntax, and accuracy in a bid to suit diminished attention spans. How in the world AI learned from emoji is beyond me, but it did. I trust the industry when it expresses concern about the risk of training new AI on old AI content, but I find most AI-generated content superior to most human-generated online content. The internet masses discovered Hoel’s dream-slop long before ChatGPT arrived on the scene.
Claude from Washington, DC wrote in his comment at the New York Times that he values tangible joys of a physical life well-lived. Good for him, and so do I — and AI’s existence does not impede such joys. ChatGPT does not prevent me from reading printed books, hiking, or spending time with the shrinking list of people I know who were not taken hostage by their phones. It’s not the AI, folks. It’s the phones.
From Johann Hari’s book Stolen Focus, published in pre-ChatGPT January 2022, comes the following indictment of our phonified social mediascape and its vanishing attention spans:
“The sensation of being alive in the early twenty-first century consisted of the sense that our ability to pay attention — to focus — was cracking and breaking. I could feel it happen to me — I would buy piles of books, and I would glimpse them guiltily from the corner of my eye as I sent, I told myself, just one more tweet. … I wondered if the motto for our era should be: I tried to live, but I got distracted. …
“It felt like our civilization had been covered with itching powder, and we spent our time twitching and twerking our minds, unable to simply give attention to things that matter. Activities that require longer forms of focus — like reading a book — have been in free fall for years. … The truth is that you are living in a system that is pouring acid on your attention every day, and then you are being told to blame yourself and to fiddle with your own habits while the world’s attention burns.”
While this rages, hot as ever, we’re now supposed to switch focus yet again and blame AI as the real threat to society.
On the democracy front, election misinformation and disinformation are probably as old as voting itself. Fact-checking arose in the 1850s when sensationalist newspapers created a need for it.
Nobody would argue in favor of confusing information, but it’s unfair to lay blame on AI for an issue that humanity has wrestled with since time immemorial. Developed properly, AI could improve the quality of the information landscape, and we should hope that regulatory efforts guide it in that direction. Unfortunately, bureaucratic history offers a poor track record, and early guardrails on generative AI have been unhelpful.
Consider Google’s refusal to let its AI effort go where the data tell it. So obsessed is the company with political correctness that it turbocharged its AI’s diversity directive until it began churning out images of America’s founding fathers as black women, and ancient Greek warriors as Asians. When users entered prompts to create AI-generated images of people, Google’s Gemini showed results featuring people of color whether it was appropriate or not.
I’m not worried that AI is going to cause great harm, but that well-meaning guardians will hobble it with regulations that hamper its capabilities. Early indications are not good. Do the guardians want AI to improve so that it makes fewer mistakes, or limit its improvement so that its images are not believable? The best path may simply be to leave it alone to develop as it will, interpreting what it finds in its own way.
Meanwhile, it’s easy to avoid whatever dangers current AI platforms pose.
Nothing prevents people from logging out of social media, putting down their phones, and engaging with the physical world — but most don’t want to do so. They’re not going to return to printed matter and a slower pace of life, and that’s not AI’s fault. We lost the masses to phone fog many years ago.
As for watermarking AI-created content, this seemingly sensible requirement may not be as wonderful an idea as it appears to be at first blush. Is it not an invasion of my privacy to be forced to reveal the tools I use to do my work? I’m not required to disclose my preferred word processor, image editor, audio mixer, hardware, dictionary, thesaurus, and search engine, so why should I be required to disclose whether I used AI for part of my work process and, if so, the specific platform?
The current best practice for AI work is a hybrid model. Whether for coding, written work, or image creation, a human working with AI creates top-quality results. In most cases, the work involves tools beyond the AI platform. Should the hard work of creators honing their AI skills be stigmatized, while creators using pre-AI tools receive preferential treatment? A focus on the quality of the end result seems wiser.
In its current iteration, generative AI is one tool among many available to creators, and a good one. Crippling it with directives from people who know little about the creative process, because bad people exist and will use new technology for bad purposes, is not the way to a better future through AI.
About the only useful and evergreen message to humanity is this: Stay alert. In all things, stay alert.
Dishonest parties have tried tricking people and stealing from them since the dawn of time, and they’re still at it, only the tools have changed. It doesn’t matter how a misleading article or image was created. What matters is your ability to shield yourself from its influence.
Mixed Motives
What if, instead of AI attempting to mislead you, it’s the would-be controllers of AI attempting to mislead you? What if they don’t actually have the best interest of society in mind, but their own self-interest?
It comes as no surprise that the government leading the effort to ring-fence AI is the Chinese Communist Party.
According to Qiheng Chen at the Center for China Analysis, writing for the Asia Society Policy Institute, generative AI’s “ability to generate and disseminate information threatens the Chinese Communist Party’s (CCP) control over information” and China’s regulatory measures “specify that generative AI products and services must not contain information contrary to ‘core socialist values,’ and public-facing generative AI products and services must undergo a security review and register the underlying algorithms with the CAC [Cyberspace Administration of China].”
As China sets the tone for government AI crackdowns, private enterprise has discovered alarm bells to be an effective way to feather a nest in the AI tree.
NTT warns that unfettered AI is a threat to our way of life, and recommends creating an environment running “multiple AIs of various kinds and of equal rank” — just as it unveils its own technology for managing such an environment.
The company’s plan for an “AI constellation” is called IOWN, for Innovative Optical and Wireless Network. It hopes to link AIs to each other so they can “learn from and monitor each other,” which could be good or bad, depending on what the monitoring achieves.
If AIs share information so that they all get better, fine, although one wonders how long it would take before every AI knew everything and the constellation became indistinguishable from its constituents. Still, we can allow that the process could improve results. However, if one AI bats down the work of another due to imposed rules against free data interpretation, the checks and balances could degrade results. An AI whose job is to tell another “you can’t say that” risks injecting human political bias into what could have been an unencumbered look at our world.
My job here, however, is not to judge the merits of NTT’s proposed framework, but to notice that the solution to its AI warning is one of its own AI-related products. A publicity blitz put this product in front of every lawmaker in Japan, no doubt hoping to garner taxpayer-funded backing for this new technology deemed vital to social cohesion.
Here’s another one. When Elon Musk and other tech executives signed a letter in March 2023 calling for a pause in AI development, was it for the good of society or the good of their companies as they tried catching up to leader OpenAI?
OpenAI pushed back, with CEO Sam Altman saying that a call for safety in AI development was old news to his company, which had been “talking about these issues the loudest, with the most intensity, for the longest.”
Indeed it maintains a Preparedness Framework to “provide for a spectrum of actions to protect against catastrophic outcomes,” according to information it presented to the UK AI Safety Summit last October. From that document:
“We are creating a dedicated new team called Preparedness to identify, track, and prepare for these risks. We intend to track frontier risks, including cybersecurity, CBRN [chemical, biological, radiological, and nuclear capabilities], persuasion, and autonomous replication and adaptation and share actions to protect against the impacts of catastrophic risk.”
It invited domain experts to join its Red Teaming Network to help develop taxonomies of risk and evaluate potentially harmful capabilities in new systems.
The company understands the technology it’s developing, believes in its potential to improve the world, and is managing risks responsibly. It is fulfilling former Google executive chairman Eric Schmidt’s vision that companies should define their own reasonable boundaries, not least because “There’s no way a non-industry person can understand what is possible. … There’s no one in government who can get it [AI oversight] right.”
Not that ignorance will prevent politicians from acting as bulls in the AI china shop.
Fearmongering about AI makes politicians look relevant, and proposing anything for the safety of society is good for their careers. They may sense that the rise of AI could reveal their rent-seeking behavior as the actual threat to society, and take defensive measures. How convenient to teach voters that they can’t trust AI, in advance of AI exposing the depth of special-interest influence in politics.
AI regulation proponent Gary Marcus mentioned in Politico yesterday that governments could lose power to AI companies.
True, but maybe it would be an improvement. Lord knows there’s plenty of room for it. Among his suggestions for how to protect Americans against AI is creating a federal agency to control it. That undoubtedly brought three cheers from Washington’s bureaucracy, and a fist bump from the Cyberspace Administration of China.
In this year’s presidential election, how much harm could AI do when nobody likes either candidate anyway? It’s not ChatGPT’s fault that the best the nation can produce is candidates colloquially dubbed “Bad” and “Worse.” Worrying about the harms of potential fake photos instead of the candidates on offer misses the forest for the trees. The bigger threat to democracy is not the tools used by America’s political tribes to influence the vote for which previous administration gets a repeat, it’s the tribalism itself.
Speaking of supposedly fake photos, remember the Catherine, Princess of Wales controversy cited above? Tatum Hunter at the Washington Post wrote that it highlights the growing difficulty of figuring out what’s real and what’s not in the age of AI.
Well, false claims of false information are partly to blame. It turns out that Catherine’s cancer diagnosis video was not faked, despite allegations to the contrary. From Hunter’s report:
“Wael Abd-Almageed, a professor of AI at Clemson University who develops deepfake detection software, said he and a student ran [Catherine’s cancer diagnosis revelation] video through their detector and found no indications of AI content. Abd-Almageed slowed the video down to examine it manually, again finding no evidence of AI tampering. If details such as her ring appear fuzzy, he said, it’s because of motion blur and the video’s compression.”
The co-founder of generative AI video-effects company Pinscreen agreed that the video appears to be authentic, noting bugs flying in front of Catherine’s face and the subtle swaying of yellow flowers in the background.
Crying wolf is no way to guide AI toward improving humanity. Too much of the criticism of this amazing technology puts self-interest ahead of societal interest, and we should remain on guard against it.
Sources
The Wall Street Journal
‘Social Order Could Collapse’ in AI Era, Two Top Japan Companies Say
Japan News by The Yomiuri Shimbun
Joint Proposal / Proper Controls Essential for Generative AI; Joint Proposal Discusses Measures to Strike Balance Between Regulation, Development
Yomiuri Online
Full text of the Yomiuri Shimbun/NTT ‘Joint Proposal on the Future of Generative AI’
(In Japanese. I’ve translated key excerpts in the report above.)
The Wall Street Journal
‘Take Science Fiction Seriously’: World Leaders Sound Alarm on AI
European Commission
Joint Statement EU-US Trade and Technology Council of 4-5 April 2024 in Leuven, Belgium
Ministry of Foreign Affairs of Japan
Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems
(8-page PDF, in English)
European Parliament
Artificial Intelligence Act: MEPs adopt landmark law
The Washington Post
Princess Catherine cancer video spawns fresh round of AI conspiracies
The Philadelphia Inquirer
How voters can avoid deepfakes and AI-generated misinformation in the 2024 presidential election
The New York Times
AI-Generated Garbage Is Polluting Our Culture
Cornell University arXiv of Scholarly Articles
The Curse of Recursion: Training on Generated Data Makes Models Forget
(18-page PDF)
Guinness World Records
First hoax photograph
Wikipedia
Fact-checking
Asia Society Policy Institute
China’s Emerging Approach to Regulating General-Purpose Artificial Intelligence
Yomiuri Online
Interview with NTT chairman Jun Sawada about NTT’s AI proposal
The Wall Street Journal
Elon Musk, Other AI Experts Call for Pause in Technology’s Development
Amazon.com
Stolen Focus
2022 book by Johann Hari
OpenAI
OpenAI’s approach to frontier risk
OpenAI
OpenAI Red Teaming Network
Meet the Press
Former Google CEO Eric Schmidt saying AI companies should establish their own guardrails
Politico
Opinion | How to Protect Americans From the Many Growing Threats of AI