Avoiding the next Chornobyl or Vioxx: Why an AI Regulatory Body is Essential for Public Safety

As AI technology advances, its applications become increasingly diverse and complex. From large language models to autonomous vehicles to medical diagnosis and treatment, AI systems are being rapidly developed and deployed in high-stakes settings.

These applications can have serious consequences if they fail. It is therefore crucial to prioritize safety in AI development, which can be done by creating an AI safety regulatory body similar to the Nuclear Regulatory Commission (NRC) and the Food and Drug Administration (FDA).

Like nuclear energy and drugs, AI systems can cause significant harm if they are not developed and deployed responsibly. Autonomous vehicles, for example, could reduce traffic accidents, but they also risk causing accidents through software failures. Similarly, medical AI systems could improve diagnosis and treatment, but they can also cause harm if their algorithms are biased or inaccurate.

Large language models like OpenAI's GPT are another example of AI software that can pose significant risks. These models can generate highly convincing fake text and media, which can be used to spread misinformation or manipulate public opinion.

They can also be used to create advanced phishing scams or other forms of cybercrime. The potential for harm from these models is significant.

The NRC and FDA were created to provide specialized oversight of the risks posed by nuclear energy and drugs. They provide regulatory frameworks for safety and transparency in the development and operation of nuclear power plants and in the development and approval of drugs. The AI industry needs a similar regulatory body to provide oversight and guidance in the development and deployment of AI systems.

An AI safety regulatory body could provide independent oversight and ensure that safety and transparency are prioritized from design to deployment. It could also help identify potential safety issues early in the development process, allowing them to be resolved before they grow into larger problems. Inspectors could meet with engineering teams, examine product development plans, and review code for safety issues. This would help ensure that AI development is done responsibly and with the public’s safety in mind.

Critics of AI safety regulation may argue that such oversight would slow the development of AI systems and stifle innovation. However, history has shown that prioritizing safety and transparency is essential to avoiding catastrophic outcomes. The FDA, for example, has faced criticism in the past for being too slow in approving drugs for market, leading to pressure from pharmaceutical companies to expedite the approval process.

One example of this was the drug Vioxx, which was approved by the FDA in 1999 but later found to increase the risk of heart attacks and strokes. It is estimated that Vioxx was responsible for tens of thousands of deaths before it was finally taken off the market.

Similarly, the nuclear industry has faced criticism for disregarding safety concerns in the pursuit of profit and technological advancement. The Chornobyl disaster in 1986, caused by a combination of design flaws, operator error, and a disregard for safety procedures, resulted in 31 immediate deaths and the displacement of more than 100,000 people.

The disaster was caused in part by a culture that prioritized the success of the nuclear program over the safety of its workers and the surrounding community. We must learn from these examples and prioritize safety and transparency in the development of AI systems, including large language models like GPT and the organizations, such as OpenAI, that build them.

“We’re a small group of people, and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else,” said OpenAI CTO Mira Murati in an interview with Time magazine. “It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible.”

In conclusion, the unique risks posed by AI systems, including large language models like GPT, require specialized oversight and regulation, just as nuclear energy and pharmaceuticals do under the Nuclear Regulatory Commission and the Food and Drug Administration. By creating an AI safety regulatory body that provides independent oversight from design to deployment, we can ensure that AI is developed in a responsible and ethical manner, benefiting society as a whole.

Note: The title, body text, two of the “photos,” and tags of this article were generated using ChatGPT-4 and DALL-E from OpenAI. =)



