The Most Pressing Issue Facing AI Safety: The Synthesis of Dangerous Biologics
Although there is much uncertainty about how to work towards a safe future, one risk is both real and pressing: the synthesis of dangerous biologics.
Created on August 2 | Last edited on August 2
AI Safety?
AI safety is a term that has entered the public consciousness, yet it remains a somewhat vague and nebulous concept for many. While most people sense that there is something to be concerned about, the specifics of what to do or how to take action remain unclear, even to top researchers like Geoffrey Hinton.
The popular narrative often conjures images of AI taking control over military equipment or autonomous robots turning against their human creators. However, these scenarios, while captivating, are not within the current realm of likely possibilities. The technology required for such outcomes is far from mature, and the safeguards in place are generally robust.
Problems with Alignment
In the scientific and AI ethics communities, alignment is a well-studied topic. Alignment refers to the process of ensuring that AI systems' goals and behaviors match human values and intentions. Relying on alignment alone to prevent malicious activity, however, is unrealistic: bad actors can jailbreak or misuse these systems about as easily as they can obtain a firearm. The real risk is not the fantasy of machines rising against humanity but the practical and immediate threat posed by human misuse of AI.
The Most Pressing Threat
The danger that looms largest (in my opinion) is the possibility of a malicious actor utilizing a large language model (LLM) to synthesize harmful and contagious biologics. Unlike the far-fetched scenarios of robotic uprisings, this risk is grounded in the capabilities of current technology.
Advanced language models can understand, analyze, and generate complex biological structures and processes. They can piece together information from diverse scientific sources and create genetic sequences and molecular structures that could be weaponized. The implications of this possibility are dire, with potential consequences ranging from the creation of targeted bioweapons to the accidental or intentional release of a pandemic-causing virus.
The existence of underground and unregulated laboratories adds another layer of complexity to the problem. These facilities may operate without ethical constraints or oversight, providing a fertile ground for potential misuse. The barriers to entry might be high, but the combination of human expertise with AI assistance could lower these barriers, making the threat more real and immediate.
An Issue Close to Home
There's no conclusive evidence pinpointing exactly where COVID-19 came from. I'm certainly not saying a language model whipped it up, but powerful language models (such as GPT-2) existed before the virus emerged, and, however unlikely, it seems at least theoretically plausible that a virus could be synthesized with the help of an AI model. To be clear: I am not claiming this was the most likely scenario, only that it is possible. COVID-19 should be a great motivator for all of us to consider carefully how we spend our time while working towards a safe AI future.
Action Steps
While AI safety is a well-known issue, the focus must shift from unrealistic fears of robotic rebellion to the very real threat of AI-assisted synthesis of dangerous biologics.
As the threat remains, I believe the only solution to this particular misuse of AI, the synthesis of dangerous biologics, is to fight fire with fire. We must propel our AI technology forward to work on the detection and cure of potential diseases that could result from an artificially synthesized biologic. By focusing AI on the development of cures and innovative medical treatments, we can transform it from a potential weapon into a medical tool. We must operate under the assumption that bad actors will use this technology for harm, and thus our best defense is to stay ahead with innovations that let us counter any maliciously synthesized compound swiftly and effectively. Anyone concerned with AI safety but unsure what action to take should consider this a viable path towards a safer future.
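To make the detection side of this concrete, here is a deliberately toy sketch of one defensive idea: flagging a candidate DNA sequence that shares long exact substrings (k-mers) with a database of known hazardous sequences. The sequences and database below are entirely made up for illustration; real biosecurity screening pipelines, such as those run by DNA synthesis providers, are far more sophisticated than exact substring matching.

```python
# Toy sketch of sequence screening: flag an order that shares long
# exact substrings (k-mers) with a hazard database. All sequences
# here are invented for illustration, not real pathogen data.

def kmers(seq: str, k: int) -> set[str]:
    """Return the set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen(candidate: str, hazard_db: list[str], k: int = 12) -> set[str]:
    """Return the k-mers that candidate shares with any hazard sequence."""
    hazard_kmers: set[str] = set()
    for hazard_seq in hazard_db:
        hazard_kmers |= kmers(hazard_seq, k)
    return kmers(candidate, k) & hazard_kmers

# Hypothetical example data (not real sequences):
hazard_db = ["ATGGCATTACCGGTTAACCGTA"]
order = "CCCATGGCATTACCGGTTAACCC"

hits = screen(order, hazard_db, k=12)
print("flagged" if hits else "clear")  # this order overlaps the hazard entry
```

The point of the sketch is only that detection is a tractable engineering problem: the same pattern scales up with probabilistic matching, curated databases, and machine-learned classifiers.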
Tags: ML News