Generative Models Showing Promise at Novel Scientific Discovery
Large language models (LLMs) don't just write emails. Find out how LLMs could be used to drive discovery in science and novel idea generation.
Brainstorming new ideas is something humans have always done. The best scientists generate novel ideas, filter them down to the most promising ones, test those, and present the results to the rest of the scientific community.
Most large language models are currently thought of as productivity tools, automating tasks that humans can already do with ease. However, these models show a few capabilities in science and novel idea generation that are exciting and terrifying at the same time.
Creativity is Chaos
Hallucination is a well-known issue in current transformer-based language models: it occurs when a model produces text that is factually incorrect, irrelevant, or nonsensical. In science, however, this behavior could have a possible upside, since creative and unconventional thinking is often an advantage. By generating a vast number of ideas and connections, language models could help scientists make new discoveries, solve complex problems, and innovate faster than ever before.
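One concrete knob behind this trade-off is sampling temperature. Below is a minimal sketch using the open-source Hugging Face transformers library, with GPT-2 as a stand-in model; the prompt and temperature values are illustrative assumptions, not a prescribed recipe.

```python
# A minimal sketch: higher sampling temperature trades reliability for
# diversity, loosely mirroring the "hallucination as creativity" idea above.
# GPT-2, the prompt, and the temperatures are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "One unexplored approach to carbon capture might be"

for temperature in (0.2, 1.3):
    outputs = generator(
        prompt,
        max_new_tokens=40,
        do_sample=True,           # sample instead of greedy decoding
        temperature=temperature,  # low = conservative, high = "creative"
        num_return_sequences=3,
        pad_token_id=50256,       # GPT-2 has no pad token; reuse EOS
    )
    print(f"--- temperature={temperature} ---")
    for out in outputs:
        print(out["generated_text"])
```

At low temperature the continuations are safe and repetitive; at high temperature they become surprising, occasionally useful, and often nonsensical, which is exactly the tension between hallucination and creativity.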
The Good
The opportunities for AI in science are exciting, and biotech companies are already making significant discoveries with it. The biotech company Absci, for example, takes existing antibodies and uses models trained on data from lab experiments to redesign them for better binding to their targets [1]. Another company, Apriori Bio, is using machine learning to predict how the best antibodies would fare against 100 billion more variants of COVID-19, with the goal of designing variant-proof vaccines [1]. These approaches let scientists tap into a vast pool of biological and chemical structures that could become the ingredients of future drugs.
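These companies don't publish their pipelines, but the general loop they describe (train a model on lab-measured binding data, then score candidate redesigns) can be sketched in a few lines. Everything below is a hypothetical toy: the sequences, affinities, featurization, and model choice are placeholder assumptions, not the actual methods of Absci or Apriori Bio.

```python
# Toy sketch of "train on assay data, score candidate variants".
# All sequences and binding scores here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
SEQ_LEN = 12

def one_hot(seq: str) -> np.ndarray:
    """Flat one-hot encoding of a fixed-length amino-acid sequence."""
    vec = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        vec[pos, AA_INDEX[aa]] = 1.0
    return vec.ravel()

rng = np.random.default_rng(0)

# Synthetic training set standing in for lab-measured binding affinities.
train_seqs = ["".join(rng.choice(list(AMINO_ACIDS), SEQ_LEN)) for _ in range(200)]
train_affinity = rng.normal(size=len(train_seqs))  # placeholder assay readout

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(np.stack([one_hot(s) for s in train_seqs]), train_affinity)

# Score every single-point mutant of a parent sequence; keep the best.
parent = train_seqs[0]
candidates = [
    parent[:i] + aa + parent[i + 1:]
    for i in range(SEQ_LEN)
    for aa in AMINO_ACIDS
    if aa != parent[i]
]
scores = model.predict(np.stack([one_hot(c) for c in candidates]))
best = candidates[int(np.argmax(scores))]
print(f"Parent: {parent}\nBest predicted mutant: {best} ({scores.max():.3f})")
```

Real pipelines replace the random data with lab measurements, the one-hot features with learned protein representations, and the single-point mutation search with generative proposals, but the train-then-score structure is the same.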
The Bad
As new technologies arise, their potential for misuse can be a major threat, and AI is no exception. Researchers led by Fabio Urbina, a senior scientist at Collaborations Pharmaceuticals, Inc., demonstrated how an AI normally used to discover helpful drugs could easily be abused by putting it into a "bad actor" mode [2]. When its methodology was tweaked to seek out toxicity rather than weed it out, the model generated 40,000 potentially lethal molecules in less than six hours, some similar to VX, the most potent nerve agent ever developed [2]. These alarming findings were published in Nature Machine Intelligence, raising concerns about what could happen if such technology fell into the wrong hands.
We Need Heroes
It’s clear AI can accelerate research and drive breakthroughs that benefit humanity. But as AI continues to advance, it is crucial to build measures and systems that ensure these powerful technologies are used ethically and responsibly, and hopefully more entrepreneurial focus will emerge in this area.
Sources:
[1] https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/
[2] https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx