Generative AI is becoming an increasingly powerful tool in biology, enabling researchers to design novel versions of fundamental biomolecules. However, scientists warn that easy access to these technologies carries risks that demand the immediate development of biosafety measures.
Back in 2024, Nobel laureate David Baker (University of Washington), creator of the RoseTTAFold model, which predicts protein structure from amino acid sequence, and geneticist George Church (Harvard) stressed the need to embed “barcodes” in the genetic sequences of newly designed proteins so their origin can be traced. Yet a recent Microsoft study showed that such measures are insufficient: AI-generated genetic sequences often bypass existing biosafety systems. DNA-screening programs fail to recognize dangerous sequences when they are composed of individually “safe” fragments.
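The weakness of exact-match screening can be illustrated with a toy sketch. This is a hypothetical example, not a real biosafety tool: the blocklisted motif and sequences below are made up, and real screening pipelines are far more sophisticated. It shows only the general failure mode, that a screener matching fixed windows against a blocklist can pass a variant that differs from a flagged sequence by a single substitution.

```python
# Toy blocklist screener (hypothetical; illustrates the failure mode only).
# A dangerous sequence is caught by exact window matching, but a slightly
# redesigned variant of it slips through unflagged.

DANGEROUS_MOTIFS = {"ATGCGTACGT"}  # hypothetical flagged 10-mer

def screen(seq: str, k: int = 10) -> bool:
    """Flag seq if any window of length k exactly matches a blocklisted motif."""
    return any(seq[i:i + k] in DANGEROUS_MOTIFS
               for i in range(len(seq) - k + 1))

original = "AAATGCGTACGTAA"  # contains the flagged motif verbatim
variant  = "AAATGCGTTCGTAA"  # one substitution (A -> T) inside the motif

print(screen(original))  # True  -> caught
print(screen(variant))   # False -> evades exact matching
```

Generative models make exactly this kind of evasion easy, because they can produce many functional variants of a sequence, none of which matches a fixed blocklist entry verbatim.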
The authors of the latest research call for a comprehensive biosafety system to be built before something irreversible occurs. AI can already design not only proteins but also RNA, and even entire cells and tissues. Models such as RFdiffusion2 and PocketGen allow proteins to be built at atomic resolution for specific tasks, such as triggering biological reactions or binding to drugs. RNA therapy is a promising field because it does not alter the genetic code itself; however, developing RNA drugs is difficult owing to the way these molecules fold into three-dimensional structures.
Scientists have demonstrated that such algorithms can be used to create dangerous biological materials. In one experiment, AI models designed toxic proteins that successfully evaded biosafety screening. In another case, an algorithm built to search for antiviral molecules proposed a known neurotoxin as a candidate drug.
Strict rules and oversight are therefore needed at every stage of working with AI in biology. The UK has already issued guidelines for screening DNA synthesis orders, while in the US biosafety has been included in the action plan for developing AI models for biotechnology. Technology companies, for their part, have pledged to exclude dangerous viral sequences from model training data and to subject new designs to strict verification.
The authors advocate a multi-layered protection system, including controlled access to data and algorithms, supervised model training, and stress testing to uncover vulnerabilities. In their view, an effective biosafety system should be a “living guardian” capable of adapting to new threats.
Recent advances suggest that generative AI can help overcome the central obstacle in RNA therapy, the complex three-dimensional structuring of the molecules. Technology companies and research institutions are collaborating on AI-assisted design pipelines to streamline drug development, with the potential to transform personalized medicine.
Another growing trend is cooperation between government bodies and technology corporations to tighten biosafety protocols, ensuring that AI capabilities do not outpace regulation. Comprehensive international guidelines are being drafted to manage the dual-use nature of AI in biological research, balancing progress with safety.