Lab–grown life has taken a major leap forward as scientists use AI to create a new virus that has never been seen before.
The virus, dubbed Evo–Φ2147, was created by scientists from scratch using new technologies that could revolutionise the course of evolution.
With just 11 genes, compared to the 200,000 in the human genome, this virus is among the simplest forms of life.
However, scientists believe that the same tools could one day create entire living organisms or resurrect long–extinct species.
This artificial virus was specifically created to kill infectious and potentially deadly E. Coli bacteria.
While only 16 were able to attack the E. Coli, the most successful were 25 per cent quicker at killing bacteria than the wild variants.
Scientists have made a major breakthrough towards creating artificial life, as they use AI to create a new virus that never existed in nature (pictured)
Translation from Latin: In this sign you will win. As a semiotician I like the use of Latin and the word sign. As a Buddhist I can support this ethos if it is reasonable and wholesomely unites the traditional West against those who are destroying it. As an American of Baltic extraction, gotta admit I cringe at knights in armor bearing crosses. Even still, I support Christian unity and wise resistance against the enemies of the West, many of whom are recent invaders and many of whom are ancient infiltrators. I believe a good many Eastern Europeans think the way I do. Both recent and ancient experience has taught us you have to be practical and work every angle to defeat a powerful pathocracy. ABN
All virtues must be wise or they are not virtues. What is wise or not can be discussed but never denied. Logos is wise. God is wise. Tathagata is wise. ABN
The use of generative artificial intelligence (AI) chatbots for medical consultations has recently been on the rise. However, a study has found that most commercial AI models are vulnerable to malicious attacks, posing a high risk of recommending incorrect treatments. Even top-tier AI models like GPT-5 and Gemini 2.5 Pro were 100% susceptible to these attacks, revealing serious limitations such as recommending drugs to pregnant women that can cause fetal abnormalities.
A joint research team from Seoul Asan Medical Center announced on the 5th that they have confirmed medical large language models (LLMs) are over 94% vulnerable to prompt injection attacks. A prompt injection attack is a cyberattack technique where a hacker inserts malicious commands into a generative AI model to make it behave differently from its original intent.
The study is significant as it is the world’s first systematic analysis of the vulnerability of medical AI models to prompt injection attacks. It suggests that additional measures, such as safety verification, will be necessary when applying AI models in clinical settings in the future.
AI models are widely used for patient consultation and education, as well as for decision-making in clinical practice. The possibility that they could be manipulated through prompt injection attacks—where malicious commands are entered from an external source—to recommend dangerous or contraindicated treatments has been consistently raised.