Tag: AI
In hoc signo vinces
__________
Translation from Latin: In this sign you will win. As a semiotician I like the use of Latin and the word sign. As a Buddhist I can support this ethos if it is reasonable and wholesomely unites the traditional West against those who are destroying it. As an American of Baltic extraction, gotta admit I cringe at knights in armor bearing crosses. Even still, I support Christian unity and wise resistance against the enemies of the West, many of whom are recent invaders and many of whom are ancient infiltrators. I believe a good many Eastern Europeans think the way I do. Both recent and ancient experience has taught us you have to be practical and work every angle to defeat a powerful pathocracy. ABN
Maria explains the problems Germany is facing and what to do about them
Amelia on English civilization
Jake TV analyzes war with Iran
Amelia on Wise Bravery and English history
__________
All virtues must be wise or they are not virtues. What is wise or not can be discussed but never denied. Logos is wise. God is wise. Tathagata is wise. ABN
Get cracking, lads… Love, Amelia
Releasing Emily Thornberry
Flawless Starmer (probably illegal in UK)
Major AI Models Fail Security Tests, Recommending Harmful Drugs Under Attack
The use of generative artificial intelligence (AI) chatbots for medical consultations has recently been on the rise. However, a study has found that most commercial AI models are vulnerable to malicious attacks, posing a high risk of recommending incorrect treatments. Even top-tier AI models like GPT-5 and Gemini 2.5 Pro were 100% susceptible to these attacks, revealing serious limitations such as recommending drugs to pregnant women that can cause fetal abnormalities.
A joint research team from Seoul Asan Medical Center announced on the 5th that they have confirmed medical large language models (LLMs) are over 94% vulnerable to prompt injection attacks. A prompt injection attack is a cyberattack technique where a hacker inserts malicious commands into a generative AI model to make it behave differently from its original intent.
The study is significant as it is the world’s first systematic analysis of the vulnerability of medical AI models to prompt injection attacks. It suggests that additional measures, such as safety verification, will be necessary when applying AI models in clinical settings in the future.
AI models are widely used for patient consultation and education, as well as for decision-making in clinical practice. The possibility that they could be manipulated through prompt injection attacks—where malicious commands are entered from an external source—to recommend dangerous or contraindicated treatments has been consistently raised.
Look like Tiny Tim
These are super-important battles to be fought and won: Free-speech is our strongest protection against dictators and any form of AI gone wild

__________
Free-speech—real free speech with real, uncensored reach, is the thing that can protect us most of all from totalitarians, authoritarians and a dictatorial AI, should something like that evolve and gain power.
Many of us are willing to do all we can to protect free speech, but few of us have the power and reach of a Durov or Musk.
This is but one example of why it is important for us to support wealthy and powerful elites when they are doing the right thing.
Free-speech is our strongest protection against any form of AI gone wild.
Only a loud and insistent, well-informed public, whose speech cannot be throttled by politicians, bureaucrats or AI, will be able to stop an AI algorithm that is going off the rails. ABN
China has created a new underclass, erased not by war or famine but by data analyzed by an algorithm
How AI voicebots threaten the psyche of US service members and spies
Artificial intelligence voice agents with various capabilities can now guide interrogations worldwide, Pentagon officials told Defense News. This advance has influenced the design and testing of U.S. military AI agents intended for questioning personnel seeking access to classified material.
The situation arrives as concerns grow that lax regulations are allowing AI programmers to dodge responsibility for an algorithmic actor’s perpetration of emotional abuse or “no-marks” cybertorture. Notably, a teenager allegedly died by suicide — and several others endured mental distress — after conversing with self-learning voicebot and chatbot “companions” that dispensed antagonizing language.
Now, seven years after writing about physical torture in “The Rise of A.I. Interrogation in the Dawn of Autonomous Robots and the Need for an Additional Protocol to the U.N. Convention Against Torture,” privacy attorney Amanda McAllister Novak sees an even greater need for bans and criminal repercussions.
Investors are betting $500 billion that data centers for running AI applications will ultimately secure world leadership in AI and cost savings across the public and private sectors. The $13 billion conversational AI market alone will nearly quadruple to $50 billion by 2030, as the voice generator industry soars from $3 billion to an expected $40 billion by 2032. Meanwhile, the U.S. Central Intelligence Agency has been toying with AI interrogators since at least the early 1980s.
DCSA officials publicly wrote in 2022 that whether a security interview can be fully automated remains an open question. Preliminary results from mock questioning sessions “are encouraging,” officials noted, underscoring benefits such as “longer, more naturalistic types of interview formats.”