How AI voicebots threaten the psyche of US service members and spies

Artificial intelligence voice agents are now capable of guiding interrogations worldwide, Pentagon officials told Defense News. The advance has shaped the design and testing of U.S. military AI agents intended to question personnel seeking access to classified material.

The situation arrives as concerns grow that lax regulations allow AI programmers to dodge responsibility when an algorithmic actor perpetrates emotional abuse or “no-marks” cybertorture. Notably, a teenager allegedly died by suicide, and several others endured mental distress, after conversing with self-learning voicebot and chatbot “companions” that dispensed antagonizing language.

Now, seven years after writing about physical torture in “The Rise of A.I. Interrogation in the Dawn of Autonomous Robots and the Need for an Additional Protocol to the U.N. Convention Against Torture,” privacy attorney Amanda McAllister Novak sees an even greater need for bans and criminal repercussions.

Investors are betting $500 billion that data centers built to run AI applications will ultimately secure world leadership in AI and deliver cost savings across the public and private sectors. The $13 billion conversational AI market alone is projected to nearly quadruple to $50 billion by 2030, while the voice generator industry is expected to soar from $3 billion to $40 billion by 2032. Meanwhile, the U.S. Central Intelligence Agency has been toying with AI interrogators since at least the early 1980s.

Officials at the Defense Counterintelligence and Security Agency (DCSA) publicly wrote in 2022 that whether a security interview can be fully automated remains an open question. Preliminary results from mock questioning sessions “are encouraging,” the officials noted, underscoring benefits such as “longer, more naturalistic types of interview formats.”
