“The era when humans program is nearing its end within our group”, says Softbank founder Masayoshi Son. “Our aim is to have AI agents completely take over coding and programming. (…) we are currently initiating the process for that.”
Son made this statement on Wednesday at an event for customers organized by the Japanese corporation, as reported by Light Reading. According to the report, the Softbank CEO estimates that approximately 1,000 AI agents would be needed to replace each employee because “employees have complex thought processes.”
AI agents are software programs that use algorithms to respond automatically to external signals. They then carry out tasks as necessary and can also make decisions without human intervention. The spectrum ranges from simple bots to self-driving cars.
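That sense-decide-act pattern can be sketched in a few lines. This is a minimal illustration only, not code from any product mentioned here; the names `run_agent`, `signals`, and `rules` are made up for the example.

```python
# Minimal sketch of the sense-decide-act loop at the core of an AI agent.
# All names are illustrative; real agents use models, not lookup tables.

def run_agent(signals, rules):
    """React to external signals, deciding and acting without human input."""
    actions = []
    for signal in signals:
        # Decide: map the observed signal to an action via the agent's policy.
        action = rules.get(signal, "ignore")
        # Act: here we just record the action; a real agent would
        # execute it in the outside world (send a message, steer a car).
        actions.append(action)
    return actions

# A trivial thermostat-style agent as a usage example.
policy = {"too_hot": "cool", "too_cold": "heat"}
print(run_agent(["too_hot", "ok", "too_cold"], policy))
# → ['cool', 'ignore', 'heat']
```

The point of the sketch is the autonomy: once the policy is set, no human intervenes between observing a signal and acting on it, which is what separates an agent from an ordinary program awaiting instructions.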
Elon Musk’s xAI announced “Grok for Government” on Monday.
The company signed a contract with the Department of Defense for up to $200 million.
The service will also be made available to other federal government agencies.
Elon Musk’s xAI is launching a new government-facing service. Its first client happens to be the largest employer on Earth.
The Department of Defense will pay up to $200 million for “Grok for Government,” a new collection of AI products geared toward use by federal, local, and state governments.
The department has also awarded similar contracts to Anthropic, Google, and OpenAI, which launched its own government-facing initiative last month.
In recent months, Ron Unz has been extensively utilizing OpenAI’s Deep Research AI for fact-checking his controversial articles, which often explore contentious historical topics like the JFK assassination, 9/11 attacks, and the origins of COVID-19. This advanced AI system, known for its computational resources and depth in research, has become an invaluable tool for Unz, allowing him to substantiate his claims and refine his body of work. Despite its limitations, such as occasional failures in processing complex inquiries and producing garbled outputs, the AI has consistently validated a significant portion of Unz’s assertions, providing a level of confidence in his work that he believes will encourage readers to engage with his findings.
Unz’s articles touch on various controversial subjects, often challenging mainstream narratives. He asserts that the AI’s analyses have generally confirmed the accuracy of his claims, particularly regarding politically sensitive issues, including allegations about the influence of Israeli intelligence in major historical events. For instance, he highlights the AI’s positive evaluations of his pieces on the assassination of prominent figures and the role of sexual blackmail in politics, emphasizing how it has supported his conclusions with credible evidence. This validation serves to bolster his position in a landscape where many of his views are dismissed or ridiculed, especially by the mainstream media.
However, Unz also notes that the AI’s evaluations can be inconsistent, especially concerning topics deemed sensitive or controversial, such as the Holocaust narrative. While the AI has largely endorsed his factual claims, it issued strong critiques when his articles contained skepticism regarding established historical accounts. This discrepancy raises questions about the potential biases ingrained in the AI’s programming, suggesting that it might be conditioned to respond negatively to certain subjects while maintaining a more favorable stance on others. Unz points out that when his articles focus comprehensively on controversial subjects, the AI tends to validate them more favorably, which could indicate an underlying complexity in how the AI processes sensitive content.
The broader implications of Unz’s findings extend into the realm of AI ethics and the reliability of automated fact-checking systems. His experiences suggest that while AI can significantly enhance research and verification processes, it must be carefully monitored to avoid biases that could distort factual accuracy. As Unz continues to publish articles that challenge mainstream narratives, the interplay between his provocative hypotheses and the AI’s evaluations highlights the importance of rigorous scrutiny in both human and machine-generated analyses. The ongoing dialogue about the role of AI in journalism and research, particularly in relation to controversial topics, remains a critical area for future exploration.
As AI reshapes the labor market, the real threat may not be unemployment — it could be something subtler and more corrosive: the collapse in what skills are worth.
That’s according to MIT economist David Autor, who made the comments in an interview released Wednesday on the “Possible” podcast.
Autor warned that rapid automation could usher in what he calls a “Mad Max” scenario — a world where jobs still exist, but the skills that once generated wages become cheap and commoditized.
“The more likely scenario to me looks much more like Mad Max: Fury Road, where everybody is competing over a few remaining resources that aren’t controlled by some warlord somewhere,” he said.
Properly managed, the world should end up with enormous abundance, such that no one has to work much or work at all. I tend to believe elites will eventually become tired of their greed and status competition and, thus, be more inclined to use global wealth to everyone’s advantage. This may take a long time, but as new generations arise and digital babies become common, Darwinian instincts will be weakened or replaced by new insights. ABN
I would add that since it is illegal in Europe to question the Holocaust and since even Grok is programmed to lie about it, it behooves all of us to reject the entire story. ABN
Generally, with big issues, more voices are better than fewer.
Historically, states’ rights are of preeminent importance in American law and governance.
A ten-year ‘moratorium’ would allow actors at the top to seize some sort of technical or intellectual monopoly that would be very difficult to change once established.
The people who would be in position to do that are a transient group of individuals, unelected, unvetted, mostly unknown to the public.
How could allowing them alone to determine our AI future be better than allowing more voices, more jurisdictions, more transparency, more discussion and more thought?
Based on that line of reasoning, it would be better for our civilization to not do the moratorium. ABN
Donald Trump’s top advisor has fallen victim to a sinister scheme by hackers who infiltrated her phone and used artificial intelligence to impersonate her voice.
The nefarious plot involved stolen data from the personal cellphone of White House chief of staff Susie Wiles that was then used to call some of America’s most powerful people.
Over the course of several weeks, high-profile senators, governors and American business executives have received voicemails or messages from an unknown number claiming to be Wiles, the Wall Street Journal reported.
The hackers came undone when they made the fatal mistake of asking questions to which the president’s closest aide would already know the answers.
Wiles – who has been nicknamed ‘Ice Maiden’ – has been contacting associates urging them to disregard any unusual messages or calls from unknown numbers purporting to be her.
In a terrifying twist, some of the phone calls used a voice that mimicked Wiles. Authorities suspect the impersonator used artificial intelligence to replicate it.
The revelation that University of Zurich researchers secretly deployed AI bots to manipulate Reddit users’ opinions should chill anyone who values authentic human discourse.
These weren’t merely passive observers—they were digital persuaders that analyzed users’ personal histories, fabricated identities, and crafted arguments specifically designed to change minds.
Most troubling?
They succeeded spectacularly—achieving persuasion rates six times higher than normal human interactions.
This experiment crossed critical ethical lines.
Without consent or disclosure, researchers unleashed bots that claimed to be rape victims, misrepresented religious teachings, and spread misinformation about controversial topics.
These digital ghosts generated over 1,500 comments, each precisely calibrated to exploit cognitive vulnerabilities of their human targets.
We’ve long worried about social media’s echo chambers.
But what happens when those chambers are deliberately infiltrated by increasingly sophisticated AI systems trained on the very platforms they’re manipulating?
Reddit’s recent data-sharing deal with OpenAI suggests we’re actively providing the training material for ever more persuasive digital manipulators.
Reddit moderators rightly condemned this unauthorized experiment, but their discovery came months after the damage was done.
How many other digital conversations are currently being shaped by invisible algorithmic hands?
I view Reddit fairly often. I started using it when it had very few users and you could easily get a post on the front page. AskScience was glorious back then: conversations would go on and on and there was very little bs. That was a long time ago. Today’s Reddit is mind-control totalitarian thought-slavery on virtually all major subjects. However, if you go to small subreddits, you can find very interesting material: subreddits on narcissism and other personality disorders, or on aspects of culture such as parenting or marriage customs. I highly appreciate hearing authentic voices from around the world on topics like those and many more.

That said, another thing you can see on Reddit is what looks to me like spontaneous hive-mind formation. These look real to me because they can form very quickly in subs that have nothing to do with politics; Tooafraidtoask is one example where this can happen.

I encourage everyone to support free speech on all platforms and to use your own free speech, use it often. Keep pushing the envelope wherever you are, because widespread free speech that people actually use is probably our only defense against a looming and probable computerized totalitarian nightmare. ABN