Smack’s Military AI Ambitions
Anthropic might be squeamish about giving the military its AI toys, but Smack Technologies isn’t. They just snagged $32 million to craft AI models that could outsmart Claude in military ops. Unlike Anthropic, Smack hasn’t committed to banning any military uses. CEO Andy Markoff, a former Marine commander, thinks ethical deployment requires a uniform. Because nothing says ‘ethics’ like a soldier’s oath.
Markoff, with his Marine buddy Clint Alanis and Tinder’s ex-tech VP Dan Gould, is on a mission. Their AI learns military strategies through trial and error, echoing Google’s AlphaGo. Smack’s budget isn’t Google-sized, but they’re pouring millions into training their AI. Because who needs a massive budget when you have war games and expert analysts?
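The "trial and error" approach mentioned above is reinforcement learning: an agent acts, observes a reward, and gradually favors actions that paid off. A minimal sketch of that idea, using tabular Q-learning on an invented toy world (nothing here reflects Smack's actual models or training setup):

```python
import random

# Toy Q-learning loop illustrating trial-and-error learning.
# The line world, rewards, and hyperparameters are invented for
# demonstration only; they are not Smack's (or AlphaGo's) setup.

N_STATES = 5          # positions 0..4 on a line; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the line, clamped to the ends; reward 1 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # explore occasionally, otherwise exploit the best-known action
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy should step right from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the answer; it discovers the goal-seeking policy purely from reward feedback, which is the same feedback loop, scaled up enormously, that self-play systems like AlphaGo rely on.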
Silicon Valley’s Defense Drama
Military AI is the new hot potato in Silicon Valley, thanks to a spat between the Department of Defense and Anthropic over a $200 million deal. Anthropic wanted to keep its AI away from autonomous weapons, which didn’t sit well with Defense Secretary Pete Hegseth. He labeled Anthropic a supply chain risk, because nothing says ‘trust issues’ like a contract dispute.
Markoff argues that today’s general-purpose AI models aren’t military-grade. They’re great for summarizing reports but clueless about the physical world, making them useless for controlling hardware. ‘Target identification?’ Markoff scoffs. ‘Not a chance.’ And despite the noise, no one’s automating the kill chain just yet. Apparently, even the Department of War draws the line somewhere.
AI’s Future on the Battlefield
Autonomous weapons aren’t sci-fi; they’re in use today, especially for missile defense. Rebecca Crootof from the University of Richmond points out that over 30 countries have deployed weapons with varying autonomy. But Smack’s AI could also help with mission planning, automating the tedious parts of sketching battle plans. Because who needs whiteboards when you have AI?
Markoff envisions AI giving the US ‘decision dominance’ in a war against a ‘near peer’ like Russia or China. But reliability is still a question mark. A King’s College London experiment alarmingly showed that LLMs could escalate nuclear conflicts in war games. So, while AI might help plan missions, let’s hope it doesn’t plan the end of the world.
Quick Facts
- 💡 Smack Technologies raised $32 million for military AI models.
- 💡 Anthropic’s $200 million military contract fell apart over autonomous weapons.
- 💡 Current AI models lack military-grade capabilities like target identification.
- 💡 Over 30 countries use autonomous weapons with varying degrees of autonomy.
- 💡 AI reliability in military contexts remains uncertain, with potential escalation risks.

