Christmas, New Year and the AI Tempo Problem
Greetings to everyone in the Daneel Olivaw community.
In most domains we care about (security, defense, resilience, governance), December is not a pause. It is a maintenance window: the brief moment when systems are quieter, attention is less fragmented, and you can run the checks you never have time to run.
So here is the only holiday protocol worth executing:
Patch the basics: sleep, relationships, attention span.
Recalibrate your priors: what you believed in January is rarely what reality delivered by November.
Protect your signal: fewer inputs, higher-quality evidence.
Plan for tempo: January is not “new”; it is simply “fast” again.
May your Christmas be calm enough to think clearly, and your New Year be structured enough to act decisively.
“An Unrestricted AI Arms Race”: Notes from Luis Martín “The Druid”
Luis Martín’s recent interview in El Independiente is not a futurist monologue. It reads like a systems designer doing threat-modeling in public.
His core claim, that we are at the beginning of an AI arms race with few meaningful restrictions, lands because it is framed as an engineering and incentive problem, not a moral one.
A few takeaways worth extracting:
1) The fog of capability is a feature, not a bug
Luis draws a sharp contrast between what can be observed in open research ecosystems and what remains opaque inside state programs. He also hints at a reality that practitioners already internalize: not every “breakthrough” headline is real. In strategic competition, narratives are deployed the same way tools are deployed.
The practical consequence is straightforward: you cannot build strategy on claims; you build it on evidence structures—traceable chains of justification.
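To make "evidence structures" concrete, here is a minimal sketch of a claim whose justification chain can be walked and audited. This is our illustration, not anything from the interview, and every name in it is invented for the example:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a claim is only as strong as the evidence chain behind it.
@dataclass
class Evidence:
    source: str          # where the observation came from (report, sensor, dataset)
    observation: str     # what was actually observed
    confidence: float    # 0.0 to 1.0: how much this single item is trusted

@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)

    def supported(self, threshold: float = 0.7) -> bool:
        # A claim with no strong evidence is a narrative, not a finding.
        return any(e.confidence >= threshold for e in self.evidence)

    def trace(self) -> str:
        # Walk the justification chain so a reviewer can audit it.
        lines = [f"CLAIM: {self.statement}"]
        lines += [f"  <- {e.source}: {e.observation} (conf={e.confidence:.2f})"
                  for e in self.evidence]
        return "\n".join(lines)

claim = Claim("Actor X has fielded capability Y")
claim.evidence.append(Evidence("open-source video", "demo footage, unverified", 0.4))
print(claim.trace())
print("Strategy-grade?", claim.supported())  # False: a headline, not evidence
```

The point of the sketch is the trace: any claim that cannot print its own justification chain should not anchor strategy.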
2) Legal asymmetry becomes operational asymmetry
One of the most uncomfortable points is also the simplest: some actors can move faster because they face fewer legal, ethical, or procedural constraints. When one side treats AI as a national survival lever—and another treats it primarily as a regulatory dilemma—the tempo gap becomes an advantage in itself.
If that sounds abstract, translate it into timelines: time-to-field, time-to-iterate, time-to-scale.
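As a back-of-the-envelope illustration with purely hypothetical numbers (ours, not the interview's), the tempo gap compounds: the actor with the shorter cycle simply gets more shots on goal per planning horizon.

```python
# Back-of-the-envelope tempo math. Illustrative numbers, not from the interview.
def fielded_cycles(cycle_months: int, horizon_months: int) -> int:
    """How many field-and-iterate cycles fit in a planning horizon."""
    return horizon_months // cycle_months

HORIZON = 36  # a three-year planning horizon
for actor, cycle in [("low-constraint actor", 6), ("high-constraint actor", 18)]:
    print(f"{actor}: {fielded_cycles(cycle, HORIZON)} cycles in {HORIZON} months")
# 6 cycles vs 2: the tempo gap compounds into capability divergence.
```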
3) Compute and energy are strategy, not “infrastructure”
Luis ties data-centric AI to energy and industrial realities: data centers, compute supply chains, and sustainability constraints. In that context, he argues for approaches that reduce dependence on massive proprietary datasets—methods that are more controllable, more auditable, and often more deployable in constrained environments.
This is where geopolitics meets architecture: if your posture assumes unlimited data, unlimited energy, and unlimited chips, you have not built superiority—you have built dependency.
4) Regulation tends to arrive after escalation, not before it
He points to a familiar pattern from other strategic technologies: meaningful restrictions often appear only when competitors approach equilibrium. Early-stage races tend to be permissive, because each actor fears being the only one to slow down.
That is why “governance” that is not paired with deployable capability and verifiable constraints tends to remain aspirational.
If you want the full interview, read it here:
https://www.elindependiente.com/futuro/inteligencia-artificial/2025/11/30/luis-martin-el-druida-estamos-al-inicio-de-una-carrera-de-armas-de-ia-sin-restricciones/
Year One: Daneel Olivaw Online. What We Built. What to Read Next?
This has been the first full year of Daneel Olivaw online, a year of putting a stake in the ground around a simple thesis: in high-stakes environments, capability comes from engineered reasoning (systems that can justify, adapt, and operate under adversarial pressure).
If you joined recently, here is a guided entry point through some of the year’s most representative work (short context + direct links):
Reasoning boxes: from concept to operational programs
ACI2: Ultra-Early Warning and Adversary Identification with BDI Reasoning Agents
A “knowledge pill” that introduces Alcazar: a reasoning-box program for cyber defense and multi-domain operations, centered on adversary intent inference via BDI-style meta-agents.
https://www.daneelolivaw.com/p/aci2-ultra-early-warning-and-adversary
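For readers new to the pattern, a belief-desire-intention (BDI) loop can be sketched in a few lines. This toy is our illustration of BDI-style intent inference in general, not the Alcazar/ACI2 architecture, and every identifier in it is hypothetical:

```python
# Toy belief-desire-intention (BDI) loop for adversary intent inference.
# An illustrative sketch only, not the Alcazar/ACI2 architecture.
class BDIAgent:
    def __init__(self):
        self.beliefs = {}                                        # what it currently holds true
        self.desires = ["explain observed adversary activity"]   # standing goals
        self.intentions = []                                     # plans it has committed to

    def perceive(self, observation: dict) -> None:
        # Belief revision: fold new telemetry or intelligence into the belief base.
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        # Commit to an intention: here, an adversary-intent hypothesis.
        if self.beliefs.get("perimeter_recon") and self.beliefs.get("new_infrastructure"):
            self.intentions = ["WARN: likely staging for intrusion"]
        elif self.beliefs.get("perimeter_recon"):
            self.intentions = ["WATCH: ambiguous reconnaissance"]

    def act(self) -> list:
        return self.intentions

agent = BDIAgent()
agent.perceive({"perimeter_recon": True, "new_infrastructure": True})
agent.deliberate()
print(agent.act())  # ['WARN: likely staging for intrusion']
```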
GRAN CAPITÁN Military Reasoning Box
A concrete architecture roadmap: 24×7×365 virtual agents designed to support Intelligence, Strategy, and Operations decision-makers in multi-domain environments—positioned as deployable capability, not a research artifact.
https://www.daneelolivaw.com/p/gran-capitan-military-reasoning-box
Smartification and superiority: engineering advantage under constraints
Smartification of weapons, weapons systems, and combat platforms
An executive abstract on introducing reasoning capabilities inside weapons and platforms, with distributed complex reasoning architectures as a path to multi-domain battlefield advantage.
https://www.daneelolivaw.com/p/smartification-of-weapons-weapons
Intelligent Systems Design: A neurocognitive AI approach
A design philosophy that treats intelligent systems as self-restructuring: shaping perception, interpretation, and action—anchored in a neurocognitive stack spanning neuroscience, psychology, and AI.
https://www.daneelolivaw.com/p/intelligent-systems-design-a-neurocognitive
Automated reasoning for investigation and intelligence
Sherkbox Project: Putting Sherlock Holmes in an AI Box
A clear example of structured reasoning sequences applied to investigation and intelligence analysis: hypothesis generation, evidence evaluation, and iterative refinement across operational modes (real-time, assist, simulation/training).
https://www.daneelolivaw.com/p/sherkbox-project-an-automated-reasoning
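A minimal sketch of that hypothesis-evidence-refinement loop might look like the following. It is our toy example with invented leads and weights, not the Sherkbox implementation, and it omits the operational modes entirely:

```python
# Toy hypothesis-evidence-refinement loop in the spirit of Sherkbox.
# Our invented example, not the project's implementation.
def investigate(hypotheses, evidence, rounds=3, floor=0.2):
    """Score hypotheses by the mean weight of their supporting evidence,
    then iteratively prune the poorly supported ones."""
    scores = {h: 0.5 for h in hypotheses}  # neutral prior for every lead
    for _ in range(rounds):
        for h in list(scores):
            weights = evidence.get(h, [])
            if weights:
                scores[h] = sum(weights) / len(weights)
            if scores[h] < floor:
                del scores[h]  # refinement: drop lines of inquiry that collapsed
    return scores

leads = ["insider exfiltration", "external phishing", "misconfiguration"]
signals = {
    "external phishing": [0.8, 0.7],  # two moderately strong supporting items
    "misconfiguration": [0.1],        # one weak item: pruned on the first pass
}
print(investigate(leads, signals))
# {'insider exfiltration': 0.5, 'external phishing': 0.75}
```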
If you read only one thing as you close the year, read with this question in mind:
Are we building systems that merely predict—or systems that can justify, adapt, and withstand adversarial pressure?
Thanks for reading, sharing, and arguing with us this year. If you’ve been here since the beginning, you helped shape the frame. If you arrived last week, you’re right on time.


