
Beyond automation

Why the future of AI is about reasoning, human augmentation, and dual intelligent ecosystems

This post is based on a brief talk Luis Martín, The Druid, recently gave to a group of university students. In that exchange, he outlined, in a concise and accessible way, several of the core ideas behind our research work: complex reasoning, BioNeuroCognitive AI, human augmentation, and the design of dual intelligent ecosystems in which humans and machines collaborate effectively.

Artificial intelligence is often discussed as if it were a single thing: a model, a chatbot, a prediction engine, or a software layer added on top of existing processes. That view is too narrow.

In our work, AI is not reducible to pattern matching over large datasets. It is a broader engineering discipline concerned with how intelligence can be designed, distributed, and operationalized across machines, people, and organizations.

The central question is not merely whether a machine can produce an answer. The real question is whether an intelligent system can reason, decide, adapt, and justify its behavior in environments where uncertainty is high, time is limited, and errors have real consequences.

That is the frame from which we approach BioNeuroCognitive AI and complex reasoning systems.


1) AI is not one field. It is three converging fields

A useful starting point is to separate three major domains that are often collapsed into one.

1. Machine Intelligence

This is the domain most people already know. It includes AI systems running on digital computers, distributed infrastructures, and large-scale computational architectures. It covers machine learning, large language models, predictive systems, probabilistic models, and many of the tools currently dominating public discussion.

These systems matter. They already create value in automation, classification, optimization, and language processing. But they are only one part of the larger picture.

2. High-Performance Human Intelligence

The second domain is less discussed, but equally important. It concerns the improvement of human cognitive performance through cognitive restructuring, biochemical support, neurotechnologies, or augmentative devices.

Its purpose is not to replace human intelligence, but to strengthen it: to improve judgment, focus, speed of interpretation, memory support, and reasoning performance in demanding contexts.

This matters because the future will not be built by machines alone. It will be built by humans whose cognitive capabilities are increasingly supported, amplified, and reorganized.

3. Dual Intelligent Ecosystems

This is the third domain, and arguably the most strategic one.

A dual intelligent ecosystem is a socio-technical environment in which artificial intelligent entities and augmented human intelligence interact continuously and synergistically. It is not a simple tool-user relationship. It is a new organizational form.

In such ecosystems, reasoning is distributed. Some processes remain human. Some are delegated to machines. Some are shared. The design challenge is to decide which reasoning tasks should be performed by whom, under what timing constraints, with what degree of autonomy, and under what governance rules.

That is where the future of organizations, public administrations, research systems, healthcare, security, defense, and industry will increasingly be decided.


2) The real frontier is not prediction. It is reasoning

Much of today’s AI conversation is dominated by data volume, model size, and computational scale. Those variables are important, but they do not exhaust the problem of intelligence.

A system may be excellent at detecting correlations and still be weak at reasoning.

Reasoning requires more than output generation. It requires the capacity to:

  • interpret a changing situation,

  • preserve context over time,

  • weigh competing hypotheses,

  • operate under incomplete information,

  • align decisions with goals,

  • and adapt when conditions shift.

This is especially critical in environments such as command and control, intelligence analysis, criminal investigation, scientific research, cyber defense, healthcare, and strategic planning. In these domains, errors are not cosmetic. They affect missions, institutions, resources, and human lives.

For that reason, our research has long focused on introducing advanced reasoning capabilities into both military and civilian systems, whether already deployed or yet to be built.

The problem is not simply how to automate a task. The problem is how to engineer systems that can support coherent reasoning under pressure.


3) Why a bioneurocognitive approach matters

To build reasoning systems, one must study reasoning itself.

That is the basic rationale behind a bioneurocognitive approach. If the goal is to introduce reasoning capabilities into an artificial system, then we need to model the processes that make reasoning possible in the human mind.

This does not mean copying the brain in a simplistic way. It means identifying the relevant structures, functions, and processes that support human reasoning, and then translating those insights into engineering methods, models, and architectures.

Human cognition is not just data ingestion. It is selective, layered, adaptive, goal-oriented, and deeply constrained by time, attention, physiology, and survival priorities. A useful intelligent system must therefore be designed not only to process information, but to structure, filter, prioritize, and transform it in ways that support effective action.

That is why we speak of reasoning systems, not merely data systems.

The future of AI will not belong only to those who accumulate more data. It will also belong to those who learn how to model intelligence in a more functional, structured, and operationally resilient way.


4) From isolated tools to intelligent organizations

One of the most important implications of this work is organizational.

AI should not be introduced as a cosmetic layer, a fashionable feature, or a pretext for crude labor substitution. The simplistic fantasy that one can dismiss large portions of a workforce and replace them with a small group of people “asking questions to AI” is not a serious model of transformation.

The real opportunity lies elsewhere.

The goal is to design organizations in which humans and machines collaborate according to their respective strengths. The human brain, shaped by millions of years of evolution, brings extraordinary capabilities in abstraction, contextual understanding, meaning formation, intuition, and adaptive judgment. Machines bring scale, continuity, speed, memory persistence, and the capacity to sustain certain forms of reasoning and monitoring 24/7.

The engineering task is to distribute reasoning intelligently across both.

This is what we mean by intelligent factories, intelligent administrations, intelligent research units, and next-generation command-and-control environments. These are not simply digitized institutions. They are organizations redesigned around cognitive cooperation.

Their performance will depend on how well they allocate perception, interpretation, analysis, anticipation, and decision-making between people and machines.


5) A human-centered view is not optional

Working in defense, security, intelligence, or other high-uncertainty domains tends to sharpen a basic truth: power without design discipline is dangerous.

For that reason, any serious work in AI must remain human-centered. Not in a sentimental sense, but in an architectural sense.

Human-centered AI means that systems should be aligned with human purposes, embedded in real institutions, and designed with responsibility, observability, and role clarity. It also means recognizing that the human being is not the “obsolete component” of the system, but one of its central sources of judgment, adaptation, and ethical orientation.

This is particularly important at a moment when public discourse often oscillates between naïve enthusiasm and theatrical fear.

A more rigorous position is available: AI should be developed not to erase human agency, but to improve how humans think, decide, learn, and organize.

That is why the most promising future is not one of pure machine replacement, but one of human-machine synergy.


6) What universities should do now

The university has a strategic role to play here.

AI is no longer a marginal technical topic. In many countries, it already sits at the level of state strategy, public policy, industrial policy, and national security. That alone should be enough to force a rethinking of how it is taught.

But there is a second reason.

Too many current educational programs are shaped by immediate market demand and the technical fashions of the moment. They often train users of current tools rather than builders of future paradigms. They are too narrow, too reactive, and too weak in foundational understanding.

A stronger AI education should give students a wider and deeper view. It should include not only machine learning pipelines, but also logic, reasoning, cognition, systems design, uncertainty management, human-machine interaction, organizational transformation, and societal impact.

The true purpose of AI education is not merely to produce operators of fashionable systems. It is to produce designers of intelligent systems, organizations, and ecosystems that help us live better and think better.

That is a much higher ambition, and it is the correct one.


7) The classroom will change, but so will the role of the teacher

As intelligent agents become more personalized and more cognitively adaptive, education itself will change.

A student supported by a well-designed AI agent may access, structure, and rehearse knowledge at a level that dramatically exceeds what was possible in traditional classrooms. These agents will increasingly learn the user’s psychological, linguistic, and cognitive patterns, and adapt the pace, form, and sequence of learning accordingly.

This may transform specialization, tutoring, and the personalization of knowledge acquisition.

But it does not eliminate the need for teachers. It changes their role.

The teacher’s function will move away from being a mere transmitter of information and toward something more demanding: helping students formulate better questions, confront assumptions, and develop critical judgment in relation to both the world and the intelligent systems surrounding them.

In that sense, the rise of AI should not reduce the value of education. It should force education to recover one of its highest purposes: the formation of minds capable of questioning, discerning, and deciding well.


Closing

The future of AI will not be defined by software demos alone.

It will be defined by whether we can build systems with real reasoning capacity, augment human intelligence without diminishing human agency, and design dual intelligent ecosystems where humans and machines collaborate coherently.

That is the core of our research agenda.

Not AI as spectacle.
Not AI as mere automation.
But AI as the disciplined engineering of intelligence across machines, people, and organizations.

That is where the next real transformation begins.
