Spain and the AI Path That Suits Latin America
From the Scale Game to the Reasoning Game
In this publication, we welcome our partner and co-author Ramón Casilda Béjar, economist and member of IELAT – University of Alcalá.
The relationship between Spain and Latin America in artificial intelligence (AI) is being reshaped into a strategic partnership that aims to move beyond dependence on the dominant English-language large language models (LLMs), placing Spanish-language digital sovereignty—and reasoning, explainability, and applied AI—ahead of sheer scale.
Spain’s strategy, set out in the National Artificial Intelligence Strategy 2024 and España Digital 2026, offers a bridge to Latin America in an international landscape dominated by two main powers, the United States and China, both focused on building massive base language models. These models are characterized by heavy energy and data consumption and are designed to maximize generalization.
That approach, however, comes with high costs and a form of technological dependence that doesn’t always favor regions with limited resources. Spain isn’t walking away from scale—the Artificial Intelligence Strategy 2024 reinforces high-performance computing through MareNostrum 5 and develops home-grown language models—but it pairs that effort with a focus on reasoning, open public infrastructure, and the practical application of AI, supporting open models such as the ALIA family. The ALIA 40B Instruct model, developed at the Barcelona Supercomputing Center (BSC-CNS), stands out: a multilingual model trained on MareNostrum 5 with an Iberian and European focus, paying special attention to Castilian Spanish and Spain’s co-official languages (Catalan/Valencian, Basque, and Galician), as well as English, with adaptation to the sociocultural features of its environment. Its open nature allows it to serve as a base or reference for future adaptations to local contexts in Latin America.
This strategy makes the most of the wealth of Spanish-language data available, which can improve the linguistic coverage of these models. Effective adaptation to Latin America will, however, require country-, sector-, and dialect-specific corpora, evaluation, and tuning, in order to avoid reliance on translations or biased models that fail to address the region’s real needs.
Spain, as already happens in the investment arena, can position itself as a tech hub between Europe and Latin America, drawing on capabilities such as BSC-CNS, ALIA, and its business-connection programs with the region. BSC-CNS, one of Europe’s most powerful centers, provides computing services that are essential for developing and deploying AI.
Spain is also pushing for regulatory cooperation, seeking to align AI policies with Latin America under the European vision of “trustworthy AI”—safe and ethical. At the domestic level, that vision is overseen by the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which acts as the national supervisory and reference authority for the application of the EU AI Act, in coordination with other competent authorities, and which also carries out advisory, inspection, training, and voluntary-certification functions. Its experience can serve as a reference in conversations with the region. Building up the startup ecosystem is another strategic pillar: programs such as Spain-Latam Scale-up help Latin American scale-ups land in the Spanish ecosystem as a gateway into the European market, including tech and AI companies, although the program’s scope is not limited to AI alone.
Key pillars of Spain–Latin America cooperation
Spain maintains an active cooperation strategy with Latin America centered on the development of ethical AI, technological sovereignty, and talent exchange. A possible shared agenda for 2026 could focus on linguistic and digital sovereignty as a priority, built on the following pillars:
Regulatory convergence. This is one of the main challenges, and cooperation must be strengthened to secure a common, solid framework for AI.
Infrastructure and supercomputing. Memoranda of understanding have been signed—such as the one with Chile—to collaborate on supercomputing, knowledge exchange, and AI development.
Security and the fight against crime. The European Union, with Spain’s support, promotes the use of AI to combat organized crime in the region while reinforcing digital sovereignty.
Latin American AI Index (ILIA). The index measures AI progress across 19 countries and encourages the use of AI in the service of people.
Latam-GPT and linguistic sovereignty. Spain, through the data corpus it has contributed, has supported the development of Latam-GPT, an open and collaborative language model whose construction involved more than 65 institutions across 15 countries, including 13 from Latin America and the Caribbean. The project is designed to strengthen the region’s linguistic and cultural sovereignty, especially in Spanish and Portuguese, with attention to local variants and indigenous languages.

Latam-GPT marks a milestone and a key step toward technological sovereignty. The result of a regional alliance unveiled on February 10, 2026, it is the first open LLM designed from and for Latin America and the Caribbean. The project was coordinated by Chile’s National Center for Artificial Intelligence (CENIA), with the support of CAF, Amazon Web Services, and Data Observatory, alongside governments, universities, multilateral organizations, and tech companies. The model was developed on top of a Llama 3.1 base architecture with 70 billion parameters, complemented by a regional corpus and benchmarks adapted to the Latin American context, and built on clear ethical principles, with data selection and documentation processes that ensure transparency and responsible use.

There is, therefore, a clear opportunity for Spain to connect its ALIA strategy with Latam-GPT and deepen linguistic and technical cooperation between both ecosystems.
Spain’s leadership. According to a UNIR study, Spain leads AI development in the Spanish-speaking world, reinforcing its role as a benchmark technology partner for Latin America.
Madrid 2026 Ibero-American Summit. A regional AI initiative is expected to be advanced and submitted for approval at the Ibero-American Summit in Madrid, scheduled for November 4–5, 2026, and preceded by the first Ibero-American Digital Forum on November 3–4.
In short, Spanish cooperation seeks to ensure that Latin America does not merely adopt technology but actively takes part in its creation and regulation, with particular emphasis on the Spanish language and on ethics.
Benefits for Latin America
According to estimates from the World Economic Forum and McKinsey, accelerating AI adoption in Latin America could lift regional productivity by 1.9% to 2.3% per year and generate between $1.1 trillion and $1.7 trillion in additional economic value annually. Spain can contribute to that agenda as a technological, regulatory, and linguistic partner, though those figures cannot be attributed to a “Spanish route” alone. In terms of technological sovereignty, this cooperation allows Latin American countries to reduce their dependence on foreign vendors, ensuring that the technology they deploy aligns with their own regulatory and cultural frameworks.
All of this also reinforces sectoral specialization, prioritizing AI applied to key areas such as agri-food, healthcare, education, and public administration—moving beyond a generalist adoption phase and promoting solutions tailored to the region’s real needs.
On the investment side, Spain’s Artificial Intelligence Strategy 2024 was endowed with an additional €1.5 billion—mostly drawn from the Recovery, Transformation and Resilience Plan and its addendum—on top of the €600 million already mobilized. This commitment reflects the country’s bet on technological development and international cooperation, in which Spain can act as a strategic partner in infrastructure, supercomputing, talent, and regulation. The challenge isn’t to compete on the high cost of training enormous base models, but to lead in the ethical application and customization of artificial intelligence.
From the scale game to the reasoning game
On top of all this, the global conversation about artificial intelligence has turned binary: either you join the race for ever-larger models—more data, more compute, more data centers—or you accept a passive role as a consumer of foreign technology. That framing leads to a poor public-policy choice. For Latin America, the question isn’t whether AI will arrive—it already has—but what kind of AI is worth accelerating with scarce fiscal resources and uneven infrastructure.
The dominant paradigm rests on a triad: large volumes of data, accelerated hardware (GPUs and specialized chips), and data centers capable of sustaining the power, cooling, and connectivity needed at scale. It is an economy that rewards those who already have capital, supply chains, cloud infrastructure, and energy capacity. Trying to replicate it from the technological periphery tends to be asymmetric. The risk is ending up paying the costs in the form of energy, water, and contractual dependence, without capturing the benefits—intellectual property, autonomy, control of the value chain, or resilience.
The asymmetry is sharpened by the fact that the region starts from heterogeneous conditions. Connectivity and equipment remain uneven. The cloud doesn’t reach rural schools with the same quality as it reaches capital cities, and advanced computing is concentrated in just a few hubs. Add to that a structural investment gap created by global AI capital, which is allocated based on platform control and profitability rather than social urgency. In that context, “winning by scale” tends to deepen dependence on hyperscalers, imported hardware, and critical services beyond local control.
But there is a third path—more realistic and more useful for the state—a roadmap centered on reasoning, causality, and structured knowledge, with architectures designed to operate under real-world budget, energy, and connectivity constraints. This isn’t about giving up on statistical learning. It’s about shifting the focus from massive accumulation toward the capacity to decide well.
AI, moreover, is not a monolithic block. There are knowledge-based approaches—rules, ontologies, explicit representations—and hybrid approaches that are gaining traction today for the simple reason that the public sector needs traceability. In justice, healthcare, defense, or regulation, a system that recommends but cannot explain “why” creates a legitimacy problem.
Several lines of work today offer paths that are less data-hungry. Neuro-symbolic AI combines learning with logical structures to improve reasoning and control. Knowledge graphs integrate scattered information and enable understandable inferences. Causal approaches help answer the question that matters most in public policy: “what will happen if we intervene?”. In parallel, agentic architectures and cognitive systems organize decisions in dynamic environments where data is scarce or sensitive. And on the deployment side, efficient models and edge computing reduce latency and dependence on connectivity, opening up uses in remote hospitals, schools with patchy networks, and far-flung municipalities.
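The causal question above can be made concrete with a toy example. The Python sketch below uses an invented structural model (budget, teacher training, and test scores are placeholder variables, and the coefficients are made up for illustration) to show how an explicit intervention, do(training = x), recovers an effect that a naive observational correlation would overstate because of confounding.

```python
import random

# Toy structural causal model (purely illustrative; variable names and
# coefficients are invented for this sketch, not real policy data).
# "budget" confounds both teacher training and test scores:
#   budget -> training, budget -> scores, training -> scores
random.seed(0)

def sample(do_training=None):
    budget = random.gauss(0.0, 1.0)
    if do_training is None:
        training = budget + random.gauss(0.0, 0.5)  # observational regime
    else:
        training = do_training                      # intervention: do(training = x)
    scores = 2.0 * budget + 1.0 * training + random.gauss(0.0, 0.5)
    return scores

N = 50_000
do1 = sum(sample(do_training=1.0) for _ in range(N)) / N
do0 = sum(sample(do_training=0.0) for _ in range(N)) / N

# The interventional contrast recovers the true effect of training (1.0
# in this toy model), while a naive observational correlation would be
# inflated by the budget confounder.
print(round(do1 - do0, 2))
```

Simple simulations like this are, of course, a far cry from policy-grade causal inference, but they illustrate why "what happens if we intervene?" is a different question from "what correlates with what?", and why answering it does not require massive models.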
This isn’t about academic theory—it’s about an opportunity to build a state-grade AI with capabilities that strengthen effective sovereignty and institutional resilience. Wherever the state has both mandate and advantage—institutional knowledge, jurisdiction, regulatory capacity—there is room to act.
Measuring success by model size is an imported temptation. Measuring it by verifiable improvements in public-sector productivity, security, service quality, and institutional trust is strategy.
What does this mean in concrete sectors?
Defense and security. In a hybrid-threat environment, the advantage doesn’t come from training a giant model but from decision-support systems that integrate scattered signals, handle uncertainty, and explain their recommendations. Reasoning- and agent-based architectures can deliver fast returns without requiring massive compute centers.
Healthcare. Clinical data is sensitive, and massive centralization is not always feasible or desirable. Combining frugal models deployed locally, privacy-preserving techniques, and causal frameworks for evaluating interventions makes it possible to improve decisions without putting rights at risk or relying on perfect connectivity.
Education. If the digital divide persists, betting on solutions that assume always-on cloud access only widens inequality. It makes more sense to prioritize tools that work with limited connectivity (teacher support, structured content, explainable analytics) and to treat talent as infrastructure, with teacher training and specialization tracks.
Regulation, justice, and spending oversight. Here there are immediate gains to be had with explainable AI: prioritizing inspections, detecting anomalies, improving public procurement, and strengthening accountability through traceable evidence and criteria.
Energy and the environment. AI’s material footprint tends to be left out of the public debate. But intensive data-center deployments put pressure on power grids and water resources and create local bottlenecks. AI policy should therefore build in efficiency criteria from the outset, transparency requirements regarding energy and water consumption in public procurement, and planning that prevents digitalization from competing with climate goals or from driving up socially sensitive electricity tariffs.
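The inspection-prioritization point under regulation and spending oversight is one place where explainability can be made concrete cheaply. The Python sketch below is a minimal, hypothetical example (the risk factors, thresholds, and weights are invented for illustration): each criterion is an explicit, auditable rule, so every priority score comes with the list of criteria that produced it.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    price_deviation: float  # fraction above the median contract price
    single_bid: bool        # contract awarded without competing bids
    prior_findings: int     # number of past audit findings

# Explicit, auditable risk rules: (label, condition, weight).
# Factors and weights are hypothetical, chosen only to show the pattern.
RULES = [
    ("price 20%+ above median", lambda s: s.price_deviation >= 0.20, 3),
    ("single-bid award",        lambda s: s.single_bid,              2),
    ("prior audit findings",    lambda s: s.prior_findings > 0,      1),
]

def score(s: Supplier):
    """Return (total score, list of rule labels that fired)."""
    fired = [(label, w) for label, cond, w in RULES if cond(s)]
    return sum(w for _, w in fired), [label for label, _ in fired]

suppliers = [
    Supplier("A", price_deviation=0.35, single_bid=True, prior_findings=2),
    Supplier("B", price_deviation=0.05, single_bid=False, prior_findings=0),
]
for s in sorted(suppliers, key=lambda x: score(x)[0], reverse=True):
    total, reasons = score(s)
    print(s.name, total, reasons)
```

The point of the design is traceability: an inspector can challenge any individual rule, and an oversight body can audit exactly why one case was ranked above another, which is what "traceable evidence and criteria" requires in practice.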
Out of this approach comes an actionable public agenda that requires defining “sovereign AI” by decision-making capabilities (traceable decision-making, data governance, and hybrid deployment); building lightweight knowledge infrastructure (sector-specific ontologies and graphs as interoperable public goods); and moving toward efficient implementation, transparency in public procurement, and high-value state pilots with rigorous evaluation and the transfer of capabilities to the state.
In short, the scale game benefits, by design, those who control capital, compute, and platforms. Latin America shouldn’t resign itself to being a passive consumer, but it also makes no sense to mortgage budget and sovereignty in a race that begins lopsided. The competitive alternative is a frugal, explainable, reasoning-oriented AI, designed for the real problems of the state and for the real constraints of the region.
In the end, this is a political choice. The future isn’t determined by model size but by the strategic clarity to choose the path that maximizes public value, reduces dependence, and strengthens institutions. That is the debate the region needs to have.