AI TRENDS 2026: from regulation and data to models that can (and can’t) do everything

On 26 February 2026, AI TRENDS took place at Vytautas Magnus University (VMU), bringing together the academic community, researchers, and practitioners to discuss where artificial intelligence is heading right now. The event was aimed at people who want more than headlines and are looking to understand direction and implications: which approaches are becoming sustainable, what Europe’s regulatory landscape is signalling, how the logic of model development is changing, and where today’s promises start to meet practical responsibility.

AI TRENDS was organised as part of SustAInLivWork, the Centre of Excellence of Artificial Intelligence for Sustainable Living and Working. The project focuses on developing and applying AI-driven research and experimental development solutions for sustainable living and working, strengthening collaboration between academia, industry, and the public sector. AI TRENDS translated that mission into a clear format: short expert talks followed by discussions that highlight not only what is new, but also what it means in practice and where the limits are.

“Legal paranoia” and scenario-based risk management

Dr Mult. Paulius Astromskis

Dr Mult. Paulius Astromskis (VMU) opened the event with a metaphor that stayed in the room all day: in AI, what matters is how to be “paranoid” in a lawful and responsible way.

As AI systems become more complex and increasingly autonomous, risk management shifts from an optional add-on to the backbone of development and deployment. Astromskis reminded the audience that EU technology governance is increasingly built around a risk-based approach, and the EU AI Act classifies systems by risk level.

But his key point went beyond formal obligations: even “voluntary” compliance commitments often become de facto requirements in the real world. Organisations face a duty of care, along with reputational, contractual, and legal accountability. In other words, not managing AI risk is becoming more costly than managing it.

The Digital Omnibus and GDPR under pressure to adapt

In the second talk, Prof. Dr Saulė Milčiuvienė (VMU) extended the first theme by showing how risk thinking is embedded in legal architecture. Her focus on the Digital Omnibus highlighted a point that is often overlooked in AI conversations: trends are shaped not only by labs and code, but also by legislation, which redraws the boundaries of what is permitted, under what conditions, and at what cost.

Milčiuvienė began by citing a provocative idea: privacy may become a privilege rather than a right. This does not suggest Europe is abandoning privacy as a value; it reflects practical reality. The scale of platforms and data flows makes privacy harder to maintain with traditional governance tools. Against that backdrop, regulatory change becomes inevitable, and the Digital Omnibus is framed as an attempt to bring fragmented digital rules into a more coherent system and to address the urgent issues that arise in day-to-day practice.

The most visible pressure point is the GDPR. Milčiuvienė’s comparison of GDPR to a child’s outfit that has been outgrown captured the moment: it remains foundational, yet technological acceleration has moved beyond it in several areas.

Trust in health data and federated learning


The regulatory thread continued naturally with Dr Rytis Augustauskas (Kaunas University of Technology, KTU), who presented the logic of federated learning in the CVDLINK project.

In healthcare, AI progress starts with trust, and trust starts with architecture: design choices that keep sensitive data protected. The CVDLINK (2023–2026) example showcased an approach where the data stays with the data holder while the model travels: each institution trains locally on its own infrastructure and shares only training outputs (such as model weights) for aggregation. This enables AI training without centralising raw data and better aligns with privacy and regulatory expectations. The talk also referenced practical tooling that makes federated learning feasible in real deployments, not only as a conceptual framework.
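
To make that pattern concrete, the sketch below shows federated averaging (FedAvg), the basic aggregation scheme behind this kind of setup. It is an illustrative assumption, not CVDLINK code: the toy logistic-regression model, synthetic datasets, and client setup are invented for the example.

```python
# A minimal FedAvg sketch: each client trains locally; only weights travel.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One institution trains locally; the raw data (X, y) never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))      # sigmoid activation
        w -= lr * X.T @ (preds - y) / len(y)  # gradient step on log-loss
    return w                                   # only the weights are shared

def federated_round(global_w, clients):
    """The coordinator aggregates updates, weighted by local dataset size."""
    updates = [local_train(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Three hypothetical institutions with private datasets of different sizes.
clients = []
for n in (120, 80, 200):
    X = rng.normal(size=(n, 4))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
    clients.append((X, y))

w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
print("aggregated model weights:", np.round(w, 2))
```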

Augustauskas was explicit about the trade-off. Centralised learning can often deliver slightly higher accuracy, while federated learning offers a good enough outcome in exchange for what is critical in medicine: privacy and reliability. This, too, is a current AI trend: optimisation is no longer only about accuracy, but about the full package of accuracy + trust + legal compliance.

When regulation can’t keep up

The first session’s panel discussion, moderated by Prof. Dr Paulius Pakutinskas (Mykolas Romeris University, MRU), brought together Prof. Dr Antanas Čenys (Vilnius TECH), Prof. Dr Saulė Milčiuvienė (VMU), Dr Mult. Paulius Astromskis (VMU), Dr Kristina Šutienė (KTU), and Dr Arnas Karužas (Lithuanian University of Health Sciences, LSMU).

A key theme was straightforward: technology moves fast, regulation often lags, and the real challenge is implementation.

The discussion offered measured criticism of the compliance burden: when organisations allocate significant resources to documentation and processes, innovation risks becoming secondary. At the same time, the panel highlighted a practical danger: if rules become too complex, non-compliance can start to feel “normal”, weakening the entire trust ecosystem.

A second thread was capacity and skills. For AI oversight, reading legal text is not enough; regulators need technical and mathematical literacy to understand how systems work, and this expertise remains scarce across institutions and the public sector.

The discussion also moved into values and culture: the privacy paradox (we defend privacy in principle but trade it for convenience in practice), the need for human-in-the-loop oversight, and AI’s dual impact on education. Used as an answer machine, AI can erode critical thinking; used as a partner, it can support learning and creativity.

Large language models and the shift toward efficiency

The second session focused on large language models (LLMs) and opened with Prof. Dr Jurgita Kapočiūtė-Dzikienė (VMU), who showed how language is represented numerically, how meaning is encoded in vector spaces, how learning is driven by error, and why hallucinations are not a random bug but a systematic outcome of generative modelling.
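
As a toy illustration of those ideas (words as vectors, meaning as geometry, learning as error correction), consider the sketch below. The three-dimensional vectors are hand-picked assumptions; real models use hundreds or thousands of dimensions.

```python
# Words as vectors, similarity of meaning as the angle between them, and
# learning as error-driven updates. All values below are illustrative.
import numpy as np

embeddings = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.75, 0.90]),
    "apple": np.array([0.10, 0.20, 0.30]),
}

def cosine(a, b):
    """Cosine similarity: closer directions mean closer meanings."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words

# "Learning driven by error": nudge a prediction toward a target in
# proportion to how wrong it currently is, the essence of gradient training.
target, current, lr = embeddings["queen"], embeddings["king"].copy(), 0.5
for step in range(3):
    current += lr * (target - current)
    print(f"step {step}: {np.round(current, 3)}")
```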

A key message echoed other speakers: after the wave of ever-larger models, the direction is shifting toward smaller, specialised systems and efficiency. Bigger does not automatically mean better; speed, cost, adaptability, and control increasingly matter.

She also highlighted the European angle: LLMs are often Anglocentric, and maintaining linguistic and cultural accuracy requires local data, terminology, and consistent data curation.

Smaller models, agents, and the cost of reliability


Dr Mantas Lukoševičius (KTU) grounded that shift in practical terms. In his framing, after 2023, scaling alone stopped guaranteeing a qualitative leap. The field is moving toward approaches that let models “think”: structuring tasks, breaking them down, rewriting prompts, and solving problems through multi-step workflows. This direction naturally connects to agentic systems, where models can use external tools and resources.
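
A minimal sketch of that agentic pattern appears below: a loop in which the model plans, calls an external tool, and folds the result back into its context. The llm function is a scripted stand-in assumption; a real system would call an actual model API.

```python
# A toy agent loop: the "model" requests a tool, the loop executes it and
# feeds the result back, until the model produces a final answer.
import json

def llm(prompt: str) -> str:
    """Hypothetical model call, scripted to show the control flow: it first
    requests a tool, then gives a final answer once it sees the result."""
    if "TOOL_RESULT" not in prompt:
        return json.dumps({"action": "calculator", "input": "37 * 12"})
    return json.dumps({"action": "final", "input": "37 * 12 = 444"})

# External tools the agent is allowed to use.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        step = json.loads(llm(prompt))
        if step["action"] == "final":
            return step["input"]
        result = TOOLS[step["action"]](step["input"])            # tool use
        prompt += f"\nTOOL_RESULT[{step['action']}]: {result}"   # feedback
    return "step limit reached"

print(run_agent("What is 37 * 12?"))
```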

He also emphasised the value of smaller language models: they can run locally, cost less, adapt more easily to specific tasks, and better support sensitive-data constraints.

Examples of Lithuanian-focused applications (from text correction to news clustering, summarisation, and sentiment analysis) illustrated how specialised systems can deliver real value. At the same time, he warned that models can produce confident outputs that are factually wrong, so reliability remains a central cost and constraint.

Multimodal AI, vision-language models, and the Lithuanian data bottleneck

In the final talk, Assoc. Prof. Dr Linas Petkevičius (Vilnius University, VU) outlined another major trend: beyond pure text-based LLMs, the field is moving toward multimodal systems where images, audio, and increasingly action are part of the model’s understanding. Vision-language models are becoming the industry baseline, and the horizon is expanding to Vision–Language–Action systems that can not only interpret but also execute instructions in real-world contexts.

Petkevičius also identified Lithuania’s concrete bottleneck: data. To build robust vision-language systems for Lithuanian, the ecosystem needs paired datasets: Lithuanian images with Lithuanian descriptions and labels. Without such resources, the risk is remaining primarily a consumer rather than a creator, especially while global models stay Anglocentric and culturally distant. He also noted a security dimension: information manipulation is not only text-based, so capabilities to analyse images, video, and memetic formats are becoming increasingly important.

Closing discussion: large models without illusions

The closing discussion, moderated by Prof. Dr Tomas Krilavičius (VMU) and featuring Dr Darius Amilevičius (State Digital Solutions Agency, VSSA), Prof. Dr Jurgita Kapočiūtė-Dzikienė (VMU), Prof. Dr Simona Ramanauskaitė (Vilnius TECH), and Dr Mantas Lukauskas (Hostinger; Nexos.ai), returned to a practical question: where do LLMs genuinely deliver value, and where do they bring disproportionate risk and inflated expectations?

The overall tone was pragmatic: LLMs are powerful but not universally necessary. Hallucinations were emphasised as a systemic property: models attempt to answer even when they do not know. That shifts responsibility to process design. In high-stakes domains, organisations need verification workflows, testing, boundaries, and clear rules. The discussion also expressed scepticism toward public benchmarks: real value is measured in concrete organisational scenarios, not in leaderboard optics.
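
As a sketch of what such process design can look like, the example below gates a model’s draft answer behind a check against a trusted source, with a human fallback. The generate function and the verified-claims store are purely hypothetical.

```python
# A minimal verification workflow: a draft answer is not returned directly
# but checked first; unverified outputs are escalated to a human reviewer.
def generate(question: str) -> str:
    """Stand-in for a model call that may hallucinate."""
    return "The maximum adult dose of paracetamol is 4 g per day."

VERIFIED_CLAIMS = {
    "The maximum adult dose of paracetamol is 4 g per day.",
}

def answer_with_verification(question: str) -> str:
    draft = generate(question)
    if draft in VERIFIED_CLAIMS:  # boundary: only vetted claims pass through
        return draft
    return "No verified answer available; escalating to a human reviewer."

print(answer_with_verification("What is the maximum daily dose of paracetamol?"))
```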

The broader conclusion reflected the full event: after the “bigger” phase, the field is entering a phase of specialisation and smaller models, particularly where privacy, cost, and local contextual accuracy are critical.

The project is co-funded by the European Union’s Horizon Europe programme under Grant Agreement No. 101059903 and by the European Union Funds’ Investments 2021–2027 (project No. 10-042-P-0001).