Knowing Without Thinking: How AI is Changing Our Relationship with Information

Artificial intelligence (AI) is rapidly penetrating our daily lives – from smart searches to decisions made with just a few clicks. However, this technological progress raises not only fascination but also questions: does AI enhance our thinking abilities, or conversely, does it suppress them?

Dr. mult. Paulius Astromskis, Associate Professor at Vytautas Magnus University (VMU) and a member of the SustAInLivWork project developing an AI competence centre, argues that, on the one hand, AI significantly eases everyday situations – it instantly collects and processes preliminary information, enabling faster and more rational decision-making.

“On the other hand, such reduction in cognitive load can develop into cognitive apathy – that is, laziness to think. This may weaken motivation and the ability to think actively, while also reducing self-confidence, critical thinking, decision-making, leadership and managerial skills,” he explains.

Cognitive Apathy

Astromskis notes that by constantly relying on AI-generated answers, it is easy to become accustomed to quick results obtained with minimal effort – but this comes at the expense of the thinking process.


“Overusing this technology, especially among children, school pupils and university students who are still developing thinking skills, carries the risk of losing deeper analysis and critical thinking abilities. This could have negative consequences later in life, when individuals need to independently evaluate information and make informed decisions,” he warns.

The VMU researcher highlights that continuous use of AI may weaken a sense of responsibility, as it becomes easier to shift blame onto another – especially when that ‘other’ is a machine or a tool. This can encourage avoidance of personal responsibility and commitments, reduce manifestations of leadership, and cause decision paralysis, particularly in situations requiring independent action without technological aid.

“AI can sometimes make better decisions than humans – especially when analysing large datasets, performing complex calculations, objectively assessing various criteria or reacting swiftly to changes. However, AI still lacks intuition, empathy, the ability to assess contextual details, and to adopt creative and innovative solutions,” he adds.

According to Astromskis, the ideal balance between AI assistance and human decision-making occurs when AI is used responsibly – as a tool offering possible alternatives, with the final decision made by a person critically evaluating information from multiple sources. This can be achieved by applying the so-called triangulation principle, where information, opinions or advice come from three different sources. One source may be generative AI; another, more thorough research, for example via Google; and the third, the opinions of colleagues, experts or other trusted people.

“With three distinct sources of information, the data needed to make a decision is usually sufficient, and the quality of the decision high. This is the essence of critical thinking: to learn, check, and double-check – as the old saying goes, ‘measure nine times, cut once.’ Generative AI offers a unique opportunity to have one easily accessible source of information on almost any topic, but it should be only one of several sources, not the sole one,” he emphasises.

The Importance of Formulating Questions

The VMU scholar explains that AI technology usage in the European Union (EU) is regulated by the AI Act, which, although currently in a transitional phase, already defines clear provisions – from prohibited practices to strict requirements for high-risk AI systems and transparency rules for lower-risk cases. This means the EU regulatory environment has gone beyond ethical guidelines or recommendations and become legally binding.

“Of course, ethical guidelines, good practice examples, scientific research, cultural factors and community pressure remain important elements of regulation and self-regulation. However, principles of responsible and trustworthy AI have already been incorporated into legal imperatives – failure to comply with these can result in severe sanctions,” he adds.

Astromskis suggests that, in theory, AI systems could be developed based on Socratic dialogue principles – where answers are not simply given but discovered through dialogue, consistent questioning and reflection.

However, it is unlikely that such solutions – encouraging thinking, raising questions and promoting analysis – would be commercially more attractive than those that deliver quick answers with no extra effort. Therefore, the main role here lies less with the architecture of the systems themselves and more with the education system.

“Critical thinking and the ability to formulate questions must be cultivated consistently – from early childhood education settings such as nurseries, through schools and universities, and across all fields of study. If generative AI becomes an unavoidable reality that can provide answers to almost any question, one of the most important skills will no longer be the ability to generate the answer, but the ability to ask the right question,” he notes.

Generative AI responds to any query, so to obtain a valuable, high-quality answer, it is essential to be able to formulate precise questions. In this context, so-called “promptology”, or prompt engineering, is not only a new but also a vital skill set, helping people navigate the complex world of information.

“As the good soldier Švejk once said (a character from Jaroslav Hašek’s satirical novel The Good Soldier Švejk): ‘It has never yet been that it wasn’t somehow.’ Even in the greatest chaos, a certain order, patterns, and structures eventually emerge. The coming generation will be different from the previous ones, but in the history of humankind, this is nothing unusual. Just as our ancestors managed to adapt, so will we,” concludes Astromskis.

The project is co-funded by the European Union's Horizon Europe programme under Grant Agreement No. 101059903 and by the European Union Funds’ Investments 2021–2027 (project No. 10-042-P-0001).