Veronica Sacco* / March 23, 2026
Commentary n. 031 NS/2026
The decision of the University of Latvia to adopt Senate-level regulations on the use of artificial intelligence in the study process represents a highly significant institutional development, both within the Latvian higher education system and in the broader European academic landscape. It signals a transition from a reactive to a proactive model of governance, where universities do not merely respond to technological change but actively shape its integration into educational practices. From an analytical perspective, this initiative reflects what may be defined as normative institutional innovation: the capacity of academic institutions to internalize external technological shocks—such as the rapid diffusion of generative AI—by translating them into formalized rules, principles, and accountability mechanisms.
The four pillars identified by the University of Latvia—integrity, transparency, responsibility, and data security—are not only ethical guidelines but also foundational elements of a new epistemic framework in which knowledge production is increasingly hybrid, involving both human and algorithmic contributions. Particularly noteworthy is the balance achieved between regulation and flexibility. By delegating to individual instructors the authority to determine the permissible scope of AI use within specific courses, the Latvian model preserves academic autonomy while embedding it within a shared normative structure. This reflects a federal-like logic within the university system itself: central principles coexist with decentralized implementation. Such an approach is consistent with broader European trends in multi-level governance, where coordination does not eliminate diversity but rather structures it.
In addition, the explicit limitation on the use of AI-detection tools as sole evidence of academic misconduct is of crucial importance. This provision acknowledges the current epistemic limits of algorithmic detection and avoids the risk of over-reliance on opaque technological systems in evaluative processes. In doing so, the University of Latvia reaffirms a fundamental principle of higher education: the ultimate responsibility for assessment remains human, interpretative, and context-sensitive.
When compared to the Italian context, the Latvian case appears particularly advanced. Italian universities, while increasingly aware of the challenges posed by generative AI, have so far tended to adopt fragmented and often informal approaches, typically relying on departmental guidelines or general recommendations rather than binding, institution-wide regulations. Although initiatives by bodies such as the Conferenza dei Rettori delle Università Italiane (CRUI) have begun to address the issue, a coherent and legally grounded framework comparable to that of the University of Latvia is still largely absent. This reflects a broader structural characteristic of the Italian higher education system, where regulatory change often proceeds incrementally and with significant delays relative to technological developments.
This delay is not neutral. On the contrary, the absence of clear regulation generates a series of systemic risks. First, it creates uncertainty for both students and faculty: in the absence of shared standards, the boundary between legitimate and illegitimate use of AI remains ambiguous, undermining the very notion of academic integrity. Second, it produces inequalities across institutions and even within the same university, as different courses or departments may adopt inconsistent or contradictory practices. Third, and perhaps most importantly, it encourages a tacit normalization of unregulated AI use. Students are already using generative tools extensively; pretending otherwise does not preserve academic standards—it erodes them by pushing these practices into a grey zone where they escape both pedagogical guidance and ethical scrutiny. In this sense, the most significant risk for universities is not the misuse of AI per se, but the institutional denial of its existence. Ignoring AI does not prevent its diffusion; it merely relinquishes the university’s role in shaping how it is used.
Such a stance risks producing what could be termed a hidden curriculum of AI: students learn to use these tools informally, without critical awareness, without transparency, and without a clear understanding of their limitations. The long-term consequence is a weakening of core academic competencies, particularly in areas such as argumentation, original writing, and critical thinking. Conversely, the Latvian initiative can be interpreted as a case of early institutional adaptation, positioning the University of Latvia—and potentially the Latvian system more broadly—at the forefront of educational governance in the age of artificial intelligence. It also aligns with the regulatory trajectory established at the European level, particularly with the adoption of the EU Artificial Intelligence Act (2024), which emphasizes risk-based approaches, transparency, and accountability in AI deployment.
More broadly, this development highlights a fundamental transformation in the nature of education itself. As AI tools become embedded in cognitive processes, the objective of higher education can no longer be limited to the transmission of knowledge but must increasingly focus on the development of critical, interpretative, and meta-cognitive skills. Regulating AI, therefore, is not simply a matter of preventing misuse; it is a strategic necessity to ensure that universities remain relevant as institutions of knowledge production and transmission in the 21st century.
*Young Visiting Fellow, Fondazione CSF
