Another Version of DigComp Is Out: DigComp 3.0.

Old Wine in New Vessels?

Between March and June, I worked closely with my team within the ETH-TECH project (Anchoring Ethical Technology – AI and data in education) on the analysis of a large corpus of course syllabi from pre-service teacher and educator education programmes across several European countries. The study explored how the ethics of AI and data are represented in these public curricular documents, understood as artefacts that crystallise pedagogical priorities, institutional assumptions, and normative orientations. The findings revealed a limited, fragmented, and often implicit presence of ethics, frequently reduced to generic formulations or absorbed into instrumental competence discourses, with notable variations across national contexts (Raffaghelli & Negru-Subtirica, 2025).

This analytical phase was followed by a series of Awareness Raising Sessions conducted in four countries, designed as spaces for situated reflection with faculty and students. These sessions exposed the persistent tensions between European policy frameworks, formal curricula, and everyday teaching practices, as well as the difficulty of translating ethical principles into concrete pedagogical action (Negru-Subtirica, Raffaghelli & Marinica, 2025).

This empirical trajectory led me to reflect on the strong presence and authority of European digital competence frameworks within the European educational landscape. This is particularly evident in countries such as Italy and Spain, where EU recommendations carry significant weight, not least due to structural dependencies on European funding for educational research and innovation.

Against this backdrop, it felt necessary to pause and critically engage with the newly released DigComp 3.0.

My engagement with DigComp is not new. I was deeply involved in the development of DigComp 2.2, coordinating the Data Literacy working group. That work later bifurcated into two strands: one focused on informational data literacy (reading and using data), and another addressing algorithms, data capture, and AI systems in what was still a pre-generative-AI phase. DigComp 2.2 was published just a few months before November 2022, when the sudden public release of ChatGPT dramatically reshaped both public and academic debates. Unsurprisingly, the framework became rapidly outdated.

As already suggested by the research I coordinated at the time, calls for a more critical perspective on technology were increasingly entering mainstream discourse, while the expanding capabilities of AI pushed ethical questions to the centre of educational debate. DigComp 3.0 appears, at least on the surface, to respond to this shift. The question, however, is how far it actually does so.

My reading of DigComp 3.0 is informed by a critical tradition in educational technology, particularly the work of Neil Selwyn, who repeatedly reminds us that educational technologies should not be judged by their promises or aspirations, but by their material consequences. This means asking uncomfortable questions about power, inequality, governance, and political economy: who defines the problems these technologies claim to solve, who benefits from their adoption, and which interests are normalised through seemingly neutral discourses of competence, innovation, and responsibility.

Here is a video of the presentation delivered on December 12th (with references to the documents):

To begin with, the process entailed a literature review, something I had never seen in previous DigComp versioning workflows, which relied mostly on experts’ professional and academic knowledge. The review took into consideration papers generated in Europe and beyond. I was part of the team producing the “Critical Digital Competence” report, and I must say there was a rather “pre-critical” vision in that group: an acknowledgement of the issues and harms of using technology badly, but little awareness of power, social structures and companies’ interests, something very clear to Critical EdTech Studies. I watched the video (above) and read the report describing the process of elaboration of this new framework.

Also, I went through the open data generated, which is very interesting and could be the object of discourse analysis. I might try, though it is not in my immediate plans, since my pipeline is rather “clogged” 😀 (a rough sketch of what a first pass could look like follows below). So here are some highlights from my notes while reading. I’d love to get your comments if this is of interest, particularly on how this document might cross our project’s pathway (that could end in a post positioning our project, but let’s see, this exercise is not an obligation!)
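For the curious, here is a minimal sketch of the kind of first lexical pass I have in mind, which is of course far from real discourse analysis. The file name and column name are my own assumptions about how the open data might export as a CSV of learning-outcome statements, not the actual structure of the JRC release:

```python
# A rough lexical count that could precede proper discourse analysis.
# ASSUMPTIONS: the open data is exported as a CSV named
# "digcomp30_learning_outcomes.csv" with a free-text column "statement";
# both names are hypothetical placeholders, not the JRC's actual export.
import csv
import re
from collections import Counter

# Terms whose presence or absence I comment on in this post.
TERMS = ["ai", "cybersecurity", "wellbeing", "rights", "misinformation",
         "responsible", "reflective", "self-regulated"]

counts = Counter()
with open("digcomp30_learning_outcomes.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row.get("statement", "").lower()
        for term in TERMS:
            # Count each outcome once per term, not every repeated mention.
            if re.search(rf"\b{re.escape(term)}\b", text):
                counts[term] += 1

for term, n in counts.most_common():
    print(f"{term}: {n} learning outcomes")
```

Counting words is not discourse analysis, naturally; it only tells you where to start reading.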

The first thing is that DigComp 3.0 continues the “competence” discussion launched in Europe in the 1990s. The framework keeps the old categories and updates the definition of digital competence to encompass traditional digital skills (like using tools, managing information, solving problems; the language adopted is KSA: Knowledge, Skills and Attitudes). Keeping the old structure is efficient when the framework is used as an instrument for policy making and training: there is a common, already well-known background for everyone. I read the launch event and documentation as an action to position DigComp 3.0 as a common policy language and as a basis for many certification schemes. It is meant to align with EU strategies on digital skills, digital rights and the digital economy (this is also a tool of soft power, as we read in the article by Luci Pangrazio and Julian Sefton-Green, I think). This version builds on the prior one by formulating the KSA in context, to help users engage with the areas and levels of competence. There is effort put into making this work in practical terms: over 500 learning outcomes linked to four proficiency levels (Basic to Highly Advanced), making it easier to design curricula (see the sketch below).
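To make that practical claim concrete: curriculum design against the framework essentially becomes a filtering exercise over those learning outcomes, by competence and by level. Here is a toy sketch of that logic; the record shape, the two middle level labels and the sample statements are my own illustrations, not the JRC’s:

```python
# A toy model of the framework's structure as I read it: each of the
# ~500 learning outcomes belongs to one of the 21 competences and is
# pinned to one of four proficiency levels.
from dataclasses import dataclass

# The endpoint labels come from the document; the middle two are my guess.
LEVELS = ["Basic", "Intermediate", "Advanced", "Highly Advanced"]

@dataclass
class LearningOutcome:
    competence: str   # one of the 21 competences
    level: str        # one of the four proficiency levels
    statement: str    # the KSA formulated in context (invented examples below)

outcomes = [
    LearningOutcome("Protecting personal data and privacy", "Basic",
                    "Knows that online services collect personal data."),
    LearningOutcome("Protecting personal data and privacy", "Advanced",
                    "Can assess the privacy implications of an AI-based service."),
]

# Designing a module then means selecting records by level (or competence).
basic_module = [o for o in outcomes if o.level == "Basic"]
for o in basic_module:
    print(f"[{o.level}] {o.competence}: {o.statement}")
```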

As for the content, I found words like AI, cybersecurity, wellbeing, rights and misinformation. Wellbeing and cybersecurity were already in the prior version (wellbeing was actually very important), but rights and misinformation were not that visible. Here is how the key elements are placed:

  • Cybersecurity and digital safety (not just “use tech” but understand threats).
  • Digital rights, choice and responsibility (your entitlements and how tech shapes them).
  • Wellbeing online (mental, physical and social consequences of tech use).

As you can see, the definitions are quite focused on the individual, not on society and/or communities, something we discuss (and which is very present in our intellectual production and effort here at ETH-TECH).

Of course, as you all might have expected, AI isn’t a footnote anymore. In the 2.2 version there was a whole section, but GenAI had not yet been released, so the concerns were different (I coordinated the data literacy working group, so I can speak to the discussions inside the groups). AI competence now appears as a systematic element integrated across all 21 competences. That means people should understand AI not just as a tool but as something that permeates digital environments (including generative AI and its ethical implications, per the JRC document).

And here we go with the “critical part”. The harms are indeed considered a critical issue, and there is a claim for the subject to move from functional skills toward critical and reflective capacities, in order to function in a technological, digitally transformed society. There is no contestation of whether we actually need more digital; that is taken for granted (I am playing a double game here, for I accept moving in the direction of digital transformation while maintaining that we don’t need THIS digital). To me, most of DigComp 3.0 is still framed as skills enhancement rather than as a structural critique of why digital competence is unevenly distributed, or of how power and data infrastructures shape digital participation (that’s where the real equity questions live).

But as far as EU policy tools go, DigComp 3.0 anchors these messier socio-technical concerns into what counts as “competence” rather than leaving them on the margins. Could the EU do differently? Can we do differently? Going through the document, I find “critical issues” (problems with wellbeing, cybersecurity, data privacy, quality of information) which can be dealt with by becoming an “informed, reflective, responsible and self-regulated citizen”; these are the words I found in the open data. There is, therefore, no engagement (of the kind I found in some French working documents supporting policy making from the Ministry of Education) with things like geopolitics; platform capitalism and private interests operating in a public space; data extraction regimes and the uncomfortable position of the EU in this context; data governance asymmetries (even researchers have trouble using private companies’ data/information, while governments are obligated to produce tons of public open data); and, of course, the labour of citizens, teachers and students in testing and improving the dominant digital infrastructures.

I was asking myself: where is the ethical take? Should we keep on using “ethical” as a replacement for “critical”, with its academic connotations?

I struggle between what we are expected to produce in this project, the soft (and hard!) power we feel operating within an EU project framework, and what we discuss in our scholarship. Will we ever be perfectly aligned with our values? This is an ethical dilemma, actually. The real critical work happens around DigComp, I guess: interrogating its assumptions, exposing what it cannot say, using it as an object of critique rather than a solution. But could this effort leave us isolated and solipsistic?

On a sarcastic note, I asked myself: with my work at the ETH-TECH project, am I closer to Selwyn or to Brussels?
