The challenge of digital technology for educational organizations

A learning theory for the Digital Age: Connectivism

Siemens begins from a powerful intuition: the classical theories of learning — behaviorism, cognitivism, constructivism — were formulated in a period when knowledge was scarce, relatively stable, and largely located in books, teachers, or institutions. Today, knowledge grows exponentially and becomes obsolete with great speed. The “half-life of knowledge” is shrinking. What we learn today may become outdated within a few years — even months. In this context, learning can no longer simply mean “storing information in memory.”

In 2005, Siemens published “Connectivism: A Learning Theory for the Digital Age” to explain how learning functions in a society where we relate to one another and learn through the internet, through mobile devices, and through connecting to networks of people and sources of information on a global scale.

For Siemens, we learn by developing networks of interaction that provide us with information or knowledge. We select certain channels on YouTube, Instagram, or TikTok; we consult specific applications; we navigate the web through particular searches; or we “converse” with systems such as ChatGPT through our prompts or instructions. Through all these actions, we become part of a network, as if we were a node in relation to many other nodes.

From this perspective, the entity that learns is not merely the individual subject, but the network itself. Intelligence, therefore, is not only individual but also collective. Accessing networks — and developing one’s own network within them — shapes the possibilities of learning. Tell me which networks you consult and which interactions you maintain within them, and I will tell you who you are and who you are learning to become.

In sum, Connectivism rests on two fundamental principles:

  • Your learning is your network of connections. In an environment where information is abundant, deciding what to ignore is just as important as deciding what to learn. Each time you choose which accounts or channels to follow, which authors to read, which videos to consume, and which sources to consider trustworthy, you are actively shaping your future professional identity. Connecting once is not enough. Networks must be cultivated: following updated sources, verifying and contrasting information, engaging in dialogue with professional communities, and participating in spaces where ideas circulate. A teacher who becomes professionally isolated disconnects. A teacher who participates in educational communities, pedagogical networks, professional development, conferences, or digital spaces expands their network and, consequently, their capacity to learn.
  • Intelligence is also collective, and knowledge is distributed. Siemens challenges the notion of the isolated subject. Intelligence is not solely individual; it emerges from networks. Knowledge is not concentrated in a single place but distributed across interconnected systems. We may think, for instance, of Wikipedia, communities of educators sharing resources, or international collaborative projects. The scale of learning, therefore, is not confined to the boundaries of the individual, as suggested by the Cartesian tradition and much of modern psychology, from Freud to Piaget or Vygotsky. Intelligence is, in essence, collective and trans-individual. Each subject, each node within the network — including technological nodes such as a YouTube channel or an Artificial Intelligence system — participates in evolving webs of interaction.

And what about schools? What mission do they serve, and how should they function from a connectivist perspective? From this theoretical standpoint, a school is not merely a collection of individuals or groups learning separately. It is a network of relationships. The quality of those connections determines the organization’s capacity to learn.

The school ceases to be the place where knowledge is “transmitted” and instead becomes a strategic node within a much broader learning network. Its mission is no longer to accumulate content in students’ memory, but to teach learners how to construct, evaluate, and sustain their own networks of knowledge. Schools should function as spaces where students learn to distinguish reliable sources from informational noise, connect ideas across domains, collaborate with other nodes (people, communities, technologies), and update what they have learned as contexts evolve.

Rather than focusing exclusively on what to know, the central function of schooling becomes cultivating the competence of knowing how to connect — developing judgment, critical thinking, and navigational capacity within complex information ecosystems.

In an era of cognitive overabundance, the school is not a repository of knowledge, but a workshop for learning how to orient oneself within networks.

The challenge starts with our human condition: are we, or will we become, cyborgs?

Yet beyond these different scenarios shaped by digital technology, the technological revolution seems to be pushing us even further. It points toward an uncertain future in the relationship between human beings and technology, where both sides of this relationship appear to be evolving toward highly diverse forms of integration and convergence.

On the one hand, humans are incorporating technology not only into the ways they live and communicate, but increasingly into their own bodies. A striking — and already real — expression of this interdependence between humans and technology can be found in cyborg artists such as Neil Harbisson, who “hears” colors through an antenna implanted in his brain; Manel de Aguas, who perceives complex environmental data, such as atmospheric pressure, through sensory devices installed on his head; or Moon Ribas, who senses seismic activity through subcutaneous sensors in her feet.

Without reaching such extremes — perhaps only a matter of time — we are nevertheless progressively binding our bodies to technology, whether through constant internet connectivity via mobile devices or through wearable technologies: watches, glasses, headphones, and a growing array of bodily extensions (and, perhaps soon, subcutaneous chips).

We are, it seems, becoming cyborgs. 


In the opposite direction, machines, robots, and digital applications — driven by increasingly sophisticated developments in Artificial Intelligence — are acquiring capacities and behaviors that until recently seemed exclusively human. We may not be far (perhaps only a matter of years) from what technology experts describe as “singularity” or “superintelligence”: the possibility that AI systems could design, program, train, and even educate other machines.

Machines, therefore, may become progressively more “human” — or, at the very least, more intelligent, adaptive, and interactive.

We may, then, be approaching the threshold of a cyborg era, a turning point that some compare to earlier transformative moments in the history of Homo sapiens, such as the domestication of fire or the emergence of spoken language.

How might such a profound transformation reshape our educational institutions?

And as if this were not enough… now, Generative Artificial Intelligence

Artificial Intelligence is not an encyclopedia that “knows the truth,” nor a brain that thinks as humans do. It is a statistical system trained on vast volumes of data, designed to predict which word, image, or sound is most likely to follow another. For this reason, it can produce essays, generate images, or propose instructional activities in a matter of seconds.

Yet AI does not understand. It has no intention, no moral judgment, no consciousness. It does not distinguish between truth and falsehood; it generates what appears plausible. It may be brilliant, efficient, astonishing, even remarkable — and at the same time profoundly “stupid.”

Artificial Intelligence opens up a range of possibilities for educators, who may learn to use AI as:
  • An assistant for the automation of tasks with limited pedagogical value, thereby freeing time and allowing attention to be focused on what truly matters. What kinds of tasks? For example, the design of instructional materials (images, presentations, exercises, etc.), assessment instruments (exams, cases, rubrics), and administrative duties (attendance lists, school documents, and similar activities).
  • A learning scientist. AI can, for instance, help identify patterns in students’ responses, detect recurring errors, and suggest adapted learning pathways for different learner profiles. In this sense, AI becomes a powerful tool for observing who is learning, what is being learned, and how learning unfolds — and, consequently, how educators might more effectively support the diversity of learning processes in more personalized ways.
  • A pedagogical advisor. Contemporary teachers now have access to an “interlocutor” with whom they may reflect on how to better prepare their lessons, which learning activities to design, how to engage students, or how to respond to complex classroom situations. AI thus functions as a form of cognitive partner, enabling educators to critically examine their own pedagogical decisions.

Yet the risks are also enormous:

  • The “crisis” of truth. Artificial Intelligence does not distinguish between what is true and what is false; it produces what appears statistically plausible. It may fabricate references, combine incompatible data, or reproduce cultural biases without any awareness of doing so. The problem is not merely that AI can be wrong, but that it may do so with such persuasive confidence that we are passively drawn into accepting its outputs. Within educational organizations, this demands a renewed emphasis on critical literacy: verifying sources, scrutinizing claims, and teaching learners to “dialogue” with AI systems without granting them automatic epistemic authority.
  • The “crisis” of cognition. A recent study conducted at the MIT Media Lab (Kosmyna et al., 2025) introduced the concept of cognitive debt to describe what occurs when we delegate cognitively demanding tasks — such as writing, synthesizing, or reasoning — to AI systems. According to this line of research, the entity that becomes more capable is the AI, while our own cognitive engagement and development may weaken.
  • The “crisis” of authorship. It remains difficult to determine with precision which parts of a task were produced by AI and which were not. This uncertainty introduces a pervasive logic of suspicion into educational systems, which may end up validating work that students scarcely performed themselves. Educational institutions are therefore confronted with the need to redesign assignments and assessment practices so that value resides not solely in the final product, but in the process, the reasoning, and the conscious decisions surrounding AI use. Analogous tensions are visible in other domains, for example in cultural artifacts entirely generated by AI or in digital identities that do not correspond to real individuals.
  • The “crisis” of the teacher’s role. Students today may rely on personalized AI tutors available at any time, capable of explaining concepts, answering questions, and adapting exercises. In such a scenario, how necessary does the teacher remain? The most significant risk is not, at present, the disappearance of educators, but the gradual erosion of their perceived value. Teachers may risk becoming, in the eyes of AI-assisted learners, authorities whose relevance is diminished. The central question for educational organizations thus becomes unavoidable: if AI can explain, inform, generate, and respond, what distinctive human, social, and formative experiences do schools and educators uniquely provide?
