Starting on June 30, 2025, the Bocconi community will have free access to ChatGPT Plus, the virtual assistant powered by artificial intelligence and developed by OpenAI. Bocconi has signed a two-year agreement with the California-based company to offer this tool to students, faculty, and staff, thereby integrating ChatGPT into the set of services available to its members. In many ways, this is a pioneering decision. Bocconi is the first Italian university to take such a step, which is seen as a strategic move to foster innovation in theory-building, the simulation of human behavior, the translation of abstract concepts into numerical data, and the design of new educational models.
Naturally, the use of such tools in an academic environment demands particular care. The AI Act, the European Union’s regulation on artificial intelligence, classifies many uses of AI in education as “high-risk.” This classification entails strict requirements for safety and transparency in the management, storage, and protection of data, as well as the implementation of human oversight mechanisms. This explains why queries submitted by Bocconi users will not be used to train ChatGPT, why data will be stored within Europe, and why Bocconi will retain intellectual property over both the inputs and outputs of the system. So far, so good, one might say.
And yet, a natural concern arises. Let’s try to articulate it through a simple line of reasoning. Few would dispute that universities are institutions dedicated to the production and transmission of knowledge. They produce knowledge through research and transmit it through teaching. If, as the classical analysis in epistemology has it, knowledge is justified true belief, it follows that universities are fundamentally oriented toward the pursuit and dissemination of truth based on epistemically sound reasons.
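To make that premise fully explicit, the classical analysis can be put schematically (a standard textbook rendering; the letters K, B, and J are supplied here purely for illustration and appear nowhere in Bocconi’s agreement): a subject s knows a proposition p just in case

K(s, p) ⟺ p ∧ B(s, p) ∧ J(s, p),

that is, p is true, s believes p (B), and s is justified in believing p (J). It is this triple requirement of truth, belief, and justification that makes the question in the next step so pointed.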
Given this, one might wonder: why would an institution of this kind offer its members a tool which, at the current stage of technological development, often lies, in the sense that it generates false claims or draws unjustified conclusions? A natural objection to this is that large language model applications—ChatGPT Plus among them—do not “lie” in the strict sense of the term. Lying involves asserting something one believes to be false, with the intention of deceiving the listener. That is clearly not what ChatGPT does, not least because it is highly disputed whether such systems can have intentions at all.
Leaving aside such philosophical subtleties, the fact remains that ChatGPT sometimes produces responses that are false despite appearing justified, or true but lacking an intelligible rationale, or neither true nor justified because they stem from mere “hallucinations.” If that is the case, are large language models not somewhat awkward guests within the academic community?
Bocconi’s decision to enter into an agreement with OpenAI should not be seen as an eccentric attempt to promote the use of an unreliable tool. Quite the opposite: ChatGPT and similar systems are already part of our world. They are being used by students, researchers, and professionals across both the public and private sectors. Their growing presence and influence cannot be ignored. Moreover, these tools may profoundly transform many aspects of our lives—our work, our relationships, our self-understanding, and much more.
What the university should do, as an institution committed to the production and dissemination of knowledge, is to educate its members to use these tools critically. This means developing strategies for recognizing when LLM-based systems produce false assertions, unjustified conclusions, or imagined worlds presented as real. The aim is to learn to correct these outputs and to design oversight mechanisms that, precisely because they are “all too human,” can prevent undesirable or even catastrophic outcomes.
In short, the use of ChatGPT within universities should be viewed as a stimulus for critical thinking, not as a capitulation to pseudo-knowledge. But to achieve this, we must rehabilitate the Cartesian doubt at the foundation of scientific inquiry: users of these tools must be trained to question their outputs systematically and to adopt methods that turn these systems into genuine instruments of knowledge. This is particularly important when considering students’ attitudes toward ChatGPT and similar systems. These tools should not be regarded as substitutes for education, but rather as technologies whose effectiveness depends on it. Without education, they are not only useless; they may even prove harmful. There is still a long way to go in this regard. But this is a challenge that the university, more than any other institution, is uniquely equipped to address.