Artificial intelligence (AI) creates both opportunities and challenges for universities. Developments in this field are progressing rapidly, and the university cannot afford to miss out. At UGent, we welcome AI systems as useful tools, applicable in education, research, and the daily operations of our university.

AI offers a wide range of new possibilities for all colleagues at UGent. For teaching staff, this includes the automation of certain tasks, assistance in designing course materials, generating exercises, and providing feedback. Colleagues engaged in research can use AI to help write scientific articles, prepare grant applications, conduct literature reviews, and analyse large datasets. By generating hypotheses and simulating experiments, new research avenues can be opened, and new insights discovered.

AI assistants can provide real-time support for practical questions and support needs within UGent. AI can also ease administrative work by streamlining processes, simplifying tasks, and automating routine activities such as data entry, reporting, and documentation. This can free up time for innovative, strategic, and service-oriented work. Finally, AI can support more informed decision-making: in a strategic context, large amounts of data can be analysed, trends identified, and insights generated.

UGent has recently taken several initiatives to teach our students and colleagues to use these tools critically and responsibly. We will continue these efforts so that staff and students can stay informed about important new developments and be supported in their use. Throughout other parts of our programme, various ideas and proposals are discussed on how we can use AI as a university. However, a broader policy on AI is necessary. As a knowledge institution, we are at the forefront of further developments, but we can also offer a healthy critical voice in the societal and scientific debate on AI. AI does not come without risks and limitations. The following text forms the basis for further developing UGent’s AI policy.

  1. AI and Our Education

AI is a game-changer for education in particular. This is also evident from the latest Digimeter report: more than seven out of ten students use it for educational purposes. To ensure that students learn to use AI optimally, all our programmes will need to address this, either in existing courses or in new ones. AI is already a significant aid for students in writing essays, theses, and research reports, by improving language, analysing data, and so on. However, the impact of AI on programmes can be even more profound. We will therefore initiate and guide a reflection exercise on the necessary adjustments to the university’s current curricula. For example, what will be the impact of AI on the professions we train for? We cannot predict the future, but as a university, we must not lag behind the changes that AI brings.

To also offer alumni and those already working the opportunity to update their skills, faculties are invited to explore the usefulness of offering refresher courses on this topic. This can be done, for example, in the context of lifelong learning or in collaboration with VAIA.

We embrace the possibilities of AI to provide students with a more personalised learning experience. Course materials and feedback can be tailored to the individual needs of students (study pace, proficiency level, immediate feedback on assignments and exercises, etc.). It can also help make our education more inclusive. By creating accessible learning resources, such as providing transcripts and translations, AI offers opportunities to provide an excellent learning experience to even more students.

  2. Responsible and Accountable Use of GenAI

The use of AI models comes with ethical challenges. We must not blindly trust the output of AI tools, as it is not always reliable. AI can sometimes generate inaccurate or misleading information, affecting the quality and reliability of education and research. Additionally, the use of AI raises legitimate questions about the protection of personal data and the confidentiality of research data and university business information. Ensuring the privacy and security of data is crucial. For this reason, the university has recently chosen Copilot as the standard generative AI chatbot.

Another challenge is the increased risk of plagiarism. AI-generated texts are not always easily distinguishable from original works, and the university must explicitly address this issue. Finally, AI models can reinforce existing biases in data or introduce new ones, leading to discriminatory outcomes. We must remain critically aware that AI responses are often based on the perspectives of a dominant Anglo-Saxon culture that is not free of bias. ‘Open source’ alternatives can be one way to reduce the risk of bias, and UGent has considerable research expertise on bias and how to avoid it.

The user of an AI tool is responsible for the output, even when it generates incorrect or misleading information. We recognise that it is not always easy for users to assess all risks. Therefore, we focus on providing information to all UGent members about AI tools, the risks they entail, and how to use these tools best.

  3. Implementation and Acceptance

The implementation of AI in the operations of our university must be carried out with the necessary care and caution to harness the benefits and avoid pitfalls. We will need to continue training our students, researchers, and teaching and other staff to use AI tools effectively and responsibly, both for generic and research-specific systems. The training offerings, supporting internet and intranet pages, and the work of the working groups will be continued and expanded.

Additionally, we must not be blind to the sometimes very unequal access to AI, particularly concerning paid AI models. The use of GenAI technologies – and the necessary training – can entail very high costs, both financially and in terms of time and resources. This invites us to explore what is possible through ‘open source’ variants. When this is not feasible, we hope to partially reduce these costs by making strategic choices, such as focusing on collaboration, and partially externalising the costs, for example, through project funding or infrastructure calls.

Finally, it is a shared responsibility to consider the ecological drawbacks. The energy consumption of AI tools, the production of hardware, the construction of data centres, and so on, come with a significant energy and ecological cost. Research and innovation, such as that of IMEC, can play an important role in this regard.