Insights from Research in Distance Education conference #RIDE2023
We can view the introduction of generative AI as an opportunity to progress in our evolution of knowledge building
A few days ago, I attended the seventeenth annual Research in Distance Education (RIDE) conference, organized by the University of London’s Centre for Online and Distance Education. The conference’s theme this year was “Sustaining Innovation and Sustainable Practices,” and it aimed to explore ways to promote and maintain innovative and sustainable practices in distance education. In addition to meeting some of the best minds in online and distance education, I engaged in stimulating conversations that taught me a great deal.
It doesn’t come as a surprise to hear that ChatGPT and other generative AI tools were widely discussed by researchers, learning designers, learning technologists, and academics at the conference. Many of them seem to be grappling with how to approach these tools. The main questions that were raised include: How can we address the potential challenges that generative AI may pose in learning and teaching? How can we maximize the benefits of generative AI in education? And, what policies, practices, and sanctions should be put in place to regulate the use of generative AI in educational settings?
A heated debate has surrounded generative AI systems like ChatGPT since late 2022. Despite being just a language model, ChatGPT offers a range of capabilities such as holding conversations, writing student essays, summarizing scientific texts, producing lesson plans, and drafting academic papers, among other things. However, Professor Mike Sharples suggests that this debate is far from over and that we must use generative AI with caution. During the conference, Sharples emphasized the need to rethink written assessment, exercise caution when using AI for factual writing, explore its potential for creativity, argumentation, and research, develop AI literacy alongside other digital literacies, and establish guidelines for its use.
Furthermore, the discussions focused on how to reliably identify students using this technology and how assessment methods could be changed so that students would not rely on generative AI more than they should. Additionally, attendees explored policies that could be introduced to address potential misconduct. Although generative AI presents many exciting possibilities, it is still in its early stages, which generated some degree of worry, panic, and fear about the unknown consequences that may unfold without proper control.
Thereafter, there was a discussion about what policies could regulate, monitor, and scrutinize the use of generative AI. Who should implement these policies remains a crucial, unanswered question: is it the responsibility of higher education institutions or of governments?
As generative AI is still in its infancy, it is challenging to predict its future implications or propose specific recommendations or policies at this time, particularly in relation to its adoption in education.
I believe that we have entered a period of uncertainty, where change is inevitable. Therefore, not only in higher education institutions but probably at every level of education, we must adapt to this change by developing new pedagogical models and innovative assessment methods. Additionally, teachers and students must be trained to ensure the responsible and effective use of generative AI.
Ultimately, we can view the introduction of generative AI as an opportunity to progress in our evolution of knowledge building. By challenging us to change and offering new avenues for exploration, this technology can move our understanding of the world forward. However, it is crucial that we approach it with caution and establish guidelines to ensure its responsible use.