A new freely accessible AI tool has been sending shock waves across the sector as educators witness, for the first time, the true power and availability of machine-authored content.
The reality has sobering implications for traditional models of teaching and assessment in global education, including the need to reimagine the skills required for the future world of work.
OpenAI has been testing a research preview version of its online software, ChatGPT. The system works like a regular chatbot, but instead of simply returning links to source material on the internet, it generates fully formed answers, code, suggestions and solutions tailored to your requirements.
The more users interact with the interface, the smarter the artificial intelligence becomes and the more accurate its answers appear.
The AI remembers what users said earlier in the conversation and responds to follow-up corrections, effectively allowing you to ‘train’ the system to respond in your required style, tone of voice or desired format.
Examples given on the ChatGPT homepage of the type of request that can be posed include “explain quantum computing in simple terms” and “how do I make an HTTP request in JavaScript?”
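For readers unfamiliar with that second prompt, the kind of answer ChatGPT typically returns is a short, working snippet along these lines. The sketch below is my own illustration rather than the chatbot's output, and the URL in it is a placeholder:

```typescript
// Minimal sketch of an HTTP GET request in the browser (or Node 18+)
// using the built-in fetch API. The URL is a placeholder, not a real service.
async function getJson(url: string): Promise<unknown> {
  const response = await fetch(url); // send the GET request
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json(); // parse the JSON body of the response
}

getJson("https://example.com/api/data")
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```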
The product rocketed to over one million users in its first week after launching in November 2022 and has now reached a high level of mainstream attention, forcing the site to suspend new registrations while it seeks to expand server capacity.
Many commentators are predicting that the chatbot has the potential to displace Google's search monopoly in the long term, especially for queries related to knowledge and learning.
Academic concerns about plagiarism, fraud and proctoring all appear to be valid as the powerful AI can produce thousands of words in minutes on many subjects.
Well-documented scientific theorems, mathematical equations, code, theory, business models and case studies pre-dating 2021 are all explained with extreme accuracy.
Similarly, styles and templates of communication can also be assimilated, including everything from legal texts to screenplay dialogue. The more specific the instructions the user gives, the more detailed the output, including prescribed word counts, tone of voice, academic referencing and analysis.
Martin van der Veen, chief development officer at ICEF, has been testing ChatGPT on its knowledge of the international education sector, and was impressed by the results, saying “[the response is] both amazing and unnerving at the same time; not just because of the accuracy, but especially the fact that this is written by a machine”.
The language translation function also appears to be very accurate, which spells trouble for online language test providers. Online proctoring will no doubt prevent direct use in live tests, but there is clearly a danger that this could find a way through the firewall.
Perhaps the AI itself could suggest an effective digital hack to employ for this task?
While The PIE News considered the implications of an AI tool completing academic assignments on behalf of students, it seemed only logical to ask the chatbot itself for an answer.
The response the machine generated was as follows: “AI can definitely produce an original academic essay, but the quality and content of the essay may vary significantly depending on the capabilities and limitations of the specific AI system being used.”
It goes on to list examples of mainstream technology already in use, such as AI language tools, AI essay-scoring tools and AI-assisted writing tools, all widely deployed in productivity applications including academic assessment software.
A mind-bending reminder that AI moderators already exist and that, in reality, they could be marking AI-generated student work in the near future.
The chatbot's answer finishes with a bullish prediction, stating “it is important to note that AI technology is rapidly advancing, and it is likely that in the future, AI systems will be able to produce original academic essays of a high quality.”
I’m not sure if the machine has learnt to brag, but I am feeling increasingly redundant.
The current limitations of ChatGPT are listed as occasionally generating “incorrect information” and the possibility that it “may occasionally produce harmful instructions or biased content”.
AI-generated content has for many years been blamed for the spread of disinformation in democratic societies, including within our academic institutions.
As universities struggle to stay relevant in the modern world, their research and impact are often drowned out by falsehoods and algorithms with an alternative agenda.
The implications of open access to unmoderated content that can be replicated across social media and academia alike must be a major concern, although tools such as the Hugging Face AI detector do exist to help identify artificially generated content.
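As a purely illustrative sketch, querying such a detector programmatically might look roughly like the snippet below. The model name, endpoint and response shape are my assumptions based on Hugging Face's public Inference API, not details described in this article:

```typescript
// Hypothetical sketch: ask a Hugging Face-hosted detector model to score a
// passage of text. Model name, endpoint and response format are assumptions.
const HF_TOKEN = "hf_..."; // placeholder for a Hugging Face access token

async function detectAiText(text: string): Promise<unknown> {
  const response = await fetch(
    "https://api-inference.huggingface.co/models/roberta-base-openai-detector",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: text }),
    }
  );
  // Typically a list of label/score pairs, e.g. "Real" versus "Fake" classes.
  return response.json();
}

detectAiText("Paste a suspect essay paragraph here.")
  .then((scores) => console.log(scores))
  .catch((err) => console.error(err));
```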
While colleagues across the sector grapple with this new window into reality, the need for change in educational delivery increases. Gone are the days when an essay can be taken at face value as original thought.
Which begs the question – was this article written by a machine?