
Cognition conundrum

“So, how’s the Under Cloud any different from using ChatGPT?”

An obvious question, and while I knew from intuition and simple experience there was a world of difference between the two, the first time someone asked me … I couldn’t articulate a response without some rambling jumble of parenthetical arguments and a bit of hand waving.

But, let’s hold that thought…

“How do I write a literature review?” I asked Google.

You’re perhaps wondering why I didn’t ask an AI. Read on.

After a few searches I had an understanding of the purpose, the structure, and some gist of the writing style. What concerned me were — in the inimitable words of the former United States Secretary of Defense Donald Rumsfeld — the known unknowns.

So often, Google Gemini would interject, providing a response to a question so expansive that I could have copied and pasted from it, and that would have been the end of it: job done and mission accomplished!

In the past we would have used libraries, but then came the web and the search engines, allowing us to browse the growing corpus of human knowledge from almost anywhere. In each case, libraries and search engines do not provide answers in a direct sense, but are instead the venues where answers are found.

Let’s be honest: whether it’s libraries with their index cards or Google with its list of search results, we’re still consuming, not authoring.

AI has been compared to the introduction of the pocket calculator in the 1970s and its widespread adoption during the following decade. While there’s merit in the comparison, what isn’t a parallel is the sheer scale of change artificial intelligence brings with it, something I’ll return to at the end.

Revealed to me by several professional researchers was a trend among undergraduates whose task was to write a dissertation that served as a vehicle to employment: using ChatGPT. It was a trend Jose Gomez, a director of architecture and platforms on LinkedIn, found vexing, having noticed a 65% reduction in grade spread, with almost all students scoring between 93% and 100% (the standard deviation fell from ~11% to 3.9%).

A prompt response

Here is where we return to the original question put to me, because those students still have to read the responses from the AI and make sure the sources and the statistics are correct, relevant, and free of hallucinations. And that’s assuming they’re not required to provide a section on search methodologies, using something like PRISMA, in which case there would be no traceable evidence of search activities beyond conversations with ChatGPT and their accompanying prompts.

Now, it’s feasible the responses are fabulous, the student makes the grade, and their trust in AI grows — but, with this trust come unsafe assumptions.

In this scenario, as the number of AI-generated reports increases, those students (now in employment) must somehow organise them, share them with colleagues, use past reports to generate future ones, and somehow keep track of what each report is, what it contains, its point of origin, and the context specific to it…

Yes, the context: often the first victims of how we create and organise are the what, the where, the when, and the why of it.

We know from experience (and from the data I’ve accumulated through my market research) that organisation degrades with scale, and it does so because the Mark 1 human brain is required to keep track of where everything is.

What I’ve described is an assortment of problems (pain points) that the Under Cloud was designed to handle … strange that, isn’t it? It’s as if I anticipated these problems when I first came up with the idea of the Under Cloud in 2003.

Using an AI to write an entire dissertation is the wrong approach, and I’ve said as much in the literature review:

An AI that writes your literature review for you is replacing cognition; an AI that surfaces semantically similar evidence for a claim you are constructing is augmenting cognition. The distinction is significant from an epistemological perspective: in the first case, it’s possible the researcher does not understand the argument they are presenting; in the second, they are the architect of the argument, with AI as a research assistant.

A Knowledge Synthesis Framework for Students and Professional Researchers: Design, Implementation, and Validation in Under Cloud, by Wayne Smallman

As kids at school, we could have asked the teacher for the answer to a question (assuming it wasn’t something we could figure out with a pocket calculator), but the likelihood of them revealing it would have been almost zero. Yet with AI we face a conundrum of cognition, where answers flow freely while our comprehension risks grinding to a halt from lack of use.

So, the question is: Did I cheat and use AI to write the literature review? Now that’s an interesting question…

Photo by Xavi Cabrera on Unsplash