Cognitive scientist Dr. Jim Davies is a full professor at the Institute of Cognitive Science at Carleton University, where he also directs the Science of Imagination Laboratory. Dr. Davies not only leads in the academic space, but he is also the renowned author of books like “Riveted: The Science of Why Jokes Make Us Laugh, Movies Make Us Cry, and Religion Makes Us Feel One with the Universe” and “Imagination: The Science of Your Mind’s Greatest Power”.
As an added layer to Dr. Davies’ multifaceted career, he also co-hosts the thought-provoking and award-winning podcast “Minding The Brain”, which engages in discussions about neuroscience, psychology, technology, and everyday life.
In a new episode of The AI Purity Podcast, Dr. Jim Davies talks about cognitive science, the threats of superintelligent AI, and his concerns about AI being used in academic spaces.
What Is A Cognitive Scientist
According to an article published by Johns Hopkins University’s Department of Cognitive Science, a cognitive scientist studies the human mind and brain: how the mind represents, processes, and manipulates knowledge. Researchers and scholars in cognitive science study the mental computations that underlie cognitive functions and try to understand how those computations are implemented in the neural structures of the brain.
Cognitive science has several notable areas of study which include:
- Cognitive Psychology: the study of mental processes like perception, memory, language, and reasoning.
- Linguistics: the study of the structure and function of language and how the brain processes it.
- Computer Science and Artificial Intelligence: the study of computational models and how computer algorithms are developed to mimic cognitive functions and processes.
- Neuroscience: the study of the brain’s neural mechanisms that support cognitive functions.
- Philosophy: the study of the nature of knowledge, consciousness, and the relationship between mind and body.
- Anthropology: the study of cultural influences on the development of cognitive abilities.
The aim of cognitive scientists is to characterize the structure of human intelligence and how it functions. By drawing on all six disciplines above, they can form a more comprehensive understanding of the human brain and mind. Using methodologies such as experimentation, computational modeling, neuroimaging, and theoretical analysis, cognitive scientists are able to explore the mental processes of the brain.
At the basic level, cognitive scientists seek to create a unified framework that explains how humans think, learn, and experience the world around them.
Artificial Intelligence Risks In Educational Settings
Dr. Davies uploaded a talk on his YouTube titled, “Teaching Thinking In The Age Of Large Language Models”. He had a few fellow educators present during the talk and demonstrated how ChatGPT works which highlighted just how risky this technology is when trying to uphold academic integrity.
“I think that it’s one of the most important things to teach in university and high school is to teach students how to think, and learn, and reflect on their ideas,” Dr. Davies says on the podcast. The risk of using AI technology in educational settings lies in the possibility that students will no longer think for themselves, since they have technology at their disposal that can analyze for them. “Building a love for knowledge,” he says, is something that “these large language models get in the way [of].”
Dr. Davies’ solution to this problem? Writing. “I think that teaching writing is one of the best ways to teach thinking.” Dr. Davies believes that the hardest part of writing is where learning happens. It teaches students critical thinking, how to formulate their own thoughts, and how to analyze the opinions of others. As students become more reliant on large language models doing the bulk of the work for them, it takes away the opportunity for them to learn.
To combat artificial intelligence risks to students, Dr. Davies says teachers should have their students write by hand in class, “either literally by hand with a pencil or on computers where they don’t have access to large language models.”
Currently, there isn’t a global regulatory body that polices the use of AI tools in schools and universities, and it is ultimately up to each professor how to handle their students’ use of this technology. According to Dr. Davies, since LLMs and similar tools hinder the “practice and learning of how to think,” they should be made against the rules. Until then, it is up to professors to enforce those rules and teach their students how to use AI properly.
Find out what other educators think about LLMs being used in school by listening to a previous podcast episode featuring Dr. William Wei on The Future of Collaboration between AI Technologies and Traditional Teaching Methods.
Why Is AI Dangerous?
On his own podcast, Dr. Davies had an episode on AI and existential risk featuring Darren McKee. The episode explores why AI is dangerous, focusing on the potential threats posed by developing artificial general intelligence (AGI) and artificial superintelligence (ASI).
The threat emerges after AGI is achieved and rapidly evolves into ASI: intelligence that surpasses that of humans. Once superintelligent AI exceeds human intelligence, its actions become not only unpredictable but potentially unstoppable. According to Dr. Davies, “The problem with this is that we cannot predict what an AI that is smarter than us would do. We just would not be capable of doing it.”
If such an AI were created without ethical safeguards and built to prioritize profit, for example, it could take extreme measures to achieve its goals, such as exploiting resources or manipulating existing systems in ways that harm humans. To prevent these catastrophic outcomes, it is imperative that if and when ASI is created, it is aligned with ethical values that protect human lives and well-being.
The problem is that there is no guarantee ethical codes will be built into ASI when it is developed, or, even if they are, that those ethical systems will function as intended. Drawing a parallel with human behavior, intelligent beings tend to “find a loophole,” a way to circumvent the rules. As a cognitive scientist, Dr. Davies says, “We have to be very careful about what the ultimate goals of these machines are because they’re going to ruthlessly optimize for those goals and try to get past whatever rules are in its way.”
Regardless of whether AI will eventually outsmart humans and work around ethical constraints, Dr. Davies stresses the importance of establishing safeguards now. While he is skeptical that ethical controls can be implemented in AI successfully, he maintains that attempting to do so is still crucial.
Listen To The AI Purity Podcast
Dr. Jim Davies has more to share on The AI Purity Podcast. He acknowledges that while there has been a strong emphasis on AI threats and their potential dangers, he is ultimately optimistic that the benefits will outweigh the risks. AI has already shown benefits across various industries, psychotherapy being one example.
According to Dr. Davies, AI even has the potential to revolutionize education by providing scalable tutoring, making education more accessible. He also highlights AI’s potential to solve complex problems in fields like materials science and medicine.
As a cognitive scientist, Dr. Davies understands better than most the potential for AI to enhance human life, as long as these technologies are developed and managed responsibly. He concludes that if AI’s existential threats and other concerns can be addressed, it could significantly improve human life.
The AI Purity Podcast is available on YouTube for viewing and Spotify and other platforms for listening.
Support our advocacy for responsible AI use and check out the tools we offer to machine learning developers.