
Why Look Under the AI Hood?

  • bryan07965
  • 2 days ago
  • 3 min read

Educators have forever championed the idea that knowledge dispels fear. So why are faculty slow to learn about AI? According to the Digital Education Council's 2025 global survey, only 17% of higher education faculty feel they have advanced or expert AI skills, while nearly 40% have not used AI at all. Faculty in the US and Canada are also the least optimistic about AI's impact (download the full survey here).



Source: Digital Education Council Global AI Faculty Survey 2025

These statistics are striking. AI promises to be the most transformative influence on education we've ever seen, and yet there seems to be a widespread wait-and-see attitude. A healthy dose of caution is always appropriate, and there is certainly reason for caution here. But the speed at which students fearlessly adopt new technologies suggests that shying away from AI opportunities may carry a much bigger risk.


What's the solution? Knowledge. As Christian Bovee said, "We fear things in proportion to our ignorance of them." And there's Joseph Campbell's insight that "the cave you fear to enter holds the treasure you seek." From my own journey, I believe that knowing how AI works under the hood will alleviate fear and unlock ideas and opportunities.


I've found three significant benefits to understanding the conceptual basics of AI science:


First, we become adept at evaluating AI's capabilities and limitations. When we grasp that these systems are essentially doing sophisticated pattern-matching rather than truly "knowing" things, we can recognize where AI might sound convincing and yet be incorrect. The limits of Large Language Models and GenAI are not hard to find if you use them enough, but understanding why they fail helps us design thoughtful assignments and assessments that leverage AI's strengths while avoiding its weaknesses. It also helps us teach students when to trust AI outputs and when verification is necessary.
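
To make the pattern-matching idea concrete, here is a deliberately tiny sketch (my own toy bigram model in Python, nothing like a production LLM in scale or architecture): it "learns" only which word tends to follow which in its training text, so its answers are statistically plausible rather than actually known.

    from collections import Counter, defaultdict

    # A tiny training corpus. Real models train on trillions of tokens,
    # but the principle is the same: learn which words tend to follow which.
    training_text = (
        "the capital of france is paris . "
        "the capital of spain is madrid . "
        "the capital of italy is rome ."
    ).split()

    # "Training": count how often each word follows each other word.
    follows = defaultdict(Counter)
    for word, next_word in zip(training_text, training_text[1:]):
        follows[word][next_word] += 1

    def predict_next(word):
        # "Inference": return the statistically most likely next word.
        counts = follows[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("of"))  # 'france' (a tie broken by order seen, not geography)
    print(predict_next("is"))  # 'paris' -- fluent-sounding whatever the country

Even at this toy scale, the output looks like knowledge but is only frequency, which is exactly why fluent AI answers still need verification.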


Second, we become considerably more skilled at eliciting what we need from AI. Understanding concepts like context windows (AI's short-term memory) helps us write more effective prompts. Understanding how hidden layers process information in sequence, each building more complex representations from the last, helps explain why a prompt may have gone astray and how to stair-step prompts toward accurate, helpful results. Following the recipe will get results, but understanding why certain ingredients work well together is what leads to actual expertise.
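
As a minimal illustration of the context window (with word counts standing in for real tokenization, and an absurdly small window so the effect is visible), consider how an early instruction can simply fall out of the model's view:

    CONTEXT_WINDOW = 12  # hypothetical budget; real models allow thousands of tokens

    conversation = [
        "You are a helpful tutor.",           # system instruction
        "Always answer in French.",           # early constraint
        "Student: What is photosynthesis?",
        "Tutor: La photosynthese est ...",
        "Student: And cellular respiration?",
    ]

    # Approximate tokens with words, then keep only the most recent ones --
    # everything earlier is simply invisible to the model.
    tokens = " ".join(conversation).split()
    visible = tokens[-CONTEXT_WINDOW:]
    print(" ".join(visible))
    # The "Always answer in French" constraint has fallen outside the window,
    # which is one reason long chats drift away from early instructions.

This is also why restating key constraints in a fresh prompt, or breaking a long task into smaller steps, so often rescues a conversation that has gone astray.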


Third, and perhaps most importantly, knowledge helps us develop an ethical literacy around AI. Understanding how training data shapes biased outputs, or how AI can generate convincing misinformation, facilitates richer discussions with students about responsible use. Remarkable conversations emerge when teachers possess this kind of foundational knowledge and share it with students.
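
A contrived toy example (entirely made-up data, skewed on purpose to exaggerate the effect) shows the mechanism: a statistical model can only reflect the patterns, including the skews, of whatever it was trained on.

    from collections import Counter, defaultdict

    # A deliberately unbalanced, made-up "training set".
    corpus = [
        "the engineer fixed his code",
        "the engineer debugged his program",
        "the nurse checked her charts",
        "the nurse updated her notes",
    ]

    # "Training": count which pronoun co-occurs with each profession.
    pronoun_for = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for profession in ("engineer", "nurse"):
            if profession in words:
                for pronoun in ("his", "her"):
                    pronoun_for[profession][pronoun] += words.count(pronoun)

    for profession, counts in pronoun_for.items():
        print(profession, "->", counts.most_common(1)[0][0])
    # engineer -> his, nurse -> her: the model has learned the skew in its
    # data, not a fact about the world. At scale, that is a biased output.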


The encouraging news is that developing an understanding of AI under the hood doesn't require becoming a programmer or data scientist. A conceptual framework is all it takes. Many districts and college campuses now include AI fundamentals in their professional development offerings. The free five-part series from Khan Academy and Code.org, fittingly called AI 101 for Teachers, is particularly valuable because it was designed specifically for educators. It doesn't go into great technical depth, but it presents the concepts with exceptional clarity.


I'm curious about your experience. Does your institution provide professional development on AI fundamentals? Which resources have you found most valuable in this rapidly evolving landscape? You can comment below.



