Matthew Lim's AI Innovation Story: Universities at a Crossroads in the Age of AI

Yonhap News / 2025-07-25 14:38:36

*Editor’s note: K-VIBE invites experts from various K-culture sectors to share their insights into Korean culture.*

 


 

By Matthew Lim, AI expert and director of the Korean Association of AI Management (Former Head of Digital Strategy Research at Shinhan DS)

 

 

◇ Changing the Question: What Education Must Ask and Answer in the Age of Artificial Intelligence

 

In today’s AI-driven era, a surprisingly simple question is frequently heard in university classrooms:

 

“Professor, is it okay to use AI for this assignment?”

 

Faced with this question, professors often hesitate. Some say no, others say yes, and a few argue that the question itself is fundamentally flawed.

 

I count myself among the latter. While we debate whether to allow or ban AI, we risk overlooking the real question we should be asking:

 

What kind of assignments are we giving? And can we truly call them education?

 

Gone are the days when we feared machines becoming smarter than humans. The fictional nightmares of AI overlords like Skynet in the movie "Terminator" never came to pass. Yet the real issue unfolding in our classrooms today is not that AI is too smart; it is that we are trying to suppress it without fully understanding it, and without rethinking the foundations of our educational approach.

 

This is not a problem of technology, but of philosophy. It is not a matter of tools, but of responsibility.

 

The confusion AI brings to higher education is well illustrated by a single example. After I recently gave a lecture on AI at a university, a student raised their hand and asked:

 

“Some professors say we shouldn't use AI. What’s your opinion?”

 

My answer was simple. The key issue is not whether AI is used. The more important question is: Was that assignment something only a human could do?

 

An assignment so predictable that a machine can do it is not education. In that case, the educator who designed the task should be the first to be questioned:

 

Why was this problem posed? What could students have learned through this task?

 

The problem isn’t the technology. It’s the design of the assignment. Complaints that “AI-generated reports undermine the value of education” are common, but the real concern is assignments where AI-generated answers are indistinguishable from human work. If the tools have changed, the standards for evaluation must change too.

 

In today’s world, assignments should require writing that AI cannot produce and judgments AI cannot emulate. Machines now summarize and organize information. Education, therefore, must shift its focus to creativity, interpretation, and critical thinking—skills that only humans possess.

 

True education must train students to go beyond summary—to think, interpret, and judge.

 

AI is just a tool. Like a knife, it can heal or harm depending on who wields it. The issue is not with the tool itself, but with the person holding it. The use of AI doesn’t strip student work of sincerity.

 

In fact, in a world where AI can generate drafts, it is more important to know how to refine those drafts and incorporate one’s own perspective. Ultimately, responsibility lies not with the tool, but with the human.

 

As long as the person submitting the work is held accountable, regardless of how it was produced, AI can serve as a learning assistant rather than a threat.

 

Of course, the responsibility doesn't rest solely with students. Educators, too, must adapt. The illusion that academic integrity can be maintained simply by policing the use of AI must be discarded.

 

When a professor asks, “You didn’t use AI, right?” it reflects fear of the technology, not an effort to understand it. AI is not a temptation to resist but a new language to master.

 

Universities that treat this language only as "cheating" are bound to offer education disconnected from reality.

 

What is truly needed now is AI literacy.

 

Not just knowing how to use it, but also recognizing its limitations and errors, and critically reading its context. That is the true power of education. If students can question results pulled from Perplexity or detect biases in sentences written by ChatGPT, they are no longer passive consumers.

 

The ability to critically engage with tools is more fundamental than math or coding.

 

Indeed, Korean universities are beginning to change. Ewha Womans University is sharing AI-integrated teaching practices among faculty, while Sungkyunkwan University has issued AI guidelines for both students and professors. These developments move beyond the binary of “permit or ban” and instead ask the more pressing question: “How do we live with AI?”

 

Since banning AI is no longer an option, education must now teach how to coexist with it. Honest usage, responsible application, and critical literacy must form the core of that approach.

 

Today’s classrooms stand at a crossroads: Will we cling to education divorced from reality, or will we redesign it for the age of AI? The problem isn’t that students use AI.

 

The real problem lies in assignments that AI can complete just as well, and in students who submit such work without reflection. It is not the AI-powered society that creates problems—it is people who fail to understand AI.

 

That student who raised a question in my lecture may have captured the spirit of our era.

 

Not “Can I use AI?” but “What kind of writing can only a human produce?”

 

If we begin asking that question, we can start to reclaim the true language of education. And the power to change that question lies solely in the hands of educators.

(C) Yonhap News Agency. All Rights Reserved