Facilitating Coherent AI Governance in Higher Education
Author: Keevyn Hirschfield
Editor: Vic Passalent
Publications Lead: Gianluca Mandarino
The proliferation of generative AI tools in Ontario’s universities signals a profound need to change how students are taught and evaluated. The central challenge we ought to be addressing is no longer whether to permit or restrict AI use, but how to preserve the core functions of post-secondary education when the foundational assumptions of effort, authenticity, and assessment no longer hold.
Ontario’s government and universities have yet to produce coherent policies clarifying how AI should (or should not) shape learning. This is, at least in part, because no one can agree on what the purpose of higher education actually is. As University of Arkansas philosopher Megan Fritz aptly put it:
“[T]he biggest obstacle to developing coherent AI guidelines… [is] the lack of consensus about which skills and formative experiences institutions are prepared to lose, and which they will fight to retain.”
Disagreement over this purpose is nothing new, but AI has made the issue more urgent and more concrete. ChatGPT, Grammarly, Perplexity, and similar tools are now a part of most students’ daily routines. According to a 2025 KPMG survey, 73% of Canadian students report relying on AI for schoolwork, up from 59% the previous year. Nearly half (48%) acknowledge this reliance has weakened their critical thinking skills, and 77% want universities to offer more structured guidance on responsible AI use. Nevertheless, decision makers have continued to come up short.
1. Why Enforcement Failed
When ChatGPT launched publicly in 2022, universities in Ontario and around the world responded with the tools they already had. Namely, AI use was treated like any other case of plagiarism: detect students who use AI and punish those who are caught. This enforcement-first approach has failed because it is almost impossible to prove beyond a reasonable doubt that a student used AI. As such, prosecuting cases of suspected AI use is typically more trouble than it's worth. Turnitin and other software traditionally used for plagiarism detection have proven wholly inadequate: studies have shown that AI text detection tools produce high rates of both false positives and false negatives, and universities now largely advise against their use.
Recognizing this, many universities have shifted to honour-based approaches. A popular option is having students submit an AI disclosure form alongside their assignments, declaring how they used AI (if at all) and attaching any relevant chat logs. Instructors can then check these declarations against the course's AI policy to verify appropriate use. Effectively, this amounts to asking students to report their own academic integrity violations: an ineffectual strategy at the best of times, and certainly not a strong foundation on which to build institution-wide AI policy. A different approach is needed.
2. Why AI Forces Clarity
A more coherent AI strategy is impossible until institutions can clearly state what their programs are meant to deliver. Answers will vary because disciplines vary: engineering, philosophy, and business all have different value propositions. Yet most universities have relied on assumptions that were never fully articulated and are now strained by students’ AI use. To clarify this tension, consider the following three overlapping functions of higher education: signalling; credentialing and workforce preparation; and personal and intellectual development.
Signalling, Credentialing, and Workforce Preparation
Some of the value of a post-secondary degree lies in its ability to signal to potential employers that you have work ethic, intelligence, and the ability to do what is asked of you. The signal works because earning a degree is costly: it's time-consuming, demanding, and, ideally, resistant to shortcuts. If students can complete much of their work by coasting on AI, the cost, and consequently the value, of a degree decreases. Employers, aware that AI use is widespread, cannot credibly differentiate students coasting on AI from those putting in the hard work, and so are forced to discount the value of every degree. A highly restrictive AI policy does not solve this if students circumvent it. A permissive policy does not solve it either. So long as AI use is pervasive and unverifiable, the credibility of the signal erodes.
Of course, the value of a degree isn't just what it signals, but also the skills picked up along the way. Organizations are using, and will continue to use, AI tools to reduce costs and increase productivity. Employers therefore want to hire students who are familiar with AI: who know when it can help their workflows and when it's more trouble than it's worth. This is the logic behind Ohio State's AI Fluency strategy, which mandates the integration of AI into classwork. Institutions that completely ignore these tools risk graduating students who are unprepared for the labour market. Yet it seems fundamentally unwise to graduate students who can use AI to accomplish tasks but cannot perform or even understand the underlying work. While employers want AI-literate employees, they don't want software developers who are wholly dependent on generative AI to code, or finance students who can't independently operate a spreadsheet.
Personal and Intellectual Development
Where the picture for workforce preparation is muddied, AI poses an existential threat to another purpose of higher education: the cultivation of critical thinking, argumentation, and the ability to learn. If AI completes the intellectual heavy lifting, the student does not meaningfully develop. In the extreme case, a student can go through entire courses and programs without coming up with their own idea for a paper, reading a primary source, or meeting with their peers to grapple with difficult concepts. To ignore the fact that students are using AI, or to broadly encourage it, abandons any notion that higher education is charged with developing well-rounded graduates. At the same time, intellectual development cannot occur in a vacuum disconnected from the “real world”. Completely banning students from using AI is akin to completely banning the internet: counterproductive and entirely unenforceable.
The Core Problem
The core problem is that existing pedagogy and assessment models were built for a pre-AI world. Instructors now have no choice but to assume AI is being used for any evaluation that is not physically proctored. And while some assessments can move in-person, others must be redesigned entirely. AI does not force entire institutions to choose one purpose over the others, but it does force them to be explicit about priorities. Decision makers at the institutional, program, and course level must be able to articulate what value their offering provides and how AI affects that value. The liberal arts may prioritize independent thought, STEM may prioritize core competencies, and business may emphasize AI-enabled productivity. That is expected. The key is that the pedagogy and evaluations which flow from that prioritization actually incentivize learning and assess students in a way that reflects stated goals. In doing so, universities can deliver education programs that protect the legitimacy of the degrees they award.
The alternative is that AI continues to allow students to bypass foundational learning. In turn, the signal of a degree evaporates, students are not properly trained, and universities risk losing credibility with employers and the public.
It is in support of this pedagogical revamp that the Government of Ontario can intervene.
3. Recommendation: Establish the Ontario Centre for AI Pedagogy and Practice
Ontario should establish the Ontario Centre for AI Pedagogy and Practice (OCAPP) with a mandate to provide guidance and incentives to departments on developing AI policies. The goal is to help administrators and professors work through the tensions between their educational goals and their AI stance. The Centre would serve as the hub for helping universities develop discipline-specific, learning-outcome-aligned AI strategies and redesign assessment.
The core functions of OCAPP are as follows:
(1) Provide funding for departments and faculty to redesign assessments, update curricula, and develop program-level AI policies.
(2) Offer consultation to help departments articulate learning outcomes and match them to AI-use policies.
(3) Host an annual provincial AI-pedagogy forum to share research, results, and current best practices.
(4) Coordinate with employers to ensure AI policies align with labour-market expectations.
Risk and Mitigation
Risk: The most significant threat to OCAPP’s success is organizational. Without meaningful uptake, OCAPP becomes a feckless advisory body whose recommendations are largely ignored. Universities and faculty may view the Centre with indifference or even disdain; organizations accustomed to complete autonomy may resist provincial coordination. Moreover, faculty disengaged from their students or already overburdened with research and teaching loads may view the push to develop AI pedagogy as a burden rather than an opportunity.
Mitigation: Ultimately, the province cannot outright force universities to meaningfully participate in OCAPP. Nevertheless, the risk of low uptake can be mitigated through strong financial incentives. Rather than providing broad funding under function (1), priority for grant funding should be given to faculty and administrators who submit proposals that clearly align AI use or restriction with learning outcomes. There are examples of this model in practice. For instance, OCAPP can take inspiration from North Carolina’s Elon University AI teaching innovation model: faculty submit proposals for innovative uses of AI in teaching and compete for funding, and the best proposals are funded and shared. Adopting a similar competitive grant structure within OCAPP would incentivize uptake and experimentation and accelerate the development of discipline-specific AI pedagogy across Ontario.
Additionally, rather than engaging with universities as single entities, OCAPP should establish relationships with key departments, administrators, and faculty who see the value of the Centre. These individuals can advocate internally for reforming AI pedagogy and kickstart the normalization of OCAPP. This approach would build institutional momentum for AI reforms and soften universities’ resistance to provincial coordination.