I write in nine languages of markup and code. I shipped real systems before AI was a LinkedIn trend, and I hosted them myself rather than hiding behind WordPress or Wix. My students and I built a small AI model from scratch in Python on a Discord server — not to show off, but to understand how it actually works. Over the past year I have been building a complex, interconnected ecosystem at InquisitiveFlow — a labyrinth of code designed to make older models smarter, compliant with Canadian law, and actually useful in a classroom. And I still would not call myself an expert.

The Certificate Problem

So explain to me how people with an online certificate and a slide deck are advising Canadian organizations on AI deployment, governance, and compliance — without understanding how a model is trained, what infrastructure it runs on, or what happens when it fails in production. This is not a hypothetical concern. These consultants are in the room right now, shaping procurement decisions, drafting governance frameworks, and advising on regulatory compliance for systems they have never built, deployed, or maintained.

The gap between what they claim and what they can demonstrate is not a matter of style. It is a structural risk. When a consultant advises a school board on AI deployment without understanding how inference works, the governance framework they produce is not cautious — it is fiction. When they draft a compliance strategy without knowing where the model runs, under which data agreements, or who reviewed the security model, the document protects no one. It exists to look like diligence. It is not diligence.

The Infrastructure Problem Nobody Wants to Say Out Loud

Here is the reality: every frontier AI model Canadians rely on today is hosted outside our borders. Every major model — GPT-4, Claude, Gemini — is built by a US-incorporated company, running on US infrastructure, subject to US law. Our compute access is limited, expensive, or tied to smaller models that cannot match the capabilities organizations are being sold on [1].

Yet the loudest voices in the room are consultants who have never written a single line of code. They cannot tell you the difference between fine-tuning and retrieval-augmented generation. They do not know what a context window is or why it matters for compliance. They have never debugged a hallucinating model at scale or traced a data leak through a multi-tenant inference pipeline. But they have a framework, and the framework has quadrants, and the quadrants have bullet points, and somehow that is enough.
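
For the record, the distinction they cannot explain fits on one page. Here is a minimal sketch of retrieval-augmented generation in Python; embed() is a toy stand-in for a real embedding model, and the documents, query, and character budget are invented for illustration:

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# The shape of the pipeline is the point, not any vendor's API.

import math

def embed(text: str) -> list[float]:
    # Toy bag-of-letters vector so the sketch runs without a model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank stored documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str], max_chars: int = 2000) -> str:
    # The context window is a hard budget: everything the model sees,
    # retrieved evidence included, has to fit inside it.
    context = "\n".join(retrieve(query, documents))[:max_chars]
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ottawa is the capital of Canada.",
    "Maple syrup is harvested in early spring.",
]
print(build_prompt("What is the capital of Canada?", docs))
```

Fine-tuning changes the model's weights offline. RAG changes nothing about the model; it changes what the model reads. And build_prompt is where the context window becomes a compliance question: every retrieved document travels, per request, to wherever the model is hosted. If that is a US data centre, your records cross the border every time someone asks a question.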

Three Questions You Should Be Asking

When someone tells you they can build a compliant, production-ready AI agent for Canada, ask them three things: Where is it actually running? Under which data agreements? Who reviewed the security model?

Most will not have an answer. Not because they are hiding it. Because they have never had to think about it. The question has never come up in their workflow because their workflow does not include the systems where these questions matter. They operate entirely in the layer above — the strategy layer, the governance layer, the slide-deck layer — where the answer is always “it depends” and the recommendation is always “further assessment needed.”

The “Human in the Loop” Test

If you want a quick litmus test, ask them what “human in the loop” actually means when the system breaks at 2 AM. Not in theory. In production. With a real client on the line. Ask who is monitoring it around the clock and who is paying for that [2].

“Human in the loop” is the most overused phrase in AI governance. In a whitepaper, it means a checkbox. In a compliance framework, it means a policy. In production, it means a person — awake, trained, authorized to intervene, and available at the exact moment the system fails. That person has a salary. That salary has a budget line. If the governance framework does not account for the operational cost of continuous human oversight, it is not a governance framework. It is a liability.
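
In code, the gap between the checkbox and the person is small and unambiguous. A minimal sketch, with an assumed confidence threshold and a stubbed pager standing in for real alerting:

```python
# A minimal sketch of a human-in-the-loop gate. The threshold and the
# pager stub are assumptions for illustration; real systems wire this
# to real alerting. What never changes: below the floor, a person
# decides, and that person is staffed, trained, and paid.

import queue

review_queue: "queue.Queue[str]" = queue.Queue()

CONFIDENCE_FLOOR = 0.85  # assumed value; set by policy, not by default

def page_on_call_reviewer(reason: str) -> None:
    # Stand-in for a real pager integration. Someone answers this
    # at 2 AM, and someone's budget pays for them to be awake.
    print(f"PAGE: {reason}")

def release_or_hold(output: str, confidence: float) -> bool:
    """Release high-confidence output; hold everything else for a human."""
    if confidence >= CONFIDENCE_FLOOR:
        return True  # ships to the user without review
    review_queue.put(output)  # nothing leaves this queue without sign-off
    page_on_call_reviewer("low-confidence output held for review")
    return False
```

Every item in review_queue implies a rotation of trained, authorized humans. That rotation is the budget line most frameworks omit.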

On Hallucinations

I spent four months trying to eliminate hallucinations, state-of-the-art models included. You cannot. This is not a failure of engineering. It is a property of how large language models generate text — probabilistic sequence prediction that is fundamentally incapable of distinguishing between what is true and what is statistically plausible [3].
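
You can watch the mechanism in miniature. The probabilities below are invented for the example, but the behaviour is exactly what a sampled language model does:

```python
# A toy demonstration that sampling plausibility is not retrieving truth.

import random

# Hypothetical next-token distribution after a prompt like
# "The capital of Canada is". The model stores statistical
# plausibility, not facts, and Toronto is plausible.
next_token = {"Ottawa": 0.72, "Toronto": 0.19, "Montreal": 0.09}

def sample(dist: dict[str, float]) -> str:
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

wrong = sum(sample(next_token) != "Ottawa" for _ in range(10_000))
print(f"Fluent, confident, and wrong {wrong / 10_000:.0%} of the time.")
```

No amount of prompt engineering drives that wrong-answer mass to zero; it can only be caught downstream.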

The real solution is not pretending hallucinations are gone. It is building systems that acknowledge they exist and teaching users to question what they see. Verification layers, confidence scoring, citation tracking, human review workflows — these are the tools that make AI outputs trustworthy. Anyone telling you they have solved hallucinations is selling something. And if the person advising your organization on AI deployment does not understand this, they are not qualified to be in that room.
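
A verification layer does not need to be exotic to be real. Here is a sketch of the simplest one, citation checking, with the claim and source convention assumed purely for illustration:

```python
# A minimal sketch of one verification layer: citation checking.
# Real systems use structured outputs and entailment models, but the
# principle holds: an unsupported claim goes to a human, not a user.

def claim_is_supported(claim: str, sources: list[str]) -> bool:
    # Naive substring check, deliberately simple. The gate matters
    # more than the sophistication of the check behind it.
    return any(claim.lower() in src.lower() for src in sources)

def verify(claims: list[str], sources: list[str]) -> dict:
    unsupported = [c for c in claims if not claim_is_supported(c, sources)]
    return {
        "verified": not unsupported,
        "held_for_review": unsupported,
    }

result = verify(
    claims=["ottawa is the capital of canada", "gpt-4 never hallucinates"],
    sources=["Ottawa is the capital of Canada."],
)
print(result)  # the second claim is flagged, not shown to the user
```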

Who Is Doing Honest Work

The institutions doing real, rigorous work on AI governance in Canada are not the ones dominating your feed. AUGURE is focused on AI governance and Quebec-specific regulatory frameworks — the actual intersection of provincial law, federal policy, and operational AI compliance that most consultants wave away with a reference to PIPEDA they copied from a summary they found online [4].

Mila is where the foundational research actually happens. Over 1,200 researchers in Montreal, producing the work that frontier models are built on. They are not the loudest voices in the room. They never are. The people who understand the problem deeply enough to know what they do not know are not the ones posting daily on LinkedIn about the future of AI. They are debugging gradient flows, reviewing safety benchmarks, and publishing papers that will matter in five years [5].

And that is the problem. The signal-to-noise ratio in Canadian AI discourse is inverted. The people with the least technical depth have the largest platforms. The people doing the hardest work are invisible to the organizations that need them most.

What Canada Actually Needs

Canada does not lack talent. It lacks honesty about what we can actually build today and about who is qualified to lead that conversation. We have world-class researchers, serious institutions, and emerging companies doing real work on sovereign AI infrastructure. What we do not have is a culture that demands credentials match claims [6].

The next time someone presents themselves as an AI expert, ask them to show you something they built. Not a slide deck. Not a governance framework they adapted from a template. Something that runs. Something that fails. Something they had to fix at 2 AM. If they cannot, that tells you everything you need to know about the quality of advice they are selling.

The real experts are not the loudest ones. They are too busy solving the next problem.

Built in Montreal

Inquisitive Flow Learning builds AI-powered education tools from Montreal. Our platform helps Canadian students learn with AI that respects their privacy and their potential.

Try Mnemosyne