CFOtech Canada - Technology news for CFOs & financial decision-makers
Jamie Ng

Compression, redistribution and friction: Three ways AI is shifting lawyers from 'performers' to 'producers'

Sun, 22nd Feb 2026

Much of the current discussion about AI in legal circles still centres on the technology itself - what it can do, how powerful it is and whether it introduces new risks. That framing is understandable, but it misses the real shift. On a flight back from the US recently, I watched The Defiant Ones, and what struck me wasn't Dr Dre's talent as an artist. It was his move into producing - stepping back from performing to shaping the sound, the direction and the outcome. That feels like the moment we're in as a profession. AI isn't just another tool to debate; it's changing the role of the profession itself. The question isn't how impressive the technology is - it's whether we're willing to move from doing the work to orchestrating it.

AI has not rewritten the rules of professional responsibility. Professional duties remain. Oversight and supervision are still expected and lawyers are accountable in largely the same way they were before. What has changed, however, are the operating conditions under which those same frameworks apply.

In practice, there are three structural shifts reshaping how risk is created, carried and governed inside legal firms - compression, redistribution and friction - and right now, understanding these shifts matters more than debating whether AI is inherently good or bad for the profession.

Compression

There's no denying that AI compresses time, judgement and exposure. Drafting, modelling and analysis that previously moved between people over days or weeks can now be produced almost instantly and work that once unfolded across multiple stages now converges into a single integrated workflow. 

That acceleration delivers obvious benefits, but it also changes how the attendant risk behaves. Errors travel faster and decisions are reached more quickly. As a result, the window to detect, challenge or remediate issues narrows.

Nothing about professional legal standards has been relaxed. What shifts is the margin for error: where risk was once absorbed and mitigated across time and process, it is now borne almost instantaneously, often by fewer individuals.

As a consequence, supervision in a 'compressed' environment cannot simply move at the same pace; it needs to be redesigned. Firms have to be clear about how judgement is exercised when work arrives seemingly fully formed rather than gradually assembled. The question becomes less about capability and more about control: how do we ensure that dramatically compressed workflows still allow for meaningful review?

What this means: firms must deliberately embed human-in-the-loop gates inside compressed AI workflows - structured checkpoints at issue framing and spotting, at the level of assumptions and factual inputs, at initial output review and again at final sign-off - so that judgement is exercised visibly and accountability remains clear, even when work arrives fully formed and at speed.

Redistribution

Following on from this, as execution becomes automated, a team's focus naturally shifts away from performing work towards 'producing' (i.e. orchestrating) and supervising it - think of typical leverage models. Critically, when junior tasks are streamlined, drafting is assisted and routine analysis is accelerated, accountability does not disappear; it consolidates and concentrates upward. In this environment, time spent matters less than judgement applied, and the professional role becomes one of validation, interpretation, escalation and sign-off.

This redistribution is often underestimated because automation can look like reduced exposure when fewer people are involved. In reality, sign-off risk is sharper, and clients and senior legal teams may carry more concentrated accountability even as machines perform more of the underlying activity.

Embedding AI into workflows is therefore not just a technical decision but a governance decision, and firms need clarity about where judgement sits, how review occurs and what supervision means in practice. If a machine produces an output, the human becomes responsible for interrogating it, and that requires explicit checkpoints: Where does review occur? What must be challenged? What documentation evidences oversight? Which decisions require escalation? Without those answers, risk becomes less visible.

What this means: Firms must deliberately build capability alongside automation - structured training on interrogating AI outputs, including examining reasoning, spotting hallucinations, checking citations, testing assumptions and identifying missing issues - and require that all AI-supported deliverables clearly surface their underlying assumptions and sources, so that scrutiny is systematic, judgement is informed and accountability remains explicit rather than implied.

Friction

Many of the safeguards that historically protected firms were never described as controls; they were simply embedded in the way work happened. Time created ample space for reflection, and multiple reviewers introduced alternative perspectives. Manual drafting forced closer reading (sometimes with a red pen - remember those days?) and even hesitation played a part.

AI accelerates production and reduces repetitive tasking - but that friction was also performing a function. It slowed decisions enough to allow challenge, and it created space for uncertainty in ways we did not always recognise. When all friction is removed, the safeguards it created do not automatically reappear; they must be rebuilt deliberately.

The issue is not faster work, but faster decisions without redesigned guardrails. Firms that treat AI as a productivity layer only will struggle.

What this means: Firms must deliberately reintroduce structured friction where it matters - clear guardrails such as requiring a second reviewer for specified categories of work, including first-time advice, complex documents, board-level material or regulator-facing outputs - so that speed does not eclipse scrutiny and challenge remains embedded in the process rather than assumed.

Changing implications for capability and value

These three structural shifts have implications not only for governance but also for how lawyers develop and apply their expertise. The technical knowledge required hasn't fundamentally changed; the context in which it is learned and applied has.

Historically, judgement was developed through repetition and lived experience: drafting documents, reviewing precedents and working through scenarios over time. Today, information and scenarios can be generated instantly, and work may be produced first by a machine and then reviewed by a human.

The substance is similar, but the pathway to competence is different. Training therefore needs to focus more explicitly on judgement, on challenge, and on understanding the assumptions and inputs behind outputs.

For legal firms, this has further implications. If judgement, supervision and system design become more central, then the value attached to those capabilities should be properly recognised. Margin does not disappear in an AI-enabled environment; it shifts toward those who can design resilient systems and apply informed oversight.

The conversation about AI in legal circles is often framed around disruption, but the more useful lens is structural adjustment. Compression changes timing; redistribution changes accountability; and the removal of friction changes control. The practice of law and its associated risk remain intact, but the conditions under which they operate have shifted - and that is what we should be focused on. The question is no longer what the technology can do, but whether we are prepared to move into the next phase of legal professional services: from performing the work to orchestrating it.