From mid-2026, an algorithm called I-CAN v6 will set every NDIS participant's funding budget. Staff are not permitted to override the tool's output. Independent medical and allied health reports from participants may be disregarded. The Administrative Review Tribunal loses the authority to fix plans — it can only refer cases back to the same tool. The Guardian broke the story in early December. Every Australian Counts responded the same week. The NDIA's own development page confirms the timeline.
Most of the coverage so far has framed this as a participant rights issue. It is. It is also a provider operations issue, and that part has been almost entirely missed.
We have spent thirteen years inside the NDIS. Rayan and Richard scaled a provider from zero to twenty-six homes together over that period. We know how regulators behave when they automate one side of a funding system. The enforcement side follows the same pattern, always.
This is the provider's reading of what is happening, and what to do about it.
What is the I-CAN v6 tool?
I-CAN v6 is the Instrument for Classification and Assessment of Support Needs, version six. It is a structured assessment covering twelve life areas, conducted as a guided semi-structured conversation between a trained assessor and a participant, typically running one to three hours. The output is a numerical score that feeds a budget calculation. The NDIA describes I-CAN as "the gold standard of available, validated needs assessment tools," developed over more than twenty years of academic research.
The instrument itself is not new. What is new is the policy decision to bind funding directly to the algorithm's output, and to strip staff of the discretion they previously had to weight clinical reports, lived environment, or participant-supplied evidence against the structured score.
When do computer-generated NDIS plans start?
The new framework rolls out in stages from mid-2026. The exact transition schedule has not been finalised. Provider associations expect new participants to be assessed under I-CAN v6 first, with existing participants progressively reassessed at plan review. The window overlaps with the July 1, 2026 mandatory registration deadline, which means two regulator-level shifts hit operators in the same quarter.
The combined effect is what providers should be planning against, not either change in isolation.
The pattern providers should know
When a regulator automates the intake side of a funding scheme, it automates the enforcement side soon after. The aged care assessment debacle is the closest local parallel. Every Australian Counts made that comparison directly in their piece When Algorithms Decide. Robodebt is the more public one.
Both followed the same sequence: build a tool to standardise intake decisions, then deploy a complementary tool to standardise enforcement decisions, then audit at scale against the standardised pattern.
For NDIS providers, the second-order effect is the one nobody is naming clearly: the documentation patterns the NDIA has built I-CAN v6 to recognise will also be the patterns its audit classifiers expect to see in provider records. Records that don't pattern-match a clean I-CAN signature look anomalous. Anomalous, to an algorithm, looks indistinguishable from fraudulent.
This is not a hypothetical. The Australian Financial Review framed the next phase of NDIS oversight explicitly as a fraud crackdown ten days ago. The Conversation summarised the government's spending controls the same week. The direction of travel is established.
How will I-CAN v6 affect NDIS providers?
I-CAN v6 affects providers in five specific ways:
- Algorithm-generated budgets may not match the support intensity currently being delivered. Modelled funding compression sits at eight to fifteen percent for the same plan cohort, based on stated spending controls.
- Records that don't fit the algorithm's expected pattern trigger automated audit flags. Pattern-matching does not distinguish between messy and dishonest.
- Fast-tracked cohorts face tightened eligibility. The motor neurone disease cohort is the test case currently in public debate. Revenue concentration in high-acuity cohorts becomes a structural risk.
- Compliance enforcement automates alongside intake. The same classifier logic applies in both directions.
- Independent reports lose the leverage they currently carry. Clinical letters and allied health assessments will not override the I-CAN score under the policy as written.
Each of these compounds the others. An operator whose participants are reassessed under I-CAN v6 may carry annualised revenue exposure between $50,000 and $150,000 per home depending on cohort intensity. That same operator faces audit-defence labour exposure between $15,000 and $40,000 per home if documentation is reviewed adversarially. The exposures stack.
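To see how those ranges stack, here is a minimal back-of-envelope sketch in Python. The per-home figures are the modelled ranges quoted above; the six-home operator is a hypothetical example, not a benchmark.

```python
# Back-of-envelope exposure model using the ranges cited above.
# All figures are the article's modelled ranges; the home count is
# a hypothetical example, not a benchmark.

REVENUE_EXPOSURE_PER_HOME = (50_000, 150_000)   # annualised, I-CAN v6 reassessment
AUDIT_LABOUR_PER_HOME = (15_000, 40_000)        # adversarial documentation review

def stacked_exposure(homes: int) -> tuple[int, int]:
    """Return (low, high) combined annual exposure across all homes."""
    low = homes * (REVENUE_EXPOSURE_PER_HOME[0] + AUDIT_LABOUR_PER_HOME[0])
    high = homes * (REVENUE_EXPOSURE_PER_HOME[1] + AUDIT_LABOUR_PER_HOME[1])
    return low, high

low, high = stacked_exposure(homes=6)  # hypothetical six-home operator
print(f"Stacked exposure for 6 homes: ${low:,} to ${high:,}")
# -> Stacked exposure for 6 homes: $390,000 to $1,140,000
```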
The audit gap operators do not yet see
This is the part missing from every other piece on the topic.
When we sit with operators running on spreadsheets and human memory, the evidence trail looks the same as the evidence trail of providers we know have been struck off for fraud. Not because the operators are doing anything wrong. Because messy documentation reads the same as evasive documentation when pattern-matched at scale.
Specifics we have seen across operators audited in the last sixty days:
- Incident notes filed retrospectively at the end of shift rather than at intake. The NDIS Commission notification window is twenty-four hours; an algorithm watching for that pattern will flag late filings the same way a Commission auditor used to (a minimal timestamp check is sketched after this list).
- Claims with line-item drift across the same participant profile. Small variances that read as opportunistic billing to software, even when each individual line is defensible.
- Coordinator outreach happening over phone and Facebook DM with no written paper trail. An algorithm cannot see a relationship that does not exist in writing.
- SCHADS contraventions caught after pay run instead of before. A flagged pattern, even when self-corrected.
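To make the first item concrete: below is a minimal sketch of the timestamp check an audit classifier (or an operator self-auditing) could run against the twenty-four-hour Commission notification window. The record format and field names are illustrative assumptions, not the Commission's schema.

```python
from datetime import datetime, timedelta

# Hypothetical incident records; field names are illustrative only.
incidents = [
    {"id": "INC-001", "occurred": "2026-03-02T09:15", "filed": "2026-03-02T17:40"},
    {"id": "INC-002", "occurred": "2026-03-02T09:15", "filed": "2026-03-04T08:05"},
]

WINDOW = timedelta(hours=24)  # NDIS Commission notification window

def late_filings(records):
    """Yield records filed outside the 24-hour notification window."""
    for r in records:
        occurred = datetime.fromisoformat(r["occurred"])
        filed = datetime.fromisoformat(r["filed"])
        if filed - occurred > WINDOW:
            yield r["id"], filed - occurred

for incident_id, delay in late_filings(incidents):
    print(f"{incident_id}: filed {delay} after occurrence -- would be flagged")
```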
None of these examples reflect operator dishonesty. They reflect operations running on staff memory and goodwill. That worked when the regulator was reviewing samples by hand. It does not work when the regulator is reviewing every record by classifier.
The 55.7% provider operating loss rate from 2023-24 is the visible scar of this gap, not a pricing failure. We wrote the deeper version of that argument here.
What should providers do before I-CAN v6 rolls out?
Five actions, ranked by leverage:
- Audit the incident documentation chain. Map the path from incident occurrence to Commission-ready record. If the answer involves "we email it to the ops manager and she writes it up," the chain is exposed.
- Reconcile billing line items quarterly. Variance against the NDIS Pricing Arrangements and Price Limits is the easiest fraud signal a classifier reads. Catching drift before submission removes it from the audit dataset (a minimal drift check is sketched after this list).
- Build a coordinator engagement paper trail. Personalised outreach with timestamps, response records, and a tracked pipeline. The kind of record that pattern-matches a legitimate referral relationship rather than an empty directory listing.
- Run SCHADS roster pattern review before pay commits, not after. Pre-commit reviews don't create a breach record; post-commit reviews do.
- Get a written operational diagnostic. Thirty minutes. Free. You leave with a costed estimate of the bleed and a ranked list of which records are exposed.
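To make the drift check in the second action concrete, here is a minimal sketch: it compares each billed line against the expected rate for the same support item and flags anything outside a tolerance. The item codes, rates, and two percent threshold are illustrative assumptions, not NDIA rules.

```python
# Minimal quarterly drift check: flag claim lines whose billed rate
# departs from the expected rate for the same support item.
# Item codes, rates, and the 2% tolerance are illustrative assumptions.

expected_rates = {
    "01_011_0107_1_1": 67.56,   # hypothetical support item baselines
    "01_013_0107_1_1": 75.82,
}

claims = [
    ("01_011_0107_1_1", 67.56),
    ("01_011_0107_1_1", 71.20),   # drifted line
    ("01_013_0107_1_1", 75.82),
]

TOLERANCE = 0.02  # flag anything more than 2% off the expected rate

def drifted_lines(claims, expected):
    """Yield (item, billed, variance) for lines outside the tolerance."""
    for item, billed in claims:
        baseline = expected[item]
        variance = abs(billed - baseline) / baseline
        if variance > TOLERANCE:
            yield item, billed, variance

for item, billed, variance in drifted_lines(claims, expected_rates):
    print(f"{item}: billed {billed:.2f}, {variance:.1%} off baseline -- review before submission")
```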
These five operational fixes address July 2026 mandatory registration, the AFR-framed fraud crackdown, and the I-CAN v6 audit-pattern shift at once. One operational posture covers all three.
What changes when the agent does the documentation chain
We install three agents that handle these five fixes. None of them write care plans. None of them make funding decisions. They handle the documentation chain that sits underneath the care.
- The Triage Agent structures every incident at intake. Commission-ready inside 24 hours of occurrence. Zero manual intervention. The incident chain becomes algorithm-readable by design.
- The Compliance Agent tracks policy currency, flags overdue document reviews, and generates pre-audit checklists. Quarterly billing reconciliation runs as a weekly task.
- The Referral Agent maintains the coordinator engagement paper trail automatically — personalised messages, response tracking, and pipeline records that produce a documented referral pattern.
The first cohort of providers we ran the Referral Agent for produced eight new participant opportunities in week one, using only the messaging frameworks and before the automation layer was fully active. That is the pattern-match the algorithm rewards: structured outreach, traceable response, complete records.
Each of these agents installs in three weeks. Each carries an outcome guarantee.
This is what "AI that does the admin, humans that do the care" means in practice. The agent does not assess the participant. The agent does not approve the plan. The agent makes sure the documentation underneath the care looks the way the audit classifier expects it to look — because the alternative is documentation that looks like fraud through no fault of the operator.
The window is shorter than it looks
Mid-2026 rollout sounds remote until you count the working days. From today the gap is roughly seventy working days to the July 1 registration deadline. I-CAN v6 begins immediately after.
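If you want to run the count yourself, a few lines of standard-library Python will do it. The start date below is a placeholder for today's date, and public holidays are ignored, so treat the output as an approximation.

```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count Monday-to-Friday days in [start, end); public holidays ignored."""
    days = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # 0-4 = Monday to Friday
            days += 1
        d += timedelta(days=1)
    return days

# Placeholder "today"; substitute the actual date when you run the count.
print(working_days(date(2026, 3, 23), date(2026, 7, 1)))  # -> 72
```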
Operations trying to fix this with overtime and spreadsheets are running the same play that produced the 55.7% operating-loss rate. The arithmetic is unforgiving. A spreadsheet does not pattern-match cleanly to a classifier. Staff memory does not produce a paper trail. "We will get to it on Monday" is not a defence the algorithm can read.
The lever is operational. The fix is structured documentation produced at intake, not retrospective compliance, not staff memory, not goodwill.
If you are not certain where your operational exposure sits before I-CAN v6 rolls out, the Operational Diagnostic is the read for exactly this question. Thirty minutes. Free. You leave with a written report naming the specific bleed and a costed estimate of what changes if it gets fixed — yours either way.
The full agent catalogue with outcome guarantees sits at /agents. The Scorecard is the five-minute version of the diagnostic for operators who want a Red/Amber/Green readout before booking the call.
Either way, the maths is the same. The regulator is automating one side of the system. The provider side has to automate to match. The window for that is now.