AI in accessibility can be useful, but it becomes sloppy very fast when people let it wander into judgement. I was not interested in that. I was interested in bounded help: summarising, explaining, grouping, drafting, and supporting the tester without pretending the model had become the compliance authority.
That is roughly where A11Y Cat landed. The extension keeps AI assist optional and local-device oriented, and the older assistant path is documented with bounded actions, structured output contracts, session tokens, allowlists, and grounding rules. That is the opposite of free-form hand waving.
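To make "structured output contracts" and "bounded actions" concrete, here is a minimal sketch of what such a contract could look like. This is illustrative only, not code from A11Y Cat: the names (`AssistAction`, `validateAssistOutput`, `groundedIn`) are hypothetical, and the point is simply that model output gets validated against a closed set of actions and must cite grounding before it is shown to anyone.

```typescript
// Hypothetical sketch of a structured output contract for a bounded AI
// assist action. Names are illustrative, not from the A11Y Cat codebase.

// The assistant may only perform these bounded actions; anything else is rejected.
type AssistAction = "summarise" | "explain" | "group" | "draft";

interface AssistOutput {
  action: AssistAction;
  // IDs of deterministic-scan findings the output is grounded in.
  groundedIn: string[];
  text: string;
}

const ALLOWED_ACTIONS = new Set<string>(["summarise", "explain", "group", "draft"]);

// Validate a raw model response against the contract before it reaches the UI.
// Anything malformed, ungrounded, or outside the action allowlist is dropped.
function validateAssistOutput(raw: unknown): AssistOutput | null {
  if (typeof raw !== "object" || raw === null) return null;
  const o = raw as Record<string, unknown>;
  if (typeof o.action !== "string" || !ALLOWED_ACTIONS.has(o.action)) return null;
  if (!Array.isArray(o.groundedIn) || o.groundedIn.length === 0) return null;
  if (!o.groundedIn.every((id) => typeof id === "string")) return null;
  if (typeof o.text !== "string") return null;
  return { action: o.action as AssistAction, groundedIn: o.groundedIn, text: o.text };
}
```

The useful property is the failure mode: if the model invents an action like "judge compliance", the contract rejects it outright rather than letting the output leak through as if it were a finding.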
What I like about this is that it treats AI as a constrained layer on top of evidence, not as a substitute for evidence. Deterministic scanning still matters. Manual review still matters. Contracts still matter. The AI layer only becomes interesting once those other foundations are already in place.
I think that is the only kind of AI accessibility work worth taking seriously right now.
Documentation trail
README.md
LOCAL_ASSISTANT.md
TECHNICAL_GUIDE.md

