This is one of the clearest places where the repo stopped treating failure handling as an afterthought.

A lot of browser tools quietly act embarrassed when something fails. They either show a vague error, or they stay silent, or they hide the limitation in a footnote. A11Y Cat goes in the other direction during April: diagnostics, support boundaries, and partial coverage become part of the visible product.
There are several layers to that shift.
First, the repo adds a structured diagnostics taxonomy. Failures stop being just thrown strings and start carrying explicit categories and guidance. That is useful for users, but it is also useful internally because it forces the project to say what kinds of failure it believes exist.
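To make that concrete, here is a minimal sketch of what a structured diagnostics taxonomy can look like. All names here (`FailureCategory`, `DiagnosticError`, `classifyFailure`) are illustrative assumptions, not the repo's actual API:

```javascript
// Hypothetical sketch: failures carry an explicit category and user-facing
// guidance instead of being bare thrown strings.
const FailureCategory = Object.freeze({
  PAGE_UNSUPPORTED: "page-unsupported",
  INJECTION_BLOCKED: "injection-blocked",
  TIMEOUT: "timeout",
  INTERNAL: "internal",
});

class DiagnosticError extends Error {
  constructor(category, message, guidance) {
    super(message);
    this.name = "DiagnosticError";
    this.category = category;
    this.guidance = guidance; // next steps shown to the user
  }
}

function classifyFailure(err) {
  if (err instanceof DiagnosticError) {
    return { category: err.category, message: err.message, guidance: err.guidance };
  }
  // Unknown errors fall back to a bounded "internal" category rather than
  // leaking a raw stack trace into the UI.
  return {
    category: FailureCategory.INTERNAL,
    message: String(err && err.message ? err.message : err),
    guidance: "Retry the scan; report the issue if it persists.",
  };
}
```

The useful property is the fallback branch: even a failure the taxonomy has never seen still lands in a named category with next steps.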
Second, the runtime adds a capabilities registry. Instead of vaguely implying support, the product can say more concretely what delivery channel it is in, what page support state it has, whether AI is available, whether history storage exists, whether comparison baselines are supported, and so on. Then those capabilities show up in diagnostics and exports.
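A capabilities registry in that spirit might look like the following sketch. The shape and field names (`deliveryChannel`, `pageSupport`, and so on) are my assumptions, chosen to mirror the list above rather than copied from the repo:

```javascript
// Hypothetical sketch: the runtime declares concrete support facts once,
// and diagnostics/exports read snapshots from the same source of truth.
function createCapabilitiesRegistry(initial = {}) {
  const caps = {
    deliveryChannel: "unknown",   // e.g. "extension-mv3", "bookmarklet"
    pageSupport: "unknown",       // e.g. "full", "partial", "unsupported"
    aiAvailable: false,
    historyStorage: false,
    comparisonBaselines: false,
    ...initial,
  };
  return {
    get: (key) => caps[key],
    set(key, value) {
      // Refusing unknown keys keeps the registry a closed vocabulary,
      // so "supported" can never be implied by accident.
      if (!(key in caps)) throw new Error(`Unknown capability: ${key}`);
      caps[key] = value;
    },
    // Snapshot for embedding into diagnostics payloads and exports.
    snapshot: () => ({ ...caps }),
  };
}
```

The snapshot method is the point: diagnostics and exports embed a copy of the same capability facts the UI is showing, so the two can never drift apart.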
Third, coverage limits become explicit. Frames, shadow roots, cross-origin boundaries, restricted pages, lazy-rendered content, unsupported schemes: these stop being silent gaps and start getting surfaced as limitation metadata. That is a big trust improvement. A partial scan is still useful, but only if the product refuses to present it as complete coverage.
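A minimal sketch of surfacing those limits as data, rather than silence, assuming a hypothetical per-frame scan summary (the function and field names here are illustrative):

```javascript
// Hypothetical sketch: each scan result carries the boundaries it hit,
// so a partial scan is never presented as complete coverage.
function buildCoverageReport(frames) {
  const limitations = [];
  for (const frame of frames) {
    if (frame.crossOrigin) {
      limitations.push({ kind: "cross-origin-frame", url: frame.url });
    } else if (!frame.scanned) {
      limitations.push({ kind: "frame-not-scanned", url: frame.url });
    }
  }
  return {
    complete: limitations.length === 0,
    scannedFrames: frames.filter((f) => f.scanned).length,
    totalFrames: frames.length,
    limitations, // surfaced in the UI and carried into exports
  };
}
```

Nothing here is clever; the value is that `complete: false` and the limitation records travel with the results instead of living in a footnote.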
This is also where the project starts sounding more mature in the docs. SUPPORTED_ENVIRONMENTS.md is extremely explicit. docs/page-diagnostics-capture.md describes exactly what the page diagnostics bridge does and does not do. Later, f27895d narrows the diagnostics bridge further by making it listener-only and stopping default intrusive patching. That is a nice example of the project tightening its own footprint after seeing the risk.
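To illustrate the listener-only shape that f27895d moves toward, here is a sketch of a bridge that subscribes to page error events instead of monkey-patching `console` or `fetch` by default. This is my reconstruction of the idea, not the repo's actual bridge code:

```javascript
// Hypothetical sketch: a listener-only diagnostics bridge. It observes
// error events on the target and stays reversible via a detach function,
// rather than intrusively patching page globals.
function attachDiagnosticsBridge(target, onEvent) {
  const onError = (e) =>
    onEvent({ type: "error", message: e.message, source: e.filename });
  const onRejection = (e) =>
    onEvent({ type: "unhandledrejection", reason: String(e.reason) });
  target.addEventListener("error", onError);
  target.addEventListener("unhandledrejection", onRejection);
  // Returning a detach function keeps the bridge's footprint bounded:
  // everything it added can be removed cleanly.
  return () => {
    target.removeEventListener("error", onError);
    target.removeEventListener("unhandledrejection", onRejection);
  };
}
```

The trade-off is real: listening misses some signals that patching would catch, but it cannot change page behavior, which is exactly the footprint-tightening the commit is after.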
I think this phase matters because it changes what counts as a feature. A support boundary can be a feature if it keeps users from making the wrong assumption. A failure category can be a feature if it turns a confusing dead end into a bounded state with next steps. That is not flashy product work, but it is real product work.
Visual evidence
The repo does not include a dedicated screenshot that cleanly isolates the diagnostics taxonomy itself. The closest visible phase evidence is the hostile-page isolation screenshot, because this whole diagnostics/boundary layer exists to make those failure states explicit.
What I was really learning here
I was learning that honesty about limitations should not live only in docs. The runtime and the UI need to surface those limits directly, or people will still misread the result.
Evidence
- Commits:
  - 8ade0e4 – diagnostics taxonomy, support summary, and failure handling
  - 6e89c4f – capabilities registry and support boundaries
  - 909a77f – structured page diagnostics bridge and UI
  - f27895d – diagnostics bridge made less invasive by default
- Files:
  - ../../src/runtime/modules/diagnostics.js
  - ../../src/runtime/modules/capabilities-registry.js
  - ../../src/runtime/modules/scan-coverage.js
  - ../../docs/page-diagnostics-capture.md
  - ../../SUPPORTED_ENVIRONMENTS.md
  - ../../tests/playwright/extension-mv3.spec.js

