I can see the enthusiasm in this stage, but I can also see that I had not settled the trust story yet.

The first AI phase in this repo lands fast. On February 16 the bookmarklet agent picks up a Groq AI toggle, API key support, visible AI errors, selector inference improvements, and a default model choice. If you only skim those commits, it can look like a natural feature progression: the command tool got an AI layer, so the tool became smarter.
Looking back, I think this phase was more like active exploration than settled product direction.
The first clue is how concentrated it is. These commits happen in a tight burst. The tool is still a bookmarklet. The command system is still young. The tests and release discipline that arrive in April are not there yet. So the AI work is landing in a product that is still figuring out its own boundaries.
The second clue is what the AI is trying to do: explain things, infer selectors, help with actions, recover when the page shape is messy. That makes sense. When you have a browser tool that needs to work across real pages, AI starts to look tempting exactly where deterministic logic gets awkward.
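The shape of that temptation can be sketched in code. This is a hypothetical illustration, not the repo's actual implementation: deterministic selector strategies run first, and the model is only consulted when all of them come up empty. The names `inferSelector`, `byId`, `byName`, and `byLabelText` are mine, and the `field` object stands in for a real DOM element.

```javascript
// Hypothetical sketch: deterministic selector inference first, AI as a last resort.
// The field object models a form element; a real version would walk the DOM.

function byId(field) {
  return field.id ? '#' + field.id : null;
}

function byName(field) {
  return field.name ? '[name="' + field.name + '"]' : null;
}

function byLabelText(field) {
  // A real implementation would match <label> elements in the page;
  // here we just model a precomputed label string on the field object.
  return field.label ? 'label-match:' + field.label : null;
}

function inferSelector(field, aiFallback) {
  const strategies = [byId, byName, byLabelText];
  for (const strategy of strategies) {
    const selector = strategy(field);
    if (selector) return { selector, source: 'deterministic' };
  }
  // Only ask the model when every cheap, predictable strategy failed --
  // exactly the spot where deterministic logic gets awkward.
  if (aiFallback) {
    return { selector: aiFallback(field), source: 'ai' };
  }
  return { selector: null, source: 'none' };
}
```

The useful property is that the `source` field travels with the answer, so later code can treat AI-derived selectors with more suspicion than deterministic ones.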
But this is also where the repo later gets stricter. Once the local assistant backend appears in April, and later the extension AI Assist surface, there is much more emphasis on bounded actions, structured payloads, grounded docs, output contracts, and explicit warnings that AI is optional and not authoritative. The February phase does not look wrong to me. It just looks early.
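To make "bounded actions" and "output contracts" concrete, here is a minimal sketch of the kind of gate the later architecture implies. The action names and payload shape are illustrative assumptions, not the repo's actual schema: the point is only that AI output is parsed, checked against a small whitelist, and rejected with a reason rather than executed on trust.

```javascript
// Hypothetical output contract: the model must return a JSON payload naming
// one of a small set of allowed actions. Anything else is rejected visibly.

const ALLOWED_ACTIONS = new Set(['fill', 'click', 'scan', 'explain']);

function validateAiPayload(raw) {
  let payload;
  try {
    payload = JSON.parse(raw);
  } catch (e) {
    return { ok: false, reason: 'not valid JSON' };
  }
  if (typeof payload !== 'object' || payload === null) {
    return { ok: false, reason: 'not an object' };
  }
  if (!ALLOWED_ACTIONS.has(payload.action)) {
    return { ok: false, reason: 'unknown action: ' + payload.action };
  }
  // Per-action shape checks keep the action bounded, not just named.
  if (payload.action === 'fill' && typeof payload.selector !== 'string') {
    return { ok: false, reason: 'fill requires a selector string' };
  }
  return { ok: true, payload };
}
```

A contract like this is what makes "AI is optional and not authoritative" enforceable: the executor only ever sees payloads that already passed the gate.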
I think that is why I find it useful to keep this stage in the story instead of smoothing it away. It shows a normal development instinct: I had a tool already working on real page workflows, I saw places where inference was useful, and I tried an AI layer before the trust model around it was fully mature. That is a believable sequence.
There is another detail I like here. Even in the early AI phase, the repo keeps shipping practical fixes around failure visibility and selector inference. That means the work was not just “turn on AI.” It was already about how AI breaks, what users see when it breaks, and how much fallback behavior is needed. That is less polished than the later architecture, but it is real product work.
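The failure-visibility idea can also be sketched. This is an assumption about the pattern, not the repo's code: `runAiStep` and the output-entry shape are hypothetical. What matters is that a failed AI call pushes a visible error entry into the agent output instead of disappearing into a silent fallback.

```javascript
// Hypothetical sketch of failure visibility: an AI call that throws becomes
// a visible entry in the agent output, and the caller decides what fallback
// (if any) applies next.

async function runAiStep(label, aiCall, output) {
  try {
    const result = await aiCall();
    output.push({ step: label, status: 'ok', result });
    return result;
  } catch (err) {
    // Surface the failure where the user can see it.
    output.push({ step: label, status: 'ai-error', message: String(err.message || err) });
    return null;
  }
}
```

The design choice here is that the error is data in the same output stream the user already reads, which is cheaper to build than a full trust model but still honest about when the AI broke.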
By the time the project reaches on-device AI in the extension, the tone is much more careful. AI becomes optional, bounded, device-dependent, hidden when unavailable, and clearly secondary to deterministic scanning. That later discipline makes the early Groq phase look almost like a prototype of the product philosophy rather than the final form of it.
So I do not read these commits as “AI made the tool complete.” I read them as “I was testing where AI might actually help, before I had built the stricter contracts needed to keep it honest.”
Visual evidence
There is no reliable repo-held screenshot that isolates the early Groq toggle itself. The closest historical visual evidence is the bookmarklet agent demo from the same general phase.
What I was really learning here
I was learning that AI features feel easiest to add exactly where certainty is hardest to guarantee. That is useful, but it also means you need stronger boundaries around output, trust, and fallback than I had at the start.
Evidence
- Commits:
  - c5e2be1 – Groq AI toggle and key support
  - 8c85d86 – AI errors surfaced in agent output
  - 289ffb4 and 470e114 – form selector inference
  - eb26fb8 – Groq 70B set as default
  - dfaf15c – selector inference and fallback improved
- Files:
  - February 16 state of ../../bookmarklet.js
  - February 16 README.md history
- Later comparison evidence:
  - ../../LOCAL_ASSISTANT.md
  - ../../extension/README.md
  - ../../extension/content/prompt-ai-service.js
- Inference:
- Calling this phase “experimental” is an inference from the speed of iteration and from how much stricter the later AI architecture becomes.

