Budgie

Automatic Tag Suggestions — Tap, Don't Type

After selecting a category, the on-device LLM proposes up to three tags as tappable pill chips — with an embedding fallback that stays instant even when the model is still warming up.

Why manual tagging breaks down at scale

Tags are the most powerful dimension in Budgie analytics — they let you slice spending by project, trip, person, or purpose rather than just merchant or category. But their value only materializes when tagging is consistent. Keyboards are slow, spelling varies, and the right tag name is easy to forget. The result is an analytics view full of labeling gaps that make the data less actionable.

Automatic tag suggestions eliminate the friction without removing control. After you pick a category, the on-device LLM looks at the merchant name, category, and your historical tagging patterns to propose up to three of the most relevant tags as pill-shaped chips. A single tap adds the tag. You can still type new ones — the suggestions are additive, not a replacement for the text field.
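The ranking step can be illustrated with a minimal sketch. This is not Budgie's actual implementation — the function name, the history format, and the co-occurrence weighting are all assumptions chosen to show the idea of scoring candidate tags against past (merchant, category) patterns and capping the result at three chips:

```python
from collections import Counter

def suggest_tags(merchant, category, history, max_suggestions=3):
    """Rank candidate tags by co-occurrence with this merchant/category.

    `history` is a hypothetical list of (merchant, category, tags) tuples
    from past transactions; it stands in for the LLM's richer ranking.
    """
    scores = Counter()
    for past_merchant, past_category, tags in history:
        # Weight exact merchant matches above category-only matches.
        if past_merchant == merchant:
            weight = 2
        elif past_category == category:
            weight = 1
        else:
            weight = 0
        for tag in tags:
            scores[tag] += weight
    # Keep only tags that actually matched, top three become the chips.
    return [tag for tag, score in scores.most_common(max_suggestions) if score > 0]
```

A transaction at a merchant you have tagged before will surface those same tags first; tags seen only in the same category rank below them.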

LLM primary, embedding fallback — always fast

The primary engine is the on-device Qwen3 1.7B language model. It ranks tag candidates from your existing tag vocabulary by semantic similarity to the transaction context — matching phrasing variations that a simple text lookup would miss. When the LLM is still loading or busy with another inference, the embedding fallback takes over: a nearest-neighbor lookup over 768-dimensional embeddings of your past tagged transactions, running in milliseconds without waiting for the LLM.
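The fallback path can be sketched as a plain cosine nearest-neighbor vote. Again, the names and the voting scheme are illustrative assumptions, not Budgie's code; the example uses tiny 3-dimensional vectors where production embeddings would be 768-dimensional:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def fallback_suggestions(query_vec, past, k=5, max_suggestions=3):
    """Nearest-neighbor tag lookup over embeddings of past transactions.

    `past` is a hypothetical list of (embedding, tags) pairs. Tags from
    the k most similar past transactions vote, weighted by similarity.
    """
    ranked = sorted(past, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    votes = Counter()
    for vec, tags in ranked[:k]:
        sim = cosine(query_vec, vec)
        for tag in tags:
            votes[tag] += sim
    return [tag for tag, weight in votes.most_common(max_suggestions) if weight > 0]
```

Because this is just arithmetic over vectors already stored on the device, it returns in milliseconds regardless of whether the LLM has finished loading.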

Both engines run entirely on your device. No network call, no vendor profiling. The LLM never sees a raw tag vocabulary upload — it reasons from the transaction context and returns ranked suggestions from the tags you already use in Budgie.

What you get

Up to three tag suggestions as tappable pills after category selection — zero typing required

LLM-powered ranking with an embedding fallback keeps suggestions instant on any device

Additive interface — suggestions sit alongside the text field, never replacing it

Fully offline — both engines run on-device with no cloud dependency

Frequently Asked Questions

How are tags chosen?
The LLM ranks candidates by similarity to your past tag usage on similar transactions. The top three become tappable pills.
What if the LLM is slow on my phone?
The embedding fallback runs in milliseconds and proposes the same tags from a 768-dimensional nearest-neighbor lookup over your history.
Can I add new tags from the suggestion strip?
Yes — typing a new tag still works in parallel; the suggestions are additive, not exclusive.
Does this work offline?
Yes. Both engines run on-device.

Ready to take Budgie for a spin?

Join the waitlist — be first to try the offline-first expense tracker.