On-Device AI Finance — Private AI for Your Money
Qwen3 1.7B and a 768-dimensional embedding model run entirely on your phone. Smart suggestions, voice entry, and auto-categorization — no data leaves your device.
- Qwen3 1.7B runs on your phone — no API key, no subscription
- 768-dimensional embedding model for instant category suggestions
- Voice transaction entry via whisper.rn — audio never leaves the device
- Merchant name normalization for Cyrillic and foreign bank statements
- Every AI correction improves future suggestions via on-device learning
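The merchant-normalization bullet above can be sketched roughly as follows. This is an illustrative example, not Budgie's actual implementation: the transliteration table is a tiny subset for demonstration, and the real app's rules may differ.

```typescript
// Hypothetical merchant-name normalizer: transliterate Cyrillic
// bank-statement strings to Latin so that, e.g., "СІЛЬПО" and
// "Silpo" resolve to the same merchant key.
const CYRILLIC_TO_LATIN: Record<string, string> = {
  "а": "a", "б": "b", "в": "v", "і": "i", "к": "k", "л": "l",
  "о": "o", "п": "p", "с": "s", "ь": "",
};

function normalizeMerchant(raw: string): string {
  const lower = raw.toLowerCase().trim();
  let out = "";
  for (const ch of lower) {
    // Fall back to the original character for anything unmapped.
    out += CYRILLIC_TO_LATIN[ch] ?? ch;
  }
  // Collapse whitespace runs left over from statement formatting.
  return out.replace(/\s+/g, " ");
}

console.log(normalizeMerchant("СІЛЬПО")); // "silpo"
```

A real normalizer would carry a full transliteration table and strip statement noise such as terminal IDs, but the merchant-key idea is the same.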
Frequently Asked Questions
Which AI models does Budgie use?
Budgie uses two on-device models: Qwen3 1.7B for natural-language understanding and chat, and a 768-dimensional embedding model for nearest-neighbor categorization from your own transaction history. Both run locally, with no external API calls.
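The nearest-neighbor idea can be sketched as below. This is a minimal illustration, not Budgie's code: the vectors are tiny toy examples standing in for the 768-dimensional embeddings, and the names are hypothetical.

```typescript
// Each past transaction is stored with its embedding and the
// category the user confirmed for it.
type IndexedTxn = { embedding: number[]; category: string };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Suggest the category of the most similar past transaction.
function suggestCategory(query: number[], index: IndexedTxn[]): string {
  let best = index[0];
  let bestScore = -Infinity;
  for (const txn of index) {
    const score = cosine(query, txn.embedding);
    if (score > bestScore) { bestScore = score; best = txn; }
  }
  return best.category;
}

const history: IndexedTxn[] = [
  { embedding: [0.9, 0.1, 0.0], category: "Groceries" },
  { embedding: [0.1, 0.9, 0.1], category: "Transport" },
];
console.log(suggestCategory([0.8, 0.2, 0.1], history)); // "Groceries"
```

Because the index is built from your own history, the suggestions adapt to how you label things rather than to a fixed global taxonomy.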
How large are the model downloads?
The embedding model is approximately 90 MB. Qwen3 1.7B in quantized form is approximately 1.1 GB. Both are downloaded once on first use of AI features and cached on-device.
Does voice entry send audio to a server?
No. Voice entry uses whisper.rn — a React Native binding for whisper.cpp — which transcribes audio directly on your device. Your voice recordings are never streamed to any external server.
What languages does the AI support?
Categorization and suggestions work in any language because they are driven by your own history. Voice entry supports English, Ukrainian, German, French, and Spanish as primary languages, with Whisper-small providing broader coverage for other languages.
How does AI improve over time?
Every time you accept, edit, or reject a category suggestion, the local embedding index is updated on-device. The more you use Budgie, the more closely its suggestions mirror your personal spending patterns.
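The feedback loop can be illustrated with a small sketch. Names and structure here are hypothetical, not Budgie's actual API; the point is only that a recorded correction changes the next lookup for a similar transaction.

```typescript
// A local store of confirmed (embedding, category) examples.
type Example = { embedding: number[]; category: string };

const store: Example[] = [{ embedding: [1, 0], category: "Dining" }];

// Nearest neighbor by squared Euclidean distance over 2-D toy vectors.
function nearestCategory(q: number[]): string {
  let best = store[0];
  let bestDist = Infinity;
  for (const ex of store) {
    const dx = q[0] - ex.embedding[0];
    const dy = q[1] - ex.embedding[1];
    const dist = dx * dx + dy * dy;
    if (dist < bestDist) { bestDist = dist; best = ex; }
  }
  return best.category;
}

// Before any correction, a coffee-shop transaction maps to "Dining".
const coffeeVec = [0.95, 0.05];
console.log(nearestCategory(coffeeVec)); // "Dining"

// The user edits the suggestion to "Coffee"; the correction is stored.
store.push({ embedding: coffeeVec, category: "Coffee" });
console.log(nearestCategory(coffeeVec)); // "Coffee"
```

After the correction is appended, the same transaction embedding now finds the user's own label first, which is the sense in which every correction improves future suggestions.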