DMind-3-nano: Privacy-First On-Device Crypto Intent Recognition
Inference stays on your device. Standardized function calling for wallets, DEXs, and agents. Built on
google/functiongemma-270m-it.
Model Description
DMind-3-nano is a small, edge-optimized language model fine-tuned for crypto wallet and DEX intent recognition using standardized function-calling protocols. It is designed to run entirely on-device, enabling privacy-preserving, low-latency intent parsing for Web3 wallets and local agents.
This repository hosts the open-source training and evaluation pipeline as well as the released model artifacts.
Performance Snapshot
Figure 1. DMind-3-nano significantly outperforms both the untuned base model and a similarly sized general-purpose model (Qwen3-0.6B), especially in multi-turn success.
Highlights
- 🔐 Privacy-first: 100% on-device intent recognition; no data leaves the device.
- 📱 Edge-optimized: 270M params; runs on phones/tablets/edge CPUs.
- 🔄 Standardized protocols: `SEARCH_TOKEN`/`EXECUTE_SWAP` with unified schemas.
- 🌐 Multi-chain: Solana, Ethereum, BSC, Base.
- 🌍 Multilingual: English + Chinese intents (Chinese samples kept in data/benchmarks).
- 🤖 Agent-native: designed for local-first wallet/agent workflows where a growing share of trading decisions and execution happen on-device.
- 📊 Training data: the final full fine-tune used 12,000+ samples in total; LLM-generated data is only a subset, and 60%+ of the data comes from real trading scenarios.
- 🧾 (To our knowledge) first public vertical-domain FunctionGemma case study: an end-to-end example of fine-tuning `google/functiongemma-270m-it` for a real wallet/DEX intent domain, including the practical training/evaluation pipeline and reproducible scripts.
Why This Matters for Web3 (Standardization as a Step-Change)
Web3 is composable at the protocol layer (tokens, RPCs), but still fragmented at the intent layer. Today every wallet, DEX, and agent framework invents its own “swap/search intent” schema and function-calling format. The result is high integration cost, brittle adapters, inconsistent safety guarantees, and poor ecosystem interoperability.
This work targets a transformative goal: standardize wallet intents as a small, versionable protocol between natural language and transaction builders. Concretely, DMind-3-nano enforces a minimal set of typed tools (e.g. SEARCH_TOKEN, EXECUTE_SWAP) with strict schemas and a deterministic wrapper output format.
What standardization unlocks:
- Interoperability: one protocol works across wallets/DEXs/agents; integrations become plug-and-play.
- Safety & auditability: tool calls are structured data—easy to validate, simulate, policy-check, and display for confirmation before signing.
- Benchmarkability: shared datasets and comparable evaluations across models and releases.
- Ecosystem scaling: new tools can be added via versioning without breaking existing clients.
In short, DMind-3-nano is not only a model—it is a proposal for a standard protocol layer that can make wallet intelligence as interoperable as ERC-20 made tokens.
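Because each tool call is plain structured data, a client can validate it against the published schema before anything is shown or signed. Below is a minimal sketch of that check, assuming the third-party `jsonschema` package; the schema mirrors the `EXECUTE_SWAP` definition given later in this document, and this is an illustration rather than the repository's own validation code.

```python
# Minimal sketch: validate a parsed tool call before confirmation/signing.
# Assumes the third-party `jsonschema` package; the schema below mirrors the
# EXECUTE_SWAP definition shown in "Tool Definitions & Schemas".
from jsonschema import ValidationError, validate

EXECUTE_SWAP_PARAMS = {
    "type": "object",
    "properties": {
        "inputTokenSymbol": {"type": "string"},
        "inputTokenCA": {"type": "string"},
        "outputTokenCA": {"type": "string"},
        "inputTokenAmount": {"type": "number"},
        "inputTokenPercentage": {"type": "number"},
        "outputTokenAmount": {"type": "number"},
    },
    "required": ["inputTokenSymbol"],
}

def check_swap_call(arguments: dict) -> bool:
    """Return True if the model's EXECUTE_SWAP arguments satisfy the schema."""
    try:
        validate(instance=arguments, schema=EXECUTE_SWAP_PARAMS)
        return True
    except ValidationError as err:
        print(f"Rejected tool call: {err.message}")
        return False

# Example: calls emitted by the model, already parsed into dicts.
print(check_swap_call({"inputTokenSymbol": "SOL", "inputTokenAmount": 0.5}))  # True
print(check_swap_call({"inputTokenAmount": "half"}))                          # False
```

In a real wallet, a check like this would sit in front of simulation and the confirmation screen, so malformed or unexpected calls never reach the user.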
The next wave: local agents executing trades
We expect a large share of future Web3 activity to be agent-driven: wallets will run local copilots that continuously parse user intent, monitor context, and propose/execute transactions. In that world, “cloud-only” intelligence becomes a bottleneck and a risk:
- Privacy: trading intent, token preferences, and behavioral signals should not be streamed to third-party servers.
- Latency & reliability: agents must work instantly and offline (mobile, hardware wallets, poor connectivity).
- Security boundaries: local agents can keep a tighter loop between intent → policy checks → simulation → user confirmation → signing.
This is why a small, high-accuracy on-device function-calling model is necessary infrastructure for the agent-native wallet era—and why standardizing the intent protocol matters even more when millions of agents need to speak the same language.
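As a purely illustrative sketch of that loop, the control flow might look like the following; every helper here is a hypothetical stub written for this example, not an API shipped with this repository.

```python
# Illustrative sketch of the on-device loop: intent -> policy checks ->
# simulation -> user confirmation -> signing. All helpers are hypothetical stubs.
def parse_intent(message: str) -> dict | None:
    # Stub: in practice the on-device model turns `message` into a tool call.
    return {"name": "EXECUTE_SWAP", "arguments": {"inputTokenSymbol": "SOL"}}

def policy_check(call: dict) -> bool:
    # Stub: local allow-lists, spend limits, chain restrictions, etc.
    return call["name"] in {"SEARCH_TOKEN", "EXECUTE_SWAP"}

def simulate(call: dict) -> str:
    # Stub: a wallet would dry-run the transaction and return a readable preview.
    return f"Swap preview: {call['arguments']}"

def confirm_with_user(preview: str) -> bool:
    # Stub: show the preview on-screen and wait for an explicit confirmation.
    print(preview)
    return True

def sign_and_send(call: dict) -> None:
    # Stub: signing happens locally; keys never leave the device.
    print(f"Signed and sent: {call['name']}")

def handle_user_message(message: str) -> None:
    call = parse_intent(message)
    if call and policy_check(call) and confirm_with_user(simulate(call)):
        sign_and_send(call)

handle_user_message("Swap half my SOL to USDC")
```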
Equally important, this repository serves as a public reference implementation for applying FunctionGemma to a concrete vertical domain. By openly sharing fine-tuning details (data format, training configs, evaluation, and benchmarks), it lowers the barrier for the community to replicate, extend, and standardize on a common intent protocol.
Model Overview
| Property | Value |
|---|---|
| Model | DMind-3-nano |
| Base | google/functiongemma-270m-it |
| Params | 270M |
| Context | 2048 |
| Precision | BF16 (train) |
| Best-performing tokens | SOL, USDC, JUP, RAY, BONK, WIF, ETH, BTC, POPCAT, BOME, TRUMP |
| Chains | solana, ethereum, bsc, base |
Experimental notice: Highest accuracy on the token/chain set above; other assets may need further tuning. Validate outputs before transacting.
Repository Layout
- `model/`: an experimental version of the model weights. Please note that this is a bold exploratory release, and we do not take responsibility for any financial losses incurred from using this model in production environments.
- `src/`: training/eval utilities
  - `train.py` (LoRA or full fine-tune)
  - `evaluate.py` (benchmark evaluation)
  - `prepare_dataset.py` (SFT-ready formatting)
  - `generate_benchmark.py` (100-case benchmark)
  - `config.py` (tools, prompts, token maps)
- `data/`: sample data
  - `training_data.json` (raw; open-sourced subset for reproducibility)
  - `benchmark_dataset.json` (eval set; includes Chinese test prompts by design)
- `results/evaluation_results.json`: sample output
- `run_training.sh`, `requirements.txt`
Quick Start (Training & Eval)
Install:
pip install -r requirements.txt
Train (LoRA default):
python -m src.train \
--model_path /path/to/functiongemma-270m-it \
--dataset_path ./data/training_data.json \
--output_dir ./runs \
--bf16
Switch to full fine-tune: add `--no-use-lora`. For low memory, combine `--use_4bit`/`--use_8bit` with `--gradient_checkpointing`.
Evaluate:
python -m src.evaluate \
--model_path ./runs/<run>/final_model \
--benchmark_path ./data/benchmark_dataset.json \
--output_path ./results/eval_$(date +%Y%m%d_%H%M%S).json
Data utilities:
# Prepare SFT data
python -m src.prepare_dataset --input ./data/training_data.json --output ./data/prepared_dataset.json
# Regenerate benchmark
python -m src.generate_benchmark --output ./data/benchmark_dataset.json
Note: data/prepared_dataset.json is a generated artifact (optional) and is intentionally not committed.
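For a quick local smoke test outside `src/evaluate.py`, the trained checkpoint can be loaded with the standard Hugging Face `transformers` API. This is a minimal sketch only; the exact prompt and tool formatting used for training live in `src/config.py` and may differ from the default chat template shown here.

```python
# Minimal inference sketch (assumes the standard transformers API; the exact
# prompt/tool formatting used in this repo lives in src/config.py).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./runs/<run>/final_model"  # path produced by src.train
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Search for SOL on Solana"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False))
# Expected shape of the output (see "Output Format" below), e.g.:
# <start_function_call>call:SEARCH_TOKEN{symbol:"SOL", chain:"solana"}<end_function_call>
```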
Tool Definitions & Schemas
To ensure interoperability, DMind-3-nano uses strict JSON schemas for tool definitions. Below are the standard definitions used during training and inference.
1. `SEARCH_TOKEN`: used to find token metadata or an address on a specific chain.
{
"name": "SEARCH_TOKEN",
"description": "Search for a cryptocurrency token on-chain to retrieve its metadata or address.",
"parameters": {
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "The ticker symbol of the token (e.g., 'SOL', 'USDC')."
},
"address": {
"type": "string",
"description": "The specific contract address (CA) of the token, if known."
},
"chain": {
"type": "string",
"enum": ["solana", "ethereum", "bsc", "base"],
"description": "The target blockchain network."
},
"keyword": {
"type": "string",
"description": "General search keywords (e.g., project name) if symbol/address are unclear."
}
},
"required": []
}
}
2. `EXECUTE_SWAP`: used to construct a swap transaction intent between two assets.
{
"name": "EXECUTE_SWAP",
"description": "Propose a token swap transaction.",
"parameters": {
"type": "object",
"properties": {
"inputTokenSymbol": {
"type": "string",
"description": "Symbol of the token being sold (e.g., 'SOL')."
},
"inputTokenCA": {
"type": "string",
"description": "Contract address of the token being sold."
},
"outputTokenCA": {
"type": "string",
"description": "Contract address of the token being bought."
},
"inputTokenAmount": {
"type": "number",
"description": "Absolute amount of input token to swap."
},
"inputTokenPercentage": {
"type": "number",
"description": "Percentage of balance to swap (0.0 to 1.0), used if exact amount is not specified."
},
"outputTokenAmount": {
"type": "number",
"description": "Minimum amount of output token expected (optional/slippage related)."
}
},
"required": ["inputTokenSymbol"]
}
}
Output Format
The model outputs the function call wrapped in special tokens (standard FunctionGemma format):
<start_function_call>call:FUNCTION_NAME{key1:val1, key2:val2}<end_function_call>
Example:
User: "Search for SOL on Solana"
Model:
<start_function_call>call:SEARCH_TOKEN{symbol:"SOL", chain:"solana"}<end_function_call>
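A downstream client then needs to turn that wrapper back into structured arguments. The sketch below is one simple way to do so in Python; it is an illustration, not the parser used in `src/`, and it only handles flat, single-call outputs like the example above (the keys are bare, so they are quoted before `json.loads`).

```python
# Minimal sketch: extract the function name and arguments from the wrapper
# format shown above. Illustration only; not the parser used in src/.
import json
import re

CALL_RE = re.compile(
    r"<start_function_call>call:(\w+)\{(.*)\}<end_function_call>", re.DOTALL
)

def parse_call(text: str):
    match = CALL_RE.search(text)
    if not match:
        return None
    name, body = match.group(1), match.group(2)
    # Quote the bare keys (e.g. symbol:"SOL" -> "symbol":"SOL") so the body parses as JSON.
    body = re.sub(r'(\w+)\s*:', r'"\1":', body)
    return name, json.loads("{" + body + "}")

print(parse_call(
    '<start_function_call>call:SEARCH_TOKEN{symbol:"SOL", chain:"solana"}<end_function_call>'
))
# ('SEARCH_TOKEN', {'symbol': 'SOL', 'chain': 'solana'})
```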
License & Governance
- Code: MIT (`LICENSE`)
- Model card intent: Apache-2.0 (as in metadata above)
- Protocol specs (`SEARCH_TOKEN` / `EXECUTE_SWAP`): public domain for maximal adoption
- Contributions are welcome via issues/PRs.