How to use from llama.cpp

Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Archi-medes/LabGuide_Preview

# Run inference directly in the terminal:
llama-cli -hf Archi-medes/LabGuide_Preview

Use pre-built binary

# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Archi-medes/LabGuide_Preview

# Run inference directly in the terminal:
./llama-cli -hf Archi-medes/LabGuide_Preview

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Archi-medes/LabGuide_Preview

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Archi-medes/LabGuide_Preview

Use Docker

docker model run hf.co/Archi-medes/LabGuide_Preview
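Once llama-server is running, any OpenAI-compatible client can talk to it. The sketch below builds a chat-completion request using only the Python standard library; the host, port (8080 is the llama-server default), and the endpoint path are assumptions based on llama-server's OpenAI-compatible API, so adjust them to match how you started the server.

```python
import json
import urllib.request

# Endpoint for the local llama-server; port 8080 is its default,
# override if you launched the server with --port or on another host.
url = "http://localhost:8080/v1/chat/completions"

# OpenAI-style chat payload; llama-server serves the model it was
# started with, so the "model" field is mainly for client compatibility.
payload = {
    "model": "Archi-medes/LabGuide_Preview",
    "messages": [
        {"role": "user", "content": "What is the LabGuide Preview model for?"}
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once llama-server is running locally:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```

The same request works with curl or any OpenAI SDK pointed at the local base URL.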
LabGuide Preview Model
Model Summary
The LabGuide Preview Model is a demonstration release built entirely with Madlab, using its synthetic dataset generator and training workflow.
It is based on LiquidAI/LFM2-700M, adapted to showcase Madlab’s end-to-end capabilities for dataset creation, model training, and assistant deployment.
This model illustrates how applications can leverage Madlab to train their own assistants in a reproducible and accessible way.
It is not intended for production use, but rather as a preview for contributors, collaborators, and community feedback.
Training Data
- Source: Synthetic dataset generated entirely with Madlab’s dataset generator.
- Purpose: Designed to demonstrate Madlab’s ability to produce structured, reproducible training data.
- Scope: Preview-scale dataset, not representative of real-world or production-ready corpora.
Training Process
- Framework: Madlab training pipeline.
- Base Model: LiquidAI/LFM2-700M.
- Workflow: Synthetic dataset generation → Madlab training loop → Magic Judge Evaluation → Preview model release.
- Objective: Demonstrate Madlab’s integrated workflow for building application-specific assistants.
Intended Uses
- Contributor onboarding and workflow validation.
- Demonstration of Madlab’s synthetic dataset generator and training pipeline.
- Benchmarking and experimentation in controlled preview settings.
Limitations
- Demo-only: Not suitable for production or deployment in real-world applications.
- Synthetic data: Training data is fully synthetic and may not reflect natural language distributions.
- Preview scale: Model performance is illustrative, not optimized for accuracy or robustness.
Ethical Considerations
- This model is provided for demonstration and educational purposes.
- It should not be used in applications where accuracy, safety, or reliability are critical.
- Contributors are encouraged to treat outputs as illustrative examples only.
Acknowledgements
- Base model: LiquidAI/LFM2-700M.
- Built and trained with Madlab.
Install from brew

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Archi-medes/LabGuide_Preview

# Run inference directly in the terminal:
llama-cli -hf Archi-medes/LabGuide_Preview