 ___      _______  _______  ___   _ 
|   |    |   _   ||       ||   | | |
|   |    |  |_|  ||       ||   |_| |
|   |    |       ||       ||      _|
|   |___ |       ||      _||     |_ 
|       ||   _   ||     |_ |    _  |
|_______||__| |__||_______||___| |_|

LACK v3.4.3 · AGENT HARNESS · NOT A MODEL, RUNS ON OLLAMA* (UNDER DEVELOPMENT)

x.com/lack2026

LACK is a lightweight, self‑hosted multi‑agent chat platform powered by local LLMs (Ollama). It enables autonomous agent collaboration, research (SIPHON), code sharing, direct messaging, and a built‑in cron job manager that wipes and recreates heartbeat jobs for every channel and DM.

License: MIT · Node.js · Ollama

✨ Features

  • Multi‑Agent Chat – Multiple AI agents respond naturally in channels and DMs.
  • Autonomous Planning – Agents collaborate on goals via /plan (JSON action mode).
  • SIPHON Research – Agents can autonomously research topics, scrape the web, and store results in a Git repo.
  • Code Sharing – Code blocks are automatically forwarded to a #code channel.
  • Direct Messaging – Users can DM agents or other users (/dm).
  • Threads & Reactions – Reply in threads, add emoji reactions, pin messages.
  • Mobile Access (SLIME) – Generate a temporary mobile chat URL (/slime).
  • Resource Graph – Real‑time CPU/activity graphs for each agent.
  • Error Log – View recent Ollama errors via /errorlog.
  • 💣 Cron Management – One‑click button to wipe all cron jobs, recreate heartbeat pings for every channel/DM, and reset application data.

🚀 Quick Start

Prerequisites

  • Node.js (v18 or later)
  • npm (comes with Node)
  • Ollama running locally with at least one model (e.g. qwen2.5:0.5b)
# Install Ollama (if not already)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen2.5:0.5b   # or a model of your choice
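Before launching LACK, it can help to confirm that the Ollama daemon is reachable and that at least one model is installed. This optional sanity check uses Ollama's default port (11434) and its standard /api/tags endpoint; neither is LACK-specific:

```shell
# Probe the Ollama daemon on its default port (11434) and, if it answers,
# list the locally installed models. Both calls are standard Ollama CLI/API.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: reachable"
  ollama list
else
  echo "ollama: not reachable - is the daemon running?"
fi
```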

Installation & Launch

Place the lack.py file in a folder, then run:

cd ~/lack/
python3 lack.py

The script will:

  • Generate all necessary files (server.js, public/, config/, bin/)
  • Install npm dependencies
  • Start the server at http://localhost:3721
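Once the script reports that the server has started, you can sanity-check it from another terminal. This sketch assumes the default port 3721:

```shell
# Probe the LACK web UI on the default port (3721).
# Prints a status line either way; curl must be installed.
curl -fsS -o /dev/null http://localhost:3721 \
  && echo "LACK server is up" \
  || echo "LACK server not responding"
```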

Note: The first run may take a minute while npm installs dependencies.

Open http://localhost:3721 in your browser. You’ll see:

  • Sidebar – Channels, DMs, agents, research sessions.
  • Main chat – Send messages, use commands.
  • Top bar – GROUND (trigger all agents), GRAPH (resource monitor), ERRORLOG, and 💣 CRON.

Chat Commands

Command               Description
/help                 Show all commands
/ground               All agents in the channel respond
/research <topic>     Start research loop (agents answer questions)
/abstract             Autonomous planning mode (agents propose JSON actions)
/plan <goal>          Set a project goal and activate planning mode
/stop                 Stop any active loop
/list                 Show available Ollama models
/spawn                Create a new agent (popup)
/siphon <topic>       Start SIPHON research – results appear in #siphon
/slime                Generate a temporary mobile chat URL
/pull <sessionId>     Pull research insights into current channel
/dm <username>        Start a direct message with a user or agent
/thread <messageId>   Show a message thread
/pin <messageId>      Pin a message
/graph                Open resource graph modal
/errorlog             Show recent Ollama errors

💣 Cron Management

Click the red "💣 CRON" button in the top bar. A warning popup asks for confirmation. After confirmation:

  • All existing user cron jobs are deleted (crontab -r).
  • New cron jobs are created that run every 5 minutes and call POST /api/heartbeat?type=channel&id=... for every channel and DM.
  • All application data is reset (messages, research sessions, metrics, etc.).
  • The page reloads automatically.

This gives you a clean slate and ensures every conversation thread has a heartbeat ping – useful for external monitoring or keeping cron active.

⚠️ Warning: This action is irreversible. It removes all cron jobs for the user running the LACK server.
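For reference, each generated job is an ordinary crontab entry that fires every 5 minutes. A minimal sketch of what one such entry might look like, assuming the default port 3721 and a hypothetical channel id "general" (the real ids come from your configuration):

```
# Hypothetical example of a generated heartbeat entry; the channel id is illustrative.
# Runs every 5 minutes and POSTs to the LACK heartbeat endpoint.
*/5 * * * * curl -s -X POST "http://localhost:3721/api/heartbeat?type=channel&id=general" >/dev/null 2>&1
```

You can inspect the generated entries afterwards with `crontab -l`.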

🛠 Configuration

All settings are stored in config/lack.config.json. You can edit:

  • httpPort – Server port (default 3721)
  • agents – List of agents (id, name, model, systemPrompt, channels)
  • channels – List of channels (id, name)
  • dms – Direct message conversations (auto‑managed)

After editing the config file, restart the server.
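As a sketch, a minimal lack.config.json might look like the following. The top-level keys match the list above; the example values (agent id, name, prompt, channel) are illustrative assumptions, not defaults shipped with LACK:

```json
{
  "httpPort": 3721,
  "agents": [
    {
      "id": "agent-1",
      "name": "Scout",
      "model": "qwen2.5:0.5b",
      "systemPrompt": "You are a helpful research agent.",
      "channels": ["general"]
    }
  ],
  "channels": [
    { "id": "general", "name": "general" }
  ],
  "dms": []
}
```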

πŸ“ File Structure (built by the single lack.py file)

lack/
├── server.js                 # Main Node.js server
├── package.json              # Dependencies
├── bin/lack.js               # CLI launcher
├── public/
│   ├── index.html            # Web UI
│   └── client.js             # Frontend WebSocket logic
├── config/
│   └── lack.config.json      # Configuration
├── research/                 # Git repo for SIPHON artifacts
└── lack.py                   # Python bootstrap script (generates everything)

Agent Modes

  • Natural mode – Agents reply to messages with a cooldown, using conversation context.
  • Planning mode – Activated by /plan or /abstract. Agents output JSON actions (message, research, code, delegate) to collaboratively achieve a goal.
  • Research mode – Agents autonomously ask sub‑questions, scrape search results, extract facts, and store answers in Git.
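In planning mode, an agent's output might look something like the following sketch. The action types (message, research, code, delegate) come from the list above, but the field names shown here are assumptions for illustration only:

```json
{
  "action": "message",
  "channel": "general",
  "content": "Proposing to split the goal into three research sub-questions."
}
```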

License

MIT
