How It Works Preview Security Skills Compare Docs Contact
Sign In Get Started

Manage your servers
with natural language.

Tell Claude what you need. ManageLM agents execute it locally — using a local LLM, scoped to only the commands you allow. No SSH. No scripts. No risk.

Claude — ManageLM MCP Session
// You type natural language in Claude. That's it.

You: Restart nginx on web-server-01 and show its status

Claude: I'll restart nginx and verify it's running.

  → Calling Production__web-server-01__services__restart
    service: "nginx"

  ✓ nginx restarted successfully
   Active: active (running) since Sun 2026-03-08 14:23:01 UTC
   PID: 48291  |  Memory: 12.4M  |  Tasks: 5

You: Now check disk usage across all production servers

  → Querying 8 agents in group "Production"…

Built on trusted foundations

Anthropic MCP · Claude · Ollama · PostgreSQL · WebAuthn / FIDO2 · Ed25519 · OAuth 2.0 PKCE · WebSocket · Fastify · TypeScript
How It Works
Three layers. Zero complexity.

From natural language to server execution in seconds — with every command validated and constrained.

STEP 01

Talk to Claude in plain English

Use the Claude app — the same AI you already know — to describe what you need. "Restart the app", "Check logs", "Update packages on all staging servers".

Natural language → MCP → Portal
[Diagram: Claude MCP session — "Restart nginx on web-01" → services__restart {nginx} → ✓ nginx restarted · active → "Check disk on all production"]
STEP 02

Portal authenticates & routes

The ManageLM cloud portal verifies your identity via OAuth 2.0, checks permissions, identifies the target agent, and dispatches the task over a secure WebSocket channel.

Auth · RBAC · Skill validation · Routing
[Diagram: Cloud Portal (Auth · RBAC · Skill validation · Routing) dispatching over outbound WebSocket to web-01 (nginx · docker, 3 skills), db-01 (postgresql, 2 skills), app-01 (services · files, 4 skills). Zero inbound ports on any server.]
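The routing step described above can be sketched as a pure function. All types, field names, and error messages here are illustrative assumptions, not ManageLM's actual API:

```typescript
// Hypothetical sketch of the portal's routing step: given an authenticated
// user and a target server, check RBAC, confirm the skill is assigned,
// and pick the connected agent to dispatch to.
interface Agent {
  server: string;
  group: string;
  skills: string[];
  connected: boolean;  // agents hold an outbound WebSocket to the portal
}

interface User {
  id: string;
  allowedGroups: string[];
}

function routeTask(user: User, server: string, skill: string, agents: Agent[]): Agent {
  const agent = agents.find(a => a.server === server && a.connected);
  if (!agent) throw new Error(`no connected agent for ${server}`);
  if (!user.allowedGroups.includes(agent.group)) throw new Error("RBAC: access denied");
  if (!agent.skills.includes(skill)) throw new Error(`skill not assigned: ${skill}`);
  return agent;  // the portal would now dispatch over this agent's WebSocket
}
```

Because agents dial out and hold the connection open, the portal never needs to reach into your network — which is what makes the "zero inbound ports" claim possible.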
STEP 03

Agent executes locally with a local LLM

The lightweight agent uses Ollama (or any compatible LLM) running on your server to interpret the task, generate commands, validate each one against the skill's allowlist, and execute. Sensitive data never leaves the machine.

Local LLM · Command validation · Sandboxed
[Diagram: YOUR SERVER — DATA STAYS HERE. Local LLM (Ollama) interprets the task and generates commands; the command allowlist is hard-enforced in code, not prompts. Example run: $ systemctl restart nginx → ✓ exit 0, within the 120s limit and 8KB output cap.]
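The interpret → validate → execute loop above can be sketched as follows. `llm` and `exec` are injected stand-ins for the local Ollama call and a sandboxed shell runner; every name in this sketch is hypothetical:

```typescript
// Illustrative agent execute step: a local model proposes commands, and
// each one must match the skill's allowlist before it is run.
function executeTask(
  task: string,
  llm: (task: string) => string[],   // stand-in for a local Ollama completion
  allowlist: RegExp[],               // the skill's permitted command patterns
  exec: (cmd: string) => string,     // stand-in for a sandboxed runner
): string[] {
  return llm(task).map(cmd => {
    if (!allowlist.some(p => p.test(cmd))) {
      // Hard-enforced in code, not prompts: a hallucinated or injected
      // command simply never reaches the shell.
      throw new Error(`blocked by allowlist: ${cmd}`);
    }
    return exec(cmd);
  });
}
```

The key design point is that the model's output is treated as untrusted data, not as instructions — validation happens after generation, in ordinary code.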
Preview
See it in action.

A clean, dark interface built for sysadmins who need clarity, speed, and full control.

[Screenshot: the ManageLM dashboard at app.managelm.com]
Security
Security isn't a feature.
It's the architecture.

Every layer prevents unauthorized actions — even if the LLM hallucinates or faces prompt injection.

Command Allowlisting

Skills define explicit permitted commands. Every AI-generated command is validated in code. Anything outside is blocked.
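A minimal sketch of what hard-enforced allowlisting means in practice — the patterns below are invented examples, not the product's real skill definitions:

```typescript
// Example allowlist for a hypothetical nginx skill: only these exact
// command shapes may ever execute.
const nginxSkill: RegExp[] = [
  /^systemctl (status|restart|reload) nginx$/,
  /^nginx -t$/,
];

// Validation happens in code, after the LLM has produced its output.
function isAllowed(command: string, allowlist: RegExp[]): boolean {
  return allowlist.some(pattern => pattern.test(command.trim()));
}

// isAllowed("systemctl restart nginx", nginxSkill) → true
// isAllowed("rm -rf /", nginxSkill)                → false
```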

Local LLM — Data Stays On-Server

Task interpretation runs locally via Ollama. Passwords, configs, logs — nothing leaves the machine.

Read-Only by Default

Agents with no allowed_commands can only run read-only operations. Write access requires explicit config.

Zero Inbound Ports

Agents connect outward via WebSocket. Your servers never expose a port — no SSH, no VPN, no open attack surface.

Secrets Hidden from AI

Secrets are stored as environment variables. The LLM only sees $VAR_NAME — actual values are injected at execution time.
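One way this late substitution could work — a sketch, with the function name and regex as assumptions rather than ManageLM's implementation:

```typescript
// The LLM only ever emits placeholder form ("psql $DATABASE_URL …");
// real values are swapped in on the server, just before execution.
function injectSecrets(command: string, secrets: Record<string, string>): string {
  return command.replace(/\$([A-Z_][A-Z0-9_]*)/g, (match, name) =>
    name in secrets ? secrets[name] : match  // unknown vars left untouched
  );
}

// injectSecrets("echo $TOKEN", { TOKEN: "abc" }) → "echo abc"
```

Because substitution happens after the model's turn is over, the secret value can never appear in a prompt, a completion, or a conversation log.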

Three-Layer Enforcement
1
Skill Scope
Enforced
2
Command Allowlist
Enforced
3
Execution Sandbox
Enforced
✓ LLM is untrusted by design

The AI generates commands, but every command is validated in code before execution. Prompt injection or hallucinations are blocked.

↻ Execution limits per task

Max 10 turns · 120s timeout · 8KB output cap. Every operation logged in a full audit trail.
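The quoted limits could be enforced roughly like this — the numbers come from the page, but the code itself is an illustrative sketch:

```typescript
// Per-task execution limits as stated above: 10 turns, 120 s, 8 KB output.
const LIMITS = { maxTurns: 10, timeoutMs: 120_000, outputCapBytes: 8 * 1024 };

// Truncate command output before it is returned (or fed back to the model).
// Char-based slice for brevity; a real implementation would cut on byte
// boundaries to avoid splitting multibyte characters.
function capOutput(output: string): string {
  const bytes = Buffer.byteLength(output, "utf8");
  return bytes <= LIMITS.outputCapBytes
    ? output
    : output.slice(0, LIMITS.outputCapBytes) + "\n…[output truncated at 8KB]";
}
```

Capping output matters for more than tidiness: it bounds how much server data a single task can exfiltrate, even if every individual command was allowed.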

Built-in Skills
30 skills. 230+ operations.

From systemd to Kubernetes, databases to VPNs — every skill is security-scoped with exact command allowlists.

Services — 11 ops
Web Server — 11 ops
NoSQL — 11 ops
Containers — 10 ops
Files — 10 ops
Email — 10 ops
Database — 9 ops
Kubernetes — 9 ops
Virtualization — 9 ops
Certificates — 8 ops
Message Queue — 8 ops
Users — 8 ops
Web Apps — 8 ops
Packages — 7 ops
Firewall — 7 ops
Monitoring — 7 ops
Security — 7 ops
Storage — 7 ops
Backup — 7 ops
Git — 7 ops
LDAP — 7 ops
Proxy — 7 ops
System — 7 ops
Network — 6 ops
DNS — 6 ops
Logs — 6 ops
File Sharing — 6 ops
VPN — 6 ops
Automation — 5 ops
LLM — 9 ops
+ Custom Skills — unlimited
Each skill defines exact allowed commands — nothing more, nothing less
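The page doesn't publish its skill schema, so the shape below is purely hypothetical — it only illustrates the stated idea that each skill carries an exact allowed_commands list and is read-only unless configured otherwise:

```typescript
// Hypothetical custom-skill definition; field names are illustrative.
interface Skill {
  name: string;
  description: string;
  allowed_commands: string[];  // exact permitted command patterns
  read_only: boolean;          // write access requires explicit opt-in
}

const logsSkill: Skill = {
  name: "logs",
  description: "Inspect service logs",
  allowed_commands: ["journalctl -u * --since *", "tail -n * /var/log/*"],
  read_only: true,
};
```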
Why ManageLM
Not just another management tool.

The only platform combining AI automation with hard-enforced security.

Capability | ManageLM | SSH + Scripts | Ansible / Puppet | Generic AI
Natural language interface | ✓ | ✗ | ✗ | ✓
Command allowlisting (hard-enforced) | ✓ In code | ✗ | ~ Limited | ✗
Local LLM (data on-server) | ✓ | N/A | N/A | ✗ Cloud only
Zero inbound ports | ✓ | ✗ Port 22 | ✗ SSH | ~ Varies
No learning curve | ✓ Just talk | ✗ Bash | ✗ YAML | ✓
Skill-scoped security | ✓ | ✗ Full access | ~ Roles | ✗
Full audit trail | ✓ | ~ Manual | ✓ | ✗
Multi-tenant RBAC | ✓ | ✗ | ~ Limited | ✗
Platform
Everything you need at scale.

Multi-Tenant Teams

Owner, admin, member roles with granular permissions. Invite teammates, scope access per server or group.

Server Groups

Organize agents into groups. Run operations across entire groups with a single request.

Scheduled Tasks

Cron-based schedules for backups, log rotation, health checks — all automated.

Webhooks & API Keys

Real-time notifications on events. Full REST API for integration into existing workflows.

Full Audit Trail

Every action logged with timestamps, IPs, and full context. Complete accountability.

Passkeys & MFA

WebAuthn/FIDO2 passwordless login. Multi-factor auth and IP whitelisting for MCP.

Ready to manage servers
the intelligent way?

Free to start. Deploy your first agent in under 5 minutes. No credit card required.

Contact
Get in touch.

Questions, demos, or enterprise needs? We'd love to hear from you.

Response Time

We typically respond within 24 hours on business days.
