Run open LLMs on your own computer

VirexaLLM is the desktop runtime for open-source language models. Llama, Mistral, Phi-3, Gemma, Qwen, DeepSeek — served from localhost:1775/v1 with the OpenAI API you already use. Private by default. macOS, Windows, Linux.

Local inference, without making it an ops project

100%

Local Inference

Prompts, files, and conversations never leave your device

1B–8B

Param Sweet Spot

Llama, Mistral, Phi-3, Gemma, Qwen, DeepSeek — quantized and fast

~4 GB

Typical RAM Footprint

Q4_K_M quants run on ordinary laptops, no GPU required
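The footprint figure is simple arithmetic: weight size ≈ parameters × bits-per-weight ÷ 8. A quick sketch (the ~4.8 bits/weight average for Q4_K_M is an assumption; the real figure varies with the layer mix):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough size of a model's quantized weights in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed ~4.8 bits/weight average for a Q4_K_M quant
print(round(quantized_size_gb(8e9, 4.8), 1))    # 8B model  → 4.8
print(round(quantized_size_gb(3.8e9, 4.8), 1))  # 3.8B model → 2.3
```

Add a couple of GB for KV cache and runtime overhead and an 8B Q4_K_M model lands comfortably in a 16 GB laptop.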

localhost:1775

OpenAI-Compatible API

Point the OpenAI SDK, Cursor, or LangChain at your machine

30+ tok/s

On Apple Silicon

Metal acceleration on M-series; CUDA and ROCm where available

3 OS

macOS · Windows · Linux

Signed, notarized desktop binaries for every workstation

How it works

From install to local inference in four moves

1. Install

Drop the VirexaLLM app on your machine.

Signed installers for macOS (Apple Silicon + Intel), Windows, and Linux. Under 100 MB, no dependencies.

2. Pick a model

One-click download from the curated library.

Llama 3, Mistral, Phi-3, Gemma, Qwen, DeepSeek-R1 — each with a quantization tuned for your CPU and RAM.

3. Chat or serve

Use the built-in chat UI, or serve the local API.

Flip the server on and http://localhost:1775/v1 starts speaking the OpenAI format to any tool you already use.
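Because the server speaks the OpenAI chat-completions format, talking to it needs nothing beyond the standard library. A minimal sketch, assuming the server is running and that `llama-3-8b-instruct` is the name your local library shows (the model name is an assumption, not a fixed identifier):

```python
import json
import urllib.request

BASE = "http://localhost:1775/v1"

def chat_payload(model: str, prompt: str) -> dict:
    # Standard OpenAI chat-completions request body
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask(prompt: str, model: str = "llama-3-8b-instruct") -> str:
    req = urllib.request.Request(
        f"{BASE}/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            # Local servers generally ignore the key, but SDKs expect one
            "Authorization": "Bearer not-needed",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping in the official OpenAI SDK is the same change: point `base_url` at `http://localhost:1775/v1` and pass any placeholder API key.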

4. Stay private

Nothing ever leaves the device.

Air-gap mode blocks every outbound call. Zero telemetry. Reproducible builds. Open weights you can inspect.

A desktop LLM runtime. Everything you need. Nothing you don't.

Local inference, a polished chat UI, and an OpenAI-compatible API server — shipped as one signed binary so your team stops wrangling Python environments.

Run LLMs on your laptop

Download a model, double-click, start chatting. No cloud account, no API key, no data leaving your machine.

OpenAI-compatible local API

Expose a localhost endpoint at http://localhost:1775/v1. Point the OpenAI SDK, Cursor, or LangChain at it and ship.

One-click model library

Browse, download, and hot-swap curated open models — with Q4_K_M, Q5_K_M, and Q8_0 quants picked for your hardware.
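The quant-picking logic comes down to fitting weights into your RAM budget. An illustrative helper, not VirexaLLM's actual API, with assumed bits-per-weight averages for each quant type:

```python
# Rough bits-per-weight averages per quant type (assumed; varies by model)
QUANTS = {
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
}

def pick_quant(n_params: float, ram_budget_gb: float) -> str:
    """Highest-quality quant whose weights fit the RAM budget.

    Falls back to Q4_K_M (the smallest option here) when nothing fits.
    """
    for name, bpw in sorted(QUANTS.items(), key=lambda kv: -kv[1]):
        if n_params * bpw / 8 / 1e9 <= ram_budget_gb:
            return name
    return "Q4_K_M"

print(pick_quant(8e9, 6.0))   # 8B model, 6 GB budget → Q5_K_M
```

Higher bits-per-weight means less quantization loss, so the picker walks from the largest quant down and takes the first one that fits.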

Private by default

Zero telemetry option. Air-gap mode disables every outbound call. Your prompts stay on your disk, forever.

Built-in chat UI

Desktop app with conversations, file attachments, prompt templates, side-by-side model comparison, and instant model switching.

Lightweight everywhere

CPU-first with Metal, CUDA, and Vulkan acceleration when present. Ships as a small signed binary, not a bloated 4 GB Electron app.

Drops into the tools you already use

Point the OpenAI SDK, Cursor, continue.dev, LangChain, or the Vercel AI SDK at http://localhost:1775/v1 and your app is now running against a model on your machine. No rewrites.

VirexaLLM local LLM runtime

Open model families and the tools you already use

Curated open-weight models you can download in one click — served to every SDK and IDE that speaks the OpenAI format.

Browse the full model catalog

Privacy your security team will actually sign off on

Nothing leaves the device by default. No cloud attack surface. Open weights, reproducible builds, and code-signed installers shipped from day one.

100% Local Inference
Zero Telemetry Option
Open-Source Core
Air-Gapped Ready
Code-Signed Binaries
Reproducible Builds
SOC 2 (Desktop Build)
GDPR-Friendly by Design
View security details →

VirexaLLM for Teams

Deploy VirexaLLM across your fleet

Push the same model set to every workstation, lock access by device, and run inference behind the firewall. Admin console included — air-gapped environments supported.

Fleet model pushes

Ship a curated model list to every workstation. Version-pin, block unapproved weights, and update on your schedule.

Per-device licensing

One admin console for seats, machines, and model access. Offboard a laptop and its models follow — or stay revoked.

Air-gapped installs

Side-load signed bundles onto isolated networks. No phone-home, no update checks, no cloud dependency of any kind.

Regulated industries

Purpose-built for healthcare, legal, finance, and defense workflows where prompts cannot cross the device boundary.

Your laptop is the server now

Download VirexaLLM and run Llama, Mistral, Phi-3, Gemma, or Qwen locally in minutes. Free desktop app for macOS, Windows, and Linux — your prompts never leave the device.