
How to get started with the Pi coding agent (on a VPS)

A few people asked me about my coding agent setup. This is a brief guide to setting up the Pi coding agent on a VPS (virtual private server) with a hosted LLM service. It works with OpenRouter, and should work with anything that supports the OpenAI (ChatGPT) API, including local models, Anthropic, OpenAI, and so on. There are other APIs available too, and Pi has a great unboxing experience, as I found while writing this post.

I have used Pi for a month or two on my laptop, in a sandbox. I have Claude Code on a VPS, but also wanted Pi there. Chris Parsons (see the Afterword for his blog) asked about this, so I wrote this how-to for myself, then ran through it and updated it. It was easier than I thought.

The idea of using a VPS (a virtual machine in the cloud) is that it provides a sandbox to run an agent in. If the agent deletes your home folder, you can just recreate it. There are other ways to sandbox agents, but I found this by far the easiest and most comforting.

Steps in this recipe

  1. Install Pi
  2. Put your API key in an environment variable, so Pi can access it
  3. Tell Pi where your LLM is hosted, and what model you want to use
  4. Start Pi and enjoy

Or that is what I thought. It is simpler than that.

  1. Install Pi
  2. Follow the guidance and complete the installation for your model and provider in small steps.
  3. /reload in Pi and enjoy (*)

(*) after fixing syntax errors in ~/.pi/agent/models.json where all of your configuration can live, unless you decide to separate it out.
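A quick way to catch those syntax errors before running /reload is to parse the file with Node, which is already there since Pi runs on it. This assumes the default config path:

```shell
# parse ~/.pi/agent/models.json and complain about any stray comma;
# assumes the default config location
node -e 'JSON.parse(require("fs").readFileSync(process.env.HOME + "/.pi/agent/models.json", "utf8"))' \
  && echo "models.json parses cleanly" \
  || echo "syntax error (or missing file)"
```

If it prints a parse error, the line and column in Node's message point you at the offending spot.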

I thought it was still useful to show my workings. The Pi UI is a lot more responsive than Claude Code, and guides you on your way - but I did not notice that at first. I hope this helps. Have fun!

Install Pi

Pi assumes you have Node.js installed. If you don't, the Node.js website has installation instructions, and it is usually available in the package manager of your VPS's Linux distribution.
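A quick way to check is the snippet below; the apt command in the comment is an assumption (Debian/Ubuntu), so substitute your distribution's package manager:

```shell
# check whether Node.js is already present; if not, install it
# (e.g. `sudo apt install nodejs npm` on Debian/Ubuntu --
# adjust for your distribution)
if command -v node >/dev/null 2>&1; then
  echo "found node $(node --version)"
else
  echo "node not found - install it first"
fi
```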

Once you have Node.js, run

npm install -g @mariozechner/pi-coding-agent

in the terminal.

Now you can start pi and it will guide you to where to find the rest of the documentation. This is what it showed me:

Warning: No models available. Use /login to log into a provider via OAuth or API key. See:
   [somewhere on your disk]/lib/node_modules/@mariozechner/pi-coding-agent/docs/providers.md
   [somewhere on your disk]//node/24.0.1/lib/node_modules/@mariozechner/pi-coding-agent/docs/models.md

This is one of the surprising things I like best about Pi: the documentation (for the version you are using) is on your machine, and it goes out of its way to point you and your model to that documentation, so you can figure out how to use and extend it in a conversation.

We can’t have a conversation just yet, because we have no provider and no model.

So we need to tell Pi two things:

  • what is the ‘provider’ (the party or server hosting your model(s))
  • what models are available there

For the second point you need a bit more detail than I would like; hence this post. I will take OpenRouter as the provider and go there to find the cheapest model I can - we just want to fire off a prompt and see if Pi, the provider, and a model can work together.

One thing you can do in Pi without a model is use ! to run a shell command. I’m going to run cat on the providers doc to see how I can set up a provider.

!cat [..]/lib/node_modules/@mariozechner/pi-coding-agent/docs/providers.md

Now we can edit $HOME/.pi/agent/models.json to set our provider endpoint and API key. You can specify models there as well, but that is not necessary to get started; /model in the UI lets you search the provider’s models.

I had already set up Pi on another machine, so I asked it. The next section was co-written with Qwen3.6:27b:

Tell Pi where your LLM is hosted (by Qwen)

Pi has built-in OpenRouter support. You just need to configure it in ~/.pi/agent/models.json.

Quick Setup

Create or edit ~/.pi/agent/models.json:

   {
     "providers": {
       "openrouter": {
         "baseUrl": "https://openrouter.ai/api/v1",
         "apiKey": "OPENROUTER_API_KEY",
         "api": "openai-completions"
       }
     }
   }

That’s it — no models array needed. Pi loads all built-in OpenRouter models automatically. Your API key can be:

  • An environment variable name (e.g., “OPENROUTER_API_KEY”)
  • A literal key (e.g., “sk-or-…”)
  • A shell command (e.g., “!op read ‘op://vault/item/credential’”)
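For the first option, I put the export in my shell profile so the key survives reboots; the key below is a placeholder, and referencing it by name in models.json keeps the secret out of the config file:

```shell
# export the key in your profile and put the *name*
# "OPENROUTER_API_KEY" (not the key itself) in models.json
echo 'export OPENROUTER_API_KEY="sk-or-REPLACE-ME"' >> ~/.profile
. ~/.profile
echo "OPENROUTER_API_KEY is ${OPENROUTER_API_KEY:+set}"
```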

After Configuring

  1. Open Pi and run /model to see available OpenRouter models
  2. Pick one with /model openrouter/anthropic/claude-sonnet-4 (or whatever model you want)

See the full compat.openRouterRouting options in the models docs for order, ignore, max_price, preferred_min_throughput, etc.
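I have not needed those routing options yet, but as a sketch of what a per-model compat.openRouterRouting entry might look like in models.json - the model id and provider names below are placeholders, and the models docs have the exact shape:

```json
{
  "models": [
    {
      "id": "openrouter/example/some-model",
      "compat": {
        "openRouterRouting": {
          "order": ["provider-a", "provider-b"],
          "ignore": ["provider-c"],
          "max_price": { "prompt": 1.0, "completion": 2.0 }
        }
      }
    }
  ]
}
```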

First prompt

Finally, we can issue a prompt. I learned (from Claude Code) that asking a model to write a haiku is a way to get started when you just want to see if it works. Sometimes I put my frustrations in the prompt. Not this time.

Model: qwen3.5:35b-a3b

Me: write a haiku

(reasoning tokens:)

The user is asking me to write a haiku. A haiku is a form of Japanese poetry with three lines following a 5-7-5 syllable pattern. Let me write one about nature, which is the traditional subject.

Soft rain falls gently, Droplets kiss the green leaves now, Spring’s breath fills the air.

I hope this works for you. Enjoy!

Afterword

Chris Parsons asked me what it took to get going with Pi, amongst other things, as we were discussing his How I use AI to Code. As I am getting questions on my setup and how I use it, this seemed like a good place to start. It would be nice to have some more people around me using open source agents with open-weight and open source models.