Category: Planning

Getting ready to build

  • Models (March 2026 Edition) – What LLMs should I use with my OpenClaw

    I am adding the date because of this post’s short shelf life. New models are coming out every week, and the next couple of weeks will be no exception. Every couple of months, we will have to rewrite this post.

    Here’s the deal with OpenClaw and the Large Language Models you should use with it. You should use Claude Opus 4.6. That is the best. It supposedly has the best personality as an assistant, the strongest resistance to prompt-injection attacks, and the highest overall intelligence.

    But there’s a catch. In the early days of OpenClaw, people could use their Anthropic Claude Max subscriptions to run it. You paid 90 EUR/$100 USD a month (or double that if you needed more) and got what, to OpenClaw, looked like unlimited LLM access. You just connect your subscription to OpenClaw via what’s often called the OAuth method, and it draws on your monthly subscription for AI access. But Anthropic cracked down on this approach, first banning the open-source Claude Code alternative, OpenCode, from accessing Claude LLMs, and then quickly suspending OpenClaw users’ accounts too. Alex Finn claims most people are still using this approach undisturbed. Besides, what’s the worst that can happen to you? You’ll have to make a new account with a new email? (This conversation, by the way, is amazing social scientist bait. You should listen despite the length.) Let’s just say I am not going to recommend you try this approach (and I won’t even wink, as Alex did in the interview). Unless I can’t make anything else work reasonably, I am not going to try it myself. Of course, you can also pay Anthropic for Claude Opus through their API, but after Matthew Berman ran up a 4-digit (USD, not HUF) bill in a single YouTube live session, I think I am going to stay away from anything along these lines.

    So what are the alternatives? There’s, of course, OpenAI’s ChatGPT subscription, and, probably thanks to Peter Steinberger, who, since starting his work on OpenClaw, has also joined OpenAI’s ranks, they are not banning OpenClaw users. The problem is that while Peter Steinberger won’t shut up about how wonderful OpenAI’s Codex is compared to Claude Code for coding, I have never seen anyone talk enthusiastically about using GPT models in OpenClaw. Alex Finn mentioned that he has GPT5.4 check in every hour on all his agents, making sure they are doing what they are supposed to be doing (and not off on some weird token-hungry and ultimately useless tangent). And this use makes perfect sense to me. He also said this is not very token-hungry; he just uses the $20 subscription. I suppose I will be getting that $20 subscription and trying this myself. I suspect Steinberger is working hard on something that will work great for OpenClaw (and agents in general), but if something is coming from OpenAI, we do not have it yet.

    So what are the alternatives? There are the Chinese models. And it is hard to say how well these will perform overall, especially as the lead agent. There’s very little info out there beyond how Claude Opus is great, Claude Sonnet is OK, and that’s that. The lack of info about GPT models (especially given the circumstances) screams very loudly, but I do not know what to make of the lack of info about Chinese models. There’s this one tweet by Steinberger soon after Anthropic shut everyone out, saying “Been running Clawd on  @MiniMax_AI the last few days after optimizing the implementation, and it’s a really great alternative. Now recommending this over Anthropic.” There’s also a discussion that suggests he tried several models. And since this tweet, MiniMax went from M2.1 to M2.5, and now, with some explosive fanfare, they jumped to M2.7. I don’t suppose it got worse. This is what I will be trying first.

    Beyond Peter’s personal recommendation (and his tweaking to make OpenClaw work better with it), MiniMax has a couple of great advantages over other open models. First, they just started a subscription with extremely generous limits that at least one YouTuber I trust on OpenClaw matters (who was commissioned to promote it) claims we would be happy to use with OpenClaw even at the lowest price point of $100 a year. (I just signed up for a year myself just to test.) He claims that OpenClaw is simply not token- or API-hungry enough to use up the 1500 model requests / 5 hours, and that the speed is perfectly OK at 50 tokens per second (100 tokens per second off-peak). Second, MiniMax is actually not that big. It has 230b parameters, with only 10b active (read: it is fast and does not need that much RAM; 256GB is more than enough, and it may even run OK on 192GB). My team can run it locally on our two interconnected DGX Sparks (Asus Ascents, actually) with mixed precision (read: without being too dumbed down) and a near-full context window (read: a context window near its theoretical maximum of around 200k). I don’t know yet how fast, but from what I know of the hardware, I suspect it would be usable and, more importantly, it wouldn’t flinch if a dozen or so people/agents were querying it at the same time, even if the output tokens-per-second weren’t that high. It would be usable for a large lab or a small department full of OpenClaws, with a $7,000–$8,000 hardware investment. Finally, my limited research shows that, when it comes to security (e.g., prompt-injection attacks), MiniMax is a good model, even compared to some of its larger open cousins.
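    To see why 192–256GB is the relevant range, here is a back-of-envelope RAM estimate for a 230b-parameter mixture-of-experts model. The 1.2x overhead factor (KV cache, activations, runtime buffers) is my own rough assumption, not a MiniMax spec:

```python
# Back-of-envelope RAM estimate for a mixture-of-experts model.
# Note: all 230B parameters must sit in memory, even though only
# ~10B are active per token (activity affects speed, not footprint).

def model_ram_gb(total_params_b: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Approximate RAM in GB: params * bytes/param * overhead.
    The overhead factor is a rough allowance for KV cache and buffers."""
    bytes_per_param = bits_per_weight / 8
    return total_params_b * bytes_per_param * overhead

for bits in (16, 8, 6, 4):
    print(f"{bits}-bit: ~{model_ram_gb(230, bits):.0f} GB")
```

At 8-bit this lands around 276GB and at 4-bit around 138GB, which is roughly why 256GB is comfortable and 192GB is borderline: mixed-precision deployments fall between those two quantization levels.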

    There are other open models out there that are considered highly capable. Qwen 3.5 has a 397b variant. Kimi K2.5 is a 1t model and is considered outstanding. NVIDIA is now also giving away a free API key that lets you use it. (Use it while it lasts.) Here’s a video showing you how (and yes, he is also pushing Hostinger, but you do not need Hostinger to make this work). There are the GLM 4.7 (355b) and GLM5 (745b) models, with a subscription ($84 for the year) that also allows OpenClaw OAuth. The limits are brutal (80 prompts per 5 hours and 400 per week on GLM 4.7, and GLM5 will be half to a third of that), but I got it for the year as a reasonable backup option. Clearly, all of these models are much bigger than MiniMax. While you could run some of them on a 512GB Mac Studio (and all of them on two interconnected Mac Studios), those cost $13-14k each. There are some other massively large open models out there, like Meta’s Llama 4 or Mistral’s new Large. But they lag behind on capabilities.

    And anything smaller, I wouldn’t mess with unless the whole Internet is lighting up about how awesome it is at some specific task. There’s one exception: the heartbeat of OpenClaw. This is what checks periodically to see if everything is running smoothly and, if not, whether it should be running something. Most people set this to run on Claude Haiku or a smaller GPT model. In my personal testing, GPT-OSS-20b appeared to be the most competent at this task, and it can run locally on a 24GB RAM Apple Silicon Mac or on the free Oracle Cloud 4-core Ampere instance.
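    For the curious, a heartbeat along these lines can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenClaw’s actual implementation; it assumes a model served locally by Ollama at its default endpoint, and the prompt wording and function names are mine:

```python
# Minimal heartbeat sketch (illustrative only): every interval, send the
# agent's status report to a small local model and collect any alerts.
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def ask_model(status_report: str, model: str = "gpt-oss:20b") -> str:
    """Ask the locally served model for a verdict on the status report."""
    payload = json.dumps({
        "model": model,
        "prompt": f"Agent status:\n{status_report}\nReply OK or ALERT: <reason>.",
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def heartbeat(get_status, verdict_fn=ask_model, interval_s=600, max_ticks=None):
    """Poll periodically; collect alerts. Runs forever unless max_ticks is set."""
    alerts, ticks = [], 0
    while max_ticks is None or ticks < max_ticks:
        verdict = verdict_fn(get_status())
        if not verdict.strip().startswith("OK"):
            alerts.append(verdict)
        ticks += 1
        time.sleep(interval_s)
    return alerts
```

In real use you would pass a `get_status` function that gathers logs or task state, and act on the alerts instead of just collecting them.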

    So here’s the plan. We are going to test running OpenClaw both with the (cheapest) MiniMax subscription and with the locally running (2 Sparks) mixed-precision MiniMax M2.5. We will use GLM as a backup. We will likely add a $20 GPT subscription as oversight and as a heartbeat backup. And we will also use what NVIDIA gives us for our testing. It is worth mentioning that Mistral’s API is free for a long while, as long as you do not make multiple concurrent calls and do not make more than one API call a second. And Mistral models are very good at writing in general (in case you need an author for some tasks). I think we are in excellent shape to start building this thing, and honestly, you should be too. Between the Mistral and NVIDIA APIs, you can probably start building with zero investment in LLM tokens.
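    If you do lean on Mistral’s free tier, the one-call-per-second, no-concurrency constraint is easy to enforce client-side. A minimal sketch (the wrapper and its names are mine, and the limit figures are the ones quoted above, so double-check them against Mistral’s current terms):

```python
# Client-side throttle sketch for a free API tier that allows at most
# one request per second and no concurrent calls (hypothetical wrapper).
import threading
import time

class OneCallPerSecond:
    """Serialize calls through a lock and enforce a minimum gap between them."""
    def __init__(self, min_gap_s: float = 1.0):
        self.min_gap_s = min_gap_s
        self._lock = threading.Lock()  # the lock forbids concurrent calls
        self._last = None              # monotonic timestamp of the last call

    def call(self, fn, *args, **kwargs):
        with self._lock:
            if self._last is not None:
                wait = self.min_gap_s - (time.monotonic() - self._last)
                if wait > 0:
                    time.sleep(wait)  # pace requests to one per min_gap_s
            try:
                return fn(*args, **kwargs)
            finally:
                self._last = time.monotonic()

throttle = OneCallPerSecond()
# usage (hypothetical client): throttle.call(client.chat.complete, model=..., messages=...)
```

Routing every API call through `throttle.call` keeps even multiple agents within the limit, since the shared lock serializes them.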

  • Hardware – Where should I run my OpenClaw

    We will need a computer to run OpenClaw. Here are our options with the pros and cons.

    Your computer (DON’T!!) – And the advice is that you REALLY should not do this. Giving OpenClaw unrestricted access to your entire computer can quickly turn into a nightmare. So don’t do it. You’ll need a machine where you can better control what OpenClaw has access to and what it doesn’t. While OpenClaw will be at its most capable on your regular computer, the advice is to build up. Treat OpenClaw like an employee. Give it its own hardware. Give it its own accounts. Provide additional access through sharing, forwarding, etc., as you build trust and get to know its capabilities. So we are not doing this. (I am not even sure it would make sense. I am mostly a Chromebook user who has organized his life so he can pick up any Chromebook and just use it. I do have a few lying around at the in-laws’, the office, etc. I have no idea how OpenClaw could even run on a Chromebook. It could use the Linux container, of course, but I digress…)

    Mac Mini – This is the favorite. And as such, there seems to be a shortage, though I don’t think it has reached Hungary just yet. I could have a base model delivered tomorrow, even a bit under list price. But ever since my family’s iCloud accounts got jumbled up (a decade or so ago), I have been hating Macs hard. (They even mixed all my parents’ photos with mine in Google through some autosync. And my mother photographs like a stereotypical Japanese tourist.) Even though I use an iPad and an M2 Mac Mini (with a lot of storage to locally sync everything I have in various clouds) on my home office desk, I have been doing everything I can to avoid becoming any more of a Mac person than I already am.

    But the advantage of using an Apple Silicon Mac is undeniable. You could keep it at home, and home is the best place to access the Internet from. OpenClaw creator Peter Steinberger said (and yes, I listened to that 3+ hour interview, and you should too) that if you access any website from inside a data center, you are going to get more CAPTCHAs and more blocks than if you access it from a residential network. You even get more of them from your university network. I am sure you have noticed it too. The Internet is getting locked down, and the inevitable reaction to the proliferation of AI agents will be further lockdowns. And while most agents that can use your desktop click through CAPTCHAs like a human, my (Playwright) browser simulator on Linux today got trapped by a CAPTCHA and asked me to log in somewhere and do one part of the job myself while it waited. That’s not ideal when you expect your agents to work while you sleep (or aren’t looking).

    At the same time, most new tools are now invariably developed for Macs first. Whenever a new AI app comes out, or new features roll out to older apps, they are invariably available on Macs first. (It is just like how new apps always come to iPhones first, and maybe later there will be an Android version.) It is also very easy to think through what an AI agent could do on a computer: the same things you can do yourself. You want your agent to use the various AI tools and apps. If the agent lives on a Linux server, its computer use will be more limited, the apps it has access to will be more limited, and its browser use will be more curtailed. On a Mac, of course, you also have the terminal: Unix (BSD) is running in there, so you get the advantages of both a Unix machine and a Mac. Running OpenClaw on a Mac is the clear winner here.

    Other Apple Silicon? There’s, of course, the question: if a Mac, which Mac? Most people buy the M4 base model. One of the reasons I could not get myself to do this was that it is quite dated; an M5 is surely coming soon. Does it matter? No. But I hate paying full price for outdated hardware. So I guess I could go with an older model and buy a used one. But Macs hold their resale value well, and the M4 upgrade included 16GB of RAM by default, whereas that came at a premium on earlier generations. Looking at older-model prices, they sometimes made no sense; spec-for-spec, they were not even discounts. I tried deal hunting. Nothing made sense. Of course, you could be like my young Russian colleague (who could easily live on the smell of an oily rag) and find a well-used M1 MacBook Air with a broken screen, but honestly…?

    There’s also the option of spec-ing up. Macs are great at running LLMs locally. Unfortunately, the models you’d want to use in an OpenClaw setup need a LOT of hardware. Realistically, you can maybe run some routine tasks with the small models (the 20b-35b range), and that is it. For anything usable, like MiniMax M2.5, you will need at least 192GB of RAM (you are playing with fire under 256GB), and the Mac Studios that come with this much are pricey. Maybe one can make some lower-level tasks work with ~120b models (like Qwen 3.5 122b-a10b or Mistral 4 Small 123b), but you’d need at least 96GB of RAM for them. So the bottom line is that, new or used, we are looking at a higher-end Mac Studio before you can run anything worthwhile locally. If you max out the Mac Mini’s RAM (and now we are talking $2000 minimum, probably closer to $2500 to make sense), you are still only at 64GB; for MacBooks and Mac Studios, we are talking substantially more. With 64GB, you can run Qwen 3 Next 80b at best. That’s probably not good enough for most things most of the time, and while Qwen 3.5 is out already, it has no model in this size range. Most models are either over 100b, which is too big for 64GB, or under 35b, which fit in 32GB but won’t be good for much in the world of OpenClaw. Maybe the M5 Pro Mac Mini will have 96GB of RAM. The M4 Pro MacBooks topped out at 48GB, while the Mac Mini went up to 64GB; now the M5 Pro MacBook tops out at 64GB, so maybe the M5 Pro Mac Mini will have more. But I doubt the price will be below $2000. At minimum, I am gonna wait for the M5 Mac Minis.

    Whatever hardware – Alex Finn recommends that you run your OpenClaw locally on whatever hardware you have lying around unused. Your last machine is fine. Unfortunately, my last four laptops were all Chromebooks. I have a 2018-ish base-model Surface, which I got when COVID lockdowns drove us to use new tools that weren’t immediately available on Chromebooks. But that thing is useless at this point. I also have a 2015 MacBook that can’t even run a current version of Chrome (or anything else, for that matter). But your mileage may vary. A Mac that still gets updates or a functional Windows device (if one ever existed – I certainly don’t think so) may be your ticket.

    Virtual Private Server – On YouTube, everybody (and their dog, too) pushes Hostinger (and Alex Finn swears none of them actually use it). I guess it is not a bad solution if you are comfortable running your own server and highly proficient in Linux. While looking for a solution, I searched for cheap VPS options (because I am often too cheap to pay even a few bucks a month for Hostinger) and found the Always Free tier at Oracle Cloud Infrastructure, with impressively powerful Oracle Ampere servers. You can run Ubuntu on these and get 4 CPUs, 24GB of RAM, and 200GB of storage for absolutely free. With Ollama, you can even run models at an impressive speed without a GPU. My testing suggests that I should just use a locally run GPT-OSS-20b as the OpenClaw heartbeat. There are a few catches. You need a credit card to create an account. (Debit cards, temp cards, etc., need not apply.) It is a serious pain in the ass to set up (though a good AI chatbot will walk you through it). Only one account is allowed per person (and if they catch you cheating, they may delete your machine). The always-free plan has very few available servers at each location. (And you cannot change locations once you have your account.) I literally had to have Claude Code write a script that tried to grab one every 10-15 minutes, and it still took over a week. (Yes, that script ran nonstop.) Once you get it, you REALLY have to use it, otherwise they take it away from you. (This is not gonna be a problem if you are running OpenClaw on it, but be quick to set it up.) You can upgrade your account to pay-as-you-go; running this server will still be free (and I hope there are no surprise charges), but it is easier to get a slot. (This blog runs on such a server. And my OpenClaw will run on the one “my wife set up”.) My advice is maximum patience if you go this route. If you are in the US, set up in Chicago: it has the most machines, with multiple sites at one location (us-chicago-1, us-chicago-2, and us-chicago-3). If you want to do this in the EU, go with Frankfurt. Same story. And if you are outside the US or the EU, set one up near you; those are usually less oversubscribed. If there are several near you, do some research on which location is bigger, with multiple sites.
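    The retry script itself is nothing fancy. Here is a sketch of the idea, where `launch_attempt` is a placeholder for your own OCI CLI or SDK call that returns True once an instance is actually created:

```python
# Sketch of a "keep trying until a free-tier slot opens" loop.
# launch_attempt is a hypothetical callable you supply; it should try to
# create the instance (e.g. via the oci CLI or SDK) and return True on success.
import random
import time

def retry_until_capacity(launch_attempt, min_wait_s=600, max_wait_s=900,
                         max_attempts=None):
    """Call launch_attempt every 10-15 minutes (jittered, so requests
    don't land on a fixed schedule) until it succeeds.
    Returns the number of attempts on success, None if max_attempts ran out."""
    attempts = 0
    while max_attempts is None or attempts < max_attempts:
        attempts += 1
        if launch_attempt():
            return attempts
        time.sleep(random.uniform(min_wait_s, max_wait_s))
    return None
```

Run it under `nohup` or a tmux session so it survives logouts; on success, have `launch_attempt` also write the new instance's details somewhere so you notice.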

    In sum, I have my Oracle Virtual Private Server, and I am a loyal 25-year Linux user who is very comfortable with Linux systems. (Windows Millennium Edition, I thank you for pushing me away from Microsoft forever.) Now, with the help of Claude Code, I can even administer my own web server. (Let’s hope my incompetence here will not show as some attacker takes this website down.) I will give this approach a month or two. In the meantime, we may get M5 Mac Minis, and my mother should be getting a new laptop, freeing up an M1 MacBook Air in the family. This is my plan. Take all these considerations and make the wisest choice for yourself.

  • I guess we are doing this.

    A little backstory. Over Christmas break, like so many people, I was given some extra Claude Code usage by Anthropic. So I started playing with it. And after a few days of use, I was impressed and wanted to build something difficult.

    I was always fascinated by social simulations. (Little-known fact: my MA thesis (2004, Political Science; I also have one in Survey Research from 2010) was an agent-based model simulating the societal outcomes of various game-theoretic cooperation strategies.) The idea of having LLMs play the agents in simulations seemed intriguing. They can surely act like people. But all the examples I had come across were quite constrained, with a fixed number of players, very rigid worlds, and very old-school turn-based timing structures. I wanted something a little more free-flowing and a little more flexible. By this time, I had been following The Nerdy Novelist on YouTube, learning about the rhythm of LLM-assisted fiction writing, and I wondered if Sudowrite’s “beats” might be a better way to handle turns in a social simulation more fluidly. LLMs can invent dialogue. All we need to know is who is where, who they are with, and what they are doing. At the back of my mind, of course, were social-scientific applications: simulations of societal disruptions like a natural disaster, the death of a society member, etc. But the idea could easily have been an art project too, where we just let agents live their lives and see if any interesting story emerges.

    At the time, my AI server wasn’t running anything, so this also seemed like a good technical exercise. We have three NVIDIA Quadro RTX 8000 GPUs. I could definitely do this well with a smaller model (Ministral 3 14b) that fits into one of them (even unquantized, with a decent context window), and make sure the instances run in parallel, simulating beats that take place simultaneously. So, early January, right around the release of Claude Code 2.1, I started building.

    It was insane. I only had the 17 EUR version of Claude, so for every 45 minutes of use, I waited 4 hours. Still, a few days later, I was simulating space-fantasy stories, urban interactions, and small-village life. I realized I needed to stop this vanity project and start building something for my current research. I had one use case where I needed to do quite slow but repetitive AI synthetic data simulation tasks. I conducted many experiments comparing the performance of various models, quantizations, and simulation approaches. After a quick upgrade to the (cheaper) Claude Max account, the next thing I knew, I was building general tools that went beyond my immediate use cases. (I am still on the cheaper accounts and have not hit a limit yet, though today I got to 99% building this blog.) Soon, I had more data than I knew what to do with, and I still wanted to run more and more case studies. The world felt like it had changed. And I had no idea how much.

    Still in January, this was around the time OpenClaw (or ClawdBot, as it was then called – definitely not to be confused with Claude or Claude Code) exploded. The little open-source AI assistant that stitched together a few AI functionalities and produced magic. (Dangerous magic, but magic nevertheless.) And I wondered if a ClawdBot / Moltbot / OpenClaw could do what Claude Code did for me under supervision, but do it autonomously: handle data generation, run analyses, and organize and visualize the results based on a few already-working examples. If it could, that would be amazing.

    Around this time, I had to make a cross-continent move. And I quickly grew wary of all the OpenClaw horror stories: API bills running into four digits overnight, Anthropic accounts being blocked for terms-of-service violations, and OpenClaw randomly deleting its “watcher’s” emails or accessing their APIs and credit cards without permission. I figured it was better to wait, watch, and learn a little: understand the problems and see if OpenClaw’s security improves. I have done this, and I feel now that it is time. A few days ago, I searched for academic (social-scientific) applications of OpenClaw. I found nothing. So I decided to document the journey.

    The question is, can OpenClaw also become a useful research (and academic admin) assistant? Let’s find out. Join me for the journey.