I wrote this post just before OpenClaw went viral - I’ve made some edits, but like everything in the AI era, a lot of my initial thinking has changed since I first wrote it.
Over the MLK weekend I discovered OpenClaw (Clawdbot at the time) through Theo Browne’s YouTube channel. And while I didn’t fully comprehend what I was looking at, I saw plenty of potential.
I have the privilege of working at a very pro-AI company—but when work mode is ship ship ship, there’s no room to play. I wanted a greenfield project to slow down, synthesize learnings, and just try things.
So why OpenClaw? Early community buzz from folks I trust. It’s aligned with my goals. I could try my hand at agent orchestration. And if nothing else, it’s a blank canvas for creativity.
I’ve had two major learnings so far.
I had a moment of pause on day one.
After setting up OpenClaw on my home server, I rushed out of my apartment to catch a tennis reservation. At home, I was focused on the technical setup—building out capabilities with Antigravity and Claude Opus 4.5. But on the subway, I simply had a conversation with it. And naturally it was in NYC bursts - sending and receiving messages as the train alternated between service at stations and no service in tunnels. Questions like: What would you like to know about me? or What never fails to make you laugh? as if I was introducing myself to a real person.
This was the first time I felt the wall between model and human blur.
You have to understand, I mostly use AI in the confines of Cursor or within the ChatGPT UI. I’m used to querying models with intent, never having a personal conversation with open-ended questions. But here I was, texting OpenClaw, sharing details about my life with a model that I acutely knew to be taking notes. When I got home after tennis, I immediately hopped on my server and checked what OpenClaw decided was important to note down about me. I was keen to understand how it chose to remember me.
I know there are stories of people marrying their AI or building models that act as friends, but it wasn’t until this project that I tried to have a real conversation with a model. It felt uncanny, slightly disconcerting, but also fascinating. So I kept going. And somewhere between sharing details about my life and working, something shifted. It stopped feeling like work.
I started to have a novel experience about a week into development of my OpenClaw assistant. It distinctly felt like I was playing a video game. The models + harnesses had gotten good enough that the pace of progression was on the same timescale as grinding in a Pokémon game. Twenty minutes planning and executing? I’ve leveled up my understanding of effective prompting and context management. An hour of iterating and debugging a functional script? My assistant acquired a new skill to manage my calendar. I felt like I was playing an RPG, but the rewards weren’t in-game - they were impacting my life, removing digital friction from it. And I’d only scratched the surface of what was possible here.
The first nontrivial skill I developed for OpenClaw was a reservation script for a local tennis court. After a few hours of late-night deep work, I got there with a few lessons under my belt.
There’s a lot more I can do here. But I’m not sure what I want to work on yet. It’s crazy that my creativity is a major bottleneck here, not the design and implementation work. Luckily or unluckily, there’s a lot of community exploration, including absurdities that give me existential dread.
But one thing’s for sure. Because this court reservation skill runs on a cron schedule, OpenClaw is no longer just reactive to my messages - it’s proactive about affecting my life.
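To make the “proactive” part concrete: a skill like this can be wired up with an ordinary crontab entry so it fires on its own, no message from me required. The paths, script name, and time below are hypothetical - just a sketch of the pattern, not my actual setup:

```shell
# Hypothetical crontab entry: run the court reservation skill
# every morning at 7:00, when the booking window opens.
# Output is appended to a log so the assistant (and I) can review it later.
0 7 * * * /usr/bin/python3 /home/me/openclaw/skills/reserve_tennis.py >> /home/me/openclaw/logs/reserve.log 2>&1
```

Once a skill is on a schedule like this, the assistant stops waiting for prompts and starts acting on the calendar’s clock instead of mine.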
How can I describe this? Building has never been more fun, more creative, or faster. And the results are no longer confined to a single execution. But I’m still processing what that means, much less how I feel about it.