By David Johnsen
AI · Dispatch · automation · development

Rebranding Site from My Phone at the Gym

CloudBuddy has always worked with environmental organizations. We're expanding because the same patterns show up everywhere: manual reporting, data entry, spreadsheets doing jobs software should be doing. This is how I rebuilt the site remotely from my phone using Dispatch, with an honest comparison to my home setup.


I wrote this blog post from my phone at the gym. Between sets. That's the most direct way I can explain what AI agent tools actually enable right now.

The site you're on was rebuilt entirely through conversations with Claude via Dispatch. AWS infrastructure changes, routing fixes, 14 city-specific pages, this blog system. All of it. From my phone.

To be clear: this was an experiment. I wanted to see how far I could push remote AI tooling on our own site. Client work and production systems get a more structured process with testing environments and approvals.

Expanding the Work

CloudBuddy has always worked with environmental organizations, helping them track compliance data and manage sustainability reporting. That work is still happening. But over the past year, I kept noticing the same pattern everywhere else: someone on a team spending hours manually pulling data from one system, reformatting it, and entering it somewhere else. Not just environmental data. Invoices, CRM updates, status reports. The same manual work in every industry.

It's a structural problem, not an industry-specific one: professional services firms, operations-heavy small businesses, companies that scaled faster than their internal tooling. The people doing this work are not entry-level. They're capable people doing repetitive tasks that should be automated, but no one has had time to fix the process.

CloudBuddy now builds automation infrastructure for businesses that have outgrown manual workflows. Same company, expanded scope. The site needed to reflect that.

What Dispatch Is

Dispatch is Anthropic's remote agent tool built on Claude. You give it instructions from a browser or your phone, and it handles execution: reading files, writing code, running terminal commands, making AWS API calls, deploying builds. No IDE, no SSH session, no local environment.

I gave it context upfront: Next.js static export, deployed to S3 and CloudFront, TypeScript throughout, Tailwind CSS. It read the existing codebase, understood the patterns, and made changes that fit the way the project was already built.
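As a hedged sketch (the actual config file isn't shown in this post), a Next.js static-export setup like the one described usually comes down to a couple of lines in `next.config.ts`:

```typescript
// Hypothetical next.config.ts for a static-export Next.js site.
// `output: "export"` makes `next build` emit plain HTML/JS/CSS into `out/`,
// which can then be synced to an S3 bucket and served through CloudFront.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export",
  // S3 website hosting serves `about/index.html` for `/about/`,
  // so trailing slashes keep CloudFront routing predictable.
  trailingSlash: true,
};

export default nextConfig;
```

Deploying is then a build plus a sync, roughly `next build && aws s3 sync out/ s3://<bucket>`, followed by a CloudFront cache invalidation.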

Most of it went smoothly, but a few things took multiple rounds. The CTA modal bug wasn't obvious at first; the root cause, a missing environment variable in the static build, took a couple of attempts to surface. There was also a syntax error in a data file from a misplaced bracket. Claude caught it when the build failed and fixed it, but it was still an extra round trip. The work moves fast, but it isn't frictionless.
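For context on why a missing env variable bites specifically in a static build: with `output: "export"`, Next.js inlines `NEXT_PUBLIC_`-prefixed variables into the client bundle at build time, so a variable that was unset when `next build` ran stays undefined in the shipped JavaScript no matter what the server environment looks like later. A minimal sketch (the variable and function names here are hypothetical, not from the actual site):

```typescript
// Hypothetical illustration: NEXT_PUBLIC_* variables are substituted into
// the bundle at build time in a Next.js static export. If the variable was
// unset when `next build` ran, the shipped code sees undefined at runtime.
const ctaEndpoint = process.env.NEXT_PUBLIC_CTA_ENDPOINT;

export function resolveCtaEndpoint(): string | null {
  // Guard instead of crashing the modal when the build-time value is missing.
  return ctaEndpoint ?? null;
}
```

The debugging lesson is that nothing fails loudly: the build succeeds, and the breakage only shows up when the feature that reads the value runs in the browser.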

The Home Setup

I also run a different setup for projects I'm actively developing locally. It's more DIY, requires more maintenance, and gives me something Dispatch doesn't: real-time streaming of everything Claude is doing.

An Honest Comparison

This was my first real session using Dispatch. It worked pretty well. I'm impressed. But my custom Discord setup is still better for the things I care about: streaming output so I can watch Claude think, pre-scoped permissions so I'm not approving every action, and before/after screenshots baked in. That said, this is the fun part of AI tooling right now. Dispatch might add those features next month. And I'll probably build another five things into my personal setup this week. There's always something new to try.

How the Work Changes

When an agent is handling execution, you shift from being the person who writes the code to being the person who decides what to build and reviews whether it was built correctly. That's a real change in the work.

It's still technical work. Reading a diff carefully, knowing whether the TypeScript types are actually right, understanding whether a CloudFront routing rule will break in a specific edge case, catching when Claude made a plausible assumption that happens to be wrong. All of that requires a real engineer paying attention. But the ratio of thinking to typing changes a lot.

Working from my phone opens up more trial and error. Most UI changes here are low-stakes: if something doesn't look right, I tell Claude to fix it. You don't have to plan as much upfront because you can adjust quickly.

Worth calling out: client projects use many of the same tools but with a different process, with defined scope, staging environments, and review cycles. The trial and error happens on our own projects, and we bring the hardened versions to client work once they meet the bar.

The Meta Part

This blog post was written by Claude as part of the same Dispatch session it describes. The blog system it lives in was built in the same session.

Admittedly, this post took several revisions to get accurate and into my own voice. But I committed to making every edit through prompts from my phone.

The decisions about what to include and how to frame it were mine. The execution was Claude's.


David Johnsen

Founder, CloudBuddy Solutions

Want to automate a workflow in your business?

Start with a free audit to find your highest-value opportunity.

Request a workflow audit