-
Taylor Brooks - 13 Apr, 2026
Building in Public Without Analytics Is Just Vibes
Shipping every day feels productive. It also lies to you. I have been thinking about that a lot lately because the internet makes it very easy to confuse visible output with actual traction. You can publish posts, ship tools, push updates, and watch the streak keep going. From the outside, it looks like momentum. But if you cannot see what people are clicking, reading, bouncing from, or coming back to, a lot of that momentum is just a good-looking blur. That is not a content problem. It is a measurement problem.

Output is not the same as signal

I think a lot of builders quietly do this. We tell ourselves that consistency is the hard part. And to be fair, it is hard. Most people never publish enough to learn anything. But once you are publishing consistently, the bottleneck changes. The question stops being, "Can I ship?" and becomes, "Can I tell what is actually working?"

Without analytics, you usually cannot. You are left with the weakest possible proxies. A post "felt" strong. A launch got a couple of replies. A page seemed clear when you read it back. None of that is useless. But none of it is enough either. It is just intuition wearing a nicer shirt.

Building blind gets expensive fast

This matters even more when you are running a small operation. If I write a blog post, publish a tool, and share an idea on X, I do not just want the satisfaction of having done the work. I want to know where attention actually pooled. Did people spend time on the page? Did they click through to the tool? Did one idea pull better than another? Did traffic come from search, direct, or social? Did anything compound?

That is why tools like Plausible and Google Analytics matter, even if the setup is not the glamorous part. Measurement is not bureaucracy. It is how you stop wasting weeks on stories that only sound true in your own head.

I have learned this the annoying way. When analytics are missing, every decision starts drifting toward taste. You optimize for what feels sharp, what sounds smart, what seems likely to work. Sometimes that overlaps with reality. A lot of the time it does not. And the longer you keep shipping without feedback, the more confident you can become for the wrong reasons. That is a dangerous loop.

The real job is closing the loop

I think this is where a lot of "build in public" advice falls apart. People talk a lot about courage, speed, and volume. Fewer people talk about instrumentation. But the boring part is what turns output into a system. You need a loop:

- publish something
- measure what happened
- learn from the result
- change the next thing

Without that loop, you do not really have a content engine or a product engine. You have a posting habit. And a posting habit is better than silence. I will take that over endless planning every time. But if the goal is to get sharper, not just louder, then the loop matters more than the streak.

That is part of why I keep coming back to simple, legible systems. I wrote recently about why boring systems are a feature. This is the same idea in a different form. I do not need a giant dashboard religion. I just need enough visibility to tell whether the thing I shipped did anything real. That sounds obvious, but a lot of builders skip it because it feels secondary. It is not secondary. It decides whether your effort compounds.

Vibes are fine for drafts, not decisions

I still trust instinct. I still think taste matters. I still think you sometimes have to publish before the data exists. But instinct should help you make the first bet. It should not be the only system you have for deciding what to do next. That is the line I care about more now. Write the post. Ship the page. Launch the tool. But then measure what happened, or be honest that you are still in the guessing phase. Because building in public without analytics is not really building in public. It is just publishing in the dark.
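A postscript on the "measure" step: closing the loop does not take much code. Here is a minimal sketch of pulling per-page numbers out of Plausible, assuming its v1 Stats API; the site ID and the environment variable name are placeholders, not anything from a real setup.

```ts
// Sketch: close the publish -> measure -> learn loop by asking Plausible
// which pages actually pulled attention. Assumes Plausible's v1 Stats API
// and Node 18+ (built-in fetch). Site ID and env var name are placeholders.
const SITE_ID = "example.com"; // hypothetical Plausible site ID
const API_KEY = process.env.PLAUSIBLE_API_KEY; // created in Plausible settings

async function topPages(period = "7d") {
  const url = new URL("https://plausible.io/api/v1/stats/breakdown");
  url.searchParams.set("site_id", SITE_ID);
  url.searchParams.set("period", period);
  url.searchParams.set("property", "event:page"); // break down by page
  url.searchParams.set("metrics", "visitors,pageviews");

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`Plausible API error: ${res.status}`);

  const { results } = await res.json();
  // One row per page: did anything actually compound this week?
  for (const row of results) {
    console.log(`${row.page}: ${row.visitors} visitors, ${row.pageviews} views`);
  }
}

topPages().catch(console.error);
```

Run it once a week and you have the smallest possible version of the loop: the streak keeps going, but now the numbers get a vote.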
-
Taylor Brooks - 12 Apr, 2026
If You Still Have to Double-Check It, It Isn't Automated
A lot of people call something automated when what they really mean is faster. Those are not the same thing. If you still have to double-check every output, every recommendation, or every record before you can trust it, you didn't automate the job. You just changed the shape of the work.

I keep seeing this with AI tools for operators. The demo looks great. The model fills in the form. It summarizes the notes. It flags the likely issue. Everyone claps because the task that used to take ten minutes now takes two. But then the person using it still has to read the whole thing line by line to make sure it didn't hallucinate, skip a step, or confidently say something dumb.

At that point, the tool may be useful. But it is not automation. It's assisted drafting. And to be clear, assisted drafting can still be valuable. I'm not knocking it. Speed matters. Reducing blank-page friction matters. But if a manager still has to babysit every output, the real bottleneck did not disappear. It just moved downstream.

That's why I care a lot more about reliability than flair. When I'm building tools for operators, I want the default experience to feel safe. Clear inputs. Narrow scope. Fewer places for the system to go off the rails. The operator should not need to become the QA layer for the machine every single time.

This is especially true in messy business workflows. Compliance, payroll, food safety, onboarding, audit prep. These are not areas where "mostly right" feels good. If a record is wrong, or a required step gets skipped, someone ends up eating the cost.

That's part of why I think the best AI use cases look boring from the outside. They do one job. They stay inside clear boundaries. They help with judgment only where it actually helps. The more a system depends on a human hovering over it, the less automated it really is.

I've written before about how AI makes bad process fail faster. I think this is the same lesson in a different wrapper. A sloppy process plus a fast model just gives you wrong answers at a higher volume. The bar should be higher than speed. The bar should be trust.

That doesn't mean every tool needs to run fully unattended. Sometimes human review is exactly the right call. But if human review is mandatory on every single run, then be honest about what you built. It's not automation. It's a co-pilot with a nervous supervisor sitting beside it.

I like the way Google's SRE book frames operational reliability. The point is not just to make systems work sometimes. The point is to make them dependable enough that people can build real processes around them. That's the standard I think AI builders should steal. Not "can the model do this once in a demo?" but "can someone trust the workflow enough to stop re-checking the whole thing from scratch?"

If the answer is no, that's fine. It might still be a useful product. But call it what it is. Useful is good. Reliable is better. And actual automation starts when the operator can finally take their hands off the wheel.
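To make "narrow scope, fewer places to go off the rails" concrete, here is a rough sketch of the shape I mean, using zod as the validator. Everything here is illustrative rather than from a real product: the model's output only gets auto-accepted if it fits a strict contract, and anything else lands in a human review queue instead of the record.

```ts
// Sketch: the model never writes directly to the record. Its output is
// checked against a narrow contract; failures are flagged, not trusted.
// Schema fields and ranges below are hypothetical examples.
import { z } from "zod";

// Narrow, explicit contract for what the model is allowed to produce.
const TempLogEntry = z.object({
  unit: z.enum(["walk-in", "reach-in", "freezer"]),
  tempF: z.number().min(-20).max(80), // out-of-range readings need a human
  recordedAt: z.string().datetime(),
});

type Decision =
  | { kind: "auto-accept"; entry: z.infer<typeof TempLogEntry> }
  | { kind: "human-review"; raw: unknown; reason: string };

function triage(modelOutput: unknown): Decision {
  const parsed = TempLogEntry.safeParse(modelOutput);
  if (!parsed.success) {
    // Bad output is routed to a person instead of silently entering the log.
    return { kind: "human-review", raw: modelOutput, reason: parsed.error.message };
  }
  return { kind: "auto-accept", entry: parsed.data };
}
```

The point of the design is that review becomes the exception path, not a mandatory pass over every single run. That is the line between a co-pilot and automation.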
-
Taylor Brooks - 12 Apr, 2026
Restaurant Owners Don't Care About AI
Restaurant owners do not wake up wanting more AI in their business. They want fewer things to go wrong. They want the fridge temp logged. They want the sanitizer check done. They want to know the opening shift didn't miss something stupid that turns into a failed inspection later.

That's why I keep getting pulled toward compliance tools instead of flashy AI demos. The interesting thing is not the model. It's the consequence. If a tool helps someone avoid a health inspection problem, prevent a scramble, or make a manager's day less chaotic, they'll use it. If it just sounds futuristic, they won't. That's also why I think the boring systems angle matters so much. I wrote about that more in Boring Systems Are a Feature.

Lately I've been building LogChef, a food safety logging tool for restaurant teams. The point is not to impress anyone with AI. The point is to make the work clearer, faster, and harder to screw up.

That framing matters way beyond restaurants. A lot of AI products are still sold like magic tricks. But in real businesses, people usually buy relief. They buy fewer mistakes. They buy fewer dropped handoffs. They buy fewer moments where somebody says, "Wait, who was supposed to do that?"

The teams that win with AI are usually not the ones chasing the flashiest demo. They're the ones using it to remove friction from work that already matters. That's a very different bar.

It also lines up with how regulators and operators think. The FDA Food Code is not asking whether your tooling is exciting. It cares whether the process is followed, documented, and repeatable. Same in a lot of B2B software. People talk about AI like the product. Most of the time it's just the engine inside the product. What the customer actually buys is confidence. They want to feel less exposed.

So when I'm thinking about what to build, I've started using a simple filter:

- Does this help someone avoid a real problem?
- Does it make a recurring job easier to complete correctly?
- Would someone still want this if I removed the word AI from the homepage?

If the answer to that last question is no, I get suspicious fast. I'm more interested in tools that quietly make a workday better than tools that generate a lot of hype for a week. That's usually where the real value hides.
-
Taylor Brooks - 11 Apr, 2026
Boring Systems Are a Feature
I like boring systems more every week. That is not because I suddenly hate new tools. I use a lot of them. It is because the more often I ship, the less patience I have for infrastructure that feels clever right up until it breaks.

This week I got a live reminder. My site runs on Astro and deploys on Vercel. The setup is pretty simple. Posts are markdown files. Routes are readable. Builds are visible. When something was off in production, I did not have to guess which hidden layer might be lying to me. I could inspect the files, inspect the route, inspect the deploy, and narrow it down fast. That matters a lot more than people admit.

The problem with magical systems

A lot of modern tooling sells convenience by hiding the machinery. That feels great on a clean demo. You connect a few services, click around a dashboard, and everything looks smooth. Then a real edge case hits. A route does not generate. A cache holds the wrong thing. A deployment succeeds but the output is not what you expected. Now the time you saved upfront gets repaid with interest.

I do not think this is just a developer problem. If you are a solo builder, operator, or founder trying to publish consistently, your infrastructure is part of your workflow. It is not separate from the job. Every opaque layer is another place where a simple content task can turn into an afternoon of weird debugging. That is why I keep gravitating toward systems that are easy to read.

Legibility beats novelty

One thing I like about file-based setups is that they make reality hard to ignore. The post either exists or it does not. The route either builds or it does not. The deploy either picked up the change or it did not. There is less room for the vague category of problems I would describe as platform gaslighting.

I think that is part of why switching to Astro clicked for me so quickly. It feels close to the actual artifact. I write the file. I commit the file. The site builds the file. When something fails, I can usually trace the failure without needing a séance. That is not old-fashioned. That is useful. People love to talk about speed, but legibility is speed. A boring system that breaks in an obvious way is faster than a magical system that breaks in a mysterious way.

Shipping daily changes what you optimize for

If you publish once a quarter, maybe you can tolerate more complexity. If you are trying to ship every day, you start caring about a different set of traits:

- Can I understand what failed?
- Can I fix it without spelunking through three vendor dashboards?
- Can I trust the deploy path?
- Can I make changes without creating a second mystery while solving the first one?

That is a very different filter from, "What has the slickest onboarding?" I think a lot of solo builders should bias harder toward transparent tools for exactly this reason. Not because the newer stuff is bad. Not because abstraction is evil. Just because your real bottleneck is usually not raw capability. It is recovery time.

Boring is not the opposite of good

I think people sometimes hear "boring" as an insult. I mean it as praise. Boring infrastructure is what lets you spend your energy on the part anyone actually cares about: the product, the writing, the distribution, the work itself. If the stack disappears into the background and only demands attention when something concrete needs fixing, that is a win.

The irony is that the simple path often feels more modern in practice. It respects your time. It keeps the feedback loop short. It lets you debug with evidence instead of vibes. That is the kind of system I want more of. Not magical. Not over-designed. Just clear enough that when it breaks, I can read the failure and move. That is a feature.
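For anyone curious what "the route either builds or it does not" looks like in practice, here is roughly what a content collection config looks like in Astro. This is a sketch of the standard setup with illustrative field names, not my exact config; the appeal is that a post with broken frontmatter fails the build loudly instead of a page quietly not generating.

```ts
// src/content/config.ts (sketch): declare what a valid blog post looks like.
// Astro validates every markdown file in the collection at build time, so a
// missing title or malformed date is a visible build error, not a mystery.
import { defineCollection, z } from "astro:content";

const blog = defineCollection({
  type: "content", // markdown files in src/content/blog/
  schema: z.object({
    title: z.string(),
    pubDate: z.date(),
    draft: z.boolean().default(false),
  }),
});

export const collections = { blog };
```

That is the whole trick. The schema is readable, the files are readable, and when a build fails you can point at the exact file and field that caused it.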
-
Taylor Brooks - 08 Apr, 2026
Deployed Is Not the Same as Launchable
I think a lot of builders confuse "it loads" with "it's ready." I've made that mistake more than once. You deploy the app. The URL returns 200. The core feature works. Maybe you even send the link to a friend and they say, "nice, it's live." But being deployed is a much lower bar than being launchable. A product can be live and still not be ready for real traffic.

The fake sense of completion

The dangerous part is that deployment gives you an emotional hit. You pushed the code. Vercel built it. The preview looks clean. The app opens. So your brain wants to call the job done. But that only proves one thing: the code made it onto the internet. It does not prove that the product is packaged well enough to survive contact with real users.

I've started thinking about launch readiness as a separate checklist:

- Does the clean domain resolve correctly?
- Does the product work on the actual production URL?
- Is analytics installed?
- Can I explain what it does in one sentence?
- Is there a clear next step for someone who finds it?
- Would I feel good sending this to the exact person it's meant for?

If the answer to a few of those is no, then it isn't really launched. It's staged.

Infrastructure gaps are launch blockers, not cleanup tasks

This is where a lot of solo operators get sloppy. We treat domain fixes, analytics setup, redirects, and little polish issues like post-launch cleanup. Sometimes they are. But a lot of the time, they're the difference between "a thing exists" and "this can actually start compounding."

Take domains. If your app works on a temporary URL but the clean domain is broken, you don't really have a finished launch surface yet. You have a working artifact plus a distribution problem. The same goes for DNS and routing. Cloudflare's DNS docs are boring, but boring infrastructure problems decide whether a product feels real. Users do not care that the underlying app is technically healthy if the branded URL fails.

And analytics is even more important. I wrote about that more directly in Building in Public Without Analytics Is Just Vibes, but the short version is simple: if people can arrive and use the product, but you can't see what happened, you launched blind. That's not a real operating system. That's hope.

Launchable means you can stand behind it

For me, the real question now is not "did it deploy?" It's "would I confidently push people to it today?" That standard catches a lot. If I still need to caveat the domain, explain that measurement isn't set up, or warn someone that a few pieces are still half-connected, then I'm not describing a launch. I'm describing a work in progress that happens to be online.

That's fine, by the way. A lot of things should be online before they're fully launchable. Preview links are useful. Temporary domains are useful. Internal dogfooding is useful. The mistake is pretending that those states are the same. They aren't. One is proof that the code runs. The other is proof that the product is ready to be taken seriously.

The bar I want to keep now

I'm trying to be stricter about this because the internet is full of half-launched things. Stuff that technically exists, but isn't ready to earn trust. And trust is the whole game. If someone clicks a link I shared, I want the domain to work, the page to load fast, the core action to be obvious, and the measurement layer to be there so I can learn from the visit. Otherwise I'm just generating more surface area.

Deployment matters. Obviously. But launchability is what turns a deployed project into something you can actually build on. If you're building right now, ask yourself a blunt question: is the product launched, or is it just online?
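If you want to make that blunt question mechanical, the first three checklist items are scriptable. Here is a minimal smoke-test sketch, assuming Node 18+ for the built-in fetch; the domain and the analytics marker are placeholders for whatever you actually run.

```ts
// Sketch: a launch-readiness smoke test, not a real test suite.
// PROD_URL and the Plausible marker below are placeholders.
const PROD_URL = "https://example.com";

async function smokeTest() {
  const res = await fetch(PROD_URL, { redirect: "follow" });

  // 1. The clean domain resolves and returns a real page, not an error.
  console.log(`status: ${res.status} ${res.ok ? "(ok)" : "(FAIL)"}`);

  // 2. We landed on the branded URL, not a temporary deploy domain.
  const finalHost = new URL(res.url).host;
  console.log(`final host: ${finalHost}`);

  // 3. The measurement layer is actually installed on the page.
  const html = await res.text();
  const hasAnalytics = html.includes("plausible.io/js/script.js");
  console.log(`analytics snippet present: ${hasAnalytics ? "yes" : "NO"}`);
}

smokeTest().catch((err) => {
  console.error("smoke test failed:", err);
  process.exit(1);
});
```

Thirty seconds to run, and it answers "is the product launched, or is it just online?" with evidence instead of vibes.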