TL;DR there’s an app for that, and it’s 99% vibe coded.
“I could do this in a weekend”…
Our boys started attending an all-outdoors school in Golden Gate Park recently. It’s awesome! They love it, and quite frankly, I wish I had had that option as a kid!
The only downside is the commute: from our part of the Mission to Golden Gate Park takes ~30-45min each way with the cargo bike, so after many years of barely using the car, we’re now seriously looking at the reality of a daily car commute :(
The other day, Mom was dropping off one of the boys and was thrilled to find plenty of street parking! Only to find out in short order that it was plentiful only because it was street cleaning hours, and she got a ticket for being in the wrong place at the wrong time!
She mentioned over dinner how she’s been asking various tech-inclined people in our extended network to build an app that would let her look for parking more effectively by surfacing the relevant restrictions, and how she kept being told it would be easy because the data is all publicly available, yet no one ever actually took the time to do it.
Bespoke Software for the masses
This is a typical “weekend project”: the scope looks very straightforward, but the “activation energy” is surprisingly high, and the tail of bugfixes and polish always results in a multi-week overrun.
Which is why these barely ever get done: the reality of being a parent who wants to maintain a modicum of social life means time for such projects is extremely limited.
Or at least it was, until Claude Code entered the scene, and Opus 4.5 upped the ante!
As many people have realized for themselves, and started to shout from the metaphorical rooftops, we have entered a new era of highly accessible bespoke software, or bespochastic software as I sometimes like to call it.
When I brought this idea to Claude and defined the parameters of a prototype, he came back with a reasonable plan, and a “2-3 days” estimate. I happily told him to get to work, and about 7 minutes later the first-pass prototype was ready for me to test and give feedback on!
We were on! Suddenly this project went from a daydream to something that could realistically be completed in a matter of hours, even with my packed schedule!
Levels of abstraction
LLMs elicit a wide range of reactions. Some people see massive potential, others are disdainful of “stochastic parrots”. As for me, curious as always, I am still exploring their strengths and weaknesses, and while they exhibit very jagged understanding and skills, I definitely see a lot of value in them. In particular, working on this project showed me a very clear use case where, with adequate guidance, they can act as incredible force multipliers.
Once upon a time, software was written by tediously recording static bit patterns on tape.
Then people assigned textual mnemonics to those bit patterns and called it “assembly”. Assembly languages went from a straight mapping, to inferring memory offsets for jump labels and variable references, to, in many cases, even supporting various forms of compile-time macros.
Eventually higher-level languages emerged: C, Basic, Java, … each with a different take on how to best express human desires to make them intelligible to the underlying machine.
I would argue that at this point, the frontier models have become good enough that, when staying within a reasonable distance of the training distribution, they can essentially act as high-level compilers. Nowadays, most people, most of the time, do not choose to stare at the binary code generated by their compiler. Similarly, I can now build good web apps without so much as glancing at the implementation details of the frontend! This allows me to focus on the functional requirements, high-level design, and backend bits, which I am much better at, and also find much more enjoyable!
I particularly appreciate that, if I wanted to dive in and understand the ins and outs of the frontend, I believe Claude would do an excellent job of explaining it so I could learn. Likewise, I believe Claude would be quite capable of taking the opposite role, and acting as a backend dev for a frontend-inclined person looking to delegate work in the other direction. At this point, the human provides a goal and taste, and gets to pick which parts to focus on and own, and which to delegate to an imperfect but very competent coworker!
Behold Public Data
One of the cool things about recent LLMs is how effective they can be at basic research, cutting through a few iterative steps to get to the relevant information far quicker than my feeble hands could possibly manage with a keyboard.
It took Claude merely seconds to point us towards all the relevant data sources for this project: data.sfgov.org and services.sfmta.com offer a variety of datasets in GeoJSON format, which is perfect for overlaying on top of a map, and Claude quickly identified the relevant ones.
My original intent was to do everything client side, so I wouldn’t have to host anything more than a few static HTML/CSS/JS files on my VPS. However, the relevant dataset in its raw GeoJSON glory turned out to weigh upwards of 50MB, a rather prohibitive amount for a mobile connection!
I still didn’t want to have to build a complicated API backend, so I decided to write a Go tool to fetch and pre-process all that raw data for consumption by the frontend. The idea was to run this as an hourly job on a reasonably beefy node in my homelab, and automatically rsync the updated file to my VPS.
The pre-processing was primarily intended to trim down the raw data to just the relevant bits, and encode those more succinctly. This optimization process was iterative, as Claude and I figured out exactly what to keep and what to drop, and experimented with various ways to structure the data to balance encoded size, decoding speed, and support in most browsers without pulling in 3rd-party deps.
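To make the shape of that job concrete, here’s a minimal sketch of the fetch-and-trim step in Go. The endpoint URL, property names, and output format are all illustrative placeholders, not the ones the real tool uses:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// featureCollection mirrors only the parts of a GeoJSON FeatureCollection we care about.
type featureCollection struct {
	Features []struct {
		Geometry   json.RawMessage        `json:"geometry"`
		Properties map[string]interface{} `json:"properties"`
	} `json:"features"`
}

// slimFeature keeps the geometry plus the handful of schedule fields the frontend needs.
// The short JSON keys shave a little more off the transferred size.
type slimFeature struct {
	Geometry json.RawMessage `json:"g"`
	Day      interface{}     `json:"d"`
	From     interface{}     `json:"f"`
	To       interface{}     `json:"t"`
}

func main() {
	// Placeholder URL: the real datasets live on data.sfgov.org / services.sfmta.com.
	const src = "https://example.org/street-sweeping.geojson"

	resp, err := http.Get(src)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var fc featureCollection
	if err := json.NewDecoder(resp.Body).Decode(&fc); err != nil {
		panic(err)
	}

	// Keep only what the map overlay actually consumes; the property names here are hypothetical.
	slim := make([]slimFeature, 0, len(fc.Features))
	for _, f := range fc.Features {
		slim = append(slim, slimFeature{
			Geometry: f.Geometry,
			Day:      f.Properties["weekday"],
			From:     f.Properties["fromhour"],
			To:       f.Properties["tohour"],
		})
	}

	out, err := os.Create("parking.json")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := json.NewEncoder(out).Encode(slim); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d features\n", len(slim))
	// An hourly cron or systemd timer runs this, then rsyncs the output to the VPS.
}
```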
One of the most magical things about working with Claude on this project was how deftly it handled multiple rounds of schema and encoding changes for this pre-processed data. It would reliably identify all the places the data was used and update them correctly. This is particularly impressive considering that many language-specific automated refactoring tools still fall short of achieving that!
Bespoke, but for real this time
Claude’s initial prototype was based on Leaflet, which seemed like a solid choice given my requirements.
However, I quickly became frustrated by a few limitations we ran into, especially when trying to get my custom overlay layers to react smoothly to scrolling and panning, and when exploring further performance optimizations.
Emboldened by the success so far, I instructed Claude to reimplement the equivalent functionality in pure JS, with no dependencies. This turned out to be surprisingly quick and successful, and it unlocked excellent opportunities for follow-up optimizations.
In particular, it allowed me to drop lat/long coordinates entirely, and standardize both the UI and the pre-processed data on world coordinates, offset to one of the corners of my bounded SF map, and quantized to 32-bit integers. This transformation massively simplified a lot of the code (especially the zoom logic and its interaction with CSS transforms), and made everything significantly faster by avoiding repeated coordinate conversions all over the place.
The custom coordinate format was also key to compressing the pre-processed data: going from string representations of floating-point lat/long in excess of 10 characters each to 4 bytes per point was a huge win, and the fixed width allowed a further gain by compacting long sequences of strings in the JSON into single base64-encoded runs, reducing the overhead even more. As a cherry on top, the quantized coordinates are much more compressible than the original strings, leading to additional savings in the gzip/brotli compressed files.
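To illustrate the idea, here’s a rough sketch of that quantize-and-pack step in Go. The bounding box, the simple linear lat/long mapping (rather than proper map world coordinates), and the function names are all assumptions for illustration, not the app’s actual scheme:

```go
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"math"
)

// Rough bounding box around San Francisco (illustrative values).
const (
	minLat, maxLat = 37.70, 37.84
	minLng, maxLng = -122.52, -122.35
	scale          = math.MaxUint32 // spread the box across the full uint32 range
)

// quantize maps a lat/long inside the box to a pair of uint32 coordinates,
// offset so that one corner of the box becomes (0, 0).
func quantize(lat, lng float64) (x, y uint32) {
	x = uint32((lng - minLng) / (maxLng - minLng) * scale)
	y = uint32((maxLat - lat) / (maxLat - minLat) * scale) // y grows southward, like screen coordinates
	return
}

// packPoints encodes a polyline as one base64 run of little-endian uint32 pairs,
// replacing the long string-formatted lat/long floats in the JSON.
func packPoints(points [][2]float64) string {
	buf := make([]byte, 0, len(points)*8)
	for _, p := range points {
		x, y := quantize(p[0], p[1])
		buf = binary.LittleEndian.AppendUint32(buf, x)
		buf = binary.LittleEndian.AppendUint32(buf, y)
	}
	return base64.StdEncoding.EncodeToString(buf)
}

func main() {
	segment := [][2]float64{{37.7599, -122.4148}, {37.7601, -122.4150}}
	fmt.Println(packPoints(segment)) // 8 bytes of payload per point instead of 20+ characters of strings
}
```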
Going further down the rabbit hole, here’s a sample of what Claude allowed me to do, all in the span of 2-3 evenings:
- Move from typical raster tiles for the base map to a trimmed-down vector outline of geography and major streets. Thus a gazillion image fetches while scrolling and panning became a single fetch of compressed JSON on first load.
- Support both a canvas backend and a WebGL one for rendering, both with excellent performance
- Optimize scroll/pan with smooth CSS transforms to avoid aggressive re-rendering
- Assess which of the source data files support conditional fetches with ETags, and rework the backend job that periodically refreshes the pre-processed data to avoid redundant work when the inputs are unchanged (see the first sketch after this list).
- Investigate surprising features of the raw data and work out how to properly handle right/left side of streets, regular day vs holiday schedules, …
- Explore various data sources for the coastline polygon, ultimately landing on OpenStreetMap, work out how to assemble a closed polygon from the jumble of segments in the raw OSM data, and automatically simplify it to reduce transfer size and rendering complexity (see the second sketch after this list).
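The conditional-fetch part of the refresh job boils down to a standard If-None-Match request. Here’s a minimal sketch in Go, with a placeholder URL and a naive file-based ETag store standing in for whatever the real job persists between runs:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchIfChanged issues a conditional GET: if the server still reports the same ETag,
// it answers 304 Not Modified and we can skip the download and re-processing entirely.
func fetchIfChanged(url, prevETag string) (body []byte, etag string, changed bool, err error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, "", false, err
	}
	if prevETag != "" {
		req.Header.Set("If-None-Match", prevETag)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, "", false, err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusNotModified:
		return nil, prevETag, false, nil // nothing to do this run
	case http.StatusOK:
		body, err = io.ReadAll(resp.Body)
		return body, resp.Header.Get("ETag"), true, err
	default:
		return nil, "", false, fmt.Errorf("unexpected status %s", resp.Status)
	}
}

func main() {
	// The previous ETag would normally be persisted between runs; a plain file will do here.
	prev, _ := os.ReadFile("last-etag.txt")

	body, etag, changed, err := fetchIfChanged("https://example.org/street-sweeping.geojson", string(prev))
	if err != nil {
		panic(err)
	}
	if !changed {
		fmt.Println("source unchanged, skipping pre-processing")
		return
	}
	_ = os.WriteFile("last-etag.txt", []byte(etag), 0o644)
	fmt.Printf("fetched %d bytes, re-running pre-processing\n", len(body))
}
```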
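And here’s a sketch of the coastline-stitching idea: chain OSM ways end-to-end until each chain closes on itself. It assumes segment endpoints compare exactly equal (easy once coordinates are quantized) and ignores segments that would need to be reversed, which a real implementation would also have to handle:

```go
package main

import "fmt"

type point struct{ X, Y int64 } // quantized coords make exact endpoint matches safe

// assembleRings chains segments end-to-end until each chain closes on itself.
func assembleRings(segments [][]point) [][]point {
	// Index segments by their starting point so we can extend a ring quickly.
	byStart := map[point][]int{}
	used := make([]bool, len(segments))
	for i, s := range segments {
		byStart[s[0]] = append(byStart[s[0]], i)
	}

	var rings [][]point
	for i, seg := range segments {
		if used[i] {
			continue
		}
		used[i] = true
		ring := append([]point(nil), seg...)

		// Keep appending segments whose start matches the current ring end.
		for ring[0] != ring[len(ring)-1] {
			next := -1
			for _, j := range byStart[ring[len(ring)-1]] {
				if !used[j] {
					next = j
					break
				}
			}
			if next == -1 {
				break // open chain: leave it unclosed (or log it for inspection)
			}
			used[next] = true
			ring = append(ring, segments[next][1:]...) // drop the duplicated joint point
		}
		rings = append(rings, ring)
	}
	return rings
}

func main() {
	// Two segments that together form a closed triangle.
	segs := [][]point{
		{{0, 0}, {10, 0}, {10, 10}},
		{{10, 10}, {0, 0}},
	}
	fmt.Println(len(assembleRings(segs))) // 1
}
```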
Results

Static Frontend:
- HTML: 1.5 KB
- JS: 111 KB (uncompressed)
- CSS: 27 KB (uncompressed)
- vector basemap: 1.5 MB uncompressed / ~800 KB compressed, with ETag-based caching on the client side
- pre-processed parking data: 5.1 MB uncompressed / ~800 KB compressed, with ETag-based caching on the client side
Backend:
- hourly job to refresh basemap and pre-processed data
- takes less than 20s to pre-process data from raw sources on cold start (mostly limited by download size/speed)
- most runs are a no-op completing in ~1-5s when the raw data is unchanged (dominated by the slow HTTP endpoints used to check for changes)
Or go peruse the source (NB: deploy scripts are specific to my peculiar homelab setup…)