
February 8, 2026

People Are Actually Using This

When I posted SlugMeter, the honest expectation was that maybe 10 people would look at it, say "huh, cool," and move on. That is not what happened. The post got way more attention than I anticipated, and more interestingly, people started doing things with the data that I hadn't thought of.

This is a quick rundown of what's changed since launch.

Someone on Reddit built a better map than mine

A Reddit user named camojorts took the citation data, ran it through Folium, and produced a hex-binned density map on a log₁₀ scale with layer toggles for each weekday. It covers 2025 through February 2026. Hover over any hexagon and you'll see the citation count for that area.
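I don't have camojorts's code, but the core idea of hex-binning on a log₁₀ scale can be sketched in plain Python: snap each (lat, lng) point to the nearest hexagon center using axial coordinates, count points per cell, then take log₁₀ of the counts so a handful of monster hexagons don't wash out everything else. The `size` parameter and the flat-plane treatment of degrees are simplifying assumptions, not what the real map does.

```python
import math
from collections import Counter

def hex_cell(lat, lng, size=0.005):
    """Snap a point to a pointy-top hexagonal grid cell (axial coords).
    Treats degrees as planar distances, which is a rough approximation."""
    q = (math.sqrt(3) / 3 * lng - lat / 3) / size
    r = (2 / 3 * lat) / size
    # Cube rounding: round each cube coordinate, then fix the one with
    # the largest rounding error so that x + y + z == 0 still holds.
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return (int(rx), int(rz))

def hex_density(points, size=0.005):
    """Count points per hex cell, return log10 intensity per cell."""
    counts = Counter(hex_cell(lat, lng, size) for lat, lng in points)
    # log10 compresses the heavy-tailed counts into a usable color scale
    return {cell: math.log10(n) for cell, n in counts.items()}
```

From there it's a matter of drawing each cell as a colored polygon, with one layer per weekday, which is what Folium's `Polygon`, `FeatureGroup`, and `LayerControl` are for.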

It's genuinely better than any heatmap I had on the site. I embedded it directly as a new Community tab on the Insights page. The idea is that as more people build things with the data, they can go there. This is kind of the whole point of making public data accessible. It stops being one person's project and becomes something anyone can build on top of.

Anomaly detection

I added an Anomalies tab that surfaces statistical outliers from the dataset. There are four types of story cards:

  • Volume spikes: days where a specific device wrote significantly more tickets than its average (z-score based)
  • Meter traps: individual meters that generate an absurd number of citations per day, every day
  • Record days: the single highest-citation days in the dataset
  • Peak hours: hours where citation volume concentrates into a narrow window

Nothing here is subjective. It's just math on the raw data. But it surfaces patterns that you'd never notice scrolling through 218,000 rows.

Bulk data export

All 218,000+ citations are now downloadable as a CSV. No API key, no pagination, just a file. This is what enabled camojorts to build the hex map in the first place. If the data is public anyway, there's no reason to make people scrape it from AIMS themselves. One person already did that part.

Things that were quietly broken

The "View on Map" button on street rankings was broken in three separate ways at once. The link pointed to the wrong route. It passed the street name where latitude should go. And the map didn't read URL parameters at all. Three layers of failure stacked on top of each other, which is probably why nobody reported it. It never worked, so there was no regression to notice.

The risk score on every street was also showing 100/100. The formula was (citations / 50) * 100, capped at 100. Any street with 50+ citations maxed out. Since the page defaults to sorting by most citations, and the #1 street has 13,671, every street you'd actually look at was pegged at 100. Replaced it with a percentile rank: "this street is riskier than X% of all streets." A number that actually means something.
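The fix is small enough to show in full. This is a sketch of the percentile-rank idea rather than the site's exact code, and the street names are made up, but the logic is the whole trick: rank each street against the distribution instead of against an arbitrary cap.

```python
def percentile_rank(citations_by_street, street):
    """Percent of streets with strictly fewer citations than `street`.

    Unlike the old (citations / 50) * 100 formula, this can't saturate:
    the #1 street scores high because everything else scores lower,
    not because it crossed a fixed threshold.
    """
    target = citations_by_street[street]
    below = sum(1 for c in citations_by_street.values() if c < target)
    return round(100 * below / len(citations_by_street))
```

With four streets at 13671, 500, 50, and 5 citations, the old formula scored the top three all at 100; the percentile rank spreads them out to 75, 50, and 25.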

That's it for now. If you build something with the data, send it my way and I'll add it to the Community tab.