OpenPlan evidence catalog

A transparent record of what OpenPlan has actually run — caveats intact.

This is not a product tour and not a forecasting claim. It is a transparency-first look at one real screening run (Nevada County, 2026-03-24), with the same validation metrics, caveats, and prototype-only gate the platform enforces internally.

Current status

Internal prototype only

Not production-ready forecasting. No outward modeling claims are made from this run.

What is shown

One live run, verbatim

Five-station Caltrans validation, facility ranking, APE distribution, and screening gate pulled from the artifact.

AI disclosure

Drafting + QA only

AI accelerates drafting, data cleaning, and QA. Client-critical conclusions require qualified human review.

Screening runtime vs. Caltrans 2023 counts

One AequilibraE screening run against a five-station Caltrans priority-count subset on the Grass Valley corridor. What we ran, what it matched, and where it diverged.

Run context

nevada-county-runtime-norenumber-freeze-20260324

Engine: AequilibraE screening runtime. Counts source: Caltrans 2023 priority counts (five-station subset). Artifact generated 2026-03-24T19:42:28Z.
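
Scripting against runs like this one starts from the artifact's run metadata. The record below is a hypothetical sketch only: the field names are our illustration, not the actual OpenPlan artifact schema, and the values are simply copied from this page.

```python
# Hypothetical run-metadata record; field names are illustrative only,
# not the real OpenPlan artifact schema. Values are copied from this page.
run_context = {
    "run_id": "nevada-county-runtime-norenumber-freeze-20260324",
    "engine": "AequilibraE screening runtime",
    "counts_source": "Caltrans 2023 priority counts (five-station subset)",
    "generated_at": "2026-03-24T19:42:28Z",
}
```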

Validation metrics

Absolute percent error + facility ranking

Stations total: 5
Stations matched: 5 of 5
Median APE: 27.4%
Mean APE: 68.75%
Min APE: 4.10%
Max APE: 237.62% (above the 50% critical-facility threshold; this disqualifies the run from outward modeling claims)
Spearman ρ (facility ranking): 0.40

Facility ranking (observed vs. modeled)

Five Caltrans 2023 priority stations

Station | Observed | Modeled daily PCE | Obs rank | Mod rank
SR 20 at Jct Rte 49 | 45,500 | 73,666 | 1 | 1
SR 20 at Brunswick Rd | 35,500 | 30,975 | 2 | 3
SR 49 at South Grass Valley | 26,000 | 27,067 | 3 | 4
SR 20 at Penn Valley Dr | 17,500 | 12,705 | 4 | 5
SR 174 at Brunswick Rd | 10,300 | 34,775 | 5 | 2
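
The headline metrics in the previous section follow directly from this five-station table. The sketch below reproduces the APE distribution and the Spearman ρ of 0.40 using only the numbers shown above; the variable and function names are ours, not the platform's, and this is an illustration rather than the runtime's actual validation code.

```python
from statistics import mean, median

# Observed vs. modeled daily volumes for the five Caltrans 2023 priority
# stations, copied from the table above.
stations = {
    "SR 20 at Jct Rte 49":         (45_500, 73_666),
    "SR 20 at Brunswick Rd":       (35_500, 30_975),
    "SR 49 at South Grass Valley": (26_000, 27_067),
    "SR 20 at Penn Valley Dr":     (17_500, 12_705),
    "SR 174 at Brunswick Rd":      (10_300, 34_775),
}

# Absolute percent error per station: |modeled - observed| / observed * 100.
ape = {name: abs(mod - obs) / obs * 100 for name, (obs, mod) in stations.items()}

def ranks(values):
    """Rank 1 = highest volume; no ties in this five-station subset."""
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]

obs_rank = ranks([obs for obs, _ in stations.values()])
mod_rank = ranks([mod for _, mod in stations.values()])

# Spearman rho via the classic formula 1 - 6*sum(d^2) / (n*(n^2 - 1)).
n = len(stations)
d_squared = sum((o - m) ** 2 for o, m in zip(obs_rank, mod_rank))
spearman_rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))

print(f"median APE {median(ape.values()):.2f}%, mean APE {mean(ape.values()):.2f}%")
print(f"min APE {min(ape.values()):.2f}%, max APE {max(ape.values()):.2f}%")
print(f"Spearman rho {spearman_rho:.2f}")
# -> median 27.40%, mean 68.75%, min 4.10%, max 237.62%, rho 0.40
```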

What the run itself says about itself

Lifted verbatim from the validation artifact so the language cannot drift between the internal record and this public page.

Screening gate

internal prototype only

At least one core facility has 237.62% absolute percent error, above the 50.00% critical-facility threshold.
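
The gate outcome follows mechanically from the per-facility errors. Below is a minimal sketch of how such a threshold check could look, assuming a simple max-APE rule; the 50% threshold and the status wording are taken from this page, while the function and its structure are our assumption, not the platform's internal gate logic.

```python
CRITICAL_FACILITY_APE_THRESHOLD = 50.0  # percent, per the gate language above

def screening_gate(facility_ape: dict[str, float]) -> str:
    """Hypothetical gate check; not OpenPlan's actual implementation.

    Reproduces the outcome quoted above: if any core facility exceeds the
    critical-facility APE threshold, the run stays 'internal prototype only'.
    """
    worst_station, worst_ape = max(facility_ape.items(), key=lambda kv: kv[1])
    if worst_ape > CRITICAL_FACILITY_APE_THRESHOLD:
        return (f"internal prototype only: {worst_station} has {worst_ape:.2f}% "
                f"absolute percent error, above the "
                f"{CRITICAL_FACILITY_APE_THRESHOLD:.2f}% critical-facility threshold")
    return "gate passed: no core facility exceeds the threshold"

# With the APE values computed in the previous sketch, SR 174 at Brunswick Rd
# (237.62%) trips the gate, matching the verbatim artifact language.
```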

Model caveats (verbatim)

What the screening run explicitly is not

  • screening-grade only
  • OSM default speeds/capacities
  • tract fragments are not calibrated TAZs
  • jobs are estimated from tract-scale demographic proxies
  • external gateways are inferred from major boundary-crossing roads

Reference

Full internal proof record

The proof doc (workflow, runtime commands, artifact paths, and the same numbers shown here) lives in the repository at docs/ops/2026-04-18-modeling-nevada-county-live-proof.md.

This page is generated from a single validation artifact snapshot. If the underlying run is re-validated or superseded, this page should be updated or removed — it is not a guarantee of current runtime state.

Open-source proof, then supervised service paths

The examples catalog should convert careful agencies without pretending OpenPlan is a closed SaaS black box. Public artifacts show the work; paid help starts when someone needs implementation, hosting, review, or planning support.

01

Inspect the proof trail

Evidence pages point back to repository records, validation artifacts, and caveats so technical reviewers can inspect how a claim was produced before asking for help.

02

Request implementation help

Request-access routes lead to service conversations — managed hosting, self-hosting readiness, workflow setup, or planning analysis — rather than instant unsupervised checkout.

03

No instant checkout demo claim

A public example is not a promise that every workspace will reproduce these results. Production commitments belong in scoped services with human review.

Guided demo fit check

A useful buyer walkthrough should end with one scoped first workflow, a named review owner, known data-sensitivity constraints, and the right delivery lane: self-hosted, managed-hosted, implementation-only, or a mix.

How additional examples enter the catalog

01

A run is added only when its validation artifact exists, its screening gate is captured, and its caveats are preserved verbatim (a minimal sketch of this check appears after this list).

02

Gate upgrades (e.g., beyond `internal prototype only`) require recalibration evidence, not just prettier framing. The catalog shows status truthfully.

03

Agencies and consultants who want to see the methodology behind a run can review service lanes or request a supervised walk-through.
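
For item 01 above, the inclusion rule can be pictured as a simple pre-publication check. The sketch below is hypothetical: the field names reuse the illustrative metadata record from earlier on this page and do not reflect OpenPlan's actual catalog tooling.

```python
def eligible_for_catalog(artifact: dict) -> bool:
    """Hypothetical pre-publication check for item 01 above.

    A run qualifies only if its validation artifact exists, its screening gate
    was captured, and its caveats are carried over verbatim. Field names are
    illustrative, not the real artifact schema.
    """
    required = ("run_id", "generated_at", "screening_gate", "caveats_verbatim")
    return all(artifact.get(field) for field in required)

# Example: a record missing verbatim caveats would not enter the catalog, even
# if its screening gate and timestamp are present.
```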