Help us read Toronto’s parks better
The atlas is a data-informed reading, not a verdict. Every metric carries a confidence range, every score has a written explanation, and the model is wrong about plenty of parks in ways only people who use them will catch. Here are three ways to push the project closer to the truth, in increasing order of commitment.
Volunteer observations
Sit somewhere with a clear view for 30 minutes, count what you see, and upload the completed template. Real human observation is the one signal sensors and OSM tags can't replicate.
- Aggregate counts only — no faces, names, or plates.
- Use the printable / spreadsheet template, then upload your CSV.
- Each row is weighted by the confidence you assign it.
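To make the weighting concrete, here is a minimal sketch of how a confidence-weighted submission might be consumed. The column names (`park`, `activity`, `count`, `confidence`) are illustrative assumptions, not the actual template schema:

```python
import csv
import io

# Hypothetical template columns -- the real template may differ.
SAMPLE = """park,activity,count,confidence
Trinity Bellwoods,sitting,12,0.9
Trinity Bellwoods,jogging,4,0.6
"""

def weighted_counts(csv_text):
    """Weight each observed count by the volunteer-assigned confidence (0-1)."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["park"], row["activity"])
        totals[key] = totals.get(key, 0.0) + int(row["count"]) * float(row["confidence"])
    return totals

print(weighted_counts(SAMPLE))
```

A row you are sure about (confidence 0.9) contributes almost its full count; a row you half-trust contributes proportionally less.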
Observation campaigns
Coordinated rounds in which each volunteer on a roster covers one park. Campaigns give better spatial coverage than ad-hoc submissions, and they are the quickest way to fill the blind spots the model is most uncertain about.
- Sign up for an active campaign; we'll surface progress publicly.
- Best for community groups, classrooms, or design-school cohorts.
- We aggregate submissions and credit contributors by handle.
Provide feedback
If you study, design, programme, or simply spend a lot of time in Toronto's parks, your judgement is more valuable than any single signal we measure. Tell us where the model is wrong, what we're missing, or how to read a park better.
- Comment on any park or on the methodology itself.
- Choose private, anonymous, or attributed visibility.
- Structured fields (strengths / weaknesses / typology) are optional.
Why bother?
Three signals are genuinely hard to model from open data: how people actually use a park across the day, whether the place feels alive, and where the typology classification we automate doesn't match conditions on the ground. Volunteer observations close the first gap, campaigns scale that coverage, and structured feedback addresses the second and third.
We aggregate every submission, weight it by confidence, and surface it back into the dataset on the next cache rebuild. Nothing is published with personal identifiers attached. The point isn’t a single “true” score per park — it’s a transparent reading that gets better as more people who know these parks contribute.
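As a sketch of what "weight it by confidence" means in aggregation terms, here is one plausible form, a confidence-weighted mean. The exact aggregation the atlas uses may differ; this only illustrates the principle:

```python
def weighted_mean(values_with_conf):
    """Confidence-weighted mean of submissions for one park metric.

    `values_with_conf` is a list of (value, confidence) pairs.
    This is an illustrative aggregation, not the atlas's actual formula.
    """
    total_weight = sum(conf for _, conf in values_with_conf)
    if total_weight == 0:
        return None
    return sum(value * conf for value, conf in values_with_conf) / total_weight

# Three hypothetical liveliness scores for one park:
print(weighted_mean([(0.8, 0.9), (0.5, 0.4), (0.7, 0.7)]))
```

Under this scheme a low-confidence outlier nudges the result rather than dominating it, which is the behaviour the weighting is meant to buy.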