Bringing pedestrian maps to Crystal Palace

I’m leading a Transition Town project to bring Legible London to Crystal Palace. You’ll have noticed these signs around central London, conspicuously absent across most of the rest of the capital:


At the local Transition Town AGM, somebody suggested we should try to bring these to Crystal Palace.

But why wait on TfL? Using OpenStreetMap and TileMill, we can try to produce our own maps in a similar style and stick them up ourselves!

Here is my prototype for the wide area map:


You’ll notice dark blue lines along the edges of roads and through the park. Those are pavements and footpaths, and indicate where you can walk.

Here’s a prototype for the more detailed map of the local area, with local points of interest, pedestrian crossings and bus stops:


You’ll notice that some roads don’t show any pavements. Eagle-eyed locals may also spot missing cut-throughs and wonder about missing points of interest.

To that end, this Saturday I’m running a stall at the Crystal Palace Food Market on Haynes Lane from 10.30am-1.30pm where people can help us gather data for footpaths, pavements and other useful features for pedestrians.

I’ll also be getting feedback on the cartography. One idea I’m pondering is producing themed maps. For example, we could omit pavements where air pollution is over legal health limits, or we could draw on political boundaries and colour-shade the infamous five boroughs that meet in the area, or we could add blue plaques.

Any mapping and cartography enthusiasts are welcome! I will have printouts of the local area for people to scribble on and bring back to the stall, or to drop off in a local cafe where I can collect them later in the week.


Robert commented that he has seen Legible London outside of central London. It’s true, I said; they do go further afield to some major town centres. Well, courtesy of the latest evaluation report for the scheme, here is a map of their extent:


Just imagine if that map could be absolutely covered with yellow dots, courtesy of a community-led, low-cost OpenStreetMap solution!

Densifying London (part two)

Following yesterday’s post on making London more dense, Tim Lund suggested I do a slightly more sophisticated analysis. Planners in London use a metric called the Public Transport Accessibility Level, or ‘PTAL’, which does pretty much what you’d expect.

Rules for things like car parking levels and the density of housing you should build are based on these, because obviously if you’re in central London you have no need for a car and you can justify quite tall blocks of flats, but in low rise suburbia with only sporadic bus services it’s accepted that more car parking and less dense housing is appropriate.

So if you were to follow these rules, how much more housing could you build in London?

First, I took the data for PTAL levels (the map on the left). Then I took my wards, sliced up to remove any areas that cannot be built on, cut out Heathrow airport too because it was such an extreme outlier, and worked out the median PTAL level for each one (the map on the right). Click for a larger version.


Then I took all the wards where the density was below London’s median. I calculated how many homes you could have, taking the midrange for urban areas for each PTAL level from the Housing Supplementary Planning Guidance (page 32). I deducted the actual number of households from that potential to arrive at the extra homes you could build if you were to bring the areas in line with the planners’ expectations.

This would imply flattening the lowest density half of London and building anew at densities between 80 and 225 homes per hectare.
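The deduction step can be sketched in a few lines of Python. The PTAL-to-density table below is illustrative only, not the real Housing SPG matrix, and the ward figures are invented:

```python
# Illustrative PTAL -> homes-per-hectare table (midrange urban values,
# made up for this sketch; the real figures are in the Housing SPG).
PTAL_DENSITY = {2: 80, 3: 120, 4: 170, 5: 225}

def extra_homes(median_ptal, buildable_hectares, existing_households):
    """Potential extra homes if the ward were rebuilt at the SPG density."""
    target = PTAL_DENSITY[median_ptal] * buildable_hectares
    return max(0, round(target - existing_households))

wards = [  # (name, median PTAL, buildable hectares, existing households)
    ("Ward A", 2, 150.0, 6_000),
    ("Ward B", 4, 90.0, 7_500),
]
total = sum(extra_homes(p, ha, hh) for _, p, ha, hh in wards)
```

The real run simply repeats this over every below-median ward.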

Here are the results. This map shows all the wards that fell below the median, colour-coded by the number of extra units you’d get if you rebuilt them at the density suggested by the London Plan, using every bit of land for housing:

The result: an extra 6,500,000 homes! That’s twice as many as exist across the whole of Greater London already.

Obviously nobody is about to demolish such an extensive area and re-build it from scratch, and to take every inch of commercial and industrial land for housing and mixed-use development. The environmental impacts of such a huge demolition and construction program would also be ruinous. So it’s a slightly absurd number. But it gives you an idea of what’s possible. Maybe this is what would have happened if we had a sustainable planning system during the 1930s, when these sprawl suburbs were built?

An interesting ‘part three’ would be to take those areas and see what capacity there is on brownfield sites. Sadly, I don’t know of any good up-to-date sources of brownfield data. There’s this data produced by the now defunct London Development Agency, not updated since 2009, which you can see on a map here. One to think about…

Making parts of London more dense

How do we build more homes in London? The Mayor’s latest exercise assessing needs suggests we need up to 690,000 over the next ten years, but a parallel exercise looking for land only came up with sites for 420,000 homes.

The usual debate is whether or not we build in London’s greenbelt to make up the difference. But there are at least three good reasons not to go down this route to solve our problems: there are an awful lot of protected habitats that we really cannot build on; building sustainable developments around transport hubs and avoiding those habitats could only deliver (in Andrew Lainton’s estimation) 72,000 homes; and if we ignore these, it could lead to more low density, car-dependent urban sprawl, which the greenbelt was established to prevent.

The alternative, or perhaps complementary, approach is to make London more dense, particularly around transport hubs in sprawling, low density outer London. This has actually been pushed for over a decade by Ken Livingstone and Boris Johnson with the London Plan, the main planning document for the capital.

There is a lot more to be said about that debate, but it isn’t my purpose with this blog. Instead, I have indulged in one of my hobbies and done a rough-and-ready analysis of the current density across London, by local electoral ward.

My methodology was as follows. I started with the ward boundaries, and the ONS household estimates from 2011. I then chopped out all the areas covered by the land uses which I reasoned we cannot build on.

  • greenbelt
  • metropolitan open land (strongly protected in planning policy)
  • parks, commons, allotments, nature reserves and other important green spaces
  • railway lines with a 5m buffer either side

The data for these came from the London Datastore and extracts from OpenStreetMap.

I considered cutting out areas covered by roads, but then found it would take so long for my software package QGIS to process the data that I’d lose interest! So, given that roads cover pretty much every area, I decided it wouldn’t make a significant difference and left them out.

I also didn’t cut out industrial or commercial areas, for three reasons: first, often commercial buildings have flats above; second, the coverage in OpenStreetMap is too inconsistent; and third, while many should be retained for this use, there are also lots of areas that could be redeveloped for homes, or as mixed-use sites.

So given these caveats, I calculated the number of dwellings per hectare of land that could potentially be built on. Here’s the result:


The green area is all the land that can’t be built on. The rest is coloured from deep red for very high density to light pinky grey for the lowest. I haven’t put a legend on because, well, it’s a very rough approximation. You can see obvious problems with the data, e.g. where Heathrow airport sits, and the Thames Gateway with lots of strategically important industrial land as well as lots of sites for new housing.

So what could densification achieve? Well, let’s say we took the least dense half of London and brought it all up to the median density. That would increase the number of homes by 815,000!
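As a sketch of that estimate, with invented ward figures (the real calculation runs over every ward’s actual household count and buildable area):

```python
# Bring every below-median ward up to the median density and count the
# extra homes that implies. Figures are made up for illustration.
from statistics import median

wards = [  # (homes, buildable hectares)
    (4_000, 100.0),   # 40 homes/ha
    (6_000, 100.0),   # 60 homes/ha
    (9_000, 100.0),   # 90 homes/ha
    (12_000, 100.0),  # 120 homes/ha
]
densities = [homes / area for homes, area in wards]
med = median(densities)

# Extra homes if each below-median ward were raised to the median density
extra = sum(
    (med - homes / area) * area
    for homes, area in wards
    if homes / area < med
)
```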

I haven’t gone any further with these because it is such a rough calculation. If I can get my hands on better data to account for the flaws mentioned, I’ll give it another whirl. But it would be quite interesting to take an area I know really well and look for development sites, and see whether they could be brought up to that median.

Update see part two where I look at density and public transport accessibility.

Mapping dirty London

In the past couple of months I’ve been able to combine work and my mapping hobby, working on a web site about air pollution in London. I’m going to be speaking about this at the October geomob meeting.

I’m lucky enough to live in one of Europe’s most polluted cities. Air pollution causes more early deaths than obesity and road collisions, and is only bested by smoking. The Mayor published some really good open data on pollution levels, which of course is incomprehensible to ordinary folk. So despite having a sense that it’s not the cleanest city, Londoners don’t know all that much about the problem or how it could be solved. We want to help change that.

Our first splash was a map showing the quantities of some major pollutants dropped on sections of roads across the capital, so Londoners could find out – how polluted is my road?


Lots of people loved that. The GLA’s GIS team did the mapping part, using our Ordnance Survey license and data to match pollution data up to ITN road sections. They also produced league tables for each borough, which we sent round to all the local papers. The Guardian featured it on their homepage and TimeOut blogged about it, driving many thousands to have a look.

Next, I checked a list of schools known to be within 150m of heavily polluted roads – there being strong scientific research to suggest a link between that proximity to pollution and higher rates of asthma in children. Currently there are estimated to be 1,148 schools suffering from this problem, revealed through fantastic work by the Campaign for Clean Air in London. We’ve mapped these, so you can see if your school is affected. This was really easy – turn the schools into GeoJSON and stick them into a Leaflet map, using the markercluster plugin to make it usable when zoomed out.
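The conversion step is only a few lines of Python. The field names below (name, lat, lon) are assumptions for the sketch, not the real source schema:

```python
# Turn a list of school records into a GeoJSON FeatureCollection ready
# to drop into a Leaflet map. Example records are invented.
import json

schools = [
    {"name": "Example Primary", "lat": 51.47, "lon": -0.07},
    {"name": "Another School", "lat": 51.50, "lon": -0.10},
]

feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [s["lon"], s["lat"]]},  # GeoJSON is lon, lat
            "properties": {"name": s["name"]},
        }
        for s in schools
    ],
}

geojson = json.dumps(feature_collection)
```

The resulting file can then be loaded client-side and clustered with the markercluster plugin.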


That wasn’t very difficult, but I think the map tells the story well – that this problem affects schools all over London, not just in the centre.

I’ve now been able to do some of the GIS work myself, and what fun it was! I’ve never had much call to really use Quantum GIS, but it’s a wonderful tool.

I was able to take raster files showing nitrogen dioxide concentrations across London from the London Atmospheric Emissions Inventory 2010, vectorise them, and filter them to find areas where levels were above legal limits. With this, I can then play around with other open data to see what lurks in areas suffering from illegally high levels of air pollution.


Areas of London expected to exceed legal limits for the annual average concentration of nitrogen dioxide in 2020. For context, London was supposed to be under these limits in 2011 in order to comply with a European Directive introduced a decade ago.

My first experiment was to clip this to London’s road network. I used the Overpass API to extract all the roads from OpenStreetMap (for some reason I can’t connect to OSM-GB at work of late). From this I was able to determine that in 2020, around 45% of London’s main road network is still expected to exceed legal limits. Nasty!
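As a toy stand-in for that clip (the real analysis used QGIS on the vectorised LAEI data), here is one way to estimate the polluted share of a road network: attribute each road segment to the grid cell containing its midpoint. All the geometry here is invented:

```python
# Simplified pollution/road overlay: the polluted area is a set of grid
# cells, and each road segment counts as polluted if its midpoint falls
# in one of them. A real run would clip actual geometries in QGIS.
from math import hypot

CELL = 1.0  # grid cell size, arbitrary projected units
polluted_cells = {(0, 0), (1, 0)}  # cells whose NO2 level exceeds the limit

def polluted_fraction(roads):
    """Share of total road length sitting in a polluted cell."""
    total = dirty = 0.0
    for road in roads:
        for a, b in zip(road, road[1:]):
            length = hypot(b[0] - a[0], b[1] - a[1])
            mid_x, mid_y = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
            total += length
            if (int(mid_x // CELL), int(mid_y // CELL)) in polluted_cells:
                dirty += length
    return dirty / total

roads = [
    [(0.0, 0.5), (2.0, 0.5)],  # runs through both polluted cells
    [(0.0, 2.5), (2.0, 2.5)],  # well clear of them
]
share = polluted_fraction(roads)
```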

I was also able to determine that in seven years’ time, there will still be 928 schools near to heavily polluted roads. So thousands of young Londoners will spend their whole time in primary school breathing in illegally high levels of air pollution.

I then started to think: where do I go that means I’m next to main roads for long periods of time? Pubs, cafes, bus stops, parks. Well, these are all in OpenStreetMap as well!

I started with bus stops, because we have a pretty comprehensive dataset there after the NAPTAN import. I did all the GIS analysis, producing tables of data for boroughs and the like. But it was only when I used Maperitive to produce tiles for a slippy map that it struck me – there are still LOTS of duplicate nodes where someone has manually added the bus stop years ago, then we imported the NAPTAN stop. So actually OpenStreetMap is a completely useless source for bus stops.

I got around this by just downloading the original NAPTAN data and using that instead. But it’s a shame because NAPTAN is really inaccurate. Where OpenStreetMappers have added bus stops, or manually checked NAPTAN stops, the locations are much more precise. It would be great if we could try to clean this data up to remove duplicates. Perhaps over the winter meetups, Harry?
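A rough sketch of how such a de-duplication pass might look: flag any OSM bus stop within roughly 20 metres of a NAPTAN stop as a likely duplicate (all IDs and coordinates here are invented):

```python
# Flag probable OSM/NAPTAN duplicate bus stops by proximity.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def likely_duplicates(osm_stops, naptan_stops, threshold_m=20):
    """Pairs of (osm_id, naptan_code) closer than the threshold."""
    pairs = []
    for osm_id, olat, olon in osm_stops:
        for code, nlat, nlon in naptan_stops:
            if haversine_m(olat, olon, nlat, nlon) < threshold_m:
                pairs.append((osm_id, code))
    return pairs

osm = [(1, 51.5000, -0.1000), (2, 51.6000, -0.2000)]
naptan = [("490000001A", 51.50005, -0.10005), ("490000002B", 51.7000, -0.3000)]
dupes = likely_duplicates(osm, naptan)
```

A real clean-up would of course need a human to confirm each pair before deleting anything.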

With this, I produced a snazzy web page showing info on London in 2020.


I haven’t tried pubs and cafes because our coverage is so patchy. One day there may be enough contributors for OpenStreetMap to have a really excellent geodatabase of these features. What an amazing resource that will be! Though I wouldn’t want to be put off some of my favourite pubs.

One final step I didn’t take was routing. I’d really like to see somebody integrate the pollution data with a routing engine, to try and find reasonably direct walking and cycling routes that keep you off the most polluted roads. I blogged about this last year, and I still think it would be both cool and genuinely useful.

My friend Robert also suggested a routing engine where the polluted roads are off-limits, and tiles without those roads drawn. Getting around today without using those roads at all would make for an interesting challenge!

All of this work has had quite an impact. Take this cutting from my local paper:

It was also the top story on BBC London News for the whole of Wednesday when Jenny Jones AM questioned the Mayor about our findings:

Now we just need to fix the problem.

Getting enthusiasts into OpenStreetMap

I started writing this as an email to Richard Fairhurst, but then thought I’d post it to my blog. I wrote something on a similar theme just over a year ago.


I just wanted to say that I thought your talk at the SOTMUS conference was spot on.

But when you talk about the cycling community, I think there’s an important caveat missing. Lots of people in our mapping community (or lots of the 5% who do 95% of the mapping) are enthusiastic cyclists, but few in the enthusiastic cyclist community are mappers.

I’ve not come across the London Cycling Campaign or any of the borough groups really getting involved with cycle mapping. Some members (myself included) do, but my impression is that most don’t. Despite LCC embedding CycleStreets on their homepage and collaborating with them on a cycle parking campaign, I’ve never come across a big concerted push from LCC and local borough groups to contribute to OpenStreetMap beyond the odd mention in their magazine [edit: and a two-page spread]. The same was true of Andy’s great DfT data project. To this day, coverage of cycling infrastructure in London is patchy (although far better than any online alternative).

The same goes for lots of enthusiast communities. You’re right that a lot of people map the thing they’re enthusiastic about. But not many communities organised around those enthusiasms get mapping.

I’ve tried, here and there, to talk enthusiastic people round to OSM. I’ve talked to community campaigners I know involved with cycling, walking, food growing, trees, transition towns, vegetarianism, housing and school catchment areas. All benefit from mapping; some do it themselves with varying degrees of proficiency, usually with pen and paper, Powerpoint slides or Google Maps. But the technical hurdles of using OpenStreetMap (and in some cases simply the effort of mapping) often seem to outweigh the benefits they’d really get from the results.

After a chat over some tea or beer I have to send interested people half a dozen links to different web sites that provide the editor, the not-very-usable tutorials, the place to find tags that aren’t presets, the way to go about inventing new tags if necessary, the custom renders (often a mix of ITO and other third party sites), the way to see recent changes in your area that is actually usable, the quality controls to check your work, the places to ask for help, the other people doing similar work in London and how to discuss it with them, the inspirational examples, and so on. The diversity of web sites and tools is a strength of OpenStreetMap, but it’s also terribly confusing and it can be hard to discover the tool you’re after. Often there simply aren’t tools there to do what they want, and they don’t have the skills to roll their own nor the money to pay someone to make them.

It’s too confusing and complicated. Usually the payback isn’t enough, so they don’t even start, or give up shortly after their first dabble with an editor.

So… your idea of a community page is excellent. It could help to create a central focus for people with the same enthusiasms, saving me the effort of compiling all those links and making everything seem terribly disjointed. It will take a few bricks out of the wall holding communities back from contributing to OpenStreetMap.

Here’s my extension to your idea that seems technically within our grasp since Potlatch 2/iD and the Overpass tool. Make it really simple (a few clicks in a web-based tool) to set up a hub for your niche interest: a community page bringing everything together, a custom editor with the appropriate presets, a nice map showing the results, data extracts (KML, JSON, SHP), and code snippets to display the results on your own web site. You like Welsh chapels? Spend half an hour on this web form and Bob’s your uncle.

As another hapless arts graduate with big time commitments to the Green Party, I don’t have the skills or time for this. I’m left cobbling together halfway-decent sites like OpenEcoMaps to try and fill a niche. It would be wonderful if the more technically gifted folk could make this a priority to turn more enthusiasts into the 5%.

OpenEcoMaps is back!

OpenEcoMaps, eco-living maps using OpenStreetMap data, is now working again. Hooray! I decided to sit down and work out why the OpenLayers interface wasn’t working and it turned out to be quite simple to fix.

You can now browse around maps of low carbon energy generators in London, veggie restaurants in Edinburgh, allotments in Exeter, recycling facilities in Glasgow and more! The data is updated every hour, direct from OpenStreetMap, and is available on maps and as downloadable/reusable KML and GeoJSON files. The code is also on GitHub, so you could set up your own version for another country if you like.


Some of the layers still aren’t working because the underlying data isn’t being extracted from OpenStreetMap properly. But I’m very glad that, after well over six months with it completely broken, the web site basically works again!

OpenEcoMaps halfway back

For almost a year now, my pet project OpenEcoMaps has been broken. The vagaries of unreliable XAPI servers meant the system couldn’t download OpenStreetMap data to create all the KML files, and (I think) some changes to OpenLayers meant the web maps also stopped working. It has taken me a long time to work up the energy to fix these.

Today I can happily say one half of the system is now working again, and the underlying code is much improved.


OpenEcoMaps KML files, and now GeoJSON files, are being created again. Hooray! I switched from XAPI to the Overpass API; grabbed JSON which enabled me to write a more powerful function to turn this into usable objects (for example building a complete Python object for an allotment merging data from relevant nodes, ways and relations); wrote a new library to create GeoJSON files; refactored everything else to fit with these changes; and made numerous other small improvements.
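In outline, the Overpass-JSON-to-GeoJSON step looks something like this sketch, which handles only nodes and ways and omits the relation handling and tag merging that the real OpenEcoMaps code does:

```python
# Convert Overpass API JSON elements into a GeoJSON FeatureCollection.
# Tagged nodes become Points; ways become LineStrings by looking up
# their member nodes. Relations are deliberately skipped here.
def overpass_to_geojson(elements):
    nodes = {e["id"]: e for e in elements if e["type"] == "node"}
    features = []
    for e in elements:
        if e["type"] == "node" and e.get("tags"):
            geom = {"type": "Point", "coordinates": [e["lon"], e["lat"]]}
        elif e["type"] == "way":
            coords = [[nodes[n]["lon"], nodes[n]["lat"]] for n in e["nodes"]]
            geom = {"type": "LineString", "coordinates": coords}
        else:
            continue
        features.append({"type": "Feature", "geometry": geom,
                         "properties": e.get("tags", {})})
    return {"type": "FeatureCollection", "features": features}

elements = [  # a tiny invented Overpass response
    {"type": "node", "id": 1, "lat": 51.5, "lon": -0.1},
    {"type": "node", "id": 2, "lat": 51.6, "lon": -0.2},
    {"type": "way", "id": 10, "nodes": [1, 2],
     "tags": {"landuse": "allotments"}},
]
fc = overpass_to_geojson(elements)
```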

You can browse, download and use the KML files and GeoJSON files with those links. To see an example, look at this KML file of low/zero carbon energy generators overlaid on Google Maps.

Now I just need to fix the web maps so you can see the lovely features on the main web site, and so people can easily embed the maps on their own web sites. I did dabble with using Leaflet before Christmas but I got stuck trying to get the icons to match styles defined in the GeoJSON file. I had a quick look at the OpenLayers code and quickly decided I had better things to do with my time! If anyone fancies giving it a go, the code is all on GitHub and is released under the GNU General Public License.

Fixing problems with OSM-GB’s web service

The OpenStreetMap GB project aims to measure and improve the quality of OSM data in Great Britain, cleaning it up to get rid of silly little data errors, and to make this cleaned up version available in formats that local and central government types are used to.

I have been exploring whether I could use the Web Features Service as an easier source for landuse data (see my previous blogs on making a map of London’s green spaces and analysing Southwark’s landuse, which both required a somewhat complicated process to get the data set).

In the process I’ve found it’s actually a great way to identify gaps and problems in our data. Browse around the map on the OpenStreetMap homepage and you might think we have an impressive coverage of residential landuse, playing fields and parks. But probe around with the WFS and you can spot all sorts of missing data.

For example:

The red circle in this image highlights a likely gap in our data. The yellow blocks are all the different land covers – residential, retail, park, forest, school, etc. The area underlying it in blue is the extent of the SE21 postal code. So if blue is poking through, it means OpenStreetMap hasn’t got any object covering that land area, which could mean we’re missing something that is actually there. In this particular area, Dulwich in inner London, it’s pretty unlikely that any bit of land can’t be tagged as something – brownfield, grass, nasty disused area of asphalt!

What could that gap be? I went to check it out and found it was an incompletely mapped recreation ground, so I updated OpenStreetMap to fill the gap.

Here’s another more subtle one:

Don’t see it? That blue corridor coming from the north west into the red circle is a railway line. It divides up two yellow blocks (a residential area to the west and a park to the east). Then it runs into a big yellow block – a residential area that should really be split in two, leaving a corridor for the railway line. Again, it was easy to correct the data in OpenStreetMap.

The best thing about the OSM-GB WFS is that it’s updated daily, so I can gradually improve the data by checking back the next day and spotting something else to fix. I’m using QGIS, and setting the style property by category, categorising by the ‘boundary’ column and setting any boundary types to transparent. This stops boundary polygons obscuring all the other data. In this way I can quickly set the style to highlight anything I like, and save the data as a shapefile which I can then query to get something more sophisticated.

I’m currently following this method to get a really good dataset for the SE21 area, just to see what’s possible.

Analysing Southwark’s natural geography

Following my map of London’s green and blue infrastructure, I have been working on some analysis of the land uses.

I was inspired and encouraged to try this by Liliana’s interesting work called “imagining all of Southwark”. Lili and Ari have managed to get the council to release lots of data on properties and car parking, and they are producing analysis of this data by postal code area and by street. They haven’t managed to get anything on land uses, so I thought, why not produce this with OpenStreetMap data?

A few evenings later, here is the result shared on Google docs (direct link) covering the eight postal code areas that between them cover most of the borough (SE1, SE5, SE15, SE16, SE17, SE21, SE22, SE24):

What the data means

The “summary” worksheet shows the total land area, expressed in hectares (10,000 m²), for various different types of land coverage. I have also calculated the percentage of that postal code area that the land uses represent, which gives an interesting insight into the differences between the areas.

Some of the land uses will overlap, for example miscellaneous bits of green space are often mapped on top of residential areas. So the numbers aren’t supposed to add up to anything like 100%.

The spreadsheet also contains worksheets for each postal code area. These contain a dump of all the objects in OpenStreetMap in those postal code areas, and this is the raw data the summary spreadsheet uses to get the totals.

Flaws in the data

You should use this data with a large spoonful of salt. Here are the significant flaws I have noticed:

Postal code areas are approximate, for example the boundary between SE15 and SE22 should mark the boundary between Peckham Rye Common (SE15) and Peckham Rye Park (SE22). In my data both the park and the common show up in both of the postal codes, because the boundary isn’t quite right. Read down to my method to see why. The errors introduced are pretty tiny in most places (plus or minus a few meters along the full boundary), and probably cancel themselves out for big land uses like residential, but they probably also introduce some significant errors for parks where the boundaries go awry by 20-30m in places. Sadly there aren’t any accurate open data polygons I can use.

Data is missing because OpenStreetMap contributors haven’t mapped it. Of course the easy solution here is to get more of it mapped and up to date! My estimate of the different types is as follows:

  • Allotments: complete for the whole borough.
  • Parks and commons: all major and district parks complete.
  • Misc green spaces: very poor coverage of, for example, large areas of grass on estates, especially in SE5, the north part of SE15 and SE17.
  • Woods/forest: all major woods complete, coverage of big clumps of trees e.g. on a housing estate or in a park is very uneven.
  • Residential: complete except for SE16.
  • Industrial, retail, commercial: large areas are complete, but small shopping parades, industrial parks and rows of offices are very patchy.
  • Brownfield/construction: patchy across the borough and sometimes out of date as sites are built on.

Data is also sometimes missing because of flaws in the Geofabrik shapefiles, not all of which I have corrected. For example, I noticed they were missing commons so I manually added those in, but I may have missed other land uses. One major omission, a shame given the interest in them, is the humble sports pitch/playing field.

How I produced this

After a lot of experimentation – I’ve never been trained to use GIS tools – I worked out this method. If you know of an easier way I’d love to hear about it.

  1. Prepare the boundary data:
    1. Extract a polygon for the London Borough of Southwark from the OS Boundary-Line data.
    2. Download the OS Code-Point-Open data, open the spreadsheet for the SE area in QGIS and use the ftools ‘Voronoi polygons’ plugin to infer polygons for the postal codes from the centroids. Postcode centroids are very dense in the middle of residential areas, so the boundary between SE15 4HR and SE22 9BD is only going to be out by a few meters, but they are quite far apart around large parks and commons, so the inferred boundaries get less accurate in those areas. See this map for an illustration of the Peckham Rye Park / Common problem mentioned above.
    3. Merge together postal codes into the areas (e.g. SE22 9QF, SE22 4DU etc. into SE22) by querying the shapefile for all objects with postal codes starting with SE22, then using the mmqgis merge tool to merge them into single polygons. Clean up the attributes so the shapefile just has one attribute for the correct postal code area.
    4. Clip the postal codes by the Southwark polygon and save the result – finally – as the postal codes shapefile for Southwark.
  2. Prepare the land use data:
    1. Download the OpenStreetMap shapefiles from Geofabrik for Greater London.
    2. Download common and marsh ways/relations using the Overpass API (with the meta flag on), import the data into QGIS using the OpenStreetMap plugin, and save the data as a Shapefile.
    3. Merge together the Geofabrik natural and landuse shapefiles with my Overpass-derived shapefile into one land use shape file using the mmqgis plugin.
    4. Clip the land use file by the Southwark polygon and save the result – finally – as the land uses shapefile for Southwark.
  3. Produce the postal code stats; for each postal code:
    1. Select the postal code, and clip the land use layer to that selected code, saving it as a new shapefile.
    2. Open that shapefile, then save it in a new projection that will be in meters rather than degrees (I used EPSG:32631 – WGS 84 / UTM zone 31N).
    3. Open the new shapefile, then run the ftools ‘Export/add geometry columns’ tool (in Vector/Geometry Tools) to add two attributes to the objects for the area and perimeter.
    4. Save the layer again as a CSV file.
  4. Produce the stats for the area of each postal code so we can calculate % of the area as well as ha for each land use:
    1. Save the Southwark postal codes polygon in the meters projection, add the geometry columns, and save as a CSV file.
  5. Collate all the data
    1. Tidy up and copy the data from each CSV file into a spreadsheet, then add in the formulae to tot everything up. You’re done!
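The final collation step amounts to totting areas up by land use and expressing each total in hectares and as a share of the postal code. A sketch with invented figures:

```python
# Collate per-object areas (square metres, from the 'add geometry
# columns' step) into per-land-use totals for one postal code area.
def summarise(objects, postcode_area_m2):
    """objects: list of (landuse, area_m2) rows from the CSV dump."""
    totals = {}
    for landuse, area_m2 in objects:
        totals[landuse] = totals.get(landuse, 0.0) + area_m2
    return {
        landuse: {"hectares": m2 / 10_000,
                  "percent": 100 * m2 / postcode_area_m2}
        for landuse, m2 in totals.items()
    }

rows = [("park", 150_000.0), ("residential", 600_000.0), ("park", 50_000.0)]
summary = summarise(rows, postcode_area_m2=1_000_000.0)
```

Because overlapping land uses are counted separately, the percentages won’t (and shouldn’t) sum to 100.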

For reference, some of the totals in the summary work off more than one land use type so here are the categories and the corresponding OpenStreetMap tags:

  • Allotments – landuse=allotments
  • Parks and commons – leisure=park / leisure=common
  • Misc green spaces – landuse=conservation / landuse=farm / leisure=garden / landuse=grass / landuse=greenfield / landuse=greenspace / landuse=meadow / landuse=orchard / landuse=recreation_ground
  • Woods and forest – landuse=forest / natural=wood
  • Residential, industrial, retail, commercial, brownfield, construction – corresponding landuse tags

Future ideas

One obvious improvement would be to get more data in. Perhaps this first analysis will encourage people to help out with that? I have also emailed Geofabrik about the flaws I have discovered in their shapefiles, so I hope those get fixed.

Another thought is to produce the stats by council ward. But given that there are far more wards, I’d like to find a quicker way of producing the stats for each ward (step three above) first.

It would also be interesting to do it by town/suburb, for example comparing Peckham to East Dulwich. But we don’t have any meaningful boundaries for those natural areas. It would be really interesting to do a mass version of “this isn’t fucking Dalston” for a whole borough, using the Voronoi polygons method to infer areas from surveys at thousands of locations around the borough. One day…

London’s natural geography

I’ve been playing around with open data from OpenStreetMap and Natural England to make a pretty map of “green and blue infrastructure” in London. Here’s the result:

You can download a PDF version suitable for printing here: natural_london.

I’m pretty happy with the result, my first real attempt to produce something useful with QGIS. The data I used was:

There’s no reason the Natural England data couldn’t be manually added to OpenStreetMap, giving us a complete dataset of natural features. I just chose to get on and do it this way rather than wait, or try to add all the data across areas of the city I don’t know well and am not going to visit any time soon. I also didn’t really need to use the Ordnance Survey data for boundaries, but it’s slightly more accurate and complete than OpenStreetMap data.

The map is probably missing lots of smaller patches of green space, including grass verges, green roofs and biodiverse brownfield sites. The biggest omission is the humble private garden – gardens cover 24% of London’s land!

But the map at least shows the more obvious, visible, public green spaces, and is a nice example of what a geek with no GIS training (but years of playing with OpenStreetMap) can do with free software and free data these days.