Monday 8 October 2012

Environment Agency Live Flood Warning Map

I was asked for some thoughts on the UK Environment Agency's Live Flood Warning Map designed by Shoothill so here goes...

Upon loading the map I must admit to feeling a sense of trepidation.  It's really not the most pleasing of sights: Bing satellite imagery combined with a baffling array of place names at various sizes.  Too much! And slapped on top are some rather odd-looking marker symbols...but at least they are designed to be seen across the dark basemap and all that detail.

Navigating the map is straightforward.  The usual zoom and pan tools are available.  The flood alert legend in the bottom left corner shows the status of current alerts and you can open a more detailed description which allows you to zoom to the areas directly.  You can also easily filter the alerts to control what the map shows, giving you a customised view of specific flood issues rather than having them all display.  The colours generally work well and the traffic light approach maps nicely onto our perceptual order of 'danger'.  I get an immediate sense of low level alerts and severe alerts simply through sensible use of colour.  The marker symbols are a little over-complicated in my opinion.  They seem to combine a point marker with a warning triangle, and I wonder if the triangle alone would suffice since the markers only locate general areas anyway.  They are clickable, giving access to further details...but the mouse pointer doesn't change as you roll over them, so nothing distinguishes them as offering any further content.  It would help if users could tell that the marker symbols are access points to more information.
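
As a small illustration, here's roughly how that affordance could be wired up in a Leaflet-style web map (a minimal sketch, not Shoothill's implementation; a Leaflet 1.x API is assumed and the alert fields are invented):

```typescript
import * as L from "leaflet";

// Hypothetical alert record; field names are illustrative, not the EA schema.
interface FloodAlert {
  lat: number;
  lng: number;
  detailsHtml: string;
}

function addAlertMarker(map: L.Map, alert: FloodAlert): L.Marker {
  const marker = L.marker([alert.lat, alert.lng]);

  // Bind a popup so the marker demonstrably offers further content...
  marker.bindPopup(alert.detailsHtml);

  // ...and switch the cursor on roll-over so the user can tell it's clickable.
  marker.on("mouseover", () => {
    const el = marker.getElement();
    if (el) el.style.cursor = "pointer";
  });

  return marker.addTo(map);
}
```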

As you zoom, the map content scales nicely and you get to see actual flood risk surfaces appear.  Personally, I'd like some further information on how these are calculated, and they aren't clickable, so I don't get a sense of the scale of the potential threat other than visually.  Is this important?  Well, it's a flood risk zone, and it seems sensible to use it as a gateway to details about the risk rather than have those details listed only against the more generally located marker symbol.
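
Making the zone polygons themselves clickable is straightforward, at least in the same Leaflet-style sketch (again, the zone attributes below are hypothetical placeholders, which is rather the point: the real map doesn't expose them):

```typescript
import * as L from "leaflet";

// Hypothetical zone attributes; invented for illustration.
interface RiskZone {
  boundary: [number, number][]; // ring of [lat, lng] pairs
  summaryHtml: string;          // e.g. extent, severity, how the zone was derived
}

function addRiskZone(map: L.Map, zone: RiskZone): L.Polygon {
  const polygon = L.polygon(zone.boundary, {
    color: "#e07000",
    fillOpacity: 0.5,
  });
  // Clicking the zone itself opens the details, so the risk surface becomes
  // the gateway to the information rather than a distant marker symbol.
  polygon.bindPopup(zone.summaryHtml);
  return polygon.addTo(map);
}
```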

There are options to switch between Bing imagery, Ordnance Survey raster data or a Bing road map, and at different scales this becomes useful for seeing the flood risk zones clearly.  Of course, none of these are perfect solutions because the basemaps are not primarily designed to act as the underlay for operational layers like flood risk zones...but that's a perennial problem when mashing up polygonised operational overlays on top of general-purpose reference basemaps.  A nice touch is the ability to apply some transparency to the flood risk zone layer...and also to invoke an 'emphasis layer' which adds a dark grey transparent wash across the map, effectively darkening the background and giving emphasis to the flood risk zones and markers.  This is a novel way of countering an over-cluttered basemap when you don't have access to a clean, de-cluttered one.
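
For anyone curious how such an emphasis layer might be built, one plausible approach (sketched here with an assumed Leaflet 1.x API, not Shoothill's actual code) is a world-covering, semi-transparent dark rectangle drawn above the basemap tiles but below the operational layers:

```typescript
import * as L from "leaflet";

// A dark, semi-transparent wash across the whole map.  Added before the
// operational layers, it sits above the basemap tiles but below the flood
// zones and markers, knocking back basemap detail so the overlays stand out.
function addEmphasisLayer(map: L.Map, strength = 0.5): L.Rectangle {
  const worldBounds = L.latLngBounds([-90, -180], [90, 180]);
  return L.rectangle(worldBounds, {
    stroke: false,
    fillColor: "#1a1a1a",
    fillOpacity: strength,  // how heavy the wash is; 0 = off, 1 = blackout
    interactive: false,     // let clicks fall through to the map beneath
  }).addTo(map);
}
```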

The search works well: I managed to locate a range of places easily using either place names or postcodes.  You can also sign up to receive alerts via other mechanisms.

Overall, a pretty good effort and a good example of a well constructed web map.  It's not over-complicated and its use is fairly intuitive.  I guess the only real question is the accuracy of the flood risk surface itself...when you dig into the help it's clear that the risk zones are a little spurious.  Flood mapping is notoriously difficult and the map admits this.  It states that the zones do not include information on flood depth, speed or volume of flow, or on groundwater, runoff and other sources.  So what is it then?  A predicted surface of areas at risk, based on what?  That's not made clear, and that's where maps like this fall over.  On the face of it, it gives access to what many would view as a detailed and accurate map of flood risk.  But it's only as detailed and accurate as the input data, which is partial at best.  The danger here is that the map, while generally well presented, implies a level of accuracy that might not be valid.  Not many people (I would think) would read the help file to explore the details of how the risk surface was or wasn't constructed...they're going to go straight to their home location and see a great big orange or red polygon splattered over their house.  Notwithstanding that the map explains it shouldn't be used to infer risk to individual properties, this is the natural inference.

Implied accuracy is something that web maps in general need to wrestle with.  People view maps as accurate (they always have).  They rarely question them, rarely ask the right questions about data efficacy, the mapping technique and how it was applied, or whether the map's form and function have been in any way subjectively manipulated.  Since we look at web maps quickly and want rapid information retrieval, the map needs to somehow qualify what it's showing, so that the potential risk is communicated to the public without causing panic or hysteria through implied accuracy.  One way this might be achieved is to create graded flood risk surfaces.  At the moment each zone is a binary surface...it's either there or it isn't...and it's simply labelled as Flood Alert, Flood Warning or Severe Flood Warning.  Why not code this in the map itself as a layer with different internal zones...and why not calculate some measure of how 'believable' the alert might be?  For instance, not all areas have the same probability of flooding despite being in the same alert zone.  This is more complicated, but it's also where data science needs to meet web mapping to provide more accurate frameworks.
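
As a sketch of the idea, grading could be as simple as classing a modelled per-area likelihood into ordered bands with a sequential colour ramp.  Note that the probability value here is entirely hypothetical (the complaint above is precisely that no such measure is exposed), and the breaks and colours are illustrative, not a proposed official scheme:

```typescript
// Hypothetical: classify a modelled flood likelihood (0..1) into graded
// symbology instead of a single binary zone.
interface Grade {
  label: string;
  fillColor: string;
  fillOpacity: number;
}

function gradeRisk(probability: number): Grade {
  if (probability >= 0.75) return { label: "Very likely", fillColor: "#b30000", fillOpacity: 0.70 };
  if (probability >= 0.50) return { label: "Likely",      fillColor: "#e34a33", fillOpacity: 0.55 };
  if (probability >= 0.25) return { label: "Possible",    fillColor: "#fc8d59", fillOpacity: 0.40 };
  return                          { label: "Unlikely",    fillColor: "#fdcc8a", fillOpacity: 0.25 };
}
```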

Owen raises an interesting issue (in the comments below) with regard to the complexity of the data as it stands, what it shows, and how more complex data might lead to a confusing map.  I agree to an extent: making complex data easy to understand is tricky.  BUT to my mind this map is an example of less is less.  @mapperz tweeted the following this morning, which characterises the problem very well:

So a map made for public consumption should be simple, so that the public neither has to think too hard nor gets bogged down in detail?  Whereas if the map were made for 'GIS statistical analysis' (whatever that is) then by definition we require something altogether more complex, because either we can handle complexity or we demand the detail to do more with it.  I disagree...and this is my beef with far too many maps (and web maps in particular): they over-simplify, and the approach is justified as satisfying a general public with low levels of understanding and cognitive capability.  The map-makers don't want to create confusion, so the answer is a very vague map built on generalised data.  Actually, the reverse is true: it's an art to take something with a complex structure and deal with it in a sophisticated manner, communicating the complexity clearly and without ambiguity.  Just creating a simple version and claiming people don't want to do GIS analysis is a bit of a cop-out.  The argument that a map was made for public consumption shouldn't absolve the map-maker of dealing with complexity in a sophisticated manner.


  1. Lots of good points, and I was with you up until the last paragraph. As you probably know, the existing polygons are only meant to describe the geographic area in which a flood alert or warning has been declared. Although the polygons do have relationships to both the flood plain and the built environment, the actual risk from a particular emerging flood event is unlikely to be evenly distributed within that area. I agree that this isn't very clear, and that there is potential for confusion.

    However I think it would be either impractical or counter-productive to include additional information to qualify the flood risk. The ideal would be to show more information on the event as it emerges, such as outlines of actual flooding, river levels and flow rates, overtopping of defences, etc. But at the moment that's very difficult. EA systems simply cannot gather enough observations to present the full picture reliably in real time for a non-specialist audience.

    The other option would be to show the likelihood categories for each location within the polygon, i.e. overlay the EA's Flood Map or NaFRA spatial data. As you say, not all areas within an alert zone have the same probability of flooding. The problem is that those likelihood categories are based on modelling of multitudes of flood scenarios over long return periods (decades and centuries). In the face of an individual weather event on a particular day, that information could mislead. A flood warning map needs to avoid sending mixed messages; I think telling people their 'everyday' level of flood risk would just distract their attention from the risk today.

  2. Owen, I actually think we're agreeing regarding the last paragraph. It's almost impossible to go further with the map given the data and uncertainties that exist, and to do so might create a map with even more potential for confusion. What I was advocating, however, was the need to ensure the map is purposeful and understandable within the constraints of the data. I'm not sure that carries through clearly enough here; for a general audience the single zone provides a simplistic picture, but one which hides a lot of nuance. No easy answer!

  3. Kenneth, thanks for the update. I agree with your overall point, and would distinguish my own view from that in the mapperz tweet. I don't think the fact that a map is designed for public consumption should per se be used as an excuse for over-generalisation.

    In an ideal world users should be able to "drill down" to additional information at a level of complexity that matches their understanding and interest. Per my previous comment I'm not sure mapping of flood alerts is a good example though, as delivery of more granular real-time information would be a challenge.

    It's important to note that the flood alerts map is part of the EA's emergency response strategy. That means the initial presentation has to prioritise clear communication of the most important information. I don't think the Shoothill implementation is entirely successful in achieving that. However I think the problem is less over-generalisation and more a failure to completely explain what the map is actually showing.

    My own preference would be for the Environment Agency to release their flood alert XML feeds and polygons under open data terms, which would enable the development of more applications for a wider range of users. I gather the infrastructure for that approach is mostly in place, but it might take some political will to change the licensing.
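
    For a flavour of what consuming such a feed might look like once opened up, here's a rough browser-side sketch (the element names are invented placeholders; the real feed schema isn't public and would certainly differ):

    ```typescript
    // Sketch only: parse a hypothetical flood-alert XML feed in the browser.
    interface AlertRecord {
      area: string;
      severity: string;
      issued: string;
    }

    function parseAlertFeed(xmlText: string): AlertRecord[] {
      const doc = new DOMParser().parseFromString(xmlText, "application/xml");
      return Array.from(doc.getElementsByTagName("alert")).map((node) => ({
        area: node.querySelector("area")?.textContent ?? "",
        severity: node.querySelector("severity")?.textContent ?? "",
        issued: node.querySelector("issued")?.textContent ?? "",
      }));
    }
    ```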