Advanced Data & Coding - City University

999 Call Response Times in London

The Metropolitan Police fail to meet their target response times for 15% of emergency calls and 20% of priority calls, data from a recent Freedom of Information request reveals.

The aggregated data covers all 999 calls from July 2018 to the end of December 2018. An analysis of the figures shows that each day, an average of 143 of 936 emergency calls miss the 15-minute target response time, and 177 of 885 priority calls go over the one-hour target.

Across London, the median emergency call is responded to within 9 minutes and 27 seconds, while priority callers wait a median of 32 minutes and 29 seconds.

density_1.png

However, the data also revealed shocking wait times for a small number of callers: one emergency call went unanswered for more than 3 days, and one priority caller only got a response after 11 days.

The data also showed significant differences in 999 wait times by borough. At 8 minutes and 4 seconds, Hackney has the shortest median response time for emergency calls. Residents of Kingston upon Thames wait a median of 11 minutes and 55 seconds, nearly 4 minutes longer than Hackney.

density_2.png

Median Response Times to 999 Emergency Calls, by Borough

Data: London Metropolitan Police via FOI request | Analysis & Graphics: Anisa Holmes


About the Data & Process

I submitted an FOI request on 999 call response times in London to the Metropolitan Police in December 2018 and received a reply about two months later. After cleaning NA values, the dataset contains 335,192 individual records, along with the classifications I requested: the type of 999 call (emergency “I” or priority “S”) and the borough.
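
Cleaning the NA values amounts to dropping incomplete rows; a minimal sketch, with a hypothetical file name and column names standing in for the real FOI extract:

```r
library(readr)
library(dplyr)

# Hypothetical file and column names -- placeholders for the real FOI data
calls <- read_csv("met_999_responses.csv") %>%
  filter(!is.na(response_time), !is.na(grade), !is.na(borough))
```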

I began by calculating summary statistics for “I” and “S” calls separately. Doing so helped me determine that it would be more appropriate to report median values rather than means, due to the heavy skew in the dataset.
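
That check is a grouped summary, sketched below with the same assumed column names; a mean sitting well above the median (and an extreme maximum) is what points towards reporting medians.

```r
library(dplyr)

# Compare the centre and spread of "I" and "S" calls separately
calls %>%
  group_by(grade) %>%
  summarise(
    n           = n(),
    mean_wait   = mean(response_time),
    median_wait = median(response_time),
    max_wait    = max(response_time)
  )
```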

After cleaning, sorting and summarising, I sorted the median wait times by borough and created some density plots to get a better feel for the distribution of the data. I took some time figuring out how to add vertical lines and labels to show the mean value, and once I'd done so, I was quite puzzled as to why there were suddenly two legends rather than just one. I learned that tying the aesthetics of my geom_vline to the dataset itself meant an additional legend would automatically be added. Of course there was an easy workaround of adding show.legend = F, but it took me FOREVER to figure that out.
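
For reference, a stripped-down version of that pattern looks something like the sketch below; the response_time and grade column names are assumptions carried over from earlier, and median() drops in for mean() in exactly the same way.

```r
library(dplyr)
library(ggplot2)

# One value per call grade, computed into its own data frame so that
# geom_vline() can map its aesthetics to it
grade_means <- calls %>%
  group_by(grade) %>%
  summarise(mean_wait = mean(response_time))

ggplot(calls, aes(x = response_time, fill = grade)) +
  geom_density(alpha = 0.5) +
  geom_vline(data = grade_means,
             aes(xintercept = mean_wait, colour = grade),
             linetype = "dashed",
             show.legend = FALSE) +   # suppresses the second legend
  labs(x = "Response time (minutes)", y = "Density")
```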

From there I moved on to creating a simple bar graph and a lollipop chart. The bar graph went smoothly, but I ran into a strange issue with sorting the values in the lollipop: I couldn't figure out why it wouldn't sort by value rather than alphabetically, but I managed to find a clunky workaround (a cleaner fix is sketched below).
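
One common fix is to reorder the borough factor by its value before plotting rather than relying on the default alphabetical ordering; a minimal sketch, reusing the assumed calls data frame and column names:

```r
library(dplyr)
library(forcats)
library(ggplot2)

borough_medians <- calls %>%
  filter(grade == "I") %>%
  group_by(borough) %>%
  summarise(median_wait = median(response_time)) %>%
  mutate(borough = fct_reorder(borough, median_wait))  # order by value, not alphabetically

ggplot(borough_medians, aes(x = borough, y = median_wait)) +
  geom_segment(aes(xend = borough, yend = 0)) +        # the "stick"
  geom_point(size = 3) +                               # the "lolly"
  coord_flip() +
  labs(x = NULL, y = "Median emergency response time (minutes)")
```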

Next, I moved on to map making. I found a London borough shapefile on the London Datastore and created a choropleth map using the response times for “I” calls. After making the map and doing some troubleshooting, I found that the boroughs had slightly different naming conventions (Upon > upon, and > &), so I went back to the top of my script to correct the dataset so it would later match up with the map. I used the same visual identity as my other graphs and also experimented with adding centered labels to each borough. Ultimately I decided the borough labels made the map too busy, so I resolved to create an interactive map instead.
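
The name mismatch itself comes down to a couple of string replacements before joining the data to the shapefile. A sketch with the sf package follows; the file path, the direction of the replacements, and the shapefile's NAME column are all assumptions.

```r
library(sf)
library(dplyr)
library(stringr)

boroughs <- st_read("london_boroughs.shp")  # placeholder path for the London Datastore shapefile

# Harmonise the naming conventions -- in the real script this goes at the
# top, before any of the summaries are computed
calls <- calls %>%
  mutate(borough = str_replace_all(borough,
                                   c(" Upon " = " upon ", " and " = " & ")))

i_medians <- calls %>%
  filter(grade == "I") %>%
  group_by(borough) %>%
  summarise(median_wait = median(response_time))

# Join the per-borough medians onto the borough polygons
map_data <- left_join(boroughs, i_medians, by = c("NAME" = "borough"))
```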

I looked into different solutions for creating interactive maps and decided to try Leaflet. Adding a hover feature resolved the labeling issue from my static map and made for a more engaging visual. At this point I was a bit rushed and ran into some issues when it came to embedding the interactive map in my Squarespace website. I ended up hosting the map on my GitHub and embedding it as an iframe until I find a better solution. A different map-making tool might be more efficient in this case, so I'll continue to explore.
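
The hover feature comes from the label argument of Leaflet's addPolygons(). A minimal sketch, reusing the assumed map_data object and column names from above (Leaflet wants the geometry in WGS84):

```r
library(sf)
library(leaflet)

map_data <- st_transform(map_data, 4326)  # leaflet expects lon/lat coordinates

pal <- colorNumeric("YlOrRd", domain = map_data$median_wait)

leaflet(map_data) %>%
  addPolygons(
    fillColor   = ~pal(median_wait),
    fillOpacity = 0.8,
    color       = "white",
    weight      = 1,
    label       = ~paste0(NAME, ": ", round(median_wait, 1), " min")  # shows on hover
  ) %>%
  addLegend(pal = pal, values = ~median_wait, title = "Median response (min)")
```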

Overall, creating a custom look and feel for the graphs took the longest amount of time. There was a lot of trial and error involved, but it was a great way to get a better feel for the aesthetic properties within R. I also had a lot of issues with aligning the plot titles. I didn't have enough time to pin down an effective solution for a consistent title location across graphics, so I just touched up the final files in Illustrator.
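
For reference, one in-R route to a consistent title location is a shared theme object with plot.title.position = "plot" (ggplot2 3.3.0 or later), which anchors the title to the full plot area rather than to the panel; the theme name and styling below are just illustrative.

```r
library(ggplot2)

# A reusable theme added to every chart, e.g. ggplot(...) + ... + theme_999
theme_999 <- theme_minimal() +
  theme(
    plot.title.position = "plot",  # align the title to the whole plot, not the panel
    plot.title      = element_text(face = "bold", hjust = 0),
    plot.subtitle   = element_text(hjust = 0),
    legend.position = "top"
  )
```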

With more time, I'd look into the volume of 999 calls per day, check how it matches up with holidays, and see what other patterns emerge.


Additional Graphics Iterations

bar_1.png
lolliplot_1.png
map_1.png
map_2.png