Data and Dashboards

Dashboards, Performance Management, and the Trouble with Different Kinds of Data

Mayor Garcetti Showcases Dashboard at US Conference of Mayors

Three different kinds of data

There are effectively three kinds of civic or public data that need to be collected, aggregated and summarized for effective decision making.

Transactional / Raw Data

First is transactional or operational data. This is what the city's doing on a day-to-day basis. The easiest metaphor for that is crime: what crime is happening in the city, and where? (Funny aside: the best data around crime doesn't in fact come from the city itself, it comes from the LA Times, because city data has to be validated, running up the pole from the local government to the state and back, and there's about a two-year backlog on that.) Or take 311. Are there lots of potholes in this part of town? Or homelessness. Is there an epidemic of homelessness in that other part of town? That kind of situational awareness is what you get from operational data. It's what's happening on the ground, all the time. That needs to be surfaced, organized, and then presented in a compelling way for a decision maker to say, "We need to do something different about that."
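To make that concrete, here's a minimal sketch in Python, with invented records and field names (nothing LA actually runs), of what surfacing operational data looks like at its simplest: counting what's happening, by type and by neighborhood.

```python
# A minimal sketch, not a real city pipeline: aggregating hypothetical
# 311-style transactional records into per-neighborhood counts so a
# decision maker can see what is happening, and where.
from collections import Counter
from datetime import date

# Hypothetical raw records; in practice these would come from a 311 system.
requests = [
    {"type": "pothole", "neighborhood": "Boyle Heights", "opened": date(2015, 3, 2)},
    {"type": "pothole", "neighborhood": "Boyle Heights", "opened": date(2015, 3, 4)},
    {"type": "homeless encampment", "neighborhood": "Venice", "opened": date(2015, 3, 3)},
]

# Count open requests by type and neighborhood: basic situational awareness.
counts = Counter((r["type"], r["neighborhood"]) for r in requests)
for (req_type, hood), n in counts.most_common():
    print(f"{req_type:>22} | {hood:<15} | {n}")
```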

Performance Data

The next kind of data is performance metrics. This area was popularized by initiatives such as CompStat and CitiStat, which look at service delivery levels for performance management through what the private sector would describe as KPIs. In contrast to transactional data, this is aggregate. Instead of looking at every single call that came in to 311, say, you would ask, "What's our average wait time on a 311 call?" or "What's our average time to resolution for a pressing issue?" This gets more complicated than transactional data, because it's not usually reported through just one system. Take a 311 request as an example. In LA, the IT department runs the call center, but any actual service delivery — say, filling that pothole or cleaning up that street — is handled by another department (DWP or DPW). Thus systems integration and aggregation are essential, but immensely difficult. Tracking performance metrics is hard, and it's made harder by their normative nature: whether a wait time is good or bad depends on the time of year and the historical context. So performance metrics are not only tricky to track, they're even trickier to assess.
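Here's a hedged sketch, again in Python with hypothetical data and field names, of why those KPIs force systems integration: the average wait time can come from the call center's system alone, but the average time to resolution needs the call record joined to another department's work orders.

```python
# A sketch with made-up records and IDs. The call center logs the request;
# a different department's work-order system logs the fix. Neither system
# alone can answer "what's our average time to resolution?"
from datetime import datetime
from statistics import mean

# Hypothetical call-center records (the IT department's system).
calls = {
    "SR-1001": {"received": datetime(2015, 3, 2, 9, 0), "answered": datetime(2015, 3, 2, 9, 6)},
    "SR-1002": {"received": datetime(2015, 3, 3, 14, 0), "answered": datetime(2015, 3, 3, 14, 2)},
}

# Hypothetical work orders from another department, keyed by the same request ID.
work_orders = {
    "SR-1001": {"closed": datetime(2015, 3, 9, 16, 0)},
    "SR-1002": {"closed": datetime(2015, 3, 5, 11, 0)},
}

# KPI 1: average call wait time, computable from the call-center system alone.
avg_wait_min = mean(
    (c["answered"] - c["received"]).total_seconds() / 60 for c in calls.values()
)

# KPI 2: average time to resolution, which needs both systems joined.
avg_resolution_days = mean(
    (work_orders[sr]["closed"] - calls[sr]["received"]).total_seconds() / 86400
    for sr in calls
    if sr in work_orders
)

print(f"Average wait time: {avg_wait_min:.1f} minutes")
print(f"Average time to resolution: {avg_resolution_days:.1f} days")
```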

Quality of Life

The final piece of this puzzle is probably the most obvious to real citizens: quality of life. How are things going in the city, and more directly, how do people feel about it? (My old boss, Deputy Mayor Rick Cole, described this as "gestalt" data.) Crazy as it might be to hear, local governments barely ever track this. A few rare examples exist: Kansas City has a citizen survey, and DC runs a feedback program for its programs. But even then, these initiatives at best track citizen satisfaction with service delivery. They do not track general sentiment about quality of life. Do people feel safe? Do they feel like their children have a good education? Do they have access to the services and programs they need?
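A sketch, with entirely invented numbers, of the kind of quality-of-life tracking I'm arguing for: summarizing how residents answer a "do you feel safe" survey question, neighborhood by neighborhood.

```python
# Hypothetical responses to "I feel safe walking in my neighborhood at night"
# on a 1-5 agreement scale (1 = strongly disagree, 5 = strongly agree).
# Illustrative only; no real survey data is represented here.
from statistics import mean

responses = {
    "Boyle Heights": [4, 3, 2, 5, 4, 3],
    "Venice": [2, 2, 3, 1, 2, 4],
}

for neighborhood, scores in responses.items():
    share_positive = sum(s >= 4 for s in scores) / len(scores)
    print(f"{neighborhood:<15} mean={mean(scores):.1f}  feel-safe share={share_positive:.0%}")
```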

This has been what I would describe as the third rail of government data: do you know how your constituents feel about their government?

Organizations like Pew and others try to collect that data, usually at the national level and rarely at the local level. But this is the kind of information we need to know as we govern cities, because it's the ultimate barometer of our success.

Bringing that back to the running theme, then: the question isn't simply how many 311 calls came in, nor how quickly we responded to those issues, but how clean do you think your city is? How safe do you think it is to walk? These are difficult questions to ask, and in general cities don't ask them, for multiple reasons: they don't know how to, and they don't know where to.

But gathering that data, and then combining it with performance metrics and operational data, is critical. All three are interrelated, and the art is in finding the overlap, as I sketched out with Rick (yes, actually on a napkin):

Data Sweet Spot

The sweet spot, the all too rare sweet spot, is where you see how those three connect. Maybe it's 311 issues that come in from 5-9pm every night that lead to long call wait times and construction delays, and thus families getting to dinner late. That would make the citizenry unhappy (if you could track it).

It’s this sweet spot, the alignment of transactions, performance, and quality of life, that’s the ideal function of data science and dashboarding.
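As a rough illustration (the numbers are made up, and nothing here is a real city dataset), the sweet spot amounts to lining all three kinds of data up on a common key, here the neighborhood, and asking whether they move together.

```python
# A minimal sketch of the "sweet spot": transactional, performance, and
# quality-of-life data aligned by neighborhood. Every number is invented
# purely for illustration.
from statistics import correlation  # requires Python 3.10+

neighborhoods   = ["Boyle Heights", "Venice", "Van Nuys", "San Pedro"]
open_requests   = [120, 45, 200, 60]     # transactional: open 311 issues
resolution_days = [9.5, 4.0, 14.0, 5.5]  # performance: avg time to resolution
satisfaction    = [2.8, 3.9, 2.1, 3.6]   # quality of life: survey score (1-5)

# Do slower resolutions line up with unhappier residents? Correlation is not
# causation, but it is where the interesting questions start.
print("resolution vs. satisfaction:", round(correlation(resolution_days, satisfaction), 2))
print("open requests vs. satisfaction:", round(correlation(open_requests, satisfaction), 2))
```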

Data Process

If you can figure out that sweet spot, that correlation, that causation, that's profoundly interesting and useful, but it's really hard to do. It is the work we have to do as civic data advocates and as data-driven public officials. Data isn't just data. It's more complicated than that. There are different kinds and different types, types that interact, interrelate, and connect, and it is our responsibility as public stewards to bring them together to better serve citizens.

There's a notion in Democratic Theory called Deliberative Democracy (yes, I'm a philosophy nerd) where democracy only works well when people are forced to, and willing to, share good reasons to defend their point. They can't just say, "I believe this and that's that." They have to say, "I believe this is going well, and this is why."

That’s where data can, should, and must come in.


More: Lecture on How Governments Use Data
