Part I – The Open Data Future: Interview with Joel Natividad

Editor’s Note: On the eve of the one-year anniversary of OpenGov acquiring open data leader Ontodia, we spoke with Ontodia’s Founder and OpenGov Director of Open Data Joel Natividad about the future of open data. Joel is a member of the CKAN Association’s Steering Group, a true innovator, and an influential thought-leader in the field. Here, he discusses open data’s meaning, relevance, and the path forward towards greater usability.

Part I of our interview, below, explores the concept of open data and its future. Part II will continue the conversation by exploring open data’s future use and implementation.



PART I: Understanding Open Data

How do you define open data?

The common or classical understanding of open data is that it is data published for transparency purposes. Many of us are familiar with the McKinsey Global Institute study, which cited the potential of open data to create $3 trillion in annual value in the global economy. To date, we haven’t really captured that potential.

Part of the reason we have not captured that potential lies in the main lesson of the first generation of open data: it is not enough just to publish data. We found that the reality is not simply, “publish it and innovation will come.” Having gone through that first generation, we’ve learned the hard way that people don’t really care about raw data. The emphasis needs to change so that we are using “open” as a verb – as in “opening data.”


And what does “opening data” actually look like?

The purpose of open data is not just public transparency. Its value is also in driving decisions inside government. By treating open data as a strategic asset, you make the data cleaner and more operational. That increase in internal utility is open data’s future, and it drives an internal cultural shift as well, whereby government is not only the publisher of the data but also a primary beneficiary.

Opening data creates a “data of record” for all government departments – internally first, and then shared among peers. Often, government staff view open data initiatives as another unfunded mandate: they aren’t sure what the outcomes are, or they fear they may accidentally share sensitive information. However, if that same data is actively used internally, sharing and “opening data” become part of the normal workflow. Only then will everybody benefit: the staff, the city council, and the public. If you increase open data’s internal utility, you increase its relevance, and that is where things are heading.


What are the main drivers moving open data toward the future?

One of the things you find now, especially from the leading thinkers on 21st-century government like the Sunlight Foundation, the Center for Government Excellence at Johns Hopkins, and others, is that open data is a means to an end. The key point is that the end goal is performance management. How do you do performance management? You cannot do it without open data. Governments cannot perform well without this basic ingredient of clean, open data. At OpenGov, we are looking at it in the same way.



So what are examples of using open data for performance management?

We see correlating spending with outcomes as the best way to operationalize open data for performance. That’s why we were so excited when Ontodia joined OpenGov, whose core competency is finance. Because in our mind, the most important dataset is the budget.

If we correlate and measure financial data against other data, we gain insight into spending versus outcomes. This drives a feedback loop and enables governments to explore how they are operating and to assess whether they are efficient. They can also benchmark against other entities of the same size. What are they doing? Why are their costs lower than ours?

Unlike the private sector, the public sector actually encourages people to compare notes – not to compete, but to learn from each other. Take Pepsi and Coke: they are competitors and would never share the secret sauce. In government, that’s not the case. We want to talk to each other.


How do non-government organizations like OpenGov fit into the future of open data?

We offer the critical participation component. We build on the open-source, standards-based project CKAN (Comprehensive Knowledge Archive Network), which is the predominant solution for large governments. I say “large governments” because before OpenGov, you needed dedicated staff to stand up a CKAN-based open data portal. We are essentially productizing CKAN and including value-added applications that let municipal-level personnel – especially non-programmers – concentrate on the hard work of opening the data rather than running a data portal. Our standards-based solution allows for innovation and enables conversation among cities.
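A concrete illustration of what a standards-based portal buys you: every CKAN site exposes the same Action API under `/api/3/action/`, so identical client code works against any city’s portal. The sketch below builds an Action API URL and unwraps CKAN’s standard `{"success": …, "result": …}` response envelope. The portal address and dataset names are made-up placeholders, not a real deployment.

```python
import json
from urllib.parse import urljoin


def action_url(portal_base: str, action: str) -> str:
    """Build a CKAN Action API URL; CKAN serves all actions under /api/3/action/."""
    return urljoin(portal_base.rstrip("/") + "/", f"api/3/action/{action}")


def unwrap(response_text: str):
    """Parse a CKAN Action API response and return its 'result' payload.

    CKAN wraps every response in an envelope with a boolean 'success' flag.
    """
    payload = json.loads(response_text)
    if not payload.get("success"):
        raise RuntimeError(payload.get("error"))
    return payload["result"]


# Hypothetical portal URL, for illustration only:
print(action_url("https://data.example.gov", "package_list"))
# https://data.example.gov/api/3/action/package_list

# A sample envelope like CKAN's package_list action would return:
sample = '{"success": true, "result": ["street-trees", "311-requests"]}'
print(unwrap(sample))  # ['street-trees', '311-requests']
```

Because the URL scheme and envelope are part of the CKAN standard, the same two helpers work unchanged whether you point them at one city’s portal or another’s – which is the interoperability point made above.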

The good thing is that, in general, people now agree that governments should treat data as infrastructure. When municipal systems were first built, they weren’t built with open data in mind. They weren’t designed to have common linkages or data standards. But that transition is underway, and OpenGov is facilitating that transition.


Is it currently possible to measure or quantify tangible benefits of open data?

Right now the easiest way to measure open data’s return on investment is to calculate the cost of complying with Freedom of Information Act (FOIA) and public records requests. Those requests are often onerous, redundant, and expensive. By using open data, you can dramatically reduce your FOIA processing costs. That’s an easy one.

I always go back to prioritizing data and ensuring a linkage to the budget. If we operationalize data and correlate it to the budget, it will absolutely help cities achieve data-driven government. It’s not just a nicer way to understand the budget or get away from dealing with ERP reports. While there’s no easy measure of it, if you connect the data to high-value budget priorities, it leads to operational efficiencies.


Does the public have a responsibility in an open data “ecosystem”?

Yes. Part of the open data challenge is educating the public about that responsibility. For example, as Analyze Boston’s open data initiative enters its second phase, it includes a public education campaign. That is part of making data useful. If you think back to when 311 was introduced, it gave the public a way to interact with government through digital means. The first generation of open data was consumed mostly by software. But to make it both useful to and used by the common consumer, I think you have to correlate it to the budget. Then people will start to care.

What if, instead of giving people a piece of paper that shows how much they need to pay in property tax, we give them something that shows just what they are getting in return? That’s why I’m so excited about the potential of tying open data to budget data. We can quantify for people – at a neighborhood level – how much the city has allocated on their behalf for infrastructure and other services. By completing this picture, citizens will come to care about open data. Ultimately, this will help to restore lost trust in government.

Read Part II of the interview here.
