Part II – The Open Data Future: Interview with Joel Natividad

March 29, 2017 – Autumn Carter


Editor’s Note: On the eve of the one-year anniversary of OpenGov acquiring open data leader Ontodia, we spoke with Ontodia’s Founder and OpenGov Director of Open Data Joel Natividad about the future of open data. Joel is a member of the CKAN Association’s Steering Group, a true innovator, and an influential thought leader in the field. Here, he discusses open data’s meaning, relevance, and the path toward greater usability.

Part I of our interview explored the concept of open data and its future, asking “What is open data?” Part II, below, continues the conversation by exploring open data’s future use and implementation.


PART II: What is the Next Generation of Open Data?

Who out there is leading the pack with their open data initiative?

No single entity comes right to mind as the “poster child” for open data. There are glimmers in smaller jurisdictions that don’t have to deal with all the “legacy issues” that can be hurdles to innovation. Places in Eastern Europe are doing open data well; Moldova, for instance, has been celebrated internationally as an exemplar.

Are there hurdles to effective implementation that are causing municipalities to miss the mark?

When open data first emerged, the premise was that if you simply published the data, innovation would follow. That didn’t happen. Because those expectations were not met, there has been some disillusionment among early adopters. There’s a lot of talk about how hackathons haven’t delivered on the promise of open data, so many cities and jurisdictions are now adopting a curated “challenge format” instead of the weekend hackathon approach that was popular at first.

It is more useful to define a particular problem instead of just saying, “Here’s some data. Go do something with it.” It’s a balancing act. Some jurisdictions have just let their data initiatives and portals go, so you’ll see stale data sets. It just becomes a box to check, and afterwards, since the data is not being utilized in an operational way, it just dies on the vine.

We have to overcome some of that disillusionment, and that’s why OpenGov’s approach is very relevant. At the end of the day, the most important data set is the budget. The budget drives everything else. It funds the services that generate all this open data.

Ontodia was actually a direct product of this “data challenge” format, as we were born at NYC BigApps, the largest and longest-running open data challenge in the world. After we won back in 2011, New York City went out of its way to help and mentor us. We were hosted at the NYU Varick Incubator. The city connected us to mentors inside and outside government, and our first commercial contract was with NYC’s Department of Education. With their coaching, we even received mentoring from NYU-CUSP and won a research contract with DARPA. Without this support and the great civic tech ecosystem here in NYC, including BetaNYC, Civic Hall, and others, Ontodia wouldn’t have survived for five years as a bootstrapped start-up before we joined OpenGov.

So are there initiatives out there now that can provide a glimpse into the future of open data?

A transition is underway. Analyze Boston is a great concrete example of that.

Analyze Boston originated as a response to the Knight Foundation’s library challenge, which sought to examine the role of libraries and curators in the digital age. Instead of curating knowledge on dead trees, how can librarians also curate and catalog the data governments produce? Analyze Boston seeks to catalog not just public data, but also the insights that accompany it.

This is a perfect example of operationalizing data. Boston rebranded its open data portal as “Analyze Boston.” That is itself a call to action, to analyze. With this clean, curated data, what insights can we gain? Not only should we catalog the data, but we should also catalog the insights. The project is agile, in that there have been rounds of improvements after internal beta launches, and it is moving toward a full public launch this spring. That will generate additional feedback for improvement.
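
For readers who want a concrete feel for what “cataloging the data” means in practice, the sketch below queries a CKAN-style catalog from Python. It is a minimal example, assuming Analyze Boston exposes the stock CKAN Action API at data.boston.gov; any CKAN-based portal address can be swapped in.

```python
# A minimal sketch of searching a CKAN catalog through its standard Action
# API, using only the Python standard library. The base URL assumes Analyze
# Boston exposes the stock CKAN endpoint at data.boston.gov; substitute any
# CKAN-based portal.
import json
import urllib.parse
import urllib.request

BASE = "https://data.boston.gov/api/3/action"  # assumed CKAN endpoint

def package_search(query: str, rows: int = 5) -> list:
    """Return metadata for datasets matching a free-text query."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    with urllib.request.urlopen(f"{BASE}/package_search?{params}") as resp:
        payload = json.load(resp)
    return payload["result"]["results"]

if __name__ == "__main__":
    for dataset in package_search("permits"):
        print(dataset["title"])
```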

The emergence of Chief Data Officers (CDOs) in cities is another real trend that provides a glimpse of the future of open data. It shows that data is beginning to be treated as a real infrastructure asset.

What about non-technical local government management? For instance, how could a Finance Director leverage and operationalize open data?

It’s good to remember that when the computer revolution first started, IT came up as a function of the Finance Director. IT staff originally reported to the CFO because the first automated systems were accounting systems. Over time, IT became its own function. I think we’re going through a similar arc now with open data, but in reverse. Currently, data and IT are generally understood to be separate functions, but we will probably see some convergence.

At the ground level, we need to prioritize the kinds of data we gather so that they directly relate to the budget. Property taxes, parcel data, permits – those are examples of high-value data, and they have finance as well as classical open data components. We need to prioritize data sets that share a common financial lineage with the budget. Doing so makes it easier for a Finance Director to answer questions like “Why do I care about open data?”

Some consider open data a “feel-good” type of affair, but once it’s operationalized it goes beyond meeting a transparency responsibility. We need to quantify non-financial data and link it with the financial data so that everybody cares. Everybody can start measuring not just the numbers, but how the budget supports the services the data describes.

There have been critiques of open data portals. For example, some think they are unengaging, difficult to navigate, or too static. What’s your perspective on those critiques?

A lot of the critiques are valid, to be candid, because we are still in the early days of open data. There was recently an open letter to the open data community from Chief Data Officers about what is still lacking when it comes to treating open data as infrastructure.

There are still gaps, but the great thing about the way we’re working now is that we’re working with the community in a standards-based, open-source context to address those gaps. It’s not just OpenGov building and addressing them. It’s a wide community of innovators and governments working together. For example, the city of Karlsruhe in Germany built a better search engine for its open data. They built it for their own purposes, but contributed it to the wider open-source CKAN community. We can take that from Karlsruhe and apply it in Boston.

This innovation ecosystem is what is exciting about the CKAN community. And it allows cities to focus on collaboration and better performance without needing to worry about running or administering the platform.
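
A shared platform also means shared access patterns. As a companion to the catalog sketch above, here is a hedged example of pulling actual rows from a published resource through CKAN’s DataStore API; the resource ID shown is a hypothetical placeholder.

```python
# A hedged sketch of reading rows from a tabular resource via CKAN's
# DataStore API. RESOURCE_ID is a hypothetical placeholder; real IDs come
# from the "resources" list of each dataset returned by package_search,
# and only resources loaded into the DataStore will respond.
import json
import urllib.parse
import urllib.request

BASE = "https://data.boston.gov/api/3/action"  # assumed CKAN endpoint
RESOURCE_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical

def datastore_search(resource_id: str, query: str, limit: int = 10) -> list:
    """Full-text search over the rows of one published resource."""
    params = urllib.parse.urlencode(
        {"resource_id": resource_id, "q": query, "limit": limit}
    )
    with urllib.request.urlopen(f"{BASE}/datastore_search?{params}") as resp:
        return json.load(resp)["result"]["records"]

if __name__ == "__main__":
    for row in datastore_search(RESOURCE_ID, "electrical"):
        print(row)
```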

Can you discuss how open data technologies can be useful, usable, and used?

The key to making open data useful and used is adopting the same techniques that made other technologies successful. I often compare it to the early days of the internet in the mid-90s. At the time, connecting to the internet was something only hackers did. Then AOL came along and made it easy for anybody with a CD-ROM drive to get access. Even so, it was still somewhat of a walled garden that AOL curated. But it achieved the purpose of exposing non-technical people to the potential of the internet.

I think the first generation of open data is just like that. Many jurisdictions have tested the waters of open data using proprietary technologies. They concentrated more on ease of use and integration with their back-end systems than on treating data as an infrastructure component or strategic asset. They’ve built the walled gardens.

We’re adopting the same arc to ease into open data, and now people are starting to better understand its potential.

Read Part I of the interview here.