The project’s third User Board meeting aimed to discuss the progress of the pilots’ rollout and to review the COMSODE deliverables focused on methodologies and the Open Data Node software. The meeting took place on May 28-29, 2015 in Vienna City Hall.
Several dimensions support the assessment and benchmarking of the social value of open data initiatives. We propose a methodology that compares and evaluates the social value of open data along a spectrum of measures ranging from intensional completeness to subjective meaning. We first suggest that open data made available online by an organization can be modelled, as a uniform construct, in terms of the corresponding integrated conceptual schema. A global schema is then created from the integrated schemas, and both intensional and extensional social value of the data can be defined over such conceptual schemas.
On April 29th 2015, Open Data Node 1.0 was released. So now I’m going to describe what this release actually does, compared to what it is supposed to do (as described almost a year ago in my initial blog post: Open Data Node – what it is, what it does, what is next).
UnifiedViews is an Extract-Transform-Load (ETL) framework that allows users – publishers, consumers, or analysts – to define, execute, monitor, debug, schedule, and share RDF data processing tasks. UnifiedViews is one of the core components of Open Data Node, a publication platform for open data.
The data processing tasks may use custom plugins created by users. UnifiedViews differs from other ETL frameworks by natively supporting RDF data and ontologies, and it provides a graphical user interface for the administration, debugging, and monitoring of the ETL process. In this blog post, we focus on the new features of UnifiedViews 2.0, which was released on April 2, 2015; please see the website unifiedviews.eu for the documentation of UnifiedViews, for information on how UnifiedViews may be obtained, and to learn about the community around it.
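To make the ETL idea concrete, here is a minimal sketch of an RDF-style extract-transform-load task. This is not UnifiedViews code (UnifiedViews is a Java framework with pluggable data processing units); the namespace, input rows, and stage functions below are purely illustrative assumptions, with triples modelled as plain Python tuples.

```python
# Illustrative sketch of an RDF-flavoured ETL task.
# Assumptions: triples are (subject, predicate, object) tuples,
# and "http://example.org/dataset/" is a hypothetical namespace.

def extract(csv_rows):
    """Extract: turn raw (id, name) rows into RDF-like triples."""
    base = "http://example.org/dataset/"  # hypothetical namespace
    return [(base + row_id, "rdfs:label", name) for row_id, name in csv_rows]

def transform(triples):
    """Transform: normalize literal values (trim stray whitespace)."""
    return [(s, p, o.strip()) for s, p, o in triples]

def load(triples, store):
    """Load: write the triples into a destination store (here, a set)."""
    store.update(triples)
    return store

# Running the pipeline end to end on two sample rows:
store = set()
rows = [("d1", "  Budget 2015 "), ("d2", "Transport stops")]
load(transform(extract(rows)), store)
```

In a real pipeline, each stage would be a separate, reusable plugin, and the store would be an RDF triple store rather than an in-memory set; the point here is only the staged, composable structure of an ETL task.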
While there are many interesting technical topics – e.g. producing data linked to other data, automating dataset transformations, data quality assessment, data enrichment – the crucial point is certainly to make the content of published datasets easily accessible to users.
In current practice there are two distinct methods of facilitating access to the data: