The LOD Laundromat provides access to all Linked Open Data (LOD) in the world. It does this by crawling the LOD cloud and converting all its contents into a standards-compliant format (gzipped N-Triples), removing data stains such as syntax errors, duplicates, and blank nodes.
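The cleaning steps described above can be sketched in a few lines. This is a minimal illustration, not the LOD Laundromat's actual implementation: it assumes a simplified line-based N-Triples check, where the real tool uses a full standards-compliant parser.

```python
import re

# Crude per-line N-Triples shape check (illustration only; the real
# LOD Laundromat uses a proper parser, not a regular expression).
TRIPLE_RE = re.compile(r'^\S+ \S+ .+ \.$')

def clean_ntriples(lines):
    """Drop syntactically malformed lines, blank-node triples, and duplicates."""
    seen = set()
    for line in lines:
        line = line.strip()
        if not TRIPLE_RE.match(line):
            continue                      # syntax error: skip the line
        if line.startswith('_:') or ' _:' in line:
            continue                      # blank node in subject or object
        if line in seen:
            continue                      # exact duplicate
        seen.add(line)
        yield line

dirty = [
    '<http://ex.org/a> <http://ex.org/p> "v" .',
    '<http://ex.org/a> <http://ex.org/p> "v" .',   # duplicate
    '_:b0 <http://ex.org/p> <http://ex.org/a> .',  # blank-node subject
    'not a triple',                                # syntax error
]
cleaned = list(clean_ntriples(dirty))
# Only the first, well-formed triple survives.
```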
A bash interface to the LOD Laundromat, enabling large-scale reproducible Web science research on the Web of Data: the LOD Lab.
Linkitup taps into your favourite research data repository and enriches your datasets with links to vocabularies, registries, and the Linked Data cloud.
YASGUI is a feature-packed, lightweight, web-based SPARQL client interface.
Virgil is a step-by-step wizard for computing disproportionality measurements for adverse drug events. It uses the open data from the FDA Adverse Event Reporting System (FAERS).
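One widely used disproportionality measure is the Proportional Reporting Ratio (PRR); whether Virgil computes exactly this measure is an assumption here, but it illustrates the kind of calculation involved. The counts below are invented for illustration, not real FAERS data.

```python
# 2x2 contingency table over adverse-event reports:
#   a = reports with the drug AND the event
#   b = reports with the drug, without the event
#   c = reports without the drug, with the event
#   d = reports with neither
def prr(a, b, c, d):
    """Proportional Reporting Ratio: (a / (a + b)) / (c / (c + d))."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts for illustration only.
value = prr(a=20, b=80, c=10, d=890)
# Event rate with the drug: 20/100 = 0.20; without: 10/900 ~ 0.011,
# so the event is reported roughly 18 times as often with the drug.
```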
PLSheet is an SWI-Prolog-based spreadsheet dependency analysis toolkit and converter to RDF.
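A dependency-to-RDF conversion of this kind might look as follows. This is a sketch under a hypothetical vocabulary (`dependsOn` and the base URI are invented for illustration; PLSheet's actual output will differ), emitting one triple per cell a formula references.

```python
import re

# Matches simple A1-style cell references in a formula string.
CELL_REF = re.compile(r'\b([A-Z]+[0-9]+)\b')

def dependencies_to_ntriples(formulas, base='http://example.org/sheet#'):
    """Map {cell: formula} to N-Triples linking each cell to its inputs.

    The base URI and the dependsOn predicate are hypothetical."""
    triples = []
    for cell, formula in sorted(formulas.items()):
        for ref in CELL_REF.findall(formula):
            triples.append(f'<{base}{cell}> <{base}dependsOn> <{base}{ref}> .')
    return triples

triples = dependencies_to_ntriples({'C1': '=A1+B1', 'D1': '=C1*2'})
# C1 depends on A1 and B1; D1 depends on C1 (three triples in total).
```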
RecoPROV reconstructs provenance information from collections of documents.
The brwsr tool is a lightweight, configurable browser for Linked Data, akin to Pubby.
Data2Semantics aims to provide essential semantic infrastructure for bringing e-Science to the next level.
A core task for scientific publishers is to speed up scientific progress by improving the availability of scientific knowledge. This holds both for the dissemination of results through traditional publications and for the publication of scientific data. The Data2Semantics project focuses on a key problem for data management in e-Science: data is notoriously hard to share, find, access, interpret and reuse.
Data2Semantics is a collaboration between the VU University Amsterdam, the University of Amsterdam, Data Archiving and Networked Services (DANS) of the KNAW, Elsevier Publishing and Synerscope, and is funded under the COMMIT programme of the NL Agency of the Dutch Ministry of Economic Affairs, Agriculture and Innovation.
The discovery of new knowledge is the heart of scientific progress; the generation, support and maintenance of knowledge form the foundation of the scientific endeavour. e-Science is ultimately about discovering and sharing knowledge in the form of experimental data, theory-rich vocabularies, publications and re-usable services that are meaningful to the working scientist.
The complexity and abundance of data resources in an e-Science environment require support for knowledge and metadata management: data is notoriously hard to share, find, access, interpret and reuse. This project targets scientific data publishers as primary facilitators of the e-Science process.
Scientists need tools to better understand the complexity characteristics of their data and its ability to answer scientific questions. They must be able to equip data with meaning and to generate a surrounding semantic context in which data can be meaningfully interpreted. Scientists must be given the means to make their data speak for itself, to move from data to semantics.
The targets of this project are to:
To meet these targets, this project will develop a semantic infrastructure for data publishers, with facilities for finding, generating, tracing and interpreting scientific knowledge. This requires fundamental research on data complexity, knowledge acquisition, knowledge systems and services, and the development of a powerful set of tools that support individual steps in the e-Science lifecycle.
We will collaborate closely with four COMMIT partner projects (P6, P12, P20 and P26), and develop against four use case domains: