Kingsley Idehen's Typepad

6 Things That Must Remain Distinct re. Data

Conflation is the tech industry's equivalent of macroeconomic inflation. Whenever it rears its head, we lose value to diminished productivity.

Look retrospectively at any technology failure -- in an enterprise or the industry at large -- and you will eventually discover, at its core, messy conflation of at least one of the following:

  1. Data Model (Semantics)
  2. Data Object (Entity) Names (Identifiers)
  3. Data Representation Syntax (Markup)
  4. Data Access Protocol
  5. Data Presentation Syntax (Markup)
  6. Data Presentation Media

The Internet & World Wide Web (InterWeb) are massive successes because their respective architectural cores embody the critical separation outlined above.

The Web of Linked Data is going to become a global reality, and massive success, because it leverages inherently sound architecture -- bar conflationary distractions of RDF. :-)

Tags: rdf | linked_data | semanticweb

01:02 PM | Permalink | Comments (2) | TrackBack (0)

Virtuoso Linked Data Deployment In 3 Simple Steps

Injecting Linked Data into the Web has been a major pain point for those who seek personal, service, or organization-specific variants of DBpedia. Basically, the sequence goes something like this:

  1. You encounter DBpedia or the LOD Cloud Pictorial.
  2. You look around (typically following your nose from link to link).
  3. You attempt to publish your own stuff.
  4. You get stuck.

The problems typically take the following form:

  1. Confusion about the complementary Name and Address roles played by a single URI abstraction
  2. Terminology confusion due to conflation and over-loading of terms such as Resource, URL, Representation, Document, etc.
  3. Inability to find robust tools with which to generate Linked Data from existing data sources such as relational databases, CSV files, XML, Web Services, etc.

To start addressing these problems, here is a simple guide for generating and publishing Linked Data using Virtuoso.

Step 1 - RDF Data Generation

Existing RDF data can be added to the Virtuoso RDF Quad Store via a variety of built-in data loader utilities.

Many options allow you to easily and quickly generate RDF data from other data sources:

  • Install the Sponger Bookmarklet for the URIBurner service. Bind this to your own SPARQL-compliant backend RDF database (in this scenario, your local Virtuoso instance), and then Sponge some HTTP-accessible resources.
  • Convert relational DBMS data to RDF using the Virtuoso RDF Views Wizard.
  • Starting with CSV files, you can
    • Place them at an HTTP-accessible location and use the Virtuoso Sponger to convert them to RDF; or
    • Use the CSV import feature to import their content into Virtuoso's relational data engine, then use the built-in RDF Views Wizard as with other RDBMS data.
  • Starting from XML files, you can
    • Use Virtuoso's inbuilt XSLT processor for manual XML to RDF/XML transformation;
    • Leverage the Sponger Cartridge for GRDDL, if there is a transformation service associated with your XML data source; or
    • Let the Sponger analyze the XML data source and make a best-effort transformation to RDF.
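As an illustration of the kind of transformation these utilities automate, here is a minimal sketch -- not Virtuoso's actual loader; the base URI and the column-to-predicate mapping are assumptions for illustration -- that turns CSV rows into N-Triples-style EAV statements:

```python
import csv
import io

def csv_to_ntriples(csv_text, base_uri, key_column):
    """Emit one EAV triple per non-key cell: the row's key names the
    entity, the column header names the attribute, the cell is the value."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = f"<{base_uri}/{row[key_column]}>"
        for column, value in row.items():
            if column == key_column or not value:
                continue
            triples.append(f'{subject} <{base_uri}/schema#{column}> "{value}" .')
    return triples

sample = "id,name,city\n1,Alice,London\n2,Bob,Paris\n"
for triple in csv_to_ntriples(sample, "http://example.org/person", "id"):
    print(triple)
```

The real pipeline (Sponger or RDF Views) additionally maps columns to proper vocabulary terms and types the literals; the point here is only the row-to-triples shape of the conversion.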


Step 2 - Linked Data Deployment

Install the Faceted Browser VAD package (fct_dav.vad), which delivers the following:

  1. Faceted Browser Engine UI
  2. Dynamic Hypermedia Resource Generator, which
    • delivers descriptor resources for every entity (data object) in the Native or Virtual Quad Stores, and
    • supports a broad array of output formats, including HTML+RDFa, RDF/XML, N3/Turtle, NTriples, RDF-JSON, OData+Atom, and OData+JSON.

Step 3 - Linked Data Consumption & Exploitation

A few simple steps allow you, your enterprise, and your customers to consume and exploit your newly deployed Linked Data:

  1. Load a page like this in your browser: http://<cname>[:<port>]/describe/?uri=<entity-uri>
    • <cname>[:<port>] gets replaced by the host and port of your Virtuoso instance
    • <entity-uri> gets replaced by the URI you want to see described -- for instance, the URI of one of the resources you let the Sponger handle.
  2. Follow the links presented in the descriptor page.
  3. If you ever see a blank page with a hyperlinked subject name in the About: section at the top of the page, simply add the parameter "&sp=1" to the URL in the browser's Address box and hit [ENTER]. This will trigger "on the fly" resource retrieval, transformation, and descriptor-page generation.
  4. Use the navigator controls to page up and down the data associated with the "in scope" resource descriptor.
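The describe-page URL pattern in step 1 can be produced programmatically. A small sketch, assuming Virtuoso's conventional default HTTP port of 8890 (adjust host and port for your instance):

```python
from urllib.parse import quote

def describe_url(host, entity_uri, port=8890, sponge=False):
    """Build a Virtuoso /describe/ page URL for an entity URI.
    sponge=True appends &sp=1 to force on-the-fly retrieval and
    transformation of the resource."""
    url = f"http://{host}:{port}/describe/?uri={quote(entity_uri, safe='')}"
    return url + "&sp=1" if sponge else url

print(describe_url("localhost", "http://dbpedia.org/resource/Paris"))
# http://localhost:8890/describe/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FParis
```

Percent-encoding the entity URI matters: it contains "://" and "/" characters that would otherwise be misread as part of the describe page's own URL.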


Related

  • Sample Descriptor Page (what you see post completion of the steps in this post)
  • What is Linked Data, really?
  • Painless Linked Data Generation via URIBurner
  • How To Load RDF Data Into Virtuoso (various methods)
  • Virtuoso Bulk Loader Script for RDF
  • Bulk Loader Script for CSV
  • Wizard-based generation of RDF-based Linked Data from ODBC-accessible Relational Databases

Tags: webservices | atom | rdf | xml | xslt | odbc | sql | linked_data | semanticweb | sparql | howto | virtuoso | DataSpace

06:54 PM | Permalink | Comments (1) | TrackBack (0)


What is Linked Data, really?

Linked Data is simply hypermedia-based structured data.

Linked Data offers everyone a Web-scale, Enterprise-grade mechanism for platform-independent creation, curation, access, and integration of data.

The fundamental steps to creating Linked Data are as follows:

  1. Choose a Name Reference Mechanism — i.e., URIs.
  2. Choose a Data Model with which to Structure your Data — minimally, you need a model which clearly distinguishes:
    1. Subjects (also known as Entities),
    2. Subject Attributes (also known as Entity Attributes), and
    3. Attribute Values (also known as Subject Attribute Values or Entity Attribute Values).
  3. Choose one or more Data Representation Syntaxes (also called Markup Languages or Data Formats) to use when creating Resources with Content based on your chosen Data Model. Some Syntaxes in common use today are HTML+RDFa, N3, Turtle, RDF/XML, TriX, XRDS, GData, OData, OpenGraph, and many others.
  4. Choose a URI Scheme that facilitates binding Referenced Names to the Resources which will carry your Content -- your Structured Data.
  5. Create Structured Data by using your chosen Name Reference Mechanism, your chosen Data Model, and your chosen Data Representation Syntax, as follows:
    1. Identify Subject(s) using Resolvable URI(s).
    2. Identify Subject Attribute(s) using Resolvable URI(s).
    3. Assign Attribute Values to Subject Attributes. These Values may be either Literals (e.g., STRINGs, BLOBs) or Resolvable URIs.
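A minimal sketch of step 5, using hypothetical example.org subject URIs and the well-known FOAF vocabulary for attribute names:

```python
def turtle_statement(subject_uri, attribute_uri, value):
    """Serialize one Subject/Attribute/Value statement as Turtle.
    Values that look like URIs become resolvable links (<...>);
    everything else becomes a quoted literal."""
    is_uri = isinstance(value, str) and value.startswith(("http://", "https://"))
    obj = f"<{value}>" if is_uri else f'"{value}"'
    return f"<{subject_uri}> <{attribute_uri}> {obj} ."

# Subject and Attribute identified by resolvable URIs; one literal
# value, one URI value (the example.org URIs are illustrative).
print(turtle_statement("http://example.org/me",
                       "http://xmlns.com/foaf/0.1/name", "Kingsley"))
print(turtle_statement("http://example.org/me",
                       "http://xmlns.com/foaf/0.1/knows",
                       "http://example.org/you"))
```

Note how the choice in sub-step 3 -- Literal vs. Resolvable URI -- is exactly what makes the resulting data "linked": URI-valued attributes are follow-your-nose links to further descriptions.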

You can create Linked Data (hypermedia-based data representations) Resources from or for many things. Examples include: personal profiles, calendars, address books, blogs, photo albums; there are many, many more.

Related

  1. Linked Data an Introduction -- simple introduction to Linked Data and its virtues
  2. How Data Makes Corporations Dumb -- Jeff Jonas (IBM) interview
  3. Hypermedia Types -- evolving information portal covering different aspects of Hypermedia resource types
  4. URIBurner -- service that generates Linked Data from a plethora of heterogeneous data sources
  5. Linked Data Meme -- TimbL design issues note about Linked Data
  6. Data 3.0 Manifesto -- note about format agnostic Linked Data
  7. DBpedia -- large Linked Data Hub
  8. Linked Open Data Cloud -- collection of Linked Data Spaces
  9. Linked Open Commerce Cloud -- commerce (clicks & mortar and/or clicks & clicks) oriented Linked Data Space
  10. LOD Cloud Cache -- massive Linked Data Space hosting most of the LOD Cloud Datasets
  11. LOD2 Initiative -- EU Co-Funded Project to develop global knowledge space from LOD
Tags: gdata | rdf | xml | linked_data | semanticweb | DataSpace

07:10 PM | Permalink | Comments (4) | TrackBack (0)


Solving Real Problems by Leveraging Linked Data: Unambiguous & Verifiable Identity for HTTP Networks

Problem: Unambiguous Verifiable Network Identity.

How Does Linked Data Address This Problem? It provides critical infrastructure for the WebID Protocol that enables an innovative tweak of SSL/TLS.

What about OpenID? The WebID Protocol embraces and extends OpenID (in an open and positive way) via the WebID + OpenID Hybrid variant of the protocol. The basic effect is that OpenID calls are re-routed to the WebID layer, which simply removes Username- and Password-based Authentication from the authentication challenge interaction pattern.

WebID Components

  1. X.509 Certificate and Private Key Generator
  2. Structured Profile Document (e.g., a FOAF-based Profile) published to an HTTP Network (e.g., the World Wide Web) and accessible at an Address (URL)
  3. An Agent Identifier, aka WebID (an HTTP-scheme URI), that is the Subject of a Structured Profile Document (actually a Descriptor Resource)
  4. Mechanism for persisting Public Key data from the X.509 Certificate to the Structured Profile Document and associating it with the Subject WebID (e.g., SPARUL or other HTTP-based methods)
  5. Mechanism for de-referencing the Public Key data associated with a WebID (from its Structured Profile Document) for comparison against the Public Key data obtained from a successful standard SSL/TLS protocol handshake (e.g., via SPARQL Query).
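The verification core implied by components 4 and 5 can be sketched as follows. The network steps (TLS handshake, profile dereferencing, SPARQL query) are elided, and all identifiers and key material below are hypothetical:

```python
def verify_webid(cert_public_key, webid, profile_keys):
    """Core WebID check, network steps elided: the TLS handshake
    yields the client certificate's public key and its claimed WebID;
    the profile document behind that WebID is then dereferenced
    (e.g., via SPARQL) to obtain profile_keys, a mapping of
    WebID -> set of published public keys. The identity claim holds
    iff the profile lists the certificate's key for that WebID."""
    return cert_public_key in profile_keys.get(webid, set())

# Illustrative profile data only.
profile = {"http://example.org/people/alice#me": {"modulus+exponent-abc"}}
print(verify_webid("modulus+exponent-abc",
                   "http://example.org/people/alice#me", profile))  # True
print(verify_webid("some-other-key",
                   "http://example.org/people/alice#me", profile))  # False
```

The tweak relative to plain SSL/TLS is precisely this last comparison: the trust anchor is the key published at the WebID, not a certificate authority chain.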

Demo

  • WebID + OpenID Hybrid Protocol Demo using ODS, Stackoverflow.com, and identi.ca. - Youtube Screencast Demo

Related

  • Prior Posts about WebIDs
  • Draft WebID Spec
Tags: linked_data | semanticweb | foaf | sparql | howto | screencast | socialnetworking | ods | identity_20 | openid

11:25 PM | Permalink | Comments (9) | TrackBack (0)

Data 3.0 (a Manifesto for Platform Agnostic Structured Data) Update 5

After a long period of trying to demystify and unravel the wonders of standards compliant structured data access, combined with protocols (e.g., HTTP) that separate:

  1. Identity,
  2. Access,
  3. Storage,
  4. Representation, and
  5. Presentation.

I ended up with what I can best describe as the Data 3.0 Manifesto: a manifesto for standards-compliant access to structured data object (or entity) descriptors.

Some Related Work

Alex James (Program Manager, Entity Framework, at Microsoft) put together something quite similar to this via his Base4 blog, around the Web 2.0 bootstrap time; sadly -- quoting Alex -- that post has gone where discontinued blogs and their host platforms go (deep, deep irony here).

It's also important to note that this manifesto is a variant of TimBL's Linked Data meme (per his Design Issues note), but totally decoupled from RDF (the data representation aspect) and SPARQL, which -- in my world view -- remain implementation details.

Data 3.0 manifesto

  • An "Entity" is the "Referent" of an "Identifier."
  • An "Identifier" SHOULD provide a global, unambiguous, and unchanging (though it MAY be opaque!) "Name" for its "Referent".
  • A "Referent" MAY have many "Identifiers" (Names), but each "Identifier" MUST have only one "Referent".
  • Structured Entity Descriptions SHOULD be based on the Entity-Attribute-Value (EAV) Data Model, and SHOULD therefore take the form of one or more 3-tuples (triples), each composed of:

    • an "Identifier" that names an "Entity" (i.e., Entity Name),
    • an "Identifier" that names an "Attribute" (i.e., Attribute Name), and
    • an "Attribute Value", which may be an "Identifier" or a "Literal".

  • Structured Descriptions SHOULD be CARRIED by "Descriptor Documents" (i.e., purpose-specific documents where Entity Identifiers, Attribute Identifiers, and Attribute Values are clearly discernible by the document's intended consumers, e.g., humans or machines).
  • Structured Descriptor Documents can contain (carry) several Structured Entity Descriptions.
  • Structured Descriptor Documents SHOULD be network accessible via network addresses (e.g., HTTP URLs when dealing with HTTP-based Networks).
  • An Identifier SHOULD resolve (de-reference) to a Structured Representation of the Referent's Structured Description.
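Two of the manifesto's core rules -- descriptions as 3-tuples, and "each Identifier MUST have only one Referent" -- can be sketched as a small data structure. The class and method names are illustrative, not a prescribed API:

```python
from typing import NamedTuple

class Triple(NamedTuple):
    entity_name: str     # Identifier naming the Entity
    attribute_name: str  # Identifier naming the Attribute
    value: str           # Attribute Value: an Identifier or a Literal

class Descriptor:
    """Carries Structured Entity Descriptions as 3-tuples, and
    enforces 'each Identifier MUST have only one Referent'."""
    def __init__(self):
        self.referents = {}  # Identifier -> Referent
        self.triples = []

    def name(self, identifier, referent):
        existing = self.referents.setdefault(identifier, referent)
        if existing != referent:
            raise ValueError(f"{identifier!r} already names {existing!r}")

    def describe(self, entity, attribute, value):
        self.triples.append(Triple(entity, attribute, value))

d = Descriptor()
# A Referent MAY have many Identifiers (Names)...
d.name("http://example.org/id/paris", "the city of Paris")
d.name("http://example.org/id/lutetia", "the city of Paris")
# ...and descriptions are Entity-Attribute-Value 3-tuples.
d.describe("http://example.org/id/paris",
           "http://example.org/schema#country", "France")
```

Note the asymmetry the manifesto insists on: many names may converge on one referent, but re-binding an existing name to a different referent is an error.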

Related

  • Referent, Identifier, and Descriptor/Sense (The Data Perception Trinity) illustration
  • Referent, Identifier, and Descriptor/Sense Trinity (as exploited in FOAF+SSL based Secure WebIDs) illustration
  • Demystifying Linked Data via EAV Model based Structured Descriptions
  • What do people have against URIs and URLs?
  • The URI, URL, and Linked Data Meme's Generic HTTP URI
  • Simple Explanation of RDF and Linked Data Dynamics
  • Linked Data and Identity
  • FOAF+SSL FAQ
  • LOD Community Thread (showing evolution of this manifesto based on feedback from members such as Richard Cyganiak)
  • Googlebase Data API Docs
  • Google Data Protocol (GData)
  • Microsoft's OData Protocol
  • Magic of De-referencable Names and actual Data via Binky Video
  • Social Objects Presentation (aka Social Linked Data Objects) -- by Jyri Engeström
  • What's a Reference?
Tags: webservices | web2.0 | web20 | gdata | rdf | linked_data | semanticweb | foaf | sparql | socialnetworking | DataSpace

05:09 PM | Permalink | Comments (13) | TrackBack (0)

URIBurner: Painless Generation & Exploitation of Linked Data (Update 1 - Demo Links Added)

What is URIBurner?

A service from OpenLink Software, available at http://uriburner.com, that enables anyone to generate structured descriptions -- on the fly -- for resources that are already published to HTTP-based networks. These descriptions exist as hypermedia resource representations where links are used to identify:

  • the entity (data object or datum) being described,
  • each of its attributes, and
  • each of its attribute values (optionally).

The hypermedia resource representation outlined above is what is commonly known as an Entity-Attribute-Value (EAV) Graph. The use of generic HTTP scheme based Identifiers is what distinguishes this type of hypermedia resource from others.

Why is it Important?

The virtues (dual-pronged serendipitous discovery) of publishing HTTP-based Linked Data across public (World Wide Web) or private (Intranet and/or Extranet) networks are rapidly becoming clearer to everyone. That said, the nuance-laced nature of Linked Data publishing presents significant challenges to most. Thus, for Linked Data to really blossom, the publishing process needs to be simplified: "just click and go" (for human interaction), or REST-ful orchestration of HTTP CRUD (Create, Read, Update, Delete) operations between Client Applications and Linked Data Servers.


How Do I Use It?

In a similar vein to the role played by FeedBurner with regard to Atom and RSS feed generation during the early stages of the Blogosphere, URIBurner enables anyone to publish Linked Data-bearing hypermedia resources on an HTTP network. Thus, its usage covers two profiles: Content Publisher and Content Consumer.

Content Publisher

The steps that follow cover all you need to do:

  • place a <link> tag within your HTTP-based hypermedia resource (e.g., within the <head> section of an HTML document)
  • use a URL via the @href attribute value to identify the location of the structured description of your resource; in this case it takes the form: http://linkeddata.uriburner.com/about/id/{scheme-or-protocol}/{your-hostname-or-authority}/{your-local-resource}
  • for human visibility, you may consider associating a button (as you do with Atom and RSS) with the URL above.

That's it! The discoverability (SDQ) of your content has just multiplied significantly; its structured description is now part of the Linked Data Cloud, with a reference back to your site (which is now a bona fide HTTP-based Linked Data Space).

Examples

HTML+RDFa based representation of a structured resource description:

<link rel="describedby" title="Resource Description (HTML)" type="text/html" href="http://linkeddata.uriburner.com/about/id/http/example.org/xyz.html"/>

JSON based representation of a structured resource description:


<link rel="describedby" title="Resource Description (JSON)" type="application/json" href="http://linkeddata.uriburner.com/about/id/http/example.org/xyz.html"/>


N3 based representation of a structured resource description:

<link rel="describedby" title="Resource Description (N3)" type="text/n3" href="http://linkeddata.uriburner.com/about/id/http/example.org/xyz.html"/>

RDF/XML based representations of a structured resource description:

<link rel="describedby" title="Resource Description (RDF/XML)" type="application/rdf+xml" href="http://linkeddata.uriburner.com/about/id/http/example.org/xyz.html"/>

Content Consumer

As an end-user, obtaining a structured description of any resource published to an HTTP network boils down to the following steps:

  1. go to: http://uriburner.com
  2. drag the Page Metadata Bookmarklet link to your Browser's toolbar
  3. whenever you encounter a resource of interest (e.g., an HTML page), simply click on the Bookmarklet
  4. you will be presented with an HTML representation of a structured resource description (i.e., the identifier of the entity being described, its attributes, and its attribute values will be clearly presented).

Examples

  • Description of a Book culled from an Amazon web page
  • Description of a product offering culled from a BestBuy web page
  • Description of a product (a camera) culled from a CNET web page
  • Description of the same CNET product as an Offer on eBay (exposed by the description above via seeAlso property value).

If you are a developer, you can simply perform an HTTP operation request (from your development environment of choice) using any of the URL patterns presented below:

HTML:

  • curl -I -H "Accept: text/html" http://linkeddata.uriburner.com/about/id/{scheme}/{authority}/{local-path}

JSON:

  • curl -I -H "Accept: application/json" http://linkeddata.uriburner.com/about/id/{scheme}/{authority}/{local-path}
  • curl http://linkeddata.uriburner.com/about/data/json/{scheme}/{authority}/{local-path}

Notation 3 (N3):

  • curl -I -H "Accept: text/n3" http://linkeddata.uriburner.com/about/id/{scheme}/{authority}/{local-path}
  • curl http://linkeddata.uriburner.com/about/data/n3/{scheme}/{authority}/{local-path}

Turtle:

  • curl -I -H "Accept: text/turtle" http://linkeddata.uriburner.com/about/id/{scheme}/{authority}/{local-path}
  • curl http://linkeddata.uriburner.com/about/data/ttl/{scheme}/{authority}/{local-path}

RDF/XML:

  • curl -I -H "Accept: application/rdf+xml" http://linkeddata.uriburner.com/about/id/{scheme}/{authority}/{local-path}
  • curl http://linkeddata.uriburner.com/about/data/xml/{scheme}/{authority}/{local-path}
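The format-specific /about/data/ URLs follow a mechanical rewrite of the original resource URL. A small sketch of that rewrite -- the helper name and format-keyword map are assumptions inferred from the URL patterns shown above:

```python
# Format keywords inferred from the /about/data/ examples above.
FORMAT_PATHS = {"json": "json", "n3": "n3", "turtle": "ttl", "rdfxml": "xml"}

def uriburner_data_url(resource_url, fmt):
    """Rewrite an ordinary resource URL into URIBurner's
    /about/data/{format}/{scheme}/{authority}/{local-path} form."""
    scheme, rest = resource_url.split("://", 1)
    return (f"http://linkeddata.uriburner.com/about/data/"
            f"{FORMAT_PATHS[fmt]}/{scheme}/{rest}")

print(uriburner_data_url("http://example.org/xyz.html", "turtle"))
# http://linkeddata.uriburner.com/about/data/ttl/http/example.org/xyz.html
```

The /about/id/ pattern works the same way, but leaves format selection to the Accept header, as the curl -I examples show.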

Conclusion

URIBurner is a "deceptively simple" solution for cost-effective exploitation of HTTP based Linked Data meshes. It doesn't require any programming or customization en route to immediately realizing its virtues.

If you like what URIBurner offers, but prefer to leverage its capabilities within your own domain -- such that resource description URLs reside in your domain -- all you have to do is perform the following steps:

  1. download a copy of Virtuoso (for local desktop, workgroup, or data center installation), or instantiate Virtuoso via the Amazon EC2 Cloud
  2. enable the Sponger Middleware component via the RDF Mapper VAD package (which includes cartridges for over 30 different resource types)

When you install your own URIBurner instances, you also have the ability to perform customizations that increase resource description fidelity in line with your specific needs. All you need to do is develop a custom extractor cartridge and/or meta cartridge.

Related:

  • Virtuoso Sponger Middleware -- (technology behind URIBurner Service)
  • Animation demonstrating how the Virtuoso Sponger works.
Tags: atom | rdf | rss | xml | linked_data | semanticweb | openlink | virtuoso | DataSpace

12:52 PM | Permalink | Comments (2) | TrackBack (0)

Meshups Demonstrating How SPARQL-GEO Enhances Linked Data Exploitation

These are deceptively simple demonstrations of how Virtuoso's SPARQL-GEO extensions to SPARQL lay a critical foundation for Geospatial solutions that seek to leverage the burgeoning Web of Linked Data.

Setup Information

SPARQL Endpoint: Linked Open Data Cache (an 8.5-billion+ Quad Store that includes data from Geonames and the LinkedGeoData Project Data Sets).

Live Linked Data Meshup Links:

  • LinkedGeoData things within 2km ORDER BY Dist LIMIT 10 (use from an iPhone only, since it's an iPhone-oriented, Linked Data driven application)
  • LinkedGeoData things within 2km of Trafalgar Square | ORDER BY Distance - closest first | ORDER BY Distance - most distant first.

Related

  • Collection of Live Linked Data Demos
  • Virtuoso's SPARQL-GEO Extensions
Tags: linked_data | semanticweb | sparql | virtuoso | DataSpace

05:43 PM | Permalink | Comments (6) | TrackBack (0)

Revisiting HTTP based Linked Data

Motivation for this post arose from a series of Twitter exchanges between Tony Hirst and me, in relation to his blog post titled: So What Is It About Linked Data that Makes it Linked Data™?

At the end of the marathon session, it was clear to me that a blog post was required for future reference, at the very least :-)

What is Linked Data?

Linked Data is a "Data Access by Reference" mechanism for Data Objects (or Entities) on HTTP networks. It enables you to Identify a Data Object and Access its structured Data Representation via a single Generic HTTP scheme based Identifier (an HTTP URI). Data Object representation formats may vary; but in all cases, they are hypermedia oriented, fully structured, and negotiable within the context of a client-server message exchange.

Why is it Important?

Information makes the world tick!

Information doesn't exist without data to contextualize it.

Information is inaccessible without a projection (presentation) medium.

All information (without exception, when produced by humans) is subjective. Thus, to truly maximize the innate heterogeneity of collective human intelligence, loose coupling of our information and associated data sources is imperative.

How is Linked Data Delivered?

Linked Data is exposed to HTTP networks (e.g. World Wide Web) via hypermedia resources bearing structured representations of data object descriptions. Remember, you have a single Identifier abstraction (generic HTTP URI) that embodies: Data Object Name and Data Representation Location (aka URL).

How are Linked Data Object Representations Structured?

A structured representation of data exists when an Entity (Datum), its Attributes, and its Attribute Values are clearly discernible. In the case of a Linked Data Object, structured descriptions take the form of a hypermedia-based Entity-Attribute-Value (EAV) graph, where each Entity, each of its Attributes, and (optionally) each of its Attribute Values is identified using a Generic HTTP URI.

Examples of structured data representation formats (content types) associated with Linked Data Objects include:

  • text/html
  • text/turtle
  • text/n3
  • application/json
  • application/rdf+xml
  • Others
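These content types are "negotiable within the context of a client-server message exchange", as noted above: the server matches the client's Accept header against the representations it holds. A toy sketch of that negotiation (ignoring q-value ordering and wildcards; the sample representations are illustrative):

```python
def negotiate(accept_header, representations):
    """Toy HTTP content negotiation: return the first content type
    listed in the client's Accept header for which the server holds
    a representation (q-values and wildcards are ignored)."""
    for part in accept_header.split(","):
        wanted = part.split(";")[0].strip()
        if wanted in representations:
            return wanted, representations[wanted]
    return None, None

# Two illustrative representations of one Linked Data Object.
reps = {"text/turtle": "<s> <p> <o> .",
        "application/json": '{"s": {"p": "o"}}'}
print(negotiate("text/n3, text/turtle;q=0.9", reps)[0])  # text/turtle
```

A real server also honors q-value preferences and falls back to 406 Not Acceptable; the key idea is one Identifier, many negotiated representations.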

How Do I Create Linked Data oriented Hypermedia Resources?

You mark up resources by expressing distinct entity-attribute-value statements (basically, these are 3-tuple records) using a variety of notations:

  • (X)HTML+RDFa,
  • JSON,
  • Turtle,
  • N3,
  • TriX,
  • TriG,
  • RDF/XML, and
  • Others (for instance, you can use Atom data format extensions to model an EAV graph, as per the OData initiative from Microsoft).

You can achieve this task using any of the following approaches:

  • Notepad
  • WYSIWYG Editor
  • Transformation of Database Records via Middleware
  • Transformation of XML based Web Services output via Middleware
  • Transformation of other Hypermedia Resources via Middleware
  • Transformation of non Hypermedia Resources via Middleware
  • Use a platform that delivers all of the above.

Practical Examples of What Linked Data Objects Enable

  • Describe Who You Are, What You Offer, and What You Need via your structured profile, then leave your HTTP network to perform the REST (serendipitous discovery of relevant things)
  • Identify (via map overlay) all items of interest within a 2km+ radius of your current location (this could include vendor offerings or services sought by existing or future customers)
  • Share the latest and greatest family photos with family members *only*, without forcing them to sign up for Yet Another Web 2.0 service or Social Network
  • Avoid repetitive signup and username-and-password login sequences per Web 2.0 or Mobile Application combo
  • Go beyond imprecise Keyword Search to the new frontier of Precision Find -- for example, find Data Objects associated with the keyword "Tiger" while enabling the seeker to disambiguate across the "Who", "What", "Where", and "When" dimensions (with negation capability)
  • Determine how two Data Objects are Connected -- person to person, person to subject matter, etc. (LinkedIn outside the walled garden)
  • Use any resource address (e.g., a blog or bookmark URL) as the conduit into a Data Object mesh that exposes all associated Entities and their social network relationships
  • Apply the patterns (social dimensions) above to traditional enterprise data sources, optionally in combination with external data, without compromising security, etc.

How Do OpenLink Software Products Enable Linked Data Exploitation?

Our data access middleware heritage (which spans 16+ years) has enabled us to assemble a rich portfolio of coherently integrated products that enable cost-effective evaluation and utilization of Linked Data, without writing a single line of code or being exposed to hidden but extensive admin and configuration costs. Post installation, the benefits of Linked Data simply materialize (along the lines described above).

Our main Linked Data oriented products include:

  • OpenLink Data Explorer -- visualizes Linked Data or Linked Data transformed "on the fly" from hypermedia and non hypermedia data sources
  • URIBurner -- a "deceptively simple" solution that enables the generation of Linked Data "on the fly" from a broad collection of data sources and resource types
  • OpenLink Data Spaces -- a platform for enterprises and individuals that enhances distributed collaboration via Linked Data driven virtualization of data across its native and/or 3rd party content manager for: Blogs, Wikis, Shared Bookmarks, Discussion Forums, Social Networks etc
  • OpenLink Virtuoso -- a secure and high-performance native hybrid data server (Relational, RDF-Graph, Document models) that includes in-built Linked Data transformation middleware (aka. Sponger).

Related

  • Hypertext Transfer Protocol 1.1 RFC
  • Open Data Protocol Glossary
  • Simple Explanation of RDF and Linked Data Dynamics
  • Collection of posts from the past about Linked Data
  • Are We There Yet Re. Web++? -- includes link to podcast conversation with Jon Udell
  • Web of Linked Data Pivoting Demo from TED -- by Microsoft's Gary Flake
  • Microsoft Pivot atop Virtuoso Quad Store's Faceted Browser Engine -- my demonstration of the EAV model transcending data representation variations (i.e., RDF's EAV-model data served up in Microsoft's CXML data representation format).
Tags: webservices | web2.0 | web20 | atom | rdf | xml | linked_data | semanticweb | virtuoso | ods | openlink | DataSpace

10:16 AM | Permalink | Comments (0) | TrackBack (0)
