information analyst
KM, EDM/EDMS, workflow, collaboration
Rescooped by michel verstrepen from Robótica Educativa!

Operational Semantics - From Text Mining to Triplestores – The Full Semantic Circle

In the not-too-distant past, analysts were all searching for a “360-degree view” of their data. Most of the time this phrase referred to integrated RDBMS data, analytics interfaces and customers. But with the onslaught…

Via Tony Agresta, Edward Chenard, juandoming
Tony Agresta's curator insight, February 15, 2015 6:22 PM

Semantic pipelines allow for the identification, extraction, classification and storage of semantic knowledge, creating a knowledge base of all your data. Most organizations have struggled to create these pipelines, primarily because the plumbing hasn't existed. But now it does.


This post discusses how free-flowing text streams into graph databases using concept extraction processes. A well-coordinated feed of data is written to the underlying graph database, while updates are tracked continuously to ensure database integrity.
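To make that flow concrete, here is a minimal sketch of the text-to-triples step, assuming a stand-in extract_concepts() function and an example.org namespace (neither is part of the pipeline described above); rdflib plays the role of the triplestore client.

```python
# Minimal sketch: turn extracted concepts into RDF triples.
# extract_concepts() is a hypothetical stand-in for a real
# concept-extraction service; EX is an illustrative namespace.
from rdflib import RDF, Graph, Literal, Namespace

EX = Namespace("http://example.org/")

def extract_concepts(text):
    # A real pipeline would mine `text`; this stub returns fixed pairs.
    return [("Ontotext", "Organization"), ("GraphDB", "Product")]

g = Graph()
for name, kind in extract_concepts("Ontotext builds GraphDB."):
    node = EX[name]
    g.add((node, RDF.type, EX[kind]))
    g.add((node, EX.label, Literal(name)))

print(g.serialize(format="turtle"))  # the statements fed to the graph database
```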


Other important pipeline plumbing includes tools for disambiguation (used to resolve which entity a mention in the text refers to), classification of the entities, structuring relationships between entities and determining sentiment.


Organizations that deploy well-functioning semantic pipelines have an advantage over their competitors. They have instant access to a complete knowledge base of their data. Research functions spend less time searching and more time analyzing. Alerting notifies critical business functions to take immediate action. Service levels improve thanks to accurate, well-structured responses. Sentiment is detected, allowing more time to react to changing market conditions.



In general, the REST Client API calls out to a GATE-based annotation pipeline and sends back enriched data in RDF form. Organizations typically customize these pipelines, which can consist of any GATE-developed set of text mining algorithms for scoring, machine learning, disambiguation or any of the wide range of other text mining techniques.
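As a rough sketch of such a call, the snippet below posts plain text to an annotation endpoint and asks for RDF back; the URL and media types are illustrative assumptions, not the documented API.

```python
# Hypothetical sketch: send raw text to an annotation pipeline, get RDF back.
# The endpoint URL is an assumption made for illustration only.
import requests

resp = requests.post(
    "http://localhost:8080/extractor/extract",  # hypothetical service URL
    data="Ontotext announced a new GraphDB release.".encode("utf-8"),
    headers={
        "Content-Type": "text/plain",
        "Accept": "text/turtle",  # request the enriched data as RDF
    },
)
resp.raise_for_status()
print(resp.text)  # RDF statements ready to load into the triplestore
```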

It is important to note that these text mining pipelines create RDF in a linear fashion and feed GraphDB™. Once the RDF is enriched in this fashion and stored in the database, these annotations can then be modified, edited or removed. This is particularly useful when integrating with Linked Open Data (LOD) sources. Updates to the database are populated automatically when the source information changes.

For example, let’s say your text mining pipeline is referencing Freebase as its Linked Open Data source for organization names. If an organization name changes or a new subsidiary is announced in Freebase, this information will be updated as referenceable metadata in GraphDB™.
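To picture how such a change might propagate, the sketch below applies an organization rename as a SPARQL update against a GraphDB repository; the repository name and the ex: vocabulary are invented for illustration, and the /repositories/&lt;id&gt;/statements endpoint is assumed to follow the standard RDF4J protocol.

```python
# Hypothetical sketch: propagate an organization rename as a SPARQL update.
# The repository name and ex: vocabulary are illustrative assumptions.
import requests

update = """
PREFIX ex: <http://example.org/>
DELETE { ex:Acme ex:name "Acme Corp." }
INSERT { ex:Acme ex:name "Acme Corporation" }
WHERE  { ex:Acme ex:name "Acme Corp." }
"""

resp = requests.post(
    "http://localhost:7200/repositories/news/statements",  # assumed endpoint
    data=update.encode("utf-8"),
    headers={"Content-Type": "application/sparql-update"},
)
resp.raise_for_status()
```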

In addition, this tightly-coupled integration includes a suite of enterprise-grade APIs, the core of which is the Concept Extraction API. This API consists of a Coordinator and an Entity Update Feed. Here’s what they do:

  • The Concept Extraction API Coordinator module accepts annotation requests and dispatches them to a group of Concept Extraction Workers. The Coordinator communicates with GraphDB™ to track changes leading to updates in each worker’s entity extractor. The API Coordinator acts as a traffic cop, allowing approved, unique entities to be inserted into GraphDB™ while preventing duplicates from taking up valuable real estate.
  • The Entity Update Feed (EUF) plugin tracks and reports on every entity (concept) within the database that has been modified in any way (added, removed or edited). This information is stored in the graph database and queryable via SPARQL, so reports can be run notifying a user of any and all changes; a query sketch follows this list.
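As a rough idea of what such a report query could look like, the sketch below lists recent entity updates over SPARQL; the euf: predicate names are invented stand-ins, since the plugin's actual vocabulary isn't shown here.

```python
# Hypothetical sketch: query entity-update records via SPARQL.
# The euf: predicates are invented stand-ins for the plugin's vocabulary.
import requests

query = """
PREFIX euf: <http://example.org/entity-update-feed/>
SELECT ?entity ?action
WHERE { ?update euf:entity ?entity ; euf:action ?action . }
LIMIT 10
"""

resp = requests.get(
    "http://localhost:7200/repositories/news",  # assumed repository endpoint
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["entity"]["value"], row["action"]["value"])
```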


Other APIs include Document Classification, Disambiguation, Machine Learning, Sentiment Analysis and Relation Extraction. Together, this set of technologies allows for tight integration and accurate processing of text while efficiently storing the resulting RDF statements in GraphDB™.

As mentioned, the value of this tightly-coupled integration is in the rich metadata and relationships that can now be derived from the underlying RDF database. It’s this metadata that powers high-performance search, discovery and website applications – results are complete, accurate and instantaneous.

- See more at: http://www.ontotext.com/text-mining-triplestores-full-semantic-circle/

Rescooped by michel verstrepen from Web 3.0

W3C Tutorial on Semantic Web and Linked Data at WWW 2013

An introduction to the Semantic Web and Linked Data, or how to link data and schemas on the web. A W3C tutorial by Fabien Gandon, http://fabien.info, @fabien_gandon, and Ivan Herman.

Via Pierre Tran
Rescooped by michel verstrepen from Web 3.0

The 4 revolutions of the web


The 4 great Copernican revolutions of the web:

1. The continental drift of documents in reverse (the progressive formation of a Pangaea of indexability) - 2. linguistic capitalism (Frédéric Kaplan) - 3. the shift of the web's axis of rotation...


Via Pierre Tran
Rescooped by michel verstrepen from Web 3.0

VIZBOARD - Context-aware Visualization of Semantic Web Data


Understanding and interpreting Semantic Web data is almost impossible for lay users, as skills in Semantic Web technologies are required. Thus, information visualization (InfoVis) of this data has become a key enabler to address this problem. However, convenient solutions are missing, as existing tools either do not support Semantic Web data or require users to have programming and visualization knowledge. We propose a novel approach towards a generic InfoVis workbench called VizBoard, which enables users to visualize arbitrary Semantic Web data without expert skills in Semantic Web technologies, programming, and visualization.


Via Pierre Tran
Rescooped by michel verstrepen from apps for libraries

Web 3.0 – Artificial Intelligence?


Representing the Digital Enterprise Research Institute (DERI), Liam Ó Móráin recently discussed DERI's work on the Semantic Web and the move from Web 2.0 to Web 3.0.

Liam explains how the evolution from Web 2.0 to Web 3.0 (also known as the Semantic Web) is taking the web experience to the user in a new and more powerful way. Web 3.0 will quickly and easily combine information from very diverse sources and serve it to the user, based on intelligent browsing.


Via Pierre Tran, aikker