Natural Language Processing in RDF graphs

This blog shows how to store text data in an RDF graph and how to retrieve and analyze information from that graph. Resource Description Framework (RDF) graphs are very suitable structures for storing Natural Language Processing (NLP) data. They enable combining NLP data with other data sets in RDF (such as legal entity data from the Global LEI Foundation and the EIOPA register of European insurance undertakings, terminology data, for example Solvency 2 terminology, and data from XBRL reports); and they allow adding text semantics in the form of linguistic annotations, which enables NLP analyses simply by executing database queries.

Here is what I did. To get a proper amount of text data I web-scraped the entire website of De Nederlandsche Bank (text in web pages and in PDF documents, including speeches, press releases, research publications, sector information, DNBulletins, and all blogs by Maarten Gelderman and Olaf Sleijpen, over 4.000 documents in total). Text extraction from the web pages was done with the Python package newspaper3k (a great tip from my NLP colleagues at the Authority for Consumers and Markets). The text data was then converted to the NLP Annotation Format (NAF), for which I defined an RDF representation (implemented in the Nafigator package) to upload the data into an RDF triple-store. For the triple-store I used Ontotext’s GraphDB, one of the best RDF databases currently available. Information can then be retrieved from the graph database with SPARQL queries for all kinds of NLP analyses.

Using a triple-store for NLP data leads to an efficient retrieval process for text data, especially compared to a process where you search through separate annotation files. Triple-stores for RDF (and the new RDF-star) have become efficient and powerful solutions with the same capabilities as property graphs, but with the advantages of RDF and ontologies.

I will describe in detail two parts of this process that are not straightforward: the RDF representation of NAF, and retrieving data from the graph database.

The NLP Annotation Format in RDF

The NLP Annotation Format is an easy format for storing text annotations (see here for links to the description). All documents that were scraped from the website were processed with the Python package Nafigator, which is able to convert PDF documents and HTML files to XML files satisfying the NLP Annotation Format. Standard annotation layers with the raw text, word forms, terms, named entities and dependencies were added using the Stanford Stanza NLP processor.

In this representation every annotation (word forms, terms, named entities, etc.) of every document must have a Uniform Resource Identifier (URI). To do this, I used a prefix doc_xxx for each document in the document set. This prefix can, for example, be set by

@prefix doc_001: <http://rdf.mangosaurus.eu/doc_001/> .

which in this case is an identifier based on the domain of this blog. For web-scraped documents you might also use the original URL of the document. Furthermore, for the RDF representation of NAF a provisional RDF Schema with prefix naf-base was made with the basic properties and classes of NAF.
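For completeness, here is a minimal sketch of how such per-document namespaces could be minted with rdflib; the base URL and the naf-base URI are placeholders, not the exact URIs used in the project.

from rdflib import RDF, Graph, Namespace

# Minimal sketch; the base URL and the naf-base URI are placeholders.
# Each document gets its own namespace so that every annotation has a URI.
NAF = Namespace("http://rdf.mangosaurus.eu/naf-base/")

g = Graph()
g.bind("naf-base", NAF)

for idx in range(1, 4):
    prefix = f"doc_{idx:03d}"
    ns = Namespace(f"http://rdf.mangosaurus.eu/{prefix}/")
    g.bind(prefix, ns)
    g.add((ns["doc"], RDF.type, NAF["document"]))

print(g.serialize(format="turtle"))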

The basic structure is set out below. All examples provided below are derived from the file example.pdf in the Nafigator package (the first sentence of the first page starts with: ‘The Nafigator package … ‘).

Document and header

Every document has a header and pages.

doc_001:doc a naf-base:document ;
    naf-base:hasHeader doc_001:nafHeader ;
    naf-base:hasPages ( doc_001:page1 ) .

Here naf-base:document is an RDF class and naf-base:hasHeader and naf-base:hasPages are RDF properties. The three lines above state that doc_001:doc is a document with header doc_001:nafHeader and a single page doc_001:page1.

In the header all metadata of the document is stored, including all linguistic processors and models that were used in processing the document. Below you see the metadata of the NAF text layer and the document metadata.

doc_001:nafHeader a naf-base:header ;
    naf-base:hasLinguisticProcessors [ 
        naf-base:hasLayer naf-base:text ;
        naf-base:lp [ 
            naf-base:hasBeginTimestamp "2022-04-10T13:45:43UTC" ;
            naf-base:hasEndTimestamp "2022-04-10T13:45:44UTC" ;
            naf-base:hasHostname "desktop-computer" ;
            naf-base:hasModel "stanza_resources\\en\\tokenize\\ewt.pt" ;
            naf-base:hasName "text" ;
            naf-base:hasVersion "stanza_version-1.2.2" 
        ] 
        ...
    ] ;
    naf-base:hasPublic [ 
        dc:format "application/pdf" ;
        dc:uri "data/example.pdf" 
    ] .

Sentences, paragraphs and pages

Here is an example of a sentence object with properties.

doc_001:sent1 a naf-base:sentence ;
    naf-base:isPartOf doc_001:para1, doc_001:page1 ;
    naf-base:hasSpan ( doc_001:wf1 doc_001:wf2 ...  doc_001:wf29 ) .

These three lines describe the properties of the RDF subject doc_001:sent1, which identifies the first sentence of the first document. The first line says that the subject doc_001:sent1 is a (rdf:type) sentence. The second line says that this sentence is part of the first paragraph and the first page of the document. The span of the sentence contains an ordered list of the word forms of the sentence: doc_001:wf1, doc_001:wf2 and so on.

Paragraphs and pages to which the sentences refer are defined in a similar way.

Word forms and terms

For each word form the properties text, length and offset are defined. A word form is part of a term, sentence, paragraph and page, and these relations are also defined for every word form. Take for example the word form doc_001:wf2, defined as:

doc_001:wf2 a naf-base:wordform ;
    naf-base:hasText "Nafigator"^^rdf:XMLLiteral ;
    naf-base:hasLength "9"^^xsd:integer ;
    naf-base:hasOffset "4"^^xsd:integer ;
    naf-base:isPartOf doc_001:page1, 
        doc_001:para1, 
        doc_001:sent1.
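With the span and the offset property in place, the text of a sentence can already be reassembled by walking the rdf:List of word forms with a SPARQL property path (the same rdf:rest*/rdf:first construct is used in the queries further below). Here is a minimal sketch with rdflib; the Turtle file name and the prefix URIs are placeholders:

from rdflib import Graph

# Minimal sketch: reassemble the text of sentence 1 from its span of word forms.
# The file name and the naf-base/doc_001 URIs are placeholders.
g = Graph()
g.parse("doc_001.ttl", format="turtle")

q = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX naf-base: <http://rdf.mangosaurus.eu/naf-base/>
PREFIX doc_001: <http://rdf.mangosaurus.eu/doc_001/>

SELECT ?text ?offset
WHERE {
    doc_001:sent1 naf-base:hasSpan/rdf:rest*/rdf:first ?wf .
    ?wf naf-base:hasText ?text ;
        naf-base:hasOffset ?offset .
}
ORDER BY ?offset
"""

print(" ".join(str(row.text) for row in g.query(q)))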

In the next layer the terms of the word forms are defined, with their linguistic properties (lemma, grammatical number, part-of-speech and if applicable other properties such as verb voice and verb form). The term that refers to the word form above is

doc_001:term2 a naf-base:term ;
    naf-base:hasLemma "Nafigator"^^rdf:XMLLiteral ;
    naf-base:hasNumber olia:Singular ;
    naf-base:hasPos olia:ProperNoun ;
    naf-base:hasSpan ( doc_001:wf2 ) .

For the linguistic properties the OLiA ontology is used. OLiA stands for Ontologies of Linguistic Annotations, an OWL taxonomy of data categories for linguistic annotations. The ontology contains precise definitions of and interrelations between the linguistic categories. In this case the grammatical number (olia:Singular) and the part-of-speech tag (olia:ProperNoun) are included in the properties of this term. Depending on the term, other properties are defined, for example verb forms. The span of the term refers back to the word forms (in a NAF ontology you would define this as a transitive relationship, but for now, by including both relations we speed up the retrieval process).

Named entities

Next are the named entities, which are stored in another NAF layer and here as separate subjects in the triple-store. An entity refers back to a term and has a certain type (organization, person, product, law, date and so on). The text of the entity is already stored in the term object, so there is no need to include it here. External references could be added here, for example references to legal entities from the Global LEI Foundation. Here is the example referring to the triples above.

doc_001:entity1 a naf-base:entity ;
    naf-base:hasType naf-entity:product ;
    naf-base:hasSpan ( doc_001:term2 ) .

Dependencies

Powerful NLP models exist that are able to derive relationships between words within sentences. The dependencies are defined on the level of terms and stored in the dependency layer of NAF. In this RDF representation the dependencies are simply added to the terms.

doc_001:term3 a naf-base:term ;
    naf-rfunc:compound doc_001:term2 ;
    naf-rfunc:det doc_001:term1 .

The second and third line say that term3 (‘package’) forms a compound term with term2 (‘Nafigator’) and has its determiner in term1 (‘The’).

There are more annotation layers in NAF, but these are the most basic ones, and if you have these, many powerful NLP analyses can already be done.

Information retrieval from the RDF graph database

The conversion of text to RDF described above was applied to all webpages and documents of the website of DNB, 4.065 documents in total with 401.832 sentences containing 9.789.818 words. This text data led to over 221 million RDF triples in the triple-store. I used a local database that was queried via a SPARQL endpoint. The numbers mentioned here can easily be extracted with SPARQL queries; for example, to count the number of sentences we can use the query:

SELECT (COUNT(?s) AS ?count) WHERE { ?s a naf-base:sentence . }

With this query all RDF subjects (the variable ?s) that are a sentence are counted and the result is stored in the variable ‘count’. The same can be done with other RDF subjects like word forms and documents.
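Since the database is reachable via a SPARQL endpoint, such counts can also be run directly from Python. Below is a minimal sketch with SPARQLWrapper; the endpoint URL, repository name and naf-base URI are assumptions and should be replaced by the values of your own triple-store and schema.

from SPARQLWrapper import SPARQLWrapper, JSON

# Minimal sketch: run the sentence count against a (local) GraphDB repository.
# Endpoint URL, repository name and naf-base URI are placeholders.
endpoint = SPARQLWrapper("http://localhost:7200/repositories/dnb-documents")
endpoint.setQuery("""
PREFIX naf-base: <http://rdf.mangosaurus.eu/naf-base/>
SELECT (COUNT(?s) AS ?count) WHERE { ?s a naf-base:sentence . }
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
print(results["results"]["bindings"][0]["count"]["value"])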

The RDF representation described above allows you to store the content and annotations of a set of documents with their metadata in one single graph. You can then retrieve information from that graph from different perspectives and for different purposes.

Information retrieval

Suppose we want to find all references on the website to relations between ‘DNB’ and the verb ‘supervise’, by looking for sentences where ‘DNB’ is the nominal subject and ‘supervise’ is the lemma of the verb. This is done with the following query:

SELECT ?text
WHERE {
    ?term naf-base:hasLemma "supervise" .
    ?term naf-rfunc:nsubj [ naf-base:hasLemma "DNB" ] .
    ?term naf-base:hasSpan [ rdf:first ?wf ] .
    ?wf naf-base:isPartOf [ a naf-base:sentence ; naf-base:hasText ?text ].
}

It’s almost readable 🙂 The first line in the WHERE statement retrieves words that have ‘supervise’ as a lemma (this includes past, present and future tense and different verb forms). The second line narrows the selection down to where the nominal subject of the verb is ‘DNB’ (the lemma of the subject to be precise). The last two lines select the text of the sentences that include the words that were found.

Execution of this query is done in a few milliseconds (on a desktop computer with a local database, nothing fancy) and results in 22 sentences, such as “DNB supervises adequate management of sustainability risks by financial institutions.”, “DNB supervises the cash payment system by providing information and guidance on the rules and procedures, data collection and examining compliance with the rules.”, and so on.

Term extraction

Terms are often multi-word expressions and can be retrieved by part-of-speech tags and dependencies. Suppose we want to retrieve all two-word terms of the form adjective plus common noun. Part-of-speech tags are defined in the terms layer. The graph also defines the relation between the terms, in this case an adjectival modifier (amod) relation (the common noun is modified by an adjective). We can then define a query that looks for exactly that: two words, an adjective and a common noun, where the mutual relationship is an adjectival modifier. This is expressed in the first three lines of the WHERE clause below. The last two lines retrieve the text of the words.

SELECT DISTINCT ?w1 ?w2 (count(*) as ?c)
WHERE {
    ?term1 naf-base:hasPos olia:CommonNoun .
    ?term2 naf-base:hasPos olia:Adjective .
    ?term1 naf-rfunc:amod ?term2 .
    ?term1 naf-base:hasSpan [ rdf:first/naf-base:hasText ?w1 ] .
    ?term2 naf-base:hasSpan [ rdf:first/naf-base:hasText ?w2 ] .
} GROUP BY ?w1 ?w2
ORDER BY DESC(?c)

Note that the query also counts the number of occurrences of each term and sorts the output according to this count.

Most often the term ‘monetary policy’ was found (2.348 times), followed by ‘financial institutions’ (1.734 times) and ‘financiële instellingen’ (Dutch translation of financial institution, 1.519 times), and so on. In total more than 127.000 of these patterns were found on the website (this is a more complicated query and took around 10 seconds). In this way all kinds of term patterns can be found, which can be collected in a termbase (terminology database).

Opinion extraction

I will give here a very simple example of opinion extraction based on part-of-speech tags. Suppose you want to extract sentences that contain the author’s (or someone else’s) subjective opinion. You can look at the grammatical subject and the verb in a sentence, but you can also look at whether a sentence contains something like ‘too high’ or ‘too volatile’ (which often indicates subjective content). In that case we have the word ‘too’ (an adverb) followed by an adjective, with the mutual relation of adverbial modifier (advmod). In Dutch this has exactly the same form. The following query extracts these sentences.

SELECT ?text
WHERE {
    ?term1 naf-base:hasPos olia:Adjective .
    ?term2 naf-base:hasSpan [ rdf:first/naf-base:hasText "too" ] .
    ?term1 naf-rfunc:advmod ?term2 .
    ?term1 naf-base:hasSpan [ rdf:first ?wf1 ] .
    ?sent1 naf-base:hasSpan [ rdf:rest*/rdf:first ?wf1 ] .
    ?sent1 a naf-base:sentence .
    ?sent1 naf-base:hasText ?text .
}

With the last three lines the text of the sentence that includes the term is found (the output of the query). With the documents of the website of DNB, the output contains sentences like: “It is also clear that CO2 emissions are still too cheap and must be priced higher to sufficiently curtail emissions” and “Firms end up being too large” (in total 343 sentences in 0.3 seconds).

The examples shown here are just for illustrative purposes and do not always lead to accurate results, but they show that information extraction can be done fairly easily (if you know SPARQL) and reasonably quickly. Once the data is stored in a graph database, named entities can be matched with other internal or external data sources and lemmas of terms can be matched with concept-based terminology databases. Then you have a graph where the text is not only available at a simple string level but also, and more importantly, at a conceptual level.

The Solvency termbase for NLP

This blog describes a way to construct a terminology database for the insurance supervision knowledge domain. The goal of this termbase is to provide a reliable basis for extracting insurance supervision terminology in different NLP analyses.

The terminology of solvency and insurance supervision forms an expert domain based on economics, mathematics, accounting and finance terminologies. As in probably many other knowledge domains, the terminology used is very technical and specific. Some terms are only used within this domain and occur only a limited number of times (which often hinders the use of statistical methods for finding terms), and many other words have general meanings outside the domain that do not coincide with their specific meanings within the domain. Translation of terms from this specific domain often requires extensive knowledge about their meaning and use.

What is a termbase?

A termbase is a database containing terminology and related information. It consists of concepts with their verbal designations (terms, i.e. single words or multi-word strings) of a specific knowledge domain, often in different languages. It contains the full forms of concepts, but also abbreviations, synonyms and variants, and additional information about concepts, such as definitions and external references. To indicate accuracy or completeness, a reliability code is often added to the individual terms of a concept. A proper termbase is an important terminology tool for achieving standardization of information and consistent use of (translations of) concepts in documents, and because of that, termbases are often used by professional translators.

The European Union translates legal documents into all member state languages and uses one common, publicly available termbase for this: the IATE (Interactive Terminology for Europe) terminology database. The IATE termbase has been used in the EU institutions and agencies since 2004 for the collection, dissemination and management of EU-specific terminology. This helps to avoid divergences in the application of European law within Europe (there is a vast amount of literature on the effects of language ambiguity in European legislation). The translations of European legislation are therefore of the highest quality, with strong consistency between different directives, regulations, delegated and implementing acts, and so on.

Termbases are very useful for information extraction from documents and for linking terminology concepts between different documents. They can be extended with abbreviations, synonyms and common variants of terms.

The Solvency termbase for NLP

To create a first Solvency termbase for NLP purposes, I extracted terms from Solvency 2 Delegated Acts in a number of languages, looked up these terms in the IATE database and copied the corresponding concepts. It often happens that for one language the same term refers to different concepts (for example, the term ‘balance’ means something different in chemistry and in accounting). But if for one legal document the terms from different languages refer to the same concept, then we probably have the right concept (that was used in the translation of the legal document). So, the more references from the same legal document, the more reliable the term-concept relation is. And if we have the proper term-concept relationship, we automatically have all reliable translations of that concept.
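This reasoning can be captured in a few lines of code. The sketch below only illustrates the heuristic (the input dictionary and the second concept id are made up): it counts, per IATE concept id, in how many languages the terms extracted from one legal document point to that concept.

from collections import Counter, defaultdict

# Illustration of the reliability heuristic; the input data is hypothetical.
# For each language: the term extracted from one legal document and the
# IATE concept id(s) found for that term (IATE_0000000 is a dummy id).
candidates = {
    "en": {"solvency capital requirement": ["IATE_2246604"]},
    "nl": {"solvabiliteitskapitaalvereiste": ["IATE_2246604"]},
    "fr": {"capital de solvabilité requis": ["IATE_2246604", "IATE_0000000"]},
}

# Count in how many languages each concept id occurs: the more languages that
# point to the same concept, the more reliable the term-concept relation.
concept_votes = Counter()
concept_terms = defaultdict(dict)
for lang, terms in candidates.items():
    for term, concept_ids in terms.items():
        for concept_id in concept_ids:
            concept_votes[concept_id] += 1
            concept_terms[concept_id][lang] = term

for concept_id, votes in concept_votes.most_common():
    print(concept_id, votes, concept_terms[concept_id])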

Term extraction was done with part-of-speech patterns (such as adj-noun and adj-noun-noun patterns). To do this, the Delegated Acts were converted to the NLP Annotation Format (NAF) for every language. The functionality for conversion to NAF and for extracting terms based on part-of-speech patterns is part of the nafigator package. As an NLP engine for nafigator, I used the Stanford Stanza package, which contains tokenizers and part-of-speech models for every European language. The termbase itself was made with the terminator repository (currently under construction).

For terms in Dutch, I also added to the termbase additional part-of-speech tags, lemmas and morphological properties from the Lassy Klein corpus of the Instituut voor de Nederlandse Taal (Dutch Language Institute). This data set consists of approximately 1 million words with manually verified syntactic annotations. I expanded this data set with solvency-related words. Linguistic properties of terms in other languages can be added if a reliable data set is available.

Below, you see one concept from the resulting termbase (the concept of which ‘solvency capital requirement’ is the English term) in TermBase eXchange format (TBX). This is an international standard (ISO 30042:2019) for the representation of structured concept-oriented terminological data, based on XML.

<conceptEntry id="249">
 <descrip type="subjectField">insurance</descrip>
 <xref>IATE_2246604</xref>
 <ref>https://iate.europa.eu/entry/result/2246604/en</ref>
 <langSec xml:lang="nl">
  <termSec>
   <term>solvabiliteitskapitaalvereiste</term>
   <termNote type="partOfSpeech">noun</termNote>
   <note>source: ../naf-data/data/legislation/Solvency II Delegated Acts - NL.txt (#hits=331)</note>
   <termNote type="termType">fullForm</termNote>
   <descrip type="reliabilityCode">9</descrip>
   <termNote type="lemma">solvabiliteits_kapitaalvereiste</termNote>
   <termNote type="grammaticalNumber">singular</termNote>
   <termNoteGrp>
    <termNote type="component">solvabiliteits-</termNote>
    <termNote type="component">kapitaal-</termNote>
    <termNote type="component">vereiste</termNote>
   </termNoteGrp>
  </termSec>
 </langSec>
 <langSec xml:lang="en">
  <termSec>
   <term>SCR</term>
   <termNote type="termType">abbreviation</termNote>
   <descrip type="reliabilityCode">9</descrip>
  </termSec>
  <termSec>
   <term>solvency capital requirement</term>
   <termNote type="termType">fullForm</termNote>
   <descrip type="reliabilityCode">9</descrip>
   <termNote type="partOfSpeech">noun, noun, noun</termNote>
   <note>source: ../naf-data/data/legislation/Solvency II Delegated Acts - EN.txt (#hits=266)</note>
  </termSec>
 </langSec>
 <langSec xml:lang="fr">
  <termSec>
   <term>capital de solvabilité requis</term>
   <termNote type="termType">fullForm</termNote>
   <descrip type="reliabilityCode">9</descrip>
   <termNote type="partOfSpeech">noun, adp, noun, adj</termNote>
   <note>source: ../naf-data/data/legislation/Solvency II Delegated Acts - FR.txt (#hits=198)</note>
  </termSec>
  <termSec>
   <term>CSR</term>
   <termNote type="termType">abbreviation</termNote>
   <descrip type="reliabilityCode">9</descrip>
  </termSec>
 </langSec>
</conceptEntry>

You see that the concept contains a link to the IATE database entry with the definition of the concept (the link in this blog actually works so you can try it out). Then a number of language sections contain terms of this concept for different languages. The English section contains the term SCR as an English abbreviation of this concept (the French section contains the abbreviation CSR for the same concept). For every term the part-of-speech tags were added (which are not part of the IATE database) and, for Dutch only, the lemma and grammatical number of the term and its word components. These additional linguistic attributes allow easier use within NLP analyses. Furthermore, the number of occurrences in the original legal document is included as a note.

The concept entry contains related terms in all European languages. In Greek the SCR is κεφαλαιακή απαίτηση φερεγγυότητας, in Irish it is ‘ceanglas maidir le caipiteal sócmhainneachta’ (although the Solvency 2 Delegated Acts is not available in the Irish language), in Portuguese it is ‘requisito de capital de solvência’, in Estonian ‘solventsuskapitalinõue’, and so on. These are reliable translations as they are used in legal documents of that language.

The termbase contains all terms from the Solvency 2 Delegated Acts that can be found in the IATE database. In addition, terms that were not found in that database are added with the termNote “NewTerm”, to indicate that this term has yet to be reviewed by a knowledge domain expert. This would also be the way to add synonyms and variants of terms.

The Solvency termbase basically allows you to scan a document for Solvency 2 concepts in any of the 23 European languages (given that the concept is in the IATE database). This is of course an initial approach to constructing a termbase, to test whether it is feasible and practical. The terminology that insurance undertakings use in their solvency reports is very likely to differ from that used in legal documents. I will be testing this with a number of documents, to identify Solvency 2 terminology and to get an idea of how many synonyms and variants are missing.

Besides this Solvency termbase, a Climate termbase can be constructed in the same way, based on the European Climate Law (a European regulation from 2021). This law contains a large amount of climate-related terminology and is available in all European languages. A Climate termbase makes it possible to extract climate-related information from all kinds of documents. Furthermore, we have the Sustainable Finance Disclosure Regulation (a European regulation, also from 2021) for environmental, social and governance (ESG) terminology, which could provide a starting point for an ESG termbase. And of course I eagerly await the European Regulation on Artificial Intelligence.

Two new Python packages

Here is a short update about two new Python packages I have been working on. The first is about structuring Natural Language Processing projects (a subject I have been working on a lot recently) and the second is about rule mining in arbitrary datasets (in my case supervisory quantitative reports).

nafigator package

This package converts the content of (native) PDF documents, MS Word documents and HTML files into files that satisfy the NLP Annotation Format (NAF). It can use a default spaCy or Stanza pipeline, or a custom-made pipeline. The package creates an XML file with the first basic NAF layers, to which additional layers with annotations can be added. It is also possible to convert to RDF format (Turtle syntax and RDF/XML syntax). This allows the use of this content in graph databases.

This is my approach for storing (intermediate) NLP results in a structured way. Conversion to NAF is done in only one computationally intensive step, and then the NAF files contain all necessary NLP data and can be processed very fast for further analyses. This allows you to use the results efficiently in downstream NLP projects. The NAF files also enable you to keep track of all annotation processes to make sure NLP results remain reproducible.

The NLP Annotation Format from VU University Amsterdam is a practical solution for storing NLP annotations and is relatively easy to implement. The main purpose of NAF is to provide a standardized format that, with a layered extensible structure, is flexible enough to be used in different NLP projects. The NAF standard has had a number of subsequent versions and is still under development.

The idea of the format is that basically all NLP processing steps add annotations to a text. You start with the raw text (stored in the raw layer). This raw text is tokenized into pages, paragraphs, sentences and word forms, and the result is stored in a text layer. Annotations to each word form (like lemmatized form, part-of-speech tag and morphological features) are stored in the terms layer. Each additional layer builds upon previous layers and adds more complex annotations to the text.

See github for more information about the NLP Annotation Format and Nafigator.

ruleminer package

Earlier, I made some progress with mining datasets for rules and patterns (see here and here for two earlier blogs on this subject). New insights led to a completely new set-up of the code, which I have published as a Python package under the name ruleminer. The new package improves on the data-patterns package in a number of ways:

  • The speed of the rule mining process is improved significantly for if-then rules (at least six times faster). All candidate rules are now internally evaluated as Numpy expressions.
  • Additional evaluation metrics have been added to allow new ways to assess how interesting newly mined rules are. Besides support and confidence, we now also have the metrics lift, added value, causal confidence and conviction (see the sketch after this list). New metrics can be added relatively easily.
  • A rule pruning procedure has been added to delete, to some extent, rules that are semantically identical (for example, if A=B and B=A are mined then one of them is pruned from the list).
  • A proper Pyparsing Grammar for rule expressions is used.
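To give an idea of what these metrics mean, the sketch below computes them for a single if-then rule on a small, made-up DataFrame. This is only the textbook definition of the metrics, not the ruleminer implementation itself:

import pandas as pd

# Made-up data: the rule to evaluate is "if assets > 0 then liabilities > 0".
df = pd.DataFrame({
    "assets":      [100, 200, 0, 50, 300],
    "liabilities": [90, 150, 10, -5, 250],
})

antecedent = df["assets"] > 0          # the "if" part
consequent = df["liabilities"] > 0     # the "then" part

p_a = antecedent.mean()
p_b = consequent.mean()
support = (antecedent & consequent).mean()
confidence = support / p_a
lift = confidence / p_b
added_value = confidence - p_b
conviction = (1 - p_b) / (1 - confidence) if confidence < 1 else float("inf")

print(f"support={support:.2f}, confidence={confidence:.2f}, lift={lift:.2f}, "
      f"added value={added_value:.2f}, conviction={conviction:.2f}")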

Look for more information on ruleminer here on github.

EIOPA’s Solvency 2 taxonomy in RDF

To use the metadata from XBRL taxonomies, like labels, hierarchies, template structures and formulas, licensed software is often needed to process the taxonomy and convert the XML content to readable formats. In an earlier blog I showed that it is useful to convert XBRL instance data to a linked data set in RDF and then query that data to retrieve the desired information. In this blog I will show how to do this with taxonomies: with a number of small SPARQL queries the complete Data Point Model (DPM) of (European) XBRL taxonomies can be retrieved.

The main purpose of retrieving metadata in this manner is to be able to use taxonomy metadata in data science environments, for example to apply machine learning models that use taxonomy metadata like hierarchies, or to use concept and element labels from a taxonomy in NLP, for example in Named Entity Recognition tasks that link quantitative reports to (unstructured, or: not yet structured) text data.

The lightweight solution that I show here is completely based on open source code, in the form of my xbrl2rdf package. This package converts XBRL instance files and all related taxonomy files (schemas and linkbases) to RDF and RDF-star and uses for this lxml and rdflib, and nothing else.

The examples below use the Solvency 2 taxonomy, but other taxonomies work as well. A gist with the notebook containing the code below can be found here.

Importing the data

With the xbrl2rdf package I have converted the EIOPA instance example file for quarterly reports for solo undertakings (QRS) to RDF. All taxonomy concepts that are used in that instance are included in the RDF data set. This result can be read into memory with rdflib.

# RDF graph loading
from rdflib import Graph as RDFGraph

path = "../data/rdf/qrs_240_instance.ttl"

g = RDFGraph()
g.parse(path, format='turtle')

print("rdflib Graph loaded successfully with {} triples".format(len(g)))

This returns

rdflib Graph loaded successfully with 1203744 triples

So we have an RDF graph with 1.2 million triples that contains all data (facts related to concepts in the taxonomy, including all labels, validation rules, template structures, etc.). The original RDF data file is around 64 MB (combining instance and taxonomy triples). Reading and processing this file into an in-memory RDF graph takes some time, but after that the graph can easily be queried.

Extracting template URIs

Let’s start with a simple query. Table or template URIs are subjects in the triple “subject xl:type table:table”. To get a list with all templates of an instance (in this case the first five) we run

q = """
  SELECT ?a
  WHERE {
    ?a xl:type table:table .
  }"""
tables = [str(row[0]) for row in g.query(q)]
tables.sort()
tables[0:5]

This returns a list of the URIs of the templates contained in the instance file.

['http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.01.01.02.01#s2md_tS.01.01.02.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.01.02.01.01#s2md_tS.01.02.01.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.02.01.02.01#s2md_tS.02.01.02.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.05.01.02.01#s2md_tS.05.01.02.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.05.01.02.02#s2md_tS.05.01.02.02']

Extracting the explicit domains

Next, we extract the explicit domains and related data in the taxonomy. A domain, in XBRL terminology, is a set of elements sharing a specified semantic nature. An explicit domain has its elements enumerated in the taxonomy and can be found with the subject in the triple ‘subject rdf:type model:explicitDomainType’.

q = """
  SELECT DISTINCT ?t ?x1 ?x2 ?x3 ?x4
  WHERE {
    ?t rdf:type model:explicitDomainType .
    ?t xbrli:periodType ?x1 .
    ?t model:creationDate ?x2 .
    ?t xbrli:nillable ?x3 .
    ?t xbrli:abstract ?x4 .
  }"""

The first five domains (of 41 in total) are

index  Domain name  Domain label            period type  creation date  nillable  abstract
0      LB           Lines of businesses     instant      2014-07-07     true      true
1      MC           Main categories         instant      2014-07-07     true      true
2      TI           Time intervals          instant      2014-07-07     true      true
3      AO           Article 112 and 167     instant      2014-07-07     true      true
4      CG           Collaterals/Guarantees  instant      2014-07-07     true      true

Domain names and labels with attributes

So, the label of the domain LB is Lines of businesses; it has been there since the early versions of the taxonomy. If a domain is modified then this is also included as a triple in the data set.

Extracting domain members

Elements of an explicit domain are called domain members. A domain member (or simply a member) is an enumerated element of an explicit domain. All members of a domain share a certain common nature. To get the members of a domain, we define a function that finds all domain-member relations of a given domain and retrieves the label of each member. In SPARQL this is:

def members(domain):
    q = """
      SELECT DISTINCT ?t ?label
      WHERE {
        ?l arcrole7:domain-member [ xl:from <"""+str(domain)+"""> ;
                                    xl:to ?t ] .
        ?t rdf:type nonnum:domainItemType .
        ?x arcrole3:concept-label [ xl:from ?t ;
                                    xl:to [rdf:value ?label ] ] .
        }"""
    return g.query(q)

All members of all domains can be retrieved by running this function for all domains defined earlier. Based on the output we create a Pandas DataFrame with the results.

import pandas as pd
from urllib.parse import urldefrag

# df_domains is assumed to hold the results of the domains query defined above
df_members = pd.DataFrame()
for d in df_domains.iloc[:, 0]:
    data = [[urldefrag(d)[1]]+[urldefrag(row[0])[1]]+list(row[1:]) for row in members(d)]
    columns = ['Domain',
               'Member',
               'Member label']
    df_members = df_members.append(pd.DataFrame(data=data,
                                                columns=columns))

In total there are 4.879 members of all domains (in this taxonomy).

index  Domain  Member  Member label
0      LB      x0      Total/NA
1      LB      x1      Accident and sickness
2      LB      x2      Motor
3      LB      x3      Fire and other damage to property
4      LB      x4      Aviation, marine and transport

The first five members of the domain LB (Lines of Businesses)

This allows us, for example, to retrieve all facts in a report that are related to the term ‘motor’ because a reported fact contains references to the domain members to which the fact relates.
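As a sketch of how this could look, the query below retrieves facts whose context refers to member x2 of the domain LB (Motor). It assumes the fact representation described in my earlier blog on converting XBRL instances to RDF (a fact points to a context whose scenario holds xbrldi:explicitMember literals), and it assumes "s2c_LB:x2" is the literal used for the member Motor; as with the other queries, the prefixes are taken from the parsed data.

q = """
  SELECT ?fact ?value
  WHERE {
    ?fact xbrli:context ?context .
    ?context xbrli:scenario ?scenario .
    ?scenario xbrldi:explicitMember "s2c_LB:x2"^^rdf:XMLLiteral .
    ?fact rdf:value ?value .
  }"""
# Sketch only: the member code "s2c_LB:x2" (Motor) is an assumption based on
# the member table above; prefixes are assumed to be bound in the parsed data.
motor_facts = list(g.query(q))
print(len(motor_facts), "facts related to the member Motor")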

Extracting the template structure

Template structures are stored in the taxonomy as a tree of linked elements along the axes x, y and, if applicable, z. The elements have a label and a row-column code (this holds at least for EIOPA and DNB taxonomies), and have a certain depth, i.e. they can be subcategories of other elements. For example, in the Solvency 2 balance sheet template the element ‘Equities’ has subcategories ‘Equities – listed’ and ‘Equities – unlisted’. I have not included the code here, but with a few lines you can extract the complete template structures as they are stored in the taxonomy. For example, for the balance sheet (S.02.01.02.01) we get:

       axis  depth  rc-code  label
1      x     1      C0010    Solvency II value
3      y     1               Assets
4      y     1               Liabilities
5      y     2      R0010    Goodwill
6      y     2      R0020    Deferred acquisition costs
7      y     2      R0030    Intangible assets
8      y     2      R0040    Deferred tax assets
9      y     2      R0050    Pension benefit surplus
10     y     2      R0060    Property, plant & equipment held for own use
11     y     2      R0070    Investments (other than assets held for index-…
12     y     3      R0080    Property (other than for own use)
13     y     3      R0090    Holdings in related undertakings, including pa…
14     y     3      R0100    Equities
15     y     4      R0110    Equities – listed
16     y     4      R0120    Equities – unlisted
17     y     3      R0130    Bonds
18     y     4      R0140    Government Bonds
19     y     4      R0150    Corporate Bonds
20     y     4      R0160    Structured notes
21     y     4      R0170    Collateralised securities
22     y     3      R0180    Collective Investments Undertakings
23     y     3      R0190    Derivatives
24     y     3      R0200    Deposits other than cash equivalents

The first 25 lines of the balance sheet template

With relatively small SPARQL queries, it is possible to retrieve metadata from XBRL taxonomies. This works well because we have converted the original taxonomy (in XML) to the linked data format RDF, and this format is especially well suited for representing and querying XBRL data.

The examples above show that it is possible to retrieve the complete Solvency 2 Data Point Model (and more) from the taxonomy in RDF and make it available in a Python environment. This allows incorporation of metadata in machine learning models and in NLP applications. I hope that this approach will allow more data scientists to use existing metadata from XBRL taxonomies.

Converting XBRL to RDF-star

Lately I have been working on the conversion of XBRL instances and related taxonomy schemas and linkbases to RDF and RDF-star. In these semantic data formats, you can link XBRL data with other data sources and query the data in a fairly easy manner. RDF-star is an extension of RDF that in some situations allows a more compact description of linked data, and thereby narrows the gap between RDF and property graphs. How this works, I will show in this blog using the XBRL taxonomy definitions as an example.

In a previous blog I showed that XBRL instance facts can be converted to RDF and visualized as a network. The same can be done with the related taxonomy elements. An XBRL taxonomy consists of concepts and relations between concepts that define calculations, presentations, labels and definitions. The concepts are laid down (mostly) in XML schemas and the relations in linkbases using XML schemas and XLinks. By converting the XBRL taxonomy to RDF, the XBRL fact data is linked to its corresponding metadata in the taxonomy.

XBRL to RDF

Some work has been done on the conversion of XBRL to RDF, most notably by Dave Raggett. His project xbrlimport, written in C++ and available on SourceForge, converts XBRL data to RDF triples. His approach is clean and straightforward and reuses the original namespaces of the XBRL data (with some obvious elements translated to predicates with RDF namespaces).

I used Raggett’s xbrlimport as a starting point, translated it to Python, added XBRL items that were introduced after publication of the code and improved a number of things. The code is now, for example, able to convert all elements of EIOPA’s Solvency 2 taxonomy, with all available metadata, to RDF format. This code is available under the same license as xbrlimport (GNU General Public License) as a Python package on pypi.org. You can take an XBRL instance with its corresponding taxonomy (in the form of a zip file) and convert the contents to RDF and RDF-star. The code will look up any references (URIs) in the XBRL instance to the taxonomy in the zip file and convert the relevant files to RDF.

Let’s look at some examples of the Solvency 2 taxonomy converted to RDF. The RDF triple of an arbitrary XBRL concept from the Solvency 2 taxonomy looks like this (in turtle format):

s2md_met:mi362 
    rdf:type xbrli:monetaryItemType ;
    xbrli:periodType """instant"""^^rdf:XMLLiteral ;
    model:creationDate """2014-07-07"""^^xsd:dateTime ;
    xbrli:substitutionGroup xbrli:item ;
    xbrli:nillable "true"^^xsd:boolean .

This example describes the triples of concept s2md_met:mi362 (a Solvency 2 metric). With these triples we have exactly the same data as in the related XML file but now in the form of triples. Namespaces are derived from the XML file (except rdf:type) and datatypes are transformed to RDF datatypes with proper RDF syntax.

This can be done with all concepts to which the facts of an XBRL instance refer. If you have the facts in RDF format, these concepts are automatically linked with the concepts in the taxonomy because the URIs of the concepts are the same. This creates a network of facts with all their related metadata.

An XBRL taxonomy also contains links that relate concepts to each other for several purposes (to provide labels, definitions, presentations and calculations). An example of a link is the following.

_:link2 arcrole:concept-label [
    xl:type xl:link ;
    xl:role role2:link ;
    xl:from s2md_met:mi362 ;
    xl:to s2md_met:label_s2md_mi362 ;
    ] .

The link relates concept mi362 with label mi362 by creating a new subject _:link2 with predicate arcrole:concept-label and an object which contains all data about the link (including the xl:from and xl:to and the attributes of the link). This way of introducing a new subject to specify a link between two concepts is called reification and is a bit artificial, because you would like to link the concept directly with the label, such as

s2md_met:mi281 arcrole:concept-label s2md_met:label_s2md_mi281

However, in RDF you are then unable to link the attributes (like the order and the role) to the predicate. This is one of the disadvantages of the current RDF format. There appears to be no easy way to do this in RDF, other than by using this artificial reification approach (some other solutions exist, like the singleton property approach, but all of them have disadvantages).

The new RDF-star format

Recently, the RDF-star working group published their first Draft Community Report. In this report they introduced new RDF-star and SPARQL-star specifications. These new specifications, although not yet a W3C standard, enable a more compact specification of linked datasets, resulting in simpler graphs with fewer nodes.

Let’s look what this means for the XBRL linkbases with the following example. Suppose we have the following link definition.

_:link1 arcrole:breakdown-tree [
    xl:from _:s2md_a1 ;
    xl:to _:s2md_a1.root ;
    xl:type xl:link ;
    xl:role tab:S.01.01.02.01 ;
    xl:order "0"^^xsd:decimal ;
    ] .

The subject in this case is _:link1 with predicate arcrole:breakdown-tree, so this link describes part of a table template. It points to a node with all the information of the link, i.e. from, to, type, role and order from the xl namespace. Note that there is no triple with _:s2md_a1 (xl:from) as subject and _:s2md_a1.root (xl:to) as object. So if you want to know the relations of the concept _:s2md_a1, you need to look at the link triples and search for entries where xl:from equals the concept.
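As a small sketch of such a lookup (assuming an rdflib graph g loaded with the converted triples, as shown earlier, and the arcrole and xl prefixes bound as in the converted data), the query below lists all breakdown-tree relations with their order:

q = """
  SELECT ?from ?to ?order
  WHERE {
    ?link arcrole:breakdown-tree [ xl:from ?from ;
                                   xl:to ?to ;
                                   xl:order ?order ] .
  }
  ORDER BY ?from ?order"""
# Sketch only: the relations of a concept are found by matching it against
# the xl:from (or xl:to) of the reified link objects.
for row in g.query(q):
    print(row)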

With the new RDF-star specifications you can just add the triple and then add properties to the triple as a whole, so the example would read

_:s2md_a1 arcrole:breakdown-tree _:s2md_a1.root .

<<_:s2md_a1 arcrole:breakdown-tree _:s2md_a1.root>> 
    xl:role tab:S.01.01.02.01 ;
    xl:order "0"^^xsd:decimal ;
    .

This is basically what we need. If you now want to know the relations of the subject _:s2md_a1, you just look for triples with this subject. In the visual presentation of the RDF dataset you will see a direct link between the two concepts. This new RDF format also allows simpler SPARQL queries.
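As a sketch, the SPARQL-star counterpart of the reified-link query shown earlier could look like the query below. It is shown here only as a string: running it requires a triple store with SPARQL-star support, and the exact syntax may still change as the specification is not yet a W3C standard.

# Sketch only: SPARQL-star counterpart of the reified-link query above.
q_star = """
  SELECT ?from ?to ?order
  WHERE {
    ?from arcrole:breakdown-tree ?to .
    <<?from arcrole:breakdown-tree ?to>> xl:order ?order .
  }
  ORDER BY ?from ?order"""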

This blog has become a bit technical, but I hope you see that the RDF-star specification allows a much-needed simplification of RDF triples. I showed that the conversion of XBRL taxonomies to RDF-star leads to fewer and less complex triples. The resulting taxonomy triples form less complex graphs and can be used to derive the XBRL labels, template structures, validation rules and definitions, just by using SPARQL queries.

Europe’s insurance register linked to the GLEIF RDF dataset

Number 7 of my New Year’s Resolutions list reads “only use and provide linked data”. So, to start the year well, I decided to do a little experiment to combine insurance undertakings register data with publicly available legal entity data. In concrete terms, I wanted to provide the European insurance register published by EIOPA (containing all licensed insurance undertakings in Europe) as linked data with the Legal Entity data from the Global Legal Entity Identifier Foundation (GLEIF). In this blog I will describe what I have done to do so.

The GLEIF data

The GLEIF data consists of information on all legal entities in the world (entities with a Legal Entity Identifier). A LEI is required for any legal entity that is involved in financial transactions or operates within the financial system. If an organization needs a LEI, it requests one from a local registration agent; for the Netherlands these include the Authority for the Financial Markets (AFM), the Chamber of Commerce (KvK) and the tax authority. GLEIF receives data from these agents in each country and makes the collected LEI data available in a number of forms (API, CSV, etc.).

The really cool thing is that in 2019, together with data.world, GLEIF developed an RDFS/OWL Ontology for Legal Entities, and in 2020 began to publish the LEI data regularly as a linked RDF dataset on data.world (see https://data.world/gleif; you need a (free) account to obtain the data). At the time of writing, the size of the level 1 data (specifying who is who) is around 10.2 GB with almost 92 million triples (subject-predicate-object), containing information about entity name, legal form, headquarters and legal address, geographical location, etc. Related data, such as who owns whom, is published in the same form.

The EIOPA insurance register

The European Supervisory Authority EIOPA publishes the Register of Insurance Undertakings based on information provided by the National Competent Authorities (NCAs). The NCA in each member state is responsible for the authorization and registration of insurance undertakings’ activities. EIOPA collects the data in the national registers and publishes a European insurance register, which includes more than 3.200 domestic insurance undertakings. The register contains entity data like international and commercial name, name of the NCA, addresses, cross-border status, registration dates, etc. Every insurance undertaking requires a LEI, and the LEI is included in the register; this enables us to link the data easily to the GLEIF data.

The EIOPA insurance register is available as a CSV and an Excel file, without formal naming and clear definitions of column names. Linking the register data with other sources is a tedious job, because it must be done by hand. Take for example the LEI data in the register, which is referred to with the column name ‘LEI’; this is perfectly understandable for humans, but for computers this is just a string of characters. Now that the GLEIF has published its ontologies, there is a proper way to refer to a LEI, and that is with the Uniform Resource Identifier (URI) https://www.gleif.org/ontology/L1/LEI, or in short form gleif-L1:LEI.

The idea is to publish the data in the European insurance register in the same manner; as linked data in RDF format using, where applicable, the GLEIF ontology for legal entities and creating an EIOPA ontology for the data that is unique for the insurance register. This allows users of the data to incorporate the insurance register data into the GLEIF RDF dataset and thereby extending the data available on the legal entities with the data from the insurance register.

Creating triples from the EIOPA register

To convert the EIOPA insurance register to linked data in RDF format, I did the following:

  • extract from the GLEIF RDF level 1 dataset on data.world all insurance undertakings and related data, based on the LEI in the EIOPA register;
  • create a provisional ontology with URIs based on https://www.eiopa.europe.eu/ontology/base (this should ideally be done by EIOPA, as they own the domain name in the URI);
  • transform, with this ontology, the data in the EIOPA register to triples (omitting all data from the EIOPA register that is already included in the GLEIF RDF dataset, like names and addresses);
  • publish the triples for the insurance register in RDF Turtle format on data.world.

Because I used the GLEIF ontology where applicable, the triples I created are automatically linked to the relevant data in the GLEIF dataset. Combining the insurance register dataset with the GLEIF RDF dataset results in a set where you have all the GLEIF level 1 data and all data in the EIOPA insurance register combined for all European insurance undertakings.
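Combining the two datasets amounts to loading both sets of triples into one graph. Below is a minimal sketch with rdflib; the file names are placeholders for the datasets published on data.world.

from rdflib import Graph

# Minimal sketch: combine the GLEIF extract and the EIOPA register triples.
# File names are placeholders.
g = Graph()
g.parse("gleif_L1_insurance_undertakings.ttl", format="turtle")
g.parse("eiopa_insurance_register.ttl", format="turtle")

print(f"combined graph contains {len(g)} triples")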

Querying the data

Let’s look at what we have in this combined dataset. Data in RDF is queried with the SPARQL language. Here is an example that returns the data on Achmea Schadeverzekeringen.

SELECT DISTINCT ?p ?o
WHERE
{ ?s gleif-L1:hasLegalName "Achmea schadeverzekeringen N.V." . 
  ?s ?p ?o .}

The query looks for triples where the predicate is gleif-L1:hasLegalName and the object is the legal name of Achmea Schadeverzekeringen N.V., and returns all data of the subject that satisfies this constraint. This returns (where I omitted the prefixes of the objects):

gleif-L1-data:L-72450067SU8C745IAV11
    rdf#type                             LegalEntity
    gleif-base:hasLegalJurisdiction      NL  
    gleif-base:hasEntityStatus           EntityStatusActive  
    gleif-l1:hasLegalName                Achmea Schadeverzekeringen     
                                         N.V.
    gleif-l1:hasLegalForm                ELF-B5PM
    gleif-L1:hasHeadquartersAddress      L-72450067SU8C745IAV11-LAL  
    gleif-L1:hasLegalAddress             L-72450067SU8C745IAV11-LAL  
    gleif-base:hasRegistrationIdentifier BID-RA000463-08053410
    rdf#type                             InsuranceUndertaking
    eiopa-base:hasRegisterIdentifier     IURI-De-Nederlandsche-Bank-
                                         W1686

We see that the rdf#type of this entity is LegalEntity (from the GLEIF data) and the jurisdiction is NL (this has a prefix that refers to the ISO 3166-1 country codes). The legal form refers to another subject called ELF-B5PM. The headquarters and legal address both refer to the same subject that contains the address data of this entity. Then there is a business identifier to the registration data. The last two lines are added by me: a triple to specify that this subject is not only a LegalEntity but also an InsuranceUndertaking (defined in the ontology), and a triple for the Insurance Undertaking Register Identifier (IURI) of this subject (also defined in the ontology).

Let’s look more closely at the references in this list. First the legal form of Achmea (i.e. the predicate and objects of legal form code ELF-B5PM). Included in the GLEIF data is the following (again omitting the prefix of the object):

rdf#type                         EntityLegalForm  
rdf#type                         EntityLegalFormIdentifier  
gleif-base:identifies            ELF-B5PM  
gleif-base:tag                   B5PM  
gleif-base:hasCoverageArea       NL  
gleif-base:hasNameTransliterated naamloze vennootschap  
gleif-base:hasNameLocal          naamloze vennootschap  
gleif-base:hasAbbreviationLocal  NV, N.V., n.v., nv

With the GLEIF data we have this data on all legal entity forms of insurance undertakings in Europe. The local abbreviations are particularly handy as they help us to link an entity’s name extracted from documents or other data sources with its corresponding LEI.

If we look more closely at the EIOPA Register Identifier IURI-De-Nederlandsche-Bank-W1686 then we find the register data of this Achmea entity:

owl:a                         InsuranceUndertakingRegisterIdentifier
gleif-base:identifies         L-72450067SU8C745IAV11
eiopa-base:hasNCA                          De Nederlandsche Bank  
eiopa-base:hasInsuranceUndertakingID       W1686  
eiopa-base:hasEUCountryWhereEntityOperates NL  
eiopa-base:hasCrossBorderStatus            DomesticUndertaking  
eiopa-base:hasRegistrationStartDate        23/12/1991 01:00:00  
eiopa-base:hasRegistrationEndDate          None  
eiopa-base:hasOperationStartDate           23/12/1991 01:00:00  
eiopa-base:hasOperationEndDate             None

The predicate gleif-base:identifies refers back to the subject where gleif-L1:hasLegalName equals the Achmea entity. The other predicates are based on the provisional ontology I made that contains the definitions of the attributes of the EIOPA insurance register. Here we see for example that W1686 is the identifier of this entity in DNB’s insurance register.

Let me give a tiny example of the advantage of using linked data. The GLEIF data contains the geographical location of all legal entities. With the combined dataset it is easy to obtain the location for the insurance undertakings in, for example, the Netherlands. This query returns entity names with latitude and longitude of the legal address of the entity.

SELECT DISTINCT ?name ?lat ?long
WHERE {?sub rdf:type eiopa-base:InsuranceUndertaking ;
            gleif-base:hasLegalJurisdiction CountryCodes:NL ;
            gleif-L1:hasLegalName ?name ;
            gleif-L1:hasLegalAddress/gleif-base:hasCity ?city .
       ?geo gleif-base:hasCity ?city ;
            geo:lat ?lat ; 
            geo:long ?long .}

This result can be plotted on a map, see the link below. If you click on one of the dots then the name of the insurance undertaking will appear.
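Here is a minimal sketch of such a map, using the folium package (the original notebook may use a different plotting library); the rows are placeholders for the (?name, ?lat, ?long) bindings returned by the query above.

import folium

# Placeholder rows standing in for the query results (name, latitude, longitude).
rows = [
    ("Some insurance undertaking N.V.", 52.37, 4.89),
]

m = folium.Map(location=[52.2, 5.3], zoom_start=7)
for name, lat, long in rows:
    folium.CircleMarker(location=[lat, long], radius=4, popup=name).add_to(m)
m.save("insurance_undertakings.html")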

All queries above and the code to make the map are included in the notebook EIOPA Register RDF dataset – SPARQL queries.

The provisional ontology I created is not yet semantically correct and should be improved, for example by incorporating data on NCAs and providing formal definitions. And other data sources could be added, for example the level 2 dataset to identify insurance groups, and the ISIN to LEI relations that are published daily by GLEIF.

By introducing the RDFS/OWL ontologies, the Global LEI Foundation has set an example of how to publish (financial) entity data in a useful manner. The GLEIF RDF dataset significantly reduces the time needed to link the data with other data sources. I hope other organizations that publish financial entity data as part of their mandate will follow this example.

Converting supervisory reports to Semantic Webs: from XBRL to RDF

A growing number of supervisory reports across Europe are based on the XML-based Extensible Business Reporting Language (XBRL) standard. Financial entities such as banks, insurance undertakings and pension institutions are required to submit their reports to their supervisors in this format.

XBRL is a language for modeling, exchanging and automatically processing business and financial information. Reports in this format (called instance documents) are based on metadata (set out in taxonomies) that add semantic meaning to the data points that are reported. You can choose different implementations, but overall an XBRL taxonomy provides a semantically rich data model, and that has always been one of the main advantages of XBRL.

However, in its raw format (an XML file) each report is basically a machine readable document with a tree structure that does not enable easy integration with related data from other sources or integration with text documents and their contents.

In this blog, I will show that converting XBRL reports to another format allows easier integration and understanding. That other format is based on Semantic Webs. It has been shown that XBRL can be converted to a Semantic Web without any loss of information (see for example this article). So if we convert the XBRL format to a Semantic Web, we keep the structure and the meaning provided by the taxonomy. The result is basically a graph, and this format makes integration with other linked data much easier.

A Semantic Web consists of formats and technologies that are rather old (from a computer science perspective): it originated around the same time as XBRL, some twenty years ago. And because it tried to solve problems similar to those addressed by the XBRL standard (lack of semantic meaning in the World Wide Web versus lack of semantic meaning in business and financial data), it is to some extent based on similar concepts. It was, however, developed completely separately from XBRL.

The general concept of a Semantic Web, where data is linked together to provide semantic meaning, is also known as a knowledge graph.

How does a Semantic Web work? One of the formats of the Semantic Web is the Resource Description Framework (RDF), originally designed as a metadata data model. RDF was adopted as a World Wide Web Consortium recommendation in 1999. The RDF 1.0 specification was published in 2004, and RDF 1.1 followed in 2014.

The RDF format is based on expressions in the form of subject-predicate-object, called triples. The subject and object denote (web) resources and the predicate denotes the relationship between the subject and the object. For example the expression ‘Spinoza has written the book Ethica Ordine Geometrico Demonstrata’ in RDF is a triple with a subject denoting “Spinoza”, a predicate denoting “has written”, and an object denoting “the book Ethica Ordine Geometrico Demonstrata”. This is a different approach than for example object-oriented models with an entity (Spinoza), attribute (book) and value (Ethica).
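In code, such a triple is easy to create. Here is a minimal sketch with rdflib; the example.org namespace is a placeholder.

from rdflib import Graph, Literal, Namespace

# Minimal sketch: the Spinoza example as a subject-predicate-object triple.
EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)
g.add((EX.Spinoza, EX.hasWritten, Literal("Ethica Ordine Geometrico Demonstrata")))

print(g.serialize(format="turtle"))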

The RDF format could potentially solve some problems with the XBRL format. To explain this, I converted an XBRL-instance (a test instance file from EIOPA for Solvency 2) to RDF format.

Below you see the representation of one arbitrary data point in the report (called a fact) in RDF format and visualized as a network (I used the Python package networkx). The predicates contain the complete web resource so I limited the name to the last word to make it readable.

The red node is the starting point of the data point. The red labels on the lines describe the predicate between subject and object. You see that the fact (subject) ‘has decimals’ (predicate) 2 (object), and furthermore has unit EUR, has value 838522076.03, has type s2md_met:mi503 (an internal code describing Payments for reported but not settled claims) and has some other properties.

The data point also has a so-called context that defines the entity to which the fact applies, the period of time the fact is relevant (in this case 2019-12-31) and also a scenario, which consists of additional metadata of the data point. In this case we see that the data point is related to statutory accounts, non-life and health non-STL, direct business and accepted during the period (and a node without a label).

All facts in every XBRL instance are structured in this way, which means that you can, for example, search for all facts with the label statutory accounts. Furthermore, because XBRL uses namespaces, you can unambiguously identify predicates and objects in the report. For example, you see that the entity node has an identifier (starting with 0LFF1…) and a scheme (17442). The scheme refers to the web resource for the ISO standard 17442, which specifies the Legal Entity Identifier (LEI), so the entity is unambiguously identified by the given LEI code. If you add other XBRL instances with references to that entity, the data is automatically linked because those instances will contain exactly the same entity node.

The RDF representation of the XBRL fact above is:

_:provenance1 xl:instance "filename".
_:unit_u xbrli:measure iso4217:EUR.
_:fact926
  xl:provenance _:provenance1;
  xl:type xbrli:fact;
  rdf:type s2md_met:mi503;
  rdf:value "838522076.03"^^xsd:decimal;
  xbrli:decimals "2"^^xsd:integer;
  xbrli:unit _:unit_u;
  xbrli:context _:context_BLx79_DIx5_IZx1_TBx28_VGx84.
_:context_BLx79_DIx5_IZx1_TBx28_VGx84
  xl:type xbrli:context;
  xbrli:entity [
    xbrli:identifier "0LFF1WMNTWG5PTIYYI38";
    xbrli:scheme <http://standards.iso.org/iso/17442>;
    ];
  xbrli:scenario [
    xbrldi:explicitMember "s2c_LB:x79"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_DI:x5"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_RT:x1"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_LB:x28"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_AM:x84"^^rdf:XMLLiteral;
    ];
  xbrli:instant "2019-12-31"^^xsd:date.

Instead of storing the data in separate templates with often unclear code names you can also convert the XBRL data to one large Semantic Web where all facts are linked together. The RDF format thus provides a graph model which allows easier integration and visualization (and, for me at least, easier understanding). It allows adding and linking data from other sources, such as Solvency 2 documents and external data, in the same graph.

Typically, supervisory reports consist of thousands of data points, and supervisors receive reports from many entities each period. How would you store that information? I think the natural way to store an XBRL instance is not a relational database but a graph database (like GraphDB or Neo4j). These databases can store the facts with all their metadata in a structured way and make it possible to query the graph efficiently. In the next blog, I will explore graph databases and query languages for XBRL reports converted to the RDF format.
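
To give an idea of what such a query could look like, here is a sketch that loads the converted triples into rdflib and selects all facts with their value, unit and context; the PREFIX IRIs below are placeholders and must match the namespaces actually used in the conversion:

from rdflib import Graph

g = Graph().parse("instance.ttl", format="turtle")   # hypothetical file with the converted report

query = """
PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX xbrli: <http://www.xbrl.org/2003/instance#>
PREFIX xl:    <http://www.xbrl.org/2003/XLink#>

SELECT ?fact ?value ?unit ?context
WHERE {
    ?fact xl:type xbrli:fact ;
          rdf:value ?value ;
          xbrli:unit ?unit ;
          xbrli:context ?context .
}
"""
for row in g.query(query):
    print(row.fact, row.value, row.unit, row.context)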

Pilot Data Quality Rules

Published by:

Data Quality is receiving more and more attention within the financial sector, and deservedly so. That’s why DNB will start a pilot with the insurance sector in September to enable entities to run the required open source code locally and to evaluate Solvency 2 quantitative reports with our Data Quality Rules.

In the coming weeks we will:

With these tools you are able to assess the data quality of your Solvency 2 quantitative reports before submitting them to DNB. You can do that within your own data science environment.

We worked hard to make this as easy as possible; the only things you need are Anaconda / Jupyter Notebooks (Python) and Git to clone our repositories from GitHub (all free and open source software), and of course the data you want to check. We also provide code to evaluate the XBRL instance files.

We are planning workshops to explain how to use the code and validation rules and to go through the process step by step.

If you want to join or know more, please let me know (w.j.willemse at dnb.nl).

Code moved to GitHub/DNB

Published by:

It has been a bit quiet here on my side, but I have been busy moving the source code from the notebooks here to the GitHub account of DNB. In doing so, we improved the code in many areas. Code development can now be done efficiently using tools for Continuous Integration and Deployment. This enables everyone to work with us to explore new ideas and to test, improve and expand the code.

Let me briefly introduce our repositories (these are all based on code that I wrote for the blogs on this website).

Our insurance repositories

solvency2-data

A package for retrieving Solvency 2 data. Our idea is to provide you with one package for all Solvency 2 data retrieval.

Currently it is able to download the risk-free interest rate term structures from the EIOPA website, so you don’t need to search for them yourself. It also implements the Smith-Wilson algorithm, so you can make your own curves with different parameters.

This code is deployed as a package to the Python Package Index (a.k.a. the cheese shop), so you can install it with pip.

data-patterns

A package aimed at improving the data quality of your reports. With this code you can generate and evaluate patterns, and we plan to publish validation and plausibility patterns in addition to the ones already in the taxonomies, so that you can evaluate your reports against these patterns as well.

This package is also deployed to the Python Package Index.

solvency2-data-science

A project with data science applications and tutorials on using the packages above. Currently, we have a data science tutorial using the public Solvency 2 data and a tutorial for the data-patterns package.

solvency2-nlp

Our experimental Natural Language Processing projects with Solvency 2 documents. I have already published some NLP results here (reading the Solvency 2 legislation documents, and Word2Vec and topic modelling with SFCR documents), and we are planning to provide these and other applications in this repository.

All repositories were made from cookiecutter templates, which is a very easy way to set up your projects.

Take a look at the repositories. If you have suggestions for further improvements or ideas for new features, do not hesitate to raise an issue on GitHub. In the documentation of each repository you can find more information on how to contribute.

Word2vec models for SFCRs

Published by:

Word2vec is a well-known algorithm for natural language processing that often leads to surprisingly good results, if trained properly. It consists of a shallow neural network that maps words to an n-dimensional vector space, i.e. it produces vector representations of words (so-called word embeddings). Word2vec does this in such a way that words used in similar contexts are embedded near each other (their respective vectors are close to each other). In this blog I will show you some of the results of word2vec models trained with Wikipedia and insurance-related documents.

One of the nice properties of a word2vec model is that it allows us to do calculations with words. The distance between two word vectors provides a measure for the linguistic or semantic similarity of the corresponding words. So if we calculate the nearest neighbors of a word vector, we find words similar to that word. It is also possible to calculate vector differences between two word vectors. For example, it appears that for a word2vec model trained with a large data set, the vector difference between man and woman is roughly equal to the difference between king and queen, or in vector notation: king - man + woman = queen. If you find this utterly strange then you are not alone. Besides some intuitive and informal explanations, it is not yet completely clear why word2vec models in general yield these results.
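
In gensim this kind of calculation can be done directly on a trained model; with a model trained on a sufficiently large English corpus, a call like the following would typically return queen as the nearest word (a sketch, not output from the models trained below):

model.wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1)
# typically returns something like [('queen', ...)] for a large English corpus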

Word2vec models need to be trained with a large corpus of text data in order to achieve word embeddings that allow these kinds of calculations. There are some large pre-trained word vectors available, such as the GloVe Twitter word vectors, trained with 2 billion tweets, and the word2vec model based on Google News (trained with 100 billion words). However, most of them are in English and are often trained on general rather than domain-specific vocabulary.

So let’s see if we can train word2vec models for non-English European languages with specific insurance vocabulary. A way to do this is to train a word2vec model with the Wikipedia pages of a specific language and to additionally train the model with sentences found in public documents of insurance undertakings (SFCRs) and in insurance legislation. In doing so, the word2vec model should be able to capture the specific language domain of insurance.

The Wikipedia word2vec model

Data dumps of all Wikimedia wikis, in the form of XML files, are provided here. I obtained the latest Wikipedia pages and articles of all official European languages (bg, es, cs, da, de, et, el, en, fr, hr, it, lv, lt, hu, mt, nl, pl, pt, ro, sk, sl, fi, sv). These are bz2-compressed files whose sizes range from 8.6 MB (Maltese) to 16.9 GB (English). The Dutch file is around 1.5 GB; uncompressed it is about 5 times that size and contains more than 2.5 million Wikipedia pages. This is too large to store in memory (at least on my computer), so you need Python generator functions to process the files without loading them completely into memory.
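
A minimal sketch of such a generator function, assuming the Dutch dump has been downloaded as nlwiki-latest-pages-articles.xml.bz2 (the file name is just an example):

import bz2
import xml.etree.ElementTree as ET

def wiki_texts(path):
    """Yield the raw text of each Wikipedia page without loading the dump into memory."""
    with bz2.open(path, "rb") as f:
        for _, elem in ET.iterparse(f):
            if elem.tag.endswith("}text") and elem.text:
                yield elem.text
            elem.clear()   # free the memory of elements that have already been processed

for text in wiki_texts("nlwiki-latest-pages-articles.xml.bz2"):
    ...   # tokenize the text and feed the sentences to the word2vec training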

The downloaded XML files are parsed, and page titles and texts are then processed with the nltk package (stop words are removed and sentences are tokenized and preprocessed). No n-grams were applied. For the word2vec model I used the implementation in the gensim package.
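
A condensed sketch of this step, reusing the wiki_texts generator above (gensim 4.x parameter names; the actual preprocessing was more elaborate, and the nltk tokenizers and stop word lists need to be downloaded once with nltk.download):

from gensim.models import Word2Vec
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

stop_words = set(stopwords.words("dutch"))

def preprocess(text):
    """Split a page into tokenized sentences, lowercased and without stop words."""
    for sentence in sent_tokenize(text, language="dutch"):
        tokens = [t.lower() for t in word_tokenize(sentence, language="dutch") if t.isalpha()]
        yield [t for t in tokens if t not in stop_words]

class WikiSentences:
    """Restartable iterable over tokenized sentences, so gensim can make multiple passes."""
    def __init__(self, path):
        self.path = path
    def __iter__(self):
        for text in wiki_texts(self.path):
            yield from preprocess(text)

model = Word2Vec(WikiSentences("nlwiki-latest-pages-articles.xml.bz2"),
                 vector_size=100, window=5, min_count=5, workers=4)
model.save("wikipedia_nl.word2vec")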

Let’s look at some results of the Wikipedia word2vec models. If we retrieve the top ten nearest word vectors of the Dutch word for elephant, we get:

In []: model.wv.most_similar('olifant', topn = 10)
Out[]: [('olifanten', 0.704888105392456),
        ('neushoorn', 0.6430075168609619),
        ('tijger', 0.6399451494216919),
        ('luipaard', 0.6376790404319763),
        ('nijlpaard', 0.6358680725097656),
        ('kameel', 0.5886276960372925),
        ('neushoorns', 0.5880545377731323),
        ('ezel', 0.5879943370819092),
        ('giraf', 0.5807977914810181),
        ('struisvogel', 0.5724758505821228)]

These are all general Dutch names for (wild) animals. So the Dutch word2vec model appears to map animal names to the same area of the vector space. The word2vec models of other languages appear to do the same; for example, norsut (Finnish for elephants) has the following top ten similar words: krokotiilit, sarvikuonot, käärmeet, virtahevot, apinat, hylkeet, hyeenat, kilpikonnat, jänikset and merileijonat. Again, these are all names for animals (with a slight preference for Nordic sea animals).

In the Danish word2vec model, the top ten most similar words for mads (a Danish first name derived from Matthew) are:

In []: model.wv.most_similar('mads', topn = 10)
Out[]: [('mikkel', 0.6680521965026855),
        ('nicolaj', 0.6564826965332031),
        ('kasper', 0.6114416122436523),
        ('mathias', 0.6102851033210754),
        ('rasmus', 0.6025335788726807),
        ('theis', 0.6013824343681335),
        ('rikke', 0.5957099199295044),
        ('janni', 0.5956574082374573),
        ('refslund', 0.5891965627670288),
        ('kristoffer', 0.5842193365097046)]

Almost all are first names, except for refslund, the surname of a Danish chef whose first name is Mads. The Danish word2vec model appears to map first names to the same area of the vector space, resulting in a high similarity between first names.

Re-training the Wikipedia Word2vec with SFCRs

The second step is to train the word2vec models with the insurance related text documents. Although the Wikipedia pages for many languages contain some pages on insurance and insurance undertakings, it is difficult to derive the specific language of this domain from these pages. For example the Dutch word for risk margin does not occur in the Dutch Wikipedia pages, and the same holds for many other technical terms. In addition to the Wikipedia pages, we should therefore train the model with insurance specific documents. For this I used the public Solvency and Financial Condition Reports (SFCRs) of Dutch insurance undertakings and the Dutch text of the Solvency II Delegated Acts (here is how to download and read it).

The SFCR sentences are processed in the same manner as the Wikipedia pages, although here I applied bi- and trigrams to be able to distinguish insurance terms rather than separate words (for example technical provisions is a bigram and treated as one word, technical_provisions).
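
A sketch of this re-training step with gensim, assuming sfcr_sentences is a list of tokenized SFCR (and Delegated Acts) sentences and model is the Wikipedia model trained above (gensim 4.x API, with illustrative thresholds):

from gensim.models.phrases import Phrases, Phraser

# learn bi- and trigrams from the SFCR sentences
bigram = Phraser(Phrases(sfcr_sentences, min_count=5, threshold=10))
trigram = Phraser(Phrases(bigram[sfcr_sentences], min_count=5, threshold=10))
sfcr_ngrams = [trigram[bigram[s]] for s in sfcr_sentences]

# add the new vocabulary and continue training the existing Wikipedia model
model.build_vocab(sfcr_ngrams, update=True)
model.train(sfcr_ngrams, total_examples=len(sfcr_ngrams), epochs=model.epochs)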

Now the model is able to find words similar to the Dutch word for risk margin.

In []: model.wv.most_similar('risicomarge')
Out[]: [('beste_schatting', 0.43119704723358154),
        ('technische_voorziening', 0.42812830209732056),
        ('technische_voorzieningen', 0.4108312726020813),
        ('inproduct', 0.409644216299057),
        ('heffingskorting', 0.4008549451828003),
        ('voorziening', 0.3887258470058441),
        ('best_estimate', 0.3886040449142456),
        ('contant_maken', 0.37772029638290405),
        ('optelling', 0.3554660379886627),
        ('brutowinst', 0.3554105758666992)]

This already looks nice. Closest to risk margin are the Dutch terms beste_schatting (English: best estimate) and technische_voorziening(en) (English: technical provision, singular and plural). The relation to heffingskorting (English: tax credit) is strange here. Perhaps the word risk margin is not used solely in insurance.

Let’s do another one. The Dutch acronym skv corresponds to the English scr (solvency capital requirement).

In []: model.wv.most_similar('skv')
Out[]: [('mkv', 0.6492390036582947),
        ('mcr_ratio', 0.4787723124027252),
        ('kapitaalseis', 0.46219778060913086),
        ('mcr', 0.440476655960083),
        ('bscr', 0.4224048852920532),
        ('scr_ratio', 0.41769397258758545),
        ('ðhail', 0.41652536392211914),
        ('solvency_capital', 0.4136047661304474),
        ('mcr_scr', 0.40923237800598145),
        ('solvabiliteits', 0.406883180141449)]

The SFCR documents were sufficient to derive an association between skv and mkv (the Dutch equivalent of mcr), and with the English acronyms scr and mcr (apparently the Dutch documents sometimes use scr and mcr in the same context). Other similar words are kapitaalseis (English: capital requirement) and bscr. Because they learn from context, word2vec models are able to learn words that are synonyms and sometimes antonyms (for example, we say ‘today is a cold day’ and ‘today is a hot day’, where hot and cold are used in the same manner).

For an example of a vector calculation look at the following result.

In []: model.wv.most_similar(positive = ['dnb', 'duitsland'], 
                             negative = ['nederland'], topn = 5)
Out[]: [('bundesbank', 0.4988047778606415),
        ('bundestag', 0.4865422248840332),
        ('simplesearch', 0.452720582485199),
        ('deutsche', 0.437085896730423),
        ('bondsdag', 0.43249475955963135)]

This function finds the top five most similar words to the vector DNB - Nederland + Duitsland. This expression basically asks for the German equivalent of De Nederlandsche Bank (DNB). The model generates the correct answer: the German analogue of DNB as a central bank is the Bundesbank. I think this is somehow incorporated in the Wikipedia pages, because the German equivalent of DNB as an insurance supervisor is not the Bundesbank but BaFin, and this was not picked up by the model. It is not perfect (the other words in the list are less related, and for other countries this does not work as well). We need more documents to find more stable associations. But this to me is already pretty surprising.

There has been some research in which the word vectors of word2vec models of two languages were mapped onto each other with a linear transformation (see for example Exploiting Similarities among Languages for Machine Translation, Mikolov et al.). In doing so, it was possible to obtain a model for machine translation. So perhaps it is possible, for some European languages with a sufficiently large corpus of SFCRs, to generate one large model that is to some extent language independent. To derive the translation matrices we could use the different translations of European legislative texts, because by their nature these texts provide some of the most reliable translations available.
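
As a rough illustration, such a translation matrix could be estimated with numpy, assuming X and Y are matrices whose rows are the source- and target-language vectors of known word pairs (for example taken from aligned legislative texts) and target_model is a word2vec model of the target language:

import numpy as np

# X: n x d matrix of source-language word vectors, Y: n x d matrix of their translations
# solve min over W of ||X W - Y|| in the least-squares sense
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def translate(vector, target_model, topn=5):
    """Map a source-language word vector into the target space and return the nearest target words."""
    return target_model.wv.similar_by_vector(vector @ W, topn=topn)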

But that’s it for me for now. Word2vec is a versatile and powerful algorithm that can be used in numerous natural language applications. It is relatively easy to generate these models in languages other than English, and it is possible to train models that can deal with the specifics of insurance terminology, as I showed in this blog.