Category Archives: Data extraction

EIOPA’s Solvency 2 taxonomy in RDF


To use the metadata from XBRL taxonomies, like labels, hierarchies, template structures and formulas, you often need licensed software to process the taxonomy and convert the XML content to readable formats. In an earlier blog I showed that it is useful to convert XBRL instance data to a linked data set in RDF and then query that data to retrieve the desired information. In this blog I will show how to do this with taxonomies: with a number of small SPARQL queries the complete Data Point Model (DPM) of (European) XBRL taxonomies can be retrieved.

The main purpose of retrieving metadata in this manner is to be able to use taxonomy metadata in data science environments, for example to be able to apply machine learning models that use taxonomy metadata like hierarchies and to use concept and element labels from a taxonomy in NLP, for example Named Entity Recognition tasks to link quantitative reports to (unstructured, or: not yet structured) text data.

The lightweight solution that I show here is completely based on open source code, in the form of my xbrl2rdf package. This package converts XBRL instance files and all related taxonomy files (schemas and linkbases) to RDF and RDF-star, using only lxml and rdflib, and nothing else.

The examples below use the Solvency 2 taxonomy, but other taxonomies work as well. A Gist with the notebook containing the code below can be found here.

Importing the data

With the xbrl2rdf package I have converted the EIOPA example instance file for quarterly reports for solo undertakings (QRS) to RDF. All taxonomy concepts that are used in that instance are included in the RDF data set. The result can be read into memory with rdflib.

# RDF graph loading
from rdflib import Graph as RDFGraph  # rdflib provides the Graph class

path = "../data/rdf/qrs_240_instance.ttl"

g = RDFGraph()
g.parse(path, format='turtle')

print("rdflib Graph loaded successfully with {} triples".format(len(g)))

This returns

rdflib Graph loaded successfully with 1203744 triples

So we have an RDF graph with 1.2 million triples that contains all data: the facts and the related taxonomy concepts, including all labels, validation rules, template structures, etc. The original RDF data file is around 64 MB (combining instance and taxonomy triples). Reading and processing this file into an in-memory RDF graph takes some time, but then the graph can easily be queried.

Extracting template URIs

Let’s start with a simple query. Table or template URIs are subjects in the triple “subject xl:type table:table”. To get a list with all templates of an instance (in this case the first five) we run

q = """
  SELECT ?a
  WHERE {
    ?a xl:type table:table .
  }"""
tables = [str(row[0]) for row in g.query(q)]
tables.sort()
tables[0:5]

This returns a list of the URIs of the templates contained in the instance file.

['http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.01.01.02.01#s2md_tS.01.01.02.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.01.02.01.01#s2md_tS.01.02.01.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.02.01.02.01#s2md_tS.02.01.02.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.05.01.02.01#s2md_tS.05.01.02.01',
 'http://eiopa.europa.eu/xbrl/s2md/fws/solvency/solvency2/2019-07-15/tab/S.05.01.02.02#s2md_tS.05.01.02.02']

Extracting the explicit domains

Next, we extract the explicit domains and related data in the taxonomy. A domain is XBRL terminology for a set of elements sharing a specified semantic nature. An explicit domain has its elements enumerated in the taxonomy and can be found as the subject of the triple ‘subject rdf:type model:explicitDomainType’.

q = """
  SELECT DISTINCT ?t ?x1 ?x2 ?x3 ?x4
  WHERE {
    ?t rdf:type model:explicitDomainType .
    ?t xbrli:periodType ?x1 .
    ?t model:creationDate ?x2 .
    ?t xbrli:nillable ?x3 .
    ?t xbrli:abstract ?x4 .
  }"""

The first five domains (of 41 in total) are

index  Domain name  Domain label            period type  creation date  nillable  abstract
0      LB           Lines of businesses     instant      2014-07-07     true      true
1      MC           Main categories         instant      2014-07-07     true      true
2      TI           Time intervals          instant      2014-07-07     true      true
3      AO           Article 112 and 167     instant      2014-07-07     true      true
4      CG           Collaterals/Guarantees  instant      2014-07-07     true      true

Domain names and labels with attributes

So, the label of the domain LB is Lines of businesses; it has been there since the early versions of the taxonomy. If a domain is modified then this is also included as a triple in the data set.

Extracting domain members

Elements of an explicit domain are called domain members. A domain member (or simply a member) is an enumerated element of an explicit domain. All members of a domain share a certain common nature. To get the members of a domain, we define a function that finds all domain-member relations of a given domain and retrieves the label of each member. In SPARQL this is:

def members(domain):
    q = """
      SELECT DISTINCT ?t ?label
      WHERE {
        ?l arcrole7:domain-member [ xl:from <"""+str(domain)+"""> ;
                                    xl:to ?t ] .
        ?t rdf:type nonnum:domainItemType .
        ?x arcrole3:concept-label [ xl:from ?t ;
                                    xl:to [rdf:value ?label ] ] .
        }"""
    return g.query(q)

All members of all domains can be retrieved by running this function for all domains defined earlier. Based on the output we create a Pandas DataFrame with the results.

from urllib.parse import urldefrag

columns = ['Domain', 'Member', 'Member label']
df_members = pd.DataFrame(columns=columns)
for d in df_domains.iloc[:, 0]:
    data = [[urldefrag(d)[1]] + [urldefrag(row[0])[1]] + list(row[1:])
            for row in members(d)]
    df_members = pd.concat([df_members,
                            pd.DataFrame(data=data, columns=columns)])

In total there are 4,879 members across all domains (in this taxonomy).

index  Domain  Member  Member label
0      LB      x0      Total/NA
1      LB      x1      Accident and sickness
2      LB      x2      Motor
3      LB      x3      Fire and other damage to property
4      LB      x4      Aviation, marine and transport

The first five members of the domain LB (Lines of Businesses)

This allows us, for example, to retrieve all facts in a report that are related to the term ‘motor’ because a reported fact contains references to the domain members to which the fact relates.
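
As a sketch of such a query (not in the original notebook; it assumes the prefixes xl, xbrli and xbrldi are bound in the graph, and that the member literal has the form shown in the fact example of a later post below, e.g. "s2c_LB:x2" for the member Motor):

q = """
  SELECT ?fact ?value
  WHERE {
    ?fact xl:type xbrli:fact ;
          rdf:value ?value ;
          xbrli:context ?context .
    ?context xbrli:scenario ?scenario .
    ?scenario xbrldi:explicitMember ?member .
    FILTER(STR(?member) = "s2c_LB:x2")
  }"""
facts_motor = list(g.query(q))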

Extracting the template structure

Template structures are stored in the taxonomy as a tree of linked elements along the axes x, y and, if applicable, z. The elements have a label and a row-column code (this holds at least for EIOPA and DNB taxonomies), and have a certain depth, i.e. they can be subcategories of other elements. For example, in the Solvency 2 balance sheet template the element ‘Equities’ has subcategories ‘Equities – listed’ and ‘Equities – unlisted’. I have not included the code here (a rough sketch follows after the table), but with a few lines you can extract the complete template structures as they are stored in the taxonomy. For example, for the balance sheet (S.02.01.02.01) we get:

    axis  depth  rc-code  label
1   x     1      C0010    Solvency II value
3   y     1               Assets
4   y     1               Liabilities
5   y     2      R0010    Goodwill
6   y     2      R0020    Deferred acquisition costs
7   y     2      R0030    Intangible assets
8   y     2      R0040    Deferred tax assets
9   y     2      R0050    Pension benefit surplus
10  y     2      R0060    Property, plant & equipment held for own use
11  y     2      R0070    Investments (other than assets held for index-…
12  y     3      R0080    Property (other than for own use)
13  y     3      R0090    Holdings in related undertakings, including pa…
14  y     3      R0100    Equities
15  y     4      R0110    Equities – listed
16  y     4      R0120    Equities – unlisted
17  y     3      R0130    Bonds
18  y     4      R0140    Government Bonds
19  y     4      R0150    Corporate Bonds
20  y     4      R0160    Structured notes
21  y     4      R0170    Collateralised securities
22  y     3      R0180    Collective Investments Undertakings
23  y     3      R0190    Derivatives
24  y     3      R0200    Deposits other than cash equivalents

The first 25 lines of the balance sheet template
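
A rough sketch of how the tree could be walked is shown below. The arcrole predicate is left as a variable because the exact arcroles depend on the converter output (an assumption to be adapted); the links are reified with xl:from and xl:to, as in the link examples of the next post.

def children(node):
    # any reified link (xl:from / xl:to) leaving this node, ordered by xl:order;
    # in practice ?arcrole should be restricted to the table linkbase arcroles
    q = """
      SELECT ?to
      WHERE {
        ?l ?arcrole [ xl:from <""" + str(node) + """> ;
                      xl:to ?to ;
                      xl:order ?order ] .
      }
      ORDER BY ?order"""
    return [row[0] for row in g.query(q)]

def walk(node, depth = 0):
    # print each node with an indentation that reflects its depth in the tree
    print("  " * depth + str(node))
    for child in children(node):
        walk(child, depth + 1)

# for example: walk(tables[0]) prints the tree of the first template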

With relatively small SPARQL queries, it is possible to retrieve metadata from XBRL taxonomies. This works well because we have converted the original taxonomy (in xml) to the linked data format RDF; and this format is especially well suited for representing and querying XBRL data.

The examples above show that it is possible to retrieve the complete Solvency 2 Data Point Model (and more) from the taxonomy in RDF and make it available in a Python environment. This allows incorporation of metadata in machine learning models and in NLP applications. I hope that this approach will allow more data scientists to use existing metadata from XBRL taxonomies.

Converting XBRL to RDF-star


Lately I have been working on the conversion of XBRL instances and related taxonomy schemas and linkbases to RDF and RDF-star. In these semantic data formats, you can link XBRL data with other data sources and query the data in a fairly easy manner. RDF-star is an extension of RDF that in some situations allows a more compact description of linked data, and thereby narrows the gap between RDF and property graphs. In this blog I will show how this works, using the XBRL taxonomy definitions as an example.

In a previous blog I showed that XBRL instance facts can be converted to RDF and visualized as a network. The same can be done with the related taxonomy elements. An XBRL taxonomy consists of concepts and relations between concepts that define calculations, presentations, labels and definitions. The concepts are laid down (mostly) in XML schemas and the relations in linkbases using XML schemas and XLinks. By converting the XBRL taxonomy to RDF, the XBRL fact data is linked to its corresponding metadata in the taxonomy.

XBRL to RDF

Some work has already been done on the conversion of XBRL to RDF, most notably by Dave Raggett. His project xbrlimport, written in C++ and available on SourceForge, converts XBRL data to RDF triples. His approach is clean and straightforward and reuses the original namespaces of the XBRL data (with some obvious elements translated to predicates with RDF namespaces).

I used Raggett’s xbrlimport as a starting point, translated it to Python, added XBRL items that were introduced after publication of the code and improved a number of things. The code is now for example able to convert all EIOPA’s Solvency 2 taxonomy elements with all metadata available to RDF format. This code is available under the same license as xbrlimport (GNU General Public License) as a Python Package on pypi.org. You can take an XBRL instance with corresponding taxonomy (in the form of a zip-file) and convert the contents to RDF and RDF-star. This code will look up any references (URIs) in the XBRL instance to the taxonomy in the zip-file and convert the relevant files to RDF.

Let’s look at some examples of the Solvency 2 taxonomy converted to RDF. The RDF triple of an arbitrary XBRL concept from the Solvency 2 taxonomy looks like this (in turtle format):

s2md_met:mi362 
    rdf:type xbrli:monetaryItemType ;
    xbrli:periodType """instant"""^^rdf:XMLLiteral ;
    model:creationDate """2014-07-07"""^^xsd:dateTime ;
    xbrli:substitutionGroup xbrli:item ;
    xbrli:nillable "true"^^xsd:boolean ;

This example describes the triples of concept s2md_met:mi362 (a Solvency 2 metric). With these triples we have exactly the same data as in the related XML file but now in the form of triples. Namespaces are derived from the XML file (except rdf:type) and datatypes are transformed to RDF datatypes with proper RDF syntax.

This can be done for all concepts to which the facts of an XBRL instance refer. If you have facts in RDF format, then these concepts are automatically linked with the concepts in the taxonomy because the URIs of the concepts are the same. This creates a network of facts with all related metadata of the facts.
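
As a sketch of what this linking makes possible (not from the original post; it assumes the instance and taxonomy triples have been loaded into an rdflib graph g with the prefixes bound, as in the other posts in this category), a single query can return each fact together with metadata of its concept, for example the period type:

q = """
  SELECT ?fact ?value ?periodType
  WHERE {
    ?fact xl:type xbrli:fact ;
          rdf:type ?concept ;
          rdf:value ?value .
    ?concept xbrli:periodType ?periodType .
  }"""
for row in g.query(q):
    print(row)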

An XBRL taxonomy also contains links that relate concepts to each other for several purposes (to provide labels, definitions, presentations and calculations). An example of a link is the following.

_:link2 arcrole:concept-label [
    xl:type xl:link ;
    xl:role role2:link ;
    xl:from s2md_met:mi362 ;
    xl:to s2md_met:label_s2md_mi362 ;
    ] .

The link relates concept mi362 with label mi362 by creating a new subject _:link2 with predicate arcrole:concept-label and an object which contains all data about the link (including the xl:from and xl:to and the attributes of the link). This way of introducing a new subject to specify a link between two concepts is called reification and is a bit artificial, because you would like to link the concept directly with the label, such as

s2md_met:mi281 arcrole:concept-label s2md_met:label_s2md_mi281

However, then you are unable in RDF to link the attributes (like the order and the role) to the relation. This is one of the disadvantages of the current RDF format. There appears to be no easy way to do this in RDF, other than by using this artificial reification approach (some other solutions exist, like the singleton property approach, but all of them have disadvantages).

The new RDF-star format

Recently, the RDF-star working group published their first Draft Community Report. In this report they introduced the new RDF-star and SPARQL-star specifications. These new specifications, although not yet a W3C standard, enable a more compact specification of linked datasets, resulting in simpler graphs with fewer nodes.

Let’s look what this means for the XBRL linkbases with the following example. Suppose we have the following link definition.

_:link1 arcrole:breakdown-tree [
    xl:from _:s2md_a1 ;
    xl:to _:s2md_a1.root ;
    xl:type xl:link ;
    xl:role tab:S.01.01.02.01 ;
    xl:order "0"^^xsd:decimal ;
    ] .

The subject in this case is _:link1 with predicate arcrole:breakdown-tree, so this link describes a part of a table template. It points to an object node with all the information of the link, i.e. from, to, type, role and order from the xl namespace. Note that there is no triple with _:s2md_a1 (xl:from) as a subject and _:s2md_a1.root (xl:to) as an object. So if you want to know the relations of the concept _:s2md_a1 you need to look at the link triples and look for entries where xl:from equals the concept.

With the new RDF-star specifications you can just add the triple and then add properties to the triple as a whole, so the example would read

_:s2md_a1 arcrole:breakdown-tree _:s2md_a1.root .

<<_:s2md_a1 arcrole:breakdown-tree _:s2md_a1.root>> 
    xl:role tab:S.01.01.02.01 ;
    xl:order "0"^^xsd:decimal ;
    .

Which is basically what we need to define. If you now want to know the relations of the subject _:s2md_a1 then you just look for triples with this subject. In the visual presentation of the RDF dataset you will see a direct link between the two concepts. This new RDF format also allows simpler SPARQL queries.
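
For example, a query for all breakdown-tree relations with their order could then be written directly on the triples (a sketch only; it needs a triple store with RDF-star/SPARQL-star support and assumes the prefixes of the converter output):

# sketch of a SPARQL-star query; not executable in plain rdflib
q = """
  SELECT ?from ?to ?order
  WHERE {
    ?from arcrole:breakdown-tree ?to .
    << ?from arcrole:breakdown-tree ?to >> xl:order ?order .
  }"""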

This blog has become a bit technical, but I hope you see that the RDF-star specification allows a much needed simplification of RDF triples. I showed that the conversion of XBRL taxonomies to RDF-star leads to a smaller number of triples and also to less complex triples. The resulting taxonomy triples lead to less complex graphs and can be used to derive the XBRL labels, template structures, validation rules and definitions, just by using SPARQL queries.

Europe’s insurance register linked to the GLEIF RDF dataset


Number 7 of my New Year’s Resolutions list reads “only use and provide linked data”. So, to start the year well, I decided to do a little experiment to combine insurance undertakings register data with publicly available legal entity data. In concrete terms, I wanted to provide the European insurance register published by EIOPA (containing all licensed insurance undertakings in Europe) as linked data with the Legal Entity data from the Global Legal Entity Identifier Foundation (GLEIF). In this blog I will describe what I have done to do so.

The GLEIF data

The GLEIF data consists of information on all legal entities in the world (entities with a Legal Entity Identifier). A LEI is required for any legal entity that is involved in financial transactions or operates within the financial system. If an organization needs a LEI then it requests one at a local registration agent; for the Netherlands these are the Authority for the Financial Markets (AFM), the Chamber of Commerce (KvK) and the tax authority, among others. GLEIF receives data from these agents in each country and makes the collected LEI data available in a number of forms (API, CSV, etc.).

The really cool thing is that in 2019, together with data.world, GLEIF developed an RDFS/OWL ontology for legal entities, and in 2020 began to publish the LEI data regularly as a linked RDF dataset on data.world (see https://data.world/gleif; you need a (free) account to obtain the data). At the time of this writing, the size of the level 1 data (specifying who is who) is around 10.2 Gb with almost 92 million triples (subject-predicate-object), containing information about entity name, legal form, headquarter and legal address, geographical location, etc. Related data, such as who owns whom, is published in this form as well.

The EIOPA insurance register

The European Supervisory Authority EIOPA publishes the Register of Insurance undertakings based on information provided by the National Competent Authorities (NCAs). The NCA in each member state is responsible for authorization and registration of the insurance undertakings’ activities. EIOPA collects the data in the national registers and publishes a European insurance register, which includes more than 3,200 domestic insurance undertakings. The register contains entity data like international and commercial name, name of the NCA, addresses, cross border status, registration dates etc. Every insurance undertaking requires a LEI and the LEI is included in the register; this enables us to link the data easily to the GLEIF data.

The EIOPA insurance register is available as CSV and Excel file, without formal naming and clear definitions of column names. Linking the register data with other sources is a tedious job, because it must be done by hand. Take for example the LEI data in the register, which is referred to with the column name ‘LEI’; this is perfectly understandable for humans, but for computers this is just a string of characters. Now that the GLEIF has published its ontologies there is a proper way to refer to a LEI, and that is with the Uniform Resource Identifier (URI) https://www.gleif.org/ontology/L1/LEI, or in a short form gleif-L1:LEI.

The idea is to publish the data in the European insurance register in the same manner; as linked data in RDF format using, where applicable, the GLEIF ontology for legal entities and creating an EIOPA ontology for the data that is unique for the insurance register. This allows users of the data to incorporate the insurance register data into the GLEIF RDF dataset and thereby extending the data available on the legal entities with the data from the insurance register.

Creating triples from the EIOPA register

To convert the EIOPA insurance register to linked data in RDF format, I did the following:

  • extract from the GLEIF RDF level 1 dataset on data.world all insurance undertakings and related data, based on the LEI in the EIOPA register;
  • create a provisional ontology with URIs based on https://www.eiopa.europe.eu/ontology/base (this should ideally be done by EIOPA, as they own the domain name in the URI);
  • transform, with this ontology, the data in the EIOPA register to triples (omitting all data from the EIOPA register that is already included in the GLEIF RDF dataset, like names and addresses);
  • publish the triples for the insurance register in RDF Turtle format on data.world.
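
To give an impression of the transformation step, here is a minimal sketch with rdflib (the namespace URIs and the GLEIF data URI pattern are assumptions, and the register values are hard-coded; the real notebook derives everything from the register file and the ontology):

from rdflib import Graph, Namespace, Literal, URIRef

# assumed namespace URIs; the eiopa-base ontology is the provisional one mentioned above
GLEIF_BASE = Namespace("https://www.gleif.org/ontology/Base/")
EIOPA_BASE = Namespace("https://www.eiopa.europe.eu/ontology/base#")

g = Graph()
g.bind("gleif-base", GLEIF_BASE)
g.bind("eiopa-base", EIOPA_BASE)

# one register row, hard-coded for illustration
iuri = EIOPA_BASE["IURI-De-Nederlandsche-Bank-W1686"]
lei_subject = URIRef("https://rdf.gleif.org/L1/L-72450067SU8C745IAV11")  # assumed URI pattern

g.add((iuri, GLEIF_BASE.identifies, lei_subject))
g.add((iuri, EIOPA_BASE.hasNCA, Literal("De Nederlandsche Bank")))
g.add((iuri, EIOPA_BASE.hasInsuranceUndertakingID, Literal("W1686")))
g.add((iuri, EIOPA_BASE.hasCrossBorderStatus, Literal("DomesticUndertaking")))

print(g.serialize(format = "turtle"))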

Because I used the GLEIF ontology where applicable, the triples I created are automatically linked to the relevant data in the GLEIF dataset. Combining the insurance register dataset with the GLEIF RDF dataset results in a set where you have all the GLEIF level 1 data and all data in the EIOPA insurance register combined for all European insurance undertakings.

Querying the data

Let’s look what we have in this combined dataset. Querying the data in RDF is done with the SPARQL language. Here is an example to return the data on Achmea Schadeverzekeringen.

SELECT DISTINCT ?p ?o
WHERE
{ ?s gleif-L1:hasLegalName "Achmea schadeverzekeringen N.V." . 
  ?s ?p ?o .}

The query looks for triples where the predicate is gleif-L1:hasLegalName and the object is the legal name "Achmea schadeverzekeringen N.V.", and returns all data of the subject that satisfies this constraint. This returns (where I omitted the prefix of the objects):

gleif-L1-data:L-72450067SU8C745IAV11
    rdf#type                             LegalEntity
    gleif-base:hasLegalJurisdiction      NL  
    gleif-base:hasEntityStatus           EntityStatusActive  
    gleif-l1:hasLegalName                Achmea Schadeverzekeringen     
                                         N.V.
    gleif-l1:hasLegalForm                ELF-B5PM
    gleif-L1:hasHeadquartersAddress      L-72450067SU8C745IAV11-LAL  
    gleif-L1:hasLegalAddress             L-72450067SU8C745IAV11-LAL  
    gleif-base:hasRegistrationIdentifier BID-RA000463-08053410
    rdf#type                             InsuranceUndertaking
    eiopa-base:hasRegisterIdentifier     IURI-De-Nederlandsche-Bank-
                                         W1686

We see that the rdf#type of this entity is LegalEntity (from the GLEIF data) and the jurisdiction is NL (this has a prefix that refers to the ISO 3166-1 country codes). The legal form refers to another subject called ELF-B5PM. The headquarters and legal address both refer to the same subject that contains the address data of this entity. Then there is a business identifier to the registration data. The last two lines are added by me: a triple to specify that this subject is not only a LegalEntity but also an InsuranceUndertaking (defined in the ontology), and a triple for the Insurance Undertaking Register Identifier (IURI) of this subject (also defined in the ontology).

Let’s look more closely at the references in this list. First the legal form of Achmea (i.e. the predicate and objects of legal form code ELF-B5PM). Included in the GLEIF data is the following (again omitting the prefix of the object):

rdf#type                         EntityLegalForm  
rdf#type                         EntityLegalFormIdentifier  
gleif-base:identifies            ELF-B5PM  
gleif-base:tag                   B5PM  
gleif-base:hasCoverageArea       NL  
gleif-base:hasNameTransliterated naamloze vennootschap  
gleif-base:hasNameLocal          naamloze vennootschap  
gleif-base:hasAbbreviationLocal  NV, N.V., n.v., nv

With the GLEIF data we have this data on all legal entity forms of insurance undertakings in Europe. The local abbreviations are particularly handy as they help us to link an entity’s name extracted from documents or other data sources with its corresponding LEI.

If we look more closely at the EIOPA Register Identifier IURI-De-Nederlandsche-Bank-W1686 then we find the register data of this Achmea entity:

owl:a                         InsuranceUndertakingRegisterIdentifier
gleif-base:identifies         L-72450067SU8C745IAV11
eiopa-base:hasNCA                          De Nederlandsche Bank  
eiopa-base:hasInsuranceUndertakingID       W1686  
eiopa-base:hasEUCountryWhereEntityOperates NL  
eiopa-base:hasCrossBorderStatus            DomesticUndertaking  
eiopa-base:hasRegistrationStartDate        23/12/1991 01:00:00  
eiopa-base:hasRegistrationEndDate          None  
eiopa-base:hasOperationStartDate           23/12/1991 01:00:00  
eiopa-base:hasOperationEndDate             None

The predicate gleif-base:identifies refers back to the subject where gleif-L1:hasLegalName equals the Achmea entity. The other predicates are based on the provisional ontology I made that contains the definitions of the attributes of the EIOPA insurance register. Here we see for example that W1686 is the identifier of this entity in DNB’s insurance register.

Let me give a tiny example of the advantage of using linked data. The GLEIF data contains the geographical location of all legal entities. With the combined dataset it is easy to obtain the location for the insurance undertakings in, for example, the Netherlands. This query returns entity names with latitude and longitude of the legal address of the entity.

SELECT DISTINCT ?name ?lat ?long
WHERE {?sub rdf:type eiopa-base:InsuranceUndertaking ;
            gleif-base:hasLegalJurisdiction CountryCodes:NL ;
            gleif-L1:hasLegalName ?name ;
            gleif-L1:hasLegalAddress/gleif-base:hasCity ?city .
       ?geo gleif-base:hasCity ?city ;
            geo:lat ?lat ; 
            geo:long ?long .}

This result can be plotted on a map, see the link below. If you click on one of the dots then the name of the insurance undertaking will appear.
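
A minimal sketch of how such a map can be made with the folium package (an assumption: the actual notebook may use something else; results is assumed to hold the rows returned by the query above):

import folium

# results is assumed to hold the (name, lat, long) rows returned by the query above
m = folium.Map(location=[52.1, 5.3], zoom_start=7)   # centered on the Netherlands
for name, lat, long in results:
    folium.CircleMarker(location=[float(lat), float(long)],
                        radius=3,
                        popup=str(name)).add_to(m)
m.save("insurance_undertakings_map.html")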

All queries above and the code to make the map are included in the notebook EIOPA Register RDF dataset – SPARQL queries.

The provisional ontology I created is not yet semantically correct and should be improved, for example by incorporating data on NCAs and providing formal definitions. And other data sources could be added, for example the level 2 dataset to identify insurance groups, and the ISIN to LEI relations that are published daily by GLEIF.

By introducing the RDFS/OWL ontologies, the Global LEI Foundation has set an example of how to publish (financial) entity data in a useful manner. The GLEIF RDF dataset significantly reduces the time needed to link the data with other data sources. I hope other organizations that publish financial entity data as part of their mandate will follow that example.

Converting supervisory reports to Semantic Webs: from XBRL to RDF


A growing number of supervisory reports across Europe are based on the Extensible Business Reporting Language (XBRL), an XML-based standard. Financial entities such as banks, insurance undertakings and pension institutions are required to submit their reports to their supervisors in this format.

XBRL is a language for modeling, exchanging and automatically processing business and financial information. Reports in this format (called instance documents) are based on metadata (set out in taxonomies) that add semantic meaning to the data points that are reported. You can choose different implementations but overall an XBRL taxonomy provides a semantically rich data model and that has always been one of the main advantages of XBRL.

However, in its raw format (an XML file) each report is basically a machine readable document with a tree structure that does not enable easy integration with related data from other sources or integration with text documents and their contents.

In this blog, I will show that converting the XBRL reports to another format allows easier integration and understanding. That other format is based on Semantic Webs. It has been shown that XBRL can be converted to a Semantic Web without any loss of information (see for example this article). So if we convert the XBRL format to a Semantic Web then we keep the structure and the meaning provided by the taxonomy. The result is basically a graph, and this format makes integration with other linked data much easier.

A Semantic Web consists of formats and technologies that are rather old (from a computer science perspective): it originated around the same time as XBRL, some twenty years ago. And because it tried to solve similar problems (lack of semantic meaning in the World Wide Web) as the XBRL standard (lack of semantic meaning in business and financial data), to some extent it is based on similar concepts. It was however developed completely separately from XBRL.

The general concept of a Semantic Web, where data is linked together to provide semantic meaning, is also known as a knowledge graph.

How does a Semantic Web work? One of the formats of the Semantic Web is the Resource Description Framework (RDF), originally designed as a metadata data model. RDF was adopted as a World Wide Web Consortium recommendation in 1999. The RDF 1.0 specification was published in 2004, and RDF 1.1 followed in 2014.

The RDF format is based on expressions in the form of subject-predicate-object, called triples. The subject and object denote (web) resources and the predicate denotes the relationship between the subject and the object. For example the expression ‘Spinoza has written the book Ethica Ordine Geometrico Demonstrata’ in RDF is a triple with a subject denoting “Spinoza”, a predicate denoting “has written”, and an object denoting “the book Ethica Ordine Geometrico Demonstrata”. This is a different approach than for example object-oriented models with an entity (Spinoza), attribute (book) and value (Ethica).

The RDF format could potentially solve some problems with the XBRL format. To explain this, I converted an XBRL-instance (a test instance file from EIOPA for Solvency 2) to RDF format.

Below you see the representation of one arbitrary data point in the report (called a fact) in RDF format and visualized as a network (I used the Python package networkx). The predicates contain the complete web resource so I limited the name to the last word to make it readable.
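
To give an idea of how such a picture can be made (a rough sketch, not the exact code behind the figure; it assumes the converted facts have been parsed into an rdflib graph g, restricted to the triples around a single fact, since plotting a complete instance is not feasible):

import networkx as nx
import matplotlib.pyplot as plt

# g is assumed to be an rdflib graph holding the triples of one fact and its context
nxg = nx.DiGraph()
for s, p, o in g:
    # keep only the last word of the predicate URI as the edge label
    label = str(p).split("#")[-1].split("/")[-1]
    nxg.add_edge(str(s), str(o), label=label)

pos = nx.spring_layout(nxg)
nx.draw(nxg, pos, node_size=25, with_labels=False)
nx.draw_networkx_edge_labels(nxg, pos,
                             edge_labels=nx.get_edge_attributes(nxg, "label"),
                             font_color="red", font_size=6)
plt.show()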

The red node is the starting point of the data point. The red labels on the lines describe the predicate between subject and object. You see that the fact (subject) ‘has decimals’ (predicate) 2 (object), and furthermore has unit EUR, has value 838522076.03, has type metmi503 (an internal code describing Payments for reported but not settled claims) and some other properties.

The data point also has a so-called context that defines the entity to which the fact applies, the period of time the fact is relevant (in this case 2019-12-31) and also a scenario, which consists of additional metadata of the data point. In this case we see that the data point is related to statutory accounts, non-life and health non-STL, direct business and accepted during the period (and a node without a label).

All facts in every XBRL instance are structured in this way, which means that for example you can search all facts with the label statutory accounts. Furthermore, because XBRL uses namespaces you can unambiguously identify predicates and objects in the report. For example, you see that the entity node has an identifier (starting with 0LFF1…) and a scheme (17442). The scheme refers to the web resource for the ISO standard 17442 which specifies the Legal Entity Identifier (LEI), so the entity is unambiguously identified with the given (LEI-)code. If you add other XBRL instances with references to that entity then the data is automatically linked because other instances will contain exactly the same entity node.

The RDF representation of the XBRL fact above is:

_:provenance1 xl:instance "filename".
_:unit_u xbrli:measure iso4217:EUR.
_:fact926
  xl:provenance :provenance1;
  xl:type xbrli:fact; 
  rdf:type s2md_met:mi503;
  rdf:value "838522076.03"^^xsd:decimal;
  xbrli:decimals "2"^^xsd:integer;
  xbrli:unit :unit_u; 
  xbrli:context :context_BLx79_DIx5_IZx1_TBx28_VGx84.
_:context_BLx79_DIx5_IZx1_TBx28_VGx84
  xl:type xbrli:context;
  xbrli:entity [
    xbrli:identifier "0LFF1WMNTWG5PTIYYI38";
    xbrli:scheme <http://standards.iso.org/iso/17442>;
    ];
  xbrli:scenario [
    xbrldi:explicitMember "s2c_LB:x79"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_DI:x5"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_RT:x1"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_LB:x28"^^rdf:XMLLiteral;
    xbrldi:explicitMember "s2c_AM:x84"^^rdf:XMLLiteral;
    ];
  xbrli:instant "2019-12-31"^^xsd:date.

Instead of storing the data in separate templates with often unclear code names you can also convert the XBRL data to one large Semantic Web where all facts are linked together. The RDF format thus provides a graph model which allows easier integration and visualization (and, for me at least, easier understanding). It allows adding and linking data from other sources, such as Solvency 2 documents and external data, in the same graph.

Typically, supervisory reports consist of thousands of data points and supervisors receive reports from many entities each period. How would you store that information? I think that the natural way to store an XBRL instance is not a relational database but a graph database (like GraphDB or Neo4j). These databases can store the facts with all the metadata in a structured way and make it possible to query the graph efficiently. In the next blog, I will explore graph databases and query languages for XBRL reports converted to the RDF format.

Pilot Data Quality Rules


Data Quality is receiving more and more attention within the financial sector, and deservedly so. That’s why DNB will start a pilot in September with the insurance sector to enable entities to run the required open source code locally and to evaluate Solvency 2 quantitative reports with our Data Quality Rules.

In the coming weeks we will:

With these tools you are able to assess the data quality of your Solvency 2 quantitative reports before submitting them to DNB. You can do that within your own data science environment.

We worked hard to make this as easy as possible; the only thing you need is Anaconda / Jupyter Notebooks (Python) and Git to clone our repositories from Github (all free and open source software). And of course the data you want to check. We also provide code to evaluate the XBRL instance files.

We are planning workshops to explain how to use the code and validation rules and to go through the process step by step.

If you want to join or know more, please let me know (w.j.willemse at dnb.nl).

How to analyze public Solvency 2 data of Dutch insurers


In this blog we will use the public Solvency II data of all Dutch insurance undertakings and present it in one large data vector per undertaking. In doing so, we are able to use some easy but powerful machine learning algorithms to analyze the data. The notebook can be found here.

Solvency II data of individual insurance undertakings is published yearly by the Statistics department of DNB. The data represents the financial and solvency situation of each insurance undertaking per the end of each year. Currently, we have two years of data: ultimo 2016 and ultimo 2017. The publication of DNB is in the form of an Excel file with a number of worksheets containing the aggregated data and the individual data. Here we will use the individual data.

We will read the data into a Pandas Data Frame and use Numpy for data manipulations. Furthermore, we need the datetime object.

import pandas as pd
import numpy as np
from datetime import datetime

You can find the data with the following url

https://statistiek.dnb.nl/en/downloads/index.aspx#/?statistics_type=toezichtdata&theme=verzekeraars

Download the file and make sure that the Excel file is accessible.

data_path = "../../20_local_data/"
xls = pd.ExcelFile(data_path + "Data individual insurers
                                (year).xlsx")

The Excel file contains several worksheets with data. We want to combine all the data together in one Data Frame. To do that we need some data preparation and data cleaning for each worksheet.

In the following function a worksheet is put into a Data Frame and the column names are set to lower case. Then the index of the data frame is set to the insurance undertaking name and the reporting period. Then we perform some cleaning (the original worksheets contain some process information). In addition, some worksheets in the file contain 2-dimensional data that needs to be pivoted such that we obtain one large vector with all the data per insurance undertaking in one row.

def get_sheet(num):
    # read entire Excel sheet
    df = xls.parse(num)
    # columns names to lower case
    df.columns = [c.lower() for c in df.columns]
    # set index to name and period
    df.set_index(['name', 'period'], inplace = True)
    # data cleaning (the excel sheet contains some
    # additional data that we don't need)
    drop_list = [i for i in df.columns 
                     if 'unnamed' in i or 'selectielijst' in i]
    df.drop(drop_list, axis = 1, inplace = True)
    # pivot data frame
    if "row_name" in df.columns:
        df.drop("row_name", axis = 1, inplace = True)
        df = df.pivot(columns = 'row_header')
    if df.columns.nlevels > 1:
        df.columns = [str(df.columns[i]) for i in
                          range(len(df.columns))]
    return df

Creating one large vector per insurer

With the function above we can read a part of the Excel file and store it in a Pandas data frame. The following worksheets are contained in the Excel file published by DNB.

  • Worksheet 14: balance sheet
  • Worksheet 15: premiums – life
  • Worksheet 16: premiums – non-life
  • Worksheet 17: technical provisions – 1
  • Worksheet 18: technical provisions – 2
  • Worksheet 19: transition and adjustments
  • Worksheet 20: own funds
  • Worksheet 21: solvency capital requirements – 1
  • Worksheet 22: solvency capital requirements – 2
  • Worksheet 23: minimum capital requirements
  • Worksheet 24: additional information life
  • Worksheet 25: additional information non-life
  • Worksheet 26: additional information reinsurance

Let’s read the first worksheet with data and then append the other sheets to it. We shall not read the last three worksheets, because these contain the country specific reports.

df = get_sheet(14)
for l in range(15, 24):
    df_temp = get_sheet(l)
    df = df.join(df_temp, rsuffix = "_"+ str(l))

Let’s get the shape of the Data Frame.

print("Number of rows   : " + str(len(df.index)))
print("Number of columns: " + str(len(df.columns)))

Out: Number of rows   : 286
     Number of columns: 1273

So we have 286 rows of insurance undertakings data for ultimo 2016 and ultimo 2017. And we have 1273 columns with financial and solvency data about each insurance undertaking. Let’s take the data from the year 2017, and select all columns that contain floating point numbers.

df = df.xs(datetime(2017,12,31), 
           axis = 0, 
           level = 1,
           drop_level = True)
df = df[[df.columns[c] for c in range(len(df.columns)) 
          if df.dtypes[c]=='float64']]
df.fillna(0, inplace = True)
print("Number of rows   : " + str(len(df.index)))
print("Number of columns: " + str(len(df.columns)))

Out: Number of rows   : 139
     Number of columns: 1272

Apparently there is one column with non-floating point values. For ultimo 2017 we have the data of 139 insurance undertakings.

Now we can perform all kinds of numerical analysis. Let’s calculate the total amount of assets of all Dutch insurance undertakings.

df['assets|total assets , solvency ii value'].sum()

Out: 486702222727.1401

That’s almost 487 billion euro at the end of 2017.

Finding similar insurers

Now let’s apply some algorithms to this data set. Suppose we want to know which insurance undertakings are similar with respect to their financial and solvency structure. To do that we can calculate the distances between the data vectors of the insurance undertakings. An insurance undertaking with a low distance to another insurance undertaking might be similar to that undertaking.

If we divide each row by the total assets we treat all insurance undertakings as if they have equal size, and then the distances indicate similarity in financial and solvency structure (and not similarity in size).

X = df.div(df['assets|total assets , solvency ii value'],
              axis = 0)

The scikit-learn package provides numerous algorithms to do calculations with distances. Below we apply the NearestNeighbors algorithm to find the neighbors of each insurance undertaking. Then we get the distances and the indices of the data set and store them.

from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors = 2, 
                        algorithm = 'brute').fit(X.values)
distances, indices = nbrs.kneighbors(X)

What are the nearest neighbors of the first ten insurance undertakings in the list?

for i in indices[0:10]:
     print(X.index[i[0]] + " --> " + X.index[i[1]])

Out: 
ABN AMRO Captive N.V. --> Rabo Herverzekeringsmaatschappij N.V.
ABN AMRO Levensverzekering N.V. --> Delta Lloyd Levensverzekering N.V.
ABN AMRO Schadeverzekering N.V. --> Ansvar Verzekeringsmaatschappij N.V.
AEGON Levensverzekering N.V. --> ASR Levensverzekering N.V.
AEGON Schadeverzekering N.V. --> Nationale-Nederlanden Schadeverzekering Maatschappij N.V.
AEGON Spaarkas N.V. --> Robein Leven N.V.
ASR Aanvullende Ziektekostenverzekeringen N.V. --> ONVZ Aanvullende Verzekering N.V.
ASR Basis Ziektekostenverzekeringen N.V. --> VGZ Zorgverzekeraar N.V.
ASR Levensverzekering N.V. --> Achmea Pensioen- en Levensverzekeringen N.V.
ASR Schadeverzekering N.V. --> Veherex Schade N.V.

And with the shortest distance between two insurance undertakings we can find the two insurance undertakings that have the highest similarity in their structure.

min_list = np.where(distances[:,1] == distances[:,1].min())
list(X.index[min_list])

Out: ['IZA Zorgverzekeraar N.V.', 'Univé Zorg, N.V.']

If you want to understand the financial performance it is of course handy to know which insurance undertakings are similar. A more general approach when comparing insurance undertakings is to cluster them into a small number of peer groups.

Clustering the insurers

Can we cluster the insurance undertakings based on the 1272-dimensional data? To do this we apply the t-sne algorithm (that we used before).

First we import all the packages required.

import matplotlib.pyplot as pyplot
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

Then we run the t-sne algorithm.

Y = TSNE(n_components = 2, 
         perplexity = 5, 
         verbose = 0, 
         random_state = 1).fit_transform(X.values)

And we plot the results

pyplot.figure(figsize = (7, 7))
pyplot.scatter(x = Y[:, 0], 
               y = Y[:, 1], 
               s = 7)
pyplot.show()

Depending on how you zoom in you see different clusters in this picture. At the upper left you see the health insurance undertakings (with more clusters within that set: those offering basic health insurance, those offering additional health insurance, or both). On the right are (mostly) life insurance undertakings, and on the left (middle to below) are non-life insurance undertakings. And both clusters can be divided into several more sub clusters. These clusters can be used in further analysis. For example, you could use these as peer groups of similar insurance undertakings.
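
A sketch of that last step: the 2-dimensional t-sne coordinates can be clustered with KMeans (imported above) to assign each insurance undertaking to a peer group; the number of clusters, here 6, is an arbitrary choice to be tuned.

kmeans = KMeans(n_clusters = 6, random_state = 0, n_init = 10).fit(Y)
peer_groups = pd.Series(kmeans.labels_, index = X.index, name = 'peer group')
print(peer_groups.head())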

Clustering the features

Given that we have a 1272-dimensional vector for each insurance undertaking we might wish to somehow cluster the features in the data set. That is, we want to know which columns belong together and which columns are different.

An initial form of clustering is given by the different worksheets in the original Excel file. The data was clustered around the balance sheet, premiums, technical provisions, etc. But can we also find clusters within the total vector without any prior knowledge of the different worksheets?

A simple and effective way is to transpose the data matrix and feed it into the t-sne algorithm. That is, instead of assuming that each feature provides additional information about an insurance undertaking, we assume that each insurance undertaking provides additional information about a feature.

Let’s do this for only the balance sheet. In a balance sheet it is not immediately straightforward how the left side is related to the right side of the balance sheet, i.e. which assets are related to which liabilities. If you cluster all the data of the balance sheet then related items are clustered (irrespective of whether they are assets or liabilities).

df = get_sheet(14)

Instead of the scaled values we now take whether or not a data point was reported, and then transpose the matrix.

X = (df != 0).T

Then we apply the t-sne algorithm, in this case with a lower perplexity.

Y = TSNE(n_components = 2, 
         perplexity = 1.0, 
         verbose = 0, 
         random_state = 0, 
         learning_rate = 20, 
         n_iter = 10000).fit_transform(X.values)

And we plot the result with 15 identified clusters.

pyplot.figure(figsize = (7, 7))
pyplot.scatter(x = Y[:, 0], 
              y = Y[:, 1], 
              s = 5)
kmeans = KMeans(n_clusters = 15, random_state = 0, n_init  = 10).fit(Y)
for i in range(len(kmeans.cluster_centers_)):
    pyplot.scatter(x = kmeans.cluster_centers_[i,0],
                   y = kmeans.cluster_centers_[i,1],
                   s = 1,
                   c = 'yellow')
    pyplot.annotate(str(i), 
                   xy = (kmeans.cluster_centers_[i, 0], kmeans.cluster_centers_[i, 1]), 
                   size = 13)
pyplot.show()

Then we get this.

We see a large number of different clusters. Let’s look, for example, at the columns in cluster 6.

for i in df.T.loc[kmeans.labels_ == 6].index:
    print(i)

Out:
assets|assets held for index-linked and unit-linked contracts , solvency ii value
assets|loans and mortgages|loans and mortgages to individuals , solvency ii value
assets|reinsurance recoverables from:|life and health similar to life, excl health,index-linked,unit-linked|life excluding health,index-linked,unit-linked , solvency ii value
liabilities|technical provisions – index-linked and unit-linked , solvency ii value
liabilities|technical provisions – index-linked and unit-linked|best estimate , solvency ii value
liabilities|technical provisions – index-linked and unit-linked|risk margin , solvency ii value

So the assets held for index-linked and unit-linked contracts are in the same cluster as the technical provisions for index-linked and unit-linked items (and some other related items are found).

However, the relations found are not always perfect. To improve the clustering we should cluster the data based on how they are related in their changes over time. But because we have just two years available (and so just one yearly difference) we presumably do not have enough data to do that.

How to download and read the Solvency 2 legislation


In our first Natural Language Processing project we will read the Solvency II legislation from the website of the European Union and extract the text within the articles by using regular expressions.

For this notebook, we have chosen the text of the Delegated Acts of Solvency II. This part of the Solvency II legislation is directly in force (because it is a Regulation), and the wording of the Delegated Acts is more detailed than the Solvency II Directive and very precise and internally consistent. This makes it suitable for NLP. From the text we are able to extract features and text data on Solvency II for our future projects.

The code of this notebook can be found here.

Step 1: data retrieval

We use several packages to read and process the pdfs. For reading we use the fitz package (PyMuPDF). Furthermore we need the re package (regular expressions) for cleaning the text data.

import os
import re
import requests
import fitz

We want to read the Delegated Acts in all available languages. The languages of the European Union are Bulgarian (BG), Spanish (ES), Czech (CS), Danish (DA), German (DE), Estonian (ET), Greek (EL), English (EN), French (FR), Croatian (HR), Italian (IT), Latvian (LV), Lithuanian (LT), Hungarian (HU), Maltese (MT), Dutch (NL), Polish (PL), Portuguese (PT), Romanian (RO), Slovak (SK), Slovenian (SL), Finnish (FI), Swedish (SV).

languages = ['BG','ES','CS','DA','DE','ET','EL',
             'EN','FR','HR','IT','LV','LT','HU',
             'MT','NL','PL','PT','RO','SK','SL',
             'FI','SV']

The urls of the Delegated Acts of Solvency 2 are constructed for these languages by the following list comprehension.

urls = ['https://eur-lex.europa.eu/legal-content/' + lang +
        '/TXT/PDF/?uri=OJ:L:2015:012:FULL&from=EN' 
        for lang in  languages]

The following for loop retrieves the pdfs of the Delegated Acts from the website of the European Union and stores them in da_path.

da_path = 'data/solvency ii/'
for index in range(len(urls)):
    filename = 'Solvency II Delegated Acts - ' + languages[index] + '.pdf'
    if not(os.path.isfile(da_path + filename)):
        r = requests.get(urls[index])
        f = open(da_path + filename,'wb+')
        f.write(r.content) 
        f.close()
    else:
        print("--> already read.")

Step 2: data cleaning

If you look at the pdfs then you see that each page has a header with page number and information about the legislation and the language. These headers must be deleted to access the articles in the text.

DA_dict = dict({
                'BG': 'Официален вестник на Европейския съюз',
                'CS': 'Úřední věstník Evropské unie',
                'DA': 'Den Europæiske Unions Tidende',
                'DE': 'Amtsblatt der Europäischen Union',
                'EL': 'Επίσημη Εφημερίδα της Ευρωπαϊκής Ένωσης',
                'EN': 'Official Journal of the European Union',
                'ES': 'Diario Oficial de la Unión Europea',
                'ET': 'Euroopa Liidu Teataja',           
                'FI': 'Euroopan unionin virallinen lehti',
                'FR': "Journal officiel de l'Union européenne",
                'HR': 'Službeni list Europske unije',         
                'HU': 'Az Európai Unió Hivatalos Lapja',      
                'IT': "Gazzetta ufficiale dell'Unione europea",
                'LT': 'Europos Sąjungos oficialusis leidinys',
                'LV': 'Eiropas Savienības Oficiālais Vēstnesis',
                'MT': 'Il-Ġurnal Uffiċjali tal-Unjoni Ewropea',
                'NL': 'Publicatieblad van de Europese Unie',  
                'PL': 'Dziennik Urzędowy Unii Europejskiej',  
                'PT': 'Jornal Oficial da União Europeia',     
                'RO': 'Jurnalul Oficial al Uniunii Europene', 
                'SK': 'Úradný vestník Európskej únie',        
                'SL': 'Uradni list Evropske unije',            
                'SV': 'Europeiska unionens officiella tidning'})

The following code reads the pdfs, deletes the headers from all pages and saves the clean text to a .txt file.

DA = dict()
files = [f for f in os.listdir(da_path) if os.path.isfile(os.path.join(da_path, f))]    
for language in languages:
    if not("Delegated_Acts_" + language + ".txt" in files):
        # reading pages from pdf file
        da_pdf = fitz.open(da_path + 'Solvency II Delegated Acts - ' + language + '.pdf')
        da_pages = [page.getText(output = "text") for page in da_pdf]
        da_pdf.close()
        # deleting page headers
        header = "17.1.2015\\s+L\\s+\\d+/\\d+\\s+" + DA_dict[language].replace(' ','\\s+') + "\\s+" + language + "\\s+"
        da_pages = [re.sub(header, '', page) for page in da_pages]
        DA[language] = ''.join(da_pages)
        # some preliminary cleaning -> could be more 
        DA[language] = DA[language].replace('\xad ', '')
        # saving txt file
        da_txt = open(da_path + "Delegated_Acts_" + language + ".txt", "wb")
        da_txt.write(DA[language].encode('utf-8'))
        da_txt.close()
    else:
        # loading txt file
        da_txt = open(da_path + "Delegated_Acts_" + language + ".txt", "rb")
        DA[language] = da_txt.read().decode('utf-8')
        da_txt.close()

Step 3: retrieve the text within articles

Retrieving the text within articles is not straightforward. In English we have ‘Article 1 some text’, i.e. the word Article is put before the number. But some European languages put the word after the number, and there are two languages, HU and LV, that put a dot between the number and the word. To be able to read the text within the articles we need to know this ordering (and of course we need the word for article in every language).

art_dict = dict({
                'BG': ['Член',      'pre'],
                'CS': ['Článek',    'pre'],
                'DA': ['Artikel',   'pre'],
                'DE': ['Artikel',   'pre'],
                'EL': ['Άρθρο',     'pre'],
                'EN': ['Article',   'pre'],
                'ES': ['Artículo',  'pre'],
                'ET': ['Artikkel',  'pre'],
                'FI': ['artikla',   'post'],
                'FR': ['Article',   'pre'],
                'HR': ['Članak',    'pre'],
                'HU': ['cikk',      'postdot'],
                'IT': ['Articolo',  'pre'],
                'LT': ['straipsnis','post'],
                'LV': ['pants',     'postdot'],
                'MT': ['Artikolu',  'pre'],
                'NL': ['Artikel',   'pre'],
                'PL': ['Artykuł',   'pre'],
                'PT': ['Artigo',    'pre'],
                'RO': ['Articolul', 'pre'],
                'SK': ['Článok',    'pre'],
                'SL': ['Člen',      'pre'],
                'SV': ['Artikel',   'pre']})

Next we can define a regex to select the text within an article.

def retrieve_article(language, article_num):

    method = art_dict[language][1]
    
    if method == 'pre':
        string = art_dict[language][0] + ' ' + str(article_num) + '(.*?)' + art_dict[language][0] + ' ' + str(article_num + 1)
    elif method == 'post':
        string = str(article_num) + ' ' + art_dict[language][0] + '(.*?)' + str(article_num + 1) + ' ' + art_dict[language][0]
    elif method == 'postdot':
        string = str(article_num) + '. ' + art_dict[language][0] + '(.*?)' + str(article_num + 1) + '. ' + art_dict[language][0]

    r = re.compile(string, re.DOTALL)
            
    result = ' '.join(r.search(DA[language])[1].split())
            
    return result

Now we have a function that can retrieve the text of all the articles in the Delegated Acts for each European language.

Now we are able to read the text of the articles from the Delegated Acts. In the following we give three examples (article 292, which specifies the summary of the Solvency and Financial Condition Report).

retrieve_article('EN', 292)
"Summary 1. The solvency and financial condition report shall include a clear and concise summary. The summary of the report
shall be understandable to policy holders and beneficiaries. 2. The
summary of the report shall highlight any material changes to the 
insurance or reinsurance undertaking's business and performance, 
system of governance, risk profile, valuation for solvency purposes 
and capital management over the reporting period."
retrieve_article('DE', 292)
'Zusammenfassung 1. Der Bericht über Solvabilität und Finanzlage 
enthält eine klare, knappe Zusammenfassung. Die Zusammenfassung des
Berichts ist für Versicherungsnehmer und Anspruchsberechtigte
verständlich. 2. In der Zusammenfassung werden etwaige wesentliche
Änderungen in Bezug auf Geschäftstätigkeit und Leistung des
Versicherungs- oder Rückversicherungsunternehmens, sein 
Governance-System, sein Risikoprofil, die Bewertung für 
Solvabilitätszwecke und das Kapitalmanagement im Berichtszeitraum 
herausgestellt.'
retrieve_article('EL', 292)
'Περίληψη 1. Η έκθεση φερεγγυότητας και χρηματοοικονομικής
κατάστασης περιλαμβάνει σαφή και σύντομη περίληψη. Η περίληψη της
έκθεσης πρέπει να είναι κατανοητή από τους αντισυμβαλλομένους και
τους δικαιούχους. 2. Η περίληψη της έκθεσης επισημαίνει τυχόν
ουσιώδεις αλλαγές όσον αφορά τη δραστηριότητα και τις επιδόσεις της
ασφαλιστικής και αντασφαλιστικής επιχείρησης, το σύστημα
διακυβέρνησης, το προφίλ κινδύνου, την εκτίμηση της αξίας για τους
σκοπούς φερεγγυότητας και τη διαχείριση κεφαλαίου κατά την περίοδο
αναφοράς.'

How to import the Solvency 2 RFR


UPDATE: The code mentioned in this blog has been moved to https://github.com/wjwillemse/solvency2-data

If you want to make insurance calculations you often need a recent interest rate term structure. In this example we show how to import the Solvency 2 Risk Free Rate (RFR) from the EIOPA website into a convenient Pandas Data Frame, ready to be used for future calculations.

The code can be found on https://github.com/wjwillemse/insurlib.

First we import insurlib.solvency2, the Python package that contains functions to generate the names of the files, download the zip-file from the EIOPA website, extract it to an Excel file (both stored on disk) and read the Excel file into a proper Pandas Data Frame.

In [1]: import pandas as pd
        from datetime import datetime
        from insurlib import solvency2

We now have all the functions we need.

The function that does all this is solvency2.rfr.read; it returns a Python dictionary with all information about the RFR of a certain reference date.

If you do not pass an input datetime, i.e. you call solvency2.rfr.read(), then the function will use today() and you will receive the most recently published RFR.

In [2]: d = solvency2.rfr.read(datetime(2018,1,1))

What information is stored in the dictionary?

In [3]: d.keys()
Out[3]: dict_keys(['input_date', 
           'reference_date', 
           'url', 
           'path_zipfile', 
           'name_zipfile', 
           'path_excelfile', 
           'name_excelfile', 
           'metadata', 
           'RFR_spot_no_VA', 
           'RFR_spot_with_VA'])

Let’s take a look at the individual elements of the dictionary.

The original date with which the function was called is stored in the dictionary as input_date.

In [4]: d['input_date']
Out[4]: datetime.datetime(2018, 1, 1, 0, 0)

You can call the function with any date and the function will generate a proper reference date from it. The reference date is the most recent end of the month prior to the input date. So if for example the input is datetime(2018, 1, 1) then the reference date is '20171231', because this is the most recent end of the month prior to the input date. The reference date is a string because it is used in the names of the files to be downloaded from the EIOPA website.
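
To illustrate this logic with a small sketch (for illustration only; the actual computation is done inside the package):

from datetime import datetime, timedelta

def reference_date(input_date):
    # last day of the month preceding the input date, formatted as in the EIOPA file names
    first_of_month = input_date.replace(day = 1)
    return (first_of_month - timedelta(days = 1)).strftime('%Y%m%d')

reference_date(datetime(2018, 1, 1))   # returns '20171231'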

In [5]: d['reference_date']
Out[5]: '20171231'

Furthermore the url, location and filenames are stored in the dictionary.

In [6]: print(d['url'])
        print(d['name_zipfile'])
        print(d['name_excelfile'])
Out[6]: https://eiopa.europa.eu/Publications/Standards/
        EIOPA_RFR_20171231.zip
        EIOPA_RFR_20171231_Term_Structures.xlsx

Now, let’s take a look at the available RFR’s.

In [7]: d['metadata'].columns
Out[7]: Index(['Euro', 'Austria', 'Belgium', 'Bulgaria', 'Croatia', 'Cyprus', 'Czech Republic', 'Denmark', 'Estonia', 'Finland', 'France', 'Germany', 'Greece', 'Hungary', 'Iceland', 'Ireland', 'Italy', 'Latvia', 'Liechtenstein', 'Lithuania', 'Luxembourg', 'Malta', 'Netherlands', 'Norway', 'Poland', 'Portugal', 'Romania', 'Russia', 'Slovakia', 'Slovenia', 'Spain', 'Sweden', 'Switzerland', 'United Kingdom', 'Australia', 'Brazil', 'Canada', 'Chile', 'China', 'Colombia', 'Hong Kong', 'India', 'Japan', 'Malaysia', 'Mexico', 'New Zealand', 'Singapore', 'South Africa', 'South Korea', 'Taiwan', 'Thailand', 'Turkey', 'United States'],    dtype='object')

To get all the metadata of the French RFR we select metadata from the dictionary.

In [8]: d['metadata'].loc[:,'France']
Out[8]: Info              FR_31_12_2017_SWP_LLP_20_EXT_40_UFR_4.2
Coupon_freq                                             1
LLP                                                    20
Convergence                                            40
UFR                                                   4.2
alpha                                            0.126759
CRA                                                    10
VA                                                      4
reference date                                   20171231
Name: France, dtype: object

That is: the Coupon frequency is one year, the Last Liquid Point is 20 years, the Convergence period is 40 years, the Ultimate Forward Rate is 4.2 (this rate changed at the beginning of 2018 to 4.05), the alpha parameter of the Smith-Wilson algorithm is 0.126759, the Credit Risk Adjustment is 10 basis points and, because we have the curve without the Volatility Adjustment, the VA is 4 basis points. It is identical to the Euro curve.

To get one single item from the metadata we can use the following line (note that this is the UFR at the end of 2017).

In [9]: d['metadata'].loc["UFR",'Germany']
Out[9]: 4.2

To get the euro RFR without Volatility Adjustment (the first ten durations) we use

In [10]: d['RFR_spot_no_VA']['Euro'].head(10)
Out[10]: Duration
1    -0.00358
2     -0.0025
3    -0.00088
4     0.00069
5     0.00209
6     0.00347
7     0.00469
8     0.00585
9     0.00695
10    0.00802
Name: Euro, dtype: object

Now suppose that we want to store the RFR of six consecutive months into one Data Frame. This is how we can do that.

First we define ref_dates with the reference dates we want to acquire.

In [11]: ref_dates = pd.date_range(start='2018-01-01', periods = 6,freq = 'MS')
         ref_dates
Out[11]: DatetimeIndex(['2018-01-01', '2018-02-01', '2018-03-01', '2018-04-01', '2018-05-01', '2018-06-01'], dtype='datetime64[ns]', freq='MS')

Then we use a Python list comprehension to obtain the RFR’s of the reference dates and we show the Data Frame with the first ten durations.

In [12]: rfr = [solvency2.rfr.read(ref_date)['RFR_spot_no_VA']['Euro'] for ref_date in ref_dates]
         df_euro = pd.DataFrame(data = rfr, index = ref_dates).T
         print(df_euro.head(10))

Out[12]:           2018-01-01  2018-02-01  2018-03-01  2018-04-01  2018-05-01  \
Duration                                                               
1           -0.00358    -0.00363    -0.00352    -0.00362    -0.00358   
2           -0.00250    -0.00225    -0.00220    -0.00258    -0.00244   
3           -0.00088    -0.00020    -0.00022    -0.00083    -0.00065   
4            0.00069     0.00190     0.00178     0.00104     0.00120   
5            0.00209     0.00380     0.00361     0.00285     0.00286   
6            0.00347     0.00537     0.00521     0.00418     0.00441   
7            0.00469     0.00670     0.00666     0.00556     0.00577   
8            0.00585     0.00791     0.00793     0.00672     0.00698   
9            0.00695     0.00899     0.00906     0.00783     0.00809   
10           0.00802     0.00987     0.01007     0.00884     0.00911   

          2018-06-01  
Duration              
1           -0.00331  
2           -0.00236  
3           -0.00098  
4            0.00057  
5            0.00213  
6            0.00356  
7            0.00490  
8            0.00613  
9            0.00725  
10           0.00824

A list comprehension can also be used for the metadata. The following code obtains the metadata of the UK RFR.

In [13]: rfr = [solvency2.rfr.read(ref_date)['metadata']['United Kingdom'] for ref_date in ref_dates]
         print(pd.DataFrame(data = rfr, index = ref_dates))