Category Archives: Natural language processing

Word2vec models for SFCRs

Word2vec is a well-known algorithm for natural language processing that often leads to surprisingly good results, if trained properly. It consists of a shallow neural network that maps words to an n-dimensional number space, i.e. it produces vector representations of words (so-called word embeddings). Word2vec does this in such a way that words used in the same context are embedded near each other (their respective vectors are close to each other). In this blog I will show you some of the results of word2vec models trained with Wikipedia and insurance-related documents.

One of the nice properties of a word2vec model is that it allows us to do calculations with words. The distance between two word vectors provides a measure of the linguistic or semantic similarity of the corresponding words. So if we calculate the nearest neighbors of a word vector, we find words similar to that word. It is also possible to calculate differences between two word vectors. For example, it appears that for a word2vec model trained with a large data set, the vector difference between man and woman is roughly equal to the difference between king and queen, or in vector notation king − man + woman = queen. If you find this utterly strange then you are not alone. Besides some intuitive and informal explanations, it is not yet completely clear why word2vec models in general yield these results.
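A minimal sketch of such an analogy query with gensim (assuming model is a word2vec model trained on a large English corpus, not one of the models discussed below):

# nearest word to the vector king - man + woman
model.wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1)
# for a well-trained model the top result is typically 'queen'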

Word2vec models need to be trained with a large corpus of text data in order to achieve word embeddings that allow these kinds of calculations. There are some large pre-trained word vectors available, such as the GloVe Twitter word vectors (trained with 2 billion tweets) and the word2vec model based on Google News (trained with 100 billion words). However, most of them are in English and are trained on general-purpose text rather than on a specific domain.
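Such pre-trained vectors can be loaded with gensim's downloader API, for example (a sketch; 'glove-twitter-25' is one of the model names available in the downloader):

import gensim.downloader as api

# download and load the pre-trained 25-dimensional GloVe Twitter vectors
glove = api.load('glove-twitter-25')
glove.most_similar('insurance', topn=5)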

So let’s see if we can train word2vec models specifically for non-English European languages, with specific insurance vocabulary. A way to do this is to train a word2vec model with the Wikipedia pages of a specific language and additionally train the model with sentences found in public documents of insurance undertakings (SFCRs) and in the insurance legislation. In doing so, the word2vec model should be able to capture the specific language domain of insurance.

The Wikipedia word2vec model

Data dumps of all Wikimedia wikis, in the form of XML files, are provided here. I obtained the latest Wikipedia pages and articles of all official European languages (bg, es, cs, da, de, et, el, en, fr, hr, it, lv, lt, hu, mt, nl, pl, pt, ro, sk, sl, fi, sv). These are bz2-compressed files and their sizes range from 8.6 MB (Maltese) to 16.9 GB (English). The Dutch file is around 1.5 GB; uncompressed it is about 5 times that size and contains more than 2.5 million Wikipedia pages. This is too large to store in memory (at least on my computer), so you need to use Python generator functions to process the files without storing them completely in memory.
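One way to stream the dump is gensim's WikiCorpus, whose get_texts() method is a generator yielding one tokenized page at a time (a sketch; the file name follows the standard Wikimedia naming convention, and the actual preprocessing described below uses the nltk-package instead):

from gensim.corpora.wikicorpus import WikiCorpus

# stream the bz2-compressed dump page by page; passing dictionary={} skips
# building a gensim Dictionary, which we do not need here
wiki = WikiCorpus('nlwiki-latest-pages-articles.xml.bz2', dictionary={})
with open('nlwiki_texts.txt', 'w', encoding='utf-8') as out:
    for tokens in wiki.get_texts():
        out.write(' '.join(tokens) + '\n')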

The downloaded XML files are parsed, and page titles and texts are then processed with the nltk-package (stop words are deleted and sentences are tokenized and preprocessed). No n-grams were applied. For the word2vec model I used the implementation in the gensim-package.
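Training itself could look as follows (a minimal sketch with illustrative parameter values and gensim 4.x parameter names; LineSentence streams a preprocessed text file like the one produced above):

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence('nlwiki_texts.txt')   # restartable, streaming iterable
model = Word2Vec(sentences,
                 vector_size=300,  # dimension of the word embeddings
                 window=5,         # context window size
                 min_count=5,      # ignore words that occur fewer than 5 times
                 sg=1,             # skip-gram (sg=0 would be CBOW)
                 workers=4)
model.save('wiki_nl_word2vec.model')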

Let’s look at some results of the Wikipedia word2vec models. If we get the top ten nearest word vectors of the Dutch word for elephant, we get:

In []: model.wv.most_similar('olifant', topn = 10)
Out[]: [('olifanten', 0.704888105392456),
        ('neushoorn', 0.6430075168609619),
        ('tijger', 0.6399451494216919),
        ('luipaard', 0.6376790404319763),
        ('nijlpaard', 0.6358680725097656),
        ('kameel', 0.5886276960372925),
        ('neushoorns', 0.5880545377731323),
        ('ezel', 0.5879943370819092),
        ('giraf', 0.5807977914810181),
        ('struisvogel', 0.5724758505821228)]

These are all general Dutch names for (wild) animals. So the Dutch word2vec model appears to map animal names to the same area of the vector space. The word2vec models of other languages appear to do the same; for example, norsut (Finnish for elephants) has the following top ten similar words: krokotiilit, sarvikuonot, käärmeet, virtahevot, apinat, hylkeet, hyeenat, kilpikonnat, jänikset and merileijonat. Again, these are all names for animals (with a slight preference for Nordic sea animals).

In the Danish word2vec model, the top ten most similar words for mads (a Danish first name derived from Matthew) are:

In []: model.wv.most_similar('mads', topn = 10)
Out[]: [('mikkel', 0.6680521965026855),
        ('nicolaj', 0.6564826965332031),
        ('kasper', 0.6114416122436523),
        ('mathias', 0.6102851033210754),
        ('rasmus', 0.6025335788726807),
        ('theis', 0.6013824343681335),
        ('rikke', 0.5957099199295044),
        ('janni', 0.5956574082374573),
        ('refslund', 0.5891965627670288),
        ('kristoffer', 0.5842193365097046)]

Almost all are first names, except for Refslund, a former Danish chef whose first name was Mads. The Danish word2vec model appears to map first names to the same area of the vector space, resulting in a high similarity between first names.

Re-training the Wikipedia Word2vec with SFCRs

The second step is to train the word2vec models with insurance-related text documents. Although the Wikipedia pages of many languages contain some pages on insurance and insurance undertakings, it is difficult to derive the specific language of this domain from these pages. For example, the Dutch word for risk margin does not occur in the Dutch Wikipedia pages, and the same holds for many other technical terms. In addition to the Wikipedia pages, we should therefore train the model with insurance-specific documents. For this I used the public Solvency and Financial Condition Reports (SFCRs) of Dutch insurance undertakings and the Dutch text of the Solvency II Delegated Acts (here is how to download and read it).

The SFCR sentences are processed in the same manner as the Wikipedia pages, although here I applied bi- and trigrams to be able to capture insurance terms rather than separate words (for example, technical provisions is a bigram and treated as one word, technical_provisions).
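A rough sketch of these two steps (phrase detection and continued training), assuming model is the Wikipedia word2vec model and sfcr_sentences is a list of tokenized SFCR sentences; the parameter values are illustrative:

from gensim.models.phrases import Phrases, Phraser

# detect frequent bigrams and trigrams, so that for example
# ['technische', 'voorzieningen'] becomes 'technische_voorzieningen'
bigram = Phraser(Phrases(sfcr_sentences, min_count=5, threshold=10))
trigram = Phraser(Phrases(bigram[sfcr_sentences], min_count=5, threshold=10))
sfcr_ngrams = [trigram[bigram[sentence]] for sentence in sfcr_sentences]

# continue training the Wikipedia model on the insurance corpus
model.build_vocab(sfcr_ngrams, update=True)  # add the new domain vocabulary
model.train(sfcr_ngrams,
            total_examples=len(sfcr_ngrams),
            epochs=model.epochs)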

Now the model is able to find words similar to the Dutch word for risk margin.

In []: model.wv.most_similar('risicomarge')
Out[]: [('beste_schatting', 0.43119704723358154),
        ('technische_voorziening', 0.42812830209732056),
        ('technische_voorzieningen', 0.4108312726020813),
        ('inproduct', 0.409644216299057),
        ('heffingskorting', 0.4008549451828003),
        ('voorziening', 0.3887258470058441),
        ('best_estimate', 0.3886040449142456),
        ('contant_maken', 0.37772029638290405),
        ('optelling', 0.3554660379886627),
        ('brutowinst', 0.3554105758666992)]

This already looks nice. Closest to risk margin are the Dutch terms beste_schatting (English: best estimate) and technische_voorziening(en) (English: technical provision, singular and plural). The relation to heffingskorting (English: tax credit) is strange here. Perhaps the term risk margin is not solely used in insurance.

Let’s do another one. The Dutch acronym skv is the equivalent of scr (solvency capital requirement) in English.

In []: model.wv.most_similar('skv')
Out[]: [('mkv', 0.6492390036582947),
        ('mcr_ratio', 0.4787723124027252),
        ('kapitaalseis', 0.46219778060913086),
        ('mcr', 0.440476655960083),
        ('bscr', 0.4224048852920532),
        ('scr_ratio', 0.41769397258758545),
        ('ðhail', 0.41652536392211914),
        ('solvency_capital', 0.4136047661304474),
        ('mcr_scr', 0.40923237800598145),
        ('solvabiliteits', 0.406883180141449)]

The SFCR documents were sufficient to derive an association between skv and mkv (the Dutch equivalent of mcr), and the English acronyms scr and mcr (apparently the Dutch documents sometimes use scr and mcr in the same context). Other similar words are kapitaalseis (English: capital requirement) and bscr. Because they learn from context, word2vec models are able to learn words that are synonyms and sometimes antonyms (for example, we say ‘today is a cold day’ and ‘today is a hot day’, where hot and cold are used in the same manner).

For an example of a vector calculation look at the following result.

In []: model.wv.most_similar(positive = ['dnb', 'duitsland'], 
                             negative = ['nederland'], topn = 5)
Out[]: [('bundesbank', 0.4988047778606415),
        ('bundestag', 0.4865422248840332),
        ('simplesearch', 0.452720582485199),
        ('deutsche', 0.437085896730423),
        ('bondsdag', 0.43249475955963135)]

This function finds the top five similar words of the vector DNB − Nederland + Duitsland. This expression basically asks for the German equivalent of De Nederlandsche Bank (DNB). The model generates the correct answer: the German analogue of DNB as a central bank is the Bundesbank. I think this is somehow incorporated in the Wikipedia pages, because the German equivalent of DNB as an insurance supervisor is not the Bundesbank but BaFin, and this was not picked up by the model. It is not perfect (the other words in the list are less related, and for other countries this does not work as well); we need more documents to find more stable associations. But to me this is already pretty surprising.

There has been some research in which the word vectors of word2vec models of two languages were mapped onto each other with a linear transformation (see for example Exploiting Similarities among Languages for Machine Translation, Mikolov et al.). In doing so, it was possible to obtain a model for machine translation. So perhaps it is possible, for some European languages with a sufficiently large corpus of SFCRs, to generate one large model that is to some extent language independent. To derive the translation matrices we could use the different translations of European legislative texts, because by their nature these texts provide some of the most reliable translations available.
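A toy sketch of such a translation matrix, learned by least squares from a seed lexicon of word pairs (the model and variable names are hypothetical; only the basic idea of a linear map is taken from the paper):

import numpy as np

def translation_matrix(model_src, model_tgt, seed_pairs):
    # learn a linear map W such that source vectors are mapped onto target
    # vectors; seed_pairs is a list of (source word, target word) tuples
    X = np.array([model_src.wv[s] for s, t in seed_pairs])
    Z = np.array([model_tgt.wv[t] for s, t in seed_pairs])
    W, _, _, _ = np.linalg.lstsq(X, Z, rcond=None)  # minimizes ||XW - Z||
    return W

# usage sketch: map a Dutch word vector into the German space and look up
# the nearest German words
# W = translation_matrix(model_nl, model_de, seed_pairs)
# model_de.wv.similar_by_vector(model_nl.wv['risicomarge'] @ W, topn=3)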

But that’s it for me for now. Word2vec is a versatile and powerful algorithm that can be used in numerous natural language applications. It is relatively easy to generate these models in languages other than English, and it is possible to train models that can deal with the specifics of insurance terminology, as I showed in this blog.

Text modeling with S2 SFCRs

European insurance undertakings are required to publish a Solvency and Financial Condition Report (SFCR) each year. These SFCRs are often made available via the insurance undertaking’s website. In this blog I will show some first results of a text modeling exercise using these SFCRs.

Text modeling was done with Latent Dirichlet Allocation (LDA), using Mallet’s implementation via the gensim-package (found here: https://radimrehurek.com/gensim/index.html). A description can be found here: https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/. LDA is an unsupervised learning algorithm that generates latent (hidden) distributions over topics for each document or sentence, and a distribution over words for each topic.

To get the data I scraped as many SFCRs (in all European languages) as I could find on the Internet. As a result of this I have a data set of 4.36 GB with around 2,500 SFCR documents in PDF-format (until proven otherwise, I probably have the largest library of SFCR documents in Europe). Among these are 395 SFCRs in the English language, consisting in total of 287,579 sentences and 8.1 million words.

In an SFCR, an insurance undertaking publicly discloses information about a number of topics prescribed by the Solvency II legislation, such as its business and performance, system of governance, risk profile, valuation and capital management. Every SFCR therefore contains the same topics.

The LDA algorithm is able to find dominant keywords that represent each topic, given a set of documents. It is assumed that each document is about one topic. We want to use the LDA algorithm to identify the different topics within the SFCRs, such that, for example, we can extract all sentences about the solvency requirements. To do so, I run the LDA algorithm with sentences from the SFCRs (thereby assuming that each sentence is about one topic).

I followed the usual steps: some data preparation to read the pdf files properly, then selecting the top 9,000 words and the sentences with more than 10 words (it is known that the LDA algorithm does not work very well with rarely used words and with very short documents/sentences). I did not build bigram and trigram models because this did not really change the outcome. Then the data was lemmatized such that only nouns, adjectives, verbs and adverbs were selected. The spacy-package provides functions to tag the data and select the allowed postags.
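The lemmatization step could look like this (a sketch for the English SFCRs, using the standard small English spacy model; the sentences are assumed to be lists of tokens):

import spacy

nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

def lemmatize(sentences, allowed_postags=('NOUN', 'ADJ', 'VERB', 'ADV')):
    # keep only the lemmas of words with an allowed part-of-speech tag
    for sentence in sentences:
        doc = nlp(' '.join(sentence))
        yield [token.lemma_ for token in doc if token.pos_ in allowed_postags]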

The main inputs for the LDA algorithm are a dictionary and a corpus. The dictionary contains all lemmatized words used in the documents, with a unique id for each word. The corpus is a mapping of word id to word frequency in each sentence. After we have generated these, we can run the LDA algorithm with the number of topics as one of the parameters.
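In gensim this could look as follows (a sketch using gensim's built-in LdaModel rather than the Mallet wrapper used for the results below; sentences is the tokenized SFCR corpus and lemmatize is the sketch from above):

from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = list(lemmatize(sentences))                      # lemmatized sentences
dictionary = Dictionary(texts)                          # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]   # (word id, frequency) per sentence

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=9, random_state=0, passes=10)

# coherence score (c_v), used to compare different numbers of topics
coherence = CoherenceModel(model=lda, texts=texts,
                           dictionary=dictionary, coherence='c_v')
print(coherence.get_coherence())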

The quality of the topic modeling is measured by the coherence score.

[Figure: the coherence score per number of topics]

A low number of topics that performs well is nine topics (coherence score 0.65), while the highest coherence score is attained at 22 topics (0.67), which is pretty high in general. From this we conclude that nine topics is a good starting point.

What does the LDA algorithm produce? It generates for each topic a list of keywords with weights that represent that topic. The weights indicate how strong the relation between the keyword and the topic is: the higher the weight, the more representative the word is for that specific topic. Below, the first ten keywords are listed with their weights. The algorithm does not label a topic with one or two words, so for each topic I determined a description that more or less covers it (with the main subjects of the Solvency II legislation in mind).

Topic 0 ‘Governance’: 0.057*”management” + 0.051*”board” + 0.049*”function” + 0.046*”internal” + 0.038*”committee” + 0.035*”audit” + 0.034*”control” + 0.032*”system” + 0.030*”compliance” + 0.025*”director”

Topic 1 ‘Valuation’: 0.067*”asset” + 0.054*”investment” + 0.036*”liability” + 0.030*”valuation” + 0.024*”cash” + 0.022*”balance” + 0.020*”tax” + 0.019*”cost” + 0.017*”account” + 0.016*”difference”

Topic 2 ‘Reporting and performance’: 0.083*”report” + 0.077*”solvency” + 0.077*”financial” + 0.038*”condition” + 0.032*”information” + 0.026*”performance” + 0.026*”group” + 0.025*”material” + 0.021*”december” + 0.018*”company”

Topic 3 ‘Solvency’: 0.092*”capital” + 0.059*”requirement” + 0.049*”solvency” + 0.039*”year” + 0.032*”scr” + 0.030*”fund” + 0.027*”model” + 0.024*”standard” + 0.021*”result” + 0.018*”base”

Topic 4 ‘Claims and assumptions’: 0.023*”claim” + 0.021*”term” + 0.019*”business” + 0.016*”assumption” + 0.016*”market” + 0.015*”future” + 0.014*”base” + 0.014*”product” + 0.013*”make” + 0.012*”increase”

Topic 5 ‘Undertaking’s strategy’: 0.039*”policy” + 0.031*”process” + 0.031*”business” + 0.030*”company” + 0.025*”ensure” + 0.022*”management” + 0.017*”plan” + 0.015*”manage” + 0.015*”strategy” + 0.015*”orsa”

Topic 6 ‘Risk management’: 0.325*”risk” + 0.030*”market” + 0.027*”rate” + 0.024*”change” + 0.022*”operational” + 0.021*”underwriting” + 0.019*”credit” + 0.019*”exposure” + 0.013*”interest” + 0.013*”liquidity”

Topic 7 ‘Insurance and technical provisions’: 0.049*”insurance” + 0.045*”reinsurance” + 0.043*”provision” + 0.039*”life” + 0.034*”technical” + 0.029*”total” + 0.025*”premium” + 0.023*”fund” + 0.020*”gross” + 0.019*”estimate”

Topic 8 ‘Undertaking’: 0.065*”company” + 0.063*”group” + 0.029*”insurance” + 0.029*”method” + 0.023*”limit” + 0.022*”include” + 0.017*”service” + 0.016*”limited” + 0.015*”specific” + 0.013*”mutual”

To determine the topic of a sentence, we calculate for each topic the weights of the words in the sentence. The main topic of the sentence is then expected to be the topic with the highest sum.
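With gensim this amounts to something like the following sketch (assuming the dictionary, the lda model and the lemmatize function from the sketches above, and a new sentence given as a string):

tokens = next(lemmatize([new_sentence.split()]))   # preprocess the new sentence
bow = dictionary.doc2bow(tokens)                   # map to (word id, frequency)
print(sorted(lda.get_document_topics(bow), key=lambda x: -x[1]))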

If we run the following sentence (found in one of the SFCRs) through the model

"For the purposes of solvency, the Insurance Group’s insurance obligations
are divided into the following business segments: 1. Insurance with profit
participation 2. Unit-linked and index-linked insurance 3. Other life
insurance 4. Health insurance 5. Medical expence insurance for non-life
insurance 6. Income protection insurance for non-life insurance Pension &
Försäkring (Sweden) Pension & Försäkring offers insurance solutions on the
Swedish market within risk and unit-linked insurance and traditional life
insurance."

then we get the following results per topic:

[(0, 0.08960573476702509), 
(1, 0.0692951015531661),
(2, 0.0692951015531661),
(3, 0.06332138590203108),
(4, 0.08363201911589009),
(5, 0.0692951015531661),
(6, 0.08004778972520908),
(7, 0.3369175627240143),
(8, 0.13859020310633216)]

Topic seven (‘Insurance and technical provisions’) clearly has the highest score (0.34), followed by topic eight (‘Undertaking’). This suggests that these sentences are about the insurance business and technical provisions of the undertaking (which we can verify).

Likewise, for the sentence

"Chief Risk Officer and Risk Function 
The Board has appointed a Chief Risk Officer (CRO) who reports directly to
the Board and has responsibility for managing the risk function and
monitoring the effectiveness of the risk management system."

we get the following results:

[(0, 0.2926447574334898), 
(1, 0.08294209702660407),
(2, 0.07824726134585289),
(3, 0.07824726134585289),
(4, 0.07824726134585289),
(5, 0.08450704225352113),
(6, 0.14866979655712048),
(7, 0.07824726134585289),
(8, 0.07824726134585289)]

Here, topic zero (‘Governance’) and topic six (‘Risk management’) have the highest scores, suggesting that this sentence is about the governance of the insurance undertaking and, to a lesser extent, about risk management.

The nine topics that were identified reflect fairly different elements in the SFCR, but we also see that some topics consist of several subtopics that could be identified separately. For example, the topic that I described as ‘Valuation’ covers assets and investments but it might be more appropriate to distinguish investment strategies from valuation. The topic ‘Solvency’ covers own funds as well as solvency requirements. If we increase the number of topics then some of the above topics will be split into more topics and the topic determination will be more accurate.

Once we have made the LDA model we can use the results for several applications. First, of course, we can use the model to determine the topics of previously unseen documents and sentences. We can also analyze topic distributions across different SFCRs, and we can retrieve similar sentences for any given sentence (based on the distance between the topic probability scores of the given sentence and those of other sentences).
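The last application could be sketched as follows, using the Hellinger distance between two topic distributions as a similarity measure (tokens1 and tokens2 are assumed to be two preprocessed sentences):

from gensim.matutils import hellinger

# topic distributions of two sentences; minimum_probability=0 keeps all topics
dist1 = lda.get_document_topics(dictionary.doc2bow(tokens1), minimum_probability=0)
dist2 = lda.get_document_topics(dictionary.doc2bow(tokens2), minimum_probability=0)
print(hellinger(dist1, dist2))   # smaller distance = more similar topics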

In this blog I described first steps in text modeling of Solvency and Financial Condition Reports of insurance undertakings. The coherence scores were fairly high and the identified topics represented genuine topics from the Solvency II legislation, especially with a sufficient number of topics. Some examples showed that the LDA model is able to identify the topic of specific sentences. However, this does not yet work perfectly; an important element of SFCR documents is the numerical information, often stored in table form in the PDF, which is difficult to analyze with the LDA algorithm.

How to download and read the Solvency 2 legislation

In our first Natural Language Processing project we will read the Solvency II legislation from the website of the European Union and extract the text within the articles by using regular expressions.

For this notebook, we have chosen the text of the Delegated Acts of Solvency II. This part of the Solvency II legislation is directly applicable (because it is a Regulation), and the wording of the Delegated Acts is more detailed than the Solvency II Directive, as well as very precise and internally consistent. This makes it suitable for NLP. From the text we are able to extract features and text data on Solvency II for our future projects.

The code of this notebook can be found here.

Step 1: data retrieval

We use several packages to read and process the pdfs. For reading we use the fitz-package (PyMuPDF). Furthermore, we need the re-package (regular expressions) for cleaning the text data.

import os
import re
import requests
import fitz

We want to read the Delegated Acts in all available languages. The languages of the European Union are Bulgarian (BG), Spanish (ES), Czech (CS), Danish (DA), German (DE), Estonian (ET), Greek (EL), English (EN), French (FR), Croatian (HR), Italian (IT), Latvian (LV), Lithuanian (LT), Hungarian (HU), Maltese (MT), Dutch (NL), Polish (PL), Portuguese (PT), Romanian (RO), Slovak (SK), Slovenian (SL), Finnish (FI) and Swedish (SV).

languages = ['BG','ES','CS','DA','DE','ET','EL',
             'EN','FR','HR','IT','LV','LT','HU',
             'MT','NL','PL','PT','RO','SK','SL',
             'FI','SV']

The urls of the Delegated Acts of Solvency 2 are constructed for these languages by the following list comprehension.

urls = ['https://eur-lex.europa.eu/legal-content/' + lang +
        '/TXT/PDF/?uri=OJ:L:2015:012:FULL&from=EN' 
        for lang in  languages]

The following for loop retrieves the pdfs of the Delegated Acts from the website of the European Union and stores them in da_path.

da_path = 'data/solvency ii/'
for index in range(len(urls)):
    filename = 'Solvency II Delegated Acts - ' + languages[index] + '.pdf'
    if not(os.path.isfile(da_path + filename)):
        # download the pdf and store it in da_path
        r = requests.get(urls[index])
        f = open(da_path + filename, 'wb+')
        f.write(r.content)
        f.close()
    else:
        print("--> already read.")

Step 2: data cleaning

If you look at the pdfs, you see that each page has a header with the page number and information about the legislation and the language. These headers must be deleted to access the articles in the text.

DA_dict = dict({
                'BG': 'Официален вестник на Европейския съюз',
                'CS': 'Úřední věstník Evropské unie',
                'DA': 'Den Europæiske Unions Tidende',
                'DE': 'Amtsblatt der Europäischen Union',
                'EL': 'Επίσημη Εφημερίδα της Ευρωπαϊκής Ένωσης',
                'EN': 'Official Journal of the European Union',
                'ES': 'Diario Oficial de la Unión Europea',
                'ET': 'Euroopa Liidu Teataja',           
                'FI': 'Euroopan unionin virallinen lehti',
                'FR': "Journal officiel de l'Union européenne",
                'HR': 'Službeni list Europske unije',         
                'HU': 'Az Európai Unió Hivatalos Lapja',      
                'IT': "Gazzetta ufficiale dell'Unione europea",
                'LT': 'Europos Sąjungos oficialusis leidinys',
                'LV': 'Eiropas Savienības Oficiālais Vēstnesis',
                'MT': 'Il-Ġurnal Uffiċjali tal-Unjoni Ewropea',
                'NL': 'Publicatieblad van de Europese Unie',  
                'PL': 'Dziennik Urzędowy Unii Europejskiej',  
                'PT': 'Jornal Oficial da União Europeia',     
                'RO': 'Jurnalul Oficial al Uniunii Europene', 
                'SK': 'Úradný vestník Európskej únie',        
                'SL': 'Uradni list Evropske unije',            
                'SV': 'Europeiska unionens officiella tidning'})

The following code reads the pdfs, deletes the headers from all pages and saves the clean text to a .txt file.

DA = dict()
files = [f for f in os.listdir(da_path) if os.path.isfile(os.path.join(da_path, f))]    
for language in languages:
    if not("Delegated_Acts_" + language + ".txt" in files):
        # reading pages from pdf file
        da_pdf = fitz.open(da_path + 'Solvency II Delegated Acts - ' + language + '.pdf')
        da_pages = [page.getText(output = "text") for page in da_pdf]
        da_pdf.close()
        # deleting page headers
        header = "17.1.2015\\s+L\\s+\\d+/\\d+\\s+" + DA_dict[language].replace(' ','\\s+') + "\\s+" + language + "\\s+"
        da_pages = [re.sub(header, '', page) for page in da_pages]
        DA[language] = ''.join(da_pages)
        # some preliminary cleaning -> could be more 
        DA[language] = DA[language].replace('\xad ', '')
        # saving txt file
        da_txt = open(da_path + "Delegated_Acts_" + language + ".txt", "wb")
        da_txt.write(DA[language].encode('utf-8'))
        da_txt.close()
    else:
        # loading txt file
        da_txt = open(da_path + "Delegated_Acts_" + language + ".txt", "rb")
        DA[language] = da_txt.read().decode('utf-8')
        da_txt.close()

Step 3: retrieve the text within articles

Retrieving the text within articles is not straightforward. In English we have ‘Article 1 some text’, i.e. the word Article is put before the number. But some European languages put the word after the number, and two languages, HU and LV, put a dot between the number and the word. To be able to read the text within the articles we need to know this ordering (and of course we need the word for article in every language).

art_dict = dict({
                'BG': ['Член',      'pre'],
                'CS': ['Článek',    'pre'],
                'DA': ['Artikel',   'pre'],
                'DE': ['Artikel',   'pre'],
                'EL': ['Άρθρο',     'pre'],
                'EN': ['Article',   'pre'],
                'ES': ['Artículo',  'pre'],
                'ET': ['Artikkel',  'pre'],
                'FI': ['artikla',   'post'],
                'FR': ['Article',   'pre'],
                'HR': ['Članak',    'pre'],
                'HU': ['cikk',      'postdot'],
                'IT': ['Articolo',  'pre'],
                'LT': ['straipsnis','post'],
                'LV': ['pants',     'postdot'],
                'MT': ['Artikolu',  'pre'],
                'NL': ['Artikel',   'pre'],
                'PL': ['Artykuł',   'pre'],
                'PT': ['Artigo',    'pre'],
                'RO': ['Articolul', 'pre'],
                'SK': ['Článok',    'pre'],
                'SL': ['Člen',      'pre'],
                'SV': ['Artikel',   'pre']})

Next we can define a regex to select the text within an article.

def retrieve_article(language, article_num):
    # construct a regex that captures the text between the headers of
    # article_num and article_num + 1, depending on the word order
    method = art_dict[language][1]

    if method == 'pre':
        string = art_dict[language][0] + ' ' + str(article_num) + '(.*?)' + art_dict[language][0] + ' ' + str(article_num + 1)
    elif method == 'post':
        string = str(article_num) + ' ' + art_dict[language][0] + '(.*?)' + str(article_num + 1) + ' ' + art_dict[language][0]
    elif method == 'postdot':
        string = str(article_num) + '. ' + art_dict[language][0] + '(.*?)' + str(article_num + 1) + '. ' + art_dict[language][0]

    r = re.compile(string, re.DOTALL)

    # return the captured text with whitespace normalized
    result = ' '.join(r.search(DA[language])[1].split())

    return result

Now we have a function that can retrieve the text of all the articles in the Delegated Acts for each European language.

In the following we give three examples (article 292, which contains the summary of the Solvency and Financial Condition Report):

retrieve_article('EN', 292)
"Summary 1. The solvency and financial condition report shall include a clear and concise summary. The summary of the report
shall be understandable to policy holders and beneficiaries. 2. The
summary of the report shall highlight any material changes to the 
insurance or reinsurance undertaking's business and performance, 
system of governance, risk profile, valuation for solvency purposes 
and capital management over the reporting period."
retrieve_article('DE', 292)
'Zusammenfassung 1. Der Bericht über Solvabilität und Finanzlage 
enthält eine klare, knappe Zusammenfassung. Die Zusammenfassung des
Berichts ist für Versicherungsnehmer und Anspruchsberechtigte
verständlich. 2. In der Zusammenfassung werden etwaige wesentliche
Änderungen in Bezug auf Geschäftstätigkeit und Leistung des
Versicherungs- oder Rückversicherungsunternehmens, sein 
Governance-System, sein Risikoprofil, die Bewertung für 
Solvabilitätszwecke und das Kapitalmanagement im Berichtszeitraum 
herausgestellt.'
retrieve_article('EL', 292)
'Περίληψη 1. Η έκθεση φερεγγυότητας και χρηματοοικονομικής
κατάστασης περιλαμβάνει σαφή και σύντομη περίληψη. Η περίληψη της
έκθεσης πρέπει να είναι κατανοητή από τους αντισυμβαλλομένους και
τους δικαιούχους. 2. Η περίληψη της έκθεσης επισημαίνει τυχόν
ουσιώδεις αλλαγές όσον αφορά τη δραστηριότητα και τις επιδόσεις της
ασφαλιστικής και αντασφαλιστικής επιχείρησης, το σύστημα
διακυβέρνησης, το προφίλ κινδύνου, την εκτίμηση της αξίας για τους
σκοπούς φερεγγυότητας και τη διαχείριση κεφαλαίου κατά την περίοδο
αναφοράς.'