This FAQ is part of the documentation of the CLASSLA CLARIN knowledge centre for South Slavic languages. If you notice any missing or wrong information, please let us know at helpdesk.classla@clarin.si, Subject “FAQ_Croatian”.
The questions in this FAQ are organised into three main sections:
1. Online Croatian language resources
Q1.1: Where can I find Croatian dictionaries?
Below we list the main lexical resources:
- Hrvatski Jezični Portal offers search over the largest dictionary database of Croatian language (the Novi Liber dictionary database)
- The Institute for Croatian Language and Linguistics offers the Spelling Dictionary, the Dictionary of Phrasemes, the School Dictionary of the Croatian Language, the Valency Dictionary, the Collocation Dictionary, the Dictionary of Croatian First Names, the Croatian Metaphor Repository MetaNet.HR, the database of semantic frames in the field of aviation AirFrame, the Database of Croatian Morphological Doublets, the Portal of Croatian Grammars from the Pre-Illyrian Period, and the Croatian Terminology Portal, which offers central access to various terminological dictionaries and a list of other terminology resources
- The Croatian Old Dictionary Portal allows you to search digitised dictionaries from the 16th to the 19th century
- The Miroslav Krleža Institute of Lexicography offers access to a series of on-line lexicons, including the Dictionary of Croatian Exonyms
- The Lexonomy portal offers access to a Dictionary of Croatian idioms
- Kontekst.io is a lexicon of semantically related words, automatically produced on the basis of word-embeddings from large corpora
- Termania is a portal of free online dictionaries, offered by the Amebis company
- Wječnik, the Croatian Wiktionary, is a multilingual, openly accessible and openly editable dictionary
- CroWN is a lexical database for Croatian and CroDeriV is a morphological database of Croatian verbs
- hrLex is the largest inflectional lexicon of the Croatian language, consisting of 186,743 lexemes and 6,428,577 entries; it is searchable through the CLARIN.SI web interface (Anonymous login, Lexicon)
- Croatian Psycholinguistic Database and Croatian Lexical Database HLB provide psycholinguistic information on Croatian words (i.e., concreteness, imageability, frequency, and age of acquisition)
- Construction Grammar Conceptual Network CongraCNet app from the project EmoCNET provides a way of analysing various semantic relations of concepts based on a network structure
Q1.2: How can I analyse Croatian corpora online?
CLARIN.SI offers access to three concordancers, which share the same set of corpora and back-end but have different front-ends:
- CLARIN.SI Crystal noSketch Engine, an open-source variant of the well-known Sketch Engine. Instructions for its use are available here. CLARIN.SI offers two installations of Crystal noSketch Engine: an open installation (no log-in, which simplifies use for less advanced users) and a version with log-in which allows subcorpus creation and personalised display of e.g. corpus attributes.
- KonText, with a somewhat different user interface. Basic functionality is provided without logging in, but to use more advanced functionalities, it is necessary to log in via your home institution.
- CLARIN.SI Bonito noSketch Engine is the old version of noSketch Engine with a radically different user interface from Crystal. This version offers some functions that the new noSketch Engine does not, in particular access to query results in XML: it is enough to add the parameter “format=XML” to the end of the query URL.
Documentation on how to query corpora via the SketchEngine-like interfaces is available here.
Note that the commercial Sketch Engine also offers access to several Croatian language corpora, as well as some additional tools that are not accessible in the free noSketch Engine, including tools for analysing collocations (Word sketches) and synonyms and antonyms (Thesaurus), for computing frequency lists of multiword expressions (N-grams), and for extracting keywords and terms. It also allows users to create their own corpora.
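The “format=XML” trick for the Bonito noSketch Engine mentioned above can also be used programmatically. The following is a minimal sketch of building such a query URL; the base URL and the parameter names other than `format` are assumptions for illustration, so check the query URLs your browser produces in the concordancer for the exact form.

```python
from urllib.parse import urlencode

# Illustrative only: the endpoint and query parameters below are assumptions
# based on a typical Bonito installation; verify them against the actual
# query URLs shown in your browser when using the CLARIN.SI concordancer.
BASE = "https://www.clarin.si/noske/run.cgi/first"

def build_query_url(corpus: str, cql: str) -> str:
    """Build a concordance query URL that returns XML instead of HTML."""
    params = {
        "corpname": corpus,          # corpus identifier
        "queryselector": "cqlrow",   # use a CQL query
        "cql": cql,                  # the CQL query itself
        "format": "XML",             # the documented machine-readable output switch
    }
    return BASE + "?" + urlencode(params)

url = build_query_url("hrwac", '[lemma="jezik"]')
print(url)
```

The query string is percent-encoded by `urlencode`, so CQL special characters such as brackets and quotes are passed through safely.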
Q1.3: Which Croatian corpora can I analyse online?
For a complete list of corpora available under CLARIN.SI concordancers, see the index for Crystal noSkE, Bonito noSkE or KonText. Below we list the Croatian ones, with links to the Crystal noSketch Engine concordancer:
- general language corpora include the web corpora CLASSLA-web.hr (2.7 billion tokens) and hrWaC (1.4 billion tokens), the Riznica Croatian Language Corpus (100 million tokens) of the Institute for Croatian Language and Linguistics, which consists of literary works and newspaper texts and which you can also query via its specialised interface, and the Croatian National Corpus (HNK) of the Institute of Linguistics
- specialized corpora include the parliamentary corpora ParlaMint-HR and yu1Parl, the parliamentary spoken corpus ParlaSpeech-HR, the Wikipedia corpus for Croatian, CLASSLAWiki-hr, and Serbo-Croatian, CLASSLAWiki-sh, the corpus of news portals ENGRI, and the corpus of tweets Tweet-hr
- manually annotated corpora include the training corpus of standard language hr500k, the corpora of non-professional written language Raput-cln (speakers with language disorders) and Raput-ncln (typical speakers), and the training corpus of computer-mediated communication ReLDI-hr with manually normalised (standardised), morphosyntactically tagged and lemmatised words and named entities
- parallel corpora include the multilingual European parliamentary corpora ParlaMint-XX, paired with the machine-translated English corpora ParlaMint-XX-en, and the multilingual DGT translation memory corpus EU DGT-UD: Croatian
In addition to these, the Croatian Spoken Language Corpus HrAL is available through TalkBank. The same platform also offers the Kovačević Corpus, a small language-development corpus of three participants, which forms the Croatian part of CHILDES, a collection of child-language transcripts for 24 languages. Furthermore, the CroLTeC learner corpus of Croatian can be queried via the TeiTok interface.
Furthermore, the commercial Sketch Engine includes the following Croatian corpora: EUR-Lex Croatian 2/2016 and OPUS2 Croatian, which is a part of the parallel corpus of 40 languages.
Q1.4: What linguistic annotation schemas are used in Croatian corpora?
Most of these corpora are annotated according to the MULTEXT-East morphosyntactic specifications. The more recent ones use the Version 6 specifications for the Serbo-Croatian macrolanguage. More recent corpora also use the Universal Dependencies project annotation scheme, in particular that for Croatian and Serbian. Named entities are annotated via the Janes NE guidelines.
Q1.5: Where can I download Croatian resources?
The main point for archiving and downloading Croatian language resources is the repository of CLARIN.SI.
In addition to the resources mentioned above and below, the repository offers:
- manually annotated corpora and datasets, including the Sentiment Annotated Dataset of Croatian News, the multilingual sentiment dataset of parliamentary debates ParlaSent, the offensive language dataset FRENK, annotated for different types of socially unacceptable discourse, and the commonsense reasoning datasets COPA-HR in Croatian and DIALECT-COPA in the Chakavian dialect
- parallel corpora, including the Croatian-English parallel corpora MaCoCu-hr-en, hrenWaC and the Tourism Corpus
- other corpora and datasets, including the largest Croatian corpus – the web corpus MaCoCu-hr with 2.4 billion words, also available as a genre-enriched version inside the MaCoCu-Genre corpus collection, the linguistically annotated corpus of parliamentary debates ParlaMint.ana, the automatic speech recognition training dataset ParlaSpeech-HR, the 24sata news article archive and news comment dataset, the multilingual IPTC news media topic dataset EMMediaTopic, the Twitter corpus, the text collection for training the BERTić transformer model BERTić-data, the Keyword extraction dataset, the news dataset SETimes.HBS and the Twitter dataset Twitter-HBS for discriminating between Bosnian, Croatian, Montenegrin and Serbian, and the Mići Princ “text and speech” dialectal dataset in various Chakavian micro-dialects
- wordlists and other lexical resources, including the automatically constructed multiword lexicon hrMWELex, the verbal databases of the Western South Slavic HyperVerb and WeSoSlaV, and the LiLaH emotion lexicon
Another point where you can find Croatian resources is the MetaShare repository, which includes the sentiment lexicon CroSentilex, the valency lexicon CROVALLEX, and the South-East European Parallel Corpus SETimes Corpus.
In addition to this, some Croatian language resources can be downloaded from the Repository of the Faculty of Maritime Studies of the University of Rijeka (FMSRI), such as the Database of English words and their Croatian equivalents, the Database of English words in Croatian, and the CROWD-5e database, a Croatian psycholinguistic database of affective norms for five discrete emotions.
Moreover, scientific publications in Croatian are available as part of the scientific corpus ZNANJE on the Hugging Face repository. In addition to Slovenian and Serbian publications, a large part of the ZNANJE corpus comprises Croatian scientific publications collected from the Croatian Digital Academic Archives and Repositories (DABAR) service.
2. Tools to annotate Croatian texts
Q2.1: How can I perform basic linguistic processing of my Croatian texts?
The state-of-the-art CLASSLA-Stanza pipeline provides processing of standard and non-standard (Internet) Croatian on the levels of tokenisation and sentence splitting, part-of-speech tagging, lemmatisation, dependency parsing, and named entity recognition. For Croatian, the CLASSLA-Stanza pipeline uses the rule-based reldi-tokeniser. Off-the-shelf models are also available for lemmatisation of standard and non-standard Croatian, and for part-of-speech tagging of standard and non-standard Croatian. You can try out the pipeline at the CLASSLA Annotation tool website.
The documentation for the installation and use of the pipeline is available here.
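As a minimal sketch, annotating standard Croatian text with the pipeline looks like the following; it assumes the `classla` package is installed (`pip install classla`), and the `download`, `Pipeline` and `to_conll` calls follow the pipeline's documented API.

```python
def annotate(text: str) -> str:
    """Annotate standard Croatian text with the CLASSLA-Stanza pipeline.

    Requires `pip install classla`; the first call downloads the standard
    Croatian models. For non-standard (Internet) Croatian, pass
    type='nonstandard' to both download() and Pipeline().
    """
    import classla
    classla.download('hr')  # fetch standard Croatian models (first run only)
    nlp = classla.Pipeline('hr', processors='tokenize,pos,lemma,depparse,ner')
    return nlp(text).to_conll()

# Example call (not run here, as it downloads the models):
# print(annotate("Hrvatska je zemlja u jugoistočnoj Europi."))
```

The output is in the CoNLL-U format, so it can be fed directly into downstream tools that consume Universal Dependencies annotations.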
In addition to this, tokenisation, part-of-speech tagging, and lemmatisation are also provided by the CLARIN.SI service ReLDIanno. This is a legacy system for linguistic annotation that we keep available for backward compatibility, but we suggest that new users use the above-mentioned CLASSLA-Stanza pipeline.
Q2.2: How can I standardise my texts prior to further processing?
The CLASSLA-Stanza pipeline, mentioned above, also includes models for processing non-standard text, which allows non-standard texts to be annotated without prior standardisation.
Currently, the only on-line text normalisation tool available through the CLARIN.SI services (ReLDIanno) is the REDI diacritic restorer. Its usage is documented here. You can also download it, install it and use it locally.
For word-level normalisation of user-generated Croatian texts, you can download and install the CSMTiser text normaliser.
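To illustrate what diacritic restoration involves, here is a toy dictionary-lookup sketch. This is not the REDI tool or its algorithm (REDI is trained on corpus data and handles ambiguity); it only demonstrates the mapping from undiacritised to properly diacritised Croatian forms.

```python
# Toy lookup table for illustration only; the real REDI tool resolves
# ambiguous cases (e.g. "kuca" can also be a valid word) from context.
RESTORATIONS = {
    "cesto": "često",   # "often"
    "zelja": "želja",   # "wish"
    "kuca": "kuća",     # "house"
}

def restore_diacritics(text: str) -> str:
    """Replace each word with its diacritised form, if one is known."""
    return " ".join(RESTORATIONS.get(w, w) for w in text.split())

print(restore_diacritics("zelja je cesto jaka"))  # → "želja je često jaka"
```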
Q2.3: How can I annotate my texts for named entities?
Named entity recognition is provided by the CLASSLA-Stanza pipeline, which also offers off-the-shelf models for standard and non-standard Croatian. In addition to this, on-line NER is available via the CLARIN.SI service ReLDIanno. You can also download the janes-ner tool.
Q2.4: How can I syntactically parse my texts?
You can syntactically parse Croatian texts, following the Universal Dependencies formalism, in multiple ways:
- by using the state-of-the-art CLASSLA-Stanza pipeline, which also offers an off-the-shelf model
- by using the CLARIN.SI service ReLDIanno
- by using the UDPipe tool, which has off-the-shelf models for many languages, Croatian included
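All three options output dependency trees in the CoNLL-U format of Universal Dependencies. The following self-contained sketch shows how to read such output; the example sentence is hand-made for illustration, not actual tool output.

```python
# One hand-made CoNLL-U sentence ("Ana čita knjigu" = "Ana reads a book").
# Columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC.
CONLLU = """\
1\tAna\tAna\tPROPN\tNpfsn\t_\t2\tnsubj\t_\t_
2\tčita\tčitati\tVERB\tVmr3s\t_\t0\troot\t_\t_
3\tknjigu\tknjiga\tNOUN\tNcfsa\t_\t2\tobj\t_\t_
"""

def dependencies(conllu: str):
    """Yield (form, head_id, deprel) triples from one CoNLL-U sentence."""
    for line in conllu.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        yield cols[1], int(cols[6]), cols[7]

for form, head, deprel in dependencies(CONLLU):
    print(form, head, deprel)  # e.g. "čita 0 root"
```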
3. Datasets to train Croatian annotation tools
Q3.1: Where can I get word embeddings or pre-trained language models for Croatian?
- The embeddings trained on the largest collection of Croatian textual data (hrWaC, Riznica, 24sata newspaper texts and comments, MaCoCu-hr, etc.) are those of the CLARIN.SI-embed.hr embedding collection.
- Pre-trained embedding collections for Croatian are also available from fastText.
- If you want to train your own embeddings, the largest freely available collection of Croatian texts is the BERTić-data text collection.
You can also use the transformer language model BERTić, a state-of-the-art model that represents words/tokens as contextually dependent embeddings. It allows you to extract an embedding for every word occurrence, which can then be used to train a model for an end task.
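Once embeddings are extracted from any of these sources, a typical first step is comparing words by cosine similarity. The following self-contained sketch uses toy 3-dimensional vectors purely for illustration (real embeddings have hundreds of dimensions, and the values here are invented):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy vectors: "grad" (city) and "mjesto" (place) are made similar,
# "pas" (dog) is made dissimilar, mimicking how trained embeddings behave.
grad, mjesto, pas = [1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]
print(cosine(grad, mjesto) > cosine(grad, pas))  # → True
```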
Q3.2: What data is available for training a text normaliser for Croatian?
For training text normalisers for Internet Croatian, the ReLDI-NormTagNER-hr dataset can be used, a gold-standard training and testing dataset for tokenisation, sentence segmentation, word normalisation, morphosyntactic tagging, lemmatisation and named entity recognition of non-standard Croatian.
Q3.3: What data is available for training a part-of-speech tagger for Croatian?
The reference dataset for training a standard tagger is hr500k. There is also the ReLDI-NormTagNER-hr training dataset of Internet Croatian.
You can also use the CLASSLA-Stanza pipeline in combination with the CLARIN.SI embeddings and the training dataset hr500k to train and evaluate your own part-of-speech tagger. The documentation is available here.
Q3.4: What data is available for training a lemmatiser for Croatian?
Lemmatisers can be trained on the tagger training data (hr500k, ReLDI-NormTagNER-hr; see the section on PoS tagger training for details), on the inflectional lexicon hrLex, or on both.
For training your own lemmatiser for standard and non-standard Croatian, you can use the CLASSLA-Stanza pipeline, which uses an external lexicon (hrLex) for lemmatisation. The documentation is available here.
Q3.5: What data is available for training a named entity recogniser for Croatian?
For training a named entity recogniser of standard language, hr500k is the best resource. For training NER systems for online, non-standard texts, ReLDI-NormTagNER-hr can be used.
The CLASSLA-Stanza pipeline allows you to train your own named entity recogniser as well. The documentation is available here.
Q3.6: What data is available for training a syntactic parser for Croatian?
If you want to follow the Universal Dependencies formalism for dependency parsing, the best location for obtaining training data is the Universal Dependencies repository.
If you require additional annotation layers, e.g., for multi-task learning, the hr500k dataset should be used.
You can also use the CLASSLA-Stanza pipeline to train your own parser. The documentation is available here.