Slovenska raziskovalna infrastruktura za jezikovne vire in tehnologije
Common Language Resources and Technology Infrastructure, Slovenia

FAQ for Serbian language resources and technologies

This FAQ is part of the documentation of the CLASSLA CLARIN knowledge centre for South Slavic languages. If you notice any missing or wrong information, please do let us know, using the subject “FAQ_Serbian”.

The questions in this FAQ are organised into three main sections:

1. Online Serbian language resources

Q1.1: Where can I find Serbian dictionaries?

Below we list the main lexical resources:

  • Raskovnik is a dictionary portal aimed at providing access to historical dictionaries of the Serbian language; it currently consists of five dictionaries with around 84,000 entries altogether.
  • Lexicom provides a search interface to an inflectional lexicon of the Serbian language.
  • srLex is the largest inflectional lexicon of the Serbian language, consisting of 192,590 lexemes and 6,908,043 entries; it is searchable through the CLARIN.SI web interface (anonymous login, Lexicon).

Q1.2: How can I analyse Serbian corpora online?

CLARIN.SI offers access to two concordancers, which share the same set of (Serbian) corpora and back-end, but have different front-ends:

  • NoSketch Engine, an open-source variant of the well-known Sketch Engine. No registration is necessary or possible, which also has drawbacks, e.g. you cannot save your screen settings or create private subcorpora.
  • KonText, with a somewhat different user interface. Basic functionality is available without logging in, but to use more advanced functionalities you need to log in via AAI through your identity provider.

Documentation on how to query corpora via the Sketch Engine-like interfaces is available here.
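Both concordancers support CQL (Corpus Query Language) queries for more complex searches. As an illustration, three typical queries are shown below; the attribute names word, lemma and tag are the usual defaults, but may differ per corpus:

```
[lemma="pas"]
[word="Beograd.*"]
[tag="N.*"] [tag="V.*"]
```

The first query finds all inflected forms of the lemma pas (‘dog’), the second all tokens beginning with “Beograd”, and the third a noun immediately followed by a verb (MULTEXT-East tags for nouns start with N, for verbs with V).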

Note that the commercial Sketch Engine also offers access to several Serbian corpora. Furthermore, for researchers in the EU, access to Sketch Engine is free for non-commercial purposes from 2018 to 2022.

Q1.3: Which Serbian corpora can I analyse online?

These are the main general language corpora:

The main specialised corpora are the following:

  • The only specialised corpus of Serbian we are aware of is the Serbian Corpus of Early Child Language (SCECL), which can be downloaded and browsed via TalkBank.

Finally, the main manually annotated corpora are the following:

  • The training corpus of standard language (SETimes.SR) is available through noSketch Engine or KonText.
  • The training corpus of computer-mediated communication (ReLDI-NormTagNER-sr) is available through noSketch Engine and KonText.

Q1.4: What linguistic annotation schemas are used in Serbian corpora?

Most of these corpora are annotated according to the MULTEXT-East morphosyntactic specifications. The more recent ones use the Version 6 specifications for the Serbo-Croatian macrolanguage. More recent corpora also use the annotation scheme of the Universal Dependencies project, in particular that for Croatian and Serbian. Named entities are annotated according to the Janes NE guidelines.

Q1.5: Where can I download Serbian resources?

The main point for archiving and downloading Serbian language resources is the repository of CLARIN.SI.

Another point for downloading resources in Serbian is the MetaShare repository.

2. Tools to annotate Serbian texts

Q2.1: How can I perform basic linguistic processing of my Serbian texts?

Q2.2: How can I standardise my texts prior to further processing?

  • Currently, the only on-line text normalisation tool available through the CLARIN.SI services is the REDI diacritic restorer. The usage of the CLARIN.SI services is documented here. You can also download REDI, install it and use it locally.
  • For word-level normalisation of user-generated Serbian texts you can download and install the CSMTiser text normaliser.
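To give an idea of what a diacritic restorer such as REDI does: ASCII-fied Serbian text writes c for c/č/ć, s for s/š and z for z/ž, and the tool has to pick the correct variant. Below is a minimal lexicon-based sketch of this idea (the lexicon and all example words are toy data, not taken from REDI, which additionally uses corpus statistics to choose among candidates):

```python
import itertools

# Toy lexicon of correctly diacritised Serbian word forms.
LEXICON = {"čovek", "časopis", "šuma", "žena", "mačka", "kuća"}

# For each ASCII letter, the diacritised variants it may stand for.
VARIANTS = {"c": "cčć", "s": "sš", "z": "zž"}

def restore(word):
    """Return all lexicon entries reachable by rediacritising `word`."""
    options = [VARIANTS.get(ch, ch) for ch in word.lower()]
    candidates = {"".join(c) for c in itertools.product(*options)}
    return sorted(candidates & LEXICON)

print(restore("covek"))   # ['čovek']
print(restore("macka"))   # ['mačka']
```

A real restorer must also handle candidates that are ambiguous in the lexicon (e.g. forms where both the plain and the diacritised variant exist), which is where context models come in.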

Q2.3: How can I annotate my texts for named entities?

  • On-line NER is available via the CLARIN.SI services documented here. You can also download the NER tool and use it locally.

Q2.4: How can I syntactically parse my texts?

You can syntactically parse Serbian texts in multiple ways:

3. Datasets to train Serbian annotation tools

Q3.1: Where can I get word embeddings for Serbian?

Embeddings trained on the srWaC web corpus are available as a dedicated embedding collection.

There are also collections of pretrained embeddings for Serbian available from fastText (Latin and Cyrillic scripts are mixed, as no transliteration was performed).

If you want to train your own embeddings, the largest freely available collection of Serbian texts is the srWaC corpus.
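Pretrained collections such as the fastText ones are typically distributed in fastText’s textual .vec format: a header line with the vocabulary size and dimensionality, then one word per line followed by its vector. A stdlib-only sketch for loading such a file and comparing two words by cosine similarity (the tiny in-memory “file” below is made up for illustration):

```python
import io
import math

def load_vec(fh):
    """Parse fastText .vec format: header 'n dim', then 'word v1 ... vdim'."""
    n, dim = map(int, fh.readline().split())
    vecs = {}
    for line in fh:
        parts = line.rstrip().split(" ")
        vecs[parts[0]] = [float(x) for x in parts[1:1 + dim]]
    return vecs

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Tiny made-up "file" in .vec format (2 words, 3 dimensions).
toy = io.StringIO("2 3\npas 0.1 0.2 0.3\nmačka 0.1 0.2 0.25\n")
vecs = load_vec(toy)
print(round(cosine(vecs["pas"], vecs["mačka"]), 3))   # prints 0.996
```

For real work you would of course use a library such as gensim rather than this sketch, but the format itself is simple enough to parse directly.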

Q3.2: What data is available for training a text normaliser for Serbian?

For training text normalisers for Internet Serbian the ReLDI-NormTagNER-sr dataset can be used.

Q3.3: What data is available for training a part-of-speech tagger for Serbian?

The reference dataset for training a standard tagger is SETimes.SR. There is also the ReLDI-NormTagNER-sr training dataset of Internet Serbian.
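Both datasets are distributed in (or easily converted to) CoNLL-U-style tab-separated format, in which column 2 holds the word form and column 4 the universal PoS tag. A minimal stdlib sketch for extracting (token, tag) training pairs from such a file (the two-token snippet is made up for illustration, including its XPOS tags):

```python
def read_conllu(lines, col=3):
    """Yield sentences as lists of (form, tag) pairs; column 3 is UPOS in CoNLL-U."""
    sent = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:                           # blank line ends a sentence
            if sent:
                yield sent
                sent = []
            continue
        if line.startswith("#"):               # skip comment lines
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:   # skip multiword and empty tokens
            continue
        sent.append((cols[1], cols[col]))
    if sent:
        yield sent

sample = [
    "# text = Pas laje.",
    "1\tPas\tpas\tNOUN\tNcmsn\t_\t2\tnsubj\t_\t_",
    "2\tlaje\tlajati\tVERB\tVmr3s\t_\t0\troot\t_\t_",
    "",
]
print(list(read_conllu(sample)))
# [[('Pas', 'NOUN'), ('laje', 'VERB')]]
```

Passing col=4 would instead return the language-specific (MULTEXT-East) tags from the XPOS column.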

Q3.4: What data is available for training a lemmatiser for Serbian?

Lemmatisers can be trained on the tagger training data (SETimes.SR, ReLDI-NormTagNER-sr; see the section on PoS tagger training for details), on the inflectional lexicon srLex, or on both.
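The core of a lexicon-based lemmatiser is a lookup table from (word form, morphosyntactic tag) to lemma. A sketch of this idea, assuming a srLex-style tab-separated file of form, lemma and MSD triples (the entries and MSD tags below are illustrative, not quoted from srLex):

```python
from collections import defaultdict

def build_lookup(lines):
    """Map (lower-cased form, MSD) -> set of lemmas from 'form<TAB>lemma<TAB>MSD' lines."""
    table = defaultdict(set)
    for line in lines:
        form, lemma, msd = line.rstrip("\n").split("\t")
        table[(form.lower(), msd)].add(lemma)
    return table

def lemmatise(table, form, msd):
    """Look up the lemma(s) for a tagged form, backing off to the form itself."""
    lemmas = table.get((form.lower(), msd))
    return sorted(lemmas) if lemmas else [form]

entries = [
    "psi\tpas\tNcmpn",
    "pse\tpas\tNcmpa",
    "žene\tžena\tNcfpn",
]
table = build_lookup(entries)
print(lemmatise(table, "psi", "Ncmpn"))    # ['pas']
print(lemmatise(table, "mačke", "Ncfpn"))  # ['mačke'] (unknown form backs off)
```

A trained lemmatiser is needed precisely for the back-off case: forms not covered by the lexicon, which is where the annotated corpora come in.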

Q3.5: What data is available for training a named entity recogniser for Serbian?

For training a named entity recogniser for standard language, SETimes.SR is the best resource. For training NER systems for online, non-standard texts, ReLDI-NormTagNER-sr can be used.

Q3.6: What data is available for training a syntactic parser for Serbian?

If you want to follow the Universal Dependencies formalism for dependency parsing, the best location for obtaining training data is the Universal Dependencies repository.

If you require additional annotation layers, e.g., for multi-task learning, the SETimes.SR dataset should be used.