Frequently asked questions for Slovene

(IN PREPARATION)

This FAQ is part of the documentation of the CLASSLA CLARIN knowledge centre for South Slavic languages. If you notice any missing or wrong information, please let us know at helpdesk.classla@clarin.si, Subject “FAQ_Slovene”.

The questions in this FAQ are organised into three main sections:

    1. Online Slovene language resources
      1. Where can I find Slovene dictionaries?
      2. How can I analyse Slovene corpora online?
      3. Which Slovene corpora can I analyse online?
      4. What linguistic annotation schemas are used in Slovene corpora?
      5. Where can I download Slovene resources?
    2. Tools to annotate Slovene texts
      1. How can I perform basic linguistic processing of my Slovene texts?
      2. How can I standardize my texts prior to further processing?
      3. How can I annotate my texts for named entities?
      4. How can I syntactically parse my texts?
    3. Datasets to train Slovene annotation tools
      1. Where can I get word embeddings for Slovene?
      2. What data is available for training a text normaliser for Slovene?
      3. What data is available for training a part-of-speech tagger for Slovene?
      4. What data is available for training a lemmatiser for Slovene?
      5. What data is available for training a named entity recogniser for Slovene?
      6. What data is available for training a syntactic parser for Slovene?

Online Slovene language resources

Q1.1 Where can I find Slovene dictionaries?

Below we list the main dictionary portals offered by CLARIN.SI partners:

Dictionaries by other providers:

  • Evroterm, a multilingual terminology database and a list of on-line dictionaries, by the Government of the Republic of Slovenia
  • Islovar, a terminological dictionary for the field of informatics, by the Slovene Society for Informatics
  • Wikislovar, the Slovene Wiktionary

Q1.2: How can I analyse Slovene corpora online?

CLARIN.SI offers access to two concordancers, which share the same set of (Slovene) corpora and back-end, but have different front-ends:

  • NoSketch Engine, an open-source variant of the well-known Sketch Engine. No registration is necessary or possible, which also has drawbacks, e.g. you cannot save your screen settings or create private subcorpora.
  • KonText, with a somewhat different user interface. Basic functionality is provided without logging in, but to use more advanced functionalities it is necessary to log in via AAI through your identity provider.

Documentation on how to query corpora via the SketchEngine-like interfaces is available here.

Note that the commercial Sketch Engine also offers access to several Slovene language corpora. Furthermore, for researchers in the EU, access to Sketch Engine is free for non-commercial purposes in 2018–2022.

Some Slovene corpora, especially those produced in the scope of the “Communication in Slovene” project, have their own specialised web concordancers; cf. the corpora listed in Q1.3.

Q1.3: Which Slovene corpora can I analyse online?

The main reference corpus for Slovene is Gigafida (1 billion words), which you can query via its specialized interface, via noSkE or KonText. Note that the corpus is also available in a version which has (near) duplicate paragraphs removed, cf. noSkE or KonText. A balanced subset of Gigafida is KRES (100 million tokens), which you can query via its specialized interface.

For a complete list of corpora available under CLARIN.SI concordancers see the index for noSkE or KonText. Below we list some of the important ones, with links to the noSketch Engine concordancer:

  • a general language corpus (apart from Gigafida) is slWaC, a large corpus (900 million tokens) of Slovene texts from the Web
  • specialized corpora include the corpus of academic writing KAS, the corpus of user-generated content Janes, the spoken corpus GOS, the corpus of historical Slovene IMP and the developmental corpus ŠOLAR
  • manually annotated corpora include the reference training corpus ssj500k, the corpus of historical Slovene goo300k, the corpus of user-generated content Janes-tag (manually annotated with morphosyntax, lemmas, and named entities) and Janes-norm (manually annotated with normalized forms).

Q1.4: What linguistic annotation schemas are used in Slovene corpora?

Most of these corpora are annotated on the level of morphosyntax with the MULTEXT-East tagset. On the level of syntax, two annotation schemes are used: a Slovene-specific one and the one developed by the Universal Dependencies project. The Universal Dependencies project also defines its own tagset for annotating morphosyntax, which is currently applied only in training corpora. Named entities are annotated following the guidelines developed for South Slavic languages.
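As a small illustration of how these annotation layers appear in practice, the sketch below reads one token line of a CoNLL-U file, the tab-separated format used by the Universal Dependencies project; the CoNLL-U format also carries a language-specific (MULTEXT-East-style) tag in its XPOS column. The token line itself is a constructed example, not taken from an actual corpus.

```python
# Minimal sketch: reading one token line of a CoNLL-U file,
# the format used by the Universal Dependencies project.
# The example line below is constructed for illustration.

CONLLU_FIELDS = ["id", "form", "lemma", "upos", "xpos",
                 "feats", "head", "deprel", "deps", "misc"]

def parse_token_line(line):
    """Split a tab-separated CoNLL-U token line into a field dict."""
    values = line.rstrip("\n").split("\t")
    return dict(zip(CONLLU_FIELDS, values))

# A constructed example: the Slovene word "hiša" (house) as a noun.
line = "1\thiša\thiša\tNOUN\tNcfsn\tCase=Nom|Gender=Fem|Number=Sing\t0\troot\t_\t_"
token = parse_token_line(line)
print(token["upos"], token["lemma"])  # the UD part of speech and the lemma
```

Here the UPOS column holds the Universal Dependencies part of speech, while the XPOS column holds the language-specific morphosyntactic tag.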

Q1.5 Where can I download Slovene resources?

The main point for archiving and downloading Slovene language resources is the repository of CLARIN.SI.


Tools to annotate Slovene texts

Q2.1 How can I perform basic linguistic processing of my Slovene texts?

Q2.2 How can I standardize my texts prior to further processing?

  • Currently, the only on-line text normalisation tool available through the CLARIN.SI services is the REDI diacritic restorer. The usage of the CLARIN.SI services is documented here. You can also download REDI, install it, and use it locally.
  • For word-level normalisation of e.g. historical and user-generated Slovene texts you can download and install the CSMTiser text normalizer.
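To illustrate what word-level normalisation does, the toy sketch below maps non-standard word forms to their standard equivalents via a small hand-made lexicon. This is only an illustration of the task: CSMTiser itself works with character-level machine translation rather than lexicon lookup, and the example mappings are hypothetical.

```python
# Toy illustration of word-level normalisation (NOT the CSMTiser
# algorithm, which uses character-level machine translation).
# The mapping below is a small, hand-made example lexicon.
NORMALISATION_LEXICON = {
    "jest": "jaz",   # colloquial -> standard (illustrative pairs)
    "tud": "tudi",
    "jutr": "jutri",
}

def normalise(tokens):
    """Replace each token by its standard form if the lexicon has one."""
    return [NORMALISATION_LEXICON.get(t, t) for t in tokens]

print(normalise(["jest", "sem", "tud", "tukaj"]))
# -> ['jaz', 'sem', 'tudi', 'tukaj']
```

A trained normaliser generalises beyond such a fixed list, which is why the datasets in Q3.2 are needed.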

Q2.3 How can I annotate my texts for named entities?

  • On-line NER is available via the CLARIN.SI services documented here. You can also download this NER tool and use it locally.
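NER tools commonly output one label per token in the IOB2 scheme (B- begins an entity, I- continues it, O is outside). The sketch below, with a constructed Slovene example, shows how such per-token labels are grouped back into entity spans; the exact label inventory of a given tool may differ.

```python
# Minimal sketch: grouping IOB2-labelled tokens into entity spans.
# Tokens and tags below are a constructed example of typical NER output.
def extract_entities(tokens, tags):
    """Collect (entity_text, entity_type) pairs from IOB2 tags."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity starts
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)           # the current entity continues
        else:                             # "O" ends any open span
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

tokens = ["Janez", "Novak", "živi", "v", "Ljubljani"]
tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(extract_entities(tokens, tags))
# -> [('Janez Novak', 'PER'), ('Ljubljani', 'LOC')]
```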

Q2.4 How can I syntactically parse my texts?

You can syntactically parse Slovene texts in multiple ways:


Datasets to train Slovene annotation tools

Q3.1 Where can I get word embeddings for Slovene?

The embeddings trained on the largest collection of Slovene textual data (Gigafida, slWaC, Janes, KAS, etc.) are those in the CLARIN.SI-embed.sl embedding collection.

There are also collections of trained embeddings for Slovene available from Sketch Engine and from fastText.

If you want to train your own embeddings, the largest freely available collection of Slovene texts is the Slovene portion of Common Crawl.
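Whichever collection you use, a common way to put word embeddings to work is cosine similarity between word vectors. The self-contained sketch below uses made-up 3-dimensional vectors purely for illustration; real embeddings such as those in CLARIN.SI-embed.sl or fastText typically have hundreds of dimensions.

```python
import math

# Toy 3-dimensional vectors for illustration only; the values are
# invented, not taken from any actual embedding collection.
vectors = {
    "hiša": [0.9, 0.1, 0.2],  # "house"
    "dom":  [0.8, 0.2, 0.1],  # "home"
    "avto": [0.1, 0.9, 0.7],  # "car"
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(vectors["hiša"], vectors["dom"]))   # high: related words
print(cosine(vectors["hiša"], vectors["avto"]))  # lower: unrelated words
```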

Q3.2 What data is available for training a text normaliser for Slovene?

For training text normalisers for Internet Slovene the Janes-norm dataset can be used. For normalising historical data, the goo300k dataset should be used.

Q3.3 What data is available for training a part-of-speech tagger for Slovene?

The reference dataset for training a standard tagger is ssj500k. There is also a silver-standard extension of the ssj500k dataset available, jos1M. There are also training datasets available for Internet Slovene (Janes-tag) and for historical Slovene (goo300k).

Q3.4 What data is available for training a lemmatiser for Slovene?

Lemmatisers can be trained on the tagger training data (ssj500k, jos1M, Janes-tag, goo300k; see the section on PoS tagger training for details) and/or on the inflectional lexicon Sloleks.
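An inflectional lexicon such as Sloleks pairs each inflected word form and its morphosyntactic description (MSD) with a lemma. The toy sketch below shows the basic lookup idea with a few hand-made entries; it is not the Sloleks data format, and a trained lemmatiser would additionally generalise to forms missing from the lexicon.

```python
# Toy sketch of lexicon-based lemmatisation. The entries below are a
# small hand-made illustration, not actual Sloleks records; keys are
# (word form, MULTEXT-East-style MSD tag) pairs.
LEXICON = {
    ("hiše", "Ncfpn"): "hiša",   # "hiše" as plural nominative of "hiša"
    ("hiše", "Ncfsg"): "hiša",   # "hiše" as singular genitive of "hiša"
    ("mest", "Ncnpg"): "mesto",  # "mest" as plural genitive of "mesto"
}

def lemmatise(form, msd):
    """Return the lemma for a (form, MSD) pair, or the form itself."""
    return LEXICON.get((form, msd), form)

print(lemmatise("hiše", "Ncfsg"))
# -> hiša
```

Note that the MSD tag is needed to disambiguate: the same form can belong to several cells of a paradigm, or even to different lemmas.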

Q3.5 What data is available for training a named entity recogniser for Slovene?

For training a named entity recogniser for standard language, ssj500k is the best resource. For training NER systems for online, non-standard texts, Janes-tag can be used. Finally, for training historical NER models, goo300k is the best resource.

Q3.6 What data is available for training a syntactic parser for Slovene?

If you want to follow the Universal Dependencies formalism for dependency parsing, the best location for obtaining training data is the Universal Dependencies repository.

For training parsers by following the Slovene-specific formalism, the ssj500k dataset should be used.