Files in this item

This item is publicly available and licensed under:
Creative Commons Attribution 4.0 International (CC BY 4.0)

Name: SemSEX.ttl
Size: 191.22 KB
Format: Unknown
Description: SemSEX ontology
MD5: 86af6728344f1434d537a9aacfabc22c

Name: concept_classifier_SloBerta.zip
Size: 363.71 MB
Format: application/zip
Description: SloBERTa-based classifier for classifying concepts
MD5: 1f8520a17579b1dec5e2f2fec18334b3
Archive contents (concept_classifier_SloBerta/):
  • config.json (1 kB)
  • training_args.bin (2 kB)
  • tokenizer_config.json (505 B)
  • tokenizer.json (2 MB)
  • special_tokens_map.json (298 B)
  • pytorch_model.bin (422 MB)
  • sentencepiece.bpe.model (781 kB)

Name: concept_classifier_CroSloEngual.zip
Size: 440.71 MB
Format: application/zip
Description: CroSloEngual BERT-based classifier for classifying concepts
MD5: f0340e83590c576ea86d4c5fba712180
Archive contents (concept_classifier_CroSloEngual/):
  • config.json (1 kB)
  • training_args.bin (2 kB)
  • tokenizer_config.json (370 B)
  • tokenizer.json (1 MB)
  • special_tokens_map.json (112 B)
  • pytorch_model.bin (473 MB)
  • vocab.txt (321 kB)
  • sentencepiece.bpe.model (781 kB)

Name: binary_classifier_SloBerta.zip
Size: 703.92 MB
Format: application/zip
Description: SloBERTa-based binary classifier for detecting concepts
MD5: 4a956f4ec4587806b774367b708dd958
Archive contents (binary_classifier_SloBerta/):
  • sentencepiece.bpe.model (781 kB)
  • pytorch_model.bin (422 MB)
  • tokenizer_config.json (505 B)
  • config.json (779 B)
  • training_args.bin (2 kB)
  • model.safetensors (387 MB)
  • tokenizer.json (2 MB)
  • vocab.txt (321 kB)
  • special_tokens_map.json (298 B)

Name: binary_classifier_CroSloEngual.zip
Size: 779.73 MB
Format: application/zip
Description: CroSloEngual BERT-based binary classifier for detecting concepts
MD5: 16aa8b4744091a201cbeeec679a6336a
Archive contents (binary_classifier_CroSloEngual/):
  • sentencepiece.bpe.model (781 kB)
  • pytorch_model.bin (473 MB)
  • tokenizer_config.json (370 B)
  • config.json (701 B)
  • training_args.bin (2 kB)
  • model.safetensors (387 MB)
  • tokenizer.json (1 MB)
  • vocab.txt (321 kB)
  • special_tokens_map.json (112 B)
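
The MD5 digests listed above can be used to check that a download completed intact. A minimal sketch in Python (the helper name `md5_hex` is illustrative, not part of the repository; the expected digests are copied from the listing):

```python
import hashlib

def md5_hex(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the hex MD5 digest of a file, reading it in chunks
    so that the multi-hundred-MB archives are not loaded into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests, as stated on the item page.
EXPECTED = {
    "SemSEX.ttl": "86af6728344f1434d537a9aacfabc22c",
    "concept_classifier_SloBerta.zip": "1f8520a17579b1dec5e2f2fec18334b3",
    "concept_classifier_CroSloEngual.zip": "f0340e83590c576ea86d4c5fba712180",
    "binary_classifier_SloBerta.zip": "4a956f4ec4587806b774367b708dd958",
    "binary_classifier_CroSloEngual.zip": "16aa8b4744091a201cbeeec679a6336a",
}

def verify(path: str, name: str) -> bool:
    """Return True if the file at `path` matches the published digest for `name`."""
    return md5_hex(path) == EXPECTED[name]
```

A mismatch indicates a truncated or corrupted download and the file should be fetched again.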