Common Voice Spontaneous Speech 2.0 - Rutoro
License: CC0-1.0
Steward: Common Voice
Task: ASR
Release Date: 12/5/2025
Format: MP3
Size: 272.63 MB
Description
A collection of spontaneous spoken phrases in Rutoro.
Specifics
Considerations
Forbidden Usage
It is forbidden to attempt to determine the identity of speakers in the Common Voice datasets. It is forbidden to re-host or re-share this dataset.
Processes
Intended Use
This dataset is intended to be used for training and evaluating automatic speech recognition (ASR) models. It may also be used for applications relating to computer-aided language learning (CALL) and language or heritage revitalisation.
Metadata
Rutoro — Rutoro (ttj)
This datasheet has been generated automatically. We would love to include more information, so if you would like to help out, get in touch!
This datasheet is for version 2.0 of the Mozilla Common Voice Spontaneous Speech dataset
for Rutoro (ttj). The dataset contains 3113 clips representing 17 hours of recorded
speech (11 hours validated) from 26 speakers.
Data splits for modelling
| Split | Count |
|---|---|
| Train | 1279 |
| Test | 254 |
| Dev | 349 |
Transcriptions
| Metric | Value |
|---|---|
| Prompts | 120 |
| Duration | 16:46:26 [h:m:s] |
| Avg. transcription length | 200 |
| Avg. duration | 19.4 [s] |
| Valid duration | 36131.184 [s] |
| Total hours | 16.77 [h] |
| Valid hours | 10.04 [h] |
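The headline figures above are internally consistent, which can be checked with a few lines of arithmetic. The values below are copied from this datasheet; only the arithmetic itself is being verified:

```python
# Sanity-check the reported statistics against each other.
total_s = 16 * 3600 + 46 * 60 + 26   # 16:46:26 [h:m:s] -> seconds
clips = 3113                          # total clip count
valid_s = 36131.184                   # valid duration in seconds

total_h = total_s / 3600              # reported as 16.77 h
avg_clip_s = total_s / clips          # reported as 19.4 s
valid_h = valid_s / 3600              # reported as 10.04 h

print(f"{total_h:.2f} h total, {avg_clip_s:.1f} s/clip, {valid_h:.2f} h valid")
```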
Samples
Questions
What follows is a randomly selected sample of questions used in the corpus.
Okora ki otakabyamire?
Kovidi ekatabangura eta bbiizinesi entaito omu Uganda?
Bulemeezi ki oburukurugirra mu kwetaba mu bikoosi ebibi omu kicweka kyaitu?
Habwaki kintu kikuru kuhuliiriza abantu abakuru?
Butumwa ki obu orukusobora kuheereza omuzaire owaine omwana owʼobwojo owagenzire omu bisimba?
Responses
What follows is a randomly selected sample of transcribed responses from the corpus.
Obu mba ntakabyamire mbanza ncumba ebyokulya ndya obu mmara kulya ndoraho obuzaano ha tiivi obumu nkira kurora amakuru obu mmara nyara ekitabu nsaba esaara nukwo mbyama
Kovidi ekatabangura bbiizinesi entaito mu Uganda obu abantu baalemerwe kutunga sente bakagura kandi bakalima n'okusuubura
Kwetaba mu bikoosi ebibi mu kicweka kyange niharugamu engeso ezibi nk'okunywa enjaahi kunywa amarwa mairungi kurwana n'okutaha bantu ekitiinisa niharugamu n'obusuma
Abantu bakuru nibaba baine ebintu bingi ebi bamanyire na habw'eki obu obahuliriza noosobora kubeegeraho ebintu ebirungi nibakuhabura engeso ez'omu bantu kandi nibakuheereza n'emiringo ei osobora okwerindiramu endwaire kwerinda kuba muntu w'omugaso mu nsi munu
Amwete abanze abaze na abaze nawe nk'omuzaire kakuba kirema amwetere abeebembezi babaze nawe basobole okumuhabura
Fields
Each row of a TSV file represents a single audio clip and contains the following fields:

- `client_id` - hashed UUID of a given user
- `audio_id` - numeric id for the audio file
- `audio_file` - audio file name
- `duration_ms` - duration of the audio in milliseconds
- `prompt_id` - numeric id for the prompt
- `prompt` - question shown to the user
- `transcription` - transcription of the audio response
- `votes` - number of people who approved a given transcript
- `age` - age of the speaker¹
- `gender` - gender of the speaker¹
- `language` - language name
- `split` - which modelling subset of the data this clip pertains to
- `char_per_sec` - characters of transcription per second of audio
- `quality_tags` - automated assessments of the transcription–audio pair, separated by `|`:
  - `transcription-length` - under 3 characters per second
  - `speech-rate` - over 30 characters per second
  - `short-audio` - audio shorter than 2 seconds
  - `long-audio` - audio longer than 30 seconds
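As a sketch of how these fields might be consumed, the snippet below reads a split out of a TSV file with Python's standard `csv` module and re-derives the quality tags from the documented thresholds. The file path is a placeholder, and only the column names and thresholds come from the field list above:

```python
import csv

def quality_tags(duration_ms: int, transcription: str) -> list[str]:
    """Re-derive the documented quality tags for one clip."""
    tags = []
    cps = len(transcription) / (duration_ms / 1000)  # chars per second
    if cps < 3:
        tags.append("transcription-length")
    if cps > 30:
        tags.append("speech-rate")
    if duration_ms < 2000:
        tags.append("short-audio")
    if duration_ms > 30000:
        tags.append("long-audio")
    return tags

def load_split(path: str, split: str) -> list[dict]:
    """Return the rows of a dataset TSV belonging to one modelling split."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f, delimiter="\t")
                if row["split"] == split]

# e.g. train_rows = load_split("ttj.tsv", "train")  # path is hypothetical
```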
Get involved!
Community links
Contribute
Acknowledgements
Funding
This dataset was partially funded by the Open Multilingual Speech Fund managed by Mozilla Common Voice.
Licence
This dataset is released under the Creative Commons Zero (CC0 1.0) licence. By downloading this data, you agree not to attempt to determine the identity of speakers in the dataset.
Footnotes
¹ For a full list of age, gender, and accent options, see the demographics spec. These will only be reported if the speaker opted in to provide that information.
