Common Voice Spontaneous Speech 3.0 - Toba Qom

License:

CC0-1.0

Steward:

Common Voice

Task: ASR

Release Date: 3/22/2026

Format: MP3

Size: 173.39 MB


Description

A collection of spontaneous responses to questions in Toba Qom (tob).

Specifics

Licensing

Creative Commons Zero v1.0 Universal (CC0-1.0)

https://spdx.org/licenses/CC0-1.0.html

Considerations

Restrictions/Special Constraints

None provided.

Forbidden Usage

It is forbidden to attempt to determine the identity of speakers in the Common Voice datasets. It is forbidden to re-host or re-share this dataset.

Processes

Intended Use

This dataset is intended to be used for training and evaluating automatic speech recognition (ASR) models. It may also be used for applications relating to computer-aided language learning (CALL) and language or heritage revitalisation.

Metadata

tob — Toba Qom (tob)

This datasheet is for sps-corpus-3.0-2026-03-09 of the Mozilla Common Voice Spontaneous Speech dataset for Toba Qom (tob). The dataset contains 1,572 clips representing 10 hours of recorded speech (9.67 hours validated) from 25 speakers.

Language

The Toba Qom language is an endangered language spoken in the Gran Chaco, a region spanning Argentina, Paraguay, and Bolivia. According to official demographic data from the Argentinian state, the Qom population is estimated at 80,000, of whom approximately 49% speak the oral form of the language. The term "qom" describes a population traditionally organised into multiple extended families or groups. These groups, traditionally hunter-gatherer, share the language and sociocultural traits that are essential to Qom culture.

The contributors to this corpus originate from Chaco and Formosa provinces in Argentina. This area encompasses four ethnodialectal subregions with distinct self-identification terms (Messineo, 1991) [^3].

| Area | Province | Locations | Variant (self-identification) |
| --- | --- | --- | --- |
| Northwest | Chaco | El Colchwón, El Espinillo and the Bermejo river’s surroundings | dapigemlʔek |
| Northcenter | Chaco | Pampa del Indio | noʔolgaGanaq |
| Southcenter | Chaco | Sáenz Peña, Machahay, Quitilipi | lʔañaGashek |
| Southeast | Chaco, Eastern Formosa | Las Palmas, Clorinda | takshek |

For further information, see [^2] [^3] [^4].

Data splits for modelling

The dataset clips are categorised by transcription status and training-set assignment. The following tables summarise the distribution.

Audio clips

| Bucket | Clips | % |
| --- | --- | --- |
| Transcribed & Validated | 1,540 | 98.0% |
| Transcribed & Pending | 0 | 0.0% |
| Not transcribed | 32 | 2.0% |

Training splits

| Bucket | Clips | % |
| --- | --- | --- |
| Train | 939 | 59.7% |
| Dev | 197 | 12.5% |
| Test | 404 | 25.7% |
| Unassigned | 32 | 2.0% |

Training split coverage: 1,540 of 1,540 transcribed & validated clips (100.0%)
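The split percentages above can be recomputed from the per-clip `split` field described under "Fields". A minimal sketch, assuming a TSV laid out as documented there (the inline excerpt and its values are illustrative, not real corpus rows):

```python
import csv
import io
from collections import Counter

# Illustrative TSV excerpt; the real per-language files carry many more
# columns (client_id, audio_file, transcription, ...) and one row per clip.
sample_tsv = (
    "audio_id\tsplit\n"
    "1\ttrain\n"
    "2\ttrain\n"
    "3\tdev\n"
    "4\ttest\n"
)

def split_distribution(tsv_text):
    """Count clips per training split and compute rounded percentages."""
    rows = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    counts = Counter(row["split"] for row in rows)
    total = sum(counts.values())
    pcts = {k: round(100 * v / total, 1) for k, v in counts.items()}
    return counts, pcts

counts, pcts = split_distribution(sample_tsv)
print(counts["train"], pcts["train"])  # 2 50.0
```

Running the same tally over the full Toba Qom TSV should reproduce the table above (939/197/404/32 clips).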

Transcriptions

Transcription status

| Bucket | Clips | % |
| --- | --- | --- |
| Validated | 1,540 | 100.0% |
| Pending | 0 | 0.0% |
| Edited | 540 | 35.1% |

Writing system

The transcriptions follow the orthographic system proposed by Buckwalter (2001) [^2].

Symbol table

a c ch d e g hu i j l ll m n ñ o p q qu r s sh t u v x y ỹ ’
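As a rough validation aid, the symbol table can be turned into a character-level check. A sketch, assuming the multigraphs (ch, hu, ll, qu, sh) decompose into the single letters below; the function name is illustrative:

```python
# Single characters implied by the symbol table above; the multigraphs
# ch, hu, ll, qu, sh contribute h plus letters already listed.
ALPHABET = set("acdeghijlmnñopqrstuvxyỹ’ ")

def uses_orthography(text: str) -> bool:
    """True if every character of `text` (lowercased) is in ALPHABET."""
    return set(text.lower()) <= ALPHABET

print(uses_orthography("qomi’ ñaq"))  # True
```

Note that the question prompts in the samples below also use Spanish punctuation (¿ ?), which this check would flag; it is aimed at the transcribed responses.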

Samples

Questions

There follows a randomly selected sample of questions used in the corpus.

  1. Negue't ca 'auo'ot da ivita ca 'adañaxoqui?

  2. ¿Qonetec naxa da mashe qoỹoqtegue da sa anauace’ ca ’anauochaxaua?

  3. ¿Negue’t na l’ashaxac na ñaqpiolec yi ’adma’?

  4. ¿’Eetec na ñicpi?

  5. ¿Negue’t aca nhuoshaxaqui da qai’ot na nhuoshec?

Responses

There follows a randomly selected sample of transcribed responses from the corpus.

  1. yi sotaa' huotel lalaxat so iataqta lta'araic

  2. Aiem saq saxañi ra siotape na iuaxaye cha'aye maiche iataxac

  3. Mashe ivi' cuarenta vi'iyi da sooto'ot aso iua yiqopita cha'aye onataxanaxai qataq ýaýaten da ilo'ogue na qoyalaqpi yiqopita cha'aye qomi' ñaq nsoxodolqa can sadonaxa't

  4. Aiem yocopita so iguaa chaye cansaronoga nache comi nsorolco'

  5. Añi laỹi ana nmeenapi huetaigui na iotta'a ra iachaxan nmeenapi qataq qaiachaxan na ashaxaicpi aiem ñimeten ra semetetac enauac na huetaigui aña'añi nmeena laỹi

Recommended post-processing

To be updated in the next release. Contact the author for details.

Fields

Each row of a TSV file represents a single audio clip and contains the following fields:

  • client_id - hashed UUID of a given user

  • audio_id - numeric id for audio file

  • audio_file - audio file name

  • duration_ms - duration of audio in milliseconds

  • prompt_id - numeric id for prompt

  • prompt - question for user

  • transcription - transcription of the audio response

  • votes - number of people who approved a given transcript

  • age - age of the speaker¹

  • gender - gender of the speaker¹

  • language - language name

  • split - for data modelling, the subset of the data this clip belongs to

  • char_per_sec - how many characters of transcription per second of audio

  • quality_tags - automated assessments of the transcription–audio pair, separated by |

    • transcription-length - fewer than 3 characters of transcription per second

    • speech-rate - more than 30 characters of transcription per second

    • short-audio - audio shorter than 2 seconds

    • long-audio - audio longer than 5 minutes
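The thresholds above can be expressed as a small tagging function. A sketch; only the thresholds come from the list above, while the function name and the exact derivation used upstream are assumptions:

```python
def quality_tags(duration_ms: int, transcription: str) -> str:
    """Derive the automated quality tags listed above, joined by '|'."""
    seconds = duration_ms / 1000
    char_per_sec = len(transcription) / seconds if seconds else 0.0
    tags = []
    if char_per_sec < 3:
        tags.append("transcription-length")  # under 3 chars/second
    if char_per_sec > 30:
        tags.append("speech-rate")           # over 30 chars/second
    if duration_ms < 2_000:
        tags.append("short-audio")           # under 2 seconds
    if duration_ms > 5 * 60 * 1_000:
        tags.append("long-audio")            # over 5 minutes
    return "|".join(tags)

print(quality_tags(1_000, "so"))  # transcription-length|short-audio
```

A one-second clip with a two-character transcription, as in the call above, trips both the transcription-length and short-audio checks.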

Get involved

Community links

Discussions

Contribute

Acknowledgements

Datasheet authors

Citation guidelines

B. Ticona, P. Cuneo, A. Anastasopoulos. “Datasheet of Spontaneous Speech Corpus for Qom - Mozilla Common Voice”. Revised on Aug 29th, 2025. [Publication Date].

Funding

This dataset was partially funded by the Open Multilingual Speech Fund managed by Mozilla Common Voice.

The speaker collaborators were funded by Mozilla Common Voice. The project coordinator was partially funded by the US NSF grants 2346334 and 2439202.

Licence

This dataset is released under the Creative Commons Zero (CC0-1.0) licence. By downloading this data you agree not to attempt to determine the identity of speakers in the dataset.

Footnotes

  1. For a full list of age, gender, and accent options, see the demographics spec. These will only be reported if the speaker opted in to provide that information.