Revision as of 14:41, 9 March 2021
Lingua Libre Bot (aka LiLiBot or LLBot) is passionate about audio recordings, languages and Wikimedia projects. Every day, it adds Lingua Libre's latest audio recordings to the relevant pages on various Wikimedia projects.
Because of the large number of recordings it adds every day, LiLiBot needs to obtain bot status on each wiki it works on in order to contribute safely. As of today, the bot is allowed and able to contribute to four Wikimedia projects.
YOU can help LiLiBot pursue its mission! Follow the guidelines on this page to request LiLiBot on your wiki!
This page serves as a request page for Lingua Libre Bot on specific wikis.
Copy and adapt this template to your needs, then paste it in a new section at the bottom of this page:
== Bot request for the {language} Wiktionary ==
{{Bot steps}}
* '''Example pages (≥3):''' a few links to your Wiktionary's pages that are examples of the usual page structure.
* '''Target section:''' title of the section in which the recordings should be listed (on the French Wiktionary, this is {{S|prononciation}})
* '''Local audio template(s) example(s):''' an example of how the audio recording template is used, e.g. {{deng|en|en-us-apple.ogg|Deng (DYA)}}
* '''Local audio template(s) explained:''' explain the various parameters of that template (especially if the documentation of your template is not available in English or French)
** {Deng} means "audio", and takes the following parameters...
** <code>en</code> is the ISO 639-2 code of the audio.
** <code>en-us-apple.ogg</code> is the filename
** <code>Deng (DYA)</code> means audio (deng) and USA (DYA), which is the local variant or accent.
* '''Edit summary text:''' the summary text you would like to be displayed on your wiki when Lingua Libre Bot adds an audio file.
* Request by: ~~~~
You can also read and publish useful information about bots in general on the current collaborative page.
Requests
Add your request below, following this template:
== Bot request for {language} Wiktionary ==
* '''Example pages (3):''' [[:ku:wikt:Apple]], [[:ku:wikt:Pomme]] - You can see best audio integration there.
* '''Target section:''' the audio file should be added at the end of the <code>==Bilêvkirin==</code> section, which means ...
* '''Local audio template(s) example(s):''' {{deng|en|en-us-apple.ogg|Deng (DYA)}}
* '''Local audio template(s) explained:'''
** {Deng} means "audio", and takes the following parameters...
** <code>en</code> is the ISO 639-2 code of the audio.
** <code>en-us-apple.ogg</code> is the filename
** <code>Deng (DYA)</code> means audio (deng) and USA (DYA), which is the local variant or accent.
* Request by: ~~~~
Bot request for ku.wiktionary
- Example pages (3): ku:wikt:beran, ku:wikt:başûr, ku:wikt:keskesor - You can see the best audio integration there.
- Target section: The audio file should be added at the end of the === Bilêvkirin === section, which means "Pronunciation". If there is no === Bilêvkirin === section on the page, please create one after the language section, that is == {{ziman|<lang code>}} ==. If there is no language section, the audio file should not be added. (See the placement sketch at the end of this thread.)
- Local audio template(s) example(s): {{deng|ku|LL-Q36368 (kur)-Mihemed Qers-keskesor.wav|Deng|dever=Qers}}
- Local audio template(s) explained:
  - {Deng} is the template name, which means "audio", and takes the following parameters...
  - ku is the lang code from ISO 639-1 of the audio; ISO 639-3 and ISO 639-2 are also in use.
  - LL-Q36368 (kur)-Mihemed Qers-keskesor.wav is the filename
  - Deng means audio, and should always be present.
  - |dever= means place of origin; it can be a local variant or accent, or a country or city name. In the example, "Qers" is the Kurdish name for the city en:Kars.
- Request by: Balyozxane (talk) 04:05, 22 February 2021 (UTC)
- Here are two examples [1], [2]. If there are multiple part-of-speech sections, we still collect all the audio files at the top of the page, like this [3]. The |dever= parameter should fetch the Kurdish names for places from Wikidata if possible. Lingua Libre uses the "kur" code for Kurdish, but we use "ku" and sometimes "kmr" on ku.wikt. Even when the language code in the language section is "kmr", the lang code in {{deng|<lang code>}} should be "ku". I think that's all I can remember. Any questions? --Balyozxane (talk) 00:26, 21 February 2021 (UTC)
- You can also take a look at this page [4] for guidance. --Balyozxane (talk) 00:47, 21 February 2021 (UTC)
- @Balyozxane, your last link is a diff, is that normal? Also, can you reformat your request a bit so it follows the template above? You can also allow me to edit your text and I will happily do it. cc: user:Poslovitch. Yug (talk) 18:52, 21 February 2021 (UTC)
- @Yug The last link was only an example, to show there are other variants, but the first two are the desired outcome from LiLiBot. Feel free to correct my use of the template as much as you like. Balyozxane (talk) 04:05, 22 February 2021 (UTC)
- @Balyozxane Thanks! I'll get to work ASAP. I'll notify you once I'm ready to test the bot ;) --Poslovitch (talk) 13:19, 23 February 2021 (UTC)
- Thank you! Balyozxane (talk) 08:00, 24 February 2021 (UTC)
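For illustration only, here is a minimal sketch of how a bot could apply the placement rules requested above (append at the end of === Bilêvkirin ===, create that section after the == {{ziman|...}} == heading if missing, skip pages without a language section, and map Lingua Libre's "kur" to "ku"). The function and variable names are illustrative, not LiLiBot's actual code, and the page wikitext is assumed to be fetched already.

// Sketch: insert a {{deng}} audio line into ku.wiktionary wikitext, following
// the placement rules described in this request. Illustrative only.
function addKuAudio(wikitext, filename, dever) {
  // Per the follow-up note above, Lingua Libre's "kur" code maps to "ku" in {{deng}}.
  const audioLine = '{{deng|ku|' + filename + '|Deng' + (dever ? '|dever=' + dever : '') + '}}';
  const lines = wikitext.split('\n');

  const pronIdx = lines.findIndex(l => /^===\s*Bilêvkirin\s*===/.test(l));
  const langIdx = lines.findIndex(l => /^==\s*\{\{ziman\|/.test(l));

  if (pronIdx !== -1) {
    // Add at the end of the === Bilêvkirin === section: right before the next
    // heading, or at the end of the page if it is the last section.
    let end = pronIdx + 1;
    while (end < lines.length && !/^==/.test(lines[end])) end++;
    lines.splice(end, 0, audioLine);
  } else if (langIdx !== -1) {
    // No pronunciation section: create one just after the language heading
    // (a simplification; the real bot may need to skip other content first).
    lines.splice(langIdx + 1, 0, '=== Bilêvkirin ===', audioLine);
  } else {
    // No language section at all: per the request, do not add the audio file.
    return wikitext;
  }
  return lines.join('\n');
}

// Example call (the Wikidata lookup of Kurdish place names for |dever= is left out):
// addKuAudio(pageText, 'LL-Q36368 (kur)-Mihemed Qers-keskesor.wav', 'Qers');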
Connection via OAuth and bots for Unilex list editing
@Olaf & Poslovitch Hello folks. I'm having some connection issues with my WikiAPI (JS) code when connecting to LinguaLibre. Is there something special to do to connect my bot so it can edit Lili? As a human using Chrome, being logged in to Commons alone doesn't log you in to LinguaLibre. We have to come here, click login, which sends an OAuth query (I guess), checks my login status on Commons, then does something so I'm logged in to both Commons and LinguaLibre. I suspect some additional OAuth query is needed inside my bot. Yug (talk) 21:22, 1 March 2021 (UTC)
- Normally the login procedure here is very complicated (mw:OAuth/For_Developers) and I've never managed to implement it. However, if you use a bot account, you can create a password in Special:BotPasswords and then log in directly on the Lingua Libre wiki, without Commons (see the sketch at the end of this thread). Alternatively, you can use one of the JS frameworks to log in. Finally, if you are logged in manually in the browser, the authorization proof should be in cookies, so JS scripts running in the browser should work fine. Olaf (talk) 21:19, 1 March 2021 (UTC)
- Special:BotPasswords/Dragons_Bot. Progress underway. Thank you.
- I see:
Allowed IP ranges: 0.0.0.0/0 ::/0
- Any explanation for this? Dragons Bot (talk) 21:30, 1 March 2021 (UTC)
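For reference, a minimal sketch of the bot-password login flow Olaf describes, using the standard MediaWiki action API (a login token from action=query&meta=tokens, then action=login). It assumes Node.js 18.14+ for fetch and getSetCookie(), and that the Lingua Libre API is served at /api.php; the account name and password are placeholders.

// Sketch of a bot-password login on Lingua Libre via the MediaWiki action API.
// Assumes Node.js 18.14+ (global fetch, Headers.getSetCookie) and that the API
// lives at /api.php; adjust the path if the wiki serves it elsewhere.
const API = 'https://lingualibre.org/api.php';

function keepCookies(res) {
  // Reduce the Set-Cookie headers to a Cookie header value for the next request.
  return res.headers.getSetCookie().map(c => c.split(';')[0]).join('; ');
}

async function botLogin(lgname, lgpassword) {
  // 1) Fetch a login token with an anonymous request.
  const tokenRes = await fetch(API + '?action=query&meta=tokens&type=login&format=json');
  const cookies = keepCookies(tokenRes);
  const lgtoken = (await tokenRes.json()).query.tokens.logintoken;

  // 2) Post the Special:BotPasswords credentials with that token and the session cookies.
  const body = new URLSearchParams({ action: 'login', format: 'json', lgname, lgpassword, lgtoken });
  const loginRes = await fetch(API, { method: 'POST', body, headers: { Cookie: cookies } });
  const data = await loginRes.json();
  if (!data.login || data.login.result !== 'Success') {
    throw new Error('Login failed: ' + JSON.stringify(data));
  }
  // Return the authenticated session cookies to reuse on later edit requests.
  return keepCookies(loginRes);
}

// Placeholder credentials created via Special:BotPasswords:
botLogin('Dragons Bot@listbot', 'BOT_PASSWORD').then(() => console.log('Logged in'));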
Lists: approach and limits
The Dragons Bot script is ready to run. A test batch is visible on Special:Contributions/Dragons_Bot.
I propose the following ranges of words for list creation:
var ranges = [
  [ '00001', '00200' ], // 1) 'List:Ibo/Most_used_words,_UNILEX_1:_words_00001_to_00200'
  [ '00201', '01000' ], // 2) 'List:Ibo/Most_used_words,_UNILEX_2:_words_00201_to_01000'
  [ '01001', '02000' ], // …
  [ '02001', '05000' ], // 4) ← 1st threshold
  [ '05001', '10000' ], // …
  [ '10001', '15000' ],
  [ '15001', '20000' ],
  [ '20001', '25000' ],
  [ '25001', '30000' ], // 9) ← 2nd threshold
  [ '30001', '35000' ],
  [ '35001', '40000' ],
  [ '40001', '45000' ],
  [ '45001', '50000' ]
];
I deliberately created a smooth ramp to onboard newcomers. From my tests, 200 is a nice balance to start with: it is gently ambitious and represents about 10 minutes of work. It is typically the kind of list size I was looking for when demoing at in-real-life events with new users. They can record just 20 words if they wish, but the length of 200 encourages them to keep going and to try out the productive LinguaLibre flow, which appears after 20~30 words but requires about 50 words to "see the power" of LinguaLibre.
As for the depth, I first thought of the following rule:
// `corpus-limit`:
// - default: x = 5000
// - active:  x = 30000
// - rule: if recordings > 2000 according to https://lingualibre.org/wiki/LinguaLibre:Stats, then `active`.
With that rule, our 17 most active languages get 30,000 words via 9 files. All others get 5,000 words via 4 files.
But after some thought I'm wondering whether this first limit of 5,000 is too small. It allows good onboarding, but nothing beyond that. I'm still waiting a bit. Yug (talk) 22:22, 4 March 2021 (UTC)
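For illustration, a minimal sketch of how that corpus-limit rule could select which of the ranges above to generate for a given language. It assumes the `ranges` array defined above; the recordings count would come from LinguaLibre:Stats, and the function name is illustrative.

// Sketch: choose how deep into the Unilex frequency list to go for one language.
// `ranges` is the array proposed above; `recordingsCount` would come from
// https://lingualibre.org/wiki/LinguaLibre:Stats (here it is just a parameter).
function rangesForLanguage(ranges, recordingsCount) {
  // corpus-limit rule: "active" languages (> 2000 recordings) get 30,000 words,
  // all others get the default 5,000 words.
  const corpusLimit = recordingsCount > 2000 ? 30000 : 5000;
  return ranges.filter(range => parseInt(range[1], 10) <= corpusLimit);
}

// With the ranges above: an active language gets the first 9 ranges (up to
// word 30,000); a smaller one gets the first 4 ranges (up to word 5,000).
console.log(rangesForLanguage(ranges, 5200).length); // 9
console.log(rangesForLanguage(ranges, 150).length);  // 4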
Imported list names ?
- @Pamputt, I derived a new `iso639-3` column on the left from the IETF column on the right. These `iso639-3` codes will provide the Iso in List:{Iso}/{Title}{range}. But I often didn't know the `iso639-3` version, so I kept the IETF tag (the ones with a -). Could you review languages.js and send me any corrections? Or is it OK if I use those? (I don't think so; the Record Wizard will have difficulties finding them.) Yug (talk) 22:28, 4 March 2021 (UTC)
- About two-letter codes, you can find the "equivalent" ISO 639-3 code using this Wikipedia page. For example, "ae-Latn" corresponds to "ave" on LinguaLibre. Pamputt (talk) 06:46, 5 March 2021 (UTC)
- Yes, I need to be sure I'm converting correctly from composite IETF tags to the current convention (ISO 639-3, right?). Thanks for the lead on `ave`, I will check those. I have gathered below the list of items I'm confused by.
{ 'iso639-3':'ave', file:'ae-Latn' },
{ 'iso639-3':'', file:'be-tarask' },
{ 'iso639-3':'', file:'blt-Latn' },
{ 'iso639-3':'', file:'ca-valencia' },
{ 'iso639-3':'', file:'ctd-Latn' },
{ 'iso639-3':'', file:'el-Latn-u-sd-it75' },
{ 'iso639-3':'', file:'gsw-u-sd-chag' },
{ 'iso639-3':'', file:'gsw-u-sd-chbe' },
{ 'iso639-3':'', file:'gsw-u-sd-chfr' },
{ 'iso639-3':'', file:'kab-Arab' },
{ 'iso639-3':'', file:'kab-Tfng' },
{ 'iso639-3':'', file:'rm-puter' },
{ 'iso639-3':'', file:'rm-rumgr' },
{ 'iso639-3':'', file:'rm-surmiran' },
{ 'iso639-3':'', file:'rm-sursilv' },
{ 'iso639-3':'', file:'rm-sutsilv' },
{ 'iso639-3':'', file:'rm-vallader' },
{ 'iso639-3':'', file:'sr-Latn' },
{ 'iso639-3':'', file:'vec-u-sd-itpd' },
{ 'iso639-3':'', file:'vec-u-sd-itts' },
{ 'iso639-3':'', file:'vec-u-sd-itvr' },
- Do we have a naming convention for cases like gsw, rm and vec, which each have several sub-elements? Should I do List:{gsw}/u-sd-chag/{title}? I will return here later to complete all those I can. Yug (talk)
- What I know is that Lingua Libre uses ISO 639-3 to identify the language in the lists, so we should use pure ISO 639-3 codes to name them. Let us take rm as an example: its ISO 639-3 code is "roh", and the text after the hyphen is used to discriminate the dialects. On LinguaLibre we can create a list for a given dialect, but it should be named something like "List:Roh/Puter-namelist", or maybe "List:Roh/Puter/Namelist"; I have not checked how a list named following the latter proposal would behave. The same remark applies to IETF codes such as "kab-Arab": in that case "Arab" refers to the script, so we could name the list "List:Kab/Arab/namelist", for example (see the sketch at the end of this thread). Pamputt (talk) 08:59, 5 March 2021 (UTC)
- It's a more general question: what should the resulting recording files look like? For example, be-tarask is standard Belarusian but written with the Latin script instead of Cyrillic. Still, most Wiktionaries consider it a separate language (example: wikt:fr:Catégorie:biélorusse_(tarashkevitsa)). If the LiLi bot is supposed to attach the recordings properly, the language should have a separate ISO code in LiLi; you can't just put it as a version of bel. But LiLi has only one Belarusian language code defined. sr-Latn is a Latinized version of Serbian, but the French Wiktionary puts it in the same bag as Cyrillic (wikt:fr:Catégorie:serbe), on the Polish Wiktionary we allow only the Cyrillic script for Serbian, and on the English Wiktionary everything is grouped with Croatian as Serbo-Croatian. Total mess. A few other codes are also different script versions of standard languages. gsw-*, on the other hand, are various dialects of Swiss German; I believe they are all treated in Lingua Libre and the Wiktionaries as dialects of German (deu). Perhaps the code gsw could also be created here, but it isn't. rm-* are dialects of the Romansh language; LiLi treats them as one language, roh.
- In general, if we want to have rare languages on board, they should be defined here first. It's not enough to make a list if you can't select the proper language while recording. Maybe you should import only lists for the languages defined in LiLi? Olaf (talk) 09:26, 5 March 2021 (UTC)
- Special:RecordWizard's Step 3, Details (which should be called List, IMHO), does two things:
  - List picking: it seems to load the list via a simple search by name. The list's name (and its iso prefix) does NOT influence the recordings.
  - « You record words in: {pick your language} »: this defines how the words' Qids will be tagged, imported to Commons, and categorized.
- You can load List:Mar/wiktionary and pick the language Japanese; your recordings' Qitems will then have iso639-3 = jpn.
- So, for today's case (list creation), I just need my lists to start with a recognizable iso639-3 so they show up properly.
- The question of languages is a Wikidata/LanguageImporter issue.
- I'm cognitively tired from these past coding days, so I will simply not upload those composite-name languages for now. But it remains a practical question, with implications and side effects (Wikidata, Wiktionary), to think about. Yug (talk) 21:36, 5 March 2021 (UTC)
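As an illustration of the naming convention discussed in this thread (List:{Iso}/{Variant}/{Title}, per Pamputt's proposal), here is a minimal sketch of how list titles could be derived from the languages.js entries above, should those composite tags be imported later. The helper names are illustrative, and the iso639-3 values in the examples are taken from the discussion above.

// Sketch: derive a list title from a languages.js entry, following the
// List:{Iso}/{Variant}/{Title} convention proposed above. Illustrative only.
const capitalize = s => s.charAt(0).toUpperCase() + s.slice(1);

function listTitle(entry, title) {
  if (!entry['iso639-3']) return null; // tag not yet mapped to ISO 639-3: skip it

  // Whatever follows the primary IETF subtag becomes the variant segment,
  // e.g. 'rm-puter' -> 'Puter', 'kab-Arab' -> 'Arab'; plain codes have none.
  const variant = entry.file.split('-').slice(1).join('-');
  const iso = capitalize(entry['iso639-3']);
  return variant
    ? 'List:' + iso + '/' + capitalize(variant) + '/' + title
    : 'List:' + iso + '/' + title;
}

console.log(listTitle({ 'iso639-3': 'roh', file: 'rm-puter' }, 'Namelist'));
// -> 'List:Roh/Puter/Namelist'
console.log(listTitle({ 'iso639-3': 'kab', file: 'kab-Arab' }, 'Most_used_words'));
// -> 'List:Kab/Arab/Most_used_words'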
Bots ?
Let's welcome User:Babel AutoCreate (t•c), User:FuzzyBot (t•c) :D Yug (talk) 23:00, 8 March 2021 (UTC)
Bot request for Catalan Wiktionary
- Example pages (3): ca:wikt:mariner, ca:wikt:activity, ca:wikt:fèr - You can see the best audio integration there.
- Target section: There is no specific section. The audio file should be added under the language heading == {{-xx-}} == and before the first POS section. It should be added on a new line after the pronunciation templates, if any: either {{pron|xx|...}}, {{pronafi|xx|...}} or {{xx-pron}}. In these templates, xx is the ISO 639-1 or ISO 639-3 language code. (See the placement sketch below.)
- Local audio template(s) example(s): {{àudio|en-us-activity.ogg|lang=en|accent=EUA}}
- Local audio template(s) explained:
  - {àudio} means "audio", and takes the following parameters...
  - en-us-activity.ogg is the filename
  - lang=en is the ISO 639-1 code of the language.
  - accent=EUA means USA, which is the accent or local variant. This parameter is optional and it may be codified for Catalan as explained at ca:wikt:Template:àudio.
- Request by: Vriullop (talk) 14:41, 9 March 2021 (UTC)
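To complement the request above, a minimal sketch of the placement rule for Catalan pages (under the == {{-xx-}} == heading, on a new line after any {{pron}}, {{pronafi}} or {{xx-pron}} templates, and before the first POS section). The regexes and names are illustrative and would need review against real ca.wiktionary pages; this is not LiLiBot's actual code.

// Sketch: insert {{àudio}} into ca.wiktionary wikitext under the language
// heading, on a new line after any pronunciation templates and before the
// first POS section. Illustrative only.
function addCaAudio(wikitext, langCode, filename, accent) {
  const audioLine = '{{àudio|' + filename + '|lang=' + langCode + (accent ? '|accent=' + accent : '') + '}}';
  const lines = wikitext.split('\n');

  // Find the language heading, e.g. "== {{-en-}} ==".
  const langRe = new RegExp('^==\\s*\\{\\{-' + langCode + '-\\}\\}\\s*==');
  const langIdx = lines.findIndex(l => langRe.test(l));
  if (langIdx === -1) return wikitext; // no matching language heading: do nothing

  // Walk down to the first sub-section (the first POS heading), remembering
  // the position just after the last pronunciation template seen on the way.
  const pronRe = /^\{\{(pron|pronafi|[a-z-]+-pron)[|}]/;
  let insertAt = langIdx + 1;
  for (let i = langIdx + 1; i < lines.length && !/^==/.test(lines[i]); i++) {
    if (pronRe.test(lines[i])) insertAt = i + 1;
  }
  lines.splice(insertAt, 0, audioLine);
  return lines.join('\n');
}

// Example: addCaAudio(pageText, 'en', 'en-us-activity.ogg', 'EUA');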