Wikidata:Bot requests



Bot requests
If you have a bot request, create a new section here and state exactly what you want. You should discuss your request first and wait for the decision of the community. Please refer to previous discussions. If you want to request sitelink moves, see the list of delinkers. For bot flag requests, see Wikidata:Requests for permissions.
On this page, old discussions are archived after 2 days if they are marked with {{Section resolved}}. An overview of all archives can be found at this page's archive index. The current archive is located at 2015/05.

Intel processors

Hello. Can some bot please read Intel's site http://ark.intel.com and gather information from that database for Wikidata? It needs to catch the names of the processors, create an item corresponding to each one, and complete the item with the properties sockets supported (P1041), instruction set (P1068), manufacturer (P176) and number of processor cores (P1141).--MisterSanderson (talk) 15:58, 17 May 2014 (UTC)

Why don't you write them to ask that they release that data via a dump/machine readable API under a free license (or rather CC-0)? Even better, they could add it themselves here on Wikidata, to save us some work. --Nemo 17:16, 17 May 2014 (UTC)
I could not find an appropriate e-mail address at http://www.intel.com/content/www/us/en/company-overview/contact-us.html, so there is no way to contact them.--MisterSanderson (talk) 18:45, 17 May 2014 (UTC)
Try any of the first four in "Intel PR Departments" [1] (calling yourself an analyst) and [2]; you'll be fine. --Nemo 15:51, 23 May 2014 (UTC)
Ok, I sent them a message.--MisterSanderson (talk) 11:37, 25 May 2014 (UTC)
The contact was closed without response.--MisterSanderson (talk) 16:50, 29 May 2014 (UTC)
So the creation of the items needs to be made by Wikidata robots...--MisterSanderson (talk) 15:32, 4 July 2014 (UTC)

Today I found the link "Export Full Specifications", which generates an XML file with the data. I think this will make it easy to gather the information with bots.--MisterSanderson (talk) 15:06, 2 October 2014 (UTC)
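For illustration, a minimal sketch of how a bot could consume such an export, assuming a previously downloaded file containing a flat list of record elements; the field names (ProcessorNumber, CoreCount, SocketsSupported, InstructionSet) are guesses that would need checking against a real download:

import xml.etree.ElementTree as ET

# Parse a downloaded "Export Full Specifications" file from ark.intel.com.
tree = ET.parse('ark_export.xml')
for record in tree.getroot():
    # Field names are hypothetical; inspect the real export to confirm them.
    name = record.findtext('ProcessorNumber')
    cores = record.findtext('CoreCount')           # -> number of processor cores (P1141)
    sockets = record.findtext('SocketsSupported')  # -> sockets supported (P1041)
    iset = record.findtext('InstructionSet')       # -> instruction set (P1068)
    if name:
        print(name, cores, sockets, iset)  # rows to feed into item creation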

Here, I even extracted the data manually myself and created a table: http://hypervolution.wordpress.com/2014/10/01/soquete-lga771/. I think that now there is no excuse not to include this information on Wikidata.--MisterSanderson (talk) 18:52, 3 October 2014 (UTC)

The table looks good. However, we can't yet add values with a dimension (e.g. Hz, MB, nm), so the only information we can extract now is the number of cores (number of processor cores (P1141)). Are there already items on Wikidata about Intel processors, or should a new item be created for every row in the table? --Pasleim (talk) 19:15, 3 October 2014 (UTC)
Not only number of processor cores (P1141); there are other properties too: sockets supported (P1041), instruction set (P1068) and manufacturer (P176). I think that maybe there is a "release date" property too, but I could not find it. And there is subclass of (P279): all Celeron models are a subclass of the Celeron family. Some processors already have an item, but on Wikipedia it is more common to create articles about a family of processors than about individual models, so I think that each row must become an item.--MisterSanderson (talk) 22:39, 3 October 2014 (UTC)

New table: https://hypervolution.wordpress.com/2014/11/01/soquete-lga775/ .--MisterSanderson (talk) 23:20, 6 December 2014 (UTC)

New table: https://hypervolution.wordpress.com/2015/01/01/soquete-fclga1366/ .--MisterSanderson (talk) 17:02, 1 February 2015 (UTC)

New table: https://hypervolution.wordpress.com/2015/02/01/soquete-fclga-1567/ .--MisterSanderson (talk) 22:41, 1 February 2015 (UTC)

You can add the data yourself using http://tools.wmflabs.org/wikidata-todo/quick_statements.php . However, it is still not possible to add columns 3-6 to Wikidata, as there is no support for quantities with units. Adding sockets supported (P1041) and instruction set (P1068) could be interesting, but I cannot find these data on your page. --Pasleim (talk) 13:54, 3 February 2015 (UTC)
This tool only adds statements to already existing items, but there are not items for all these processors. That's why I need a robot to create them: I don't want to create them manually; I already did my part of the job by creating the tables from the ARK. sockets supported (P1041) is the title of the posts, and instruction set (P1068) is not available for all the processors. There is too little information for processors released before 2006. --MisterSanderson (talk) 18:11, 2 March 2015 (UTC)

New table: https://hypervolution.wordpress.com/2015/03/01/soquete-lga-1156/ .--MisterSanderson (talk) 18:11, 2 March 2015 (UTC)

New table: https://hypervolution.wordpress.com/2015/04/01/soquete-lga-1155/ .--MisterSanderson (talk) 21:14, 2 April 2015 (UTC)

New table: https://hypervolution.wordpress.com/2015/05/01/soquete-pga-478/.--MisterSanderson (talk) 00:35, 11 May 2015 (UTC)

Importing Commons Categories (P373)

The Dutch Wikipedia has nl:Categorie:Wikipedia:Commonscat zonder link op Wikidata (Commonscat without equivalent on Wikidata). From there, the P373 statements could be filled. -- Pütz M. (talk) 23:13, 13 June 2014 (UTC)

en:Category:Commons category without a link on Wikidata contains even more. -- Pütz M. (talk) 23:28, 13 June 2014 (UTC)
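A rough sketch of such a harvester with pywikibot and mwparserfromhell, using the English template and category names from this request (the skip rule for valueless templates anticipates the defaulting problem mentioned further down):

import pywikibot
from pywikibot import pagegenerators
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')
repo = site.data_repository()
cat = pywikibot.Category(site, 'Category:Commons category without a link on Wikidata')

for page in pagegenerators.CategorizedPageGenerator(cat, namespaces=[0]):
    code = mwparserfromhell.parse(page.text)
    for tpl in code.filter_templates():
        if not tpl.name.matches('Commons category'):
            continue
        if not tpl.has(1):
            break  # template defaults to the page name; leave for human review
        value = str(tpl.get(1).value).strip()
        item = pywikibot.ItemPage.fromPage(page)
        item.get()
        if 'P373' not in item.claims:
            claim = pywikibot.Claim(repo, 'P373')
            claim.setTarget(value)  # P373 takes a plain string
            item.addClaim(claim, summary='Import Commons category from enwiki')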
 Writing something up for this, I'll work on it as much as I can. George Edward CTalkContributions 19:25, 15 January 2015 (UTC)
@George.Edward.C: any progress in this or do you know how to do that? We would like to import commonscat links from fi-wiki to Wikidata. --Stryn (talk) 17:59, 21 January 2015 (UTC)
Doing the parser right now. Will work on it more at the weekend. If I can figure out the parser, it should be easy to complete the rest of it. When I've written it, and it works, I will run both NL and FI after that. George Edward CTalkContributions 18:05, 21 January 2015 (UTC)
Please don't run a bot on nl. There are a lot of conflicts there, and also a lot of pages with duplicate templates. Sjoerd de Bruin (talk) 18:12, 21 January 2015 (UTC)
Noted. :-) The bot will only run on EnWiki and FiWiki (as long as there are no problems with either of them). (Edit: I will probably need a category similar to those mentioned in the request.) George Edward CTalkContributions 18:23, 21 January 2015 (UTC)
Been a while, but I've finally finished the code and tested it with 4 edits (2 didn't work as planned, so I'm going to work on that; it happens when Commonscat defaults to the pagename because no value is specified). Expect a request at RFP soon. --George (Talk · Contribs · CentralAuth · Log) 08:47, 20 February 2015 (UTC)

Add instance of (P31) from MusicBrainz Artist Type via MusicBrainz artist ID (P434)

We can leverage the existing links between Wikidata and MusicBrainz to populate instance of (P31) for the people and groups linked with MusicBrainz artist ID (P434). Specifically, the bot would add human (Q5) if the MusicBrainz Artist Type was "Person", and band (Q215380) if the Type was "Group". The reference would be imported from (P143) MusicBrainz (Q14005), as with the MusicBrainz identifiers. If User:Mineo is interested, this could be an additional task for User:MineoBot -- or someone else could take it on. JesseW (talk) 06:21, 8 July 2014 (UTC)
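A sketch of one way to wire this up, combining the musicbrainzngs client with pywikibot; the Person/Group type strings are MusicBrainz's own, everything else is illustrative:

import musicbrainzngs
import pywikibot

musicbrainzngs.set_useragent('wd-p31-import', '0.1')
TYPE_TO_QID = {'Person': 'Q5', 'Group': 'Q215380'}

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
item = pywikibot.ItemPage(repo, 'Q42')  # stand-in: any item carrying P434
item.get()
mbid = item.claims['P434'][0].getTarget()
artist_type = musicbrainzngs.get_artist_by_id(mbid)['artist'].get('type')

if artist_type in TYPE_TO_QID and 'P31' not in item.claims:
    claim = pywikibot.Claim(repo, 'P31')
    claim.setTarget(pywikibot.ItemPage(repo, TYPE_TO_QID[artist_type]))
    item.addClaim(claim)
    ref = pywikibot.Claim(repo, 'P143', is_reference=True)  # imported from
    ref.setTarget(pywikibot.ItemPage(repo, 'Q14005'))       # MusicBrainz
    claim.addSource(ref)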

What about duos, orchestras, and non-performing personnel (composers, producers, etc.)? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:18, 14 September 2014 (UTC)
That shouldn't be a problem for the first case (adding human (Q5)). It's a good point about automatically adding "Group" entries, though. MusicBrainz now categorizes some items in more detail (with orchestras, choirs, etc.) so those might be useful. JesseW (talk) 06:40, 5 December 2014 (UTC)
We should not describe, say, Simon & Garfunkel (Q484918) as an instance of (a) human (Q5). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:04, 5 December 2014 (UTC)
I agree -- and they wouldn't fall under the first case, as they are identified as a Group on Musicbrainz (as can be seen here), not a Person. Do you see any problem with the first case (adding human (Q5) to items linked to "Person"-type MusicBrainz entries)? JesseW (talk) 06:53, 13 December 2014 (UTC)
I think "Group" in MusicBrainz is actually closer to instance of (P31) musical ensemble (Q2088357). According to http://tools.wmflabs.org/wikidata-todo/tree.html?lang=en&q=Q2088357&rp=279, orchestra (Q42998) is not currently a subtype of band (Q215380), whereas in MusicBrainz "Orchestra" is a subtype of "Group". -- Nikki (talk) 03:17, 5 January 2015 (UTC)
I agree, musical ensemble (Q2088357) is a better choice. Thanks for finding it. Now we just need to find someone interested in coding and running it... JesseW (talk) 06:01, 15 January 2015 (UTC)
I can take this from 24 January. --JulesWinnfield-hu (talk) 11:55, 15 January 2015 (UTC)

I've made ten test edits. May I proceed? What about types other than “Person” and “Group”? --JulesWinnfield-hu (talk) 15:34, 24 January 2015 (UTC)

Should Anonymous (Q10920) be an instance of musical ensemble? Does anyone know how many groups that aren't really about music exist on MusicBrainz? If there are only a few, then I guess it's better to mark everything up and fix the few that are strange; however, if there are many, I think we need to make sure that each group actually is about music. --Pajn (talk) 09:28, 25 January 2015 (UTC)
Thank you SO MUCH for coding this up and making some test edits! Considering the Anonymous issue, it seems like running it fully automatically would produce too many false positives. I see two possible paths forward:
  1. Generate list pages for manual operation (with humans checking to make sure the entity is actually music-related)
  2. Cross-check against Wikipedia categories and only do ones that already have a music-related category on their associated Wikipedia entry.
In either case, it would be great to get a sense of how many items we are talking about -- could you generate such counts? Thanks again (and sorry for the delayed response) JesseW (talk) 08:29, 23 February 2015 (UTC)
Also, of the Musicbrainz artist types, I think "Group" is probably the most ambiguous. You might have better luck working from "Person" (adding human (Q5), not anything more music-specific), or one of the new categories like Orchestra or Choir. JesseW (talk) 08:33, 23 February 2015 (UTC)
Type        Count   Total MB Artists   MB Artists with Wikidata links
Group       4,225   226,496            32,022
Person        578   455,522            74,148
Orchestra      54     1,343               458
Choir          50       933               129
Character      49     2,447               209
Other          45     1,320               125
N/A           146   241,961             1,441

--JulesWinnfield-hu (talk) 22:42, 23 February 2015 (UTC)

I'm surprised at how few Person items there are (considering that there are nearly 500,000 MusicBrainz entities of that type). I'm also slightly confused by what this is a count of. Is it:
  1. A count of Wikidata items with links to MusicBrainz Artist entries of the specified type
  2. A count of Wikidata items with links to MusicBrainz Artist entries of the specified type AND without instance of (P31).
  3. A count of MusicBrainz Artist entries of the specified type with a link to a Wikidata item
  4. Something else?
These may not be the same because while User:MineoBot does try to keep them synced, I'm not sure if it is fully bi-directional. Looking at the numbers, I think doing the Orchestra and Choir ones would be useful (and small enough to manually fix after the fact (we'll need a way to mark false positives so they don't get re-done)). Regarding Person -- could you make a count of the Wikidata items that are linked to MusicBrainz Artist Person entities and lack instance of (P31) = human (Q5)? Also, maybe we should move this to your talk page, or some smaller page. Thanks for your work. JesseW (talk) 18:17, 28 February 2015 (UTC)
The counts are of option 2. --JulesWinnfield-hu (talk) 19:32, 28 February 2015 (UTC)
Neat -- could you find the counts for option 1 as well? JesseW (talk) 19:36, 28 February 2015 (UTC)
It would run for too much time, because of MusicBrainz rate limit. This was two and a half hours. --JulesWinnfield-hu (talk) 19:45, 28 February 2015 (UTC)
You might be able to use http://reports.mbsandbox.org/ . (Actually, I will look into doing so.) JesseW (talk) 20:24, 28 February 2015 (UTC)
OK, here's a report that provides option 3: http://reports.mbsandbox.org/report/295/view . I've added the results to the table.

Qualifiers for Pokédex number

For the following lists of items, can a bot duplicate the Pokédex number property for the first list and add it to one property:

Here is an example of what I am attempting to do.

Thanks! Ju gatsu mikka (talk) 09:45, 20 September 2014 (UTC)

@Ju gatsu mikka: Many of these items have multiple Pokédex number (P1112) values. Should the qualifier of (P642)=Kanto Pokédex (Q18086661) be added to all of them? --Pasleim (talk) 12:57, 28 February 2015 (UTC)

Lists of listed buildings

As can be seen from https://tools.wmflabs.org/wikidata-todo/autolist.html?q=claim[1216]%20and%20noclaim[625], a number of "list of listed buildings in [UK place]" articles (for example, Listed buildings in Brindley (Q15979098)) have multiple National Heritage List for England number (P1216) values. These should be removed, and replaced by the equivalent Q value, as has part (P527). Where they have multiple coordinate values these should also be removed. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:14, 20 September 2014 (UTC)

Can anyone help with this? Perhaps User:Magnus Manske has an idea? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 23:13, 26 October 2014 (UTC)
Note that many of these National Heritage List for England number (P1216) values are for Grade II buildings, many of which do not have a Q item yet. The WLM drive only called for Grade I and Grade II* buildings to get items. If there is a will to do this, I recommend quick_statements. To create a new item for each value, linking back to the list (no leading spaces; separate fields by tabs):
CREATE
LAST P361 Qlist
LAST P1216 "value"

plus some "grade II" instance of, if the grade is known. (Note that I am using part of (P361) here; has part (P527) could then be created as "counter-link"?) One should probably clear out double values for National Heritage List for England number (P1216) first. --Magnus Manske (talk) 23:42, 26 October 2014 (UTC)

@pigsonthewing:@Magnus Manske: I'll take a look at the double values; most of the ones I've spot-checked so far are the list & the building, but not all. - PKM (talk) 18:02, 23 November 2014 (UTC)
Okay. I have cleared out the list of doubles for P1216. There were two cases where two items have the same English Heritage ID because there are separate ENwiki articles for different tenants of the same building. Otherwise the National Heritage List for England number (P1216) should only be duplicated between a building's item and the list-of-listed-buildings item.
I have not added new, separate items for every boathouse, pumphouse, wall, and gate that are part of an estate, so a few items have multiple EH ID's. Does that need to be done? - PKM (talk) 04:20, 24 November 2014 (UTC)

@Pigsonthewing: Should this job still be done? On one hand there are still 74 lists of listed buildings with National Heritage List for England number (P1216) claims but on the other hand, no single item out of 436 lists of listed buildings has a has part (P527) claim. So it doesn't look to me that there is a consensus to use has part (P527) on those lists.--Pasleim (talk) 12:54, 28 February 2015 (UTC)

Minerals

Is it possible to get the following data:

Adding UNII identifiers

I've noticed that UNII (P652) doesn't have much data, while a canonical list from the Food and Drug Administration (thus PD and freely reusable, cf. http://www.nlm.nih.gov/copyright.html) does exist.

The data is tabulated as follows and has several existing identifiers for cross checking:

Legend (header fields of the download, with one example row; the NCIt, ITIS, NCBI, PLANTS and INN_ID fields are empty in this row):

UNII:       007C07V77Z
PT:         BIBENZYL
RN:         103-29-7
MF:         C14H14
INCHIKEY:   QWUWMCYKGHVNAV-UHFFFAOYSA-N
EINECS:     203-096-4
SMILES:     c1ccc(cc1)CCc2ccccc2
UNII_TYPE:  Ingredient Substance

It is available at http://fdasis.nlm.nih.gov/srs/jsp/srs/uniiListDownload.jsp
What I think would be nice: add the UNII identifiers, add the data for the other properties where it is not yet present, and create any items that are missing.
But I am not a chemist, and while the data seems trustworthy, there might be issues.
I'm interested in this since, aside from Wikidata, I contribute to Open Food Facts, a cross-language open database (ODbL and CC) of food products, and some of the items on this list end up as ingredients, additives and sometimes Wikipedia articles in various languages :-)

Teolemon (talk) 13:31, 21 November 2014 (UTC)

We're talking 60k+ items --Teolemon (talk) 13:36, 21 November 2014 (UTC)
If the licence is compatible, I think it would be a great idea to import the UNII identifiers. If possible we should also link to Open Food Facts via an identifier. Some food items are notable enough and have a Wikipedia page or a Wikidata item. --Tobias1984 (talk) 16:43, 21 November 2014 (UTC)
Maybe Magnus can add it to Mix'n'Match so id's can be matched to items here. Multichill (talk) 19:28, 21 November 2014 (UTC)
Added (bottom of page). Currently trying to auto-match items. Also working on importing the other data in the set (e.g. SMILES, CAS) to match/import later. --Magnus Manske (talk) 20:18, 22 November 2014 (UTC)
Update: Have ~10K identified through name and/or CAS number. Will update Wikidata with the UNII IDs tomorrow. --Magnus Manske (talk) 23:55, 22 November 2014 (UTC)
@Magnus Manske: Hi, can you explain which parameter from which data source you used to match UNII IDs with Wikidata items? Thanks. Snipre (talk) 15:24, 24 November 2014 (UTC)
I used RN, which is CAS (for above example). --Magnus Manske (talk) 15:29, 24 November 2014 (UTC)
Hmm. It would be better to first check the CAS numbers of the items by cross-checking between EN, DE and FR WP. Snipre (talk) 13:56, 27 November 2014 (UTC)
Open Food Facts has individual pages for additives (http://world.openfoodfacts.org/additives), food categories (http://world.openfoodfacts.org/categories) and brands (http://world.openfoodfacts.org/brands) in addition to products (which aren't the best option available since there can be dozens of variation and different versions of coke, with identical or different barcodes). If you see fit, I can open property proposals for some of those page slugs Teolemon (talk) 20:21, 28 November 2014 (UTC)

Astronomical objects in the Index Catalog

There are hundreds of astronomical objects in the Index Catalog which have no label or description in English. I did a hundred or so of these by hand to see what's involved, and it seems to me a bot could generate labels and descriptions fairly easily with this logic:

This would give a description like "galaxy in the constellation Virgo" which is essentially what I was doing by hand (except I was looking them up so I could say spiral galaxy, elliptical galaxy, etc., but I don't think that's necessary for basic disambiguation). Thoughts? - PKM (talk) 03:16, 23 November 2014 (UTC)

PS most of these items don't have articles in EN wiki, but do have articles and infoboxes in BS, UK, SR, SH, KK, RU, etc. - PKM (talk) 20:21, 24 November 2014 (UTC)
I have made a script to do this (code here), but I have no idea how to obtain a list of items to edit. Any suggestions? Popcorndude (talk) 22:51, 28 February 2015 (UTC)
All items with catalog code (P528) you can get with WDQ: http://wdq.wmflabs.org/api?q=claim%5B528%5D --Pasleim (talk) 23:03, 28 February 2015 (UTC)
Thanks! It seems the problems I was having getting a list were due to including a 'P' before the property numbers. Popcorndude (talk) 02:58, 1 March 2015 (UTC)
Thanks for picking this up! I'm not really a programmer, but I looked at your script and didn't see where it inserts the phrase "in the constellation" between the P31 and P59 labels in the description. - PKM (talk) 07:45, 1 March 2015 (UTC)
Line 15, right in the middle:
item.editDescriptions({'en':item.claims['P31'][0].get()['labels']['en'] + " in the constellation " + item.claims['P59'][0].get()['labels']['en']})
Popcorndude (talk) 14:16, 1 March 2015 (UTC)
Doh! Thanks! - PKM (talk) 04:11, 5 March 2015 (UTC)

Adding SOC job codes

http://www.bls.gov/soc/#materials

I've noticed that SOC Occupation Code (2010) (P919) didn't have much data (https://tools.wmflabs.org/wikidata-todo/translate_items_with_property.php?prop=919), while a canonical list from the Bureau of Labor Statistics (thus PD and freely reusable, cf http://www.bls.gov/bls/linksite.htm) does exist (we're talking 7K items).

The data is tabulated as follows:

SOC Codes and Job Titles

2010 SOC Code   2010 SOC Title                    2010 SOC Direct Match Title
11-1011         Chief Executives                  CEO
11-1011         Chief Executives                  Chief Executive Officer
11-1011         Chief Executives                  Chief Operating Officer
11-1011         Chief Executives                  Commissioner of Internal Revenue
11-1021         General and Operations Managers   Department Store General Manager

It is available at http://www.bls.gov/soc/soc_2010_direct_match_title_file.xls (with more related files at http://www.bls.gov/soc/#materials).

What I think would be nice is adding the SOC codes based on the 2010 SOC Direct Match Title, since so many outside services and pages use SOC codes (which seem to be a requirement for a lot of job offers in the US).

I've already manually added the French equivalent of the SOC code to Wikidata, so being able to match national codes and job titles through Wikidata would be cool. Teolemon (talk) 18:59, 29 November 2014 (UTC)
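A sketch of the matching step, reading the direct-match-title file with pandas; the column names follow the table above, and find_item_by_label is a hypothetical stub for however you look occupation labels up on Wikidata:

import pandas as pd

# The .xls has banner rows above the header; adjust skiprows after inspecting it.
df = pd.read_excel('soc_2010_direct_match_title_file.xls')

def find_item_by_label(label):
    """Hypothetical: return the QID of an occupation item matching this label/alias."""
    return None  # stub

matches = {}
for _, row in df.iterrows():
    qid = find_item_by_label(row['2010 SOC Direct Match Title'])
    if qid:
        matches[qid] = str(row['2010 SOC Code'])  # -> SOC Occupation Code (2010) (P919)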

SOC is based on a UN standard, ISCO. Each country's bureau of national statistics has its own translation/adaptation of ISCO. There is a standard for historical occupations too, HISCO. In Denmark the national adaptation of this code system is called DISCO, for Norway STYRK, and so on. Adding all these codes will be messy, since they may be non-overlapping. But it may also become a source for statisticians to get translations from one code to another. H@r@ld (talk) 23:12, 19 January 2015 (UTC)

NLI identifier (P949)

Can a bot please check the identifier NLI (Israel) identifier (P949)? Some of the IDs are wrong, see Property talk:P949#Unstable numbers? I don't know if it is only a small or a large percentage. --Kolja21 (talk) 03:24, 27 December 2014 (UTC)

Add date of birth / death to French people

I noticed that a lot of items about people (instance of (P31) -> human (Q5)) don't have date of birth (P569) and date of death (P570), but do have an article on the French Wikipedia (a list of about 78,000 items). I already imported a lot of dates from the Dutch Wikipedia, but that was much more difficult. It looks like the French Wikipedia uses the templates Date de naissance (date of birth) and Date de décès (date of death). Could someone please harvest these templates? Probably best suited for a French-speaking bot operator. Multichill (talk) 21:31, 28 December 2014 (UTC)

Mayor election results in Hungary

Good day!

Last year, mayoral elections were held in Hungary. The information about the mayors of Hungarian settlements on the French Wikipedia needs to be updated. The data are available on valasztas.hu: [3] --นายกเทศมนตรี (talk) 12:32, 6 January 2015 (UTC)

I will have a look at that during the weekend. Orlodrim (talk) 11:52, 16 January 2015 (UTC)
I updated the data templates on the French Wikipedia.
I don't know if it's possible to put the mayors on Wikidata without creating an item for each mayor, but if someone is interested, I put the list in CSV format on this page.
Orlodrim (talk) 21:56, 16 January 2015 (UTC)

Dates

I have set more than 8000 dates via "Wikidata - The Game" and because of that I can safely say that a bot could also do this, with an accuracy of 95-99%. The bot should only look at articles whose first line contains two parentheses with date(s) inside them. If the person's date of death is missing, the bot should enter the date after the "-", the "†" or the word "died" as the property value. If the person's date of birth is missing, the bot should enter the date before the "-", after the "*" or after the word "born" as the property value. This could also be done for other languages just by translating the words "born" and "died". --Impériale (talk) 00:43, 11 January 2015 (UTC)
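For what it's worth, the simplest version of that heuristic is a regular expression over an article's first line; this sketch only covers the plain "(birth - death)" layout in one date format, which is exactly where the irregularities discussed below start to bite:

import re

DATE = r'\d{1,2} \w+ \d{3,4}'  # e.g. "4 May 1722"; other formats need more patterns
PAIR = re.compile(r'\((' + DATE + r')\s*[-–]\s*(' + DATE + r')\)')

def extract_dates(first_line):
    m = PAIR.search(first_line)
    return m.groups() if m else None  # (birth, death) or None

print(extract_dates('John Doe (1 January 1900 - 2 February 1980) was a painter.'))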

I have set more than 100,000 dates via "Wikidata - The Game", and I highly disagree that this is something that can be done by bot -- there are just too many irregularities. In the case where dates are added with templates it can be done very reliably, but not in cases where the dates are stored in plain text. Jon Harald Søby (talk) 19:26, 15 January 2015 (UTC)
Haha, good work! What irregularities do you mean? I'll try to explain the bot better (I have a bit of experience with programming):

if the first line contains numbers and parentheses:
    if a "*" or "born" is between the parentheses:
        if there is exactly one date between the "*"/"born" and a "†"/"died":
            if the person's date of birth is missing in Wikidata:
                transcribe it to Wikidata
            else:
                go to the next article
        else:
            go to the next article
    elif a "†" or "died" is between the parentheses:
        if there is exactly one date between the "†"/"died" and the ")":
            if the person's date of death is missing in Wikidata:
                transcribe it to Wikidata
        else:
            go to the next article
    elif a "-" is between the parentheses:
        if there is exactly one date between the "(" and the "-":
            if the person's date of birth is missing:
                transcribe it to Wikidata
        else:
            go to the next article
        if there is exactly one date between the "-" and the ")":
            if the person's date of death is missing:
                transcribe it to Wikidata
        else:
            go to the next article
    else:
        go to the next article
else:
    go to the next article
I know that there are probably some mistakes, but in my imagination this bot could not set all dates (there would still be a lot of work for us); I do think it should work nearly without mistakes and help us a lot. --Impériale (talk) 17:04, 16 January 2015 (UTC)

I have a suggestion: go over all 8000 additions and make sure you know whether each and every date was in the Gregorian or Julian calendar. This has been discussed before in this forum.
For example, this edit claims that David Leslie Melville, 6th Earl of Leven, was born 4 May 1722, Gregorian calendar. But this source says that he was born 4 May 1722 in Leven, Fife, UK. At that date and place the Julian calendar was in force, so the date 4 May 1722 appears to be a Julian calendar date, not a Gregorian calendar date. Jc3s5h (talk) 02:14, 17 January 2015 (UTC)
Do you happen to know approximately how many articles (in percent) are affected by this? --Impériale (talk) 00:19, 19 January 2015 (UTC)
If you mean how many in all of wikidata, I would say all dates before 1583 are suspect. Also, dates of people from the British Isles, Canada, and American colonies before 1752. Russia before 1918. Greece before 1923. I don't know what the coverage is for persons from various areas and time periods, so I couldn't guess what the percentage is. Jc3s5h (talk) 03:15, 20 January 2015 (UTC)
So after these dates the bot probably wouldn't make any mistakes, right? --Impériale (talk) 16:50, 27 January 2015 (UTC)
Since the bot will usually not know in what region a dated event occurred, it would have to assume the latest date. The latest adoption date for the Gregorian calendar (where the previous calendar was the Julian calendar) is documented at https://secure.ssa.gov/apps10/poms.nsf/lnx/0200307523
That date is March 1, 1923. Jc3s5h (talk) 18:05, 27 January 2015 (UTC)
I think even that would affect a few thousand entries. Is there anyone who could realise this? --Impériale (talk) 17:04, 28 January 2015 (UTC)

If you mean, can anyone make the corrections with a bot, I don't know how myself, but I guess the procedure might be something like this:

  1. Obtain a list of all the edits made by the bot that didn't know about Julian/Gregorian.
  2. See if the date is greater than or equal to 1 March 1923; if so, do nothing.
  3. See if the current date matches the date inserted by the bot; if the dates do not match, do nothing.
  4. See if any references have been added since the bot edit; if so, put the article on a list for manual inspection.
  5. If no references have been added since the bot edit, delete the date.

Jc3s5h (talk) 17:21, 28 January 2015 (UTC)
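That procedure compresses into a small decision function; a sketch, with the inputs (whether the value is unchanged, whether references were added) assumed to come from edit-history queries that are not shown here:

from datetime import date

CUTOFF = date(1923, 3, 1)  # latest Julian-to-Gregorian switch date, per the link above

def classify(inserted: date, still_current: bool, refs_added: bool) -> str:
    """Classify one bot-inserted date following steps 2-5 above."""
    if inserted >= CUTOFF:
        return 'keep'           # step 2: unambiguously Gregorian
    if not still_current:
        return 'keep'           # step 3: a later edit already changed the value
    if refs_added:
        return 'manual review'  # step 4: a human probably verified the date
    return 'delete'             # step 5: remove the suspect date

# Example: an untouched, unreferenced date from 1722 gets deleted
print(classify(date(1722, 5, 4), still_current=True, refs_added=False))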

OK, your list looks great, but I think the last point is not necessary, because references are not required for manual edits either. --Impériale (talk) 21:25, 28 January 2015 (UTC)
The point about references is that if the references are the same as when the bot imported the date, there is no new information to show if the date is right or wrong. But if a reference was added, that means a human editor probably looked at the date, decided it was right, and added a reference. A bot should not override a decision by a human editor.
At some point in the process, there must be a step where the suspicious date is deleted. Otherwise the process never does anything and there is no point in the procedure. Jc3s5h (talk) 21:42, 28 January 2015 (UTC)

"In the case where dates are added with templates it can be done very reliably" - In en.Wikipedia infoboxes, many (sadly not all) dates use en:Template:Birth date, en:Template:Birth date and age, en:Template:Death date, en:Template:Death date and age, en:Template:Start date, en:Template:Film date, or suchlike. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 22:04, 13 March 2015 (UTC)

It is fortunate that some infoboxes do not use the templates listed by User:Pigsonthewing because all those emit metadata which is specified to use ISO 8601 and the Gregorian calendar. But since it is contrary to normal writing customs in the English language to use the Julian calendar before 15 October 1582, or for any event that occurred at a time and place where the Julian calendar was in force, any such date placed in one of the listed templates would be false information, either to the human reader, or to the metadata user. Jc3s5h (talk) 23:51, 13 March 2015 (UTC)

Move all template ICD-9 and ICD-10 references to Wikidata

I would be grateful if a knowledgeable editor could create a bot to go through every template in en:Category:Medicine templates and its subcategories and move the ICD-9 and ICD-10 references contained in the title field to Wikidata. Relevant details:

Thanks! --LT910001 (talk) 21:59, 6 February 2015 (UTC)

LT910001, resolving the constraint violations of P493 and P494 first could help. --Succu (talk) 22:11, 6 February 2015 (UTC)
Thanks for your reply; I apologise for not having responded more promptly. Whether or not there are existing constraint violations (by which I assume you mean duplicates and poor formatting?) won't affect the data that's already present in the infoboxes. It may even be more helpful to have that data on board here so that these violations can be addressed in a more comprehensive manner. Moving this from the English WP to WD would certainly help the English WP. I will respond more promptly in the future! --LT910001 (talk) 21:30, 14 February 2015 (UTC)

Template:Wanted footballers

I want to update these lists. I created Template:Wanted footballers for that reason; this template will help to update the lists. I put it on the discussion pages of some footballers, like Talk:Q823064. However, there are maybe hundreds of thousands of footballers, which is too many to tag by hand, so I can't put it on all of them. I want a bot to put this template on all footballers. Can you do this task? --Japan Football (talk) 13:23, 2 March 2015 (UTC)

To create the Wanted footballers list, you don't need that template. With http://tools.wmflabs.org/wikidata-terminator/?list&lang=en&mode=tx&q=claim%5B106:937857%5D you can directly create these lists. Just replace lang=en in the URL with another language code to get the other lists. Also, categories like Category:Footballers-Japan are unnecessary. Use http://tools.wmflabs.org/autolist/autolist1.html?lang=ja&q=claim%5B106%3A937857%5D%20and%20claim%5B27%3A17%5D to get a list of all football players from Japan. --Pasleim (talk) 22:15, 2 March 2015 (UTC)
I don't know how to use those sites. Please tell me how to make a Wanted footballers list from them. --Japan Football (talk) 08:10, 3 March 2015 (UTC)

London transport adjacent stations from English Wikipedia navbox tables

Hello,

Most Tube, DLR and London Overground stations have nice tables with adjacent stations qualified by a line (Piccadilly Line, Victoria Line, DLR, etc.). It would be nice to somehow scrape that data and include it in the Wikidata items for each station. Holek (talk) 12:15, 3 March 2015 (UTC)
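Those adjacency rows are typically built from en:Template:S-line transclusions, so a scraper can read template parameters instead of the rendered table. A sketch with pywikibot and mwparserfromhell; the parameter names (line, previous, next) follow S-line's documentation and should be verified against live pages:

import pywikibot
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')
page = pywikibot.Page(site, 'Covent Garden tube station')  # example station
code = mwparserfromhell.parse(page.text)

def param(tpl, name):
    return str(tpl.get(name).value).strip() if tpl.has(name) else None

for tpl in code.filter_templates():
    if tpl.name.matches('S-line'):
        print(param(tpl, 'line'), '| previous:', param(tpl, 'previous'),
              '| next:', param(tpl, 'next'))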

How will these be qualified (north, south, inbound/ outbound, name of line, etc)? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:32, 17 March 2015 (UTC)
I'd say by name of a line and direction, much like some German and Polish stations. (Sorry for late response.) Holek (talk) 23:46, 5 April 2015 (UTC)

Occupations on lists

Hi, I need a bot for moving occupation (P106) claims on Wikimedia list article (Q13406463) items to qualifiers of is a list of (P360) (if that claim exists). It's part of my occupation cleanup sprint; any help will be greatly appreciated. Matěj Suchánek (talk) 20:48, 7 March 2015 (UTC)

Anyone for this task? Wikidata:Database reports/Constraint violations/P106#Type Q5, Q95074 is still one big mess. Matěj Suchánek (talk) 09:21, 25 April 2015 (UTC)
moved some claims; claim[31:13406463] and claim[106] should now give zero results. BTW: I'm not sure if the type constraint on occupation (P106) is right. An item can be about an occupation (Q13516667) or can be a subclass of people, but imo not both at the same time. --Pasleim (talk) 11:37, 25 April 2015 (UTC)
Thanks. The constraint maybe doesn't make sense, but I would say both are correct. I have introduced it rather to clean up the mess (it has already taken two months) and to make and maintain a "tree of occupations". Matěj Suchánek (talk) 12:51, 25 April 2015 (UTC)
We have been discussing that privately before (User talk:Gymel#Property talk:P106). I finally understood the logic in Matěj Suchánek's reasoning: it is based on identifying the profession with the set of all individuals with that profession, identifying sets of individuals with classes of individuals, and therefore any profession is a subclass of "human" (or rather person?). At least for the German language it is hard to argue against the validity of that reasoning, because for any profession you can express "he is an architect" likewise as "Er ist ein Architekt" (profession as attribute) and as "Er ist Architekt" (profession as membership). However I'm still opposed to this way of viewing occupation (P106) because a) any property mostly suitable for objects of a class A can be used to define a subclassing of A; that's tautological. And b) having subclasses of A, you can express the original property equivalently by means of instance of (P31). So this approach to occupation (P106) has similarities to the current discussion about sex or gender (P21) at Wikidata:Project chat#gender question (message_moved), and the "usual" questions arise: how to deal with (fictitious) occupations of fictitious or other non-human entities, whether professions don't have definitions of their own and therefore should not be seen as mere subdivisions of mankind, and whether "wikidata" really wants it this way. -- Gymel (talk) 15:22, 25 April 2015 (UTC)

Importing sitelinks from en:Template:German Type VII submarines

Could someone have a bot compare (a) the articles linked to at en:Template:German Type VII submarines and sub-templates with (b) pt:Predefinição:Tipo VIIA, pt:Predefinição:Tipo VIIB, pt:Predefinição:Tipo VIIC, pt:Predefinição:Tipo VIIC/41, pt:Predefinição:Tipo VIID, pt:Predefinição:Tipo VIIF, and, when both en.wp and pt.wp articles exist but aren't linked on Wikidata, merge the Wikidata items? I have done two ([4], [5]) by hand as a proof of concept; it would be quite tedious to manually go through the entire list. Thanks! It Is Me Here t / c 20:26, 30 March 2015 (UTC)

radio stations, replace located in the administrative territorial entity (P131) with headquarters location (P159)

There are 13314 radio stations with located in the administrative territorial entity (P131) which should all actually be headquarters location (P159). Here is a list of them. Could someone please change the property on all these items? Popcorndude (talk) 18:17, 18 April 2015 (UTC)

For instance of (P31)  radio station (Q14350) or instance of (P31)  television station (Q1616075) where country (P17)  Canada (Q16) or country (P17)  United States of America (Q30): It's likely that the value of the ambiguous located in the administrative territorial entity (P131) is neither the headquarters location (P159) (offices/studio) nor the transmitter (Q190157) location, but actually licensed to broadcast to (P1408) (the "city of license" that the station is mandated to target but tries to ignore), or occasionally a marketing claim of the main city of a media market (Q2270014), which there is no Wikidata property for. Sometimes headquarters location (P159) and licensed to broadcast to (P1408) and the media market (Q2270014) are the same place, but very often they are not. You can check licensed to broadcast to (P1408) against the official U.S. database by running a query on https://www.fcc.gov/encyclopedia/fm-query-broadcast-station-search (or the other search tabs on that page) however. Reliable sourcing for the studio or headquarters location (P159) would require more complex sources. --Closeapple (talk) 18:41, 21 April 2015 (UTC)
It would seem that 693 of them have located in the administrative territorial entity (P131)  Texas (Q1439), which leads me to suspect that many of the others are equally vague. What property should then be used? Popcorndude (talk) 22:50, 21 April 2015 (UTC)
I don't think there's any good destination for the data that is in located in the administrative territorial entity (P131) without qualifiers. I think the accurate thing to do would be:
  1. Remove located in the administrative territorial entity (P131) if there is no qualifier and no source except imported from (P143), since there's no evidence that the value is real. That's especially true near any border, which a lot of major metropolitan areas are.
  2. Import licensed to broadcast to (P1408) by matching the city and state from the Federal Communications Commission (Q128831) database or AMQ/FMQ/TVQ dumps; both city and state are in there. It's authoritative for country (P17)  United States of America (Q30). (You have to make sure the name matches the right item if the state has multiple settlements with the same name, of course.) It also has data for other countries in the Americas (Q828) if you're desperate, but it's not as accurate: for country (P17)  Canada (Q16) or country (P17)  Mexico (Q96) it will be the licensed to broadcast to (P1408) "notified" to the U.S. for treaty purposes, which may be different from the current station and its domestic license; you can get the real domestic information from those countries' websites, though. For other countries, sometimes it's even the name of a nationwide station instead of a location; it's not accurate beyond the country level.
  3. You can grab the transmitter coordinates in the same way, but processing that is tricky: if you're going to actually use them for coordinate location (P625), they have to be converted from NAD27 or NAD83 to the World Geodetic System 1984 (Q215848) used by Wikidata (see the conversion sketch at the end of this thread).
  4. Even unconverted, the transmitter datum would be within a few hundred meters, so if you're just trying to find out if the studio is in a specific administrative territorial entity (Q56061), you can probably assume that a station that has a full-power license, is not a Non-commercial educational (Q17145784), and has a transmitter 50km away from a border, has to have a studio in the same administrative territorial entity (Q56061) since U.S. law requires such stations to have a nearby "main studio". You could do the same by getting coordinates for licensed to broadcast to (P1408) and guessing that the studio wouldn't be more than 50km from the licensed to broadcast to (P1408) without a Non-commercial educational (Q17145784) license or a "main studio waiver". That's the good news. The bad news is: That requires comparing the coordinates to geographic borders. And it may be a leap to even assume that the legal studio is a headquarters location (P159): Most stations are owned by out-of-town media conglomerates now, and though most still do have form of local management, they sometimes don't have much real local decision-making power left. --Closeapple (talk) 08:33, 22 April 2015 (UTC)
I should have pointed out that the above advice is still for country (P17)  Canada (Q16) or country (P17)  United States of America (Q30). The concept of licensed to broadcast to (P1408) based on locality (Q3257686) may not exist in other places: it might be for media market (Q2270014) or administrative territorial entity (Q56061) or not have an official licensed to broadcast to (P1408). --Closeapple (talk) 02:10, 23 April 2015 (UTC)
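On the datum conversion in point 3, here is a minimal sketch with pyproj's Transformer API (EPSG:4267 is NAD27, EPSG:4269 would be NAD83, EPSG:4326 is WGS 84):

from pyproj import Transformer

# NAD27 -> WGS 84; swap in 'EPSG:4269' when the FCC record is NAD83.
nad27_to_wgs84 = Transformer.from_crs('EPSG:4267', 'EPSG:4326', always_xy=True)

lon, lat = nad27_to_wgs84.transform(-97.7431, 30.2672)  # illustrative coordinates
print(round(lat, 6), round(lon, 6))  # -> coordinate location (P625)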

Patrol page moves and deletes

Two kinds of edits that are trivial to patrol pollute the list of recent unpatrolled changes:

  • “Page moved from [Xwiki:A] to [Xwiki:B]”: Mark as patrolled if Xwiki:A is now a redirect of Xwiki:B.
  • “Page on [Xwiki] deleted: A”: Mark as patrolled if Xwiki:A is deleted.

Even if the page move or deletion is incorrect, that’s something that Xwiki has to handle, not Wikidata.

It would be great if someone could write a bot to automatically patrol these two kinds of changes, so that human patrollers can focus on more important changes. —DSGalaktos (talk) 11:42, 22 April 2015 (UTC)
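The patrolling call itself is a standard MediaWiki API action; a sketch with requests, assuming an already logged-in session that holds patrol rights:

import requests

API = 'https://www.wikidata.org/w/api.php'
session = requests.Session()  # assumed to be authenticated already

# Fetch a patrol token.
token = session.get(API, params={
    'action': 'query', 'meta': 'tokens', 'type': 'patrol', 'format': 'json',
}).json()['query']['tokens']['patroltoken']

def mark_patrolled(rcid):
    """Mark one recent change as patrolled by its rcid."""
    return session.post(API, data={
        'action': 'patrol', 'rcid': rcid, 'token': token, 'format': 'json',
    }).json()

The deciding logic (is the old title now a redirect to the new one?) would sit in front of this call.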

I agree, these edits should be patrolled automatically. But in case you don't know: User:Pasleim recently created a great tool that puts unpatrolled edits into groups (like "page moves") and allows patrolling them from a list view, even patrolling multiple edits (of one user) with one click. I think this helps a lot already. --YMS (talk) 12:02, 22 April 2015 (UTC)
I saw the announcement, but wasn’t aware of those features… thanks! —DSGalaktos (talk) 15:39, 22 April 2015 (UTC)
For fun, I started writing the bot myself (bash, curl, jq, sed, and coreutils). You can see it on GitHub. “Page moved” patrolling works. “Page deleted” isn’t implemented yet, mainly because it’s harder to work on: there are fewer edits of this type that I can use for testing :D —DSGalaktos (talk) 15:05, 27 April 2015 (UTC)
@DSGalaktos: Even if the page move or deletion is incorrect, that’s something that Xwiki has to handle, not Wikidata. I'm not sure I agree. A confused item/article subject pair can very well mean a wrong label later, and incorrect statements thereafter, as speakers of that language will read the label. It is definitely a problem for resolving future interwiki conflicts. And I don't really like "this is not our problem": it means potential future misunderstandings and throwing the ball at each other. I don't think that Wikidata and the other wikis are separate projects, as they are highly connected and interdependent. For example, I think that infoboxes on Wikipedia should be edited on Wikidata. Changes on Wikidata can affect infoboxes. So it is our duty to take care of each other. TomT0m (talk) 16:04, 27 April 2015 (UTC)
@TomT0m: I don’t think most Wikidata users are equipped to properly assess page moves, simply because of language problems. From my personal experience, I’d say that I could actually understand about 10% of the page moves I’ve patrolled; the rest was in Hebrew, or Chinese, or Japanese, or Kannada, or some other language that I don’t speak. (That number is probably biased because I’m likely to be awake and active at the same time as other European users, whose changes I’m more likely to understand than Asian users’.) In those cases, all I can do is check that both pages have the same content, and that the old page link has something right below the title that looks like a “redirected from” note.
And I really don’t think that’s a problem. What I’m proposing to auto-patrol is only the most basic kind of change: the old page redirects to the new one. In my experience, this usually happens because the spelling of the title changed, e. g.: an English title or term being translated; or transliteration of a name added, improved, or removed in favor of original spelling (example); or simply case changes (example). None of these are problematic in any way – in fact, it wouldn’t change much if the Wikidata link wasn’t updated at all (users would just see the redirect). (Similarly, if a page on another wiki was deleted, there’s no point in keeping the sitelink, because it doesn’t go anywhere.) The other frequent kind of page move, where the old page becomes a disambiguation (for example, for multiple people with the same name), I’m happy to leave alone. —DSGalaktos (talk) 19:58, 27 April 2015 (UTC)
As for the 2nd point: I've seen a bunch of unpatrolled edits with "article was deleted" where the article was only temporarily deleted (for moving, for merging histories, for filtering vandal edits) and immediately restored. But the sitelink is not restored on the item itself! So such edits should be manually checked (and reverted). --Infovarius (talk) 13:25, 29 April 2015 (UTC)
I wasn’t aware of such edits – should I limit the bot to page moves only? (Which conveniently is exactly what’s already implemented.) —DSGalaktos (talk) 13:26, 30 April 2015 (UTC)
I’ve written another version, again only for page moves (deletions are much rarer and consequently less important), but this time as a user script: User:DSGalaktos/checkMoveDiff.js. If the new page has a redirect from the old page, it adds a green ✔ before the move comment, freeing you of the necessity to verify the move manually, while still leaving the responsibility of the actual patrolling to you. (I recommend using User:DSGalaktos/hidePreviewOnDiff.js as well if you’re using the Preview gadget.) Perhaps User:TomT0m will be happier with this behavior? —DSGalaktos (talk) 15:19, 30 April 2015 (UTC)

Move last group of ship classes

Now that a consensus was finally reached to retain vessel class (P289), I would appreciate a bot to port the last 800 or so statements that still indicate the ship class [ ship class (Q559026) ], improperly using property instance of (P31).

Out of 5175 ships using vessel class (P289), these 800 or so: [Query: claim[31:(claim[31:(tree[559026][][279])])]] are duplicating the information in P31. Rather than blindly deleting these P31 statements (since they are redundant), I would prefer to port references and qualifiers from P31 to P289.

Thanks ---- LaddΩ chat ;) 00:10, 24 April 2015 (UTC)

I'm sad to learn that. By the way, you did not comment on Wikidata:Requests_for_comment/Adopt_Help:Classification_as_an_official_help_page, which takes ship classes as an example and is way broader in scope. I put a lot of energy into this, so I'd like at least some comments. TomT0m (talk) 06:25, 24 April 2015 (UTC)
Hi TomT0m, I responded on your talk page. There is nothing "sad" about fixing the remaining inconsistencies to comply with the community decision, rather the opposite. -- LaddΩ chat ;) 14:20, 24 April 2015 (UTC)
I'm just saying I would have liked the decision to be the other option :) TomT0m (talk) 08:12, 25 April 2015 (UTC)
  • Oppose - The general concept is to have P31 always to be as precise as possible. FreightXPress (talk) 12:10, 28 April 2015 (UTC)
    • Comment User:Laddo made me aware of this discussion. So, there might be reasons to have "vessel class", e.g. display in infoboxes. But that does not mean that for ships, submarines and spacecraft the P31 should be degraded. All P279 of coastal defence ship (Q2304194) are ship classes [6]. So, all ships belonging to one of these ship classes can be made P31 of the more specific value. If there were a subclass "blue coastal defence ship", a ship could end up in two subclasses. But that is not the case. Each ship (that is a coastal defence ship [clarification thanks to User:TomT0m]) is an instance of exactly one subclass of coastal defence ship (Q2304194) or is a direct instance of coastal defence ship (Q2304194). It can then inherit anything from that class. That is what subclassing is for, isn't it? FreightXPress (talk) 15:25, 29 April 2015 (UTC)
      • @FreightXPress: Did you read Help:Classification? If you did, you know you're wrong ;) No, not every subclass of ship is a ship class in the sense of the en:ship class article. But every subclass of ship that is a ship class is
        < the ship class > instance of (P31) < ship class >
        , which allows us to query or find the correct class. But I'm not sure you do not know that. TomT0m (talk) 15:32, 29 April 2015 (UTC)
          • @TomT0m: "Not every subclass of ship is a ship class in the sense of the en:ship class article." - I know that. Laddo made that pretty clear. Each ship type class is not a ship class. Please help me understand which of my statements was wrong. FreightXPress (talk) 15:44, 29 April 2015 (UTC)
            • @FreightXPress: I barely understood what you meant; obviously, not every ship is a coastal defense ship. TomT0m (talk) 15:48, 29 April 2015 (UTC)
            • @TomT0m: Thanks a lot. Now I see how my text might be misleading if not read as a whole. Starting with "All P279 of coastal defence ship (Q2304194)" I was only talking about coastal defence ships. FreightXPress (talk) 15:52, 29 April 2015 (UTC)
              • Oh, OK, I think we agree then. Of course, as I said before, for practical reasons it could be convenient to have as real statements the instance of (P31) claims about the ship type and the ship class to ease infobox coding. No change in meaning, just a small amount of chosen redundancy. TomT0m (talk) 16:38, 29 April 2015 (UTC)

Import DOI Prefixes

DOI Prefix (P1662): this Crossref list could be imported. It's not complete but it contains a basic selection of DOI prefixes. --81.171.85.254 13:42, 30 April 2015 (UTC)

List all items using "nl-informal"

I need a list of items that have a label, description, alias, etc. for "nl-informal". Who can help me with this? Sjoerd de Bruin (talk) 07:45, 11 May 2015 (UTC)

http://quarry.wmflabs.org/query/3554 --Pasleim (talk) 08:47, 11 May 2015 (UTC)
There's also this Autolist query. - Nikki (talk) 08:50, 11 May 2015 (UTC)
Thanks, I've tried AutoList earlier but it didn't show all the items. Not sure what's changed. Now, time for fixing stuff from the past! Sjoerd de Bruin (talk) 09:03, 11 May 2015 (UTC)

ISBN conversions

We have, at the time of writing, 8,287 items with ISBN-13 (P212) and 10,713 items with ISBN-10 (P957). Since it is possible to generate the former from the latter, as described on Wikipedia, please can someone do so? The task might usefully be repeated on, say, a monthly basis. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:15, 11 May 2015 (UTC)
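The conversion itself is mechanical: drop the ISBN-10 check digit, prefix 978, and recompute the check digit with alternating weights of 1 and 3. A sketch:

def isbn10_to_isbn13(isbn10: str) -> str:
    digits = [c for c in isbn10 if c not in '- ']
    assert len(digits) == 10, 'not an ISBN-10'
    core = '978' + ''.join(digits[:9])  # drop the old check digit, add the prefix
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

print(isbn10_to_isbn13('0-306-40615-2'))  # -> 9780306406157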

Replacing P143 with P248 only on certain items

If it's possible, I'd like to replace imported from (P143) with stated in (P248) on those items that use Anagrafe delle biblioteche italiane (Q19753501) as a source, since I've read the latter is to be preferred. There should be just around 60 items to fix. I'd like to do it on my own, but I don't know how to use pywikibot for this. Thanks. --Sannita (ICCU) (talk) 13:17, 11 May 2015 (UTC)
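Since the question is specifically about pywikibot, a sketch of the reference swap; note that removeSource() drops the whole reference block, so any other snaks in the same reference (retrieval dates etc.) would have to be copied over in a more careful version:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
SOURCE = 'Q19753501'  # Anagrafe delle biblioteche italiane

def fix_item(qid):
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    to_fix = []
    for claims in item.claims.values():
        for claim in claims:
            for source in claim.getSources():
                for ref in source.get('P143', []):
                    if ref.getTarget().id == SOURCE:
                        to_fix.append((claim, ref))
    for claim, ref in to_fix:
        claim.removeSource(ref)  # drops the entire old reference
        new_ref = pywikibot.Claim(repo, 'P248', is_reference=True)  # stated in
        new_ref.setTarget(pywikibot.ItemPage(repo, SOURCE))
        claim.addSource(new_ref)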

Adding missing labels for all biological species, genera, families, etc.

In a lot of languages the labels are missing, even though a label could easily be added. This is especially easy for items with instance of (P31)  taxon (Q16521) + taxon name (P225): the scientific name added with P225 can be added as the label in all languages using the Latin alphabet. If a label has already been set in a language and it differs from the Latin name, the taxon name (P225) should be added as an alias (if it is not there already). This should make finding the right items a lot easier.

This bot run should be done for all languages with Latin alphabet, and those include: en, de, fr, it, es, af, an, ast, bar, br, ca, co, cs, cy, da, de-at, de-ch, en-ca, en-gb, eo, et, eu, fi, frp, fur, fy, ga, gd, gl, gsw, hr, hu, ia, id, ie, is, io, kg, lb, li, lij, mg, min, ms, nap, nb, nds, nds-nl, nl, nn, nrm, oc, pcd, pl, pms, pt, pt-br, rm, ro, sc, scn, sco, sk, sl, sr-el, sv, sw, vec, vi, vls, vo, wa, wo, zu. Are there any languages missing where this should be done as well?

Thanks! Romaine (talk) 17:25, 13 May 2015 (UTC)
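For reference, the per-item edit would look roughly like this with pywikibot (the language list is truncated to a few codes from the request, and obtaining the item list is left out):

import pywikibot

LANGS = ['en', 'de', 'fr', 'it', 'es', 'nl', 'sv']  # extend with the full list above
repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

def fill_taxon_labels(qid):
    item = pywikibot.ItemPage(repo, qid)
    data = item.get()
    name = item.claims['P225'][0].getTarget()  # the taxon name, a plain string
    new_labels, new_aliases = {}, {}
    for lang in LANGS:
        label = data['labels'].get(lang)
        aliases = data['aliases'].get(lang, [])
        if label is None:
            new_labels[lang] = name
        elif label != name and name not in aliases:
            new_aliases[lang] = aliases + [name]
    if new_labels:
        item.editLabels(new_labels, summary='add taxon name as label')
    if new_aliases:
        item.editAliases(new_aliases, summary='add taxon name as alias')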

There is no consensus for this (see latest discussion). --Succu (talk) 17:48, 13 May 2015 (UTC)
What I found is a discussion exactly one year old, with just one person not supporting, because "the gadgets then need to load more data". Is that the same "no consensus" you meant? Edoderoo (talk) 20:13, 13 May 2015 (UTC)
(after edit conflict) The discussion you link to links to another discussion in which only one person objects, because it would increase the size of items and some tools can't handle that. This was and is silly reasoning; the adding will happen anyway, it will only take a bit longer. Now we have the problem that a lot of items can't be found because of missing labels. Is Wikidata intended to serve bots only, or can we humans get better service by having a label in our own language? Romaine (talk) 20:16, 13 May 2015 (UTC)
Almost all taxa have an English label, so when searching for them you should get a hit. Adding these labels is not a big deal; my bot can easily do this. Simply get consensus for it. --Succu (talk) 20:33, 13 May 2015 (UTC)
In the search box on top of every page, I only get to see the items with a label in the language I have set in my preferences. So it happens a lot that I can't easily search and select. Secondly, to be able to link to an item on other items, I need to have a label set in my language. This makes it frustrating for users to work with Wikidata. Romaine (talk) 20:45, 13 May 2015 (UTC)
Romaine: An example would be helpful. --Succu (talk) 21:10, 13 May 2015 (UTC)
A random example is Q12488462; searching for it fails. This is not a taxon, but it illustrates that we need more labels in more languages on many more items, including taxa. Romaine (talk) 22:01, 13 May 2015 (UTC)
  • I haven't changed my position: it would be best if the software could read P225 as if it were the label (this would save a whole lot of redundant content). And otherwise, Succu is the user that has the closest knowledge of the situation here. - Brya (talk) 11:06, 14 May 2015 (UTC)
Fully agree with Brya here. The Lua-modules used on WP and in other places should often prefer to use the name-properties when they exist. Introducing "wrong label" by bot, especially to this kind of items, would cause a lot of wrath on my home-wp. The search-engine-problem can be solved otherwise. Earlier, I could search for correct items by the content in the properties. I have not seen that for some time here now. -- Innocent bystander (talk) 11:40, 14 May 2015 (UTC)

List of Wikidata items with VIAF numbers

Hello, if it's not too much to ask, could somebody please check the VIAF identifiers in this table and add the corresponding Wikidata items (if extant) in the right column? Jonathan Groß (talk) 11:27, 20 May 2015 (UTC)

You can fetch them from http://tools.wmflabs.org/wikidata-todo/beacon.php --- Jura 11:51, 25 May 2015 (UTC)
Yes, but that's a huge bulk of data. Is there any way to single out the 600+ VIAF numbers on my subpage without Ctrl+F searching every single identifier? Jonathan Groß (talk) 13:15, 27 May 2015 (UTC)
Resolver by User:Magnus Manske will do that, one item at a time; perhaps he has (or might add) a bulk input option? Or suggest an alternative? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:15, 27 May 2015 (UTC)
Wrote a new tool for you; also updated your list, and threw in GND identifier (P227) for good measure ;-) --Magnus Manske (talk) 22:26, 27 May 2015 (UTC)
Thanks a bunch :) That's great. Jonathan Groß (talk) 09:47, 28 May 2015 (UTC)

mark Hidden categories as Administrative

https://www.wikidata.org/wiki/Wikidata:Project_chat/Archive/2015/04#Marking_adiministrative_categories.

I tried to do this with http://tools.wmflabs.org/autolist/index.php by specifying Category=Hidden categories, depth=1 (all admin cats are direct subcats of "Hidden categories").

But there are 15,859 total (see https://en.wikipedia.org/wiki/Category:Hidden_categories), so the tool takes forever. Could someone do it with a bot? Please:

  • set P31:Q15647814 Wikimedia administration category page
  • delete P31:Q4167836 Wikimedia category

--Vladimir Alexiev (talk) 06:52, 21 May 2015 (UTC)
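A sketch of that loop with pywikibot, sticking to direct subcategories as requested and run against one wiki at a time (the two QIDs are the ones given above):

import pywikibot

site = pywikibot.Site('en', 'wikipedia')
repo = site.data_repository()
root = pywikibot.Category(site, 'Category:Hidden categories')

ADMIN = 'Q15647814'  # Wikimedia administration category page
PLAIN = 'Q4167836'   # Wikimedia category

for cat in root.subcategories(recurse=False):  # direct subcats only
    item = pywikibot.ItemPage.fromPage(cat)
    item.get()
    claims = item.claims.get('P31', [])
    for claim in claims:
        if claim.getTarget() and claim.getTarget().id == PLAIN:
            item.removeClaims([claim])  # delete P31:Q4167836
    if not any(c.getTarget() and c.getTarget().id == ADMIN for c in claims):
        claim = pywikibot.Claim(repo, 'P31')
        claim.setTarget(pywikibot.ItemPage(repo, ADMIN))
        item.addClaim(claim)  # set P31:Q15647814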

Don't forget that other projects could do something different. :) Sjoerd de Bruin (talk) 12:06, 25 May 2015 (UTC)

Labels of elements in en-GB

A number of chemical elements have capitalised labels in en-gb, as can be seen in https://tools.wmflabs.org/ptable/?lang=en-gb (compare to https://tools.wmflabs.org/ptable/?lang=en ) Can someone change them to lower-case, please? Indeed, this could be done for any "en-gb" label, where the "en" label is lower case. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:21, 23 May 2015 (UTC)
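For a known list of items, the fix-up itself is small with pywikibot; the harder part, as the reply below notes, is finding the items whose labels differ:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

def fix_engb(qid):
    item = pywikibot.ItemPage(repo, qid)
    labels = item.get()['labels']
    en, gb = labels.get('en'), labels.get('en-gb')
    # Act only when en-gb differs from an already lower-case en label solely by case.
    if en and gb and en[:1].islower() and gb != en and gb.lower() == en.lower():
        item.editLabels({'en-gb': en}, summary='match en-gb label case to en')

fix_engb('Q556')  # e.g. hydrogen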

Heh, I just came to this page to ask if someone could make a list of items where the en-gb label doesn't match the en label so that I could check them and this was the most recent request. :) I've fixed the capitalisation of the chemical elements manually but I have no idea how many other items need fixing since I don't know how to find the ones which differ. - Nikki (talk) 11:48, 25 May 2015 (UTC)