Shortcut: WD:PC

Wikidata:Project chat

Wikidata project chat
Place used to discuss any and all aspects of Wikidata: the project itself, policy and proposals, individual data items, technical issues, etc.
Please take a look at the frequently asked questions to see if your question has already been answered.
Also see status updates to keep up-to-date on important things around Wikidata.
Requests for deletions can be made here.
Merging instructions can be found here.

IRC channel: #wikidata
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2016/09.
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 1 day.

The Gadget DuplicateReferences doesn't work properly again! I can both copy and paste, but the data isn't saved! -- Innocent bystander (talk) 06:57, 9 September 2016 (UTC)

I came here to report the same problem. When I paste a reference it says "saving", but apparently doesn't do anything other than print that message to the screen. It stays "saving" until the page is reloaded at which point it becomes clear that no saving has been happening at all. Thryduulf (talk) 13:03, 9 September 2016 (UTC)
phab:T142203. Sjoerd de Bruin (talk) 13:10, 9 September 2016 (UTC)
Bumping this thread! -- Innocent bystander (talk) 12:47, 14 September 2016 (UTC)
I've put it into the current development sprint. I hope the devs get to it asap. Sorry for the breakage. --Lydia Pintscher (WMDE) (talk) 09:42, 15 September 2016 (UTC)
The gadget should be working again. Sjoerd de Bruin (talk) 09:37, 21 September 2016 (UTC)
Yes - it's working. Thanks for fixing it. A most useful tool. Robevans123 (talk) 10:01, 21 September 2016 (UTC)
It's working, but not as well as it used to. Previously, if you copied a reference and then added a new claim, you could still paste the reference onto it; now you can't. Given the importance of sources I also don't understand why this is a gadget instead of a core feature. ChristianKl (talk) 10:57, 22 September 2016 (UTC)
Yep, reported that yesterday. Sjoerd de Bruin (talk) 11:41, 22 September 2016 (UTC)

Allow non-existing article in a language to list articles in other languages[edit]

Suggestion: Allow indicating that an article which does not exist in the current language exists in other languages, so that those languages appear in the Languages list alongside the suggestion to create the article.

Solution (idea): Wikidata would have to allow specifying, for a non-existing article in a given language, which other languages do have the article.

Advantage: Clicking a red link for a non-existing article in a certain language could then offer the option to read the article in other languages, in addition to creating it. The reader can get the information in another language of their choice, while the link still shows that the article does not exist in the current language.

Note: Suggestion was also previously posted at [1].  – The preceding unsigned comment was added by MortenZdk (talk • contribs) at 11. 9. 2016, 11:35 (UTC).

Does it make sense to allow anonymous editing on Wikidata?[edit]

Less than one percent of edits on Wikidata are made by anonymous users. Yet when I look at the "Recent changes" page, a lot of the problematic edits are made by anonymous users. Even when anonymous users enter valid content into Wikidata, they seldom add sources for it. If one user makes many bad edits, it's much easier to deal with them all if the user is registered than when the edits come from changing IP addresses. If we value data quality highly, our data quality might be higher without anonymous users.

Educating anonymous users is hard, given that they might have another IP address the next time they edit. As a result, interaction with them is frequently more hostile: their edits get simply reverted without any explanation of how to contribute properly. If every user were registered, users who make bad edits could get useful guidance, instead of the status quo where their edits are simply reverted.

Do you think that there's value in allowing anonymous editing? ChristianKl (talk) 14:47, 14 September 2016 (UTC)

I feel exactly the same way. At this point, given that any data may be edited at will by pretty much anybody, and that there's likely too much for the current community of (two orders of magnitude below) 16,000 active editors to counter every stupid act of vandalism, this despite the existence of several bots and automated tools that aid in this effort, we should consider requiring edits by anonymous editors and non-autoconfirmed users to undergo review (in a similar fashion to Wikibooks) before being included in existing items and properties. On a similar note, we may also wish to prohibit the creation of items by those same categories of people. Mahir256 (talk) 16:29, 14 September 2016 (UTC)
I don't think there's the bandwidth to review the edits of all anonymous editors. Having a lot of unconfirmed data might not be very useful. I think it makes more sense to simply not allow anonymous edits. On the other hand, I think there's value in allowing non-autoconfirmed users to create items. It's how new users come into this community. ChristianKl (talk) 17:13, 14 September 2016 (UTC)
I revert a lot of vandalism and can confirm that most of it is made by anonymous users. But looking at recent changes by anonymous users, most of their edits are not vandalism, so there is value in allowing anonymous editing.
We need to fight vandals, not anonymous users. For vandals, the biggest reward is for their vandalism to be seen. So I think that semi-protecting the most popular items may be the way to reduce vandalism. At the moment we have fewer than 100 items in the main namespace semi-protected. For comparison, enwiki has 3,164 indefinitely semi-protected and at least 1,254 temporarily semi-protected pages. --Jklamo (talk) 17:33, 14 September 2016 (UTC)
While most edits by anonymous editors are not vandalism, they are also not referenced content. There's no good way to teach anonymous editors to provide sources for their edits. With registered editors, it would be possible to automatically post a message to their talk page after they have made 25 edits without providing sources, encouraging them to source their material.
I think a good subset of those who currently edit anonymously would register an account if that were the only way to contribute.
As far as semi-protection goes, there might be value in automatically semi-protecting an item on Wikidata when the corresponding Wikipedia article is semi-protected. ChristianKl (talk) 18:14, 14 September 2016 (UTC)
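The auto-semi-protection idea could be prototyped against MediaWiki's prop=info&inprop=protection query output. A minimal sketch; the JSON shape shown is an assumption based on the public MediaWiki API, and the example payload is invented for illustration:

```python
# Sketch: decide whether a Wikidata item should be semi-protected
# because its corresponding Wikipedia article is. The dict shape
# mirrors MediaWiki's prop=info&inprop=protection output; treat it
# as an assumption, not a verified contract.

def is_semiprotected(page_info):
    """True if the page has an edit restriction at 'autoconfirmed' level."""
    for entry in page_info.get("protection", []):
        if entry.get("type") == "edit" and entry.get("level") == "autoconfirmed":
            return True
    return False

# Invented example payload, shaped like one of the API's "pages" values:
example = {
    "title": "Pompeii",
    "protection": [
        {"type": "edit", "level": "autoconfirmed", "expiry": "infinity"},
        {"type": "move", "level": "sysop", "expiry": "infinity"},
    ],
}
```

A bot could run this check against recent protection-log entries on a Wikipedia and mirror the result onto the linked item.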
  • Does eswiki still have a big red button saying "go vandalize Wikidata" on their protected pages?
    --- Jura 17:38, 14 September 2016 (UTC)
Recently we've put suggestive pencil icons in our infoboxes, inviting readers to edit (read: vandalize) Wikidata. Strakhov (talk) 17:45, 14 September 2016 (UTC)
We should certainly allow anonymous editing on Wikidata. However, we should work more on making it easier to edit correctly. Some or all properties should prompt editors for a source before saving. Properties (and sometimes items) should show more extensive usage instructions in the editing area itself. Constraints violations should be visible while editing. Property and item suggestions should be more accurate. IIRC, the devs are working on all these things, so we just need to be patient and do our best patrolling in the meantime. --Yair rand (talk) 20:47, 14 September 2016 (UTC)
Before we can ask people to add references, we should fix the property suggestions for them or at least suggest a default set when no suggestion data is available. Sjoerd de Bruin (talk) 20:52, 14 September 2016 (UTC)
The short answer from us is yes; see rule #2 of our principles. --Liuxinyu970226 (talk) 10:12, 17 September 2016 (UTC)
I'm not sure what would make that page a binding policy document. It seems to have an exception list that grows freely, without policy decisions adding exceptions in an easily visible way. There are, for example, NPOV exceptions for projects that found NPOV didn't work for them. This would just be another exception. ChristianKl (talk) 18:45, 17 September 2016 (UTC)
@ChristianKl: c.f. m:Wikimedia_Forum/Archives/2015-11#Proposition: Letting individual Wiki projects decide on their own whether they want to ban IP edits. --Liuxinyu970226 (talk) 22:29, 21 September 2016 (UTC)
That page has one user, Nemo, with a strong opinion, and other users who point out that exceptions to those principles are made de facto. ChristianKl (talk) 09:18, 22 September 2016 (UTC)

Support Absolute support for blocking IPs! They are just wasting volunteers' time. --Sauri-Arabier (talk) 18:21, 17 September 2016 (UTC)

  • Strong oppose First the developers have to fix the bugs that randomly log us out when we move from one project to another! I often experience this when I visit projects I am not very active in, for example Wikimedia Commons. -- Innocent bystander (talk) 18:26, 17 September 2016 (UTC)
  • Strong oppose You all know well that even if we wanted to disable anonymous editing, we wouldn't be allowed to. We are an open wiki facing vandalism not only from anonymous users, just like 800+ other Wikimedia wikis, none of which has disabled editing by anonymous users (though some wikis have an advanced system for edit review), and it will stay so forever. So I consider this discussion really pointless. Matěj Suchánek (talk) 18:55, 17 September 2016 (UTC)
That sounds a bit like the fox in Aesop's fable. Are you aware of any wiki that tried to ban anonymous editors and had the Wikimedia Foundation block the attempt?
I think a key reason why en.Wiki didn't shut down anonymous editing was that a lot of valuable content was contributed to en.Wiki by anonymous users. Less than 1% of our content comes from anonymous sources, and the amount of sourced content is even less. Vandalism fighting on Wikidata is also harder, given that vandals can choose to contribute content in languages that most reviewers don't speak.
But even without a decision to ban all anonymous editing, we could block new item creation as en.Wiki does, and use sighted revisions as de.Wiki does, neither of which we currently do. ChristianKl (talk) 14:23, 20 September 2016 (UTC)
@ChristianKl: I would certainly encourage disallowing certain actions through use of abusefilters for IP addresses, or even new users, even if that means that there are some new abuse filters or actions that we need to develop. The ability for an IP to edit is clearly different from the ability to edit as they so please, or without controls. If it is a disallowable event, we could restrict and direct IP users to have an account for certain actions  — billinghurst sDrewth 02:11, 21 September 2016 (UTC)
  • Oppose Per Matěj Suchánek. Lymantria (talk) 20:27, 17 September 2016 (UTC)
  • Oppose Per Matěj Suchánek. Conny (talk) 10:59, 18 September 2016 (UTC).

Support Blocking anonymous editing is not a violation of rule #2 because everyone can register an account in 5 seconds. --It's So Easy (talk) 10:56, 18 September 2016 (UTC)

@It's So Easy: At least in Mainland China you have to wait at least half a minute for Special:CreateAccount to load, thanks to the Great Firewall of China (Q5370363); Iran is likely a similar case IMO. --Liuxinyu970226 (talk) 23:50, 24 September 2016 (UTC)
  • Oppose Per Matěj Suchánek. Averater (talk) 16:30, 18 September 2016 (UTC)
  • Oppose Pretty much as Matěj Suchánek stated. There would have to be a strong evidence base showing that (nearly) all IP editing was bad for WD and that we were unable to manage the risks of IP editing with the available tools. Such a case has not been presented here. If the proposal is to come forward, then based on the anecdotes let us start gathering the statistical evidence required. We can also start encouraging IP users to create and use an account, and in that we are no different from the other wikis which face this issue.  — billinghurst sDrewth 02:02, 21 September 2016 (UTC)
We could display an inline popup every time an anonymous user saves a statement that encourages that user to register an account. ChristianKl (talk) 12:45, 21 September 2016 (UTC)
I hope that is not a serious suggestion. I presumed that we are looking for practical solutions to vandalism, not provocative actions that piss off valid users. The scheme for converting IP editors into valued contributors would hopefully be more mature than suggested.

IP editing is reasonable; it is vandalism that is unacceptable. Not adding references is unfortunate, not solely the province of IP editors, and not something that should prevent an edit. There we need to look at our system for references and our inability to make it both easy and robust. Nobody has yet presented solutions to my issues with adding references, and I have added significantly more than one or two entries manually; our system is simply immature for my needs.  — billinghurst sDrewth 14:34, 21 September 2016 (UTC)

At this phase, listing suggestions is more a matter of brainstorming for me than saying that a certain system should be implemented. If we don't want anonymous editors but Wikimedia insists on our allowing them, this would be a possible solution. But contrary to what Matěj Suchánek suggests, I don't think that Wikimedia would insist.
Additionally, an edit without references by a user with a history of good edits is more valuable than the same edit by an anonymous user. Vandalism-detection algorithms can learn to trust registered users in a way they can't learn to trust anonymous users. ChristianKl (talk) 14:48, 21 September 2016 (UTC)
  • To use anecdotal evidence from my own patrolling activities here, I'd say that ~95% of anon edits are valid contributions to the project. Not worth turning it off because of the slight margin of abuse, in my opinion. -- Ajraddatz (talk) 00:25, 22 September 2016 (UTC)

Detailed lists of monuments in Pompeii[edit]


Hi all,

I am working on the migration of Italian WLM lists to Wikidata and I have a question about notability. We have detailed lists of all the buildings in Pompeii (e.g. here), but many buildings are just "houses" or "shops" with no further specification and obviously no wikilinks. Such elements though do have references to external sites (although not to the official site) and are included in the WLM lists. Can they be inserted as items in Wikidata along with their WLM IDs and statements about their location (e.g. location Regio I degli scavi archeologici di Pompei, which already exists as an item)? Nvitucci (talk) 09:52, 15 September 2016 (UTC)

They are each well documented in external sources, so eminently notable. Go ahead. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:47, 15 September 2016 (UTC)
If a file in commons shows a picture of X, then there's a valid sitelink to Commons and it's notable under (1) and (3) of the notability policy. Supporting WLM is also a structural need under (3). There are likely also some serious&reliable sources from Google Maps, Bing Maps, to various documents in city planning that should make buildings notable under (2). ChristianKl (talk) 17:25, 15 September 2016 (UTC)
"If a file in commons shows a picture of X, then there's a valid sitelink to Commons and it's notable under (1) and (3) of the notability policy" Are you sure about that? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:44, 16 September 2016 (UTC)
I think that's what the sitelink criterion is about. If we want Wikidata-Commons integration, there's no reason to set policy barriers that make that integration harder. ChristianKl (talk) 18:47, 17 September 2016 (UTC)
Would that be the criterion which says 1. It contains at least one valid sitelink to a page on... Wikimedia Commons. To be valid, a link must not be a... file...? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:10, 17 September 2016 (UTC)
Okay, then it might not fall under (1), but there's still (3). Being able to state what object a file represents is a structural need. ChristianKl (talk) 11:16, 20 September 2016 (UTC)
I refer you to the example picture, above. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:52, 21 September 2016 (UTC)
Not all of them have links to images on Commons, so probably (1) does not apply (yet). They do have external references, as Andy confirmed. Nvitucci (talk) 09:11, 21 September 2016 (UTC)
Entries in monuments lists are fine.
--- Jura 22:33, 15 September 2016 (UTC)

I created an example here. If it looks like it makes sense, I'll proceed with the creation of all the other items. Nvitucci (talk) 13:48, 21 September 2016 (UTC)

@Nvitucci: It should have instance of (P31) - you could create an item specially, then use, say, "instance of: ruin at Pompeii". If possible, please also add an English-language label and description - that will make it more likely that people will translate into other languages. Can you include precise coordinates? You should also be able to add (say) "significant event - burial" with a date qualifier. It would be worth waiting a day or two for other suggestions before proceeding. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:34, 21 September 2016 (UTC)
Yes, I'll add an instance of (P31) of an Italian cultural heritage item as well, but English labels and descriptions will not be easy to add automatically (at least in a "first round"); the same goes for qualifiers. As for coordinates, we don't have any at the moment (some will most probably be added later). Nvitucci (talk) 17:06, 21 September 2016 (UTC)
There is also a more specific property, namely heritage status (P1435) that could be used in place of instance of (P31), but I had some doubts that I mentioned here: what do you think about it? Nvitucci (talk) 10:37, 22 September 2016 (UTC)
Everything should have either instance of (P31) or subclass of (P279). That's not to say heritage status (P1435) shouldn't be used as well. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:50, 22 September 2016 (UTC)
Sure, I mentioned it because heritage status (P1435) is a subproperty of instance of (P31). I saw that "instance of (P31) ruins" has been added, which is fine and makes sense for Pompeii. So, it looks sensible to add "heritage status (P1435) Italian national heritage site" to each item from WLM lists; more specific classes for instance of (P31) (e.g. castle, church, fountain etc.) are less easy to assign automatically, so they can be added afterwards in some cases? Nvitucci (talk) 16:17, 22 September 2016 (UTC)
@Nvitucci: I'm not sure this subproperty claim is a good idea. As far as I know, first, the tools that detect subclasses do not use it; and second: are all statuses classes of monuments? author  TomT0m / talk page 18:12, 22 September 2016 (UTC)
@TomT0m: I'm not sure what you mean. The use of this subproperty is recommended with the use of Wiki Loves Monuments ID (P2186) and it is not to be used with classes of monuments (say, castles and churches). Nvitucci (talk) 19:08, 22 September 2016 (UTC)
@TomT0m: Sorry, I think now I get it. You were referring to the claim itself, not to its usage; I agree that it is (was) weird. Nvitucci (talk) 21:11, 22 September 2016 (UTC)
I've removed subproperty of instance of (P31) from heritage status (P1435). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:20, 22 September 2016 (UTC)
I think it's a good idea. Although I get the reason why the subproperty claim was introduced, I always find it strange when properties such as instance of (P31) (or rdf:type) are specialized. Nvitucci (talk) 20:57, 22 September 2016 (UTC)
I've been reverted - without explanation - by User:Izno. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:43, 23 September 2016 (UTC)
A) this keeps track of all the "instance of" look-alikes (which may or may not be suitable for deprecation--I'll suggest that in the majority case, they are not suitable, just so you know the intent of that statement) and B) indicates to data users (internal or external) that they can use the phrases synonymously. --Izno (talk) 18:55, 23 September 2016 (UTC)
@Izno: this discussion is starting to get really messy... This is acknowledging that we assimilate any heritage-status item to the class of all monuments/sites that have this status. Is that your intent? Just to clarify; no opposition to that. author  TomT0m / talk page 10:55, 24 September 2016 (UTC)
@TomT0m: Indeed. --Izno (talk) 21:13, 24 September 2016 (UTC)
If it's only a component of an "Italian national heritage site", I wouldn't add Q26971668 in P1435 (or P31). "Part of" seems to be the way to link it to the site, but maybe a more accurate statement about its status can be made.
--- Jura 16:23, 22 September 2016 (UTC)
I see what you mean, and actually I agree. The presence of heritage status (P1435) is recommended when Wiki Loves Monuments ID (P2186) is used, but this (single "houses" among the Pompeii ruins) is a more special case. Anyway, I think there is a description problem for Q26971668: "Italian cultural property" would probably be a better label, since it refers to an item of cultural heritage, not necessarily a place. Nvitucci (talk) 17:18, 22 September 2016 (UTC)
The English label of Q26971668 might not match the Italian one. If in doubt, use Italian only ;)
--- Jura 05:51, 23 September 2016 (UTC)
I've added an English label, "Pompeii I.1.1", based on [2]. That should be computable. It can be changed to an alias when a better label is available. I've also added "I.1.1" as a catalog code (P528), which needs a better catalog (P972) qualifier, for the catalogue described at [3] - do we have an item for that? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:33, 22 September 2016 (UTC)
If you mean that "I.1.1" is computable, then yes, it is: I did it for the Italian label in the same way. I'm not aware of an item for the catalogue you linked; a few days ago I searched for references to this numbering system in an official catalogue, but I haven't found any. Since the lists are basically built using this website (the "external site" I mentioned in the first post) as a source, if it's deemed good enough the catalogue item can be created and used. Nvitucci (talk) 20:57, 22 September 2016 (UTC)
Not sure if it's a good idea to add some random computed text as English label. I think the Italian one should do. There do seem to be two entries for "Caupona di Epagatus".
--- Jura 05:51, 23 September 2016 (UTC)
Yes, the thing is that in some cases (e.g. "Bottega" or "Ingresso" without other details) it is easy to automatically translate the label into English (e.g. "Shop" and "Entrance" respectively), while other cases might need to be dealt with manually. By the way, what do you mean by "there are two entries"? Nvitucci (talk) 07:14, 23 September 2016 (UTC)
The lists I found have two, probably adjacent lots named after Epagatus.
--- Jura 07:19, 23 September 2016 (UTC)
I can't find the second one, but yes, this might happen - that's why I added the "I.1.1" to the label as well (even more useful when you want to tell a generic "Shop" apart from another one). Nvitucci (talk) 07:25, 23 September 2016 (UTC)
"Not sure if it's a good idea to add some random computed text as English label." Indeed it would not be; which is why I did not suggest doing anything like that. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:36, 23 September 2016 (UTC)

Here is a reference model of what a Pompeii item should have according to the discussion so far; I will edit this should there be any changes, and when there is agreement I'll proceed to create all the items using it as a guideline.

The reference for these claims should be imported from (P143) Wiki Loves Monuments Italia (Q19960422).

Nvitucci (talk) 08:00, 23 September 2016 (UTC)
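For batch creation, the reference model above could be turned into QuickStatements commands. A minimal sketch, assuming QuickStatements v1 syntax (CREATE / LAST lines, tab-separated, with S143 appending the imported-from source); QS_RUINS and the helper name are hypothetical placeholders, not values confirmed in this thread:

```python
# Sketch: build QuickStatements (v1) commands for one Pompeii lot.
# QS_RUINS is a placeholder - look up the actual "ruins" item before
# running. WLM_ITALIA is Q19960422, the P143 source named above.
QS_RUINS = "Q000000"       # hypothetical: the "ruins" item
WLM_ITALIA = "Q19960422"   # Wiki Loves Monuments Italia

def lot_commands(label_it, wlm_id):
    """Return QuickStatements lines creating one item with an Italian
    label, an instance-of claim, and a sourced WLM ID (P2186)."""
    return [
        "CREATE",
        'LAST\tLit\t"%s"' % label_it,
        "LAST\tP31\t%s\tS143\t%s" % (QS_RUINS, WLM_ITALIA),
        'LAST\tP2186\t"%s"\tS143\t%s' % (wlm_id, WLM_ITALIA),
    ]
```

Generating one such block per row of the WLM lists would make the batch reviewable as plain text before anything is written to Wikidata.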

You should be able to add the date of destruction; and its cause. Also, if possible the "significant event"="excavation";"point in time"=[date]. I've done that on your example, Pompeii I.1.1 (Q26961007). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:46, 23 September 2016 (UTC)
This information is not included in the lists. I am talking about claims that can be inserted using the WLM lists as a source, plus some "common knowledge" (e.g. the instance-of-ruins claim). Surely this can be done manually (or by gathering such information by other methods). Nvitucci (talk) 14:45, 24 September 2016 (UTC)
Date and cause of destruction are surely common to them all? I'm not sure if that's true of the excavations, hence "if possible". Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:44, 24 September 2016 (UTC)

About catalog code: The qualifier is "catalog" (P972) and the value is "Pompeji". As far as I have understood, a new item should be created for the webpage which will be the new value. I have seen this catalog has pages like Maybe they could be inserted into WD too? --Molarus 19:36, 23 September 2016 (UTC)

I believe that uses the catalogue, but is not the source of it. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:56, 23 September 2016 (UTC)
It seems you are right. Here it says: "Pompeii has been divided up into Regions or Regio by the archaeologists, based on a methodology devised by Fiorelli in the 1860s." He is this one: en:Giuseppe Fiorelli. I could not find on the internet what that "catalogue" is called. The information about "Changes and renumbering" is interesting too. Maybe there should be a qualifier only if there is more than one number. What about names, for example the ones at it:Regio I degli scavi archeologici di Pompei? They have changed sometimes too. There are Wikipedia articles like it:Casa dei Vettii (Pompeii VI.15.1), a system different from "Pompeii I.1.1". There has to be a connection between both, and I don't know if that should be done with the alias names. --Molarus 20:41, 23 September 2016 (UTC)
Yes, that's what I was saying a few messages ago. I've looked for an official list using this numbering system but I couldn't find any. I will try and ask around. So this means we can't use a catalog right now, or can we? Nvitucci (talk) 14:45, 24 September 2016 (UTC)
Maybe it is enough if the number is in the label ("Pompeii VI.15.1") or in the alias ("I.1.1"). Are you sure it would not be better to put "Pompeii I.1.1" in the alias? The reason is that I can then search for "Pompeii I.1.1" and it does not matter whether it is written in the label or in the alias; I will find the item. You can check that by searching for "Casa dei Vettii" and "Pompeii VI.15.1"; you should be able to find the item both ways. By the way, at the bottom of the list de:Liste_von_Gebäuden_in_Pompeji I found the number "HGW24", outside the gates of Pompeii. The author of this list seems to be a trusted editor. I have thought about getting all these items with SPARQL, and maybe P361 "Insula x della Regio y" could be used for that. We will soon get code to print the content of items as lists in Wikipedia; maybe it could be done this way. PS: We already have Template:Wikidata list (example at User:Magnus Manske/test1), which a user has coded. It is available in many Wikipedias. --Molarus 19:03, 24 September 2016 (UTC)
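Molarus's P361 idea could be sketched as a SPARQL query (wrapped here in Python so the text can be generated per Regio). REGIO_QID is a hypothetical placeholder for the Regio item, and the property path is an assumption about how the lot-insula-Regio part-of chain would be modelled:

```python
# Sketch: a SPARQL query (for the Wikidata Query Service) listing all
# lots that are part of something that is, directly or transitively,
# part of a given Regio. The P361 chaining is an assumption about the
# modelling, not confirmed in the thread.
def regio_query(regio_qid):
    """Return a SPARQL query string for the given Regio item ID."""
    return """
SELECT ?lot ?lotLabel WHERE {
  ?lot wdt:P361/wdt:P361* wd:%s .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "it,en" . }
}
""" % regio_qid
```

The resulting query could then feed Template:Wikidata list, which takes a SPARQL query as its input.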
We can create an item for the catalogue now, and add the name and author, etc, later. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:44, 24 September 2016 (UTC)

Years/ages units mess[edit]

There's kind of a messy situation in units specifying age (there are more messy situations with units but this particular one caught my attention). There is year (Q577), annum (Q1092296) and now there's a new one years old (Q24564698). I think we need to standardize on one of them to use at least in properties concerning age, like minimum age (P2899) or age of majority (P2997). I'd prefer to remove years old (Q24564698) completely and use one of the remaining ones consistently. --Laboramus (talk) 19:10, 16 September 2016 (UTC)

+1 for deletion of years old (Q24564698). And does somebody know the difference between year (Q577) and annum (Q1092296)? Snipre (talk) 21:11, 16 September 2016 (UTC)
@Laboramus: Annum is simply the Latin word for "year". On the wikis that have separate articles for annum and year, they seem to restrict "annum" to being a Julian year, i.e. exactly 365.25 days, or to meaning "year" exclusively in the context of standardized measurements (in which case there is virtually no difference between annum and year). As most countries haven't used the Julian calendar since the 1600s, it would make a lot more sense to standardize on "year" (which covers both historical and modern methods of counting years). I'm sure there are some conscientious editors out there who would like to use annum for people who lived under a Julian calendar, but this would effectively defeat the purpose of Wikidata, which is to provide useful metadata that can be easily queried. I also support deleting years old (Q24564698). Kaldari (talk) 02:16, 17 September 2016 (UTC)
  • I don't see an advantage of adding units to human age properties. This especially as QuickStatements doesn't support it.
    --- Jura 08:32, 17 September 2016 (UTC)
    • There may be ages when months or weeks would be standardly used (like when a vaccine is due, or for a certain stage of human development, or even for some uses of P2899). I really don't think units for human age should be abandoned. – Máté (talk) 09:06, 17 September 2016 (UTC)
    • I agree with user:Máté - units are required unless we want values like 0.307692308 year (16 weeks), when e.g. some vaccinations are given[4]. Thryduulf (talk) 10:56, 17 September 2016 (UTC)
  • I agree; I'd just make these unit-less, as units are pretty much implied if it's an age property. But at least if we have units, it should be one, not three different ones. --Laboramus (talk) 18:43, 20 September 2016 (UTC)
    • @Laboramus: Years are only implied for adults. Ages of children are measured in days, then weeks and then months until years becomes almost (but not quite) exclusive after about 3-4 years old. Similarly ages of foetuses are never measured in units longer than months. Thryduulf (talk) 20:07, 20 September 2016 (UTC)
  • I can't imagine making a distinction between year and annum when expressing the age of people. I can imagine using the unit Julian year (Q217208) for expressing how long ago events occurred in the distant past, when the Earth rotated significantly more quickly on its axis, so days were significantly shorter than now (or when the Earth hadn't formed yet, so calendars are completely meaningless). Jc3s5h (talk) 12:31, 17 September 2016 (UTC)
    Well, I am not sure the calendar problem is a problem in reality. I mean, the "start date" for universe (Q1) is not set to NaN just because the Sun, the Earth, and any calendar based on them are irrelevant for that point in time. -- Innocent bystander (talk) 05:04, 18 September 2016 (UTC)
    The start date of the universe purports to be in the Gregorian calendar. The references and determination method provide sufficient information to figure out the real meaning, which is 13,798 million Julian years before the present. Using the Gregorian calendar to state this date is like using a chisel as a screwdriver. Jc3s5h (talk) 14:02, 18 September 2016 (UTC)
    There's about 9000 meanings of "year" when we're talking about astronomical-scale dates, but with the precision we're talking about it doesn't really matter. Let's not get into the woods there; this question is about non-astronomical scales where we usually have a pretty good idea which years it is about, we just need to figure out how to record them properly.--Laboramus (talk) 18:54, 20 September 2016 (UTC)
    Most dates in Wikidata are wrong because of sloppy definition of what a date is. You must consider astronomical time spans. Jc3s5h (talk) 21:01, 20 September 2016 (UTC)
    Many dates are wrong because the data model for time was not well communicated. Data added via the API and through the GUI gave/gives different results. When it comes to dates in astronomy, I at least have waited for some bug fixes, since Julian dates are neither the Gregorian nor the Julian calendar. -- Innocent bystander (talk) 05:56, 21 September 2016 (UTC)

Wikipedia Corpus to Wikidata[edit]

@ChristianKl: @GerardM: I found that the Wikidata StrepHit project matches my intentions, given that it extracts statements from sentences. But StrepHit's accuracy is only 78% and it still relies on human interaction via the primary sources tool. We can enhance StrepHit since we already have 180,000 approved and 50,000 disapproved statements, which could be used to retrain it in order to get an accuracy that is close to human level. Once we get that level of accuracy it makes sense to automate the process by using a bot. In order to confirm my hypothesis I need to know more about the implementation of StrepHit, so I am going to try to contact one of the StrepHit team members to learn more about it.--GhassanMas (talk) 21:47, 16 September 2016 (UTC)
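(As an aside, the two figures quoted above are consistent with each other: 180,000 approved out of 230,000 reviewed statements is an approval rate of about 78%, matching StrepHit's quoted accuracy. A quick check:)

```python
# Approval rate over the curated statements quoted above.
approved, disapproved = 180_000, 50_000
rate = approved / (approved + disapproved)
print(f"{rate:.1%}")  # 78.3%
```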
I think there will be an uptick in actual StrepHit usage once usability improvements are made. If there are more claims made by StrepHit I also think that more claims will be approved. Working with the Primary Sources tool will also be an activity that might be fun for new users of Wikidata.
Even if the data quality of StrepHit improves having a human review data can be still useful to catch errors. There might be some subsets where StrepHit is certain enough to enter data without human verification but I think we are a long way from that at the moment. ChristianKl (talk) 22:01, 16 September 2016 (UTC)
To be more clear, my intentions are as follows: 1) train a model like StrepHit to extract a specific type of statement, by looking for specific properties like contains administrative territorial entity (P150), e.g. scan the corpora of different cities on Wikipedia "where the sentence we are scanning is referenced"; 2) extract the statements; 3) compare the extracted statements to the actual statements to measure the accuracy; 4) repeat until reaching human-like accuracy; 5) build another model for another type of property.
When I refer to human-like accuracy, I have in mind the mean accuracy of editors on Wikidata; just as some human edits need to be rechecked, I expect the same for the model. The point is that each model would be compared to a human who spends 24/7 checking the Wikipedia corpus for specific properties.--GhassanMas (talk) 11:32, 17 September 2016 (UTC)
I'm not sure that you get the false positive rate by comparing the extracted statements to the actual statements in most cases. If there is an extracted statement that doesn't exist as an actual statement, you don't know whether it doesn't exist because nobody entered it or because it's wrong.
In general I don't think there would be strong opposition to writing more bots that import data from Wikipedia. On the other hand, as data that isn't backed up by external sources, it's not the kind of data that's most valued inside Wikidata. ChristianKl (talk) 18:27, 17 September 2016 (UTC)
Suppose we have an extracted statement that states "city_X is a city in Germany"; we could definitely check whether "city_X" is a city in Germany or not (a false positive).
The point is not just to create a bot to extract statements from Wikipedia. Imagine every sentence in Wikipedia "that has a corresponding statement in Wikidata" being synced with its relevant Wikidata item. We could easily detect inconsistencies between different versions of Wikipedia.--GhassanMas (talk) 20:25, 17 September 2016 (UTC)
Historically the German question doesn't happen to be a trivial question with a trivial answer. There are many things that can be meant by Germany. Was Vienna in 1943 in Germany? Was Munich in 1860 in Germany? Was Berlin in 1860?
In general there's no reason to expect different versions to be completely consistent about issues such as borders of countries and in which country a city happens to be located.
When there are inconsistencies about it the issues tend to be highly political and those might not be best solved by a bot that doesn't understand the underlying politics. ChristianKl (talk) 09:43, 19 September 2016 (UTC)
The first paragraph of the Wikipedia articles on Berlin, Munich, Vienna and other cities answers the question for the present time. Extracting statements that answer historical questions would be done when scanning e.g. the history section of the corresponding article with different modules/features.
The bot isn't meant to be responsible for fixing inconsistencies, rather just for pointing them out.--GhassanMas (talk) 10:38, 19 September 2016 (UTC)
Not all cities exist currently. Take the historical city of Wedding, which was integrated into Berlin in 1860. It was a German city. But it's not located in Germany (Q183), because Germany (Q183) was founded in 1871.
As a more practical example, I would be interested in which country Hohengiersdorf Kreis Grottkau (Q25825696) is located. It's currently on Polish territory, but it has a German name and I don't know whether it was ever a Polish city or town. I know it was a Prussian town but I don't know whether it was ever a German town. I think that parsing a Wikipedia article that would say something about its location is nontrivial.
In many cases when people say something like "X is located in Germany" they can mean multiple different things. It's in the interest of Wikidata not to muddle everything together. ChristianKl (talk) 12:11, 20 September 2016 (UTC)
@GhassanMas: thanks for your interest in the StrepHit project. As pointed out (thank you @ChristianKl:, completely agree), StrepHit is not meant for harvesting internal sources like Wikipedia. I also think that your proposal would fit a bot duty. That said, here are a couple of comments/questions about what you mentioned:
  • "Training a model like StrepHit to extract a special type of statements"
StrepHit is not a model, it is a NLP pipeline. With respect to the machine learning part, the implementation is already one model per Lexical Unit;
  • "compare the extracted statements to the actual statements to measure the accuracy"
this naturally emerges when a StrepHit dataset is uploaded to the primary sources tool: if a statement extracted by StrepHit already exists in Wikidata, it will get the reference. You may see this as a signal of a correct extraction, but I wouldn't call it accuracy in classical terms of classification performances evaluation.
In light of this whole discussion, would you mind recapping in a more specific place how you wish to contribute to StrepHit? meta:Grants_talk:IEG/StrepHit:_Wikidata_Statements_Validation_via_References/Renewal
Cheers, Hjfocs (talk) 13:19, 22 September 2016 (UTC)

Office held vs titular[edit]

Hoi, several maharajas were no longer in office from the day of Indian independence. After that, they and the people after them are known as titular maharajas. This is definitely not being "in office". So how is this to be registered? Thanks, GerardM (talk) 09:11, 18 September 2016 (UTC)

A related problem is Swedish Dukes in modern time, like Princess Madeleine, Duchess of Hälsingland and Gästrikland (Q212035). You have to go back to 16th century to see this as a little more than a title. -- Innocent bystander (talk) 10:13, 18 September 2016 (UTC)
We don't have a property for "being in office"; we have one for "holds position". I think that works for titular positions. ChristianKl (talk) 17:30, 18 September 2016 (UTC)
They are thought to be synonymous. That is part of the problem. I have now used (once) instance of titular ruler; I use it as a qualifier. Thanks GerardM (talk) 18:13, 18 September 2016 (UTC)
I think when it comes to issues like this it's very important to look at the actual properties we have. On Johannes Kleineidam (Q113746) I think a titular bishop is well marked as such with position held (P39). It's debatable whether being a titular bishop is an office, but it clearly seems to be a position. ChristianKl (talk) 09:21, 19 September 2016 (UTC)
A titular bishop is something specific. A titular Maharaja is the next successor in a dynastic line. Something utterly different. Thanks, GerardM (talk) 19:15, 24 September 2016 (UTC)

Hide wikidata[edit]

How can we hide the Wikidata of my article from search results on the web? It pops up every time I search for my article. Most articles don't show their Wikidata online only mine. I believe the one I talked to in Wikidata channel is responsible for this. Can you do something about it?-Screamborn (talk) 17:38, 18 September 2016 (UTC)

What is "your article"? Would you provide the link to the search query where you get that result? Who did you talk to? Matěj Suchánek (talk) 18:07, 18 September 2016 (UTC)
@Screamborn: If you're asking about the sitelinks, just use {{noexternallanglinks}} on that page when needed.
But if you're asking about search engines, it seems there is nothing you can do here; you need to ask Google, Bing, etc. yourself. --Liuxinyu970226 (talk) 06:13, 19 September 2016 (UTC)
FYI Topic:Tbugr1auj56uoac5 Matěj Suchánek (talk) 12:27, 19 September 2016 (UTC)
Also at Wikidata:Contact the development team#Wikidata available for search on the web. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:23, 19 September 2016 (UTC)
Surprising request. The Wikidata page shows up in search engines if the search engines rank the page high enough, higher than other pages. So if you compare it with other terms you search for, you will rarely find Wikidata in the Top 10 results, because Wikidata is, as of today, rarely considered a top-ranked site. If the topic is very esoteric, as is the case here, Wikidata can happen to become one of the best results for the given search term according to the ranking of a given search engine. I think everything here is working as intended. --Denny (talk) 21:39, 19 September 2016 (UTC)
@Denny: Should we use robots exclusion standard (Q80776) in this case? i.e. --Liuxinyu970226 (talk) 23:55, 19 September 2016 (UTC)
I don't think there is any reason to avoid having Wikidata indexed in search engines if the search engine believes that the user will find the Wikidata page valuable. ChristianKl (talk) 10:01, 20 September 2016 (UTC)
I would agree with ChristianKl in this case. In particular languages that do not have much content on the Web in general, and that don't have ArticlePlaceholder set up - or maybe not even a Wikipedia - it would be a shame to withhold the Wikidata page. --Denny (talk) 14:41, 20 September 2016 (UTC)
I don't think non-English pages of Wikidata are currently viewable by users that aren't logged in. ChristianKl (talk) 16:51, 21 September 2016 (UTC)
They do: although that is not perfect. --Denny (talk) 17:18, 21 September 2016 (UTC)

Tool for social media[edit]

Hi all,

Is there a tool that finds, on a website, identifiers of social media (Facebook ID (P2013), Twitter username (P2002), Instagram username (P2003), Dailymotion channel ID (P2942), etc.)?

Tubezlob (🙋) 10:08, 19 September 2016 (UTC)

Your question is not clear enough for me. Sjoerd de Bruin (talk) 10:13, 19 September 2016 (UTC)
@Sjoerddebruin: OK, sorry. Is there a tool that can search the source code of a webpage (of a Wikidata item) for URLs of social media (like Facebook or Twitter), and then add them to the Wikidata item? Tubezlob (🙋) 11:10, 19 September 2016 (UTC)
@Tubezlob: So basically - see, if Twitter/Facebook etc. is used as source or qualifier for some statement in that particular Wikidata item? And then add it as Facebook/Twitter profile, if there isn't such already set? If yes, then I don't think so (but it shouldn't be too hard to code it up) and don't think it would be a good idea - you may see Twitter/Facebook links of other profiles in sources. --Edgars2007 (talk) 17:03, 21 September 2016 (UTC)

Entity usage exposure[edit]

Hello all,

We're currently working on developing tools about Wikidata entities usage exposure on the Wikimedia projects.

  • On the API, you can use prop=wbentityusage with the title of a Wikipedia page to display information about the Wikidata entities used in this page. This is already deployed and you can see an example about Hubble.
  • In the info special page of an article, we display the list of Wikidata entities used in this page (example). Later, the entity exposure information will be at the bottom of the page in the "Properties" section.
  • There is another API module we implemented which is list=wblistentityusage. It's not deployed yet, you can see an example in the beta cluster.
  • We implemented a special page called Special:EntityUsage. It's not deployed yet, you can see an example in beta cluster.
  • We're also working on action=info in Wikidata, to show editors where the data of a particular item is used on other Wikimedia projects: see on Phabricator.

If you have any question or feedback about it, feel free to add a comment or mention @Ladsgroup:. Lea Lacroix (WMDE) (talk) 14:02, 19 September 2016 (UTC)

In Wikidata, please add a link in the tools menu to get a list of Wikis that make use of an item. Aggregated usage counts at the item page would even be better but probably too difficult to get. -- JakobVoss (talk) 17:44, 19 September 2016 (UTC)
That's great news. Thank you very much! -- T.seppelt (talk) 18:15, 19 September 2016 (UTC)
Any chance that this kind of thing would be possible? The property in question there isn't the actual point; I'm asking about it in general. Didn't see such an option in the links you gave, so sorry if I missed something. --Edgars2007 (talk) 18:23, 19 September 2016 (UTC)
I think it should end up on
eventually. For Wikidata:Requests for deletions, this seems important.
--- Jura 11:19, 20 September 2016 (UTC)
Hello @Edgars2007:, no, working on the use of a property is not planned for now. Can you give me an example of where and why this could be useful for you so we can understand better what is the need? :) Lea Lacroix (WMDE) (talk) 16:59, 26 September 2016 (UTC)
@Lea Lacroix (WMDE): a) the one that Jura mentioned (for Wikidata:Requests for deletions) - to make sure that everybody is notified
b) pretty much the same as I mentioned in the link I gave. I have property X (external ID) for which the URL structure completely or partially changed. Now I update the data here with new IDs, and want to make sure that on the client side everything is fine (e.g. if they're using their own formatter URL, not the one from Wikidata).
Well, something like that. Probably there are other use cases, too. Oh, and to deprecate {{ExternalUse}} :) because I'm pretty sure it isn't complete on any of the property talk pages. --Edgars2007 (talk) 17:22, 26 September 2016 (UTC)

Interwiki links to Wikidata in the sidebar[edit]

Is it possible to make an interwiki sidebar link to Wikidata "manually" from another project? It would be like creating a link say to English Wikipedia with [[en:foo]]. The reason I'm asking is that commons:Category:Bay of Islands Coastal Park uses a template to get interlanguage links, but lacks a link to Wikidata in the "Tools" or "In other projects" sections of the sidebar.

I know that the Tools link usually appears as a side-effect of creating a sitelink, but this Commons category (like many others) doesn't have a sitelink, and it seems undesirable to either create a cross-namespace sitelink or to create a Category Wikidata item that wouldn't link to anything else.

If such a method is possible, the template that creates the interwiki links would be also able to create the Wikidata link. Ghouston (talk) 10:01, 20 September 2016 (UTC)

@Ghouston: I added the category as the Commons link to Bay of Islands Coastal Park (Q4874208) and that has now generated the "Wikidata item" link in the Tools section at Commons. I am presuming that is the effect you were seeking. If there is something at Commons then it should be paired to the Wikidata item here. That said, note that there is no consensus on whether galleries or categories should be linked to an item, though if there is an overarching category here, then that should use the Category interlink.  — billinghurst sDrewth 03:33, 21 September 2016 (UTC)
Oh, and as an addendum, English Wikisource actually does generate a separate wikilink to the Wikidata item through s:Template:Plain sister in its headers — if you want an example of how to add something manually.  — billinghurst sDrewth 03:37, 21 September 2016 (UTC)
Thanks, but that's a cross-namespace link from a Commons category to a Wikidata main item, which I've been told here previously is not permitted. I don't want to waste time making such links if they will all be reverted some day. I also know of a template on Commons that can put the wikidata link into the category header (Template:On Wikidata), but that seems a bit intrusive for a link that's likely to be of little use to most people. Ghouston (talk) 05:31, 21 September 2016 (UTC)
@Ghouston: It is NOT a cross-namespace link; such a reference would be internal to a single wiki. There is no main-to-main relationship across wikis; for instance, the Wikisources link a number of their namespaces to items here, as that is how their organisation maps to items here. Whoever told you that it is not permitted hasn't accurately reflected the last discussion that was held here, which was closed as "no consensus". So my thoughts to you are to put in the most accurate link, and for the category that you indicated there is only one item. Plus the links won't be reverted; however, they may evolve, but that is a wiki! Never be afraid of progressive change, it is actually our way.  — billinghurst sDrewth 07:27, 21 September 2016 (UTC)
The guidance that we have is at Wikidata:Commons and the last discussion about this is at Wikidata:Requests for comment/Commons links.  — billinghurst sDrewth 07:35, 21 September 2016 (UTC)
Yes, I remember that discussion, and how it was closed with one outcome, and then an "Addendum 2" was added to say that option VI was to be implemented, which involves templates, which in the meantime have been implemented and which I was trying to use above. That was also explained to me on Wikidata talk:Wikimedia Commons under the "Commons:Category:Raphael Lemkin" heading. However, if that is now all old history, and there's no longer anybody objecting to making a simple sitelinks from Commons Category to Wikidata main items, then I guess I should do that instead, since the end result is better (i.e., including a Wikidata link in the sidebar.) Ghouston (talk) 09:00, 21 September 2016 (UTC)

quickstatements syntax problem[edit]

Q4115189 P1087 2652 P585 +2016-09-20T00:00:00Z/11 S248 Q23058744 P813 +2016-09-20T00:00:00Z/11

What is the error in this code?

I thought that this code tells QuickStatements that Wikidata Sandbox (Q4115189) has an Elo rating (P1087) of 2652, that it had this Elo rating on 20.9.2016, and that it is sourced by (Q23058744), which was retrieved (P813) on 20.9.2016.

Thank you for your advice. --Wesalius (talk) 19:48, 20 September 2016 (UTC)

Try adding a + before the Élő rating too (+2652). – Máté (talk) 20:11, 20 September 2016 (UTC)
Nice, it is almost solved. One remaining problem: the retrieved (P813) is now qualifying the Elo rating (P1087) instead of being attached to the stated in (P248) reference. How to solve this? --Wesalius (talk) 20:18, 20 September 2016 (UTC)
S813 instead? Matěj Suchánek (talk) 20:40, 20 September 2016 (UTC)
Nope, Q4115189 P1087 +2652 P585 +2016-09-20T00:00:00Z/11 S248 Q23058744 S813 +2016-09-20T00:00:00Z/11 did not change a thing. --Wesalius (talk) 20:51, 20 September 2016 (UTC)
I believe this is QuickStatement issue #31 in Magnus' bug list. LaddΩ chat ;) 01:52, 21 September 2016 (UTC)
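(For reference, the distinction Matěj points at is just the column prefix in the QuickStatements v1 tab-separated format: `P…` columns after the value are qualifiers, `S…` columns belong to the reference. A hedged sketch, with a made-up helper name, that assembles Wesalius's intended line:)

```python
# Build a QuickStatements v1 line: item, property, value, then qualifier
# pairs (Pnnn value) and reference pairs (Snnn value), all tab-separated.
# `qs_line` is a made-up helper for illustration, not part of any tool.
def qs_line(item, prop, value, qualifiers=(), sources=()):
    cols = [item, prop, value]
    for p, v in qualifiers:
        cols += [p, v]          # qualifier column pair, e.g. ("P585", date)
    for s, v in sources:
        cols += [s, v]          # source column pair, e.g. ("S248", "Q23058744")
    return "\t".join(cols)

date = "+2016-09-20T00:00:00Z/11"
line = qs_line("Q4115189", "P1087", "+2652",
               qualifiers=[("P585", date)],
               sources=[("S248", "Q23058744"), ("S813", date)])
print(line)
```

As Laddo's link suggests, at the time even a correctly formed line with `S813` hit QuickStatements issue #31, so the tool itself mis-placed the retrieval date.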

api.php reply empty via curl/Groovy, non-empty via wget/Firefox[edit]

I wrote a Groovy script that at some point runs this request:

PROBLEM: It returns an empty reply.

The same query via wget or Firefox works fine. But it is also empty via curl. The Groovy script:

def url = ''.toURL()
println url.getText('utf-8')

What is going on? How to make the Groovy script work? Thanks! Syced (talk) 06:17, 21 September 2016 (UTC)

  • Not sure exactly what is causing your trouble, but I think the issue is there's a redirect involved - 'curl -L' allows this to work for curl, so presumably something similar is needed for Groovy. ArthurPSmith (talk) 15:30, 21 September 2016 (UTC)
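(ArthurPSmith's diagnosis can be reproduced locally: by default curl does not follow HTTP redirects, while wget, Firefox, and for instance Python's urllib do. A self-contained sketch with a throwaway local server standing in for api.php; all names and the redirect itself are made up for the demo. One plausible Groovy-side cause: `URL.getText()` goes through Java's `HttpURLConnection`, which does not follow redirects that switch protocol, e.g. http to https, so requesting the https URL directly is a common fix.)

```python
# Demo: a client must follow the redirect to get a non-empty body,
# which is what `curl -L` enables and what urllib does automatically.
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old":
            self.send_response(301)           # redirect with an empty body,
            self.send_header("Location", "/new")  # like the real endpoint
            self.end_headers()
        else:
            body = b'{"success": 1}'
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):             # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 301 automatically, like `curl -L`, wget, or a browser.
with urllib.request.urlopen("http://127.0.0.1:%d/old" % port) as resp:
    text = resp.read().decode("utf-8")

server.shutdown()
print(text)
```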

Time out despite query restricted to tiny area[edit]

I want to create a GPX file containing all places missing a Wikidata image. A GPX file can be used by smartphone map apps like OsmAnd without using the Internet, so unlike WikiShootMe this will cost zero roaming money.

The naive query obviously times out, so I thought I would divide the world into small portions and query them one after the other:

SELECT ?item WHERE {
  ?item p:P625 ?statement .
  ?statement psv:P625 ?coordinate_node .
  ?coordinate_node wikibase:geoLatitude ?lat .
  ?coordinate_node wikibase:geoLongitude ?long .
  FILTER (ABS(?lat - 48.8738) < 0.001)
  FILTER (ABS(?long - 2.2950) < 0.001)
  MINUS {?item wdt:P18 ?image}
}

SPARQL query

PROBLEM: The above query times out, even though the area is only about 100 meters * 100 meters.

Why does it fail? Any other strategy to get all of the data I want? Syced (talk) 07:52, 21 September 2016 (UTC)

Cool, that seems to work:
SELECT ?item
  (SAMPLE(COALESCE(?en_label, ?item_label)) AS ?label)
  (SAMPLE(?location) AS ?loc)
WHERE {
  SERVICE wikibase:box {
    ?item wdt:P625 ?location .
    bd:serviceParam wikibase:cornerSouthWest "Point(3 -90)"^^geo:wktLiteral .
    bd:serviceParam wikibase:cornerNorthEast "Point(3.1 90)"^^geo:wktLiteral .
  }
  MINUS {?item wdt:P18 ?image}
  MINUS {?item wdt:P373 ?commonsCat}
  OPTIONAL {?item rdfs:label ?en_label . FILTER(LANG(?en_label) = "en")}
  OPTIONAL {?item rdfs:label ?item_label}
}
GROUP BY ?item
SPARQL query
Thanks a lot! Syced (talk) 08:46, 21 September 2016 (UTC)

Backlink on Property talk subpages?[edit]

Shall we do that? Otherwise one relies on some template pointing back to the talk page.

As there can't be any subpages in property namespace (e.g. Property:P214/whatever), the only use of pages such as Property talk:P214/Archive 1 is in relation to their parent (Property talk:P214).
--- Jura 12:02, 21 September 2016 (UTC)

Sounds sane. I Support this. Matěj Suchánek (talk) 15:14, 21 September 2016 (UTC)
Support, useful. Sjoerd de Bruin (talk) 15:24, 21 September 2016 (UTC)
You can add my Support. --Edgars2007 (talk) 17:19, 21 September 2016 (UTC)
  • Support. Useful and I can't see any problems with it. Thryduulf (talk) 21:12, 21 September 2016 (UTC)
  • Support. --Yair rand (talk) 23:59, 22 September 2016 (UTC)

Merge Q328829 and Q25089084[edit]

How to merge Khadija Mosque (Q328829) and Khadija Mosque (Q25089084)? No idea how to do it. --Ahmadi (talk) 12:38, 21 September 2016 (UTC)

@Ahmadi: See Help:Merge --ديفيد عادل وهبة خليل 2 (talk) 12:45, 21 September 2016 (UTC)
Yes! I did it!! - I had read the Help:Merge but I missed that I need to activate the Merge tool in my preferences... The rest was easy. --Ahmadi (talk) 16:47, 21 September 2016 (UTC)


Where does Wikidata pull its language list from for the entries that require a language (such as official name or motto text)? I ask because I need to enter a "motto text" in Old French (ISO code "fro"), but it doesn't recognize it. Thanks, Amqui (talk) 14:44, 21 September 2016 (UTC)

See Help:Monolingual text languages.
--- Jura 15:03, 21 September 2016 (UTC)
Thanks, Amqui (talk) 18:19, 21 September 2016 (UTC)

Do we have a property to describe a nerve going through fascia?[edit]

The Anterior interosseous nerve (Q4771357) goes through the Interosseous membrane of forearm (Q1692993). Do we have an existing property that can be used to describe this spatial relationship? ChristianKl (talk) 21:39, 21 September 2016 (UTC)

We have the newly created innervated by (P3189) and innervates (P3190). Is that what you are after? Thryduulf (talk) 21:42, 21 September 2016 (UTC)
I'm not 100% sure, but I was thinking that the nerve just passes through the fasciae, and in general it's muscles that get innervated by (P3189) and innervates (P3190). I don't think there are any neurons in the fasciae. I was thinking of the nerve passing through the fasciae the way a door allows you to pass through a wall. ChristianKl (talk) 22:02, 21 September 2016 (UTC)
Ah ok, I'm way out of my depth on subject knowledge so I can't help further! Thryduulf (talk) 22:11, 21 September 2016 (UTC)
[Nerve] innervates [Muscle] via [Fascia]? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:48, 22 September 2016 (UTC)

How to enter with data / files[edit]

Hi, I've created the account but don't know how I can file my data. Could you orient me? Or is there a number I could call? Thanks.  – The preceding unsigned comment was added by MetaSocialTPH (talk • contribs) at 15:47, 22 September 2016‎ (UTC).

What kind of data do you want to add? Wikidata:Data donation might be of help. ChristianKl (talk) 18:18, 22 September 2016 (UTC)

described at url vs. reference URL[edit]

How do I choose between reference URL and described at URL? --Richard Arthur Norton (1958- ) (talk) 19:56, 22 September 2016 (UTC)

The former is for plain links in references (add retrieval date as well), the latter is meant to be used in standalone statements if no authority control property is available, and it is not wise to propose for one (no retrieval date necessary). —MisterSynergy (talk) 20:35, 22 September 2016 (UTC)

SPARQL examples migrated[edit]

I've finished migrating SPARQL examples to Wikidata:SPARQL query service/queries/examples - this is now the official SPARQL examples page.

I've also created a redirect: for convenience. The old page at is now a soft redirect to the new one.

Please notify me if you see any issues.

--Smalyshev (WMF) (talk) 20:43, 22 September 2016 (UTC)

Woo! This is great! ·addshore· talk to me! 10:47, 23 September 2016 (UTC)
The page is very long (currently 140,307 bytes) and needs to be subdivided. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:30, 25 September 2016 (UTC)

Q7530126 (autumnal equinox) + Q12014191 (September equinox)[edit]

These two seem to be the same, at least according to descriptions in EN, LT, EO and ET language wikis. Interwiki links do not conflict, only Freebase IDs are different. Powermelon (talk) 06:37, 23 September 2016 (UTC)

✓ Merged. Lymantria (talk) 06:58, 23 September 2016 (UTC)
The problem is the autumnal equinox occurs in September for those living north of the equator in a temperate climate, and March for those living south of the equator in a temperate climate. And many people don't experience the seasons spring or autumn, so the terms "vernal equinox" and "autumnal equinox" are rather meaningless. But the version of astronomy that is widely practiced throughout the world developed in Europe, where the autumnal equinox is the September equinox, so frequently in astronomy "autumnal equinox" and "September equinox" are synonymous. Jc3s5h (talk) 15:11, 23 September 2016 (UTC)

Translating many labels to QIDs efficiently and safely[edit]

Using QuickStatements I find myself often translating large lists like:


... to their QID equivalents, using one-by-one. Is there a tool that could give me the CSV result below automatically from the list above?

Abuja   Q3787 Capital of Nigeria
Accra   Q3761 Capital city of Ghana
Algiers Q3561 Capital of Algeria

The description on the right is important so that I can check whether the item is really the one I want. Thanks! Syced (talk) 07:29, 23 September 2016 (UTC)

By the way, I am surprised that the request below produces zero results (despite the existence of Berlin (Q64)):
SELECT ?item WHERE {
    ?item rdfs:label "Berlin"
}
SPARQL query
Syced (talk) 07:41, 23 September 2016 (UTC)
SPARQL probably won't be your friend this time. See also Wikidata talk:SPARQL query service/queries#Items with a given label in any language. --Edgars2007 (talk) 07:46, 23 September 2016 (UTC)
It's "Berlin"@de
--- Jura 07:48, 23 September 2016 (UTC)
BTW, you can use search via API. If you create some script somewhere, you may get what you want. --Edgars2007 (talk) 07:53, 23 September 2016 (UTC)
The Linked Items tool does something close. Input is Wiki markup, so you'd need to enter [[Abuja]], [[Accara]], etc, one item per line, and watch for disambiguated articles. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:05, 23 September 2016 (UTC)
Unfortunately it does not give a description, so I can't check whether the correct items have been retrieved (short of opening each item in a web browser, which is very inconvenient). Syced (talk) 02:53, 26 September 2016 (UTC)
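(For what it's worth, the search API mentioned above does return descriptions along with labels, which is exactly what's needed to eyeball each match. A hedged sketch that only assembles the request URLs; `wbsearchentities` and its parameters are the real API module, while the helper name and the fetching/CSV steps left out are up to you:)

```python
# Build wbsearchentities request URLs for a list of labels. The API
# response includes each candidate item's ID, label, and description.
from urllib.parse import urlencode

API = "https://www.wikidata.org/w/api.php"

def search_url(label, language="en"):
    # `search_url` is a made-up helper name; the parameters are real.
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "type": "item",
        "format": "json",
    }
    return API + "?" + urlencode(params)

for city in ["Abuja", "Accra", "Algiers"]:
    print(search_url(city))
```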
@Syced: your query for Berlin has some problems; it's too wide: you don't specify the language or the type of items you're looking for, and you didn't ask for the label or description. Something better (with lang and capital of (P1376)) would be:
SELECT ?item ?itemLabel ?itemDescription WHERE {
    ?item wdt:P1376 [] ; rdfs:label "Berlin"@de .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "de". }
}
SPARQL query
For the initial question, you could build a query like:
SELECT ?item ?itemLabel ?itemDescription WHERE {
    ?item wdt:P1376 ?country .
    ?country wdt:P31 wd:Q6256 .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
SPARQL query
Caveat: I'm good but not very good at queries; there is probably a more efficient way to do it. Cdlt, VIGNERON (talk) 08:56, 26 September 2016 (UTC)

Labels in disambiguation pages[edit]

I try to add "Wedi" as the German label for de:Wedi in Q14856954. However, I get the error message Could not save due to an error. Item Q7948799 already has label "WEDI" associated with language code de, using the same description text. How do I go about the difference between "Wedi" and "WEDI" here? --Gereon K. (talk) 07:44, 23 September 2016 (UTC)

I have heard that people add some symbols to distinguish labels, like " (2)". Or delete the de label from WEDI (Q7948799). But there are probably much better ways to do this. --Edgars2007 (talk) 07:49, 23 September 2016 (UTC)
I'd say those two items can be merged. --YMS (talk) 08:02, 23 September 2016 (UTC)
I tried to merge the two items but did not manage. --Gereon K. (talk) 08:42, 23 September 2016 (UTC)
I merged them. ChristianKl (talk) 09:14, 23 September 2016 (UTC)
If the label and description of an item are the same, then either (1) the items refer to the same concept in the world and thus should be merged or similar, or (2) they refer to two different concepts in the world and thus the descriptions should be refined to reflect that. The label and description should always identify the concept the item is about. If they are the same, how would you know which one to choose from a list? --Denny (talk) 16:01, 23 September 2016 (UTC)
I think that "WEDI" is different from "Wedi", but for the Wikidata software it is the same thing. --ValterVB (talk) 17:12, 23 September 2016 (UTC)
Are there Wikipedia projects that have separate disambiguation pages based on capitalization? ChristianKl (talk) 23:11, 23 September 2016 (UTC)
Yes, there are quite a lot of them. Just type some two- or three-letter string into the search engine and see. Matěj Suchánek (talk) 07:45, 24 September 2016 (UTC)

Property to mark signatories of the Giving Pledge[edit]

Do we have a property to mark signatories of The Giving Pledge (Q203807)? ChristianKl (talk) 10:39, 23 September 2016 (UTC)

@ChristianKl: Use signatory (P1891). You can search for properties by entering "P:STRING" in the search box, replacing STRING with the search term. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:53, 23 September 2016 (UTC)
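For completeness, the signatories could then be listed with a query in the style of the one earlier on this page (a sketch; it assumes the signatory (P1891) statements are placed on The Giving Pledge (Q203807) itself, with the signers as values):

```sparql
SELECT ?signatory ?signatoryLabel WHERE {
  wd:Q203807 wdt:P1891 ?signatory .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
```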

Linking Wikipedia articles directly to Wikidata items[edit]

There are many items that have entries on Wikidata but no articles on Wikipedia. It seems that there's currently no way to make this link. Is there a reason for the status quo? I'm in the process of going through a list of billionaires, and the spouses of the billionaires are certainly important enough to have Wikidata items even when they don't have Wikipedia articles. The same goes for parents of notable figures.

A reference to the Wikidata item wouldn't have to provide a direct link. It could also offer a tooltip in the Reasonator way. ChristianKl (talk) 12:09, 23 September 2016 (UTC)

mw:Extension:ArticlePlaceholder? Sjoerd de Bruin (talk) 12:26, 23 September 2016 (UTC)
At enwiki there is also w:en:template:Red Wikidata link. --Jklamo (talk) 14:55, 23 September 2016 (UTC)
Great, I was looking for something like w:en:template:Red Wikidata link. ArticlePlaceholder seems to be only for items that are notable by the standards of the local wikis. Items in a Wiki Loves Monuments list, for example, aren't, but it would still be good to have them linked.
I think it would be nice if that template showed the same thing that's shown when a user hovers over an item in Reasonator. I'm also not sure whether Wikipedians would get annoyed with the space taken up by writing out "Wikidata" and "Reasonator" if this template were used more widely. ChristianKl (talk) 15:34, 23 September 2016 (UTC)
Would it be possible to copy w:en:template:Red Wikidata link to the German Wikipedia? ChristianKl (talk) 12:28, 24 September 2016 (UTC)
Like the rest of Wikipedia, it's under an open licence. You should ask an admin on de.Wikipedia to import it for you, to ensure that the attribution in the page history is preserved. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:38, 24 September 2016 (UTC)
I'm not familiar with the practical process of asking an admin (or knowing which admin to ask) in this case. ChristianKl (talk) 15:01, 24 September 2016 (UTC)
@ChristianKl: Simply ask on de:Wikipedia:Administratoren/Notizen. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:25, 25 September 2016 (UTC)

Open topic : articles whose main topic slides over time[edit]

w:en:The Hidden Wiki is an article about a family of wikis. It was an article about a specific wiki in the past.

What does that mean for Wikidata? Per the "1 topic = 1 item" principle, it seems to me that there are two items involved, one for the original wiki and one for the concept behind it, which has been instantiated several times since. But that would imply that the article migrates to another item when its topic changes.

This does not seem to be a problem on the Wikidata side: technically it is absolutely not a problem, and it is actually the obvious solution. How does the community feel about this, in and out of Wikidata? I do not think we actually document this anywhere, and it has not been discussed. Do you think this will be well accepted by Wikipedians? Some of them unfortunately still have a problem with the "one topic = one item" equation, so they might want to merge the items... author  TomT0m / talk page 12:00, 24 September 2016 (UTC)

Well, from the Wikisource point of view, this is what we have tried to implement. But I am afraid we have failed more or less completely. Wikidata maybe should never have been marketed as the solution to interwiki. Phase 1 and Phase 2 probably came in the wrong order! -- Innocent bystander (talk) 12:30, 24 September 2016 (UTC)


Stakihnúkur (Q27000000): another milestone. Matěj Suchánek (talk) 18:07, 24 September 2016 (UTC)

It's just another number, not a milestone (Q2143762). :( --Succu (talk) 22:11, 24 September 2016 (UTC)
Milestones are defined by arbitrary, usually round numbers :) --Denny (talk) 01:28, 25 September 2016 (UTC)

Oh, the data about its height is rather... diverse. Also, is it a hill or a mountain? That will be interesting to watch. --Denny (talk) 01:36, 25 September 2016 (UTC)

  • I'm not sure whether the number of items created is a good measurement. If I create a lot of duplicate items that then have to be merged, the number of created items rises, but that doesn't mean there's progress in Wikidata. Counting the actual number of items in existence would make more sense to me. ChristianKl (talk) 19:05, 25 September 2016 (UTC)

Bot generated data[edit]

sv:User:Lsj runs a bot that generates geographical articles (e.g. villages, rivers) in the Swedish and Cebuano wikis using freely available data from NASA and other sources. The bot extracts data about a location, then formats it into text and generates stub articles. Example for en:Abuko, with data items bolded:

Abuko has a savanna climate. The average temperature is {{convert|24|C}}. The hottest month is April, with {{convert|27|C}} and the coldest month is July, with {{convert|22|C}}.<ref name = "nasa">{{Cite web |url=|title= NASA Earth Observations Data Set Index|access-date = 30 January 2016 |publisher= NASA}}</ref> Average annual rainfall is {{convert|1148|mm}}. The wettest month is August, with {{convert|449|mm}} of rain, and the driest month is February, with {{convert|1|mm}} of rain.<ref name = "nasarain">{{Cite web |url=|title= NASA Earth Observations: Rainfall (1 month - TRMM)|access-date = 30 January 2016 |publisher= NASA/Tropical Rainfall Monitoring Mission}}</ref>
  • Would there be any problem with the bot storing the data in Wikidata?
  • Would there be any problem with articles embedding the wikidata items for a location into standardized text at display time?

Other types of data that could be stored by bots for settlements include census data and election results. Standard templates could then pull the data into chunks of Wikipedia text for articles in all languages, picking up the latest values at display time. Crazy? Aymatth2 (talk) 23:41, 24 September 2016 (UTC)

I think such data *should* be stored in Wikidata, and not as generated text in the Wikipedias, together with the provenance (i.e. whether it is from NASA or other sources). Whether the Wikipedias accept this kind of text, and whether they accept queries in running text, is up to the individual Wikipedias (I am rather skeptical regarding that approach), but that's really up to the local Wikipedia communities.
Even without generating the text via queries from Wikidata, there are many ways that the projects could benefit from having the data stored in Wikidata, e.g. for checking the Wikipedia text whether it corresponds with Wikidata, etc. --Denny (talk) 01:32, 25 September 2016 (UTC)
+1. We don't need GeoNames stubs, in any language.--Sauri-Arabier (talk) 10:15, 25 September 2016 (UTC)
The data in question comes from GeoNames, which is CC-BY licensed. A Wikipedia can import data with fewer copyright (sui generis database) concerns than Wikidata can. There's also a question of data quality: some people are concerned about importing a lot of wrong data into Wikidata. ChristianKl (talk) 08:15, 25 September 2016 (UTC)
An alternative to importing only very high-quality data is to feed Wikidata data into quality heuristics or metrics, as described in the recent RfC: Data quality framework for Wikidata. The positive side of importing data into Wikidata is that it is a starting point for improving the data through collaborative work, as opposed to someone trying to clean a dataset alone. It also allows spotting inconsistencies between datasets, because Wikidata can store several inconsistent datasets, hence providing a heuristic for where the data should be improved. author  TomT0m / talk page 08:37, 25 September 2016 (UTC)
Another piece of the puzzle is mw:Extension:ArticlePlaceholder, which can generate text from Wikidata data and produce stub articles on the fly. This could basically make the bot useless. author  TomT0m / talk page 08:31, 25 September 2016 (UTC)

Can a bot reliably tell if there is already an item about the village, river, etc.? This can be difficult due to spelling variations, alternate names, and different levels of government. For example, near me, there are Rutland County (Q513878), Rutland (Q25893), and Rutland (Q1008836). Jc3s5h (talk) 12:37, 25 September 2016 (UTC)

  • The Swedish bot is steadily creating articles in the sv and ceb Wikipedias for all locations in the world, mostly using GeoNames / NASA data. I assume these all get Wikidata entries. There may be errors, e.g. not realizing that Paraty and Parati are the same place, but that can be sorted out. The Swedish bot's data is in the public domain: nobody can copyright mere facts on average rainfall or temperatures. Can we backtrack from existing Wikidata entries to the corresponding NASA data, then update attributes like "average July rainfall" from the NASA data, giving the source? That would give the Wikipedias a higher level of confidence about importing the data into their articles, and possibly about generating articles to match the Wikidata entries. Aymatth2 (talk) 15:09, 25 September 2016 (UTC)
Having duplicate items isn't necessarily an error. It's not ideal, but if someone notices, they can merge them. On the other hand, GeoNames often contains wrong coordinates for an item. If the temperature data are then pulled based on the incorrect coordinates, the whole item would have real errors in its data. ChristianKl (talk) 19:08, 25 September 2016 (UTC)
  • @ChristianKl: Do we know how often the GeoNames coordinates are wrong, and how far they are wrong? The Swedish bot seems to be causing entries to be made in Wikidata for a great many places. I assume this includes coordinates. If they are within a kilometer or two, the temperature and rainfall data will be close enough - they are rough values anyway. If only 0.001% of the coordinates are completely wrong, we can live with that. Perfection is the enemy of excellence. But if 10% of the coordinates are completely wrong we have a very serious problem. Aymatth2 (talk) 21:59, 25 September 2016 (UTC)
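The a)–d) buckets above can be made concrete with a great-circle distance check. This is a hedged illustration only: the function and the thresholds are mine, not part of any existing bot, and a real audit would compare GeoNames coordinates against a trusted source.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def discrepancy_bucket(km):
    """Bucket a coordinate discrepancy the way the thread suggests."""
    if km < 1:
        return "within 1 km"
    if km < 10:
        return "within 10 km"
    return "off by more than 10 km"
```

For instance, one degree of longitude at the equator is roughly 111 km, which would land in the "off by more than 10 km" bucket.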
  • @Aymatth2, ChristianKl: A lot of the data in GeoNames is just garbage, especially for Central America. I have no idea where GeoNames gets their data from, but it definitely isn't reliable. From spot checks of areas I know well, I would estimate that about 5% of their data for Central America is totally bogus. Kaldari (talk) 23:06, 26 September 2016 (UTC)
  • It looks like GeoNames gets their data from 74 other databases, which explains why some of the data is high quality and some of it is garbage. Kaldari (talk) 23:27, 26 September 2016 (UTC)
  • As far as the temperature data goes, there's currently a proposal to add a property for it; as yet there isn't one. ChristianKl (talk) 18:47, 25 September 2016 (UTC)
  • Oppose - A large amount of the data from GeoNames is poor quality (especially outside of Europe and North America). GeoNames is the largest geography database on the internet, not the most accurate. They aggregate data from 74 other databases, some of which are high quality and some of which have no quality control whatsoever. Our species data is already polluted by Lsjbot. I would hate to see the same thing happen with our geographical data. Kaldari (talk) 23:46, 26 September 2016 (UTC)
  • @Kaldari: I get the impression that as Lsjbot churns out geo-articles in the sv and ceb wikipedias, the coordinates from GeoNames get loaded into Wikidata. It would help to have some hard numbers on what percentages of these coordinates in Wikidata are a) accurate b) within 1km c) within 10km d) off by more than 10km. Is there a way to check a random sample of the coordinates against what we would consider reliable sources? Perhaps it could be done on a country-by-country basis. The bot data on climate etc. derived from coords+NASA could then be accepted for countries where coordinates are fairly accurate, rejected for others.
If there are countries where other sources give more accurate coordinates than GeoNames, is there a way to override the GeoNames Wikidata coordinates with data from those sources? Which are those countries? Aymatth2 (talk) 03:04, 27 September 2016 (UTC)
The problem is that the data quality from GeoNames is essentially random, as it depends mostly on which original database the data came from. Evaluating the quality of such an aggregated meta-database is practically impossible. It's like asking "What is the quality of data in Wikidata?". What Swedish Wikipedia should be doing is evaluating the quality of each of the 74 sources that GeoNames uses, figuring out which ones have high-quality data and importing only that data directly from the original sources. Kaldari (talk) 08:24, 27 September 2016 (UTC)

What is the percentage of errors in Wikidata, Wikipedia and Geonames? The data IS in Wikipedia and consequently it should be in Wikidata. The best thing we can do is work on this data and improve where necessary. Dodging the bullet by making it appear to be outside of what we do is plain silly. It is what we do among other things. Thanks, GerardM (talk) 10:25, 27 September 2016 (UTC)

I disagree with GerardM (talkcontribslogs)'s statement "The data IS in Wikipedia and consequently it should be in Wikidata". The whole idea of importing data from Wikipedia is dicey, since the quality of Wikipedia data is not as good as some other sources. Certainly if I came across some demonstrably wrong data in Wikipedia, and couldn't find a correct replacement, I should delete the data from both Wikipedia and Wikidata. Jc3s5h (talk) 12:25, 27 September 2016 (UTC)
  • Have we talked to the GeoNames people? I assume they have tried to use the most accurate data sources they can access, but in some cases have had to make do with imperfect sources. Spot-checks can give a good measure of the quality of data in GeoNames or, for that matter, in Wikipedia. If we find that GeoNames coordinates for British locations are 99.99% accurate in GeoNames, and 98.4% accurate in Wikipedia, we should replace all the British coordinates in Wikipedia and Wikidata with the GeoNames coordinates. It is possible that one of the 0.01% of inaccurate GeoNames coordinates will replace an accurate Wikipedia coordinate, but the trade-off seems reasonable. We can then use a modified version of the Swedish bot to match the coordinates to the NASA data to get the altitude, temperature and rainfall data for those British locations and store it in Wikidata for use by Wikipedia. Why not? Aymatth2 (talk) 12:48, 27 September 2016 (UTC)
I don't know about all the Wikipedias, but at the English Wikipedia, if a bot repeatedly replaces information that has been individually researched by a human editor, and for which reliable sources have been provided, with incorrect values, that bot will find itself indefinitely blocked. The current compromise on using Wikidata information at the English Wikipedia (other than linking to equivalent articles in other languages) may be found at w:Wikipedia:Requests for comment/Wikidata Phase 2. Jc3s5h (talk) 13:41, 27 September 2016 (UTC)
  • @Jc3s5h: An approach that may work is to have a bot take the coordinates given in a Wikipedia infobox (which may come from Wikidata), and use those coordinates to fetch the temperature and rainfall data from NASA and format them as text in the appropriate language. The chunk of text would be held in a separate Wikipedia file, transcluded into the article like a template, and the text would make it clear that it is NASA data for those coordinates as of the retrieval date. The bot could be rerun occasionally, or on demand, to refresh the data. It would be nice to store the data in Wikidata so all the Wikipedias could use it, but I get the impression that getting the Wikipedias and Wikidata to agree is tough. Aymatth2 (talk) 16:26, 27 September 2016 (UTC)
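The formatting step of that approach can be sketched as follows: render one standardized sentence from structured values, in the style of the Abuko example above. This is a hypothetical sketch — the function name is mine, and the values would in practice come from the NASA lookup rather than being passed in by hand.

```python
def climate_sentence(place, avg_temp_c, hottest_month, hottest_c,
                     coldest_month, coldest_c):
    """Render one standardized climate sentence from structured values."""
    return ("%s has an average temperature of %d °C. The hottest month is "
            "%s, at %d °C, and the coldest is %s, at %d °C."
            % (place, avg_temp_c, hottest_month, hottest_c,
               coldest_month, coldest_c))
```

A template or bot would emit this text per language, with the reference and retrieval date appended as the thread describes.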

One large problem with GeoNames is that they have matched data from different databases, but matched them so poorly that a small village with two families near my home got a population of several hundred. This error was introduced because the village shares its name with a small town 1000 kilometers from here. The population data was correct, but GeoNames did not match the correct place. Another large problem is that GeoNames has many duplicate items. Both French and English databases have been used for Canada, so many Canadian items in GeoNames can be found twice: once with a French name and once with an English name. A lake on the border between the Northern Territory and Western Australia can be found at least twice. Places in Sweden whose names end with the letter Ö are categorised as islands, even if they are not islands. Large parts of the Faroe Islands can be found at the bottom of the Atlantic Ocean. Almost every coordinate is rounded to the nearest minute, locating mountain peaks floating in the air and lakes on dry land. Many items about buildings do not tell very much about the building at all; they only tell us that this kind of building has existed here at some point between the Stone Age and today. -- Innocent bystander (talk) 13:37, 27 September 2016 (UTC)

  • The immediate concern that triggered this discussion is with villages, where we need accurate enough coordinates to derive rainfall and temperature data from NASA. Are the GeoNames coordinates usually "good enough" for this purpose? Duplicate names are probably not a huge issue with villages. In Canada a lake, river or mountain might have variants (e.g. Lake Champlain/Lac Champlain), but a village would have the same name in both languages. Aymatth2 (talk) 16:26, 27 September 2016 (UTC)
Duplicate names are an issue with villages. Village names often aren't unique. ChristianKl (talk) 17:13, 27 September 2016 (UTC)
  • If GeoNames has two entries for one village, St. Jean and Saint John, whatever, and they both have roughly accurate coordinates, good enough for climate data, there is no problem for the purpose being discussed as long as one of them can be matched to the Wikidata entry. The problem is when GeoNames places St. Jean, Quebec somewhere in Alabama. I suspect that wildly inaccurate coordinates are rare. Aymatth2 (talk) 17:28, 27 September 2016 (UTC)
  • I'm not convinced that getting the wrong village in the same county (or similar geographic unit) is good enough. I've hiked in an area where one side of a mountain ridge line is a temperate rain forest, and the other side is an ordinary northern forest. Jc3s5h (talk) 18:07, 27 September 2016 (UTC)
When data is used to create articles in Wikipedias, we are not talking about the English Wikipedia; we are talking about the process whereby new content is created in multiple Wikipedias. When we refuse to acknowledge processes like this and do not include the data, we have no way of improving the data before it is actually used to create articles. What use is it for us to be the data repository for Wikipedia when we refuse to be of service? It is wonderful to disagree, but what does it bring us? NOTHING. We can do better and we should do better. Thanks, GerardM (talk) 20:03, 27 September 2016 (UTC)

NGC vs. SIMBAD - IDs for astronomical objects[edit]

New General Catalogue ID (P3208), which I proposed, was recently created. It seems that in many cases the ID is the same as SIMBAD ID (P3083) (e.g. for Jewel Box (Q725477), the IDs are "NGC 4755"). Is this always the case? If so, how should we deal with the matter? The fact that IDs are prefixed "NGC" suggests that the property name should include that string. @Mike Peel: for expertise. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:24, 25 September 2016 (UTC)

The IDs for the New General Catalogue (Q14534) used by New General Catalogue ID (P3208) are malformed. The ID for Jewel Box (Q725477) (in the context of New General Catalogue (Q14534)) is 4755, not NGC 4755. --Succu (talk) 22:01, 25 September 2016 (UTC)
I'm not sure I completely understand the subject of concern. ID's in SIMBAD are not always the same as in the NGC, for example, HR 349 in SIMBAD is "* g Psc -- Double or multiple star", but there is no NGC ID for this star. --Marshallsumter (talk) 00:31, 26 September 2016 (UTC)
The questions I asked were "Is this always the case? If so, how should we deal with the matter?". You have given an answer to the former which is equivalent to "no", thereby rendering the second of them void. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:37, 26 September 2016 (UTC)
Simbad IDs are a bit complicated... As far as I can see, they basically have multiple IDs for each catalogue that's been imported. So NGC 1952 and M1 are both identifiers for the same object, using the NGC and Messier catalogues respectively.
Quite how that should be handled here, I'm not sure. Perhaps via giving multiple SIMBAD IDs in Wikidata where those IDs point to the same object? Or maybe auto-generating SIMBAD IDs based on things like NGC number?
But either way, you can't assume that a SIMBAD number will always be an NGC number, so it makes sense to keep track of them separately. @Succu: is right in terms of the number (rather than code) that should be given for the NGC identifier. Thanks. Mike Peel (talk) 14:52, 26 September 2016 (UTC)
As I indicated in the proposal for P3208, we currently store the IDs with the NGC prefix, in catalog (P972). Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:04, 26 September 2016 (UTC)
In that case, it makes sense to include the acronym, as you have multiple catalogues you're giving IDs for and it avoids confusion if they have the acronyms included. Here, since it will always be "NGC", why not include that in the property rather than repeating it each time? Thanks. Mike Peel (talk) 15:43, 26 September 2016 (UTC)

Why weren't these questions brought up during the property proposal? And why are properties created when nobody in the community is familiar with the subject? --Pasleim (talk) 13:46, 26 September 2016 (UTC)

You'd have to ask all the people who were aware of them but didn't bring them up. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:33, 26 September 2016 (UTC)
So why did you at least not notify participants of Wikidata:WikiProject Astronomy if you are not familiar with the subject? I am not sure if displaying the proposal in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign outside the door saying "Beware of the Leopard" is enough to bring it to the attention of people familiar with the subject. --Jklamo (talk) 15:09, 26 September 2016 (UTC)
Why did the members of WikiProject Astronomy not notify the community of its existence? I am not sure if displaying the project in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign outside the door saying "Beware of the Leopard" is enough to bring attention of people not familiar with the subject. And why does no-one from the project have the far-from-cryptically named Wikidata:Property proposal/Space on their watchlist? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:14, 26 September 2016 (UTC)
Why insists Mr. Mabbett on his own view about what we (as a community) regard as consensus for property creation? --Succu (talk) 18:30, 26 September 2016 (UTC)
Why insists Mr Succu on attempting to speak for me, and doing so so badly? And what does this have to do with the NGC? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:46, 26 September 2016 (UTC)
Mimicry? Informing the concerned Wikidata:WikiProject about a property proposal has something to do with communication efforts within our project. --Succu (talk) 19:28, 26 September 2016 (UTC)

KrBot changing halfbrother to martial half-brother[edit]

I created Pierre Bernard (Q7192082) and entered the statement that he's a half-brother (Q15312935) of Glen Bernard (Q26996771), and gave a reference for that claim. Afterwards @KrBot: came and turned half-brother (Q15312935) into maternal half-brother (Q19595227), based on the fact that a shared mother is entered (who isn't mentioned in the reference). I think that he actually is a maternal half-brother (Q19595227), but the reference I have given doesn't back that claim. To me it seems like KrBot worsens the data quality (@Alessandro Piscopo:). What do you think? ChristianKl (talk) 20:47, 25 September 2016 (UTC)

I have noticed similar issues when I update data in statements for individual data items, though not for relations, but other properties. The way I handle these is to delete the reference. If I have a reference for my specific change I will add that one, and if I have no reference except inference (e.g. in this case the shared mother could be inferred from the relation tree in reasonator) then the statement just becomes unreferenced. I think this is a case of "Wikipedia-think" in that you can keep such a reference in Wikipedia, because the reference applies to part of the statement. On Wikidata that doesn't work, because the reference always applies to the whole statement. Jane023 (talk) 07:36, 26 September 2016 (UTC)
I think that's also wrong behavior, because it deletes good referenced data. If you want to add something for which you don't have a reference as trustworthy as the original one, it would make more sense to add a new statement with your claim instead of deleting the old one. ChristianKl (talk) 09:23, 26 September 2016 (UTC)
Agree - a bot should never, ever change data with a good (full) reference. If useful, it should be ok to add a new statement, but should also add a qualifier, along the lines of "inferred from" Catherine Bernard (Q26997009). However, in the example given, the only statements that support the inference are completely unreferenced so the bot is creating unreferenced data based on more unreferenced data. This is not a good idea. Robevans123 (talk) 13:37, 26 September 2016 (UTC)
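The rule proposed above could look roughly like this in bot code. This is a hypothetical sketch, not KrBot's actual logic: the claim representation (dicts with a "references" list) and the function name are assumptions made for illustration.

```python
def may_refine(claim, supporting_claim):
    """Guard for a refinement bot: only narrow a referenced claim
    (e.g. half-brother -> maternal half-brother) when the statement it
    infers from carries its own reference. Even then, the thread suggests
    adding the refined claim alongside the original with an
    'inferred from' qualifier rather than overwriting it."""
    if claim.get("references") and not supporting_claim.get("references"):
        # Refinement would rest on unreferenced data: leave the claim alone.
        return False
    return True
```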

How do we mark the birth year if we know a person was age X at year or date Y?[edit]

Is there currently a straightforward way to mark this kind of information? ChristianKl (talk) 13:31, 26 September 2016 (UTC)

Birth date is somevalue qualified by earliest date (P1319) and latest date (P1326)? – Máté (talk) 13:52, 26 September 2016 (UTC)
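The arithmetic behind those qualifiers is simple: if a person was age X at some point in year Y, only two birth years are possible. A small sketch (the helper name is mine):

```python
def birth_year_range(age, year_of_observation):
    """If a person was `age` years old at some point in
    `year_of_observation`, the birth year is one of two values:
    the earliest fits earliest date (P1319), the latest fits
    latest date (P1326)."""
    earliest = year_of_observation - age - 1
    latest = year_of_observation - age
    return earliest, latest
```

For example, someone recorded as 30 years old in 1920 was born in 1889 or 1890.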

Spam, or not (redux)[edit]

I raised the issue of whether:

are spam, or not, in August. The discussion seems to have been archived with no decision reached. A further item:

seems related. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:31, 26 September 2016 (UTC)

I think the goal of our notability policy is to have data that can be verified. That's the sense in which we require serious and reliable sources. To me it seems like all the data in those items is of that nature. I don't see how it would help the goals of Wikidata to delete this entry. ChristianKl (talk) 16:24, 26 September 2016 (UTC)
I significantly shortened the descriptions, which were far too detailed. I suggest that since these items are not connected with the rest of Wikidata, that they could be safely deleted as of now - but I don't have a strong opinion on that. My reasoning for deletion is less "I don't want this stuff in Wikidata" but rather "if this stuff is in Wikidata, I want similar stuff to be in Wikidata too". I.e. if the creator would create a reasonably complete list of the given specific area of interest (however defined), yes, sure, go ahead. If we don't expect that to happen, meh, keep the single items out. Something like this. It isn't polished, and just opinion. --Denny (talk) 16:51, 26 September 2016 (UTC)
To me that policy sounds like administrative work that creates conflict between people, when we could instead simply allow the data to be in Wikidata without any conflict in which people's contributions get deleted. Having a welcoming culture that makes it easy to contribute new data sounds to me like it's more likely to get people to contribute interesting data.
Wikipedia gets a lot of criticism because it's not welcoming of knowledge that white males don't find interesting. When you define notability as "things that we find interesting as white people and that institutions we consider reputable find interesting", you keep out a lot of potential contributors.
As long as data is created in a process that likely leads to accurate data, accepting it is valuable for our internal culture. There's nothing wrong with having a lot of undeveloped stubs. Maybe they become valuable to someone and maybe they don't but they don't hurt.
If we focus more on the actual data in question I think we will have more data about journals in Wikidata as WikiCite moves forward. If we allow a Zotero plugin, then we would get even more data that's linked to journals. ChristianKl (talk) 17:46, 26 September 2016 (UTC)
I don't like this kind of item: the sources prove its existence, not its relevance, and our policy says « The entity must be notable, in the sense that it can be described using serious and publicly available references ». That is not the case here. --ValterVB (talk) 18:01, 26 September 2016 (UTC)
Do you doubt that it is a serious or a publicly available reference? It seems to me hard to argue that it isn't serious. It's also hard to argue that it isn't publicly available. If you think that "serious" can be defined in a way under which it doesn't count, what's your definition? The same goes for the Library of Congress. If institutions like that aren't serious, what is "serious" a code word for? ChristianKl (talk) 19:03, 26 September 2016 (UTC)
No, I doubt that on this page I can find anything notable. --ValterVB (talk) 19:28, 26 September 2016 (UTC)
Basically you reject the definition of notability that the document gives and want a different one. The document defines notability via serious and publicly available sources. I would guess that you want something like "a source respected by my culture must say this item is important". In a project that tries to be culturally inclusive, that policy has no place. ChristianKl (talk) 19:42, 26 September 2016 (UTC)
No, I follow our criteria of notability. That is a serious source which confirms that something called IndraStra exists, but it says nothing about what it is or why it is notable. -- ValterVB (talk) 19:52, 26 September 2016 (UTC)
The requirement in the policy is that the source describes an item, not that the source declares the item important or noteworthy. In this case the source describes IndraStra: it says that it has an ISNI number, and it tells us where IndraStra is located. That's information we can take from ISNI as reputable facts to integrate into Wikidata. There are four Wikidata statements that can be gathered from the ISNI description. No statement made in those items comes from a source that's not trustworthy for the statements being made. (I think that's what "serious" should be about: can we trust the source to tell the truth?) ChristianKl (talk) 20:20, 26 September 2016 (UTC)
« in the sense that it can be described using serious and publicly available references » — what kind of description do we have on the ISNI page? We have an address, but what is it? A shop, a hotel? We have no information about this item on the ISNI page. The white pages probably have more info. --ValterVB (talk) 20:32, 26 September 2016 (UTC)
We have some information from ISNI, but if you want more detail we also have CrunchBase as another serious and reliable source, and that has a field explicitly labeled "description". In general it's useful to have a place that aggregates information from different serious and reliable databases. It is generally useful to have a place where CrunchBase IDs can be linked to ISNI numbers. For various bots that want to operate on large datasets, knowledge that links entities together is valuable. ChristianKl (talk) 15:28, 27 September 2016 (UTC)

Why doesn't QuickStatements enter references?

I entered `Q11090 P927 Q682466 S854 Q27031918 S304 5` into QuickStatements. Unfortunately small intestine (Q11090) doesn't show the reference. What's up? ChristianKl (talk) 19:00, 26 September 2016 (UTC)

page(s) (P304) is of string type, so you need to add quotes. However, I would be surprised if it works as you desire; I would expect that QuickStatements adds two separate references rather than compiling one with P854 and P304. —MisterSynergy (talk) 20:32, 26 September 2016 (UTC)
I reduced it to `Q64386 P927 Q682466 S854 Q27031918` and it still doesn't add a reference. ChristianKl (talk) 10:46, 27 September 2016 (UTC)
replace S854 by S248. --Pasleim (talk) 11:00, 27 September 2016 (UTC)
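Combining both suggestions, the full command would presumably look like this (S248 instead of S854, and the page number quoted because page(s) (P304) is a string property; QuickStatements expects the fields tab-separated):

```
Q11090	P927	Q682466	S248	Q27031918	S304	"5"
```

Whether this produces one combined reference or two separate ones would still need to be checked, as MisterSynergy notes above.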

Wikidata weekly summary #228

d:Q4420546 and link to en:Synecology

Please help to add a link to en:Synecology in the Wikipedia block at d:Q4420546. When I try to save, I get the error below.

Could not save due to an error. The link en:Community_(ecology) is already used by item d:Q5608096. You may remove it from d:Q5608096 if it does not belong there or merge the items if they are about the exact same topic.

Niichavo (talk) 23:35, 26 September 2016 (UTC)

en:Synecology is a redirect to en:Community (ecology). You can't add a redirect to an item without some workaround, but I think it isn't necessary in this case --ValterVB (talk) 06:31, 27 September 2016 (UTC)
ValterVB, try deleting the interwiki link to en:Synecology from the bottom of the code of it:Sinecologia and you will change your opinion. Niichavo (talk) 19:11, 27 September 2016 (UTC)
Deleted, but why must I change my opinion? --ValterVB (talk) 19:34, 27 September 2016 (UTC)

Help for longer caption property and translation for other properties

Hi. Could you check this discussion from the Village pump (and related pages) and help with how to get a longer caption, and whether it is possible to translate the content of some properties if no article has been written yet on the other Wikipedia? Thanks. --Obsuser (talk) 00:20, 27 September 2016 (UTC)

Wikidata sourced with Wikidata. Is this ok?

Is this ok? I mean, it's potentially useful having imported from (P143) with the Wikipedia the value was imported from, whether that source is unreliable or not. But using another Wikidata item to source a statement with stated in (P248)? I think that's crossing a line... If it's useful to store this data (who knows...), wouldn't it be better to use at least a different property? Strakhov (talk) 11:20, 27 September 2016 (UTC)

I thought we had an item to indicate that the statement was added to sync a property (like in this example, father and child). Sjoerd de Bruin (talk) 11:21, 27 September 2016 (UTC)
Found it: corresponding Wikidata item (Q20651139). Sjoerd de Bruin (talk) 18:10, 27 September 2016 (UTC)
P248 is supposed to be used with books, articles and similar things. Using it in this way makes it very hard for Wikipedia templates to interpret what the source really is. -- Innocent bystander (talk) 13:43, 27 September 2016 (UTC)
This kind of use is just a sign of laziness: the correct way is to copy the reference from the first item. Just imagine what happens the day the initial statement is deleted. Snipre (talk) 14:47, 27 September 2016 (UTC)
I think bots adding references to new statements they make without the bot really understanding what the reference says isn't a good idea. It's better to have a statement without a reference than having a statement with a potentially wrong reference. If a bot wants to copy references, the content should go through the primary sources tool. ChristianKl (talk) 15:16, 27 September 2016 (UTC)
@ChristianKl: Your comment is not logical: you trust bots enough to add statements, but not enough to add references? If bots can be coded to extract and understand reverse statements, they can extract the reference from the original statement too. Snipre (talk) 16:40, 27 September 2016 (UTC)
The stated in property has the correct constraint to catch laziness like this. The editor should be trouted. --Izno (talk) 15:19, 27 September 2016 (UTC)

LiveJournal ID property

More eyes are needed on Wikidata:Property proposal/LiveJournal. ru.Wikipedia has over 1000 LiveJournal IDs in templates. Template:Lj (Q13254200) exists in eleven (11) Wikipedias. Should the property be created? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:51, 27 September 2016 (UTC)

Unsourced sexual orientation (P91) statements

sexual orientation (P91) is a property which requires sensitive use, and its talk page particularly states that it should be used “only together with a reference in that the person itself states her/his sexual orientation or for historical persons if most historians agree”. Using this SPARQL query, I can find 4790 violations of this rule (5560 property uses in total, thus 86% violation rate). The issue came to my attention because two days ago a Wikidata user added almost 2000 new violations by a data import from enwiki and eswiki (see this complex violations report diff).

What to do now? It appears unlikely that someone adds the required sources to the statements in question, although parts of the imported data are probably properly sourced at enwiki or eswiki. I would therefore propose to remove all unsourced sexual orientation (P91) statements in the very near future, given the fact that we use this property almost exclusively in case of non-heterosexuality (Q339014) (usage statistics). —MisterSynergy (talk) 15:55, 27 September 2016 (UTC)
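For readers without the query link, a sketch of a query in this spirit (not necessarily the exact one used above) that lists P91 statements carrying no reference at all:

```sparql
# Find people whose sexual orientation (P91) statement has no reference
SELECT ?person ?orientation WHERE {
  ?person p:P91 ?statement .
  ?statement ps:P91 ?orientation .
  FILTER NOT EXISTS { ?statement prov:wasDerivedFrom ?reference . }
}
```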

This was discussed recently. Such values should not be removed, without first making efforts to source them - not least by checking the originating Wikipedia. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:02, 27 September 2016 (UTC)
Could you please provide a link to the recent discussion? This does not seem fair to me. Someone made a mass import, violated a clearly stated rule and now other Wikidata users should do the difficult part of the job, which is a manual verification and addition of sources? The information of the unsourced statements would not be lost after a possible removal, since it is mostly still available in Wikipedias. —MisterSynergy (talk) 16:11, 27 September 2016 (UTC)
+1 It is not the task of other contributors to correct or complete data from previous contributions, especially when the initial import did not respect a constraint. The only way to teach people to respect the rules is to delete their work when it does not comply with them. Snipre (talk) 16:44, 27 September 2016 (UTC)
I think more people said this and just a few had the same opinion as Pigsonthewing. Sjoerd de Bruin (talk) 17:23, 27 September 2016 (UTC)
Wikidata:Project chat/Archive/2016/08#Unsourced and Wikipedia sourced P91 statements. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:19, 27 September 2016 (UTC)
Thanks for the link. The positions in this discussion were pretty much the same as here, with roughly the same number of users on both sides. If we now went with the "manually check before removing" scenario, how could that ever work out? I don't see a chance that anybody solves the problem this way, but I am open to hearing suggestions from anyone… —MisterSynergy (talk) 19:34, 27 September 2016 (UTC)
User:Thryduulf made a very useful suggestion in the linked discussion about ranking statements without sources. In the case under discussion, the edits would fall under group 1, and should be deleted within a short space of time. Not sure what the definition of a short space of time would be (a couple of days?), but if people want to keep unsourced, potentially libellous and defamatory, controversial data, then they should provide a reliable source. Robevans123 (talk) 20:44, 27 September 2016 (UTC)
In this case the problem is that it can be potentially defamatory; maybe this was already said, but I think we must delete all of this data that lacks a source. --ValterVB (talk) 16:46, 27 September 2016 (UTC)

Imprecise date of birth

How should one enter an imprecise date of birth - e.g. 1698/9, as the ODNB has for Mary Mogg? thanks --Tagishsimon (talk) 18:39, 27 September 2016 (UTC)

Just type "1698-09". This adds a date of birth with month precision. Cf. Help:Dates as well. —MisterSynergy (talk) 18:44, 27 September 2016 (UTC)
Thanks. Help:Dates helps. --Tagishsimon (talk) 19:03, 27 September 2016 (UTC)
@Tagishsimon, MisterSynergy: "1698-09" is "September 1698". "1698/9", while ambiguous, is in this case most likely "1698 or 1699". Enter "1698" with precision "decade"; and use qualifiers "earliest date" and "latest date", thus. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:27, 27 September 2016 (UTC)
@Tagishsimon: what does "1698/9" mean? Does it mean "1698 or 1699" or "September 1698"? Jc3s5h (talk) 19:29, 27 September 2016 (UTC)
Oh, I indeed thought it meant "September 1698", but "1698 or 1699" seems more likely. Thanks for pointing that out! —MisterSynergy (talk) 19:31, 27 September 2016 (UTC)
Thanks; yes, I'd got to the same place. And sorry, my question was ambiguous ... could have been month, could have been span of two years. As I said, Help:Dates helped. (It was a span of two years, in this case, and PoTW has sorted it out, for which thanks.) --Tagishsimon (talk) 20:04, 27 September 2016 (UTC)

instance of (P31) vs. subclass of (P279)

Probably this was already discussed, but I am not able to find a reliable guideline for this task. I wanted to add instance of (P31) and subclass of (P279) to racket sports tournaments, but these statements are used in totally different ways (see French Open (Q43605), Australian Open (Q60874) or US Open (Q123577)). What is the correct way of using these statements, and where can I find a guideline for using them on sports events? --Florentyna (talk) 19:09, 27 September 2016 (UTC)

(I added clickable links to your contribution.) The subclass approach, as in the first example, is correct. You can then use the instance statement on specific editions of these tournaments, as in 2016 French Open (Q22690923). —MisterSynergy (talk) 19:16, 27 September 2016 (UTC)
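To illustrate the distinction, with this modelling the specific editions can be retrieved via their instance of (P31) statement; a sketch against the Wikidata Query Service:

```sparql
# Editions of the French Open: items that are instances of Q43605
# (e.g. 2016 French Open, Q22690923)
SELECT ?edition WHERE {
  ?edition wdt:P31 wd:Q43605 .
}
```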


A second question would be whether TOP CATEGORY statements should be added when a SUBCATEGORY statement already exists - see Saina Nehwal (Q464311). The occupation (P106) sportsperson (Q2066131) was added, but the occupation badminton player (Q13141064) was already there. --Florentyna (talk) 19:14, 27 September 2016 (UTC)

badminton player (Q13141064) correctly subclasses sportsperson (Q2066131), thus the more general statement is not necessary and can safely be removed. —MisterSynergy (talk) 19:20, 27 September 2016 (UTC)

A way to directly link to the edit box for Wikidata items

Greetings, is there a way from, say, an enwiki page to directly link to the edit box for the associated Wikidata item? There is a proposal here to use information from Wikidata in infoboxes and one concern raised was how to fix incorrect information. Jo-Jo Eumerus (talk, contributions) 19:15, 27 September 2016 (UTC)

YouTube channel & user names

Which properties need their constraints changed, to prevent edits like this, using website username (P554) as a qualifier for YouTube channel ID (P2397), from triggering a constraint warning? Once done, I'll ask someone (User:Pasleim?) to temporarily move user-name values from website account on (P553) to P2397, where User:Mbch331's ytcleaner tool will then fetch the Channel IDs, and ask Mbch331 to modify the tool to add the qualifier. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:56, 27 September 2016 (UTC)