Wikidata:Contact the development team

Shortcut: WD:DEV
Wikidata development is ongoing. You can leave notes for the development team here, in the #wikidata IRC channel (connect), or on the mailing list, or report bugs on Bugzilla. (See the list of bugs on Bugzilla.)

Regarding the accounts of the Wikidata development team, we have decided on the following rules:

  • Wikidata developers can have clearly marked staff accounts (in the form "Fullname (WMDE)"), and these can receive admin and bureaucrat rights.
  • These staff accounts should be used only for development, testing, spam-fighting, and emergencies.
  • The private accounts of staff members do not get admin and bureaucrat rights by default. If staff members want admin or bureaucrat rights for their private accounts, they should obtain them through the processes established by the community.
  • Every staff member is, of course, free to use their private account just like everyone else. In particular, if they want to work on content in Wikidata, this is the account they should use, not their staff account.


On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at September.


ISO-format date: Precision parameter

An edit changing the precision value to 7 (century) makes the date 1815-08-15 display as "18. century" instead of "19. century". How can this be fixed? Sealle (talk) 06:35, 18 August 2014 (UTC)

There really is a problem with dates: a date for the "20. century" shows as 2000-01-01, while it should be 1900-01-01 (or 19.. as used in most databases, including Commons dates). The very strange result is that a person can be married in 1985 and born on 2000-01-01!! Even if the precision should make it read as "20. century", there is still a problem... :D --Hsarrazin (talk) 13:43, 20 August 2014 (UTC)
  • Just a little amendment: the beginning of the 20th century is 1901-01-01, not 1900-01-01. Sealle (talk) 14:20, 20 August 2014 (UTC)
Sealle, I agree - 1901 is the REAL first year of the century :) - but that does not make it right to use the last year of the century to store the century value… It could be stored as 19.. or 1901, or better, the approximate value could be kept and the precision could have it displayed as a century, without losing the approximate value: sometimes we know the decade… but only have year or century as a precision argument :S
I think it's VIAF that uses the following coding: 1800-1999 to signify "19th-20th century" - i.e. from the beginning of the 19th to the end of the 20th - or 1800-1899 when birth AND death are both within the 19th century, without further precision... Maybe it is not the "real" first year of the century, but at least it's clear, and birth is always < other events, and death > other life events… which is the main point ;)
What I mean is… perhaps we should use the smaller value for "beginning dates" and the larger for "ending dates"… this way, dates could be compared logically… --Hsarrazin (talk) 21:49, 20 August 2014 (UTC)

I wonder, is anybody going to fix this obvious error?! Sealle (talk) 21:16, 22 August 2014 (UTC)

Yes, sorry. I've been super busy. I will look into it over the next few days. --Lydia Pintscher (WMDE) (talk) 09:31, 29 August 2014 (UTC)

MonolingualTextValue

Could somebody explain why labels and descriptions serialize as {language, value}, while monolingualtext snak values serialize as {text, language}? Why do they serialize differently? --JulesWinnfield-hu (talk) 19:26, 21 August 2014 (UTC)

Can we get an explanation? Is this how it will be forever? --JulesWinnfield-hu (talk) 10:20, 27 August 2014 (UTC)

Indeed a strange idea at first sight. But in a way, I can see some sense in it regarding monolingualtext. The actual (data) value is the combination of the text (string) and the language. So having (data)value: {value, language} would be confusing. It would probably make sense to use the same key (text) for labels, descriptions and aliases. But maybe there are plans that, as soon as multilingualtext is implemented, labels, descriptions and aliases will return multilingualtext values? That would probably resolve the situation. Random knowledge donator (talk) 11:17, 27 August 2014 (UTC)
These two things – labels, descriptions and aliases (also referred to as the "fingerprint") on one side, and mono- and multilingual text values on the other – do not have anything in common and do not share any code. There are no plans to replace one with the other. I can see that the two concepts can be confused. But it's really important to look at them independently and create independent implementations for them. The difference in the serialization is partly unintentional and partly intentional. It just does not matter whether they are the same or not, because they do not and should not have anything in common. The different serialization makes this clear. --Lydia Pintscher (WMDE) (talk) 15:38, 27 August 2014 (UTC)
Why would the values for label, description and aliases not be multilingual values? Label, description and aliases are basically implicit properties. Why make it an obstacle that, for example, accessing both a label and some other multilingual value in a particular language requires two completely different implementations? Random knowledge donator (talk) 05:31, 28 August 2014 (UTC)
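For readers following along, the two serialization shapes under discussion can be sketched with plain Python dicts. The key names are the ones quoted in this thread; the `as_pair` helper is hypothetical and not part of any Wikibase API.

```python
# Labels/descriptions serialize as {language, value}; monolingualtext
# snak values serialize as {text, language} (per the thread above).
label = {"language": "en", "value": "Douglas Adams"}
snak_value = {"text": "Douglas Adams", "language": "en"}

def as_pair(obj):
    """Normalize either shape to a (language, text) tuple."""
    return (obj["language"], obj.get("value", obj.get("text")))

assert as_pair(label) == ("en", "Douglas Adams")
assert as_pair(snak_value) == ("en", "Douglas Adams")
```

A consumer that handles both shapes needs a small normalization step like this, which is the practical cost of the differing serializations.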

Wikidata broken by design?

Excuse me for re-activating the following discussion, since @Jeblad: made a quite impressive statement that illuminates the problems in a more technical fashion. I did not check back earlier since I am somewhat disappointed by the fact that this fundamental topic is not regarded as important as it should be. Random knowledge donator (talk) 09:56, 25 August 2014 (UTC)


Trying to get some answer here since the project chat discussion about how to properly capture uncertainty did not result in any valuable input.
I am still not sure how to properly capture the two cases of uncertainty I listed. An answer that Wikidata does not and will not support this would be fair enough - although, as far as I know, the intention of Wikidata was not to be a plain fact database but indeed to allow modeling uncertainty. Listing the original description of the issues - any answer appreciated (please excuse the catchy headline, I am just trying to get some more attention than in the project chat):
The first one was on Abraham von Freising (Q330885): According to the reference, the person may have died either on 7 June 993 or 7 June 994. This could be reflected by using a time range or a data type specific qualifier like "alternative date". But, actually, these are two discrete values, basically a list of dates. Eventually, I added both which, at first, seems reasonable and was done before. However, when querying for people having died in 993, one would receive Abraham von Freising (Q330885) without any hint that this information is not certain. Consequently, when querying for people having died in 993, one would assume that this person, in fact, died in 993 and uncertainty becomes fact.
Another example is Wolfgang Carl Briegel (Q1523127). According to one reference, the person may have been a student of Johann Erasmus Kindermann (Q466635). Qualified by the same time range, I added "unknown value" and Johann Erasmus Kindermann (Q466635) for student of (P1066). However, split into two separate statements, that does not really reflect what the reference expresses and applying both statements, backed by the same reference, seems even odder than backing different values for date of death (P570) with the same reference. Expressing that Johann Erasmus Kindermann (Q466635) may have taught Wolfgang Carl Briegel (Q1523127) using student (P802) on Johann Erasmus Kindermann (Q466635) seems kind of impossible without some weird qualifier expressing "may be false". One could argue to just drop that uncertain information and use "unknown value" exclusively, but, well, that would be a loss of information and I am sure such problems occur in other situations as well (an example of a more prominent topic may be to model something like "Roger Godberd (Q7358238) might have been Robin Hood (Q122634)"). Random knowledge donator (talk) 06:56, 25 June 2014 (UTC)
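The querying problem in the first case above can be made concrete with a toy sketch. The data layout, the `uncertain` flag, and the `died_in` helper are all hypothetical; only the item/property IDs come from the example.

```python
# Two discrete candidate death dates for Q330885 (Abraham von Freising),
# stored as plain statements. Without an explicit flag, a naive query
# turns the uncertainty into apparent fact.
statements = [
    {"item": "Q330885", "property": "P570", "value": 993, "uncertain": True},
    {"item": "Q330885", "property": "P570", "value": 994, "uncertain": True},
]

def died_in(year):
    """Naive query: all statements asserting death in the given year."""
    return [s for s in statements if s["property"] == "P570" and s["value"] == year]

# The naive query returns the item for both 993 and 994, as if both
# were certain; only the (hypothetical) flag lets a consumer tell:
hits = died_in(993)
assert hits and hits[0]["item"] == "Q330885"
assert all(s["uncertain"] for s in hits)
```

This is exactly the scenario described above: without some uncertainty marker that the query layer understands, a consumer asking "who died in 993?" cannot distinguish a certain answer from a candidate one.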

I think those are different cases and each should be treated differently. For instance, for the case of the date of death, I would mark it as "unknown value" with qualifiers earliest date (P1319)/latest date (P1326). For the second case you could propose a qualifier "source certainty" that would indicate how sure the sources are about the provided information.
But you shouldn't expect to get "ultimate answers". Anyone can give suggestions, and if you don't get feedback, that means that you can come up with a proposal of your own.
OTOH, I agree that Wikidata is broken by design; however, that applies not only to Wikidata but to any piece of software or reality-representation :) The trick is to move a little closer every day and not to expect perfect data or knowledge, because by definition it doesn't exist. --Micru (talk) 08:30, 25 June 2014 (UTC)
Thanks for your answer. Using earliest date (P1319) and latest date (P1326) would imply a range though. "Source certainty" is a nice idea. However, one would need to define a constraint of exclusive values (which probably would need to be items to be machine-readable) and what would these values be? Items for "high certainty", "normal certainty", "low certainty"?
Technically, I would like to simply flag values that can be regarded uncertain. When issuing a query, these values could be marked/filtered/whatever easily. As for the first example, it would even be better to allow some kind of alternative values on single statements since the value is basically a list of possible values - but that is probably hard to model from a technical perspective. Flagging statements uncertain could, for example, be simply(?) achieved by extending the "value type" options though: "custom value" (as opposed to "no value" and "unknown value") would be split into something like "certain value" (default) and "uncertain value". In my opinion, the amount of uncertainty ("source certainty") should be left to the reference/content of the reference since capturing that is out of scope for Wikidata as it involves subjective rating. Random knowledge donator (talk) 14:07, 28 June 2014 (UTC)
Seems like my inquiry was not successful once again. Still, I think this is a fundamental problem. I do not demand that the issue has to be solved right now but it needs to be addressed. However, the only outcome of my question is that no one really cares. I refrain from editing data as long as there is no strategy to resolve such a fundamental issue. Random knowledge donator (talk) 08:46, 2 July 2014 (UTC)
Random knowledge donator, how do you expect it to be successful if you don't file a property proposal with whatever property you think could help you model uncertainty? I agree that it would be nice to have a confidence option for sources, but I am not the one setting the priorities, and I also think that for now we can do that with a property or a qualifier, so we can learn about the needs and possible uses.--Micru (talk) 09:31, 2 July 2014 (UTC)
Repeating myself: Personally, I do not think a property is appropriate. I would be fine if someone would explain how a property would solve the issue. Random knowledge donator (talk) 09:35, 2 July 2014 (UTC)
Random knowledge donator, if we create a new property it could be used as a qualifier: [qualifiers] expand on, annotate, or contextualize beyond just a property-value pair. Saying "date of birth:1850" is not the same as "date of birth:1850" with the qualifier "source certainty:low". Statement and qualifier form a whole, and the statement is incorrect if you don't take both into account.--Micru (talk) 10:48, 2 July 2014 (UTC)
I really appreciate your answers and understand your argumentation. However, having a "source certainty" property involves subjectivity by rating the amount of certainty of a source or of the fact stated by the source (which are actually two different things, but that is a different story). And which values would be allowed for "source certainty"? Low, normal, high, very low? Ultimately, I would not support having such subjectivity in Wikidata. How is one supposed to rate the certainty of a reference anyway? That is a very scientific matter. In my opinion, the amount of uncertainty should not be the subject of Wikidata - however, having a qualifier like "is uncertain" pointing to a boolean "true" seems pointless as well. Random knowledge donator (talk) 11:42, 3 July 2014 (UTC)
Random knowledge donator, when there is source uncertainty it happens mainly because of two reasons: either the source is stating their self-assessed level of uncertainty, or the circumstances do not allow to consider properly the information contained in the source (physical support degradation, obsolete methodology, wrong assumptions, etc). You could generalize both cases with a general "sourcing circumstances" qualifier with objective values like: significant self-assessed uncertainty, incomplete source, source ambiguity, etc. To model information that is disputed by other sources we already have statement disputed by (P1310).--Micru (talk) 08:01, 4 July 2014 (UTC)
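The qualifier-based approach proposed here can be sketched as data. The property and item identifiers (`P_sourcing_circumstances`, `Q_source_ambiguity`, `Q_some_biography`) are placeholders, not real Wikidata IDs; the point is only that the uncertainty marker rides along as an ordinary qualifier, leaving the data model unchanged.

```python
# A statement for date of death (P570) carrying a hypothetical
# "sourcing circumstances" qualifier instead of a new snak type.
statement = {
    "property": "P570",
    "value": 993,
    "qualifiers": {"P_sourcing_circumstances": ["Q_source_ambiguity"]},
    "references": [{"stated in": "Q_some_biography"}],
}

def is_uncertain(stmt):
    """A consumer-side check: treat the qualifier's presence as the flag."""
    return "P_sourcing_circumstances" in stmt.get("qualifiers", {})

assert is_uncertain(statement)
```

The trade-off debated in this thread is visible here: the consumer must know the specific qualifier property to filter uncertain values, whereas a dedicated snak type would be discoverable from the data model itself.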
OK, I get the point. Still, I have concerns. Sorry! First off, a generic property is not really usable, since users need to figure out upfront that (a) the property exists, (b) it is the one they are actually looking for, and (c) what values are supposed to be used with the property. The concept is just really hard to understand, resulting in the property not being used at all. And what data type would the values of "sourcing circumstances" have? Are these supposed to be items, free text or something else? Apart from that, when querying, one needs to be aware of all the properties that mark uncertainty ("source uncertainty", "disputed by" and whatever else is there or yet to come) in order to be able to filter those values. And in the end, I still think it involves too much subjectivity and detail. How can I judge that a source is incomplete, outdated or whatever? Yes, there are those really obvious matters like the flat earth theory - however, there are sources with much more subtle issues, and the reason why a source may be regarded as uncertain can be a matter of diverse scientific questions; I would probably not put my head above the parapet and assert a reason why a reference may be regarded as uncertain. Instead, I would refer the reader to the original reference. Even more, in a secondary source, the reason why something is uncertain may not be supplied at all, as with the two dates in the example in the initial post. I am afraid the concept of using one or more properties to mark uncertainty still seems too complex and - please excuse me - naive. However, I think the two of us are not getting towards a solution here... what about the developers anyway? Random knowledge donator (talk) 07:22, 8 July 2014 (UTC)
I would like to see something done with qualifiers first to see how it is being used. We can then decide about what to do next and if it is worth investing more time into and if it is worth complicating the user interface and data model for it. --Lydia Pintscher (WMDE) (talk) 09:21, 8 July 2014 (UTC)
A rank "uncertain" would be nice, but I do not know which property could represent that... suggestions welcome, Random knowledge donator. --Micru (talk) 19:28, 11 July 2014 (UTC)
No offense, but waiting for "something being done with qualifiers" is not really helpful. Personally, I do not see a sane way to get that resolved with qualifiers (see all my statements). I would rejoice if there is... Using an additional rank is problematic since that would interfere with the original concept of ranks (see discussion on the corresponding help page).
Still, I stick to another snak type technically being the most sane solution. If there is another method to flag statements - fine. But regarding qualifiers: qualifiers do not allow flagging, since snaks always consist of a property and a specific value (unless you choose another snak type - you get the point...). You would need to restrict the value to one particular item which is true (Q16751793). Regardless of true (Q16751793) being a strange item, the method would be too technical, too complex and would not prove usable since, even more, you would never use false (Q5432619) for a property "is uncertain" - instead, you would just not assign the qualifier.
I really think that I made my point clear in all the lines above. If you want to see something made up with qualifiers - I cannot offer that since I am not convinced of that being a proper solution; And since nobody else seems to be interested in specifying uncertainty, we can also wipe that topic off the table since "doing something with qualifiers" is unlikely to happen unless you guys take action and figure out a proper solution.
If there is another proper solution (in terms of being logical and usable), yes, I would gladly accept it but, to me, it seems like uncertainty was not really considered in the original concept of the software. However, being labeled a "knowledge base" in contrast to a "fact database", I would suppose modeling uncertainty should be a core concept of Wikidata. Random knowledge donator (talk) 10:17, 16 July 2014 (UTC)
@Random knowledge donator: The other day I was doing some tests with a qualifier "type of statement" to specify uncertainty, universal quantification (∀) and existential quantification (∃). All these options are necessary (perhaps integrated into the software) if some day we want to move away from mere fact collection into the "knowledge base" realm. You can also do some experiments (create properties and items as needed) on the test instance of wikidata. See for instance:
--Micru (talk) 10:58, 16 July 2014 (UTC)
I get the point, but still not convinced, sorry. That seems to capture the logic but for the price of poor usability. "type of statement" is far too generic for users to be aware of using it for specifying uncertainty. Given the huge amount of data that is supposed to be managed in Wikidata, Wikidata cannot rely on experts that have dug into the concepts. I think, being able to represent uncertainty should be as obvious as possible. If not, users will simply not enter data or, probably even worse, enter data as if it was not uncertain... But maybe/probably I am the only one who regards that as important. Random knowledge donator (talk) 09:44, 17 July 2014 (UTC)
@Random knowledge donator: "Uncertainty" is not the only meta-information that statements require. There were users also requesting for a system to protect statements, maybe both features belong together, I don't know. But without testing first what is needed we will not be able to know what to ask for.--Micru (talk) 09:57, 17 July 2014 (UTC)
(Sorry for editing in an archive, but I think this issue is more important.)
I agree with you that Wikidata is broken in this respect; there are a lot of things that aren't correct when it comes to the handling of simple values. I'll add bits and pieces as I read your text, but the general idea goes like this:
  • Our mental model of a value is quite complex, but we must represent it in some simple way
  • Our value can be a range or a bag of values, probably also an ordered set of values
  • Our value can have several uncertainties and error sources attached to it, and two values in a list might not use the same error model
  • Any value should refer to some kind of datum, but values that share the same dimension might be compared (not always, we know from sources how some relates to each other but we don't know their absolute value)
Let me give an example: a box can have 3 lengths: width, depth and height. We could call them its extent. We could then say
width: a
depth: b
height: c
or slightly better
extent: a
extent: b
extent: c
or perhaps even for a list of values
extent: {a b c}
If a, b, and c are 1, 2 and 3, then it might be valid to say
extent: [a c]
Those two last forms are very important if you want to keep the values together in a statement; they are ordered and unordered sets. The first form is typically called a seq in RDF and the last form a bag.
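The forms above can be sketched with plain Python collections standing in for the RDF containers; a tuple plays the seq, a frozenset the bag, and a min/max pair the interval. This is only an illustration of the distinction, not a proposed implementation.

```python
# The three box lengths from the example.
a, b, c = 1, 2, 3

seq_extent = (a, b, c)                       # ordered list ("seq"): width, depth, height
bag_extent = frozenset({a, b, c})            # unordered collection ("bag")
range_extent = (min(a, b, c), max(a, b, c))  # closed interval [a, c]

assert seq_extent[0] == 1                    # order is meaningful in a seq
assert bag_extent == frozenset({3, 1, 2})    # order is irrelevant in a bag
assert range_extent == (1, 3)
```

The choice matters for consumers: a seq preserves which value is the width and which the height, a bag deliberately discards that, and an interval discards the individual values altogether.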

But what if we need to model the values themselves (the a, b, and c) more accurately? If we have a simple value as the main snak, then we can describe that value by using qualifiers; that is, it is a reified statement anyhow. But if we have separate values inside a bag or seq inside a main snak, then we need to create the values as blank nodes themselves, and we put the additional information inside those blank nodes. The qualifiers for the statement, as in the Wikidata UI, will then refer to the whole bag or seq, while we keep the very specific additions inside the blank node. We sort of add another level of qualifiers.

(Note that we can add qualifiers to describe uncertainty for a value in a statement, but when that statement is multivalued this might not be appropriate.)
So in your case with Abraham von Freising (Q330885) you will have
died: { "7 June 993" "7 June 994" }
Simply two dates, no mystery at all, except for two dates where most people would expect one. That can be solved with a reference to some publication that describes the situation. It will, however, be troublesome in some contexts where you want to use a single value.
Messing around with this opens up some quite funny simplifications. What about open-ended intervals for a birth date? The nice thing with this is that we can say something about a value without spiraling down into "we need yet another special property".
There is also a situation where a statement holds a reference to a vector. I played around with various representations of that, and it seems solvable. Some very important cases are where there exists some kind of statistical analysis, like a w:en:Five-number summary. Those represent something that would probably have been squashed into a single value in the present model.
There are several core concepts that should be implemented, and it should be possible to either write some parseable strings or it should be possible to build them some other way. Personally I like parseable strings, take a look at w:en:Well-known text for example.
I think some of the problems regarding the modeling of this are a lack of expertise in statistics, probability and real-life vetting and analysis of data. It was simply expected that this was a simple problem with simple solutions, but it is not; it is quite complex. Jeblad (talk) 21:01, 17 August 2014 (UTC)
Thanks a lot Jeblad for the extensive write-up. I agree with your concept of modeling. Featuring collections of values/multivalued statements would be the most sane way of representing uncertainty when it comes to representing possible alternatives, like the dates in the example. It would be a step in the right direction, although there would still be the need for a solution regarding uncertain values without alternatives (assumptions), like the other example of "someone might have been someone else". However, with multivalued statements, the impact on the data model would probably be quite huge, and judging by the attention the topic receives, I am convinced that this will not be implemented in the foreseeable future, and the problem will not be highlighted before there is a lot of misleading data in the database already... Random knowledge donator (talk) 09:56, 25 August 2014 (UTC)
@Jeblad, Random knowledge donator: I've got these ideas, but I don't know if that would be practical. This is also related to some properties that take as value "items", but sometimes with just a string would be enough (P:P1420, I'm looking at you...).--Micru (talk) 13:51, 25 August 2014 (UTC)

Adding more expressive features to Wikidata

Hi all. @Jeblad, Random knowledge donator, Micru: Jeroen asked me to comment here, so here is my attempt at an answer. What you are discussing has many different aspects and the discussion has already turned into something rather hard to digest. I might be missing some of the proposals. But on a general note, in order to ever achieve a result in such discussions, it is important not to come up with new examples in each reply. You will always find something else that does not work. A better approach would be to make a list of use cases (in the form of statements that one could want to make). Then one can decide if we want to support these statements or not, whether we already have a way to model them or not, and whether we want to have new features in some future to capture them or not.

I hope that we all agree that there will always be knowledge that cannot be captured accurately in a computer, whatever tool or system we use. But I have a feeling that not all of us might fully be aware of all of the reasons for this. I guess most of us are aware of several kinds of "knowledge" that are clearly out of scope for Wikidata (and maybe any computer):

  1. Beliefs and feelings. One could call this "vague knowledge" but it's not the kind of vagueness that the discussion here was about. It's highly personal and not something that people can easily nail down even in words, not to mention statements in a fixed format.
  2. Understanding. Somebody who studied, e.g., Wittgenstein's philosophy for ten years may (hopefully) have acquired a kind of knowledge that is more than a mere collection of facts. Learning all the facts by heart won't be enough to obtain this. Deep understanding leads to new ways of thinking. This cannot be captured in a database of any kind, since databases don't think.
  3. Process knowledge. Knowledge like "how to knit a pair of socks" is process knowledge that involves a lot of implicit skills that one gets only by practising (if you ever read a written manual on knitting, you know what I mean). This type of knowledge is largely out of scope for Wikidata.

One could add more, but this is only to clarify that some things we might call "knowledge" are without dispute beyond our reach. But even if we restrict to a "formal" kind of knowledge that seems more liable to machine representation, we have to be aware of certain limitations:

  1. Computability and undecidability. There are many fully formal systems of knowledge with a clear and "mechanical" meaning that cannot be fully implemented in computers. These insights go back to Turing and Goedel: there are mathematical functions that no computer can compute and there is no computer that can draw all logical conclusions.
  2. Practicality and complexity. Even if a computer can compute a desired answer in principle, the task might be extremely hard in the sense that there cannot be any tractable algorithm for solving it. Such hardness results can be shown by mathematical proof.

How do these things affect us here? The answer is that answering queries (as used in the examples of Random knowledge donator) is a computing task. When we make the knowledge model more powerful, this task will become harder, maybe even undecidable. Then we have managed to express the world in greater detail and yet will not get the answers that this formalised knowledge should give us. Since Wikidata is a data management platform and not a theorem prover, this would not be a very good situation to be in.

We really want to be able to answer all queries in reasonable time (low complexity). I am completely missing this whole aspect in the above discussion (maybe I overlooked it?). It seems to me that you are discussing features solely from the viewpoint of more powerful modelling. "I need to express this, hence we should add a new feature." You seem to presume that, once you have a way of expressing something in the system, anything that a human can reasonably conclude from it would also be computable with some algorithm. Goedel taught us that this is not the case. Turing taught us that even when it is, it might require exceedingly high amounts of time and memory.

I am emphasizing this here because the proposed features are already known to make query answering much harder. The first of the use cases is a requirement for disjunctive information: you want to say that one of two possible things might be the case. This makes query answering intractable (NP-hard in the size of the data). If you combine it with additional features, it can be much worse still (e.g. the lightweight ontology language OWL EL, which is in polynomial time, jumps to ExpTime if you add disjunction).
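The cost of disjunction can be illustrated with the possible-worlds view: each disjunctive statement multiplies the number of worlds a certain-answer query must check, so the count grows exponentially. The data below reuses the two use cases from this thread purely as a toy example.

```python
from itertools import product

# Two disjunctive statements: each offers a choice of values.
disjunctive = [
    ("Q330885", "died", (993, 994)),
    ("Q1523127", "student of", ("Q466635", "unknown")),
]

# Every combination of choices is one possible world.
worlds = list(product(*(choices for _, _, choices in disjunctive)))
assert len(worlds) == 2 ** len(disjunctive)  # 4 worlds for 2 disjunctions

# An answer is *certain* only if it holds in every possible world;
# "died in 993" fails in the worlds where the value is 994.
certain_993 = all(w[0] == 993 for w in worlds)
assert certain_993 is False
```

With n disjunctive statements there are up to 2^n worlds, which is the intuition behind the NP-hardness result mentioned above: naive certain-answer evaluation must consider them all.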

If you go further into the modelling of "vagueness" then you will encounter probability as a useful modelling paradigm. Complexity of query answering in probabilistic databases (even with very simple notions of probability) is again well known to be computationally intractable (#P even for simple scenarios with far-reaching assumptions of probabilistic independence). If you study the literature on probabilistic databases, you will see that there is more than one way of adding probability to a database.
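As a toy sketch of the probabilistic-database angle: once uncertain statements carry probabilities, even a simple conjunctive query needs a probability computation, and the independence assumption used below is itself a strong modelling choice. The numbers are invented for illustration.

```python
# Hypothetical per-statement probabilities for the two uncertain facts.
p = {"died_in_993": 0.5, "student_of_Q466635": 0.7}

# Under an independence assumption, the probability that a conjunctive
# query ("died in 993 AND studied under Q466635") holds is the product:
p_conjunction = p["died_in_993"] * p["student_of_Q466635"]
assert abs(p_conjunction - 0.35) < 1e-9
```

In general, exact query probability computation over such databases is #P-hard even under far-reaching independence assumptions, which is the complexity point made above.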

What I am saying here is that this is not a one-dimensional problem, where we just have to add as many features as possible until we are happy. Every new feature has a cost, which has to be paid by all users and consumers of the data. The above discussion revolves around the simplest of the problems: how we could best represent (or even just write down) this knowledge. This is the first and simplest step. A much harder question is how this information should be taken into account when answering queries. Some might say that this is left to the consumer, but then we might still see the same misleading query results that Random knowledge donator argued against (and I fully agree with her/him on this).

How to move forward. It is important to consider expressive power and query answering behaviour together. The world of databases offers many query languages, many ontology/constraint languages, and many data models from which we can get some understanding on how to structure this discussion. For example, adding disjunction to a database ("born in 943 or 944") is a completely different matter than adding probability. The algorithms you need are quite different. For other statements, like "he might have been Robin Hood", I don't even know what to make of it as a human. What kind of information does this give me? How should this influence query results? A much more detailed description of the desired behaviour would be needed to decide if this can be supported and what this would require. Anyway, my main point for now is that these things are completely and fundamentally different and should be discussed in separate threads. As it is now, this discussion will never lead to anything that can be implemented. (Of course, the heading chosen for this discussion suggests that there is not much to discuss or implement here anyway, don't you think? Maybe it would help to take a more constructive perspective if you really want to have some impact.)

Cheers,

Markus Krötzsch (talk) 14:18, 31 August 2014 (UTC)

P.S. I am notoriously bad at replying to talk page discussions. Apologies in advance for ignoring any replies given here. If there is a concrete proposal (which use cases should we support, what behaviour is desired, maybe what technology could be used) then feel free to give me a shout and I will have a look.

Why is the diff so big?

[1] --Infovarius (talk) 13:48, 27 August 2014 (UTC)

That's due to the new internal serialization. It is a bit more verbose. It's fine :) --Lydia Pintscher (WMDE) (talk) 15:36, 27 August 2014 (UTC)

No Label

This page is full of template errors. Each link says "no label": Wikidata:WikiProject Medicine/Properties - Tobias1984 (talk) 16:09, 27 August 2014 (UTC)

Likely because the Lua module creating it is relying on the internal serialization. This changed as announced. The module needs to be adapted. --Lydia Pintscher (WMDE) (talk) 16:41, 27 August 2014 (UTC)

New badges system[edit]

Hello! I read your edit to the Italian Wikipedia Commons.js (and also to the English version), but I cannot understand it: the new code checks for the class "badge-featuredarticle", but in fact the class "badge-Q17437796" is added! I suppose it is a temporary mistake. Second question: will CSS definitions for the new classes be added automatically, or do we have to add them to MediaWiki:Common.css? Last question: can we use the badge system also for other project links (not only for interlanguage links)? For the moment I have created my global.css and it works fine! Thank you very much for your great work! --FRacco (talk) 03:19, 28 August 2014 (UTC)

I'd like to see a clarification of this, too. Currently the software automatically adds the class name "badge-Q17437796" to the HTML for featured articles. Which class names will be in the HTML when the whole process is finished:
  1. class="badge-Q17437796"?
  2. class="badge-featuredarticle"?
  3. class="FA"?
  4. All of them?
Thanks, --Entlinkt (talk) 17:05, 28 August 2014 (UTC)
@Bene*: Can you have a look please? --Lydia Pintscher (WMDE) (talk) 09:35, 29 August 2014 (UTC)
It will be 'badge-featuredarticle' once we deploy a configuration change for that, and I think 'badge-Q17437796' will also remain. Aude (talk) 10:33, 29 August 2014 (UTC)
What Aude said. -- Bene* talk 13:08, 29 August 2014 (UTC)
Ok, thanks. But another question: Will the software automatically add icons to these links? If yes, which ones? --Entlinkt (talk) 22:05, 29 August 2014 (UTC)
Just forget the above question, I've missed that the new implementation is already live. Is it now safe to remove the old one from Common.js/.css? --Entlinkt (talk) 22:37, 29 August 2014 (UTC)
It should be, yes. --Lydia Pintscher (WMDE) (talk) 09:11, 30 August 2014 (UTC)
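For gadget authors following this thread, here is a minimal sketch (not the deployed gadget code; the function name is illustrative) of matching a sitelink badge by either class name, since per the answers above both the stable name and the item-ID form may appear in the HTML:

```javascript
// Detect a featured-article badge on a sitelink by checking its class
// attribute for either the stable class name ("badge-featuredarticle")
// or the item-ID form ("badge-Q17437796"). Per the thread, both classes
// may be present in the generated HTML.
function hasFeaturedBadge(classAttr) {
  var classes = classAttr.split(/\s+/);
  return classes.indexOf('badge-featuredarticle') !== -1 ||
         classes.indexOf('badge-Q17437796') !== -1;
}

console.log(hasFeaturedBadge('interwiki-de badge-Q17437796')); // true
console.log(hasFeaturedBadge('interwiki-fr'));                 // false
```

Checking both names makes a gadget robust across the configuration change mentioned above.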

New badges and missing categories in cawiki[edit]

Today, all the categories of "featured articles in other languages" show up empty on cawiki, and a wrongly named red category has appeared at ca:Categoria:Viquipèdia:Articles destacats en la Viquipèdia en German. I'm afraid it could be related to the new badges. I don't see the same problem on other Wikipedias, except for astwiki, where such categories are empty too.--Pere prlpz (talk) 08:58, 28 August 2014 (UTC)

Problem solved by reverting an inaccurate edit to local templates. Anyway, once the badges are available for the Wikipedias, local templates like Link FA will be removed and there will be no categorization. --Vriullop (talk) 12:12, 28 August 2014 (UTC)
@Ladsgroup: Can you have a look please? --Lydia Pintscher (WMDE) (talk) 09:37, 29 August 2014 (UTC)
French Wikipedia has the same issue. They have categories that are being populated by these templates, so we can't remove them now. It's possible to write a Lua module to add the categories, but the question is which template should invoke the module. I think they'll do it in Template:Portal on French Wikipedia. Amir (talk) 13:44, 29 August 2014 (UTC)
I think we should create a special page to query all badges on Wikidata and then we can drop the categories on the Wikipedias. -- Bene* talk 11:10, 30 August 2014 (UTC)
I'm analyzing a bot request on Portuguese Wikipedia to remove {Link FA} from articles, and Ladsgroup told me about this problem. I can try to create a tool on Tool Labs to generate the lists, but I can't find where the badges are stored in the Wikidata database; they are not in the wb_items_per_site table. Does anyone know where the badges are in the database? Danilo.mac (talk) 02:37, 31 August 2014 (UTC)
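One answer to the question above: badges are not in wb_items_per_site; they live inside the item's JSON serialization, which the API exposes via action=wbgetentities&props=sitelinks. A small sketch (the entity below mimics that response shape; the titles and badge IDs are illustrative) of extracting badges per sitelink:

```javascript
// Sample entity in the shape returned by wbgetentities with
// props=sitelinks: each sitelink carries its own "badges" array.
var entity = {
  sitelinks: {
    enwiki: { site: 'enwiki', title: 'Example',  badges: ['Q17437796'] },
    dewiki: { site: 'dewiki', title: 'Beispiel', badges: [] }
  }
};

// Collect the non-empty badge lists, keyed by site ID.
function badgesBySite(entity) {
  var result = {};
  Object.keys(entity.sitelinks).forEach(function (site) {
    var badges = entity.sitelinks[site].badges || [];
    if (badges.length > 0) {
      result[site] = badges;
    }
  });
  return result;
}

console.log(JSON.stringify(badgesBySite(entity))); // {"enwiki":["Q17437796"]}
```

A Tool Labs tool could fetch entities in batches through the API and apply this kind of extraction, rather than querying the SQL tables directly.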

he:Mediawiki:FeaturedInterwiki.js bug[edit]

This script was edited by User:Bene*, with an edit summary linking to this page and asking us to report issues, so I'm reporting one:

There was a small bug: an extra semicolon in the middle of a boolean expression (specifically, !$(this).hasClass('badge-featuredlist'); && !$(this).hasClass('badge-featuredarticle')). You may want to check for similar issues in the scripts for other languages; see he:Special:Diff/15888375. Note that Hebrew is RTL, so the old version is on the right and the new one on the left; looking carefully you should be able to locate the extra semicolon.

peace - קיפודנחש (talk) 15:36, 28 August 2014 (UTC)
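To illustrate the kind of bug reported above, here is a simplified sketch (not the actual script; the helper names are made up, and plain JavaScript stands in for jQuery's hasClass): a stray semicolon terminates the boolean expression early, so the second check never takes part in the condition.

```javascript
// Broken form (the extra ";" splits the expression in two, and the
// dangling "&& ..." is a syntax error, shown here only as a comment):
//   var notFeatured = !$(this).hasClass('badge-featuredlist'); &&
//                     !$(this).hasClass('badge-featuredarticle');

// Stand-in for jQuery's hasClass, operating on a class attribute string.
function hasClass(classAttr, name) {
  return classAttr.split(/\s+/).indexOf(name) !== -1;
}

// Corrected form: one boolean expression, one terminating semicolon.
function isNotFeatured(classAttr) {
  return !hasClass(classAttr, 'badge-featuredlist') &&
         !hasClass(classAttr, 'badge-featuredarticle');
}

console.log(isNotFeatured('badge-featuredarticle')); // false
console.log(isNotFeatured('interwiki-de'));          // true
```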

Thank you! :) @Bene*: Please see above. --Lydia Pintscher (WMDE) (talk) 09:38, 29 August 2014 (UTC)
Thanks for the report. I fixed the script and made it a bit more readable. -- Bene* talk 13:07, 29 August 2014 (UTC)

Script error?[edit]

I am seeing a lot of script errors ("The time allocated for running scripts has expired.") at Wikidata:List of properties/Generic and other subpages of Wikidata:List of properties. I also noticed that all of the properties have "(no label)" as the title. I don't understand what is happening well enough to debug it further! Slaporte (talk) 00:38, 29 August 2014 (UTC)

The first part is a Lua limitation: there are too many calls on that page. The second is probably an issue with a template that broke after we switched the internal serialization. Someone needs to investigate and adapt the template. --Lydia Pintscher (WMDE) (talk) 09:40, 29 August 2014 (UTC)
In case it helps, it seems like the script errors only show up when my language is set to English. NavinoEvans (talk) 13:23, 29 August 2014 (UTC)

"no language" for monolingual-text type[edit]

We should probably replace inscription (P438) with a monolingual-text property, but we would need to be able to say "no linguistic content" (ISO 639-2: "zxx") and "undetermined" (ISO 639-2: "und"). Is that possible? --Zolo (talk) 08:05, 30 August 2014 (UTC)

Currently not. I'll have to figure out if we can make that happen. Would you mind opening a bug for it? --Lydia Pintscher (WMDE) (talk) 09:12, 30 August 2014 (UTC)
✓ Done--Zolo (talk) 09:41, 30 August 2014 (UTC)

Featured list badge[edit]

Here an unsupported badge item for featured lists is mentioned. What's the status of this one? Isn't it quite obvious that it's needed, as quite a few wikis use en:Template:Link FL to mark interwiki links? As far as I understand, it can't be replaced with the FA badge item, as it is needed to distinguish featured lists, which aren't articles (or at least not proper articles). For example, no badges have been imported for Q462671 yet, and there doesn't seem to be a proper way to do it. 88.196.241.249 11:34, 30 August 2014 (UTC)

Please start a topic on Wikidata:Project chat saying you want it. If there is no objection within a few days we will add it. --Lydia Pintscher (WMDE) (talk) 16:19, 1 September 2014 (UTC)

Links[edit]

At the bottom of every page there are four links: Privacy policy, About Wikidata, Disclaimers and Developers. The second one should point to "//www.wikidata.org/wiki/Special:MyLanguage/Wikidata:Introduction" and the third to "//www.wikidata.org/wiki/Special:MyLanguage/Wikidata:General_disclaimer". The first one is a bit more complicated because it points to the wmf-wiki, which doesn't support MyLanguage links. Thanks for looking into it! Matěj Suchánek (talk) 16:21, 31 August 2014 (UTC)

@Matěj Suchánek: That is something you can solve on your own, as an admin. The second (about) link target (a.k.a. the link target for the About label) is at MediaWiki:Aboutpage, and the third link target (a.k.a. the link target for the Disclaimer label) is at MediaWiki:Disclaimerpage.--Snaevar (talk) 17:04, 31 August 2014 (UTC)
Eh... I was searching for such MW pages here and on translatewiki but didn't find anything, so I asked the developers. Thank you, I will solve it myself. Matěj Suchánek (talk) 17:16, 31 August 2014 (UTC)

Redirects to redirects[edit]

If one merges more than two items, doing so in the wrong order may create a redirect to a redirect. However, it is not possible to edit the redirect in a simple way to resolve this. Are there any thoughts on that matter? I solved it very inelegantly here, by faking a new merge through merge.js. Lymantria (talk) 07:45, 1 September 2014 (UTC)

We have bugzilla:69167 for that. --Lydia Pintscher (WMDE) (talk) 16:16, 1 September 2014 (UTC)

"format=txt has been deprecated"[edit]

Hello, today I found the warning "format=txt has been deprecated. Please use format=json instead" in an API response. My bot has been using this format for 4 years already, and it still works fine (at least on my side). Is it really necessary to deprecate it? How much time do I have to move my code to JSON? — Ivan A. Krestinin (talk) 14:25, 1 September 2014 (UTC)

Hey :) That's something Wikimedia core did, I believe. Nothing specific to Wikidata. I don't know how long it'll stay around. Sorry. --Lydia Pintscher (WMDE) (talk) 16:15, 1 September 2014 (UTC)
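For bot authors hit by this warning, a small sketch of the migration (the endpoint is the standard Wikidata API; the helper function and sample response body are illustrative): switch the request parameter to format=json and parse the body with JSON.parse.

```javascript
// Build an API URL from a parameter map, requesting JSON output
// instead of the deprecated txt format.
function buildApiUrl(params) {
  var base = 'https://www.wikidata.org/w/api.php';
  var query = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  }).join('&');
  return base + '?' + query;
}

var url = buildApiUrl({ action: 'wbgetentities', ids: 'Q42', format: 'json' });
console.log(url.indexOf('format=json') !== -1); // true

// Parsing a JSON response is then a one-liner, unlike the old txt
// format, which needed ad-hoc parsing. (Sample body for illustration.)
var sampleBody = '{"entities":{"Q42":{"id":"Q42"}},"success":1}';
var data = JSON.parse(sampleBody);
console.log(data.success); // 1
```

The main migration cost is usually replacing the hand-written txt parser with JSON.parse (or the equivalent in the bot's language).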