Wikidata:Contact the development team
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2021/05.
Site optimization suggestion
It would be really cool if Wikipedia used the IPFS network as a cache for pages: because IPFS uses a p2p protocol, the readers themselves could contribute to the project by serving articles to other readers, reducing part of the workload on the Wikipedia site. There is already a project that does this; it would be interesting to get in touch with the people behind it. https://github.com/ipfs/distributed-wikipedia-mirror
Editing claims using the mobile interface (Minerva)
I mainly contribute to Wikidata using my mobile phone. The mobile web interface, Minerva, does not allow editing claims, so I switch to Vector and come back to Minerva afterwards. I know that this has been an important pain point for years and that it is not an objective of the development team for 2021.
I guess it would be very difficult to develop a mobile friendly interface for claims editing in Minerva.
I've seen recently that the Structured section of Wikimedia Commons works very well on a mobile phone. Since Structured Commons is really similar to Wikidata, I'm just wondering whether it would be possible to reuse their template to make it possible to edit claims in Wikidata.
- Unfortunately, this is not so easy, because both don't use the same technology to run the pages on mobile. Lea Lacroix (WMDE) (talk) 12:34, 3 May 2021 (UTC)
Accessing data provenance
Is there a way (either query service or API) to find out the origin of data in Wikidata? For instance, whether a given claim was entered originally by a user or a bot, and what the source of the claim was? I.e., was the data imported from another knowledge base or graph in some way? One could of course dive into the edit history of the article, but this seems like a lot of effort, especially at scale. Wsslr (talk) 07:32, 30 April 2021 (UTC)
- Ideally, editors indicate the provenance of the data in the references section. When they don't, it's harder to figure out where the information comes from. You can retrieve the editor by querying the edit history via the API, and the edit summary may also help (for example, edits made with QuickStatements are marked with a tag), but that doesn't necessarily indicate the source. Lea Lacroix (WMDE) (talk) 12:36, 3 May 2021 (UTC)
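To make the suggestion above concrete, here is a minimal Python sketch of screening revision metadata for likely automated edits. The response shape mimics a MediaWiki API call like `action=query&prop=revisions&rvprop=user|tags|comment` against `https://www.wikidata.org/w/api.php`, but the page ID, user names, tags, and comments below are invented for illustration; a real check would fetch live data and inspect the actual tags applied by tools such as QuickStatements.

```python
import json

# Invented sample shaped like a MediaWiki prop=revisions response; the real
# API returns this structure for action=query&prop=revisions&rvprop=user|tags|comment.
sample = json.loads("""{
  "query": {"pages": {"42": {"revisions": [
    {"user": "SomeBot", "tags": [], "comment": "bot import"},
    {"user": "HumanEditor", "tags": ["QuickStatements"], "comment": "batch edit"},
    {"user": "AnotherEditor", "tags": [], "comment": "added reference"}
  ]}}}
}""")

def likely_automated(revisions_response):
    """Pick out revisions whose user name or tags suggest automated editing.

    Heuristic only: bot accounts conventionally end in "bot", and tool edits
    often carry a change tag, but neither signal is guaranteed."""
    hits = []
    for page in revisions_response["query"]["pages"].values():
        for rev in page["revisions"]:
            if rev["user"].lower().endswith("bot") or rev["tags"]:
                hits.append(rev)
    return hits
```

This only flags *who or what* made an edit; as noted above, the actual source of the data still has to come from the references on the statement itself.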
Strange name for the Pashto language with Swedish settings
- Hi @Sextvåetc:, thanks for letting us know. I created a ticket here, feel free to have a look, and add details if necessary. Thanks, Lea Lacroix (WMDE) (talk) 12:44, 3 May 2021 (UTC)
- @Lea! Looks fine, except that in Swedish we do not use a capital initial letter for language names, except at the beginning of a sentence. 62 etc (talk) 13:45, 3 May 2021 (UTC) (Who does not have an account on Phabricator.)
What JSON fields are nullable?
(Was told to move this post here from Project Chat)
I'm working on a library for parsing JSON of Wikidata entities into a more native format in a strongly typed language (OCaml). Because this language is strongly typed, any fields that are sometimes null have to be explicitly marked as only optionally containing data; when accessing this data using my library, people will have to explicitly handle the case that no data is present before the code will compile. I've read through the Wikibase JSON format documentation a number of times, but I keep getting tripped up by fields being nullable that I thought weren't. For example, the property coordinate location (P625) of St John's College (Q691283) is a globe-coordinate with a null precision, but precision wasn't listed as optional in the documentation. Is there any canonical list of fields that are nullable? I would just mark everything as optional, except that this makes my library really annoying to work with, especially considering there are many fields that are almost certainly never null (like the text field of monolingualtext). --- ImpossiblyNew (talk) 22:53, 5 May 2021 (UTC)
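The defensive approach the question implies can be sketched briefly. This is a Python illustration (the poster's library is in OCaml, where `float option` would play the same role as `Optional[float]` here); the field names match the Wikibase globe-coordinate datavalue, but the coordinate values are made up, and treating precision as nullable is exactly the behaviour observed on items like Q691283.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class GlobeCoordinate:
    """A Wikibase globe-coordinate value, with precision treated as nullable."""
    latitude: float
    longitude: float
    precision: Optional[float]  # null in the JSON becomes None
    globe: str

def parse_globe_coordinate(raw: str) -> GlobeCoordinate:
    v = json.loads(raw)
    return GlobeCoordinate(
        latitude=v["latitude"],
        longitude=v["longitude"],
        precision=v.get("precision"),  # tolerates both null and a missing key
        globe=v["globe"],
    )

# Made-up coordinates in the shape Wikibase serves for P625 claims.
example = ('{"latitude": 51.7611, "longitude": -1.2534,'
           ' "precision": null, "globe": "http://www.wikidata.org/entity/Q2"}')
coord = parse_globe_coordinate(example)
```

Callers are then forced to decide what a `None` precision means for them, which is the compile-time guarantee the question is after.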
Vandalism dashboard broken?
- @Count Count: There may have been transient errors or hiccups with Toolforge, because I can see that the changes are updating normally now. If you experience this again, do let us know. Thanks! -Mohammed Sadat (WMDE) (talk) 11:12, 10 May 2021 (UTC)
Length of WDQS query URLs that can be shortened
I don't care whose heads you have to break to do it, but for goodness sakes get the URL shortener fixed so that it can handle reasonable-length WDQS queries.
It really is infuriating when you have a query like this, which isn't vastly long, but all you can get is the error message
URL shortening failed.
- The problem might be that URLs beyond ~2000 bytes in length are incompatible with some browsers, so this limitation might be a justified choice rather than a cut-off at an arbitrary length. Your query produces a URL of 2268 bytes. —MisterSynergy (talk) 18:17, 9 May 2021 (UTC)
- Thanks MisterSynergy. AFAIK 2000 bytes is just an arbitrary cut-off that has been imposed 'to prevent misuse'. Quite what sort of misuse it's supposed to prevent has never been entirely clear to me, especially when there is an open paste-bin service on the toolserver that anyone can post anything in (of any length). And sorry to all that my original post was a bit intemperate. This refusal to shorten is something I have been hitting again and again and again over the last 18 months; it annoys every time, yet somehow still manages to come as a surprise every time. This afternoon was just one time too many. Probably also because this time the query was so little over the arbitrary 2000 bytes, and still the shortener couldn't work; whereas with a query of 7 or 8,000 bytes one just grimly knows it's not going to be shortened, even if it still annoys every time. We didn't use to have this trouble with tinyurl; though tinyurl doesn't seem to like SPARQL any more -- possibly it now worries that it's being set up for an SQL injection attack. But as a result one seemingly can't even use tinyurl as an alternative any more. Ho hum. At least Twitter still seems happy to accept (and shorten) quite long URLs. Jheald (talk) 19:17, 9 May 2021 (UTC)
- @Jheald: I can understand your frustration about the cut-off imposed on the length of query URLs. In the ticket linked here there are still some reservations about extending or removing the limits. In order to move forward we first need to talk this out, particularly Amir's use case about having two levels of rate limit in the URL shortener. -Mohammed Sadat (WMDE) (talk) 09:50, 17 May 2021 (UTC)
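For anyone hitting this repeatedly, a quick way to see in advance whether a query will trip the limit is to measure the encoded URL yourself. This Python sketch assumes the WDQS convention of embedding the query after the `#` fragment; the example query and the 2000-byte threshold are the figures discussed in this thread, not an official constant.

```python
from urllib.parse import quote

def wdqs_url(sparql: str) -> str:
    # WDQS embeds the (percent-encoded) query after the "#" fragment marker.
    return "https://query.wikidata.org/#" + quote(sparql)

LIMIT = 2000  # approximate cut-off discussed above; not an official constant

query = "SELECT ?item WHERE { ?item wdt:P31 wd:Q5 } LIMIT 10"
url = wdqs_url(query)
fits = len(url.encode("utf-8")) <= LIMIT
```

A short query like this one comes in well under the limit; percent-encoding inflates characters like `?`, `{`, and spaces to three bytes each, which is why queries only slightly over 2000 raw bytes can still fail to shorten.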
Request for new gadget: monitor items having a specific property
Hi! Since I can't find in Wikidata:Tools a link to a page designed for proposals of new tools or gadgets (is there one? If not, could you think about creating one?), I'm writing a proposal for a new user gadget here. In brief, this gadget should add to each property page a button that allows users to automatically add to their watchlist all pages (items, properties, lexemes) containing this property; moreover, if in the coming days/months/years the property gets added to other pages, those pages are automatically added to the user's watchlist too. This would be extremely helpful for my own and other users' patrolling. Thank you in advance, --Epìdosis 11:11, 16 May 2021 (UTC)
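As a rough illustration of what such a gadget would automate, here is a Python sketch that takes a list of entity IDs using a given property (a stand-in here; in practice they would come from a SPARQL query or search) and builds the parameters for the MediaWiki `action=watch` API. No request is made in this sketch; a real call would be a POST to `https://www.wikidata.org/w/api.php` and would additionally need authentication and a watch token.

```python
# Hypothetical inputs: the property the gadget's button sits on, and a
# stand-in for the item IDs a SPARQL query would return for it.
property_id = "P625"
items_using_property = ["Q691283", "Q42"]

def watch_params(titles):
    """Parameters for the MediaWiki action=watch API (POST).

    A real request also needs a "token" field (a watch token) and an
    authenticated session; both are omitted in this offline sketch."""
    return {
        "action": "watch",
        "format": "json",
        "titles": "|".join(titles),
    }

params = watch_params(items_using_property)
```

The second half of the proposal (watching pages that gain the property later) has no single API call; a gadget would presumably have to re-run the query periodically or hook into recent changes.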