Shortcuts: WD:PC, WD:CHAT, WD:?

Wikidata:Project chat


Mass-import policy


Hi, I suggested a new policy for imports, similar to OpenStreetMap's, in the Telegram chat yesterday and found support there.

Next steps could be:

  • Writing a draft policy
  • Deciding how many new items users are allowed to make without seeking community approval first.

The main idea is to raise the quality of existing items rather than import new ones.

I suggested that imports of 100 items or more fall within this new policy. @nikki: @mahir256: @ainali: @kim: @VIGNERON: WDYT? So9q (talk) 12:11, 31 August 2024 (UTC)[reply]

@So9q: 100 items over what time span? But I agree there should be some numeric cutoff to item creation (or edits) over a short time period (day, week?) that triggers requiring a bot approval at least. ArthurPSmith (talk) 00:32, 1 September 2024 (UTC)[reply]
QuickStatements sometimes returns the error message "Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes."
M2k~dewiki (talk) 02:07, 1 September 2024 (UTC)[reply]
Time span does not really matter; intention does.
Let me give you an example: I recently imported fewer than 100 banks when I added Net-Zero Banking Alliance (Q129633684). I added them over one day using OpenRefine.
That's ok. It's a very limited scope, and we already had most of the banks. Whether the new banks, which are perhaps not notable, should have been created or left out is debatable, but I did not discuss it beforehand, as we have no policy, culture, or venue for talking about imports before they are done. We need to change that.
Other examples:
  • Importing all papers in a university database or similar, totaling 1 million items over half a year using automated tools, is not ok without prior discussion, no matter whether QS or a bot account was used.
  • Importing thousands of books/monuments/whatever as part of a GLAM project over half a year is not ok without prior discussion.
  • Importing all the bridges in the Czech Republic e.g. Q130213201 during whatever time span would not be ok without prior discussion. @ŠJů:
  • Importing all hiking paths of Sweden e.g. Bohusleden (Q890989) over several years would not be ok.
etc.
The intention to import many objects without prior community approval is what matters. The community is your boss: be bold when editing, but check with your boss before mass-imports. I'm pretty sure most users would quickly get the gist of this policy. A good principle could be: if in doubt, ask first. So9q (talk) 05:16, 1 September 2024 (UTC)[reply]
@So9q: I'm not sure that the mention of individually created items for bridges belongs in this mass-import discussion. There is the Wikidata:Notability policy, and so far no one has questioned the creation of entries for physically existing objects that are registered, charted, and have or may/should have photos or/and categories in Wikimedia Commons. If a community has been working on covering and processing a topic for twenty years, it is probably not appropriate to suddenly start questioning it. I understand that what is on the agenda in one country may appear unnecessarily detailed in another one. However, numbered roads, registered bridges or officially marked hiking paths are not a suitable example to question; their relevance is quite unquestionable.
The question of actual mass importation would be moot if the road administration (or another authority) published the database in an importable form. Such a discussion is usually led by the local community - for example, Czech named streets were all imported, but registered addresses and buildings were not imported generally (individual items can still be created as needed). Similarly, the import of authority records of the National Library, registers of legal entities, etc. is assessed by the local community; usually the import is limited by some criteria. It is advisable to coordinate and inspire such imports internationally; however, the decision is usually based on practical reasons, i.e. the needs of those who use the database. It is true that such discussions could be more transparent, not just separate discussions of some working group, and it would be appropriate to create some formal framework for presenting and documenting individual import projects. For example, creating a project page that contains the discussion, the principles of the given import, a contact for the given working group, etc., and the project page should be linked from edit summaries. --ŠJů (talk) 06:03, 1 September 2024 (UTC)[reply]
Thanks for chipping in. I do not question the notability of the items in themselves. The community on Telegram has voiced the opinion that this whole project has to consider what we do and do not want to include, when, and what to prioritize.
Millions of our current items are in quite a sad state as it is. We might not have the manpower to keep quality at an acceptable level as is.
To give one example, Wikidata currently does not know which Swedish banks are still in operation. Nobody has worked on the items in question, see Wikidata:WikiProject_Sweden/Banks, despite them being imported many years ago (some are from 2014) from svwp.
There are many examples to draw from where we have only scratched the surface. @Nikki mentioned in Telegram that there are a ton of items with information in descriptions not being reflected by statements.
A focus on improving what we have, rather than inflating the total number of items, is what the Telegram community wants.
To do that we need to discuss imports, whether already ongoing or not, and whether very notable or not that notable.
Indeed, increased steering and formality would be needed if we were to adopt an import policy on Wikidata. So9q (talk) 06:19, 1 September 2024 (UTC)[reply]
Just as a side note, with no implications for the discussion here, but "the community on Telegram has voiced" is irrelevant, as I understand it. Policies are decided here on the wiki, not on Telegram. Or? --Egon Willighagen (talk) 07:58, 1 September 2024 (UTC)[reply]
It is correct that policies are created on-wiki. However, it is also fair to use that as a prompt to start a discussion here, and transparent to explain that this is what happened. It won't really carry weight unless the same people also voice their opinions here, but there is also no reason to belittle it just because people talked somewhere else. Ainali (talk) 08:13, 1 September 2024 (UTC)[reply]
+1. I'll add that the Wikidata channel is still rather small compared to the total number of active Wikidata editors (1% or less is my guess). Also, participation in the chat is very uneven: a few very active editors/chat members contribute most of the messages (I'm probably one of them, BTW). So9q (talk) 08:40, 1 September 2024 (UTC)[reply]
Sorry, I did not want to imply that discussion cannot happen elsewhere. But we should not assume that people here know what was discussed on Telegram. Egon Willighagen (talk) 10:36, 1 September 2024 (UTC)[reply]
Terminology matters, and the original bot policies are probably no longer clear to the current generation of Wikidata editors. With tools like OpenRefine and QuickStatements, I have the impression it is no longer clear what is a "bot" and what is not. You can now easily create hundreds of items with either of these tools (and possibly others) in an editor-driven manner. I agree it is time to update the Wikidata policies around imports. One thing to make clear is the distinction between mass creation of items and mass import (the latter can also be mass importing annotations and external identifiers, or links between items, without creating items). -- Egon Willighagen (talk) 08:04, 1 September 2024 (UTC)[reply]
I totally agree. Since I joined around 2019, I have really struggled to understand what is okay and what is not when it comes to mass-edits and mass-imports. I have had a few bot requests declined. Interestingly, very few of my edits have ever been questioned. We should make it simple and straightforward for users to learn what is okay and what is not. So9q (talk) 08:43, 1 September 2024 (UTC)[reply]
I agree that we need an updated policy that is simple to understand. I also really like the idea of raising the quality of existing items. Therefore, I would like the policy to recommend that, or even to make an exception from preapproval if, there is a documented plan to weave the imported data into the existing data in a meaningful way. I don't know exactly how it could be formulated, but creating inbound links and improving the data beyond the source should be behavior we want to see, whereas just duplicating data on orphaned items is what we don't want to see. And obviously, these plans need to be completed before new imports can be made; gaming the system will, as usual, not be allowed. Ainali (talk) 08:49, 1 September 2024 (UTC)[reply]
@ainali: I really like your idea of "a documented plan to weave the imported data into the existing data in a meaningful way". This is very similar to the OSM policy.
They phrase it like so:
"Imports are planned and executed with more care and sensitivity than other edits, because poor imports have significant impacts on both existing data and local mapping communities." source
A similar phrasing for Wikidata might be:
"Imports are planned and executed with more care and sensitivity than other edits, because poor imports have significant impacts on existing data and could rapidly inflate the number of items beyond what the community is able or willing to maintain."
WDYT? So9q (talk) 08:38, 5 September 2024 (UTC)[reply]
In general, any proposal that adds bureaucracy that makes it harder for people to contribute should start with explaining what problem it wants to solve. This proposal contains no such analysis, and I do consider that problematic. If there's a rule, 10,000 items/year seems more reasonable to me than 100. ChristianKl 15:37, 2 September 2024 (UTC)[reply]
Thanks for pointing that out. I agree. That is the reason for me to raise the discussion here first instead of diving right into writing an RfC.
The community on Telegram seems to agree that a change is needed and has pointed to some problems. One of them, mentioned by @Nikki:, was: most of the manpower and time at WMDE for the last couple of years seems to have been spent on trying to avoid a catastrophic failure of the infrastructure rather than improving the UI, etc. At the same time a handful of users have imported mountains of half-baked data (poor-quality imports) and show little or no sign of willingness to fix the issues pointed out by others in the community. So9q (talk) 08:49, 5 September 2024 (UTC)[reply]
Would someone like to sum up this discussion and point to ways forward?
Do we want/need a new policy? No?
Do we want/need to adapt decisions about imports or edits to the state of the infrastructure? No?
Do we want to change any existing policies? No?
I have yet to see any clear proposals from the fearsome crowd in Telegram or anyone else for that matter. Might I have stirred this pot just to find that business as usual is what this community wants as a whole?
If that is the case I'm very happy. It means that the community is somewhat split, but that I can possibly continue working on things I find valuable instead of holding back because of fear of a few.
I dislike decisions based on fear. I have wanted to improve citations in Wikipedia for years. I have built a citation-extraction tool that could easily be used to import all papers with an identifier that are cited in English Wikipedia (= millions of new items). In addition, I'm going to request running a bot on Wikipedia to swap out idiosyncratic local citation templates with citeq.
I look forward to seeing Wikidata grow in quantity and quality. So9q (talk) 08:20, 13 September 2024 (UTC)[reply]

There are many different discussions going on here on Wikidata. Anyone can open a discussion about anything if they feel the need. External discussions outside of Wikidata can evaluate or reflect on the Wikidata project, but should not be used to make decisions about Wikidata.

The scope of this discussion is a bit confusing. By mass import I mean a one-time machine conversion of an existing database into Wikidata. However, the examples given relate to items created manually and occasionally over a long period of time. In relation to this activity, they do not make sense. If 99 items of a certain type are made in a few years, everything is fine, and as soon as the hundredth item has to be made, we suddenly start treating the topic as "mass import" and demanding a prior discussion? That makes absolutely no sense. For this, we have the rules of notability, and they already apply to the first such item; they have no connection with "mass imports".

As I mentioned above, I would like each (really) mass import to have its own documentation project page, from which it would be clear who did the import, according to what policies, and whether someone is taking care of the continuous updating of the imported data. It is possible to appeal to mass importers to start applying such standards in their activities. It is also possible to mark existing items with some flags that indicate which specific workgroup (subproject) takes care of maintaining and updating the item. --ŠJů (talk) 18:06, 1 September 2024 (UTC)[reply]

Maybe using the existing "bot requests" process is overkill for this (applying for a bot flag shouldn't be necessary if you are just doing QS or Openrefine work), but it does seem like there should be either some sort of "mass import requests" community approval process, or as ŠJů suggests, a structural prerequisite (documentation on a Wikiproject or something of that sort). And I do agree if we are not talking about a time-limited threshold for this then 100 is far too small. Maybe 10,000? ArthurPSmith (talk) 22:55, 1 September 2024 (UTC)[reply]
There are imports based on existing identifiers - these should be documented on property talk pages (e.g. a new mass import of newly created identifiers every month, usually using QS). The next big group is imports of existing geographic features (which can be photographed) - these have coordinates, so they are visible on maps. Some of them are the focus of only a few people. Maybe document them in the country WikiProject? JAn Dudík (talk) 15:49, 2 September 2024 (UTC)[reply]


My thoughts on this matter :
  • we indeed need a page (maybe a policy, maybe just a help page, a recommendation, a guideline, etc.) to document how to do good mass-imports
  • mass-import should be defined in more precise terms: is it only creation, or any edits? (they are different, but both could be problematic and should be documented)
  • 100 items is very low
    • it is just the 2nd of September and 3 people have already created more than 100 items! In August, 215 people created 100 or more items; the community can't process that much
    • I suggest at least 1000, maybe 10 000 items (depends if we focus only on creations or on any edits)
  • having no time span is strange: is it even a mass-import if someone creates one item every month for 100 months? since most mass-imports are done by tools, most happen within a small period, so a time span of a week is probably best
  • the quantity is a problem, but the quality should also be considered, as well as the novelty (it's not the same thing to create/edit items following a well-known model and to create a new model from scratch; the second needs more review)
  • could we act on Wikidata:Notability? should mass-imports be "more" notable, or at least should their notability be more thoroughly checked?
    • the 2 previous points depend on references, which are often suboptimal right now (most imports are from one source only, whereas cross-checking multiple references should be encouraged where possible)
  • the bot policies (especially Wikidata:Requests for permissions/Bot) probably need an update/reform too
  • finally, there is a general problem concerning a lot of people, but there are mainly a few power users who are borderline abusing the resources of Wikidata; we should focus on the latter before burdening the former, as it would be easier and more effective (i.e. dealing with one 100,000-item import rather than with 1,000 imports of 100 items).
Cheers, VIGNERON (talk) 09:20, 2 September 2024 (UTC)[reply]
In my opinion, items for streets are very useful, because there are a lot of pictures with categories showing streets. There are street directories. Streets often have their own names, are historically significant, buildings/cultural heritage monuments can be found in the respective street via street categories and they are helpful for cross-referencing. So please keep items for streets. Triplec85 (talk) 10:26, 5 September 2024 (UTC)[reply]
As streets are very important for infrastructure and for the structuring of villages, towns and cities, yes, they are notable. Especially if we consider the historical value of older streets or how often they are used (in the real world). Data objects for streets can be combined with many identifiers from organizations like OpenStreetView and others. And they are a good element for categorizing. It's better to have images categorized in Hauptstraße (Dortmund) (with a data object that offers quick facts) than only Dortmund. Also, streets are essential for local administrations, which emphasizes their notability. And you can structure where the street names come from (and how often they are used) etc. etc., and which streets they are connected with. For me, I see many good reasons for having lists of streets in Commons, Wikidata, whatever; it gives a better overview, also for future developments, when streets are populated later or get cultural heritage monuments, or to track the renaming of streets... --PantheraLeo1359531 (talk) 10:37, 5 September 2024 (UTC)[reply]
Regarding the distribution / portion of content by type (e.g. scholarly articles vs. streets / architectural structures), see this image:
Content on Wikidata by type
M2k~dewiki (talk) 10:41, 5 September 2024 (UTC)[reply]
Better to consider the number of potential items. ScienceOpen has most studies indexed and stands at 95 M records (getting to the same degree of completeness would unlock several use-cases of Wikidata, like Scholia charts, albeit not substituting for everything the mentioned site can be used for, since abstract texts, altmetrics scores, etc. are not included here). 100 M isn't that large, and I think there are more streets than there are scholarly articles – having a number there would be nice.
-
I think of Wikidata's usefulness beyond merely linking Wikipedia articles in this way: what other widely-used online databases exist, and can we do the same but better and more? Currently, Wikidata can't be used to fetch book metadata into your ebook reader or song metadata into your music player/library, can't be used for getting food metadata in food-tracking apps, can't tell you the often problematic ingredients of cosmetics/hygiene products, can't be used to routinely monitor or search new studies in a field, or do pretty much anything else that is actually useful to real people. So I'd start working on the coverage of such data first, before importing lots of data with unknown, questionable potential future use, or before manual item creation/editing. If we got areas covered that people actually use and need, then we could still improve areas where no use-cases yet exist, or which at best only slightly improve the proprietary, non-transparent-algorithm Google Web search results (which don't even index files & categories on Commons). I'd be interested in how other people think about WD's current and future uses, but discussing that may be somewhat outside the scope of this discussion. Prototyperspective (talk) 13:35, 7 September 2024 (UTC)[reply]
I use streets a lot to qualify locations, especially in London - see Ambrose Godfrey (Q130211790) where the birth and death locations are from the ODNB. - PKM (talk) 23:16, 5 September 2024 (UTC)[reply]
Disagree on books and papers then – they need to be imported to enable all sorts of useful things which are otherwise not possible or misleading, such as the statistics of Scholia (e.g. research field charts, author timelines/publications, etc. etc.).
I think papers are mostly a (nearly) all-or-nothing thing – they aren't that useful before that point, which is where I don't see much of a use-case. Besides charts, one could query them in all sorts of interesting ways once they are fairly complete, and embed the results of queries (e.g. studies by author, sortable by citations & other metrics, on the WP article about the person).
When fairly complete and unvandalized, they could also be analyzed, e.g. for AI-supported scientific discovery (there are studies on this), and be semantically linked & queried and so on.
It's similar for books. I don't know how Wikidata could be useful in that space if it doesn't contain at least as many items with metadata as other websites. For example, one could then fetch metadata from Wikidata instead of from these sites. In contrast to studies, for which I currently see an actual use-case, WD items for streets may be useful at some point, but I don't see why or how, now or in the near future. Prototyperspective (talk) 00:31, 6 September 2024 (UTC)[reply]

Hello, from my point of view, there would be some questions regarding such a policy, like:

  • What is the actual goal of the policy? What is it trying to achieve? How will this goal be achieved?
  • Is it only a recommendation for orientation (which easily can be ignored)?
  • Is this policy realized in a technical manner, so users are blocked automatically? Currently, for example, QuickStatements already implements some policy and disables the creation of new objects with the error message: Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes. How will the new policy be different from the current anti-abuse policy?
  • Who will control and decide whether the policy is followed or ignored, and how? What are the consequences if the policy is ignored?
  • Does this policy only include objects without sitelinks or also objects with sitelinks to any language version or project like wikisource, commons, wikivoyage, wikibooks, ...?
  • Does this policy only concern the creation of new objects or also the modification of existing objects?
  • How is quality defined regarding this policy? How and by whom will it be decided whether a user and/or user task is accepted for a higher limit?
  • There are always thousands of unconnected articles and categories in any language version and project (including commons), for example
  • https://wikidata-todo.toolforge.org/duplicity/#/
AutoSuggestSitelink-Gadget

Who will connect them to existing objects (where these exist) or create new objects where they do not yet exist, and when (especially if there is a new artificial limit on creating such objects)? Will someone implement and operate a bot for all 300 Wikipedia language versions and all articles, all categories (including commonscats), all templates, all navigation items, ... to connect sitelinks to existing objects or create new objects where they do not yet exist?

From my point of view, time and resources should be spent on improving processes and tools and on helping, supporting and educating people in order to improve data quality and completeness. For example, in my opinion the meta:AutosuggestSitelink gadget should be activated by default for all users on all language versions in the future.

Some questions and answers which came up over the last years (in order to help, educate and support users) can be found at

M2k~dewiki (talk) 19:24, 2 September 2024 (UTC)[reply]
For example, the functionality of
could also be implemented as a bot in the future by someone. M2k~dewiki (talk) 21:13, 2 September 2024 (UTC)[reply]
This wasn't written when the discussion started here, but here is a summary of the growth of the databases, which this policy partly addresses: User:ASarabadani (WMF)/Growth of databases of Wikidata. There are also some relevant links on Wikidata:WikiProject Limits of Wikidata. As an extremely high-level summary, Wikidata is growing so quickly that we will hit various technical problems, and slowing down the growth (perhaps by prioritizing quality over quantity) is a way to find time to address some of them. So the problem is wider than just new item creation, but slowing that would certainly help. Ainali (talk) 08:09, 3 September 2024 (UTC)[reply]
This has also been discussed recently at
Possible solutions could be:
M2k~dewiki (talk) 08:18, 3 September 2024 (UTC)[reply]
Just a note that the split seems to have happened now, so some more time is bought.
Ainali (talk) 21:14, 3 September 2024 (UTC)[reply]
Please note that "Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes." is not an error message or limit imposed by QuickStatements; it is a rate limit set by Wikibase, see phab:T272032. QuickStatements should be able to run a large batch (~10k commands) at a reasonable speed (one that does not cause infrastructure issues). If QuickStatements does not retry when the rate limit is hit, I consider it a bug; batches should be able to run unattended with an error-recovery mechanism. GZWDer (talk) 14:50, 4 September 2024 (UTC)[reply]
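A minimal sketch (not QuickStatements' actual implementation) of the retry behaviour described above: back off and retry when the action API reports throttling instead of failing the batch. The error codes checked here ("ratelimited", "maxlag") and the Retry-After header are assumptions about what the API returns.

```python
# Sketch of retry-with-backoff against the Wikidata action API.
# Assumption: throttling is reported via error codes "ratelimited" or "maxlag"
# and possibly a Retry-After header; not QuickStatements' real code.
import time
import requests

API = "https://www.wikidata.org/w/api.php"

def post_with_retry(session: requests.Session, data: dict,
                    max_retries: int = 5, base_delay: float = 60.0) -> dict:
    """POST one API request, sleeping and retrying on throttling errors."""
    payload = {**data, "format": "json", "maxlag": 5}
    for attempt in range(max_retries):
        resp = session.post(API, data=payload)
        result = resp.json()
        code = result.get("error", {}).get("code", "")
        if code not in ("ratelimited", "maxlag"):
            return result  # success, or an error that retrying will not fix
        # Honour Retry-After if the server sent it, else back off exponentially.
        wait = float(resp.headers.get("Retry-After", base_delay * (2 ** attempt)))
        time.sleep(wait)
    raise RuntimeError(f"still throttled after {max_retries} retries")
```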
Thanks for linking to this report. I see that revisions is a very large table. @Nikki linked to this bot that makes 25+ single edits to the same astronomical item before progressing to the next. This seems very problematic, and given the information on that page, this bot should be stopped immediately. So9q (talk) 09:12, 5 September 2024 (UTC)[reply]
For what reason? That he is doing his job? Matthiasb (talk) 00:53, 6 September 2024 (UTC)[reply]
No, because it is wasting resources when doing the job. The bot could have added them in one edit, and instead it added unnecessary rows to the revisions table, which is a problem. Ainali (talk) 06:00, 6 September 2024 (UTC)[reply]
Well, non-bot users, a.k.a. humans, cannot add those statements in one single edit, right? That said, human editors by definition are wasting resources and probably should not edit at all? Nope. But that's only one side of the coin. It might be a surprise: the limit of the number of rows in version histories is ∞. For items like San Francisco (Q62) that might go faster, and for items like asteroids slower, because no other editors are editing them. If we talk about the source of a river, it cannot be edited in one single edit; we would have two edits for latitude and longitude, one for the altitude, and one each for a town nearby, the county, the state and the country, at least. Several more if editors forgot to include proper sourcing and the language of the source. As of now, hundreds of millions of statements still have "0 sources". So where is the problem? The edit behaviour of the single editor, or did we forget to regulate or implement something back in 2015 or so that affects the edits of every single user? Matthiasb (talk) 22:31, 7 September 2024 (UTC)[reply]
No, to me, a waste is when resources could have been saved but were not. So we're not talking about manual editing here (which also accounts for a lot fewer edits in total). And while I agree that an item may in theory need almost limitless revisions, practically, we lack the engineering for that now. So let's do something smarter than Move Fast and Break Things (Q18615468). Ainali (talk) 07:30, 8 September 2024 (UTC)[reply]
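To illustrate the "one edit" point above, here is a minimal sketch (not the bot's actual code): with the wbeditentity action, several new statements can be sent in a single API call, producing one revision, instead of one wbcreateclaim call (and one revision) per statement. The property IDs, values and target item below are placeholders, and a real run needs a logged-in session and a CSRF token.

```python
# Sketch: bundle several new statements into one wbeditentity call so the item
# gains one revision instead of one revision per statement.
# Property IDs/values and the target QID are placeholders.
import json
import requests

API = "https://www.wikidata.org/w/api.php"

def quantity_claim(prop: str, amount: str) -> dict:
    """JSON for one unitless quantity statement in Wikibase claim format."""
    return {
        "mainsnak": {
            "snaktype": "value",
            "property": prop,
            "datavalue": {"type": "quantity",
                          "value": {"amount": amount, "unit": "1"}},
        },
        "type": "statement",
        "rank": "normal",
    }

def add_statements_in_one_edit(session: requests.Session, qid: str,
                               claims: list, csrf_token: str) -> dict:
    """One API call -> one new revision, however many statements are added."""
    return session.post(API, data={
        "action": "wbeditentity",
        "id": qid,
        "data": json.dumps({"claims": claims}),
        "token": csrf_token,
        "summary": "add several statements in a single edit",
        "format": "json",
    }).json()

# Example (against the sandbox item, with placeholder properties):
# add_statements_in_one_edit(session, "Q4115189",
#     [quantity_claim("P1114", "+3"), quantity_claim("P2043", "+120")], token)
```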

I would echo the points raised above by M2k~dewiki. My feeling is that when we actually think about the problem we are trying to solve in detail, there will be better solutions than placing arbitrary restrictions on upload counts. There are many very active editors who are responsible and actually spend much of their time cleaning up existing issues, or enriching and diversifying the data. Many GLAMs also share lots of data openly on Wikidata (some exclusively so), helping to grow the open knowledge ecosystem. To throttle this work goes against everything that makes Wikidata great! It also risks fossilising imbalances and biases we know exist in the data. Of course there are some folks who just dump masses of data into Wikidata without a thought for duplication or value to the wider dataset, and we do need a better way to deal with this. But I think that some automated checks of mass-upload data (1000s not 100s) to look for potential duplication, interconnectivity and other key indicators of quality might be more effective at flagging problem edits and educating users, whilst preserving the fundamental principles of Wikidata as an open, collaborative data space. Jason.nlw (talk) 08:34, 3 September 2024 (UTC)[reply]

We definitely need more clarity on the guidelines, thanks for putting that up! Maybe we can start with some very large upper boundary so we can at least agree on the principle and the enforcement tooling? I suggest we start with a hard limit of 10k new items/month for non-bot accounts, plus wording saying that creation of over 1,000 items per month should be preceded by a Bot Request if items are created based on the same source, and that any large-scale creation of items (e.g. 100+ items in a batch) should be at least discussed on-wiki, e.g. on the related WikiProject. Also, I think edits are more complicated than new item creations; author-disambiguator.toolforge.org/, for example, allows users to make 100k edits in a year semi-manually. For simplicity, it may be a good idea to focus only on item creation at this point. TiagoLubiana (talk) 21:00, 3 September 2024 (UTC)[reply]

User statistics can be found for example at:
Recent batches, editgroups, changes and creations can be found for example at:
Also see
M2k~dewiki (talk) 21:41, 3 September 2024 (UTC)[reply]
Including bots:
M2k~dewiki (talk) 21:45, 3 September 2024 (UTC)[reply]
@DaxServer, Sldst-bot, Danil Satria, LymaBot, Laboratoire LAMOP: for information. M2k~dewiki (talk) 13:23, 4 September 2024 (UTC)[reply]
@Kiwigirl3850, Romano1920, LucaDrBiondi, Fnielsen, Arpyia: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@1033Forest, Brookschofield, Frettie, AdrianoRutz, Luca.favorido: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@Andres Ollino, Stevenliuyi, Quesotiotyo, Vojtěch Dostál, Alicia Fagerving (WMSE): for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@Priiomega, Hkbulibdmss, Chabe01, Rdmpage, Aishik Rehman: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@Cavernia, GZWDer, Germartin1, Denelson83, Epìdosis: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
@DrThneed, Daniel Mietchen, Matlin: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)[reply]
Just a first quick thought (about method and not content): apart from pings (which are very useful indeed), I think that the best place to make decisions on such an important matter would be an RfC; IMHO an RfC should be opened as soon as possible and this discussion should be moved to its talk page, in order to elaborate there a full set of questions to be then clearly asked to the community. Epìdosis 13:49, 4 September 2024 (UTC)[reply]
Also see
M2k~dewiki (talk) 16:55, 4 September 2024 (UTC)[reply]
I don't really understand what the proposal here is or what problem it is trying to solve; there seem to be quite a number of things being discussed.
That said, I am absolutely opposed to any proposal that places arbitrary limits on individual editors or projects for new items or edits simply based on number rather than quality of edits. I think this will be detrimental because it incentivises working only with your own data rather than contributing to other people's projects (why would I help another editor clean up their data if it might mean I go over my quota so can't do the work that is more important to me?).
And in terms of my own WikiProjects, I could distribute the new item creation to other editors in those projects, sure, but to what end? The items still get created, but with less oversight from the most involved and knowledgeable editor and so likely with greater variability in data quality. How is that a good thing? DrThneed (talk) 23:50, 4 September 2024 (UTC)[reply]
+1 Andres Ollino (talk) 01:22, 13 September 2024 (UTC)[reply]
Thanks for chipping in. Your arguments make a lot of sense to me. The community could embrace your view and e.g. tell the WMF to give us a plan for phasing out Blazegraph and replacing it with a different backend now that Orb and QLever exist. See also Wikidata:WikiProject Couchdb
Also, the revision issue exists because MediaWiki is set up like any other project wiki. But it really doesn't have to keep all the revisions in memory. A performance degradation for revisions older than, say, 100 days might be okay to the community.
We as a community can either walk around in fear and take no decisions, or take decisions based on the current best available information.
My suggestion to the community is to start discussing what to do. The answer might be: nothing. Maybe we don't care about Blazegraph now that we have good alternatives? Maybe it's not for us to worry about because WMF should just adapt to whatever the community is doing?
I would like to build a bot that imports all papers currently cited in Wikipedia. That would amount to at least a couple of million items. I have wanted this since 2021, but because of fear in the community I have waited patiently for some time and helped try to find substitutes for Blazegraph.
Are we okay with that proposal now that we already split the graph? It would mean a great deal to Abstract Wikipedia to be able to use sources already in Wikidata.
It also improves Scholia, etc. In all, a very good idea if you ask me. Should I ask permission anywhere before going ahead and writing a bot request? So9q (talk) 08:05, 13 September 2024 (UTC)[reply]
  • For some reason I think that some/more/many of the participants in this discussion have a wrong understanding of how Wikidata works – not technically, but in its correspondence with other WM projects such as Wikipedia. In fact we have the well-established rule that each Wikipedia article deserves an item. I don't know how many populated-place articles LSJ bot created or how many items we have which are populated places, but citing earlier discussions in the German Wikipedia years ago about how far Wikipedia can expand, I calculated that the number of populated places on earth might exceed 10 million. So would creating 10 million WP articles on populated places constitute a mass upload on Wikidata, since every WP article in any language is to be linked to other language versions via Wikidata? (I also roughly calculated the possible number of geographic features on earth, with nearly two million of them in the U.S. alone. I think there are up to 100 million on earth in total. We consider all of them notable, so at some point we will have items on 100 million geographic features caused by interwiki. Is this mass upload? Shall we look at cultural heritage? In the U.S., the United Kingdom, France and Germany together there are about one million buildings which are culturally protected in one way or another. At this time some 136,080 of them, there or elsewhere in the world, have articles in the German WP, and yes, they are linked in Wikidata. Counting all other countries together, some more millions will be added to this.
  • When I started in the German WP, some 18 years ago, it had some 800,000 articles or so. At that time we had users who tried hard to bring the size of the German WP back down to 500,000 articles. They failed. At some point before the end of this year the number of articles in the German WP will exceed 3,000,000, with the French WP reaching the same mark some four or five months later. Though many of those items might already exist in one language version or more, many of the articles on the way to the three-million mark might not have a Wikidata item yet. Are these, say, 100,000 items a mass upload?
  • And, considering a project I am involved with, GLAM activities on Caspar David Friedrich (Q104884): the Hamburg exhibition Caspar David Friedrich. Art for a New Age Hamburger Kunsthalle 2023/24 (Q124569443) featured some 170 or so works of the artist, all of them considered notable in the WP sense of notability, but we need all of them in Wikidata anyway for reasons of provenance research, for which WD is essential. So if along the way I create 50, 70 or 130 items and link them up with their image files on Commons and whatever individual WP language articles can be found, am I committing the crime of mass upload, even if the process takes weeks and weeks because catalogues of different museums use different names and identification can be done only visually?

Nope. When talking about the size of Wikidata, we must take it as given that by 2035 Wikidata will be at least ten times bigger than today. If we are talking about better data, I agree with that as an important goal, which means adding data based on open sources that do not rely on wikis but on original data from the source, e.g. statistical offices, and which are sourced in a proper way. (But when I called for sources some months ago, someone laughed and said Wikidata is not about sourcing data but about collecting it. Actually that user should have been banned for eternity and yet another three days.) Restricting uploads won't work and might even prevent adding high-quality data. --Matthiasb (talk) 00:48, 6 September 2024 (UTC)[reply]

@Matthiasb In an ideal world, your points make a lot of sense. However, the technical infrastructure is struggling (plenty of links to those discussions above), and if we put ideals over practicalities, then we will bring Wikidata down before the developers manage to solve them. And then we will definitely not be ten times bigger in 2035. Hence, the thoughts about slowing the growth (hopefully only temporarily). If we need to prioritize, I would say that maintaining sitelinks goes above any other type of content creation and would be the last one to slow down. Ainali (talk) 06:08, 6 September 2024 (UTC)[reply]
No objections to the latter part. It was in the second half of the 2000s when MediaWiki was struggling with its own success, users getting timeouts all the time. Yet Tim Starling told us something like "don't care about resources". Well, he said something slightly different, but I don't remember the actual wording. The core of his remarks was that we as a community should not get headaches over it; when it comes to lacking resources, it would be his job and that of the server admins and technical admins to fix it. (But if it became necessary to act, then we should do what they say.) Matthiasb (talk) 11:31, 6 September 2024 (UTC)[reply]
You probably refer to these 2000s quotes: w:Wikipedia:Don't worry about performance. --Matěj Suchánek (talk) 17:29, 6 September 2024 (UTC)[reply]
Maybe we need the opposite for Wikidata?
If that mindset carries over to the use of automated tools (e.g. to create an item for every tree in OpenStreetMap, of which there are 26,100,742 as of 2024-09-07, and link them to other features in OSM/Wikidata), that would very quickly become totally unsustainable technically. Imagine every tree in OSM having a ton of statements like the scientific articles. That quickly becomes a mountain of triples.
I'm not saying we could not do it, perhaps even start today, but what is the combined human and technical cost of doing it?
What other items do we have to avoid importing to avoid crashing the system?
What implications would this and other mass-imports have on the current backend?
Would WMF split the graph again? -> Tree subgraph?
How many subgraphs do we want to have? 0?
Can they easily (say in QLever in less than an hour) be combined again or is that non-trivial after a split?
Is a split a problem in itself or perhaps just impetus for anyone to build a better graph backend like QLever or fork Blazegraph and fix it?
We need to discuss how to proceed and perhaps vote on new policies to avoid conflict and fear and eventually perhaps a total failure of the community with most of the current active users leaving.
Community health is a thing, how are we doing right now? So9q (talk) 08:30, 7 September 2024 (UTC)[reply]
Yes, thank you @Matěj Suchánek! --Matthiasb (talk) 11:22, 7 September 2024 (UTC)[reply]
Do we have a community? Or better asked, how many communities do we have? IMHO there are at least three different communities:
  • people formerly active on Wikipedia who came over when their activity as interwiki bot owners was no longer needed on Wikipedia; most of them will likely have tens of thousands of edits each year.
  • Wikipedia users occasionally active on Wikidata; some only fix issues, others might prepare further usage of Wikidata in their own Wikipedia. Most of them have from several hundred to a few thousand edits.
  • users who don't fit into the former two groups but use Wikidata for some external project, far away from WMF. I mentioned above groups of museums world-wide for whom WD is an easily accessible database providing the infrastructure needed in provenance research. I don't think that this group is big, and they might edit selected items. Probably hundreds to a few thousand edits. This group might or might not collaborate with the former.
Some months back I saw a visualization of how WikiProjects in the English Wikipedia create sub-communities, some of them overlapping, others not overlapping at all; about 50 of them are of notable size. Maybe within the three classes of communities as I broke them down above there also exist sets of sub-communities, with more or less interaction. I can't comment much on your other questions. I don't know more about OSM than looking up the map. I have no clue how Wikibase works.
Peeking over the horizon, we see some performance issues at Commons for some time now. People seem to be turning away from Commons because they have, for example, taken hundreds of photographs at an event but batch upload doesn't work. Wikinews editors are waiting on the uploads, which do not come. Well, they won't wait, of course. They won't write articles on those events. So the Commons issue also affects Wikinews. By the way, restrictions drive away users as well. That's the reason why, or how, German Wikiquote killed itself several months ago.
As I said, I don't know what effects the measures you mentioned will have on Wikidata users, on users of other WMF projects, and on the "external" user community I mentioned above. Matthiasb (talk) 11:58, 7 September 2024 (UTC)[reply]
And just another thought: if we want to have better data, we should prohibit uploading data based only on some Wikipedia, and maybe even remove statements sourced with WP only, after some transition, say by the end of 2026. We don't need to import population data for U.S. settlements, for example, from any Wikipedia, if the U.S. Census Bureau is offering ZIP files for every state containing all that data. (However, a statement that is a URL does not need a source, as it is its own source.) We should also enforce that users add the language of the sources; many users are neglecting it. (And I see some misdirected mentality that "English" as a source language is not needed at all, but Wikidata isn't an English-language database, is it?) Matthiasb (talk) 14:05, 7 September 2024 (UTC)[reply]
I totally agree with @Ainali. We need a healthy way for this community to operate and make decisions that take both human and technical limits into account. So9q (talk) 08:16, 7 September 2024 (UTC)[reply]
Rather than attempting to put various sorts of caps on data imports, I would put the priority on setting up a process to review whether existing parts of Wikidata really ought to be there or if they should rather be hosted on separate Wikibases. If the community decides that they should rather not be in Wikidata, they'd be deleted after a suitable transition period. For instance, say I manage to create 1 million items about individual electrical poles (spreading the import over a long period of time and done by many accounts from my fellow electrical pole enthusiasts). At some point, the community needs to wake up and to make a decision about whether this really ought to be in Wikidata (probably not). It's not something you can easily deal with in WD:RFD because it would be about many items, created from many different people over a long time, so I would say there should be a different process for such deletions. At the end of such a process, Wikidata's official inclusion guidelines would be updated to mention that the community has decided that electrical poles don't belong in Wikidata (except if they have sitelinks or match other criteria making them exceptional).
The reason why I'd put focus on such a process is that whatever caps you put on imports, people will find ways to work around them and import big datasets. If we work with the assumption that once it's in Wikidata, it deserves to stay there for eternity, it's a really big commitment to make as a community.
To me, the first candidate for such a process would be scholarly articles, because I'm of the opinion that they should rather be in a separate Wikibase. This would let us avoid the query service split. But I acknowledge that I'm not active on Wikidata anymore and may be out of touch on such topics (I stopped being active precisely because of this disagreement over scholarly articles) − Pintoch (talk) 11:44, 8 September 2024 (UTC)[reply]
@Pintoch Does a regular deletion of items really reduce the size of the database? Admins can still view all revisions, suggesting that this procedure would not mitigate the problem. Ainali (talk) 12:22, 8 September 2024 (UTC)[reply]
According to
currently we have
  • 2.2 million items marked as deleted (but which can be "restored", i.e. made visible again for everyone, not only for admins)
  • 4.4 million items which are currently redirects
  • 11 million omitted Q-IDs, i.e. IDs which have not been assigned (therefore we have QIDs > 130 million, but only 112.3 million objects)
M2k~dewiki (talk) 12:31, 8 September 2024 (UTC)[reply]
From my point of view, deletion of items increases the size, since the history of deletions will also be stored (who marked the object as deleted and when, additional comments, ...). M2k~dewiki (talk) 12:44, 8 September 2024 (UTC)[reply]
@Ainali: deleting entities removes their contents from the query service. If the community decided to delete the scholarly articles, the WDQS split wouldn't be needed anymore. It would also remove it from the future dumps, for instance. The size of the SQL database which underpins MediaWiki is much less of a concern. − Pintoch (talk) 14:09, 8 September 2024 (UTC)[reply]
@Pintoch: Less, but not by much if I read this problem statement correctly. Ainali (talk) 18:40, 8 September 2024 (UTC)[reply]
@Ainali: my bad, I hadn't read this yet. Indeed, this is also concerning. I would say it should be possible to remove all revisions from a big subset of items (such as scholarly articles) if they were to be deleted in a coordinated way. That should surely help quite a bit. − Pintoch (talk) 18:43, 8 September 2024 (UTC)[reply]
I agree, that would help. A separate Wikibase with scholarly papers would probably not take a lot of resources or time to set up.
If anyone wants to go that route, I recommend against choosing federated properties.
Also, authors and similar information needed besides the papers themselves should be kept in Wikidata.
Such a Wikibase could be set up in less than a month. The import would take longer, though, because Wikibase is rather slow. (Think 1 item per second, so 46 million seconds ≈ 532 days.) So9q (talk) 09:45, 11 September 2024 (UTC)[reply]
OpenAlex has about 250 million papers indexed; you do the math on how long an import of them all would take 😉 So9q (talk) 09:47, 11 September 2024 (UTC)[reply]
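For what it's worth, a back-of-the-envelope sketch of those figures, taking the quoted rate of roughly one item per second at face value (the rate itself is an assumption from the comment above, not a benchmark):

```python
# Rough import-duration estimate at an assumed rate of ~1 item/second.
def import_days(n_items: int, items_per_second: float = 1.0) -> float:
    seconds = n_items / items_per_second
    return seconds / 86_400  # seconds per day

print(round(import_days(46_000_000)))   # ~532 days for ~46 M scholarly-article items
print(round(import_days(250_000_000)))  # ~2894 days (roughly 8 years) for OpenAlex's ~250 M papers
```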
If the import were done based on an SQL dump from Wikidata, the loading would be much faster. Perhaps less than a month.
But AFAIK access to the underlying MariaDB is not normally available to users of wikibase.cloud.
Also, I'm not sure we have an SQL dump of the Wikidata tables available anywhere currently. So9q (talk) 09:53, 11 September 2024 (UTC)[reply]
I don't think it would be good to transfer them over the Internet instead of copying them physically. That could be done by cloning the hard drives, for example, and what I proposed here for Wikimedia Commons media could also be made possible for Wikidata. So it would take a few days if done right. One can e.g. use hash sums to verify that things are indeed identical, but if it's done via HDD mirroring then they should be anyway... A problem there would be that it would at first contain lots of other Wikidata items, which would have to be deleted afterwards if they can't be cloned separately (which they probably could be if those scholarly items are on separate storage disks). Prototyperspective (talk) 10:43, 11 September 2024 (UTC)[reply]

IMHO, of course we can stop mass imports or even mass editing if there is an imminent risk of failure of the database/systems. Anyway, I am sure that the tech team can temporarily lock the database or increase maxlag, etc., in such an urgent situation, if the community is slow in taking decisions. But I don't think that the things I am reading in this thread are long-term solutions (removing items, removing revisions, increasing notability standards, etc.). I think Wikidata should include all scholarly articles, all books, all biographies and even all stars in the galaxy, everything described in reputable sources. I think that Wikidata (and Wikipedia), as a high-quality dataset curated by humans and bots, is used more and more in AI products. Wikidata is a wonderful source for world modelling. Why don't we think about some kind of partnership/collaboration/honest_synergy with those AI companies/research_groups to develop a long-term solution and keep and increase the growth of Wikidata instead of reducing it? Thanks. Emijrp (talk) 14:54, 9 September 2024 (UTC)[reply]

Yes, agreed. Everyone is rushing to create insanely big AI models. Why can't Wikimedia projects have a knowledge base about everything? Don't be conservative, or it will be like Abstract Wikipedia being made obsolete by language models even before the product launch! And we must be ready for the next emergency like the COVID-19 pandemic, in which lots of items must be added and accessed by everyone in an emergency! And splitting WDQS into two means our data capacity is potentially doubled, nice! Midleading (talk) 17:30, 9 September 2024 (UTC)[reply]
Interesting view that AW is made obsolete by AI models. The creators seem to think otherwise, i.e. that it's a tool for people of marginalized languages, which are not described by mountains of text anywhere, to gain access to knowledge currently in Wikipedia.
It's also a way to write Wikipedia articles in one place in abstract form and then generate the corresponding language representations. As such it is very different from, and might be complementary to, GenAI. Google describes reaching any support for 1,000 languages as a challenge because of the lack of readily available digital resources in small languages or dialects (even before reaching support for 300).
I'm predicting it will get increasingly harder as you progress toward supporting the long tail of smaller/minority languages. So9q (talk) 08:48, 10 September 2024 (UTC)[reply]
I agree with Midleading, except that I don't think Wikidata is currently useful in emergencies. Do you have anything backing up "it's a tool for people of marginalized languages that are not described by mountains of text anywhere"? That would be new to me. "It's also a way to write Wikipedia articles in one place in abstract forms" doesn't seem to be what AW is about, and it is neither scalable nor needed: machine translation of English (and a few other languages of the largest WPs) is good enough at this point that this isn't needed, and if you're interested in making what you describe real, please see my proposal here (AW is also mentioned there). Also, as far as I can see, nothing supports the terminology of "marginalized languages" instead of various languages having little training data available. I think the importance of a language roughly matches how many people speak it, and Wikimedia is not yet good at supporting the largest languages, so I don't see why a focus on small languages in particular would be due... and these could also be machine-translated into once things progress further and models and the proposed post-machine-translation system improve. I think AW is like a query tool but, unlike contemporary AI models, reliable, deterministic, transparent, and open/collaborative, so it's useful, but I think not nearly as useful as having English Wikipedia available in the top 50-500 languages (next to their language Wikipedias, not as a replacement of these). Prototyperspective (talk) 16:46, 10 September 2024 (UTC)[reply]
Who said it will be a replacement? I have followed AW quite closely and have never heard anyone suggesting it. Please cite your sources. Ainali (talk) 06:58, 11 September 2024 (UTC)[reply]
There I was talking about the project I'm proposing (hopefully just co-proposing); I see how especially the italics make it easy to misunderstand. AW is not about making English Wikipedia/the highest-quality article available in the top 50-500 languages, so it's not what I was referring to. Prototyperspective (talk) 10:47, 11 September 2024 (UTC)[reply]
Ah, I see. I am also curious what makes you think that AW is not about writing Wikipedia articles in an abstract form? As far as I have understood it, that is the only thing it is about (and why we first need to build WikiFunctions). Ainali (talk) 14:15, 11 September 2024 (UTC)[reply]
Why would it be – please look into Wikifunctions; these are functions like "exponent of" or "population of" etc. You could watch the video on the right for an intro. The closest it would get is short pages that say things like "London (/ˈlʌndən/ LUN-dən) is the capital and largest city of both England and the United Kingdom, with a population of 8,866,180 in 2022" (and maybe a few other parts of the WP lead + the infobox), which is quite a lot less than the >6,500 words of en:London. I had this confusion about Abstract Wikipedia as well and thought that's what it would be about, and it could be that many supporters of it thought so too... it's not a feasible approach to achieving that, and I think it's also not its project goal. If the project has a language-independent way of writing simple Wikipedia-like short lead sections of articles, that doesn't mean there will be >6 million short articles, since all of them would need to be written anew, and it can't come close to the 6.5 k words of the ENWP article, which is the most up-to-date and highest-quality for reasons that include that London is a city in an English-speaking country. Wikifunctions is about enabling queries that use functions like "what is the distance between cities x and y". The existence of this project should not stopgap / delay / outsource large innovation and knowledge proliferation that is possible right now. Prototyperspective (talk) 15:27, 11 September 2024 (UTC)[reply]
As far as I have understood it, WikiFunctions is not the same as Abstract Wikipedia. It is a tool to build it, but it might then be built somewhere else. Perhaps on Wikidata in connection to the items, where the sitelinks are, or somewhere else completely. So the mere fact that you cannot do it on WikiFunctions now does not mean it will not be done at all in the future. Also, we don't need to write each abstract article individually. We can start with larger classes and then refine as necessary. For example, first we write a generic article for all humans. It will get some details, but lack most. Then that can be refined for politicians or actors. And so on, depending on where people have interest, down to individual detail. I might have gotten it all wrong, but that's my picture after interviewing Denny (audio in the file). Ainali (talk) 15:38, 11 September 2024 (UTC)[reply]
It seems many people are still interested in Abstract Wikipedia. It's still not launched yet; my best wishes to it. What Abstract Wikipedia is able to show people is just the content in Wikidata, plus some grammar rules at WikiFunctions and maybe additional structured data in Abstract Wikipedia itself (why not store them in Wikidata?). At Wikidata we have so many statements, but it's still not comparable to the number of sentences in a Wikipedia article. And the competitor, the LLM, has already ingested very detailed information from the web, and it can tell some facts that are so detailed that only a few sentences in a section of the Wikipedia article mention them. If Abstract Wikipedia can only generate stub-class articles, I would consider it a complete failure. If some articles on Abstract Wikipedia become featured articles, will they automatically become featured articles in every language? I highly doubt it - because there aren't enough labels set in Wikidata. An item has a much lower probability of having a label if it doesn't have a Wikipedia article. Ukraine (Q212) has 332 labels, but history of Ukraine (Q210701) has only 73 labels, and human rights in Ukraine (Q4375762) only 18 labels. So, the "featured article" of Abstract Wikipedia in a small language will most likely be a large amount of content falling back to English, but processed according to a non-English grammar, resulting in unreadable English. Anyway, the conclusion is that the quantity of data in Wikidata is still not comparable to an LLM. So, why are we claiming that people must stop contributing data to Wikidata because it's "full"? Also, it's somewhat discriminatory that English content is imported into Wikidata, and then it's "full", so content in other languages can't be imported. Midleading (talk) 15:23, 12 September 2024 (UTC)[reply]
AW will be using the labels on the items, yes, but most of the text generation will use lexemes. Yet it will be able to show more than the content in Wikidata; it is also planned to do inferences from that content, using SPARQL or other functions. And I think you may have different expectations of the length of the articles that will be generated. I think it is highly unlikely that they will be anything comparable to a featured article; that was never the goal. (And for the question of whether it would become featured in all languages: obviously no, such decisions cannot be enforced on local projects, which have the freedom to set such requirements themselves.) Ainali (talk) 07:18, 13 September 2024 (UTC)[reply]
I find claiming an open wiki project has become obsolete because of (proprietary) LLMs very dangerous. And promoting wiki projects as the place-to-go during an emergency, too. --Matěj Suchánek (talk) 16:23, 11 September 2024 (UTC)[reply]
I agree. In an emergency I go to trusted government sources. I expect them to be domain experts and to guide the public in a way that contributes to the best possible health for everyone; e.g. during COVID-19 I read 1177.se, which has expert-reviewed content from the Swedish state agencies.
I also expect the government agencies to uphold the integrity and quality of their systems and to punish anyone endangering the resource or spreading e.g. misinformation.
I don't expect that of Wikipedia in the same way. It's not supposed to be trusted per se. It's based on sources, but I don't know how many readers/users have verified that the content corresponds to the statements in the sources. 🤷‍♂️ So9q (talk) 07:52, 13 September 2024 (UTC)[reply]

Dump mirrors wanted

WMF is looking for more mirror hosts of the dumps. They also rate limit downloads because the demand is high and few mirrors exist. See https://dumps.wikimedia.org/ So9q (talk) 10:13, 11 September 2024 (UTC)[reply]

"This includes, in particular, the Sept. 11 wiki." Do you have any context for this? Trade (talk) 02:56, 14 September 2024 (UTC)[reply]

More general item needed

Q122921190 is only for Zweisimmen, but there are a lot of other railroad gauge system changers. In Spain there are a lot of them. Smiley.toerist (talk) 09:04, 12 September 2024 (UTC)[reply]

Class instances of physical object (Q223557)

There are several classes in Wikidata where it seems clear from their description that their instances cannot be classes. The most obvious of these is physical object (Q223557), but there is also concrete object (Q4406616). These classes are also instances of first-order class (Q104086571), the class of all classes whose instances are not classes, or subclasses of its instances, giving further evidence that their instances cannot be classes. Of course, if the instances of these classes cannot be classes then the same holds for their subclasses, including, for example, physical tool (Q39546).

However, in Wikidata there are currently many instances of these classes for which there is evidence in Wikidata that they are themselves classes, either having instances themselves or having superclasses or subclasses. For example, there were recently 1,821,357 instances of physical object (Q223557) that fit one or more of these criteria, out of 21,154,884 instances of physical object (Q223557), an error rate of 8.6 per cent. (These numbers come from the QLever Wikidata query service, as the WDQS times out on the queries.) This is very likely an undercount of class instances of physical object (Q223557), as there are items in Wikidata that are classes but have neither instances, superclasses, nor subclasses in Wikidata, such as R3 device (Q7274706).
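(For anyone who wants to reproduce roughly these numbers, a sketch of the kind of query involved is below. It is only an approximation of the criteria described above, assuming the usual wdt:/wd: prefixes; the exact queries used may differ, and this form will very likely time out on WDQS and is better run on QLever.)

SELECT (COUNT(DISTINCT ?item) AS ?classLikeInstances) WHERE {
  ?item wdt:P31/wdt:P279* wd:Q223557 .
  { ?other wdt:P31 ?item }         # the item has instances of its own
  UNION { ?other wdt:P279 ?item }  # the item has subclasses
  UNION { ?item wdt:P279 ?super }  # the item has superclasses
}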

I propose trying to fix these problems. It is not necessary to make nearly two million corrections to Wikidata, as in many cases a single change can fix the problem for many of these class instances. For example, there are at least 764,424 items related to protein (Q8054) involved, so determining what to do with protein (Q8054) may fix a large part of the problem.

In many cases fixing these problems requires making a decision between several different ways forward. For example, airbag (Q99905) could be determined to be a class of physical objects, and thus stay a subclass of physical object (Q223557), but its instance of (P31) statement would have to be changed to a subclass of (P279) statement. Alternatively, airbag (Q99905) could be determined to be a metaclass, and thus would need to be removed from the subclasses of physical object (Q223557) and probably be stated to be is metaclass for (P8225) physical object (Q223557). It would be useful to have a consistent treatment of involved classes like this one.

So who is interested in trying to address this large number of errors in Wikidata? This effort is likely to take some time and could involve a larger discussion than is common here, so I have created a page in the Wikidata Ontology Project for the effort. If you are interested, please sign up on that page. Peter F. Patel-Schneider (talk) 14:03, 13 September 2024 (UTC)[reply]

I think as a very general rule instance of (P31) is overused and subclass of (P279) underused in Wikidata, and where things are ambiguous the right choice is to replace an instance of (P31) statement with subclass of (P279). The vast majority of items in Wikidata are concepts, designs, etc., not physical objects. Yes, there are lots of individual people or geographical or astronomical objects, but it is rare to have notable actual instances of most concepts. ArthurPSmith (talk) 19:36, 13 September 2024 (UTC)[reply]
Agreed. But it does not appear to be reasonable to just replace all instance of (P31) under physical object (Q223557) with subclass of (P279), as there are actual physical objects there, for example items in museum collections. And it is not sufficient to do this only for those items for which there is evidence in Wikidata that they are classes, because there are items in Wikidata that are classes in the real world but with no information in Wikidata signalling this. Peter F. Patel-Schneider (talk) 13:38, 16 September 2024 (UTC)[reply]

Community Wishlist: Let’s discuss how to improve template discovery and reuse

Hello everyone,

The new Community Wishlist now has a focus area named Template recall and discovery. This focus area contains popular wishes gathered from previous Wishlist editions:

We have shared on the focus area page how we are seeing this problem and how we are approaching it. We also have some design mockups to show you.

We are inviting you all to discuss the focus area, hopefully support it, or let us know what to improve. You can leave your feedback on the talk page of the focus area.

On behalf of Community Tech, –– STei (WMF) (talk) 16:16, 13 September 2024 (UTC)[reply]

How to replace a statement to another

Hi, while editing Amagasaki Domain (dissolved in 1871), I found that its statement "located in the administrative territorial entity" Property:P131 should rather be "located in the present-day administrative territorial entity" Property:P3842. OTOH, I have already added some elements to this statement and don't want to waste them, so I tried to edit the source of that page but found no way to do so.

In addition, I'm afraid other * Domain entities might share the same problem, but I have no time to dig deeper now, as it is 1:49 at my local time. There were around 300 Domains in Japan in that period (from 1600 till 1871).

Can anyone kindly suggest the best way to manage that?

Cheers. --Aphaia (talk) 16:52, 13 September 2024 (UTC)[reply]

The Move Claim gadget will do this for you. - PKM (talk) 01:59, 15 September 2024 (UTC)[reply]
@PKM How does one install this gadget? Peter F. Patel-Schneider (talk) 20:49, 17 September 2024 (UTC)[reply]
@Peter F. Patel-Schneider: Go to Special:Preferences#mw-prefsection-gadgets and tick the box next to moveClaim. Dexxor (talk) 08:44, 18 September 2024 (UTC)[reply]

deidetected.com, a self-published source potentially used for harassment

This website, launched and run by the creator of the "Sweet Baby Inc detected" Steam curator, would fall under the definition of a self-published source on Wikipedia. The Steam curator has been linked to the harassment campaign against Sweet Baby Inc. by reputable sources like PC Gamer, The Verge, and multiple others.

Wikidata has a page for the website, and the website has been linked from game items via the described at URL property, by User:Kirilloparma on more than one if not every occasion. Even within the scope of that source, it is done in a very targeted way: the website seems to be added to Wikidata pages only when the game is recommended against at deidetected.com (e.g. The First Descendant, Abathor and Valfaris: Mecha Therion, which are recommended as "DEI FREE" by deidetected, do not have the property set). Based on that, its goal of harassment or POV pushing appears to be evident.

Does Wikidata have any guidelines that would explicitly allow or disallow this behavior or the coverage of deidetected.com at all? Daisy Blue (talk) 09:45, 14 September 2024 (UTC)[reply]

There is no policy on WD for blacklisting websites other than for malicious cases such as spam or malware. Trade (talk) 11:59, 14 September 2024 (UTC)[reply]
Now, having read the property description for described at URL on its talk page, which explains that it's for "reliable external resources", I'm convinced the website has no place on Wikidata, as it's not a reliable source (at least not per the guidelines of Wikipedia (WP:RSSELF)). What is the best place to initiate its removal without starting a potential edit war? A bot would also do a more efficient job of removing it from all the pages. Daisy Blue (talk) 12:03, 14 September 2024 (UTC)[reply]
You might have more luck if you stopped bringing up Wikipedia guidelines and used the Wikidata ones instead. Trade (talk) 00:09, 15 September 2024 (UTC)[reply]
Wikidata itself cites the Wikipedia guidelines on self-published sources (and on original research). Daisy Blue (talk) 05:04, 15 September 2024 (UTC)[reply]
English Wikipedia policy is in many cases useful for deciding what should be done in Wikidata (e.g. which sources are reliable), but it should never be considered normative and has no more authority than the policies of any other project. GZWDer (talk) 06:37, 15 September 2024 (UTC)[reply]

This could be used to mass-undo 18 of the edits that introduced the links, but it's not progressing for me when I try. Daisy Blue (talk) 11:14, 15 September 2024 (UTC)[reply]

Seems like a low-quality private website that doesn't add anything of value to our items. There are countless websites out there, but we generally don't add every single site via described at URL (P973) simply for existing. IIRC, there were various cases in the past where users added unreliable websites to lots of items, which were then considered spam and deleted accordingly. And if the site's primary purpose is indeed purely malicious and causing harassment, there's really no point in keeping it. Best to simply put it on the spam blacklist and keep the whole culture war nonsense out of serious projects like Wikidata. Additionally, DEIDetected (Q126365310) currently has zero sources, indicating a clear lack of notability. --2A02:810B:5C0:1F84:45A2:7410:158A:615B 13:50, 15 September 2024 (UTC)[reply]

I've already nominated that item and Sweet Baby Inc detected for deletion, citing the same reason. Specifically for the curator, one could stretch point 2 of Wikidata:Notability to argue against deletion, but I'm not sure what value it would bring to the project apart from enabling harassment and being used to justify other related additions. Daisy Blue (talk) 16:06, 15 September 2024 (UTC)[reply]
Just add this website to the spam blacklist; then no one will be able to add links to it on Wikimedia projects anymore. Midleading (talk) 17:18, 16 September 2024 (UTC)[reply]
What's the proper venue for proposing that? Also, seeing how you have a bot, could you suggest a quick way to mass remove the remaining instances from Wikidata? I've already undone a number by hand but it's not the greatest experience. Having the knowledge may also help in the future. Daisy Blue (talk) 18:24, 16 September 2024 (UTC)[reply]
On the home page of Meta-Wiki, click Spam blacklist, and follow instructions there.
To clean up links to this website, I recommend External links search; a WDQS search is likely to time out. I also recommend reviewing each case manually; sometimes the item should be nominated for deletion, and tools can't do that. Midleading (talk) 01:27, 17 September 2024 (UTC)[reply]
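(For reference, the external links search mentioned above is Special:LinkSearch; a URL along the following lines should list the remaining items linking to the site. The wildcard pattern is an assumption about which subdomains need covering.)

https://www.wikidata.org/wiki/Special:LinkSearch?target=*.deidetected.com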
Thanks. I'll remove the rest by hand then. As for the Wikimedia spam blacklist, it says that "Spam that only affects a single project should go to that project's local blacklist". I'm not sure if there have been any attempts to cite deidetected on Wikipedia or elsewhere. We can search for the live references (there are none) but not, I think, through potentially reverted edits. Daisy Blue (talk) 07:33, 17 September 2024 (UTC)[reply]
Well, you may request this website be banned on Wikipedia first, then you may find some users who agree with you. Midleading (talk) 08:45, 18 September 2024 (UTC)[reply]
I believe Wikipedia has the same policy in that if it hasn't been abused (and I wouldn't know if it has been specifically on Wikipedia), then there is no reason to block it. On Wikidata, as it stands now, the additions come from one user, Kirilloparma, who pushed back on my removals here but hasn't reverted. Unless it becomes a sustained effort by multiple users, it will come down to whether Kirilloparma concedes that described at URL is for reliable sources and the website is not a reliable source. Daisy Blue (talk) 12:14, 18 September 2024 (UTC)[reply]

Basic instructions are lacking

Editors with experience editing English Wikipedia who find ourselves here in Wikidata need a lot more handholding than is currently provided. I was editing the Wikipedia article en:User:Stitchbird2/sandbox, which includes a reference in this Wikidata space, Sinopsis de las especies austroamericanas del género Ourisia (Scrophulariaceae) ... which, when I arrived, did not have the genus name Ourisia in italics. I came to this page and tried to italicize the genus name, assuming that, as on Wikipedia, surrounding something with two apostrophes before and after causes italics. Did I do it right? If so, why does the User sandbox article show the genus name with two apostrophes before and after, instead of treating them as italic markup? Please, Wikidata needs instructions for this sort of thing. If they exist, tell me where and make them easier to find. If they don't exist, they need to be created. Signing off with four tildes, does that work? I guess I'll find out. Anomalocaris (talk) 03:46, 15 September 2024 (UTC)[reply]

For more information on item labels you can read Help:Label. The section "Fonts and characters" has a good explanation. William Graham (talk) 05:21, 15 September 2024 (UTC)[reply]
All right, I reverted my changes to the item mentioned above. Anomalocaris (talk) 07:33, 15 September 2024 (UTC)[reply]
@Anomalocaris: You may also find title in HTML (P6833) and/or title in LaTeX (P6835) useful, though I have no idea if either of them are used by the English Wikipedia templates used in your article. (You can try it out on Wikidata Sandbox (Q4115189) if you like.) Lucas Werkmeister (talk) 10:13, 16 September 2024 (UTC)[reply]

Wikidata weekly summary #645

Petscan not returning full results list?

Hi everyone! Over the weekend I used Petscan to add a bunch of P27 statements to items based on "Japanese people by occupation" categories. My hope was to reduce the number of results in this query, which contains items with a link to jawiki that only have P31:Q5, since I usually fill in each item by hand. However, I was really disappointed to find that the number of results in the query didn't change at all. There's simply no way that there isn't a single actor, musician, or politician in all 4,400+ results! Most of the results in that query are newer, and were imported from jawiki sometime in 2023 or 2024.

I usually use the SPARQL query below in Petscan to filter out items that already have P27, but whether the SPARQL query is present or not, I don't get results with Q numbers that are more than 8 digits long (not counting the Q). A good category to test with is "日本の銀行家" (Japanese bankers), which returned 42 results for me, with the most recent being Q65291794.

SELECT ?item WHERE {
  ?item wdt:P31 wd:Q5 .
  OPTIONAL { ?item wdt:P27 ?dummy0 }
  FILTER(!bound(?dummy0))
}

I've read the manual over and over and I can't figure out why Petscan won't return items with newer, 9-digit long QIDs. What am I doing wrong? Thanks in advance! Mcampany (talk) 00:13, 17 September 2024 (UTC)[reply]
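(As an aside, an equivalent and arguably more idiomatic way to express the same filter is with FILTER NOT EXISTS; this is only a rewrite of the query above and will not by itself bring back the missing 9-digit QIDs.)

SELECT ?item WHERE {
  ?item wdt:P31 wd:Q5 .                   # humans
  FILTER NOT EXISTS { ?item wdt:P27 [] }  # with no country of citizenship yet
}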

Replying to say that I'd ask on the Petscan discussion page, but it seems really quiet there. I don't think this is worth opening an issue on GitHub because I'm pretty sure this is user error. If someone knows of a better place to ask than here, I'd appreciate it if you'd let me know! Mcampany (talk) 00:23, 17 September 2024 (UTC)[reply]
Hello Mcampany, in PetScan there is a Wikidata tab, where you can put the value P27 (citizenship), P106 (occupation) or any other property ID in the field Uses items/props, plus tick the checkbox None, to find all items from the selected subcategories without the P27 or P106 property.
M2k~dewiki (talk) 16:23, 17 September 2024 (UTC)[reply]
Hi @M2k~dewiki! Thanks, using the "Uses items/props" field brought back 500+ results, which is what I expected to see. Unfortunately, if I don't have a SPARQL query in Petscan, or am not using create mode, all the checkboxes and the option to use Quickstatements disappear. A list is really nice, but I'd rather not have to export it and open up Quickstatements in another tab to make the edits. I really like how seamless it is to have the checkboxes and the Quickstatements box right there. Do you have any idea why the Quickstatements box in Petscan keeps disappearing? Mcampany (talk) 17:32, 17 September 2024 (UTC)[reply]
Sometimes it happens with PetScan that checkboxes disappear; I haven't found out yet in which cases. Could you post your PetScan short URL? M2k~dewiki (talk) 17:35, 17 September 2024 (UTC)[reply]
For example, something like https://petscan.wmcloud.org/?psid=29304610&al_commands=P31%3AQ847017 M2k~dewiki (talk) 17:36, 17 September 2024 (UTC)[reply]
Sure! It's https://petscan.wmcloud.org/?psid=29304637 Mcampany (talk) 17:38, 17 September 2024 (UTC)[reply]
You have to switch Other sources from Automatic to Wikidata:
https://petscan.wmcloud.org/?psid=29304729 M2k~dewiki (talk) 17:48, 17 September 2024 (UTC)[reply]
Amazing! Thank you so much, @M2k~dewiki. You're the best.
 Resolved Mcampany (talk) 17:54, 17 September 2024 (UTC)[reply]

Updates in Wikidata item not reflecting in Commons..

After connecting a Commons category to Wikidata (not by adding the QID in Commons at first, but by adding the category name in Wikidata), the data is visible in Commons. But after making some updates to the Wikidata item, no matter what is done, the updates are not visible on the Commons page unless the QID is added in the Commons category. Similarly, any further updates to the Wikidata item are not visible in the Commons category unless some change is made to the connection to Wikidata in Commons, such as removing the QID.
The updates are not visible at all (seemingly forever), even after clearing the browser cache. Why does this happen, and does anybody know a solution? Gpkp (talk) 09:13, 17 September 2024 (UTC)[reply]

Hello Gpkp, please see
regarding (purging) server-side caches. M2k~dewiki (talk) 15:48, 17 September 2024 (UTC)[reply]
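(If the problem is a stale server-side cache on the Commons category page, one common workaround is to force a purge by appending ?action=purge to the page URL, or by making a null edit to the category page. The category name below is only a placeholder, not the affected page.)

https://commons.wikimedia.org/wiki/Category:Example?action=purge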

Captions and colour of images of personalities scanned from books

There are many drawings of Czech personalities used here in Wikidata which look like e.g. File:Adolf Heyduk – Jan Vilímek – České album.jpg or File:Karel Jaromír Erben – Jan Vilímek – České album.jpg. Sometimes there is another version of the same image of similar quality in Commons (often uploaded by me), such as File:Karel Jaromír Erben (cut).jpg, which

  1. is without the caption present directly in the image,
  2. is cropped more tightly, with less space around the portrait, and
  3. has the colour of the book paper removed.

Until now I did not have any problems with such replacements in Wikidata, although I've been doing this occasionally for years, but now I have met with disagreement from User:Skot. Because my arguments did not convince him, and because I would like to prevent any edit-warring, I would like to ask other people for their opinions.

While I admit that the pictures with captions can be useful for somebody, and so it is good to have them in Commons, I also believe that pictures without them, like File:Karel Jaromír Erben (cut).jpg, are better for Wikidata, because WD has a different tool to include captions if they are needed: the qualifier "media legend". Images from Wikidata are also reused by other projects, which likewise use different tools for captions, and the result is that the caption gets doubled, as has happened e.g. in s:Author:Adolf Heyduk.

Cropping off the extra space around the portrait leads to better display in infoboxes, where the portrait looks larger while taking up the same infobox space.

I also think that the yellow-brown colour of the book paper is noise that contaminates the picture and worsens its contrast, and that it is not the colour of the picture but the colour of the medium from which the picture was taken. While it is absolutely possible to use such coloured pictures in Wikidata when there is no choice, where there is a possibility to choose between two alternatives, users should not be prevented from replacing the coloured version with the de-coloured one.

Any opinions on whether such replacements in Wikidata are acceptable are really appreciated. -- Jan Kameníček (talk) 20:58, 17 September 2024 (UTC)[reply]

"position held" vs. "member of"

Hello, I am looking for advice on when to use "position held" or "member of" on Amy Brown Lyman. She was president of the Relief Society (the women's organization in the Church of Jesus Christ of Latter-day Saints). That seems clearly like a "position held." She was also part of the Relief Society General Board. I'm not sure which way to model that (I did both ways I could think of). Rachel Helps (BYU) (talk) 21:06, 17 September 2024 (UTC)[reply]

Daniel Levy: Two items for one person

Q76245151 and Q3014345 are for the same person; each item lists a biography in a different Wikipedia language version. Could someone who knows what they are doing, like User:M2k~dewiki, please merge them? Thank you! --Andreas JN466 08:31, 18 September 2024 (UTC)[reply]

Hello Jayen466, the two objects have been merged: Help:Merge M2k~dewiki (talk) 11:45, 18 September 2024 (UTC)[reply]

Dict: protocol

Do we have a server for the dict: protocol, as described in this blog post and at DICT?

Curiously, if I type dict:cheese in the search bar here, I am taken to https://www.wikidata.org/wiki/Special:GoToInterwiki/dict:cheese (and similar if I do so on en.Wikipedia, etc.*), which displays:

Leaving Wikidata

You are about to leave Wikidata to visit dict:cheese, which is a separate website.

Continue to https://www.dict.org/bin/Dict?Database=*&Form=Dict1&Strategy=*&Query=cheese

and not to a Wikidata entry (nor a Wiktionary page**). Can we get that changed?

[* doing so on fr.Wikipedia still takes me to an English definition; does it do so for people whose browsers use other languages?]

[** Also raised at wikt:Wiktionary:Grease pit/2024/September#Dict: protocol]. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:15, 18 September 2024 (UTC)[reply]

See phab:T31229.--GZWDer (talk) 16:02, 18 September 2024 (UTC)[reply]

Map of Holocaust memorials

I was just looking for images of Holocaust memorials, and made the query below. I never expect wiki info to be complete, but I was surprised how incomplete this map was compared to, say, English Wikipedia. Even Commons has photos of memorials which don't seem to be on this map. I don't have time to work on this much myself, but I hope it's okay to flag it up here as something that people may want to improve, mainly by adding commemorates (P547) The Holocaust (Q2763) to the relevant items. Holocaust education is a hot topic here in the UK, as the new government is suggesting making it compulsory in all schools.


#defaultView:Map{"hide":"?coords"}
SELECT ?item ?itemLabel ?coords ?image WHERE {
  ?item wdt:P547 wd:Q2763; wdt:P625 ?coords
      OPTIONAL {?item wdt:P18 ?image}
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],mul,en". }
}

MartinPoulter (talk) 12:43, 18 September 2024 (UTC)[reply]

Here's a Petscan query for articles in en:Category:Holocaust commemoration without the property commemorates (P547): [1]. If you or someone else wants to add statements to them, go through the list, select suitable items, add "P547:Q2763" to the command list and press Start QS. Samoasambia 13:09, 18 September 2024 (UTC)[reply]
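(For anyone who prefers to paste commands into QuickStatements directly rather than use PetScan's Start QS button, a minimal sketch of the tab-separated V1 command format is below; Q4115189, the Wikidata Sandbox, stands in here only as a placeholder for a real memorial item.)

Q4115189	P547	Q2763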

aircraft engine (Q743004) and models (also many classes similar to aircraft engine (Q743004))

The class aircraft engine (Q743004) has problems in Wikidata. aircraft engine (Q743004) is a subclass of physical object (Q223557), which means that its instances are physical objects. But almost all the instances of aircraft engine (Q743004) are not physical objects, instead mostly being aircraft engine models like Poinsard (Q7207885).

What should be done to fix this problem? The simplest fix would be to just make aircraft engine (Q743004) no longer be a subclass of physical object (Q223557) and of other classes that cause similar problems, perhaps by replacing aircraft engine (Q743004) subclass of (P279) aircraft component (Q16693356) with aircraft engine (Q743004) is metaclass for (P8225) aircraft component (Q16693356). But that would leave all the labels and descriptions for aircraft engine (Q743004) as is, not corresponding to the actual intent of the class. Changing just the English label and description would be possible but would cause a difference in the meaning of the labels in different languages. Adding an English value for Wikidata usage instructions (P2559) would help a bit but would not solve the mismatch. Changing all the descriptions doesn't seem immediately possible.

A variation of the first option would be to add a new class for aircraft engines, transfer all the labels and descriptions and aliases to the new class, correctly place the new class in the Wikidata ontology, give the new class appropriate label and description and aliases, and make the model instances also be subclasses of the new class.

Another option would be to make all the aircraft engine models that are currently instances of aircraft engine (Q743004) subclasses of it instead, perhaps in conjunction with making the models instances of some suitable metaclass like engine model (Q15057021). This option probably requires many more statement changes than the previous approaches. If this is done, adding an appropriate English value for Wikidata usage instructions (P2559) on aircraft engine (Q743004) seems indicated.
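(To scope that last option, a query along these lines, a sketch only, would list the current instances of aircraft engine (Q743004) whose instance of (P31) statements would need to become subclass of (P279) statements:)

SELECT ?model ?modelLabel WHERE {
  ?model wdt:P31 wd:Q743004 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],mul,en". }
}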

Given what I have seen in related classes, I expect that there are many classes that have the same problem, so perhaps the best way forward is to consider the problem in general and come up with a general solution.

Does anyone have preferences between these approaches? Does anyone have a different approach to fix this problem? Does anyone know what is the best way to gather a community that could come up with a consensus decision? Peter F. Patel-Schneider (talk) 15:32, 18 September 2024 (UTC)[reply]

Uploading photos on WikiLovesMonuments.org: having a problem with no category for .....

In County Sligo, Ireland, there's a historic site which I believe is incorrectly named "Carrowmore Passage Tomb". There is no Wikimedia Commons category for it. It has a Wikidata entry, Q33093088. I've uploaded a set of images of the site, but they have just disappeared.

I understand the correct name to be Carrowmore Megalithic Cemetery, Carrowmore, Co. Sligo, Ireland.


Any help appreciated.

Gillfoto (talk) 23:10, 18 September 2024 (UTC)[reply]

Uploading Logo on NCWiki

I could use some help adding the logo from Wikimedia Commons to Wikidata; the logo is uploaded as Logo_NCWiki.png. Kroeppi (talk) 09:19, 19 September 2024 (UTC)[reply]