
Wikidata:Bot requests

Shortcut: WD:RBOT
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 3 days.


Request to add mul label values to names .. (2024-10-01)


Request date: 2 October 2024, by: Iamcarbon

Link to discussions justifying the request

https://www.wikidata.org/wiki/Help_talk:Default_values_for_labels_and_aliases

Task description

Add mul labels to given-name and family-name items when the item already has a native label value in the mul language. This will keep bots and users from adding duplicate labels while we continue to work on the various issues that prevent us from deleting the existing duplicates.
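
A minimal sketch of how this could be done with SPARQL plus pywikibot, assuming given name (Q202444) and family name (Q101352) as the target classes and native label (P1705) as the source of the mul value; treat the IDs and the query as assumptions to verify before running:

# Sketch only: copy a mul-language native label (P1705) to the mul label
# where it is missing. Item/property IDs are assumptions to double-check.
import pywikibot
from pywikibot import pagegenerators

QUERY = """
SELECT ?item WHERE {
  VALUES ?class { wd:Q202444 wd:Q101352 }        # given name / family name
  ?item wdt:P31 ?class ;
        wdt:P1705 ?native .
  FILTER(LANG(?native) = "mul")
  FILTER NOT EXISTS { ?item rdfs:label ?l . FILTER(LANG(?l) = "mul") }
}
LIMIT 500
"""

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

for item in pagegenerators.WikidataSPARQLPageGenerator(QUERY, site=repo):
    item.get()
    native = item.claims.get("P1705", [])
    mul_values = [c.getTarget().text for c in native
                  if c.getTarget() and c.getTarget().language == "mul"]
    if mul_values and "mul" not in item.labels:
        item.editLabels({"mul": mul_values[0]},
                        summary="Copy mul native label to mul label")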

Discussion


Request process

Request to fix italics in labels and titles of papers (2024-12-03)


Request date: 3 December 2024, by: Pigsonthewing

Task description

It might be useful to have a bot make edits like this: [1]

The key points are:

  • Instance of a scientific or academic publication
  • Label and title include HTML <i> tag pair
  • HTML wraps the name of a taxon
  • Possible white space issues (not necessarily present in every case)

Might be better to write the corrected label in mul instead of a single language.

Discussion
I guess one could parse the title with BeautifulSoup (in Python) to extract the text. It's possible that somebody has written a paper with a title that legitimately includes these constructs, like "Proper use of the <b> and </b> tags in bibliography". Such items should be logged as exceptions to the relevant constraint. William Avery (talk) 18:56, 18 May 2025 (UTC)[reply]
I just realised what the OP meant about the white space issues, though there are also cases where introducing a space is not required, as at Hormone Signaling and Its Interplay With Development and Defense Responses in <i>Verticillium</i>-Plant Interactions (Q103727502). The original data in crossref.org has the problem. I need to think about when to introduce spaces. Generally speaking one doesn't switch to or from italic in the middle of an alphabetic (alphanumeric?) string. William Avery (talk) 12:10, 19 May 2025 (UTC)[reply]
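
A minimal sketch of the tag-stripping step, using plain regular expressions rather than BeautifulSoup; it follows the heuristic above (italics never start or stop mid-word), so a space is inserted only where the tag was the sole separator between two alphanumeric characters. Treat it as an illustration, not a tested implementation:

import re

TAG_RE = re.compile(r"</?i\s*>", flags=re.IGNORECASE)

def strip_italic_tags(label: str) -> str:
    """Remove <i>/</i> tags from a label or title.

    A space is inserted only where the tag sat directly between two
    alphanumeric characters, per the heuristic that one does not switch
    to or from italics in the middle of a word.
    """
    out = []
    pos = 0
    for m in TAG_RE.finditer(label):
        out.append(label[pos:m.start()])
        before = label[m.start() - 1] if m.start() > 0 else ""
        after = label[m.end()] if m.end() < len(label) else ""
        if before.isalnum() and after.isalnum():
            out.append(" ")
        pos = m.end()
    out.append(label[pos:])
    return "".join(out)

# Example from the discussion above: no extra spaces are needed here.
print(strip_italic_tags(
    "Hormone Signaling and Its Interplay With Development and Defense "
    "Responses in <i>Verticillium</i>-Plant Interactions"))
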
Request process

Accepted by (William Avery (talk) 19:04, 18 May 2025 (UTC)) and under process[reply]

Request to add subscriber counts to subreddits (2025-03-11)


Request date: 12 March 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Lots of items have one or more subreddits about the item's subject set (these are often the largest online community, discussion place or content aggregator relating to the subject).

I think subreddit subscriber counts are useful for many purposes, such as roughly estimating how popular things are and enabling some sorting, since few other popularity-related counts are integrated into Wikidata. For example, in a list of music genres, a sortable column that roughly shows how popular each genre is (among people online today) would be useful (one doesn't have to sort by it, it doesn't have to be perfect, and there could be more columns like it). The counts can also be used to analyse the rise or slowdown/decline of subreddits (or to see this at a glance on the Wikidata item).

However, many items do not have the subscriber count set, or only have a very old value. This is different for X/Twitter, where most items have the count set and it seems to be updated frequently by a bot. Here is a list of items with subreddits sorted by the stored subscriber count: Wikidata:List of largest subreddits. It shows that even among the largest subreddits, only a few have a subscriber count set.

Please set the subscriber counts for all uses of subreddit (P3984), and add a new value with preferred rank for those that already have an (old) one set. As qualifiers it needs point in time (P585) and subreddit (P3984). It would be best to run this bot task regularly, for example twice per year.
--Prototyperspective (talk) 23:38, 12 March 2025 (UTC)[reply]
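
A minimal sketch of the per-subreddit step, assuming the count goes into social media followers (P8687), as with the X/Twitter counts mentioned above, with point in time (P585) and subreddit (P3984) qualifiers; the property choice and the User-Agent are assumptions:

# Sketch: fetch a subreddit's subscriber count and prepare a statement.
# Property modelling (P8687 + P585/P3984 qualifiers) is an assumption.
import datetime
import requests
import pywikibot

HEADERS = {"User-Agent": "subreddit-count-bot/0.1 (example contact)"}

def subscriber_count(subreddit: str) -> int:
    """Read data.subscribers from Reddit's public about.json endpoint."""
    url = f"https://www.reddit.com/r/{subreddit}/about.json"
    r = requests.get(url, headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()["data"]["subscribers"]

def add_count(item_id: str, subreddit: str) -> None:
    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()
    item = pywikibot.ItemPage(repo, item_id)

    claim = pywikibot.Claim(repo, "P8687")   # social media followers (assumed)
    claim.setTarget(pywikibot.WbQuantity(amount=subscriber_count(subreddit),
                                         site=repo))

    today = datetime.date.today()
    when = pywikibot.Claim(repo, "P585")     # point in time
    when.setTarget(pywikibot.WbTime(year=today.year, month=today.month,
                                    day=today.day))

    which = pywikibot.Claim(repo, "P3984")   # subreddit
    which.setTarget(subreddit)

    item.addClaim(claim, summary="Add subreddit subscriber count")
    claim.addQualifier(when)
    claim.addQualifier(which)

# Example call with a hypothetical item/subreddit pairing:
# add_count("QXXX", "some_subreddit")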

Licence of data to import (if relevant)
Discussion
Request process

Request to add missing icons via logo property (2025-04-29)


Request date: 29 April 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Many items that have an icon in logo image (P154) do not have an image set in icon (P2910).

That's an issue because sometimes logos are icons (like app icons on a phone) and sometimes they are wide, banner-like logos, as for example at Q19718090#P154 and Q50938515#P154. If one then queries icon and falls back to logo when no icon is set, the result is mixed data of both the small, more or less square icons and other types of logos. When using that in a table column, for example, it makes the column much wider and the images in the column inconsistent.

So I think it would be best if an icon was consistently available in icon (P2910), without having to query logo image (P154) as a fallback. To understand what I mean, compare Wikidata:List of free software accounts on Bluesky, which has a nice-looking icon for nearly all entries, with Wikidata:List of free software accounts on Mastodon, where the icon is missing for most items.
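
A minimal sketch of the query asked about in the discussion below: listing items that have a logo image (P154) but no icon (P2910). It only lists candidates; whether to copy or move each file still needs the manual or heuristic check described in the discussion.

# Sketch: list items with a logo image (P154) but no icon (P2910).
import requests

QUERY = """
SELECT ?item ?logo WHERE {
  ?item wdt:P154 ?logo .
  FILTER NOT EXISTS { ?item wdt:P2910 [] }
}
LIMIT 1000
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "icon-gap-report/0.1 (example contact)"},
    timeout=60,
)
for row in r.json()["results"]["bindings"]:
    print(row["item"]["value"], row["logo"]["value"])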

Licence of data to import (if relevant)
Discussion
  • Is there a straightforward way to find all items where an icon is set in logo but missing from the icon property? Would it be better to copy the file to the icon property or to move it there (if unclear, I'd say just copy it)? Lastly, there is also the property small logo or icon (P8972); if that property is to be used, shouldn't SVG files in icon always be copied to it in addition, assuming that property is useful and should be set? That is because SVG files (set in the icon and/or logo property) can always also be used as a small icon, or not? --Prototyperspective (talk) 18:48, 29 April 2025 (UTC)[reply]
Note that by now many more items have been added to that free software accounts on Bluesky list, many of which do not have an icon. A simple way to tell a logo from an icon: logos often include text and are in a horizontal format, while icons are roughly square, usually have no text, and sometimes contain just one word or a letter. In far more than 50% of cases the logo can simply be copied to the icon property. Checking whether the image is roughly square probably already raises that to a percentage where doing this via mass-editing is reasonable. Prototyperspective (talk) 11:35, 26 September 2025 (UTC)[reply]
Request process

Request to import data from Geni


Request date: 11 June 2025, by: Immanuelle

Link to discussions justifying the request

Wikidata:Project_chat#Automated_tools_on_wikidata

Task description

I want to make a bot that will import data from Geni, adding more data to existing entries and creating new entries. I do not have a bot built right now, and would want to limit it to just adding profile links at first, then work towards adding additional information, and eventually new entries.

Licence of data to import (if relevant)
Discussion

I was asked on Project chat to ask here directly, so apologies if my request is too half-baked right now. I am not sure about the data licensing on Geni, but the person mentioned that this has been discussed before and is a point of controversy. Immanuelle (talk) 06:25, 11 June 2025 (UTC)[reply]

I don't think the string "geni" is specific enough; are you referring to https://www.geni.com/? I guess it would be good to add the full link or wiki page link to be more informative. Ttmms (talk) 12:00, 11 June 2025 (UTC)[reply]
@Ttmms Yes I am referring to that website. Immanuelle (talk) 19:38, 11 June 2025 (UTC)[reply]
So for example I could put in Jimmu with the profile https://www.geni.com/people/Emperor-Jimmu/6000000001829589817 and then it would do the following:
==First proposal==
Add the profile link to him and to his children/parents. The mother would require a manual decision because of the ambiguity, but the bot could go up the paternal line adding Geni profiles.
==Second proposal==
It could add other information like his participation in Q11590444
==Third proposal==
If it comes across a linked individual without a wikidata item, then it will create a new item based on the data there.
I just want to do the first proposal right now; I will make separate requests for the others later. Immanuelle (talk) 20:11, 11 June 2025 (UTC)[reply]
Have you looked at Wikidata:WikiProject Genealogy? I see there are standards for adding properties for these efforts there, and also that Geni is in Mix-n-Match, which may be a better way to import the data for existing items. Jane023 (talk) 14:51, 12 June 2025 (UTC)[reply]
@Jane023 I have not done so and will do so, thank you. Immanuelle (talk) 20:43, 12 June 2025 (UTC)[reply]
@MisterSynergy I have run a test that you can see in my recent contributions. It appears to be successfully adding geni profiles to pages like this https://www.wikidata.org/w/index.php?title=Q76227140&oldid=2360627655 Immanuelle (talk) 02:59, 14 June 2025 (UTC)[reply]
Don’t you have a huge copyright issue here? --Emu (talk) 06:20, 14 June 2025 (UTC)[reply]
@Emu For adding Geni links there is no copyright issue.
However, even for taking data from Geni, here is their policy: https://www.geni.com/company/terms_of_use
Use of Data and Attribution Users of the Website or Members may collect and use other user Content only to the extent allowed by such Member’s privacy settings subject to our Privacy Policy and solely limited to that Content that is designated as public by the Member. If you collect and/or use any such Content you agree to the following: (a) provide proper attribution to Geni, (b) provide a link to the Geni Website, and (c) include a statement that your use/product/website is not endorsed by or affiliated with Geni. Immanuelle (talk) 21:31, 14 June 2025 (UTC)[reply]
I did test edits. It seemed to work pretty well. Although I have faced some technical issues I did not expect, nothing appeared to be an issue related to accuracy. Immanuelle (talk) 04:14, 15 June 2025 (UTC)[reply]
Yeah, that’s incompatible with CC0. --Emu (talk) 06:50, 15 June 2025 (UTC)[reply]
@Emu Okay, in that case taking data from Geni is a no-go. Is it still fine to have a bot that just adds the links? Immanuelle (talk) 02:36, 17 June 2025 (UTC)[reply]
That’s probably fine but keep in mind that geni.com data can be faulty --Emu (talk) 06:04, 17 June 2025 (UTC)[reply]
ChristianKl (talk) 15:11, 24 June 2017 (UTC) Melderick (talk) 12:22, 25 July 2017 (UTC) Richard Arthur Norton Jklamo (talk) 20:21, 14 October 2017 (UTC) Sam Wilson Gap9551 (talk) 18:41, 5 November 2017 (UTC) Jrm03063 (talk) 15:46, 22 May 2018 (UTC) Egbe Eugene (talk) Eugene233 (talk) 03:40, 19 June 2018 (UTC) Dcflyer (talk) 07:45, 9 September 2018 (UTC) Gamaliel (talk) 13:01, 12 July 2019 (UTC) Pablo Busatto (talk) 11:51, 24 August 2019 (UTC) Theklan (talk) 19:25, 20 December 2019 (UTC) SM5POR (talk) 20:17, 29 May 2020 (UTC) Pmt (talk) 23:22, 27 June 2020 (UTC) CarlJohanSveningsson (talk) 12:13, 30 July 2020 (UTC) Ayack (talk) 14:39, 12 October 2020 (UTC) EthanRobertLee (talk) 19:17, 20 December 2020 (UTC) -- Darwin Ahoy! 18:20, 25 December 2020 (UTC) Germartin1 (talk) 03:13, 30 December 2020 (UTC) Skim (talk) 00:13, 10 January 2021 (UTC) El Dubs (talk) 21:55, 29 April 2021 (UTC) CAFLibrarian (talk) 16:36, 30 September 2021 (UTC) Jheald (talk) 18:50, 23 December 2021 (UTC)[reply]

Notified participants of WikiProject Genealogy

Request process

Request to delete orphan talk pages (2025-06-13)


Request date: 13 June 2025, by: Dr.üsenfieber

Link to discussions justifying the request
Task description
  • Delete all pages in the Talk namespace for which no page of the same name exists in the article namespace. Don't delete pages that transclude Template:Historical.
  • Delete all pages in the Property talk namespace for which no page of the same name exists in the Property namespace. Don't delete pages that transclude Template:Historical.
  • The bot would be quite simple to implement and should probably run regularly; a sketch of the check is below. -- Dr.üsenfieber (talk) 13:38, 13 June 2025 (UTC)[reply]
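
A minimal sketch of the check described above, using pywikibot. It only reports candidates (deletion needs admin rights); the namespace numbers (1 for Talk, 121 for Property talk) are the standard Wikidata values.

# Sketch: report talk pages whose subject page no longer exists and that do
# not transclude Template:Historical. Reporting only; deletion needs an admin.
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
TALK_NAMESPACES = (1, 121)  # Talk, Property talk

def is_historical(page: pywikibot.Page) -> bool:
    return any(t.title(with_ns=True) == "Template:Historical"
               for t in page.templates())

for ns in TALK_NAMESPACES:
    for talk in site.allpages(namespace=ns, content=False):
        subject = talk.toggleTalkPage()
        if not subject.exists() and not is_historical(talk):
            print("Orphan talk page:", talk.title())
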
Discussion


Request process

Request to import UOIDs into Property:P13726


Request date: 2 September 2025, by: William C. Minor

Link to discussions justifying the request
Task description

Import the UOIDs from this list on fr:wiki into Property:P13726 (CESAR title ID).

Licence of data to import (if relevant)
Discussion

Automating this work is a good idea; it would be very time consuming to do manually. Thanks! Jaucourt (talk) 17:13, 10 September 2025 (UTC)[reply]

Request process

@Jaucourt: @William C. Minor: Task completed (list of pages modified). Best regards. (17:45, 15 November 2025 (UTC)) [reply]

Request to add legal citations as aliases (2025-09-07)

Request date: 7 September 2025, by: ToxicPea

Task description

I would like to request that a bot read the value of legal citation of this text (P1031) for every item whose instance of (P31) value is UK Statutory Instrument (Q7604686), Welsh Statutory Instrument (Q100754500), Scottish statutory instrument (Q7437991), or statutory rules of Northern Ireland (Q7604693), and add that value as an alias of the item.
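
A minimal sketch of how this could work, querying the four classes and copying P1031 values into the aliases; storing them as English aliases is an assumption, since these citations are English-language strings:

# Sketch: add the legal citation (P1031) as an alias for UK/Welsh/Scottish/NI
# statutory instruments. Alias language "en" is an assumption.
import pywikibot
from pywikibot import pagegenerators

QUERY = """
SELECT DISTINCT ?item WHERE {
  VALUES ?class { wd:Q7604686 wd:Q100754500 wd:Q7437991 wd:Q7604693 }
  ?item wdt:P31 ?class ;
        wdt:P1031 ?citation .
}
"""

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

for item in pagegenerators.WikidataSPARQLPageGenerator(QUERY, site=repo):
    item.get()
    citations = [c.getTarget() for c in item.claims.get("P1031", [])]
    aliases = item.aliases.get("en", [])
    new = [c for c in citations
           if c and c not in aliases and c != item.labels.get("en")]
    if new:
        item.editAliases({"en": aliases + new},
                         summary="Add legal citation (P1031) as alias")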


Discussion


Request process

Request to change genre for film adaptations to 'has characteristic' (2025-09-28)


Request date: 29 September 2025, by: Gabbe

Link to discussions justifying the request

Property_talk:P136#genre_(P136)_=_film_adaptation_(Q1257444)_or_based_on_(P144)_=_item_?.

Task description

For items with instance of (P31) set to film (Q11424) (or one of its subclasses) and genre (P136) set to film adaptation (Q1257444), film based on literature (Q52162262), film based on book (Q52207310), film based on a novel (Q52207399), or film based on actual events (Q28146524), the property of that statement should be changed to has characteristic (P1552). Any sources or qualifiers on the statement should be carried over.

Similarly for items with instance of (P31) set to television series (Q5398426) (or one of its subclasses) and genre (P136) set to television adaptation (Q101716172), television series based on a novel (Q98526239) or television series based on a video game (Q131610623).

The reason is that "based on a book" (and so on) is not a "genre".
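
A minimal sketch of the per-item move with pywikibot: it copies the target, qualifiers and references from a matching genre (P136) statement to a new has characteristic (P1552) statement and then removes the old one. The query and value list follow the description above; the television-series case would work the same way with its own class and values.

# Sketch: move selected genre (P136) values to has characteristic (P1552),
# keeping qualifiers and references.
import pywikibot
from pywikibot import pagegenerators

VALUES = {"Q1257444", "Q52162262", "Q52207310", "Q52207399", "Q28146524"}

QUERY = """
SELECT DISTINCT ?item WHERE {
  ?item wdt:P31/wdt:P279* wd:Q11424 ;
        wdt:P136 ?genre .
  VALUES ?genre { wd:Q1257444 wd:Q52162262 wd:Q52207310 wd:Q52207399 wd:Q28146524 }
}
"""

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

for item in pagegenerators.WikidataSPARQLPageGenerator(QUERY, site=repo):
    item.get()
    for old in list(item.claims.get("P136", [])):
        target = old.getTarget()
        if target is None or target.getID() not in VALUES:
            continue
        new = pywikibot.Claim(repo, "P1552")
        new.setTarget(target)
        item.addClaim(new, summary="Move adaptation statement from P136 to P1552")
        # Copy qualifiers.
        for prop, quals in old.qualifiers.items():
            for q in quals:
                qual = pywikibot.Claim(repo, prop, is_qualifier=True)
                qual.setTarget(q.getTarget())
                new.addQualifier(qual)
        # Copy references (each source is a group of reference claims).
        for source in old.sources:
            refs = []
            for prop, claims in source.items():
                for c in claims:
                    ref = pywikibot.Claim(repo, prop, is_reference=True)
                    ref.setTarget(c.getTarget())
                    refs.append(ref)
            if refs:
                new.addSources(refs)
        item.removeClaims(old, summary="Remove adaptation value from P136")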

Discussion


Request process

Request to import links/IDs of full films available for free on YouTube (2025-09-29)


Request date: 29 September 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

This isn't a simple task but it would be very useful: please import the links (IDs) of full movies available for free on YouTube via YouTube video ID (P1651) to the items about those films.

  • If this is functioning well, as a second step please expand it so that new items are also created for films that are on YouTube and IMDb but not yet in Wikidata. Maybe that should be a separate subsequent request.
  • For importing films, there would need to be some way of finding and adding source YouTube channels to import from, i.e. channels that host such films and that the script scans for them (e.g. film-length duration + an IMDb entry with a matching name).
  • Complications:
    • Videos that are offline should get their link removed from the items. This may need a separate bot request. I noticed some of the already-added IDs are offline (and quite a few are geoblocked or just trailers – see below).
    • There are many channels containing full films. Maybe there already is a list of channels containing such somewhere or one creates a wiki-page where people can add legitimate channels containing full films.
    • I think the film should not be linked if it was uploaded less than e.g. 4 months ago to make sure it's not some illegitimate upload.
    • The language qualifier should be set. Generally, the language matches that of the video title.
    • It should be specified which type of video it is: object of statement has role (P3831) should be set to full video available on YouTube for free (Q133105529). This distinguishes full films from trailers and could also be used to, e.g., mark one full episode of a series on the series item. Currently nearly none of the items have this value set, and many trailers are not marked as trailers. This could be fixed in large part using the duration (P2047) qualifier, since long videos are usually the full film and short ones the trailer (see the sketch after this list).
    • If films are geoblocked in some or many regions, that should be specified (including where). This may require some new qualifier/property/item(s); please comment if you have something to add here. I think for now, or for early imports, it would be good to simply not import geoblocked videos. It may be less of an issue for non-English videos that are not geoblocked in all regions where many people watch videos in that language.
    • I don't know if there is a qualifier that could be used to specify whether the film at the URL is available for free or only for purchase, but such a qualifier should also be set so these can be distinguished from YouTube videos that are only available for purchase.
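
A minimal sketch of the per-video check referenced in the list above, using yt-dlp to read the title and duration and comparing the duration against the film's duration (P2047) from Wikidata before the ID would be added with the P3831 = Q133105529 qualifier. The 10-minute tolerance and the assumption that P2047 is stored in minutes are illustrative only.

# Sketch: decide whether a YouTube video looks like the full film for a given
# Wikidata item. Thresholds and the matching logic are assumptions.
import yt_dlp
import pywikibot

def video_info(video_id: str) -> dict:
    """Fetch title, duration (seconds) etc. without downloading the video."""
    opts = {"quiet": True, "skip_download": True}
    with yt_dlp.YoutubeDL(opts) as ydl:
        return ydl.extract_info(f"https://www.youtube.com/watch?v={video_id}",
                                download=False)

def looks_like_full_film(video_id: str, item_id: str,
                         tolerance_min: int = 10) -> bool:
    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()
    item = pywikibot.ItemPage(repo, item_id)
    item.get()

    durations = item.claims.get("P2047", [])   # duration; unit assumed minutes
    if not durations:
        return False
    film_minutes = float(durations[0].getTarget().amount)

    info = video_info(video_id)
    video_minutes = (info.get("duration") or 0) / 60
    return abs(video_minutes - film_minutes) <= tolerance_min

# If the check passes, the bot would add a YouTube video ID (P1651) statement
# with object of statement has role (P3831) = Q133105529 and a language
# qualifier (P407), as described above.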

Background: adding this data could greatly improve WikiFlix in the future, which currently only shows films whose full video is on Commons. That is fine as far as it goes, but for films from 1930 or newer YouTube has far more free films, and WikiFlix could become a UI for browsing well-organized free films, including short films, in an interface dedicated to films, on the Web, without having to install anything, and using data in Wikidata, e.g. based on genre. It could become one of the most useful and/or most widely used real-world applications of Wikidata.

Next to each film there would be metadata from Wikidata and more, like the film description and IMDb rating; even Captain Fact (Q109017865) fact-check info could be fetched via the IDs set in the item. If no URL for the film cover is specified, it would just load the cached thumbnail as the cover. Somewhere at the top of WikiFlix there could be a toggle for whether to also show full free films on YouTube etc., or only the – mostly very old – public domain films, as is currently the case. Theoretically, there could also be separate sites like it, not on Toolforge and possibly not even querying Wikidata directly. Lastly, until YouTube ID imports are done at a substantial scale, people could use Listeria tables like the two I recently created, which can also be used to improve the data – like adding main subjects or removing offline links – and to keep track of new additions:


  • There may already be tools out there that only need to be adjusted, such as yt-dlp, and import scripts/bots for other IDs that are being imported.

Note that later on one could do the same for full videos in public broadcast media centers (the properties are mostly not there yet), like the ZDF Mediathek. One could also import data from sites like doku-streams and fernsehserien. This would integrate full films scattered across many sites and YouTube channels, extend them with wiki features, and improve the usefulness of Wikidata by having items for more films.

Previous thread

Licence of data to import (if relevant)

Irrelevant since it's just links, but nevertheless see the discussion and point 3 under Complications.

Discussion


Request process

Request to import IMDb ratings (2025-10-07)


Request date: 8 October 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

This is one of the most-used structured data points that many people use in their daily lives, and it is needed for any application that uses Wikidata for movies. One such application is WikiFlix, which could become a Netflix-alternative UI for browsing and watching freely available full films.

For example, this query of Wikidata can't work because films don't have their IMDb rating set.

Not even the popular films named in Wikidata:Property proposal/IMDb rating have their IMDb rating set.

Could somebody import this data for all the films that have IMDb ID (P345) set?

Again, it would be very useful regardless of whether WikiFlix gets used a lot, and I think WikiFlix could become the main way people learn about and first use Wikidata outside of Wikipedia, where these ratings would be important data to have. Note that it also needs qualifiers for the date and the number of user ratings.
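
A minimal sketch of the matching step, reading IMDb's public ratings dataset (title.ratings.tsv.gz) and joining it against items that have IMDb ID (P345). How the rating is then modelled (the proposed IMDb-rating property, or an existing review-score property with qualifiers) depends on the outcome of the property proposal linked above, so the sketch only reports what would be added:

# Sketch: join IMDb's ratings dataset against Wikidata items with an IMDb ID.
# Only reports candidate values; the target property depends on the proposal.
import csv
import gzip
import io
import requests

RATINGS_URL = "https://datasets.imdbws.com/title.ratings.tsv.gz"
SPARQL = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?item ?imdb WHERE {
  ?item wdt:P31 wd:Q11424 ;
        wdt:P345 ?imdb .
  FILTER(STRSTARTS(?imdb, "tt"))
}
LIMIT 5000
"""

def load_ratings() -> dict:
    """Return {tconst: (averageRating, numVotes)} from the IMDb dataset."""
    raw = requests.get(RATINGS_URL, timeout=120).content
    text = io.TextIOWrapper(gzip.GzipFile(fileobj=io.BytesIO(raw)),
                            encoding="utf-8")
    reader = csv.DictReader(text, delimiter="\t")
    return {row["tconst"]: (row["averageRating"], row["numVotes"])
            for row in reader}

ratings = load_ratings()
r = requests.get(SPARQL, params={"query": QUERY, "format": "json"},
                 headers={"User-Agent": "imdb-rating-report/0.1 (example contact)"},
                 timeout=120)
for row in r.json()["results"]["bindings"]:
    imdb_id = row["imdb"]["value"]
    if imdb_id in ratings:
        rating, votes = ratings[imdb_id]
        print(row["item"]["value"], imdb_id, rating, votes)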

Licence of data to import (if relevant)
Discussion


Request process

Request to check archival bot function


Request date: 14 October 2025, by: Evel Prior

Link to discussions justifying the request

Request is to check existing bot archival. No discussion required.

Task description

Could someone check the archival function on Wikidata:Interwiki conflicts/Unresolved? There are several items – from at least the last 3 years – that have been resolved or declined, yet not moved to the relevant /Archive.

Discussion
Request process

Request to specify language of images, videos & audios (2025-11-07)


Request date: 8 November 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Media files in items can be in any language, but often the language is not specified. The language qualifier is used, for example, by the Wikidata infobox on Commons, which displays the file in the user's language if available.

Please set language of work or name (P407) for videos and audio files, as well as for images with text in them (like diagrams), based on the files' Commons categories.

There are multiple properties that can have media files set, such as video (P10) and schematic (P5555).

See c:Category:Audio files by language, c:Category:Information graphics by language/c:Category:Images by language and c:Category:Videos by language.

I already did this for c:Category:Spoken English Wikipedia files and for c:Category:Videos in German and a few other languages. I described step by step how I did it here on the talk page of the wish "Suggest media set in Wikidata items for their Wikipedia articles", which is another example of how this data could be useful (and there are many more reasons why specifying the language of files is important).

That was 1) a largely slow and manual process, 2) only done for a few languages, and 3) not done periodically, as a bot could do. One can't possibly check 300 language categories for 3 media types every second month or so. A challenge could be miscategorizations; however, these are very rare, especially for files not underneath the large category 'Videos in English'. The bot would set multiple languages on some files, so one could scan all files that have multiple languages set and fix them (unless the file is indeed multilingual).

Here is another procedure, including SPARQL (thanks to Zache), that uses Commons categories to set media file qualifiers in Wikidata, specifically the recording date of audio versions of Wikipedia articles (important metadata, e.g. because many are outdated by over a decade). Maybe some of this is useful here too.
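
A minimal sketch of one possible approach: query video (P10) statements that lack a P407 qualifier, then look up each file's categories on Commons and map known language categories to language items. The category-to-item map below is a tiny example to extend (German → Q188 is the only pairing shown), and the sketch only prints what it would set:

# Sketch: find P10 statements without a language qualifier and derive the
# language from the file's Commons categories. Reporting only.
import urllib.parse
import requests

HEADERS = {"User-Agent": "media-language-report/0.1 (example contact)"}
CATEGORY_TO_LANGUAGE = {
    "Category:Videos in German": "Q188",   # extend with more languages
}

QUERY = """
SELECT ?item ?file WHERE {
  ?item p:P10 ?stmt .
  ?stmt ps:P10 ?file .
  FILTER NOT EXISTS { ?stmt pq:P407 [] }
}
LIMIT 200
"""

def commons_categories(filename: str) -> list:
    r = requests.get("https://commons.wikimedia.org/w/api.php", params={
        "action": "query", "prop": "categories", "titles": f"File:{filename}",
        "cllimit": "max", "format": "json"}, headers=HEADERS, timeout=30)
    pages = r.json()["query"]["pages"]
    cats = []
    for page in pages.values():
        cats += [c["title"] for c in page.get("categories", [])]
    return cats

r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"},
                 headers=HEADERS, timeout=60)
for row in r.json()["results"]["bindings"]:
    # The file value is a Special:FilePath URL; recover the plain file name.
    name = urllib.parse.unquote(row["file"]["value"].rsplit("/", 1)[-1])
    for cat in commons_categories(name):
        if cat in CATEGORY_TO_LANGUAGE:
            print(row["item"]["value"], name, "->", CATEGORY_TO_LANGUAGE[cat])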

Licence of data to import (if relevant)

(not relevant)

Discussion


Request process

Request to import subject named as & tags for Stack Exchange sites (2025-11-10)


Request date: 10 November 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Could somebody please import the subject named as (P1810) qualifiers for Stack Exchange site URL (P6541)?

This could then be displayed in a new column at Wikidata:List of StackExchange sites and then one could sort the table by it and compare it to https://stackexchange.com/sites?view=list#name to add all the missing Stack Exchange sites to items.

It would also be good if Stack Exchange tag (P1482) values were imported as well, as they are mostly set only for Stack Overflow and not for other Stack Exchange sites. For an example, I added two other tag URLs to Q1033951#P1482.

I think these things could be done with a script. The sites page linked above has this info all on one page, and maybe there is a more structured format of it, or a list of these sites that includes URL and name. One could also have the script open the URLs and import the page title as subject named as. For the tags, one could check for a tag named exactly like the item or, if present, like the linked Stack Overflow tag.
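
There is indeed a structured list: the Stack Exchange API's /sites endpoint returns each site's name and URL. A minimal sketch that builds a URL-to-name map from it and matches it against items with Stack Exchange site URL (P6541) that lack the subject named as (P1810) qualifier; it only prints the pairs it would write:

# Sketch: map Stack Exchange site URLs to site names via the public API and
# report P6541 statements missing a subject named as (P1810) qualifier.
import requests

HEADERS = {"User-Agent": "se-site-name-report/0.1 (example contact)"}

def stackexchange_sites() -> dict:
    """Return {site_url: name} from https://api.stackexchange.com/2.3/sites."""
    sites, page = {}, 1
    while True:
        r = requests.get("https://api.stackexchange.com/2.3/sites",
                         params={"pagesize": 100, "page": page},
                         headers=HEADERS, timeout=30)
        data = r.json()
        for s in data["items"]:
            sites[s["site_url"].rstrip("/")] = s["name"]
        if not data.get("has_more"):
            return sites
        page += 1

QUERY = """
SELECT ?item ?url WHERE {
  ?item p:P6541 ?stmt .
  ?stmt ps:P6541 ?url .
  FILTER NOT EXISTS { ?stmt pq:P1810 [] }
}
"""

names = stackexchange_sites()
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"},
                 headers=HEADERS, timeout=60)
for row in r.json()["results"]["bindings"]:
    url = row["url"]["value"].rstrip("/")
    if url in names:
        print(row["item"]["value"], url, "->", names[url])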

Licence of data to import (if relevant)

(not relevant)

Discussion


Request process