Shortcut: WD:RBOT

Wikidata:Bot requests


If you have a bot request, add a new section using the button and describe exactly what you want. To reduce processing time, first discuss the legitimacy of your request with the community in the Project chat or on the relevant WikiProject's talk page. Please refer to previous discussions justifying the task in your request.

For botflag requests, see Wikidata:Requests for permissions.

Tools available to all users which can be used to accomplish the work without the need for a bot:

  1. PetScan for creating items from Wikimedia pages and/or adding same statements to items
  2. QuickStatements for creating items and/or adding different statements to items
  3. Harvest Templates for importing statements from Wikimedia projects
  4. OpenRefine to import any type of data from tabular sources
  5. WikibaseJS-cli to write shell scripts to create and edit items in batch
  6. Programming libraries to write scripts or bots that create and edit items in batch
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2023/05.
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 2 days.


Cleaning of streaming media service URLs

Request date: 12 December 2020, by: Swicher

I'm not sure if this is the best place to propose it, but when reviewing the URLs from a query with this script:

import requests
from concurrent.futures import ThreadPoolExecutor

# Checks the link of an item, if it is down then saves it in the variable "novalid"
def check_url_item(item):
    # Some sites may return error if a browser useragent is not indicated
    useragent = 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77'
    item_url = item["url"]["value"]
    print("Checking %s" % item_url, end="\r")
    # A timeout avoids threads hanging indefinitely on unresponsive hosts
    req = requests.head(item_url, headers = {'User-Agent': useragent}, allow_redirects = True, timeout = 30)
    if req.status_code == 404:
        print("The url %s in the element %s returned error" % (item_url, item["item"]["value"]))
        novalid.append(item)

base_query = """SELECT DISTINCT ?item ?url ?value
{
%s
  BIND(IF(ISBLANK(?dbvalue), "", ?dbvalue) AS ?value)
  BIND(REPLACE(?dbvalue, '(^.*)', ?url_format) AS ?url)
}"""
union_template = """  {{
    ?item p:{0} ?statement .
    OPTIONAL {{ ?statement ps:{0} ?dbvalue }}
    wd:{0} wdt:P1630 ?url_format.
  }}"""
properties = [
    "P2942", #Dailymotion channel
    "P6466", #Hulu movies
    "P6467", #Hulu series
]
# Items with links that return errors will be saved here
novalid = []

query = base_query % "\n  UNION\n".join([union_template.format(prop) for prop in properties])
req = requests.get('https://query.wikidata.org/sparql', params = {'format': 'json', 'query': query})
data = req.json()

# Schedule and run 25 checks concurrently while iterating over items
check_pool = ThreadPoolExecutor(max_workers=25)
result = check_pool.map(check_url_item, data["results"]["bindings"])

I have noticed that almost half of them are invalid. I do not know whether in these cases it is better to delete or archive them, but a bot should perform this task periodically, since the catalogues of streaming services tend to change frequently (probably many of these broken links correspond to movies/series whose licence was not renewed). Unfortunately I could only include Hulu and Dailymotion, since the rest of the services have the following problems:

For those sites it is necessary to perform a more specialized check than a HEAD request (for example, using youtube-dl (Q28401317) for YouTube).

In the case of Hulu I have also noticed that some items can have valid values in both Hulu movie ID (P6466) and Hulu series ID (P6467) (see for example The Tower of Druaga (Q32256)), so that should be taken into account when cleaning links.

Request process

Request to add identifiers from Freebase (2021-02-11)

Thanks to a recent import, we currently have more than 1.2 million items where the only identifier is Freebase ID (P646). However, checking https://freebase.toolforge.org/ , some of them have other identifiers available there.

Samples:

See Wikidata:Project_chat#Freebase_(bis) for discussion.

Task description

Import IDs where available. Map keys to properties if a mapping is not yet available at Wikidata:WikiProject_Freebase/Mapping.

Discussion


Request process

Request to change lexeme forms' grammatical features (2021-07-08)

Request date: 8 July 2021, by: Bennylin

Link to discussions justifying the request
Task description

How can I change the grammatical features of a form? (I operate a bot, I just need to know the commands.) I have the list of lexemes. I reckon this should not be too hard; I'm just not familiar with the command to make the changes.
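
A minimal sketch of the relevant API call, assuming a reasonably recent pywikibot (for simple_request and the csrf token) and the WikibaseLexeme module wbleditformelements; the feature QID in the commented example is a placeholder, and whether omitting "representations" from the data leaves the existing representations untouched should be verified on a test lexeme first.

import json
import pywikibot

site = pywikibot.Site('wikidata', 'wikidata')
site.login()

def set_form_features(form_id, feature_qids, summary):
    """Replace the grammatical features of a single lexeme form (e.g. 'L1234-F2')."""
    request = site.simple_request(
        action='wbleditformelements',
        formId=form_id,
        data=json.dumps({'grammaticalFeatures': feature_qids}),
        summary=summary,
        token=site.tokens['csrf'],
        bot=1,
    )
    return request.submit()

# Placeholder form ID and feature QID; take them from your list of lexemes.
# set_form_features('L1234-F2', ['Q110786'], 'updating grammatical features per list')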

Licence of data to import (if relevant)
Discussion


Request process

Request to merge MNAC duplicates (2021-11-13)

Request date: 13 November 2021, by: Jura1

Task description

Back in 2016, there seems to have been some duplication between two bots. Compare:

It showed up for several works at Wikidata:WikiProject_sum_of_all_paintings/Creator/Ramon_Casas_i_Carbó in Museu Nacional d'Art de Catalunya (Q861252) and Museu Nacional d'Art de Catalunya (Q23681318).

The idea is to identify all of them (for other artists as well) and merge them.

Discussion


Request process

Request to undo merge of EC meetings (2021-12-02)

Request date: 2 December 2021, by: Jura1

Link to discussions justifying the request
Task description
Licence of data to import (if relevant)
Discussion


Request process

Request to delete statements and sitelinks and merge items: dewiki duplicates (2021-12-16)

Request date: 16 December 2021, by: Jura1

Task description
Discussion


Request process

Request to make buildings searchable by address (2022-01-05)

Request date: 5 January 2022, by: Jura1

Problem

When adding these statements Special:Search/haswbstatement:P669=Q688477 (currently 62), I noticed that most buildings can't be found by merely searching for the address: Special:Search/Getreidegasse Salzburg (currently 21).

This is despite the fact that most items include street address (P6375) with the building address (e.g. Q37970986#P6375); the reason is that P6375 isn't indexed for full-text search.

The easiest solution would have been to index the statement for full-text search, but apparently this won't happen any time soon (see Wikidata:Report_a_technical_problem/WDQS_and_Search#index_"street_address"_(P6375)_strings).

The alternative would be to add the address as alias. Sample: Q37998050 with alias "Getreidegasse 11, Salzburg".

Task description
  • select items with buildings and P6375
  • check if address is in label or alias
  • if not, add address as alias (without postal code)
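
A minimal pywikibot sketch of these three steps, assuming the building items are supplied as a list of QIDs (e.g. from a SPARQL query for P669/P6375) and that stripping a standalone 4-5 digit group is an acceptable first approximation of "without postal code":

import re
import pywikibot

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

def add_address_alias(qid):
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    for claim in item.claims.get('P6375', []):
        address = claim.getTarget()          # WbMonolingualText
        if address is None:
            continue
        # Naive postal-code removal: drop a standalone 4-5 digit group
        alias = re.sub(r'\s*\b\d{4,5}\b\s*', ' ', address.text).strip()
        lang = address.language
        label = item.labels.get(lang, '')
        aliases = item.aliases.get(lang, [])
        # Skip if the address is already present as label or alias
        if alias == label or alias in aliases:
            continue
        item.editAliases({lang: aliases + [alias]},
                         summary='adding street address as alias')

# for qid in ['Q37998050']:   # sample item from the request
#     add_address_alias(qid)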
Discussion

@Jura1 This is a request I am willing to fulfill, but please start a community-wide discussion on it. I am afraid this job should first be considered by the community. Vojtěch Dostál (talk) 16:36, 14 February 2023 (UTC)

Request process

Standards importer (ISO and maybe others) (2022-10-16)

Request date: 17 October 2022, by: Vladimir Alexiev

Link to discussions justifying the request
  • I'm adding some basic items for the architecture/construction industry: AECO, CDE; and linking them to existing items for BIM, IFC, etc. I was surprised there's no item for ISO 19650, which defines BIM
  • No discussion yet, I'll ping at
Task description
  • Create a bot that can import basic metadata about a standard
  • Example: https://www.iso.org/ics/93.010/x/ describes ISO 19650 Organization and digitization of information about buildings and civil engineering works, including building information modelling (BIM) — Information management using building information modelling, and a list of its parts
  • ISO 19650 has the following parts. E.g. https://www.iso.org/standard/68078.html describes part 1:
ISO 19650-1:2018 — Part 1: Concepts and principles
ISO 19650-2:2018 — Part 2: Delivery phase of the assets
ISO 19650-3:2020 — Part 3: Operational phase of the assets
ISO 19650-4:2022 — Part 4: Information exchange
ISO 19650-5:2020 — Part 5: Security-minded approach to information management
ISO/CD 19650-6 — Part 6: Health and Safety
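
A minimal pywikibot sketch of the item-creation step, with the parts list above typed in by hand; the P31 target QID is a placeholder that must be verified, and a real bot would first check that no item for the standard already exists.

import pywikibot

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

# Placeholder QID for the P31 target (e.g. "technical standard"); verify before running.
STANDARD_CLASS = 'Q317623'

PARTS = [
    ('ISO 19650-1:2018', 'Part 1: Concepts and principles'),
    ('ISO 19650-2:2018', 'Part 2: Delivery phase of the assets'),
    ('ISO 19650-3:2020', 'Part 3: Operational phase of the assets'),
    ('ISO 19650-4:2022', 'Part 4: Information exchange'),
    ('ISO 19650-5:2020', 'Part 5: Security-minded approach to information management'),
]

def create_standard_item(code, part_title):
    # A real bot must first search for an existing item with this label.
    item = pywikibot.ItemPage(repo)          # new, not yet saved item
    item.editEntity({
        'labels': {'en': code},
        'descriptions': {'en': 'ISO standard, ' + part_title},
    }, summary='creating item for ISO standard part')
    claim = pywikibot.Claim(repo, 'P31')
    claim.setTarget(pywikibot.ItemPage(repo, STANDARD_CLASS))
    item.addClaim(claim, summary='adding instance of')
    return item

# for code, title in PARTS:
#     create_standard_item(code, title)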
Licence of data to import (if relevant)

I don't think standards metadata is copyrighted

Discussion

From the enwiki page you linked,

The standards are protected by copyright and most of them must be purchased. However, about 300 of the standards produced by ISO and IEC's Joint Technical Committee 1 (JTC 1) have been made freely and publicly available.[2]

So I think there definitely need to be some citations. RPI2026F1 (talk) 12:13, 27 October 2022 (UTC)

Request process

Request to merge similar statements on items created by QuickStatements batch #104278 (2022-11-05)

Request date: 5 November 2022, by: RPI2026F1

Link to discussions justifying the request
Task description

I tried to make one claim with two references using QuickStatements, but it created two identical claims with the references attached in an undefined order. I need these two claims to be merged together (they have the same value) and both references kept.

Licence of data to import (if relevant)
Discussion


Request process

Request to link Icelandic categories, reusing article sitelink info (2022-11-05)

Request date: 6 November 2022, by: Snævar

Link to discussions justifying the request
  • Do I need one?
Task description

Link Icelandic categories with English categories according to quarry:query/25561. In the query the Icelandic category is on the left and the English one on the right. The query takes Icelandic articles whose sitelinks point to English articles, finds an Icelandic category with the same name as the article, and reuses the article's sitelink information for the category. Basically, it reuses information from the article and transfers it to a category with the same name.

Skip on any conflict. If the English category is a redirect, follow the redirect. (The Icelandic categories are not redirects; those have been excluded in the query.) The English categories have to be checked, as they may or may not exist. This may also involve merging the item with the Icelandic sitelink into the item with the English category; I have not checked that either. If the Icelandic sitelink is in an item, it is the only sitelink there; that has been checked.
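
A sketch of one way to do this with pywikibot, assuming the quarry result has been exported as tab-separated lines of "Icelandic category title<TAB>English category title"; full conflict handling (e.g. the English category already belonging to a different item) is reduced here to skipping on any error, as requested.

import pywikibot

iswiki = pywikibot.Site('is', 'wikipedia')
enwiki = pywikibot.Site('en', 'wikipedia')

def link_category(is_title, en_title):
    is_cat = pywikibot.Page(iswiki, is_title)
    en_cat = pywikibot.Page(enwiki, en_title)
    if not en_cat.exists():
        return                                    # English category may not exist
    if en_cat.isRedirectPage():
        en_cat = en_cat.getRedirectTarget()       # follow the redirect
    item = pywikibot.ItemPage.fromPage(is_cat)    # item holding the Icelandic sitelink
    item.get()
    if 'enwiki' in item.sitelinks:
        return                                    # conflict: already has an English sitelink
    item.setSitelink({'site': 'enwiki', 'title': en_cat.title()},
                     summary='linking English category per quarry:query/25561')

with open('quarry-25561.tsv', encoding='utf-8') as handle:
    for line in handle:
        is_title, en_title = line.rstrip('\n').split('\t')
        try:
            link_category(is_title, en_title)
        except Exception as error:                # skip on any conflict
            print(is_title, en_title, error)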

Licence of data to import (if relevant)

Not relevant, considered uncopyrightable

Discussion


Request process

Request to periodically add VIAF IDs to humans (2022-11-06)

Request date: 6 November 2022, by: Epìdosis

Task description

Given an item with all of the following features:

  1. instance of (P31) human (Q5),
  2. at least one non-deprecated VIAF-member ID (for a list of VIAF-member properties see https://w.wiki/PRN),
  3. no VIAF ID (P214) at any rank;

the bot should

  1. check whether all the non-deprecated VIAF-member IDs contained in the item are present in one or more VIAF clusters;
  2. add all the VIAF cluster IDs obtained in this way to the item with the following reference: stated in (P248) Virtual International Authority File (Q54919) + VIAF ID (P214) cluster ID + retrieved (P813) date of retrieval + based on heuristic (P887) inferred from VIAF ID containing an ID already present in the item (Q115111315) (example edit).

Note 1: the restriction to personal items, and thus to VIAF personal clusters, is due to the fact that the quality of VIAF non-personal clusters is much lower (i.e. there are many conflations), so it is better to avoid a massive addition of conflated clusters, which could afterwards attract non-pertinent IDs.

Note 2: Property talk:P214/Duplicates/humans (just emptied) can easily be used to monitor the quality of the additions and to evaluate whether their quality is low enough to justify suspending the bot activity.

Note 3: as the title says, I think the bot should do this job not once but on a periodic basis, ideally once a month in my opinion (though once every two/three/six months would be fine as well).
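
A sketch of the cluster lookup in step 1, assuming VIAF's sourceID redirect (https://viaf.org/viaf/sourceID/<CODE>|<ID>) still resolves to the cluster page; the mapping of Wikidata properties to VIAF source codes shown here is only a partial, assumed example and would have to be completed and verified against https://w.wiki/PRN.

import re
import requests

# Partial, assumed mapping of Wikidata properties to VIAF source codes.
VIAF_SOURCES = {
    'P227': 'DNB',   # GND ID
    'P244': 'LC',    # Library of Congress authority ID
    'P268': 'BNF',   # Bibliothèque nationale de France ID
}

def viaf_cluster(property_id, value):
    """Return the VIAF cluster ID a member ID belongs to, or None."""
    source = VIAF_SOURCES[property_id]
    url = f'https://viaf.org/viaf/sourceID/{source}%7C{value}'
    response = requests.get(url, allow_redirects=True, timeout=30)
    # The final URL after redirects should be https://viaf.org/viaf/<cluster>
    match = re.search(r'viaf\.org/viaf/(\d+)', response.url)
    return match.group(1) if match else None

# print(viaf_cluster('P244', 'n79021164'))   # example lookup by LC authority ID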

Discussion


Request process

Request to downrank unofficial mastodon addresses (2022-11-19)

Request date: 19 November 2022, by: Shisma

Link to discussions justifying the request
Task description

All statements of Mastodon address (P4033) that have a qualifier like object has role (P3831) → unofficial (Q29509080) or mirror storage (Q654822) should be set to deprecated rank. This should affect all these statements.
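
A sketch of how this could be done with pywikibot plus the query service, assuming the SPARQL below (P4033 statements with an object-has-role qualifier of Q29509080 or Q654822) matches the linked query.

import requests
import pywikibot

QUERY = """
SELECT DISTINCT ?item WHERE {
  ?item p:P4033 ?statement .
  ?statement pq:P3831 ?role .
  VALUES ?role { wd:Q29509080 wd:Q654822 }
}
"""
BAD_ROLES = {'Q29509080', 'Q654822'}

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

data = requests.get('https://query.wikidata.org/sparql',
                    params={'format': 'json', 'query': QUERY}).json()

for binding in data['results']['bindings']:
    qid = binding['item']['value'].rsplit('/', 1)[-1]
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    for claim in item.claims.get('P4033', []):
        roles = {q.getTarget().getID() for q in claim.qualifiers.get('P3831', [])
                 if q.getTarget() is not None}
        if roles & BAD_ROLES and claim.rank != 'deprecated':
            claim.changeRank('deprecated')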

Discussion


Request process

Request to make sitelinks of asteroid items intentional (2022-12-08)

Request date: 8 December 2022, by: ChristianKl

Task description

Turn all sitelink to redirect (Q70893996) badges on sitelinks to enwiki, tlwiki and ptwiki into intentional sitelink to redirect (Q70894304) badges, for all items that are instance of (P31) asteroid (Q3863). While most wikis have individual articles for individual asteroids, those wikis use lists that cover multiple asteroids, and there are redirects that point to those lists.
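
A sketch with the query service plus pywikibot, assuming wbeditentity accepts the usual sitelink JSON with a badges list (the same format the entity JSON uses; note this replaces the badge list for that sitelink). It is limited to enwiki for brevity; tlwiki and ptwiki would be handled the same way.

import requests
import pywikibot

# enwiki sitelinks on asteroid items that carry the plain "sitelink to redirect" badge
QUERY = """
SELECT ?item ?title WHERE {
  ?item wdt:P31 wd:Q3863 .
  ?sitelink schema:about ?item ;
            schema:isPartOf <https://en.wikipedia.org/> ;
            schema:name ?title ;
            wikibase:badge wd:Q70893996 .
}
"""

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

data = requests.get('https://query.wikidata.org/sparql',
                    params={'format': 'json', 'query': QUERY}).json()

for binding in data['results']['bindings']:
    qid = binding['item']['value'].rsplit('/', 1)[-1]
    title = binding['title']['value']
    item = pywikibot.ItemPage(repo, qid)
    item.editEntity(
        {'sitelinks': {'enwiki': {'site': 'enwiki',
                                  'title': title,
                                  'badges': ['Q70894304']}}},
        summary='marking sitelink to redirect as intentional')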


Discussion


Request process

Request to move sitelinks to redirect on year items to be intentional (2022-12-08)

Request date: 8 December 2022, by: ChristianKl

Task description

Many Wikis have items for each individual year. Other Wikis have only items that cover multiple decades or centuries together. For all items with instance of (P31) year BC (Q29964144), move all sitelink to redirect (Q70893996) to redirects on dewiki, enwiki, itwiki, ltwiki, ocwiki and tlwiki to be intentional sitelink to redirect (Q70894304). For all instance of (P31) year (Q577) move all sitelink to redirect (Q70893996) to redirects on ocwiki and tlwiki to be intentional sitelink to redirect (Q70894304).

Discussion
Request process

Request to remove constraint violating charge (P1595) claims (2022-12-08)

Request date: 8 December 2022, by: ChristianKl

Link to discussions justifying the request
Task description

Given the principle of the presumption of innocence, it's problematic to feature charges centrally on the items of living people. As a result, the initial design of charge (P1595) was to place it on the item for the trial in question and not on the item for the person. There was never any consensus to extend the domain, yet people have used it in a constraint-violating way. Given that we have a clear policy, I don't think we need a consensus-finding discussion to decide to remove statements that are problematic for privacy.

Therefore, I suggest running a bot that removes all charge (P1595) statements from human (Q5) items.
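
If there is consensus, the removal itself is straightforward; a minimal pywikibot sketch, assuming the affected items come from a query for humans with P1595.

import requests
import pywikibot

QUERY = """
SELECT DISTINCT ?item WHERE {
  ?item wdt:P31 wd:Q5 ;
        p:P1595 ?statement .
}
"""

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

data = requests.get('https://query.wikidata.org/sparql',
                    params={'format': 'json', 'query': QUERY}).json()

for binding in data['results']['bindings']:
    qid = binding['item']['value'].rsplit('/', 1)[-1]
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    claims = item.claims.get('P1595', [])
    if claims:
        item.removeClaims(claims, summary='removing charge (P1595) from items about humans')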

Licence of data to import (if relevant)
Discussion


Request process

Request to replace qualifiers (2023-01-02)

Request date: 2 January 2023, by: M2545

Link to discussions justifying the request
Task description

For all cases in which participant (P710) statements are qualified with subject has role (P2868), replace that qualifier with object has role (P3831).
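
A minimal pywikibot sketch of the qualifier swap for a single item; the affected items would come from a query for p:P710 statements with a pq:P2868 qualifier.

import pywikibot

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

def swap_qualifier(qid):
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    for claim in item.claims.get('P710', []):
        for old in list(claim.qualifiers.get('P2868', [])):
            # Re-add the same value under P3831, then drop the P2868 qualifier
            new = pywikibot.Claim(repo, 'P3831')
            new.setTarget(old.getTarget())
            claim.addQualifier(new, summary='replacing P2868 with P3831 as qualifier')
            claim.removeQualifier(old, summary='replacing P2868 with P3831 as qualifier')

# swap_qualifier('Q42')   # hypothetical example item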

Licence of data to import (if relevant)
Discussion


Request process

Request to reduce the precision of birth and death dates of people (with ODIS number) born and/or died on January 1 (2023-01-03)

Request date: 3 January 2023, by: MessensFien

Link to discussions justifying the request

https://www.wikidata.org/wiki/User_talk:MessensFien

Task description

Currently there are a few thousand persons with an ODIS ID whose birth and/or death date is too precise (always the 1st of January). Would it be possible to reduce the precision of these statements to just the year (i.e. remove the 1st of January)?

Here the query for the people with property born on January 1: https://w.wiki/6BVE

And here for people with property died on January 1: https://w.wiki/6BVF
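
A sketch of the precision change for one item and one property with pywikibot, assuming the QIDs come from the two queries above; items whose ODIS record really gives January 1 (see the discussion below) would have to be excluded first.

import pywikibot

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

def reduce_to_year(qid, prop):
    """Turn a January-1 date on prop (P569 or P570) into a year-precision date."""
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    for claim in item.claims.get(prop, []):
        date = claim.getTarget()
        if date is None or date.precision != 11:      # 11 = day precision
            continue
        if date.month == 1 and date.day == 1:
            new_date = pywikibot.WbTime(year=date.year, precision=9)   # 9 = year
            claim.changeTarget(new_date,
                               summary='reducing January-1 date to year precision')

# for qid in qids_from_query:
#     reduce_to_year(qid, 'P570')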

Discussion

I see here http://www.odis.be/lnk/en/PS_126253 (from Aenne Brauksiepe (Q272663)) that the date actually is January 1st.

Here, for Johan Van Gastel (Q29336115), the date is duplicated, causing constraint violations.

Just my first two random checks. --Bean49 (talk) 11:53, 4 January 2023 (UTC)

I generated two lists of ODIS ID persons whose death or birth date really is on the 1st of January. Would it be possible to keep their full dates in Wikidata? For the others (birth: https://w.wiki/6BVE, and death: https://w.wiki/6BVF): if there is no duplication, would it be possible to keep only the year? If there is a duplication, would it be possible to delete the duplicate so that it causes no constraint violations? Thanks in advance!
For persons with death date on the 1st of January: PS_61, PS_8117, PS_11065, PS_11318, PS_11331, PS_444, PS_2771, PS_28640, PS_77288, PS_28373, PS_32616, PS_30192, PS_31385, PS_31773, PS_28188, PS_28978, PS_72861, PS_77304, PS_77305, PS_77878, PS_74619, PS_27964, PS_30880, PS_30586, PS_32118, PS_29201, PS_111295, PS_95186, PS_33806, PS_88469, PS_98687, PS_99353, PS_89621, PS_90379, PS_92169, PS_82497, PS_80814, PS_79404, PS_79412PS_15186, PS_19644,PS_15878, PS_15918, PS_13008, PS_26331, PS_17653, PS_17680, PS_16340, PS_17418, PS_80485, PS_17094, PS_57901, PS_21814, PS_21007, PS_21027, PS_21061, PS_21609, PS_86126, PS_20161, PS_86834, PS_18840, PS_104023, PS_64964, PS_109989, PS_110002, PS_106664, PS_66638, PS_87889, PS_120639, PS_95125, PS_127490, PS_127744, PS_126117, PS_127863, PS_129713, PS_129080, PS_67230, PS_70651, PS_69048, PS_132751, PS_132291, PS_131277, PS_133862, PS_133212, PS_138640, PS_142740, PS_124415, PS_139297, PS_132589, PS_147260, PS_154167, PS_143689, PS_144315, PS_154349, PS_152238, PS_145522, PS_151459, PS_145009, PS_152539, PS_146814, PS_160214, PS_161712, PS_157177, PS_175950, PS_177305, PS_173396, PS_173012, PS_168250, PS_167109, PS_168552, PS_174628, PS_176740, PS_167755, PS_172056, PS_175286
For persons with birth date on the 1st of January: PS_2613, PS_8261, PS_375, PS_404, PS_32933, PS_31335, PS_31042, PS_77321, PS_4645, PS_116666, PS_113971, PS_111506, PS_29175, PS_111614, PS_33520, PS_96895, PS_91383, PS_90799, PS_94855, PS_94917, PS_95546, PS_89357, PS_90015, PS_89443, PS_15800, PS_15839, PS_12985, PS_18381, PS_15874, PS_26347, PS_17389, PS_78333, PS_18228, PS_79675, PS_79743, PS_21870, PS_21907, PS_20824, PS_19912, PS_19230, PS_87356, PS_20873, PS_70418, PS_70424, PS_64476, PS_68163, PS_70216, PS_64820, PS_103939, PS_120708, PS_127776, PS_129053, PS_127095, PS_129452, PS_67193, PS_66341, PS_67214, PS_65127, PS_127217, PS_138082, PS_138396, PS_135352, PS_134828, PS_135838, PS_142035, PS_130008, PS_132818, PS_128679, PS_142105, PS_144079, PS_152163, PS_152589, PS_145018, PS_160971, PS_157609, PS_161733, PS_165425, PS_166672, PS_166697, PS_172452, PS_172557, PS_176251, PS_169216, PS_174715, PS_169876, PS_172564, PS_168033, PS_179486, PS_178864, PS_178932 MessensFien (talk) 15:25, 5 January 2023 (UTC)Reply[reply]


Request process

Request to .. (2023-01-07)

Request date: 7 January 2023, by: Laurameadows

Link to discussions justifying the request

update information on projects and bio

Task description

https://www.imdb.com/name/nm2568981/?mode=desktop

Licence of data to import (if relevant)
Discussion


Request process

Request to scrape user counts for existing Mastodon instance items (2023-01-28)

Request date: 28 January 2023, by: JesseW

Link to discussions justifying the request
  • I'm not aware of previous discussions of this.
Task description

Wikidata currently has 37 items describing Mastodon server instances. It'd be nice to regularly update those items with current user counts. This information can be extracted from the sites themselves (via the NodeInfo standard) or scraped from FediDB.org (with links like https://fedidb.org/network/instance?domain=mastodon.social ). I'm not aware of anywhere historical data can be referenced, but we can at least add reference entries describing how the value was obtained (either direct NodeInfo or via FediDB).
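
A sketch of the NodeInfo lookup, which needs no API key: the well-known discovery document points to a schema document whose usage.users.total field holds the count (per the NodeInfo specification); FediDB or the-federation.info, as discussed below, would be alternative sources.

import requests

def instance_user_count(host):
    """Fetch the total user count of a Fediverse instance via NodeInfo."""
    discovery = requests.get(f'https://{host}/.well-known/nodeinfo', timeout=30).json()
    # A real bot might prefer the highest schema version in 'links' over the first entry
    nodeinfo_url = discovery['links'][0]['href']
    nodeinfo = requests.get(nodeinfo_url, timeout=30).json()
    return nodeinfo['usage']['users']['total']

# print(instance_user_count('mastodon.social'))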

Actually, I think historical data is available through GraphQL from https://cloud.hasura.io/public/graphiql?endpoint=https%3A%2F%2Fthe-federation.info%2Fv1%2Fgraphql . So we should probably use that. JesseW (talk) 14:53, 30 January 2023 (UTC)

Licence of data to import (if relevant)

Given that this is a single number, and factual, I don't think licensing is relevant.

Discussion


Request process

Is a bot really needed for 37 edits? How often would we update this? BrokenSegue (talk) 15:03, 30 January 2023 (UTC)

@BrokenSegue A bot certainly isn't needed for 37 edits, although it'd be nice for updates (maybe monthly? or quarterly?). But I'd also like to add items for a bunch more Mastodon instances (maybe all the ones in the top 1% (or 5%) by number of users), and at that point, the bot would become much more helpful. JesseW (talk) 01:13, 1 February 2023 (UTC)
I'm a little more interested in the related problem of pulling follower counts for Mastodon users but I'm unsure if there's a good API for that. This would mirror the work I already do for twitter/youtube. I guess I should check that GraphQL endpoint you found. BrokenSegue (talk) 06:18, 1 February 2023 (UTC)
That doesn't list information about individual users, just about whole instances. But for pulling follower counts, see https://docs.joinmastodon.org/methods/accounts/#lookup . You just give it a username, and it will return the follower count. JesseW (talk) 14:55, 1 February 2023 (UTC)


OK, well now I've got a (hacky) bit of code to extract such data and turn it into QuickStatements CSV input. The next question is -- how much of it do we want? For MastodonSocial (Q112059294) (the biggest instance), it looks like The Federation (Q106427280) has daily counts going back to June 2018. I don't know if we want all of that? Maybe we do?

gq https://the-federation.info/v1/graphql -q "query MyQuery {thefederation_stat(limit: 10, order_by: {date: asc}, where: {thefederation_node: {host: {_eq: \"mastodon.social\"}}}) {users_total,date}}" | jq -r '.data.thefederation_stat[]|["Q112059294",.users_total,"+"+.date+"T00:00:00Z/11","Q106427280","+2023-02-01T00:00:00Z/11"]|@csv' > ~/foo; echo "qid,P1833,qal585,S248,s813"; cat ~/foo

(gq is https://github.com/hasura/graphqurl ; jq is https://stedolan.github.io/jq/ ) Hope this helps! JesseW (talk) 15:43, 1 February 2023 (UTC)

we definitely don't want daily counts. monthly at the very most. BrokenSegue (talk) 22:46, 1 February 2023 (UTC)

I modified the code to use the query service to find all the current wikidata items, and to only write the most current value, and I've now run it: https://editgroups.toolforge.org/b/QSv2T/1675287564022 . We can run it again in a week (or month).

curl -H 'Accept: text/csv' 'https://query.wikidata.org/sparql?query=SELECT%20%3Fitem%20%3Fhost%20%0A%20WHERE%20%20%7B%0A%20%20wd%3AQ72705885%20%5E(wdt%3AP279*)%20%3Finstance.%0A%20%20%3Fitem%20wdt%3AP31%20%3Finstance.%0A%20%20OPTIONAL%20%7B%20%3Fitem%20wdt%3AP856%20%3Fhost.%20%7D%0A%7D' > ~/blah

function foobar() { host="$2"; qid="$1"; gq https://the-federation.info/v1/graphql -q "query MyQuery {thefederation_stat(limit: 1, order_by: {date: desc}, where: {thefederation_node: {host: {_eq: \"$host\"}}}) {users_total,date}}" | jq -r ".data.thefederation_stat[]|[\"$qid\",.users_total,\"+\"+.date+\"T00:00:00Z/11\",\"Q106427280\",\"+2023-02-01T00:00:00Z/11\"]|@csv" }

echo "qid,P1833,qal585,S248,s813" > ~/foo2; for line in $(cat ~/blah); do foobar "${${line##http://www.wikidata.org/entity/}%,*}" "${${${line#*,http?://}%%?}%%/}"; done | tee -a ~/foo2; cat ~/foo2

Request to periodically fix malformed BNF IDs (2023-02-15)

Request date: 15 February 2023, by: Epìdosis

Link to discussions justifying the request
Task description
  • As evident from many threads on Property talk:P268, a lot of problems have been (and are being) caused by the trailing character of Bibliothèque nationale de France ID (P268), which is sometimes present and sometimes not in various sources but is required by Wikidata. Values without the trailing character are frequently added in good faith, sometimes massively (the most recent case: Topic:Xco5xqi1d4qk1q2f), and fixing them is not easy for most users. Ideally, a bot should periodically convert the malformed IDs (where the malformation is due to the missing trailing character); unfortunately, this cannot be done through {{Autofix}}, as inferring the trailing character is too complex for its present functioning.
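
If this is automated, the missing character can probably be computed rather than looked up: BnF ARK identifiers use the NOID check-character algorithm. A sketch follows; the exact string over which the check character is computed (with or without the "ark:/12148/" prefix) is an assumption that must be verified against a set of known-good P268 values before any bot run.

# NOID "betanumeric" alphabet used by ARK check characters (29 symbols)
ALPHABET = '0123456789bcdfghjkmnpqrstvwxz'

def noid_check_char(text):
    """NOID check character: positionally weighted sum of character values, modulo 29."""
    total = sum(position * (ALPHABET.index(char) if char in ALPHABET else 0)
                for position, char in enumerate(text, start=1))
    return ALPHABET[total % len(ALPHABET)]

def fix_p268(value):
    """Append the check character to a 10-character P268 value like 'cb35250105'.

    ASSUMPTION: the check character is computed over the full ARK string
    'ark:/12148/<value>'. Verify against known-good IDs before use.
    """
    return value + noid_check_char('ark:/12148/' + value)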
Discussion
Request process

Request to delete spaces from ISNI IDs (2023-02-15)

Request date: 15 February 2023, by: Epìdosis

Link to discussions justifying the request
Task description

Remove spaces in all occurrences of ISNI (P213) (main values and references); possibly the changes should be performed quickly, in order to minimize the coexistence of the formats with and without spaces (which could be confusing for data reusers).
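
For the main values this is a simple string edit; a pywikibot sketch, assuming the affected items come from a query for P213 values containing spaces (rewriting the values used inside references would be an additional step, omitted here).

import pywikibot

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

def strip_isni_spaces(qid):
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    for claim in item.claims.get('P213', []):
        value = claim.getTarget()
        if value and ' ' in value:
            claim.changeTarget(value.replace(' ', ''),
                               summary='removing spaces from ISNI (P213)')

# strip_isni_spaces('Q42')   # hypothetical example item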

Discussion


Request process

Request to replace 'kategorija Wikimedije' with 'kategorija Wikimedie' for sl (2023-02-26)

Request date: 26 February 2023, by: TadejM

Task description

Hello. Could someone please replace kategorija Wikimedije with kategorija Wikimedie in category descriptions for Slovene (sl) (e.g. here)? For the rationale, see e.g. [8], [9] or the Main Page of the Slovene Wikipedia. --TadejM (talk) 07:34, 2 March 2023 (UTC)

Adding:

  • razločitvena stran Wikimedije should be universally replaced with razločitvena stran Wikimedie
  • seznam Wikimedije should be universally replaced with seznam Wikimedie

--TadejM (talk) 07:34, 2 March 2023 (UTC)
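
A pywikibot sketch of the replacement, assuming the affected QIDs are collected separately (e.g. from a query-service or dump scan of Slovene descriptions); it covers all three phrases listed above.

import pywikibot

REPLACEMENTS = {
    'kategorija Wikimedije': 'kategorija Wikimedie',
    'razločitvena stran Wikimedije': 'razločitvena stran Wikimedie',
    'seznam Wikimedije': 'seznam Wikimedie',
}

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

def fix_description(qid):
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    description = item.descriptions.get('sl')
    if not description:
        return
    new_description = description
    for old, new in REPLACEMENTS.items():
        new_description = new_description.replace(old, new)
    if new_description != description:
        item.editDescriptions({'sl': new_description},
                              summary='Wikimedije -> Wikimedie in Slovene description')

# for qid in qids_from_query:
#     fix_description(qid)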

Licence of data to import (if relevant)
Discussion
Request process

Request for a bot to import U.S. patents, or request for permission for my bot account (2023-02-28)

Request date: 28 February 2023, by: LucaDrBiondi

Task description

Bot to import U.S. Patents

I have a list of U.S. patents and I would like to import these data into Wikidata. Currently the data are in a CSV file.

For example: US11387036; Inductor device; Patent number: 11387036; Type: Grant; Filed: Mar 19, 2020; Date of Patent: Jul 12, 2022; Patent Publication Number: 20200312521; Assignee: REALTEK SEMICONDUCTOR CORPORATION (Hsinchu); Inventors: Hsiao-Tsung Yen (Hsinchu), Ka-Un Chan (Hsinchu); Primary Examiner: Adolf D Berhane; Assistant Examiner: Afework S Demisse; Application Number: 16/823,557

I have already created a bot account (LucaDrBiondi@Biondibot) and now, I think, I need the "request for permission". Is that right?

Then I would try to write my own bot. I will use curl with the C language. I wrote this code just to connect to Wikidata through my bot:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

#define API_ENDPOINT "https://www.wikidata.org/w/api.php"

// Data structure for storing the login token
typedef struct {
    char token[1024];
} TokenData;

// Callback function for writing data received from the server
size_t write_callback(char *ptr, size_t size, size_t nmemb, void *userdata) {
    size_t bytes = size * nmemb;
    fwrite(ptr, size, nmemb, stdout);
    return bytes;
}

// Callback function for receiving the login token
size_t token_callback(char *ptr, size_t size, size_t nmemb, void *userdata) {
    size_t bytes = size * nmemb;
    TokenData *data = (TokenData *)userdata;
    // The search string "logintoken":" is 14 characters long, so the token
    // itself starts at start + 14 (not 15).
    char *start = strstr(ptr, "\"logintoken\":\"");
    if (start != NULL) {
        char *end = strchr(start + 14, '"');
        if (end != NULL) {
            size_t token_len = end - (start + 14);
            strncpy(data->token, start + 14, token_len);
            data->token[token_len] = '\0';
        }
    }
    return bytes;
}

int main() {
    // Initialize CURL
    CURL *curl = curl_easy_init();

    if (curl) {
        // Step 1: Get login token
        TokenData token_data = {0};
        // Enable curl's in-memory cookie engine so the session cookie issued
        // with the login token is sent back on the login request; without it
        // the API answers "Unable to continue login. Your session most likely
        // timed out."
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
        curl_easy_setopt(curl, CURLOPT_URL, API_ENDPOINT);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "action=query&meta=tokens&type=login&format=json");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, token_callback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &token_data);
        curl_easy_perform(curl);

        // Step 2: Login
        // The login token usually ends in "+\" and must therefore be
        // URL-encoded before it is placed in the POST body.
        char *escaped_token = curl_easy_escape(curl, token_data.token, 0);
        char login_data[1024];
        snprintf(login_data, sizeof(login_data), "action=login&lgname=%s&lgpassword=%s&lgtoken=%s&format=json",
            "xxx@xxxx", "xxxxxx", escaped_token);
        printf("token_data.token : %s\n", token_data.token);
        curl_free(escaped_token);
        curl_easy_setopt(curl, CURLOPT_URL, API_ENDPOINT);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, login_data);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback);
        curl_easy_perform(curl);

...
        curl_easy_cleanup(curl);
    }

    return 0;
}

But i get the following error:

{"login":{"result":"Failed","reason":"Unable to continue login. Your session most likely timed out."}}

Could someone help me take a step forward, please?

I have written more than 500 pages on it.wikipedia, but in Wikidata... I am a goat! But I want to learn.

Licence of data to import (if relevant)
Discussion


Request process

Request about linking between wikis, the old fashioned way (2023-04-02)

Request date: 2 April 2023, by: محک

Link to discussions justifying the request
Task description

Hello. I made a number of articles HERE that are linked to the Persian wiki the old-fashioned way. I don't know if there is a bot-less way to do this. Can someone help link these articles to their own Wikidata items, please? --محک (talk) 11:33, 2 April 2023 (UTC)

@محک I assume this is done by me now. Shall we close this request? Amir (talk) 19:54, 16 April 2023 (UTC)
Yes. Thanks a lot 💚 محک (talk) 14:12, 17 April 2023 (UTC)
Licence of data to import (if relevant)
Discussion


Request process
I think that this discussion is resolved and can be archived. If you disagree, don't hesitate to replace this template with your comment. —‍Mdaniels5757 (talk • contribs) 15:59, 26 May 2023 (UTC)

Request to unify links to the Czech Bridge Management System (2023-04-16)

Request date: 16 April 2023, by: ŠJů

Link to discussions justifying the request

Supposed to be non-controversial; most of the affected references (maybe all) were added by me. The purpose of the request is to unify the links to an identical source into the preferred form.

Task description

Please find all references where reference URL (P854) is "http://bms.clevera.cz/Public", "http://bms.clevera.cz" or "http://bms.clevera.cz/" and replace it with stated in (P248) = Bridge Management System (Q108147651) (if the reference contains other qualifiers, e.g. retrieved (P813), transfer them to the new form).
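
A pywikibot sketch of the reference rewrite for a single item; whether reusing the existing non-P854 reference snaks in the new reference group works cleanly in every case (multiple P854 values in one reference, mixed references) would need checking on a few test items first.

import pywikibot

BAD_URLS = {'http://bms.clevera.cz/Public', 'http://bms.clevera.cz', 'http://bms.clevera.cz/'}

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()
bms = pywikibot.ItemPage(repo, 'Q108147651')      # Bridge Management System

def fix_references(qid):
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    for claims in item.claims.values():
        for claim in claims:
            for source in list(claim.sources):     # each source is one reference group
                urls = [snak.getTarget() for snak in source.get('P854', [])]
                if not any(url in BAD_URLS for url in urls):
                    continue
                # Build the replacement reference: stated in (P248) = BMS,
                # keeping every non-P854 part of the old reference (e.g. P813).
                stated_in = pywikibot.Claim(repo, 'P248')
                stated_in.setTarget(bms)
                new_ref = [stated_in]
                for prop, snaks in source.items():
                    if prop != 'P854':
                        new_ref.extend(snaks)
                old_ref = [snak for snaks in source.values() for snak in snaks]
                claim.removeSources(old_ref, summary='replacing BMS URL with stated in (P248)')
                claim.addSources(new_ref, summary='replacing BMS URL with stated in (P248)')

# for qid in affected_items:
#     fix_references(qid)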

Discussion

These links reference the BMS web page in general. In most cases a direct link to the detail page of a specific bridge would be more appropriate, but unfortunately the BMS page (http://bms.clevera.cz/Public) as well as the road net map (https://geoportal.rsd.cz/apps/silnicni_a_dalnicni_sit_cr_verejna/) do not seem to allow direct URL links to details of road objects. If somebody discovers a way to refer directly to the detail of a specific object with a stable public URL, it would certainly be appropriate to apply such a method in the references.

Request process

Request to automatically add P11780 from ORCID and other identifiers (2023-05-09)

Request date: 9 May 2023, by: Tomodachi94

Link to discussions justifying the request
Task description

Import various identifiers from Humanities Commons member ID (P11780), such as ORCID iD (P496), Twitter username (P2002), Mastodon address (P4033), and other values.

Licence of data to import (if relevant)

Unknown.

Discussion


Request process