User talk:Epìdosis

From Wikidata

On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2023.

Welcome to Wikidata, Epìdosis!

Wikidata is a free knowledge base that you can edit! It can be read and edited by humans and machines alike and you can go to any item page now and add to this ever-growing database!

Need some help getting started? Here are some pages you can familiarize yourself with:

  • Introduction – An introduction to the project.
  • Wikidata tours – Interactive tutorials to show you how Wikidata works.
  • Community portal – The portal for community members.
  • User options – including the 'Babel' extension, to set your language preferences.
  • Contents – The main help page for editing and using the site.
  • Project chat – Discussions about the project.
  • Tools – A collection of user-developed tools to allow for easier completion of some tasks.

Please remember to sign your messages on talk pages by typing four tildes (~~~~); this will automatically insert your username and the date.

If you have any questions, don't hesitate to ask on Project chat. If you want to try out editing, you can use the sandbox to try. Once again, welcome, and I hope you quickly feel comfortable here, and become an active editor for Wikidata.

Best regards! --Tobias1984 (talk) 12:08, 8 September 2013 (UTC)


Ciao Epìdosis,

Now that we have Diktyon ID (P12042) (nice number :) we could also create a MnM catalogue. You said that the import of data would be fairly easy, as the Diktyon items are part of a Wikibase. I would suggest a very lightweight approach. What kind of data could we even extract without issue? Best, Jonathan Groß (talk) 16:36, 24 September 2023 (UTC)

@Jonathan Groß: it's a good question; as of now I don't have an answer, but only a few doubts (maybe they are useful as well): 1) the general doubt is that I have never extracted data from a Wikibase ... this one seems to have an API but not a query service, so as of now I'm unsure about the best way to extract data from it; 2) whilst I'm sure that this Wikibase contains a fair amount of data about Greek manuscripts extracted from Pinakes (BTW, it also contains non-Greek manuscripts), I don't know exactly if all Greek manuscripts have been imported and when (checking some cases, it seems that a lot of Greek manuscripts were imported in 2020, so probably manuscripts added to Pinakes after 2020 are not in this Wikibase); 3) I have a doubt about the usefulness of Mix'n'match in this case: I usually recommend MnM when most entries of a catalog are already present in Wikidata and/or when many entries absent from Wikidata are also present in other Mix'n'match catalogs, so that MnM makes it easier to create new items starting from many catalogues instead of only one - given these premises, in this case nearly all the manuscripts are absent from Wikidata and I don't know of other catalogues containing significant numbers of Greek manuscripts.
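(A minimal sketch of what point 1 could look like in practice: a Wikibase without a query service can still be read through the standard MediaWiki action API, e.g. the wbgetentities module. The base URL below is a placeholder, not the real address of the Pinakes Wikibase.)

```python
# Sketch: harvesting a Wikibase that exposes the MediaWiki API but no SPARQL
# endpoint. BASE is a placeholder URL, not the real Pinakes Wikibase address.
from urllib.parse import urlencode

BASE = "https://example-wikibase.org/w/api.php"  # placeholder

def wbgetentities_url(ids, base=BASE):
    """Build a wbgetentities request for up to 50 entity IDs at a time."""
    params = {"action": "wbgetentities", "ids": "|".join(ids), "format": "json"}
    return base + "?" + urlencode(params)

def extract_labels(entity_json, lang="en"):
    """Pull one language's label out of a wbgetentities JSON response."""
    return {
        qid: ent.get("labels", {}).get(lang, {}).get("value")
        for qid, ent in entity_json.get("entities", {}).items()
    }
```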
So, to conclude, I would propose a different approach, which I try to detail below (I take as an example a manuscript from Pinakes). Excuse me for the length :(
1) firstly I would like to clarify one point of the data model of manuscripts (i.e. Wikidata:WikiProject Books#Manuscript properties), namely how to describe where they are held: this can be divided into three parts, i.e. city, institution and archival fond of the institution. As far as I see from the present data model, the city falls into location (P276) (reasonable), but then problems start, because the data model has both collection (P195) qualified with inventory number (P217) and the inverse, inventory number (P217) qualified with collection (P195) (see e.g. Shahnamah of Ibrahim Sultan (Q53676578)) - this is the first problem: only one option has to be chosen and the other discarded, since duplication of data inside the same item is surely a problem. Unfortunately collection (P195) has as value not the most precise entity, i.e. the archival fond, but the institution - so, second problem, how do we link the single manuscript to its archival fond of pertinence? I think the first practical thing we need to do is discuss these two points, establish a precise data model and also arrange one or two Wikidata:Showcase items of manuscripts for future reference
2) we also need to check that we have items for all cities, institutions and archival fonds; the cities are surely all present, and I'm also confident we have most institutions, but we will need to create a few hundred archival fonds; BTW, Pinakes has IDs for countries, cities, institutions and archival fonds, and having these four properties (or at least the last two) would probably facilitate our check ... if you agree, I think we can propose them as a batch of four (similarly to e.g. Wikidata:Property proposal/SDBM IDs)
3) when points 1 and 2 are OK, we can safely proceed with the real import: I think that, given the situation, the best solution will be importing massively all the ~80k manuscripts into new items modeled more or less like this: label in English and French copied from Pinakes (but I would omit the country, I think it's a bit redundant), e.g. "Città del Vaticano, Biblioteca Apostolica Vaticana (BAV), Ott. gr. 046"; instance of (P31)→manuscript (Q87167); location (P276)→city; 1 or 2 statements for the institution, the archival fond and the inventory number; Diktyon ID (P12042)→number. I estimate a maximum of a few hundred duplicates (i.e. already existing manuscripts), which will become more evident after we have all the manuscripts imported. Of course this point 3 presupposes we have a complete list of the manuscripts in Pinakes, but, since I think points 1 and 2 will need at least some weeks to be solved, I'm confident I can obtain this list in the meanwhile (the way I'm thinking of is a scrape via numerus currens, i.e. checking all possible Diktyon numbers from 1 up to the highest Diktyon number presently existing; it will take just a few days).
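(The numerus-currens scrape could be sketched roughly as follows; the URL pattern is a hypothetical placeholder for illustration, not the real Pinakes notice URL.)

```python
# Sketch of the numerus-currens scrape: probe every possible Diktyon number
# from 1 up to the highest one known to exist, recording which ones resolve.
# The URL pattern below is an assumed placeholder, not the real Pinakes URL.
import urllib.error
import urllib.request

def diktyon_url(n):
    """Candidate notice URL for Diktyon number n (pattern is an assumption)."""
    return f"https://pinakes.example.org/notice/{n}"

def scan(start, stop):
    """Yield the Diktyon numbers whose notice page answers with HTTP 200."""
    for n in range(start, stop + 1):
        try:
            with urllib.request.urlopen(diktyon_url(n)) as resp:
                if resp.status == 200:
                    yield n
        except urllib.error.HTTPError:
            continue  # 404 etc.: no manuscript with this number
```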
If you agree with the above plan, I think that: A) we should start as soon as possible a reflection about 1, because these discussions about data models are often very long (argh!); B) we can also start checking institutions and archival fonds per 2 (if you want, in a few days I can obtain a complete list of them; if we propose properties for them, I can create their Mix'n'match catalogs immediately after the properties are created) - I would very much like to help you in this, especially for Italian institutions and archival fonds; C) I can get, in a few weeks, a complete list of the manuscripts, so as to be ready for the massive import once 1 and 2 are completed. Let me know! --Epìdosis 17:32, 24 September 2023 (UTC)

I'm thrilled by this prompt, qualified, informed and comprehensive answer - much more than I could have hoped for with my lazy ping. I agree that we need to clear up our data model first, and I agree with your overall approach. We can get started right away; I just don't know how much time I can put in over the next days, as things are quite hectic at home. All the best, Jonathan Groß (talk) 19:51, 24 September 2023 (UTC)

I will give the data model a bit more thought. Meanwhile, you've got mail! Jonathan Groß (talk) 11:44, 28 September 2023 (UTC)

So I browsed a bit and found Codex Coislinianus (Q1105939) – an interesting case, as the remnants of this manuscript are scattered across 8 libraries in 6 countries. How would we deal with those? Create items for the disiecta membra with Coislin 202→part of (P361)→Codex Coislinianus (Q1105939) and the inverse Codex Coislinianus (Q1105939)→has part(s) (P527)→Coislin 202? I think that would be a good course of action. Jonathan Groß (talk) 12:20, 29 September 2023 (UTC)

And there are, of course, cases where it is the other way around: two separate codices bound together, like Lectionary 61 (Q6512323) and Minuscule 729 (Q6870880), both part of Diktyon 49751. I would suggest creating dedicated items for these cases as well. Jonathan Groß (talk) 12:44, 29 September 2023 (UTC)

In the case of Lectionary 61 (Q6512323) and Minuscule 729 (Q6870880), I did what I suggested earlier today and created a new item for the manuscript as it is bound today. Now on towards clearing up our data model on manuscripts. As Wikidata does not have a way to string hierarchical properties together, we would have to give each manuscript separate statements for its location (country, city, institution), fonds, and shelf number. Keeping our current data model intact should be a priority, along with compliance with scholarly standards. Taking up what you said in your initial, exhaustive response, all manuscript items should have statements for:

  • country: country (P17)
  • city: location (P276) should work, although this property has a broader scope and is often used for finer-grained values (meaning city districts, specific buildings, street addresses etc.). But for our purposes, that should not pose a problem.
  • institution: collection (P195) is a good candidate, but I have some gripes with it. 1) As of now it is mostly used to specify the value of a holding institution (bestandshaltende Institution) like Bibliothèque nationale de France (Q193563), but its name (en: collection, de: Sammlung) suggests something more along the lines of "fonds" or "(private) collection". 2) The data duplication you mentioned, because it is used as a qualifier to inventory number (P217) – and vice versa. And both properties have constraints prompting users to duplicate the data as well.
    In my opinion, the best course of action would be to dissolve 2) by removing the constraint in both properties and keeping to the practice of using collection (P195) for holding institutions, but also specifying that this property is meant to do just that, and change its name and alias accordingly. Of course this can only be done after discussing the matter with the community.
  • fonds: Minding the ambiguity of collection (P195) as it is today, we should propose and create a new property.
  • shelf mark: These of course are part of inventory number (P217) or identical to it, depending on how you look at it. Pinakes only has the running number, but in scholarship location, institution and fonds are often given before it. I think the best course of action would be to use inventory number (P217) with at least fonds and shelf mark as a string, maybe even city, institution, fonds and shelf mark as in Pinakes. And of course, the data duplication issue with collection (P195) would have to be addressed.
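(Taken together, the bullets above amount to a statement set like the following sketch for the Vatican example used earlier. Only the property and item IDs named in this thread are real; the bracketed values are placeholders, and the fonds property does not exist yet.)

```python
# Hedged sketch of one manuscript item under the model discussed above.
# P-numbers are the ones from this thread; bracketed values are placeholders,
# and the archival-fonds property is still to be proposed.
manuscript_item = {
    "P31": "Q87167",   # instance of: manuscript
    "P17": "Q237",     # country: Vatican City (example)
    "P276": "Q237",    # location: the city (here identical to the country)
    "P195": "<item for Biblioteca Apostolica Vaticana>",  # holding institution
    # "P<new>": "<item for the fonds>",  # future property for the archival fonds
    "P217": "Città del Vaticano, Biblioteca Apostolica Vaticana (BAV), Ott. gr. 046",
    "P12042": "<Diktyon number>",  # Diktyon ID
}
```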

What do you think? Should we move this discussion to the WikiProject Books? Jonathan Groß (talk) 18:47, 29 September 2023 (UTC)

@Jonathan Groß: thanks very much for these reflections. Of course I agree that, for manuscripts composed of two previous ones and for ancient manuscripts divided in two, the best possible solution is having three items interlinked through part of (P361)/has part(s) (P527). About the fundamental statements: I agree on country (P17) for countries and location (P276) for cities; regarding collection (P195), I agree exactly with the solution you propose - changing its labels/descriptions/aliases and constraints to keep it only for the institution and to remove the redundancy with inventory number (P217); of course it needs community discussion (and a significant cleaning effort after the hoped-for approval). I also agree on the need for a new property for archival fonds (I think we can propose it a few days after we have the Pinakes property for archival fonds and have started creating a few); finally, I agree with using inventory number (P217) for the shelf mark, in which I would probably include city, institution, fonds and shelf mark, and I would not qualify it with collection (P195), in order to avoid redundancy. Finally, I think we should submit our proposal about reforming the usage of inventory number (P217) and collection (P195) in the aforementioned ways to WikiProject Books, starting the discussion soon and hoping it evolves in reasonable time :) Good night, --Epìdosis 22:51, 29 September 2023 (UTC) BTW, here in Lisbon at Wikidata Days, both yesterday and today (and, I think, also tomorrow), we have spoken a lot about data modeling of manuscripts!

Thanks for your answer. It seems we are in agreement. This gives me hope that we might be able to move things along in due time. Meanwhile, have fun in Lisbon and give my best to everyone there! Jonathan Groß (talk) 22:54, 29 September 2023 (UTC)

So I started adding Diktyon numbers to existing Wikidata items for manuscripts. My thought was that we might avoid creating hundreds of duplicates later with the data import. I thought there would be a few hundred items at most, but it seems there are a lot more. I've managed to assign numbers to most of the manuscripts from the BNF collections, but it took an entire evening. And there are some lists that show how much more work this would entail: NT uncials, NT lectionaries, NT minuscules 1–1000, 1001–2000 and 2001–. And there are some more lists and items in en:Category:Greek-language manuscripts.

The problem with my approach is that it is inefficient. Even when focussing on a single institution like the BNF, I have several tabs open with lists of the various fonds and need to switch between them (the header of each and every page is "Pinakes", which is not helping). I need to ascertain that the shelf number on Wikipedia is correct (it wasn't in two or three cases). And even when things go smoothly (alternating only between Grec and Supplément Grec, assignment correct), it takes at least 30 seconds to find, copy and paste the Diktyon number into the Wikidata item. As an added problem (which I already described above), many items describe codices or codicological units which belong to several Pinakes entries, or only to part of a single Pinakes entry. This is especially true for uncials (majuscule manuscripts), which are scattered across the planet or sometimes encompass only a single leaf of a manuscript.

The reason it takes so long to find a Diktyon number for a specific manuscript is that the Pinakes search function is very cumbersome. To look up a manuscript you must enter (in French or English) its country, city, institution, fonds AND shelf number, all of which must be chosen from hierarchical drop-down menus. Google search doesn't really help, as it mostly produces wrong results; it does help, however, in getting to the fonds in question more quickly. To me this demonstrates the usefulness and importance of our endeavour to integrate the Pinakes data into Wikidata.

I am unsure if I should keep on adding Diktyon numbers in this way. Maybe I will work only on the straightforward cases where the Wikidata item is identical to the Pinakes item. This should avoid a few hundred duplicate items. But perhaps this isn't even advisable? I'd love your feedback.

So long, and have a great weekend, Jonathan Groß (talk) 06:53, 30 September 2023 (UTC)

@Jonathan Groß: wow, I also thought there would be very few Greek manuscripts already in Wikidata ... what I would suggest is indeed to start matching the manuscripts whose name on Wikidata coincides more or less with the shelf mark in Pinakes; for the others we can think more thoroughly about possible strategies. And of course we need to start the discussion in WikiProject Books, hoping to find good agreement on the issues. I will probably start working thoroughly on this in the middle of next week, after having settled a batch of things which accumulated during my good days in Lisbon :) --Epìdosis 09:55, 30 September 2023 (UTC)

This is just to let you know I've decided on my approach: matching the New Testament manuscripts from Aland's catalogue is the easiest way. I am doing this by searching for "Aland" in the Biblissima database and going through the results one by one, checking if the Aland number has an item on Wikidata (not all do), then checking the Diktyon no. linked in the Biblissima catalogue, and, if it checks out, adding it to the Wikidata item. It takes about 30 seconds per manuscript. Jonathan Groß (talk) 18:47, 1 October 2023 (UTC)

So far, after a few hours of work (on 3 consecutive days), I've gone through 62% (1,199) of the search results. I should be finished by the end of this week. Jonathan Groß (talk) 16:11, 3 October 2023 (UTC)

I've done it. I've gone through all 1,853 search results and added Diktyon numbers to the existing WD items. This was a nice little monkey exercise, and I will continue with the (less numerous) Rahlfs numbers for the Septuagint manuscripts. BTW, you weren't that far off with your guess that we would have "a few hundred duplicates". As of now, we have 814 WD items with Diktyon numbers. Jonathan Groß (talk) 20:27, 5 October 2023 (UTC)

This query lists Ancient Greek manuscripts without a Diktyon ID - 1,072 results right now. So still some work to do :) Jonathan Groß (talk) 08:31, 6 October 2023 (UTC)
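(A query of this kind could look roughly like the sketch below; the exact query behind the link above may differ, and the class/language modeling assumed here - P31/P279 under manuscript (Q87167), language of work P407 = Ancient Greek (Q35497) - is a reconstruction.)

```python
# Sketch of a WDQS query for manuscripts in Ancient Greek that still lack a
# Diktyon ID (P12042). A reconstruction; the linked report's query may differ.
import json
import urllib.parse
import urllib.request

SPARQL = """
SELECT ?ms ?msLabel WHERE {
  ?ms wdt:P31/wdt:P279* wd:Q87167 ;        # instance of (a subclass of) manuscript
      wdt:P407 wd:Q35497 .                 # language of work: Ancient Greek
  FILTER NOT EXISTS { ?ms wdt:P12042 [] }  # no Diktyon ID yet
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

def run(query=SPARQL):
    """Send the query to the Wikidata Query Service and return the bindings."""
    url = "https://query.wikidata.org/sparql?" + urllib.parse.urlencode(
        {"query": query, "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "diktyon-gap-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```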

Ciao Epìdosis! I have another question about P12042: the database report has "single value" constraint violations even for instances where the additional values are qualified as "object has role: obsolete inventory number". Is there a way to get the bot to leave these cases out of the error report? Or do you know somebody who can answer this question for us? Best, Jonathan Groß (talk) 13:59, 5 November 2023 (UTC)

Unfortunately I know this issue. The bot which updates the database reports of constraint violations has a few limitations, including not taking into account qualifiers which "normalize" single-value constraint violations, and not taking into account ranks. These limitations have been reported, but never solved. However, the query on Property talk:P12042 has none of these limitations, and has the additional advantage of using real-time data instead of weekly updates. Good evening from Athens! Epìdosis 16:02, 5 November 2023 (UTC)
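(For reference, a real-time check of this kind can be sketched as below. This is not the exact query from Property talk:P12042; it simply skips any P12042 value carrying an "object has role" (P3831) qualifier, which is how the obsolete numbers are marked in this thread.)

```python
# Sketch of a single-value check for P12042 that ignores values qualified
# with "object has role" (P3831). Not the exact query from the property talk
# page; run it at https://query.wikidata.org for real-time results.
SPARQL = """
SELECT ?item (COUNT(?value) AS ?n) WHERE {
  ?item p:P12042 ?st .
  ?st ps:P12042 ?value .
  FILTER NOT EXISTS { ?st pq:P3831 ?role . }  # skip role-qualified (obsolete) values
}
GROUP BY ?item
HAVING (?n > 1)
"""
```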
I see, thank you! Up until now I've used the violations reports as a basis for clearing these errors, but I can try switching to queries and finally acquaint myself with SPARQL.
So you're in Athens again, congrats! Hope you have a good time. Give my best to all the Wikimedians. Jonathan Groß (talk) 13:07, 6 November 2023 (UTC)

Ciao Epìdosis, the identifier corresponding to Polo bibliografico della ricerca title ID (P9598) has gone from 7 to 8 digits. I edited the property page (example items and usage instructions on Wikidata), but I don't know how to edit the discussion Property_talk:P9598, where the allowed values should be changed to \d{8}; could you do it, please? ciao --moz (talk) 13:55, 2 November 2023 (UTC)

@Elena moz: you hadn't changed the property constraint; it's OK now. --Epìdosis 14:03, 2 November 2023 (UTC)
@Epìdosis: thanks a lot! --moz (talk) 14:11, 2 November 2023 (UTC)

Wrong category

Ciao Camillo, I've just created a category on Wikipedia, but I didn't create the Wikidata item because of a mistake of mine: it's "Opere di Camilla Läckberg", which should also contain her novels. Instead, I arranged things so that the works (a collection of short stories) ended up inside the "Romanzi" category. Sorry for always being so scatterbrained; please see what a mess came of it. Ciao, AmaliaMM (talk) 06:57, 5 November 2023 (UTC)

@AmaliaMM: ✓ Done. Have a good evening! --Epìdosis 16:08, 5 November 2023 (UTC)

Le ali della colomba

Ciao, and thanks for your previous help. Now I've made yet another mess by creating the disambiguation page Le ali della colomba, which is needed, given the high number of works with the same title. Unfortunately, I found another disambiguation page with the title in English (you'll find it under "What links here"); I think the page made today is more complete, so the other one could be removed - or whatever seems right to you. Thanks, AmaliaMM (talk) 07:39, 7 November 2023 (UTC)

Another problem... I created a new item titled "La notte del peccato", all because the page of the French film "Léviathan (film, 1962)" seemed to have no WD item. Now there's a page to delete... I'm really hopeless! --AmaliaMM (talk) 17:24, 7 November 2023 (UTC)
@AmaliaMM: ✓ Done, both. --Epìdosis 23:22, 7 November 2023 (UTC)
Thank youuu (and so the night of anguish ends too :D) AmaliaMM (talk) 04:40, 8 November 2023 (UTC)


Good morning. For a few weeks I have been collaborating with the Wikipedia in Tarantino (roa-tara), which lately seems very lively to me. I have run into these technical problems: I added Tarantino to my Babel on Wikidata, and a "NAP X" came out which I cannot explain. Tarantino, as far as I know, has nothing to do with Neapolitan; it does not derive from it at all. Moreover, if I want to fill in "Language Label Description" in this language, the box does not open and I cannot write anything. I have also noticed that, even when the entry exists in Tarantino, when I am on the same entry in a different language, Tarantino does not appear as a "suggested language" among those which have that entry; but, for example, Ligurian, Lombard, Neapolitan, Venetian etc. are present. I believe all this affects the correct visibility of the Tarantino entries in search engines. Thanks in advance for your answer. Fausta Samaritani (talk) 10:34, 7 November 2023 (UTC)

@Fausta Samaritani: good evening; yes, I knew there were some technical problems related to roa-tara, but I had never investigated. I am fairly convinced that the Babel problem showing NAP-X is strictly connected to the problem of Tarantino not appearing in the box of labels, descriptions and aliases; I tried it myself and I confirm the problem; since it seems the bug had never been reported, I opened a report at phab:T350746. As for the second problem, I need one or two examples of entries where this happens to you, in order to understand the problem better and evaluate how to report it (maybe it is an already known bug, phab:T322244, but I am not sure without seeing some concrete examples). I am sorry I cannot do more: since these are bugs in the software, obviously I cannot intervene directly, as I don't know how to program; I can only report the problem and hope it gets solved. Talk soon, --Epìdosis 23:32, 7 November 2023 (UTC)
Good evening. Easily said: Tarantino never appears among the "suggested" languages. An example: Taranto is called Tarde. See: [1]. If you go to the "Taranto" entry, Tarantino does not appear among the suggested languages. I can see the effects, but I am not able to grasp the causes of these hitches, so I rely on your experience. By the way, I recently passed through Pisa and learned that exercises on Wikidata have been held at the University. Fausta Samaritani (talk) 23:51, 7 November 2023 (UTC)
@Fausta Samaritani: that's strange; going to w:it:Taranto I see, in the left-hand column together with the links to the other Wikipedias, also the link to the entry w:roa-tara:Tarde. This may depend on the fact that in Special:GlobalPreferences#mw-prefsection-rendering I have deselected "Use a compact language list, with languages relevant to you", so I see all the languages and not only a selection; you can try to do the same (there are two checkboxes next to "Use a compact language list": you can try ticking the left-hand box and unticking the right-hand one) and see if the problem is solved; otherwise write to me again and I will think of another possible solution. As for the University of Pisa, I think it is the course of prof. Vittore Casarosa, where since 2019 I have given a lecture and an exercise session on Wikidata every academic year, with good success (User:Epìdosis/Lezioni/UniPi); this year too we will repeat the experience, this very month. Have a good evening, --Epìdosis 17:29, 8 November 2023 (UTC)
Yes, indeed it unblocks, and all the languages are visible. But one doubt remains: by what criterion were the suggested "Italian" languages selected? And the problem of my Babel remains. How do I open the "Language Label Description" box for entries in Tarantino? Fausta Samaritani (talk) 17:47, 8 November 2023 (UTC)
@Fausta Samaritani: I believe the suggested languages depend on the geographical coordinates (broadly speaking) of the place from which one connects, and (maybe) on the Wikipedias one has contributed to, but I am not so sure, especially about the second hypothesis. I am glad my proposal solved the problem. As for Tarantino in "language label description", I have opened the bug report (phab:T350746), but honestly I don't think it will be solved soon; the only solution I have in mind, not the most convenient one, is the following: you can enable the gadget "labelLister" in Special:Preferences#mw-prefsection-gadgets; next to the history of each item a link "Labels" will appear; following that link, you can click on the "Edit" link at the bottom, type the code roa-tara, press OK, then add the label, description and any aliases in Tarantino and save. The system is obviously less convenient than having Tarantino next to the other languages in the box displayed immediately in the item, but at the moment I have no better solutions. Of course, I remain available if you have other doubts or questions! Talk soon, --Epìdosis 06:04, 9 November 2023 (UTC)
Thanks for the answers; I will treasure them. Fausta Samaritani (talk) 08:10, 9 November 2023 (UTC)

Please protect Q255704 and Q72795 due to vandalism

Good evening @Epìdosis. I would like to inform you that the IP (talk · contribs · logs) is vandalizing both Q255704 and Q72795, deliberately changing the labels and descriptions in English and some other languages ([2], [3] and [4]).

I reverted them ([5] and [6]) and notified them on their talk page ([7]), asking them not to make those edits again without getting consensus, but they blanked their talk page twice ([8] and [9]) and continue with their edits ([10]). Please consider protecting both entries and restoring them to their stable versions if the changes continue. All the best, 19:33, 9 November 2023 (UTC)

As a postscript, the user @Hjart also reverted some edits by this IP; see an example here. 19:38, 9 November 2023 (UTC)
I agree with your analysis: the labels should usually be the same as the article's title in the corresponding Wikipedia, unless there is specific consensus for a different solution; other names can of course go into the aliases. Both items had last been edited by you restoring the old labels; I semiprotected them both for two weeks. In order to have quicker intervention than a single admin (like me) can guarantee, you can write directly to WD:AN, which is monitored by many admins at the same time. I take the opportunity to encourage you to register an account, so that you can continue to edit these semiprotected items. Good evening, --Epìdosis 21:16, 9 November 2023 (UTC)


Ciao Epì, I found two items referring to the same novel: [11] and [12]. Although the first contains the record of the original novel, I put more information on the second (I found it first). See if you can do something... thanks, AmaliaMM (talk) 15:39, 11 November 2023 (UTC)

@AmaliaMM: ✓ Done, merged; merging is very simple - you can see Help:Merge/it#Accessorio and ask me if you have doubts. Talk soon, --Epìdosis 16:50, 11 November 2023 (UTC)

Removing link

Why did you remove the link in Eti Craif (Q116689521), field occupation (P106), to Sex for Judgeship scandal (Q61126904)? They are directly connected, since the article describes both her becoming a judge and resigning from that position. DGtal (talk) 09:31, 19 November 2023 (UTC)

Hi @DGtal:! Yesterday I discovered a wide range of misuses of statement is subject of (P805) as a qualifier of occupation (P106) and I was trying to resolve them, possibly reaching the conclusion of removing P805 from the allowed qualifiers of P106; in the case of Eti Craif (Q116689521), my initial thought was that significant event (P793) as a main statement could be sufficient to link it to Sex for Judgeship scandal (Q61126904); however, continuing to check these cases, I have reached the conclusion that a small number of the uses of P805 as a qualifier of P106 are correct, so I also reconsidered the case of Q116689521 and I think restoring the qualifier is the best solution, which I have now done. Thanks for writing to me. Bye, --Epìdosis 10:54, 19 November 2023 (UTC)
Thanks for all your work. DGtal (talk) 10:58, 19 November 2023 (UTC)

Error in SBN

There is a typo to correct in this date of birth here on SBN. DarioSolar (talk) 15:58, 22 November 2023 (UTC)

@DarioSolar: thanks a lot; I have just edited the authority record, removing the typo. Have a good evening, --Epìdosis 17:43, 22 November 2023 (UTC)

Distinction between death and "no further news of him"

Good day; after a career as an itinerant printer, the good Bertocchi apparently became a priest (1502) and news of him was lost, whereas for VIAF he died on that date. For now I have added "unknown value" to date of death (P570) with earliest date (P1319) 1502; is there a way to specify it differently? divudì 09:21, 26 November 2023 (UTC)

@Divudi85: you can also add floruit (P1317)→"1502", and then I'd say it's perfect! Have a good Sunday, --Epìdosis 09:22, 26 November 2023 (UTC)