Wikidata:Dataset Imports/Unpaywall
Guidelines for using this page
Documenting the import
- Guidelines on how to import a dataset into Wikidata are available at Wikidata:Data Import Guide.
- Please include notes on all steps of the process.
- Once a dataset has been imported into Wikidata, please edit the page to change the progress status from "in progress" to "complete".
- It is strongly recommended to use Visual Editor when making changes to this page, particularly for editing any of the tables.
Creating a Wikidata item for the dataset
- Please create a Wikidata item for the dataset. This will allow us to improve the coverage of datasets on Wikidata and to understand which datasets are available on a given topic and which of them have been added to Wikidata.
- If you are working with a very large dataset, you can break it into smaller Mix n' Match catalogues, but only create one Wikidata item.
- Link the dataset Wikidata item to this page using Wikidata Dataset Imports page (P5195).
Getting help
- If your dataset import runs into issues, please edit the page to change the progress status from "in progress" to "help needed".
- You can ask for help on Wikidata:Project chat.
Overview
Dataset name
Unpaywall
Source
Impactstory
Link
Dataset description
A dataset linking DOIs to freely available versions of articles, including preprints (e.g. on arXiv) and copies held in institutional repositories.
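For orientation, the sketch below fetches a single Unpaywall record over the public REST API to show the kind of linkage the dataset provides. The endpoint, the required email parameter, and the field names are assumptions based on the Unpaywall documentation as understood at the time of writing; the DOI shown is a placeholder.

```python
import requests

# Hedged sketch: endpoint, "email" parameter, and field names are assumptions
# based on the Unpaywall documentation; replace the placeholder DOI with a
# real one before running.
DOI = "10.1000/example"        # placeholder DOI, not a real article
EMAIL = "you@example.org"      # Unpaywall asks for a contact email

resp = requests.get(f"https://api.unpaywall.org/v2/{DOI}", params={"email": EMAIL})
resp.raise_for_status()
record = resp.json()

# best_oa_location, if present, points at the best freely available copy
# (repository, preprint server such as arXiv, or publisher page).
best = record.get("best_oa_location") or {}
print(record.get("doi"), best.get("host_type"), best.get("url_for_pdf") or best.get("url"))
```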
Additional information
Progress of import
The table below is used to track the progress of importing this dataset. The suggested column headings are most applicable to data being imported from a spreadsheet; you can change some column headings or add new columns as required to best describe the progress of this import.
Data sets used:
- Unpaywall (Q38352586) for the DOI to (arXiv ID, full-text PDF) mapping
- inventaire.io (Q32193244) data dump for the DOI to Wikidata item (QID) mapping (a minimal matching sketch follows below)
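A minimal sketch of the matching step, assuming both dumps have already been reduced to two-column CSV files; the file names and layouts below are hypothetical, not the actual dump formats.

```python
import csv

# Hedged sketch of the matching step. Both inputs are assumed to have been
# extracted beforehand into two-column CSVs; names and layouts are
# hypothetical, not the actual dump formats.
#   unpaywall_doi_arxiv.csv : doi,arxiv_id   (derived from the Unpaywall dump)
#   inventaire_doi_qid.csv  : doi,qid        (derived from the inventaire.io dump)

def load_map(path):
    with open(path, newline="", encoding="utf-8") as f:
        # DOIs are case-insensitive, so normalise to lower case before joining.
        return {row[0].strip().lower(): row[1].strip() for row in csv.reader(f)}

doi_to_arxiv = load_map("unpaywall_doi_arxiv.csv")
doi_to_qid = load_map("inventaire_doi_qid.csv")

# Keep only DOIs present in both mappings: these are the Wikidata items
# that can receive an arXiv ID (P818) statement.
matches = [(doi_to_qid[doi], arxiv) for doi, arxiv in doi_to_arxiv.items()
           if doi in doi_to_qid]
print(len(matches), "items matched")
```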
Subtask | Process import | Structure of data within Wikidata | Match the dataset to Wikidata | Importing data into Wikidata |
---|---|---|---|---|
Import arXiv identifiers | | Automatic detection and decoding of honey bee waggle dances. (Q47562874) | | Export using QuickStatements (Q20084080) |
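The export step can be a plain text file of QuickStatements version 1 commands: tab-separated columns of item, property and value, followed by reference columns, with string values double-quoted and S248 ("stated in") pointing at Unpaywall (Q38352586). A minimal sketch, continuing from the hypothetical matches above; the pair shown uses the Wikidata sandbox item and a placeholder arXiv ID.

```python
# Minimal sketch: write QuickStatements V1 commands adding arXiv ID (P818)
# with a "stated in" (S248) reference to Unpaywall (Q38352586).
matches = [("Q4115189", "0000.00000")]  # sandbox item + placeholder arXiv ID

with open("arxiv_quickstatements.tsv", "w", encoding="utf-8") as out:
    for qid, arxiv_id in matches:
        out.write(f'{qid}\tP818\t"{arxiv_id}"\tS248\tQ38352586\n')
```

The resulting file can then be pasted into the QuickStatements tool as a batch.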
Edit history
Use the table below to list batches of edits that have been completed for this dataset. Ideally each entry should have all applicable columns filled out, but at a minimum please make sure to add a date and description to give an idea of what was added to Wikidata and when.
Date | Description | Method | Properties | Qualifiers | References | Statements added | Statements removed | Link to import sheet |
---|---|---|---|---|---|---|---|---|
27 Oct 2018 | Initial arXiv import | See above | arXiv ID (P818) | - | Unpaywall (Q38352586) | 7217 | 0 | [2] |
Discussion of import
These headings are generally useful; please change this section to suit your needs.
Wikidata item for dataset
Import data into spreadsheet
Format the spreadsheet to import the data
Structure of data within Wikidata
Field name | Wikidata property | Notes |
---|---|---|
Name1 | Property1 | Notes1 |
Name2 | Property2 | Notes2 |
Name3 | Property3 | Notes3 |
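The table above was left as a template. As a placeholder for the eventual mapping, the sketch below lists properties that plausibly correspond to the fields of this import; the field names are hypothetical, and only the arXiv ID mapping is confirmed by the edit history on this page.

```python
# One possible field-to-property mapping for this import. Field names are
# hypothetical; only the arXiv ID (P818) mapping is confirmed by the edit
# history above.
FIELD_TO_PROPERTY = {
    "doi": "P356",       # DOI
    "arxiv_id": "P818",  # arXiv ID (imported in the first batch)
    "pdf_url": "P953",   # full work available at URL
}
```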
Match the dataset to Wikidata
Importing data into Wikidata
Import completion notes
Visualisations
Maintenance
Queries and expected results
Query link | Description | Expected results |
---|---|---|
Link1 | Description1 | Expected results 1 |
Link2 | Description2 | Expected results 2 |
Link3 | Description3 | Expected results 3 |
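One maintenance query that would fit the first row of this table: count the items whose arXiv ID (P818) statement carries a "stated in" (P248) reference to Unpaywall (Q38352586). The expected result should roughly match the 7,217 statements recorded in the edit history above, plus any later batches. A minimal sketch against the public query service:

```python
import requests

# Maintenance check: count items whose arXiv ID (P818) statement has a
# "stated in" (P248) reference to Unpaywall (Q38352586).
QUERY = """
SELECT (COUNT(DISTINCT ?item) AS ?count) WHERE {
  ?item p:P818/prov:wasDerivedFrom/pr:P248 wd:Q38352586 .
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "UnpaywallImportCheck/0.1 (example contact)"},
)
resp.raise_for_status()
print(resp.json()["results"]["bindings"][0]["count"]["value"])
```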