Wikidata:Data access


Wikidata currently contains over 100 million Items and over 650,000 Lexemes, and these numbers will keep on growing. There are many methods available to access all that data -- this document lays them out and helps prospective users choose the best method to suit their needs.

It's crucial to choose an access method that gives you the data you need in the quickest, most efficient way while not putting unnecessary load on Wikidata; this page is here to help you do just that.

Before we begin

Using Wikidata's data

The Wikidata logo

Wikidata offers a wide range of general data about our universe, as well as links to other databases. The data is published under the CC0 "Public domain dedication" license. It can be edited by anyone and is maintained by the community of Wikidata editors.

Changes to the APIs and data formats used to access Wikidata are subject to the Stable Interface Policy. Changes to stable interfaces are announced accordingly. Note that not all data sources mentioned on this page are considered stable interfaces.

Wikimedia projects

This document is about accessing data from outside Wikimedia projects. If you need to present data from Wikidata in another Wikimedia project, where you can employ parser functions, Lua and/or other internal-only methods, refer to How to use data on Wikimedia projects.

Data best practices

Wikidata is made by volunteers, people like you and me

Wikidata offers its data freely under CC0, with no obligation to name the source. Nevertheless, we would greatly appreciate it if Wikidata were credited as the source of the data. This helps ensure that the project endures in the long term and can continue to provide current, high-quality data. We also promote the best projects that use Wikidata.

Some examples for attributing Wikidata: "Powered by Wikidata", "Powered by Wikidata data", "Powered by the magic of Wikidata", "Using Wikidata data", "With data from Wikidata", "Data from Wikidata", "Source: Wikidata", "Including data from Wikidata" and so forth. You can also use one of our ready-made files.

The Wikidata logo (see above) may also be used, but please not in a way that suggests Wikidata or the Wikimedia Foundation is organizationally affiliated with your project in any form.

Please give your users a way to report problems and questions about the data, and make that feedback available to the Wikidata community. We are currently working on making this easier. Until then, please mention in the Project chat where you collect user feedback.

Access best practices

When accessing Wikidata's data, observe the following best practices:

  • Follow the User-Agent policy -- send a good User-Agent header.
  • Follow the robot policy: send Accept-Encoding: gzip,deflate and don’t make too many requests at once.
  • If you get a 429 Too Many Requests response, stop sending further requests for a while (see the Retry-After response header)
  • When available (such as with the Wikidata Query Service), set the lowest timeout that makes sense for your data.
  • When using the MediaWiki Action API, make liberal use of the maxlag parameter and consult the rest of the guidelines laid out in API:Etiquette.
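The practices above can be sketched in code. This is a minimal example of building a polite request with Python's standard library; the tool name and contact address in the User-Agent string are placeholders you must replace with your own:

```python
import urllib.request

# Hypothetical tool name and contact details -- the User-Agent policy asks
# that operators be reachable, so substitute your own values here.
USER_AGENT = "MyWikidataTool/1.0 (https://example.org/mytool; mytool@example.org)"

def build_request(url: str) -> urllib.request.Request:
    """Build a request that follows Wikidata's access best practices."""
    return urllib.request.Request(url, headers={
        "User-Agent": USER_AGENT,           # identify your tool (User-Agent policy)
        "Accept-Encoding": "gzip,deflate",  # reduce transfer size (robot policy)
    })

# maxlag asks the Action API to refuse work while the servers are lagged;
# 5 seconds is the value commonly recommended in API:Etiquette.
req = build_request(
    "https://www.wikidata.org/w/api.php"
    "?action=wbgetentities&ids=Q42&format=json&maxlag=5"
)
```

Sending the request with `urllib.request.urlopen(req)` (and backing off on a 429 response) completes the picture.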

Search

What is it?

Wikidata offers an Elasticsearch index for traditional searches through its data: Special:Search

When to use it?

Use search when you need to look for a text string, or when you know the names of the entities you're looking for but not the exact entities themselves. It's also suitable for cases in which you can specify your search based on some very simple relations in the data.

Don't use search when the relations in your data are better described as complex.

Details

You can make your search more powerful with these additional keywords specific to Wikidata: haswbstatement, inlabel, wbstatementquantity, hasdescription, haslabel. This search functionality is documented on the CirrusSearch extension page. It also has its own API action.
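As a sketch of the search API action, the following builds a query URL that combines free text with the Wikidata-specific haswbstatement keyword. The statement value P31=Q5 ("instance of: human") is only an illustration; the request is constructed but not sent:

```python
from urllib.parse import urlencode

# Full-text search restricted by a statement filter, via the MediaWiki
# search API (action=query&list=search).
params = {
    "action": "query",
    "list": "search",
    "srsearch": "haswbstatement:P31=Q5 Douglas",  # humans matching "Douglas"
    "format": "json",
}
search_url = "https://www.wikidata.org/w/api.php?" + urlencode(params)
```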

Linked Data Interface (URI)

What is it?

The Linked Data Interface provides access to individual entities via URI: http://www.wikidata.org/entity/Q???

When to use it?

Use the Linked Data Interface when you need to obtain individual, complete entities that are already known to you.

Don't use it when you're not clear on which entities you need -- first try searching or querying. It's also not suitable for requesting large quantities of data.

Details

Meet Q42

Each Item and Property has a URI, obtained by appending its ID (e.g. Q42 or P12) to the Wikidata namespace.

The namespace for Wikidata's data about entities is https://wikidata.org/wiki/Special:EntityData.

Appending an entity's ID to this prefix yields the "abstract" (format-neutral) form of the entity's data URL. When you request a Special:EntityData URL, content negotiation on that special page determines the format of Wikidata's output. Most likely you opened the URL in a regular web browser and were shown an HTML page with Wikidata's data about the entity, because a web browser prefers HTML over other formats. Linked-data clients would receive Wikidata's data about the entity in another format such as JSON or RDF, depending on the HTTP Accept header of their request.

For example, take this concept URI for Douglas Adams -- that's a reference to the real-world person, not to Wikidata's concrete description:
http://www.wikidata.org/entity/Q42
As a human being with eyes and a browser, you will likely want to access data about Douglas Adams by using the concept URI as a URL. Doing so triggers an HTTP redirect and forwards the client to the data URL that contains Wikidata's data about Douglas Adams: https://www.wikidata.org/wiki/Special:EntityData/Q42.

If content negotiation does not work well (e.g. for viewing non-HTML content in a web browser), you can access individual records in a specific format by appending an extension for the desired data format to the data URL, such as .json, .rdf, .ttl, .nt or .jsonld. For example, https://www.wikidata.org/wiki/Special:EntityData/Q42.json yields a JSON export of Item Q42. Specific revisions can be obtained by appending a revision query parameter, as in https://www.wikidata.org/wiki/Special:EntityData/Q42.json?revision=112.
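The three access paths described above can be sketched as follows. The requests are only built here, not sent; `urllib.request.urlopen(req)` would fetch them:

```python
import urllib.request

# 1. Concept URI with an explicit Accept header -- content negotiation
#    redirects a linked-data client to the matching data URL.
rdf_req = urllib.request.Request(
    "http://www.wikidata.org/entity/Q42",
    headers={"Accept": "text/turtle"},
)

# 2. Data URL with an explicit format extension -- no negotiation needed.
json_url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.json"

# 3. A specific revision of the entity via the revision query parameter.
rev_url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.json?revision=112"
```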

Less verbose RDF output

By default, the RDF output of the Linked Data Interface includes descriptions of the other entities it references, making it self-contained. Use ?flavor=dump to exclude such information.

By appending a flavor parameter to the URL, you can control exactly what kind of data gets returned.

  • ?flavor=dump: Excludes descriptions of entities referred to in the data.
  • ?flavor=simple: Provides only truthy statements (best-ranked statements without qualifiers or references), along with sitelinks and version information.
  • ?flavor=full (default): An argument of "full" returns all data. (You don't need to specify this because it's the default.)

If you want a deeper insight into exactly what each option entails, you can take a peek into the source code.

Revisions and caching

You can request specific revisions of an entity with the revision query parameter: https://www.wikidata.org/wiki/Special:EntityData/Q42.json?revision=112.

The following URL formats are used by the user interface and by the query service updater, respectively, so if you use one of the same URL formats there’s a good chance you’ll get faster (cached) responses:

Wikidata Query Service

What is it?

The Wikidata Query Service (WDQS) is Wikidata's own SPARQL endpoint. It returns the results of queries made in the SPARQL query language: https://query.wikidata.org

When to use it?

Use WDQS when you know only the characteristics of your desired data.

Don't use WDQS for performing text or fuzzy search -- FILTER(REGEX(...)) is an antipattern. (Use search in such cases.)

WDQS is also not suitable when your desired data is likely to be large, a substantial percentage of all Wikidata's data. (Consider using a dump in such cases.)

Details

You can query the data in Wikidata through the SPARQL endpoint, the Wikidata Query Service. The service can be used both as an interactive web interface and programmatically, by sending GET or POST requests to https://query.wikidata.org/sparql. RDF data can alternatively be accessed through a Linked Data Fragments[1] interface at https://query.wikidata.org/bigdata/ldf. See the user manual and local community pages for more information.

The query service is best used when your intended result set is scoped narrowly, i.e., when you have a query you're pretty sure already specifies your resulting data set accurately. If your idea of the result set is less well defined, then the kind of work you'll be doing against the query service will more resemble a search; frequently you'll first need to do this kind of search-related work to sharpen up your query. See the Search section.
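A narrowly scoped query of this kind can be sent programmatically as sketched below. The query (all items that are instances of house cat, limited to 10 results) is illustrative, and the User-Agent string is a placeholder; the request is built but not sent:

```python
import urllib.parse
import urllib.request

# A small, well-scoped SPARQL query: 10 items with P31 = Q146 ("house cat").
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

sparql_req = urllib.request.Request(
    "https://query.wikidata.org/sparql?"
    + urllib.parse.urlencode({"query": query, "format": "json"}),
    headers={  # hypothetical tool identity -- replace with your own
        "User-Agent": "MyWikidataTool/1.0 (mytool@example.org)",
    },
)
# urllib.request.urlopen(sparql_req) would return SPARQL results as JSON.
```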

Linked Data Fragments endpoint

What is it?

The Linked Data Fragments (LDF) endpoint is a more experimental method of accessing Wikidata's data by specifying patterns in triples: https://query.wikidata.org/bigdata/ldf. Computation occurs primarily on the client side.

When to use it?

Use the LDF endpoint when you can define the data you're looking for using triple patterns, and when your result set is likely to be fairly large. The endpoint is good to use when you have significant computational power at your disposal.

Since it's experimental, don't use the LDF endpoint if you need an absolutely stable endpoint or a rigorously complete result set. And as mentioned before, only use it if you have sufficient computational power, as the LDF endpoint offloads computation to the client side.

Details

If you have partial information about what you're looking for, such as when you have two out of three components of your triple(s), you may find what you're looking for by using the Linked Data Fragments interface at https://query.wikidata.org/bigdata/ldf. See the user manual and community pages for more information.
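As a sketch, a triple pattern with two of three components fixed ("which subjects are instances of house cat?") can be expressed as query parameters on the LDF endpoint. The subject/predicate/object parameter names follow the Triple Pattern Fragments convention; verify them against the endpoint before relying on this:

```python
from urllib.parse import urlencode

# Two fixed components of the triple; the subject is left open.
pattern = {
    "predicate": "http://www.wikidata.org/prop/direct/P31",  # instance of
    "object": "http://www.wikidata.org/entity/Q146",         # house cat
}
ldf_url = "https://query.wikidata.org/bigdata/ldf?" + urlencode(pattern)
```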

MediaWiki Action API

What is it?

The Wikidata API is MediaWiki's own Action API, extended to include some Wikibase-specific actions: https://wikidata.org/w/api.php

When to use it?

Use the API when your work involves editing Wikidata. It's also suitable for situations when you need small groups of entities in JSON format (up to 50 entities per request).

Don't use the API when your result set is likely to be large. (Consider using a dump in such cases.)

The API is also poorly suited to situations in which you want to request the current state of entities in JSON. (For such cases consider using the Linked Data Interface, which is likelier to provide faster responses.)

Finally, it's probably a bad idea to use the API when you'll need to further narrow the result of your API request. In such cases it's better to frame your work as a search (for Elasticsearch) or a query (for WDQS).

Details

The MediaWiki Action API used for Wikidata is meticulously documented on Wikidata's API page. You can explore and experiment with it using the API Sandbox.
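For the small-batch JSON use case mentioned above, a wbgetentities request might be built like this. The entity IDs are illustrative, and the request is constructed but not sent:

```python
from urllib.parse import urlencode

# Up to 50 entity IDs per request, joined with "|".
ids = ["Q42", "Q64", "P31"]
params = {
    "action": "wbgetentities",
    "ids": "|".join(ids),
    "props": "labels|claims",  # restrict the response to what you need
    "languages": "en",
    "format": "json",
}
api_url = "https://www.wikidata.org/w/api.php?" + urlencode(params)
```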

Bots

We welcome well-behaved bots

The API can also be accessed by a bot. See Wikidata:Bots.

Recent Changes stream

What is it?

The Wikimedia Recent Changes event streams (https://stream.wikimedia.org) can be used to watch entity changes in real time. The Recent Changes API is also available, but it is not recommended for new tools, since it does not show the changes themselves and puts more load on the servers, because every entity change must be looked up individually.

When to use it?

Use the Recent Changes stream when your project requires you to react to changes in real time or when you need all the latest changes coming from Wikidata -- for example, when running your own query service.

Details

The Recent Changes stream contains all updates from all wikis using the server-sent events protocol. You'll need to filter Wikidata's updates out on the client side.

You can find the web interface at stream.wikimedia.org and read all about it on the EventStreams page.
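The client-side filtering described above can be sketched like this. A real client would read server-sent events from https://stream.wikimedia.org/v2/stream/recentchange; here a hand-written sample event (fields abbreviated, values illustrative) stands in for one "data:" payload:

```python
import json

sample_event = '{"wiki": "wikidatawiki", "type": "edit", "title": "Q42"}'

def is_wikidata_change(data_line: str) -> bool:
    """Keep only events originating from Wikidata itself."""
    event = json.loads(data_line)
    return event.get("wiki") == "wikidatawiki"

print(is_wikidata_change(sample_event))  # True for the sample above
```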

Dumps

What are they?

Wikidata dumps are complete exports of all the Entities in Wikidata: https://dumps.wikimedia.org

When to use them?

Use a dump when your result set is likely to be very large. You'll also find a dump important when setting up your own query service.

Don't use a dump if you need current data: the dumps take a very long time to export and even longer to sync to your own query service. Dumps are also unsuitable when you have significant limits on your available bandwidth, storage space and/or computing power.

Details

If the records you need to traverse are many, or if your result set is likely to be very large, it's time to consider working with a database dump: (link to the latest complete dump).

You'll find detailed documentation about all Wikimedia dumps on the "Data dumps" page on Meta and about Wikidata dumps in particular on the database download page. See also Flavored dumps above.

Tools

  • JsonDumpReader is a PHP library for reading dumps.
  • At [2] you'll find a Go library for processing Wikipedia and Wikidata dumps.
  • You can use wdumper to get partial custom RDF dumps.
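Because the JSON dump is one large JSON array with one entity per line, it can be streamed without loading the whole file into memory: each line, minus its trailing comma, parses on its own. This sketch uses a four-line hand-written sample in place of a real, multi-gigabyte dump file:

```python
import json

sample_dump_lines = [
    '[',
    '{"id": "Q42", "type": "item"},',
    '{"id": "Q64", "type": "item"}',
    ']',
]

def iter_entities(lines):
    """Yield one parsed entity per line of a Wikidata JSON dump."""
    for line in lines:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue  # skip the array brackets wrapping the entities
        yield json.loads(line)

ids = [entity["id"] for entity in iter_entities(sample_dump_lines)]
print(ids)  # ['Q42', 'Q64']
```

For a real dump, the same loop would iterate over a `gzip.open(...)` or `bz2.open(...)` file object instead of a list.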

Local query service

It's no small task to procure a Wikidata dump and implement the above tools for working with it, but you can take a further step. If you have the capacity and resources to do so, you can host your own instance of the Wikidata Query Service and query it as much as you like, out of contention with any others.

To set up your own query service, follow these instructions from the query service team, which include procuring your own local copy of the data. You may also find useful information in Adam Shorland's blog post on the topic.