@ArthurPSmith I think the extreme slowness happens on all updates. Currently only 2 of my batches are running, and they seem to be processing about one triple per minute. All these items are people, so some of them are pretty large, but why should an update to a large item take more time? That's a serious problem in the architecture of Wikidata. cc @Lea Lacroix (WMDE), @Lucas Werkmeister (WMDE), @Lydia_Pintscher_(WMDE).
@Jheald and @Magnus Manske the error rate on some of the batches is very bad; the batch below is at 50%. Curiously, all the "WorldCat id" statements have gone through, while all the "from VIAF id" references have failed:
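(For anyone not familiar with the batch format: a single QuickStatements V1 command adds a statement together with its reference, with the source fields given as S-prefixed properties. A minimal sketch with illustrative values, not the actual commands from my batch; fields are TAB-separated:)

```
Q42	P214	"113230702"	S143	Q54919
```

Here the first three fields add a VIAF ID (P214) statement, and the S143/Q54919 pair attaches an "imported from" reference pointing to VIAF. The statement part and the reference part can succeed or fail independently, presumably because QS issues separate API calls for the claim and for its reference, which would explain the pattern I'm seeing.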
@Jura1 Nice joke :-) It definitely is lohi-lohi.
@Hogü-456 I haven't tried with a bot, but QS itself is essentially a bot, and any bot that respects maxlag will face the same problems. Using SPARQL Update is not possible on Wikidata because its primary store is relational, not RDF. Each of these statements is recorded as a diff, can be discussed, and I can even be thanked for it... so in this respect WD has a lot more features than a normal triplestore. But still: the extreme slowness of Wikidata will be its doom if the dev team cannot fix it soon.
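To make the maxlag point concrete, here is a minimal sketch of what "respecting maxlag" looks like for a hand-rolled client (plain `requests`; the retry policy and helper name are my own, not QS internals). When replication lag exceeds the `maxlag` value you pass, the API refuses the request with a "maxlag" error and suggests a wait via the Retry-After header, so a well-behaved bot sleeps and retries, which is exactly why all compliant bots slow to a crawl together:

```python
import time
import requests

API = "https://www.wikidata.org/w/api.php"

def api_get(params, maxlag=5, max_tries=8):
    """Call the Wikidata action API, backing off whenever the servers
    report replication lag above `maxlag` seconds."""
    params = dict(params, format="json", maxlag=maxlag)
    for _ in range(max_tries):
        r = requests.get(API, params=params)
        data = r.json()
        if data.get("error", {}).get("code") == "maxlag":
            # Servers are lagged: wait as suggested by Retry-After, then retry.
            time.sleep(int(r.headers.get("Retry-After", 5)))
            continue
        return data
    raise RuntimeError("gave up: servers stayed lagged after %d tries" % max_tries)

# Example: read an item (writes would go through the same guard).
entity = api_get({"action": "wbgetentities", "ids": "Q42"})
```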