Olly Butters edited this page Jul 21, 2017 · 5 revisions

We have found it troublesome to upload large numbers of papers to Zotero by hand, so we have written a simple upload script to help.

Call it like:

python uploadPapers.py --input <input file> --type doi --collection <zotero collection>

The script needs the following file tree to be in place before it will work, so manually make sure these directories exist (shortname is the short name of your study):

/cache/shortname/processed/upload
/cache/shortname/raw/doi
/cache/shortname/raw/pubmed/xml
/cache/shortname/upload

The input file goes in /cache/shortname/upload.
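If you are setting this up for several studies, the directories above can be created in one go. The sketch below is our own convenience snippet, not part of uploadPapers.py; SHORTNAME is a placeholder you should replace, and the root argument lets you build the tree somewhere other than /.

```python
import os

# Placeholder: replace with the short name of your study.
SHORTNAME = "mystudy"

# Directory tree uploadPapers.py expects, relative to the cache root.
REQUIRED_DIRS = [
    "cache/{s}/processed/upload",
    "cache/{s}/raw/doi",
    "cache/{s}/raw/pubmed/xml",
    "cache/{s}/upload",
]

def make_tree(root="/"):
    """Create every directory the script needs; a no-op for any that exist."""
    for d in REQUIRED_DIRS:
        path = os.path.join(root, d.format(s=SHORTNAME))
        os.makedirs(path, exist_ok=True)
        print("created", path)
```

Run make_tree() once before the first upload; rerunning it is harmless because exist_ok=True skips directories that are already there.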

Sometimes the upload fails with errors like "ssl.SSLError: ('The read operation timed out',)". This is not an error with a specific item, so running the script again may get past it. It may make sense to split the input file into chunks of, say, 200 entries, although this takes a bit of management: if one chunk fails you would not easily be able to tell where it went wrong. One option is to upload each chunk to its own library and then move the items across once the chunk has finished.
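Splitting the input file could be sketched like this. The split_input helper and the .chunkNNN naming convention are our own illustration, not part of uploadPapers.py; you would then run the script once per chunk file.

```python
def split_input(input_path, chunk_size=200):
    """Split a one-DOI-per-line input file into chunk files of at most
    chunk_size lines, so an SSL timeout only costs one chunk, not the
    whole run. Returns the list of chunk file paths."""
    with open(input_path) as f:
        lines = [l for l in f.read().splitlines() if l.strip()]
    chunk_paths = []
    for n, start in enumerate(range(0, len(lines), chunk_size)):
        chunk_path = "%s.chunk%03d" % (input_path, n)
        with open(chunk_path, "w") as out:
            out.write("\n".join(lines[start:start + chunk_size]) + "\n")
        chunk_paths.append(chunk_path)
    return chunk_paths
```

Each chunk file can then be passed to uploadPapers.py via --input; if a run times out, only that chunk needs to be retried.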

You can rerun the script; Zotero will return a status of "unchanged" for items that already exist, although it does actually add them to Zotero again!