- Added support for Python scripts that extract data from HTML web pages, plus a "sozd" example demonstrating script usage.
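As a rough illustration of the kind of extraction such a script might perform, here is a minimal HTML link extractor using only the standard library. This is a hypothetical sketch, not apibackuper's actual script interface; all names here are invented for illustration.

```python
# Illustrative only: extract href values from an HTML page.
# Uses the stdlib html.parser; not tied to apibackuper's script API.
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href attributes from <a> tags as the parser walks the page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html_text):
    """Return all link targets found in html_text."""
    parser = LinkExtractor()
    parser.feed(html_text)
    return parser.links


if __name__ == "__main__":
    sample = '<a href="/doc/1">one</a> <a href="/doc/2">two</a>'
    print(extract_links(sample))  # → ['/doc/1', '/doc/2']
```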
- Fixed "continue" mode: it now works for the "run" command as well as the "follow" command. Use "apibackuper run continue" to resume a run stopped by an error or by user input.
- Added "default_delay", "retry_delay", and "retry_count" options to manage error handling.
- On HTTP status 500 or 503, the latest request is retried until HTTP status 200 is returned or "retry_count" is exhausted.
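The retry behaviour could be configured along these lines. This is a sketch only: the section name and the values shown are assumptions for illustration, not confirmed defaults from apibackuper's config format.

```ini
; hypothetical apibackuper.cfg fragment -- section name and values are assumed
[params]
default_delay = 1    ; seconds between ordinary requests
retry_delay = 30     ; seconds to wait before retrying after a 500/503
retry_count = 5      ; give up after this many failed retries
```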
- Minor fixes
- Added "start_page" option for APIs whose first page is not 1 (it can sometimes be 0).
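For a zero-indexed API, the option might be set like this. Again a sketch: the section name is an assumption, only the "start_page" option name comes from the entry above.

```ini
; hypothetical apibackuper.cfg fragment -- section name is assumed
[params]
start_page = 0    ; this API numbers its pages from 0, not 1
```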
- Added support for data returned as a JSON array (rather than a JSON dict) when "data_key" is not provided.
- Added initial code to implement Frictionless Data packaging
- Added several new options
- Added aria2 support for downloading files.
- Switched to a permanent "storage" directory instead of the temporary "temp" directory.
- Added the "follow" command, which makes additional requests to fetch extra info on retrieved objects.
- Added the "getfiles" command to retrieve files linked to retrieved objects.
- First public release on PyPI and updated GitHub code.