This format is also used by many other Microsoft formats, meaning that you can use The Unarchiver to extract the internal data from DOC and PPT files, among others. There is probably no reason to do this, but you can.
2. The CUMI command for the Message Archiver CLI is not "/useCUMI"; that's the CLI option for COBRAS Export. The Message Archiver CLI uses single-character switches, and "-h" means "use HTTP" instead of IMAP. I know this works, as a couple of sites I was working with just last week used it. The tool's page has a breakdown of all the commands:
I get the error pg_restore: [archiver] unsupported version (1.13) in file header when I try to restore a dump file that was created with the Heroku PGBackups feature (heroku pg:backups:capture or scheduled backups) or with pg_dump on a one-off dyno.
pt-archiver is extensible via a plugin mechanism. You can inject your own code to add advanced archiving logic that could be useful for archiving dependent data, applying complex business rules, or building a data warehouse during the archiving process.
pt-archiver does not check for errors when it commits transactions. Commits on PXC can fail, but the tool does not yet check for or retry the transaction when this happens. If it happens, the tool will die.
If you specify --progress, the output is a header row, plus status output at intervals. Each row in the status output lists the current date and time, how many seconds pt-archiver has been running, and how many rows it has archived.
If you do want to use the ascending index optimization (see --no-ascend), but do not want to incur the overhead of ascending a large multi-column index, you can use this option to tell pt-archiver to ascend only the leftmost column of the index. This can provide a significant performance boost over not ascending the index at all, while avoiding the cost of ascending the whole index.
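The trade-off can be sketched as follows. This is a hypothetical illustration, not pt-archiver's actual query generation (a real tool would bind values as placeholders rather than interpolate them); it only shows the shape of the resume clause when ascending the whole index versus only its leftmost column.

```python
# Hypothetical sketch of the WHERE clause a tool could emit to resume an
# index scan after the last fetched row. Not pt-archiver's real code.

def ascend_full(index_cols, last_row):
    """Resume after last_row using the whole multi-column index,
    via a row-constructor comparison (a, b, c) > (x, y, z)."""
    cols = ", ".join(index_cols)
    vals = ", ".join(str(last_row[c]) for c in index_cols)
    return f"WHERE ({cols}) > ({vals})"

def ascend_first(index_cols, last_row):
    """Resume using only the leftmost index column: cheaper to evaluate,
    but it may revisit rows sharing the same leftmost value."""
    c = index_cols[0]
    return f"WHERE {c} >= {last_row[c]}"

idx = ["site_id", "created_at", "id"]
last = {"site_id": 7, "created_at": 1700000000, "id": 1234}
print(ascend_full(idx, last))   # compares all three columns
print(ascend_first(idx, last))  # compares only site_id
```

Because archived rows are typically deleted as the job proceeds, the looser leftmost-column comparison usually does little redundant work in practice.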
Enabled by default; causes pt-archiver to check that the source and destination tables have the same columns. It does not check column order, data type, etc. It just checks that all columns in the source exist in the destination and vice versa. If there are any differences, pt-archiver will exit with an error.
Specify a comma-separated list of columns to fetch, write to the file, and insert into the destination table. If specified, pt-archiver ignores other columns unless it needs to add them to the SELECT statement for ascending an index or deleting rows. It fetches and uses these extra columns internally, but does not write them to the file or to the destination table. It does pass them to plugins.
This option is useful as a shortcut to make --limit and --txn-size the same value, but more importantly it avoids transactions being held open while searching for more rows. For example, imagine you are archiving old rows from the beginning of a very large table, with --limit 1000 and --txn-size 1000. After some period of finding and archiving 1000 rows at a time, pt-archiver finds the last 999 rows and archives them, then executes the next SELECT to find more rows. This scans the rest of the table, but never finds any more rows. It has held open a transaction for a very long time, only to determine it is finished anyway. You can use --commit-each to avoid this.
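The difference in commit placement can be illustrated with a small simulation. This is a hypothetical sketch, not pt-archiver's internals: it only models where commits fall relative to SELECTs, showing that without --commit-each the final 999 rows stay uncommitted while the last, fruitless full-table scan runs.

```python
# Illustrative simulation of commit placement (hypothetical, not
# pt-archiver's code). Each batch is one SELECT's worth of rows.

def archive_without_commit_each(batches, txn_size):
    """Commit only once txn_size rows accumulate; a short final batch
    stays uncommitted across the next (empty) SELECT."""
    events, pending = [], 0
    for batch in batches + [[]]:          # trailing empty SELECT
        events.append(f"SELECT -> {len(batch)} rows")
        pending += len(batch)
        if pending >= txn_size:
            events.append("COMMIT")
            pending = 0
    if pending:
        events.append("COMMIT")           # only now is the tail committed
    return events

def archive_commit_each(batches):
    """Commit after every fetch, so nothing is held open during the
    last, empty SELECT."""
    events = []
    for batch in batches + [[]]:
        events.append(f"SELECT -> {len(batch)} rows")
        if batch:
            events.append("COMMIT")
    return events

batches = [[0] * 1000, [0] * 1000, [0] * 999]   # last batch is short
print(archive_without_commit_each(batches, 1000))
print(archive_commit_each(batches))
```

In the first trace the final COMMIT comes after the empty SELECT; in the second it comes before it, which is exactly the scenario the paragraph above describes.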
WARNING: Using a default options file (F) DSN option that defines a socket for --source causes pt-archiver to connect to --dest using that socket unless another socket for --dest is specified. This means that pt-archiver may incorrectly connect to --source when it is meant to connect to --dest. For example:
The default ascending-index optimization causes pt-archiver to optimize repeated SELECT queries so they seek into the index where the previous query ended, then scan along it, rather than scanning from the beginning of the table every time. This is enabled by default because it is generally a good strategy for repeated accesses.
Adds an extra WHERE clause to prevent pt-archiver from removing the newest row when ascending a single-column AUTO_INCREMENT key. This guards against re-using AUTO_INCREMENT values if the server restarts, and is enabled by default.
The extra WHERE clause contains the maximum value of the auto-increment column as of the beginning of the archive or purge job. If new rows are inserted while pt-archiver is running, it will not see them.
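The shape of that safeguard can be sketched as follows. This is a hypothetical helper, not pt-archiver's actual SQL builder; it only shows the kind of clause that pins the job to the maximum AUTO_INCREMENT value captured at start, so the newest row is never deleted and its value never re-used.

```python
# Hypothetical sketch, not pt-archiver's real code: the safeguard clause
# uses the max AUTO_INCREMENT value captured once, at job start.

def safe_auto_increment_clause(col, max_at_start):
    # Rows at or above the starting maximum are left alone, so the
    # newest AUTO_INCREMENT value survives a server restart.
    return f"{col} < {max_at_start}"

print(safe_auto_increment_clause("id", 100000))  # id < 100000
```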
The presence of the file specified by --sentinel will cause pt-archiver to stop archiving and exit. The default is /tmp/pt-archiver-sentinel. You might find this handy to stop cron jobs gracefully if necessary. See also --stop.
WARNING: Using a default options file (F) DSN option that defines a socket for --source causes pt-archiver to connect to --dest using that socket unless another socket for --dest is specified. This means that pt-archiver may incorrectly connect to --source when it is meant to connect to --dest. For example:
Causes pt-archiver to create the sentinel file specified by --sentinel and exit. This should have the effect of stopping all running instances which are watching the same sentinel file. See also --unstop.
Specifies the size, in number of rows, of each transaction. Zero disables transactions altogether. After pt-archiver processes this many rows, it commits both the --source and the --dest if given, and flushes the file given by --file.
This parameter is critical to performance. If you are archiving from a live server, which for example is doing heavy OLTP work, you need to choose a good balance between transaction size and commit overhead. Larger transactions create the possibility of more lock contention and deadlocks, but smaller transactions cause more frequent commit overhead, which can be significant. To give an idea, on a small test set I worked with while writing pt-archiver, a value of 500 caused archiving to take about 2 seconds per 1000 rows on an otherwise quiet MySQL instance on my desktop machine, archiving to disk and to another table. Disabling transactions with a value of zero, which turns on autocommit, dropped performance to 38 seconds per thousand rows.
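Putting those two figures side by side (using only the numbers quoted above) makes the cost of autocommit concrete:

```python
# Throughput comparison using only the figures quoted in the text above.
secs_txn_500 = 2      # seconds per 1000 rows with --txn-size 500
secs_autocommit = 38  # seconds per 1000 rows with --txn-size 0 (autocommit)

rows_per_sec_txn_500 = 1000 / secs_txn_500
rows_per_sec_autocommit = 1000 / secs_autocommit
slowdown = secs_autocommit / secs_txn_500

print(rows_per_sec_txn_500)            # 500.0 rows/s
print(round(rows_per_sec_autocommit))  # ~26 rows/s
print(slowdown)                        # 19.0x slower with autocommit
```

That is roughly a 19x penalty on this particular (small, quiet) test setup; real workloads will vary.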
Causes pt-archiver to print a message if it exits for any reason other than running out of rows to archive. This can be useful if you have a cron job with --run-time specified, for example, and you want to be sure pt-archiver is finishing before running out of time.
This method is called just before pt-archiver begins iterating through rows and archiving them, but after it does all other setup work (examining table structures, designing SQL queries, and so on). This is the only time pt-archiver tells the plugin the column names for the rows it will pass to the plugin while archiving.
The cols argument is the column names the user requested to be archived, either by default or by the --columns option. The allcols argument is the list of column names for every row pt-archiver will fetch from the source table. It may fetch more columns than the user requested, because it needs some columns for its own use. When subsequent plugin functions receive a row, it is the full row containing all the extra columns, if any, added to the end.
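The cols/allcols contract can be sketched in a language-neutral way (pt-archiver plugins are actually written in Perl, and the archiving rule below is purely hypothetical): the user-requested columns come first in each row, and any extra internal columns follow at the end.

```python
# Language-neutral sketch of the cols/allcols contract described above.
# pt-archiver plugins are really Perl; this only illustrates the row layout.

class ArchiverPlugin:
    def before_begin(self, cols, allcols):
        # cols: columns the user asked to archive.
        # allcols: every column fetched; extras appear at the end.
        self.cols = cols
        self.extra = allcols[len(cols):]

    def is_archivable(self, row):
        # Rows match allcols, so the user-requested values come first.
        user_part = row[:len(self.cols)]
        return all(v is not None for v in user_part)  # hypothetical rule

p = ArchiverPlugin()
p.before_begin(["name", "email"], ["name", "email", "id"])
print(p.extra)                               # ['id']
print(p.is_archivable(["a", "a@x.com", 9]))  # True
```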
This method is called after pt-archiver exits the archiving loop, commits all database handles, closes --file, and prints the final statistics, but before pt-archiver runs ANALYZE or OPTIMIZE (see --analyze and --optimize).
Now that your auto-archiving logic is created, you need to actually perform the archiving. How you do this is really up to you, as there are many ways to create an archive. My preference is to create a folder item as a sibling of the home item and use permissions to ensure that only authenticated users with a particular role can view it. In your code, you would use a SecurityDisabler when moving the item, so access rights shouldn't be an issue for the "archiver" logic.
These shared characteristics, defined in the _Archive class in qiime2/core/archive/archiver.py, must be consistent across all formats over time, as they allow archive versions to be checked, and archives with different formats to be dispatched to the appropriate version-specific tools.
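The dispatch pattern this enables can be sketched minimally. Everything here is illustrative: the registry contents and names are hypothetical, not QIIME 2's actual format classes (the real logic lives in qiime2/core/archive/archiver.py).

```python
# Illustrative sketch of version-based dispatch; names and version
# numbers are hypothetical, not QIIME 2's actual registry.

FORMAT_REGISTRY = {"4": "ArchiveFormatV4", "5": "ArchiveFormatV5"}

def dispatch(archive_version):
    # Map a checked archive version to its version-specific tooling.
    try:
        return FORMAT_REGISTRY[archive_version]
    except KeyError:
        raise ValueError(f"unrecognized archive version: {archive_version}")

print(dispatch("5"))  # ArchiveFormatV5
```

Because the version-checking fields are stable across formats, even a very old or very new archive can at least be identified and routed (or rejected with a clear error) before any format-specific parsing begins.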