On Monday, 27 September 2021 02:39:19 BST Adam Carter wrote:

> On Sun, Sep 26, 2021 at 8:57 PM Peter Humphrey <peter@××××××××××××.uk>
> wrote:
> > Hello list,
> >
> > I have an external USB-3 drive with various system backups. There are
> > 350 .tar files (not .tar.gz etc.), amounting to 2.5TB. I was sure I
> > wouldn't need to compress them, so I didn't, but now I think I'm going
> > to have to. Is there a reasonably efficient way to do this?
>
> find <mountpoint> -name \*.tar -exec zstd -TN {} \;
>
> Where N is the number of cores you want to allocate. Use zstd -T0 (or
> just zstdmt) if you want to use all the available cores. I use zstd for
> everything now, as it's as good as or better than all the others in the
> general case.
>
> Parallel means it uses more than one core, so on a modern machine it is
> much faster.
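
For the record, the full command would be something like the line below,
with /mnt/backup standing in for wherever the drive is actually mounted,
and --rm telling zstd to delete each .tar once it has been compressed
(by default zstd keeps its input):

    find /mnt/backup -name '*.tar' -exec zstd -T0 --rm {} \;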

Thanks to all who've helped. I can't help feeling, though, that the main
bottleneck has been missed: I have to read and write on a USB-3 drive.
It's just taken 23 minutes to copy the current system backup from USB-3
to a SATA SSD: 108GB in 8 .tar files.
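
That works out to about 78 MB/s (108,000 MB in 1,380 seconds), if my
arithmetic is right, which is well short of what the SATA SSD can
sustain; the USB drive really does look like the limiting factor.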

Perhaps I have things out of proportion.

--
Regards,
Peter.