Daniel Troeder wrote:
> On Tuesday, 16.12.2008 at 03:15 -0600, Dale wrote:
>
>> Daniel Troeder wrote:
>>
>>> On Tuesday, 16.12.2008 at 01:59 -0600, Dale wrote:
>>>
>>>
>>>> I'm not too worried about this since I will be moving this over to the
>>>> other drive anyway. I would like to know what command I should use to
>>>> tar up everything, transfer it over, and untar it all in one line if
>>>> possible? I plan to do this while booted from a Gentoo CD. I just want
>>>> to try this so that it will be compressed, transferred, and untarred
>>>> on the way. Does this make sense? I have used cp -av in the past.
>>>>
>>>> Thanks.
>>>>
>>>> Dale
>>>>
>>>> :-) :-)
>>>>
>>>>
>>> With "transfer" do you mean over a network, or to another local drive?
>>>
>>> You can of course use something like
>>> # tar czpf - /source | ssh remote "tar xzpf - -C /dir"
>>> but there are faster and easier options:
>>>
>>> "cp -a" uses few resources locally and maintains POSIX permissions,
>>> while "rsync -aASH --numeric-ids" is perfect for a remote copy.
>>>
>>> You can also use rsync locally. With the "-A" switch it will also
>>> transfer POSIX ACLs, if that is of any concern. It is also useful if a
>>> transfer breaks at some point, because it can more or less resume it :)
>>>
>>> Omitting the "-v" switch can speed things up significantly, depending
>>> on your terminal. In any case it helps to see only the errors, rather
>>> than letting them scroll away among everything that went well.
>>>
>>> Bye,
>>> Daniel
>>>
>>>
>>>
>> The drive is in the same machine, so there is no network involved.
>> That should make it a little simpler. Would this work?
>>
>> tar czpf - /source | tar xzpf - -C /dir
>>
>> Basically, I want as clean a file system as I can get to start off
>> with, at least. The goal is very little fragmentation.
>>
>> Thanks
>>
>> Dale
>>
> While this will work perfectly well, this command is a waste of
> resources. The compression ("-z") makes no sense locally, and there is
> no need to tar the data (which basically just concatenates the files).
> You will get the exact same result with
> # cp -a /source /dest
>
> If the FS has been formatted beforehand, no fragmentation should occur
> in any scenario, as long as no parallelism is used while copying,
> because each file will be created and filled with data one after
> another.
>
> Bye,
> Daniel
>
>

Cool. Then I can just use cp -a and let her rip. I plan to redo my
partitions, so I will have to reformat them too. I guess this will be
as good as it gets. I'll also report the results of fragck when I get
this done. Just curious myself. I think I will skip shake this time
tho. ;-)

Thanks much.

Dale

:-) :-)
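
[A quick sketch of the two local-copy approaches discussed in the thread,
showing that a plain "cp -a" and a tar pipe produce the same tree. All
paths below are throwaway temp directories created for the demonstration,
not anything from the original machines; adjust for a real migration.]

```shell
#!/bin/sh
# Compare the two local-copy methods from the thread on a tiny test tree.
set -e

src=$(mktemp -d)
dst_cp=$(mktemp -d)
dst_tar=$(mktemp -d)

# Build a small source tree with a non-default permission to preserve.
mkdir -p "$src/etc" "$src/home/dale"
echo "hello" > "$src/home/dale/file.txt"
chmod 600 "$src/home/dale/file.txt"

# Option 1: plain cp -a -- cheap locally, keeps POSIX permissions.
cp -a "$src/." "$dst_cp/"

# Option 2: tar pipe -- same result locally; -z is omitted because
# compression buys nothing between two local drives.
(cd "$src" && tar cpf - .) | (cd "$dst_tar" && tar xpf -)

# Both destinations should contain identical files and contents.
diff -r "$dst_cp" "$dst_tar" && echo "trees match"

rm -rf "$src" "$dst_cp" "$dst_tar"
```

Running this prints "trees match" when both methods produce the same
files; on a freshly formatted destination either one should also lay the
files down sequentially, as Daniel notes above.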