On 06/09/2014 13:54, Alan McKinnon wrote:
> On 06/09/2014 14:48, Dale wrote:
>> James wrote:
>>> Joseph <syscon780 <at> gmail.com> writes:
>>>
>>>> Thank you for the information.
>>>> I'll continue on Monday and let you know. If it will not boot with sector
>>>> starting at 2048, I will
>>>> re-partition /boot sda1 to start at 63.
>>>
>>> Take some time to research and reflect on your needs (desires?)
>>> about which file system to use. (ext 2,4) is always popular and safe.
>>> Some are very happy with BTRFS and there are many other interesting
>>> choices (ZFS, XFS, etc etc)......
>>>
>>> There is no best solution; but the EXT family offers tried and proven
>>> options. YMMV.
>>>
>>>
>>> hth,
>>> James
>>>
>>
>> I'm not sure if it is ZFS or XFS but I seem to recall one of those does
>> not like sudden shutdowns, such as a power failure. Maybe that has
>> changed since I last tried whichever one it is that has that issue. If
>> you have a UPS tho, shouldn't be so much of a problem, unless your power
>> supply goes out.
>
> XFS.
>
> It was designed by SGI for their video rendering workstations back in the
> day and used very aggressive caching to get enormous throughput. It was
> also brilliant at dealing with directories containing thousands of small
> files - not unusual when dealing with video editing.
>
> However, it was also designed for environments where the power is
> guaranteed to never go off (which explains why they decided to go with
> such aggressive caching). If you use it in environments where powerouts
> are not guaranteed to not happen, well......
|
Well what? It's no less reliable than other filesystems that employ
delayed allocation (any modern filesystem worthy of note). In recent
years, I have used both XFS and ext4 extensively in production and have
found that the former trumps the latter in reliability.
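A note on what that "aggressive caching" amounts to in practice: with delayed allocation (on XFS and ext4 alike), data accepted by write() can sit in the page cache for some time before reaching disk, so an application that cannot afford to lose a write must ask for durability explicitly. A minimal sketch in Python (the path and helper name are mine, purely for illustration):

```python
import os

def durable_write(path, data):
    """Write data and force it to stable storage before returning.

    With delayed allocation, write() + close() alone may leave the
    data in the page cache; a crash before writeback loses it.
    """
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # flush Python's userspace buffer
        os.fsync(f.fileno())  # commit page-cache contents to disk

durable_write("/tmp/example.txt", b"must not be lost\n")
```

Applications that skip the fsync are exposed on any delayed-allocation filesystem after a power cut, which is why singling out XFS here is unfair.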
|
While I like them both, I predicate this assertion mainly on some of the
silly bugs that I have seen crop up in the ext4 codebase and the
unedifying commentary that has occasionally ensued. From reading the XFS
list and my own experience, I have formed the opinion that the
maintainers are more stringent in matters of QA and regression testing
and more mature in matters of public debate. I also believe that
regressions in stability are virtually unheard of, whereas regressions
in performance are identified quickly and taken very seriously [1].
|
The worst thing I could say about XFS is that it was comparatively slow
until the introduction of delayed logging (an idea taken from ext3)
[2][3]. Nowadays, it is on a par with ext4 and, in some cases, scales
better. It is also one of the few filesystems besides ZFS that can
dynamically allocate inodes.
|
>
> ZFS is the most resilient filesystem I've ever used, you can throw the
> bucket and kitchen sink at it and it really doesn't give a shit (it just
> deals with it :-) )
|
While its design is intrinsically resilient - particularly its
capability to protect against bitrot - I don't believe that ZFS on Linux
is more reliable in practice than the filesystems included in the Linux
kernel. Quite the contrary. Look at the issues labelled as "Bug" filed
for both the SPL and ZFS projects. There are a considerable number of
serious bugs that - to my mind - disqualify it for anything but hobbyist
use, and I take issue with the increasing tendency among the community
to casually recommend it.
|
Here's my anecdotal experience of using it. My hosting company recently
installed a dedicated backup server that was using ZFS on Linux. Its
primary function was as an NFS server. It was very slow and repeatedly
deadlocked under heavy load. On each occasion, the only remedy was for
an engineer to perform a hard reboot. When I complained about it, I was
told that they normally use FreeBSD but had opted for Linux because the
former was not compatible with a fibre channel adapter that they needed
to make use of. I then requested that the filesystem be changed to ext4,
after which the server was rock solid.
|
Another experience was helping someone resolve an issue where MySQL
would not start. It transpired that he was using ZFS, which does not
support native AIO. I supplied him with a workaround but sternly
advised him to switch to a de facto Linux filesystem if he valued his
data and expected anything like decent performance from InnoDB.
Speaking of which, XFS is a popular filesystem among knowledgeable
MySQL hackers (such as Mark Callaghan) and DBAs alike.
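For anyone who hits the same symptom: InnoDB's native AIO (the default since MySQL 5.5) relies on the Linux io_submit interface, which ZFS on Linux does not support, so mysqld fails at startup. The standard workaround - I won't swear this is verbatim what I sent, but it is the substance of it - is to disable native AIO in my.cnf:

```ini
# my.cnf -- InnoDB's native AIO uses Linux io_submit(), which ZFS on
# Linux does not support; fall back to InnoDB's simulated AIO so that
# mysqld can start. Expect an I/O performance penalty.
[mysqld]
innodb_use_native_aio = 0
```

It gets the server running, but the performance caveat above still stands.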
|
For the time being, I think that there are other operating systems whose
ZFS implementation is more robust.
|
--Kerin
|
[1]
http://www.percona.com/blog/2012/03/15/ext4-vs-xfs-on-ssd/#comment-903938
[2]
https://www.kernel.org/doc/Documentation/filesystems/xfs-delayed-logging-design.txt
[3] https://www.youtube.com/watch?v=FegjLbCnoBw