From: lee <lee@××××××××.de>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] installing Gentoo in a xen VM
Date: Fri, 09 Jan 2015 22:46:11
Message-Id: 87d26n7ez8.fsf@heimdali.yagibdah.de
In Reply to: Re: [gentoo-user] installing Gentoo in a xen VM by "J. Roeleveld"
1 "J. Roeleveld" <joost@××××××××.org> writes:
2
3 >> with firewalling and routing in between. You can't keep the traffic
4 >> separate when it all goes over the same bridge, can you?
5 >
6 > Not if it goes over the same bridge. But as they are virtual, you can make as
7 > many as you need.
8
I made as few as I needed. What sense would it make to necessarily use
a physical port for a pppoe connection and then go to the lengths of
creating a bridge to somehow bring it over to the VM? I found it way
easier to just pass the physical port to the VM.
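
(For the record, passing the port through is one line in the domU
config, plus binding the NIC to pciback on the dom0. A minimal sketch;
the PCI address 0000:03:00.0 is only an example, find yours with
"lspci -D":

  # on the dom0, make the NIC assignable to guests:
  xl pci-assignable-add 0000:03:00.0

  # in the domU config:
  pci = [ '0000:03:00.0' ]
)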

>> > am i missing where the fight is ?
>>
>> setting up the bridges
>
> Really simple, there are plenty of guides around. Including how to configure it
> using netifrc (which is installed by default on Gentoo)

Creating a bridge isn't too difficult; getting it to work is.
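
(For reference, a minimal netifrc bridge in /etc/conf.d/net looks
something like this; "br_lan" and the address are only examples:

  # enslave eth0 to the bridge; the address lives on the bridge
  bridge_br_lan="eth0"
  config_eth0="null"
  config_br_lan="192.168.0.1/24"
  rc_net_br_lan_need="net.eth0"

plus an init script and adding it to the default runlevel:

  ln -s /etc/init.d/net.lo /etc/init.d/net.br_lan
  rc-update add net.br_lan default
)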

>> no documentation about in which order a VM will see the devices
>
> Same goes for physical devices.

That doesn't make it any better.

> Use udev-rules to name the interfaces
> logically based on the MAC-address:
> ***
> # cat 70-persistent-net.rules
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:16:01:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="lan"
>
> SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:16:01:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="dmz"
> ***

Who understands udev?
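
(Each of those rules just matches an interface by MAC address and
renames it. The attributes to match against can be read straight out
of sysfs, e.g.:

  udevadm info -a -p /sys/class/net/eth0 | grep -E 'address|dev_id|type'

The 00:16:3e prefix is the Xen OUI, i.e. these are the MACs assigned
to the vifs in the domU config, so the names stay stable no matter in
which order the guest sees its devices.)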

>> a handful of bridges and VMs
>
> Only 1 bridge per network segment is needed.

Yes, and that makes for a handful.

>> a firewall/router VM with its passed-through port for pppoe and three
>> bridges
>
> Not difficult, had that for years till I moved the router to a separate machine.
> (Needed something small to fit the room where it lives)

It's extremely confusing, difficult, and complicated.
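
(For comparison, the bare domU config for such a router is short; it's
everything around it that confuses. Roughly, with names, sizes and
addresses as placeholders:

  name   = "router"
  memory = 512
  # the physical NIC for pppoe, passed through:
  pci    = [ '0000:03:00.0' ]
  # one vif per internal segment, each on its own bridge:
  vif    = [ 'mac=00:16:3e:16:01:01,bridge=br_lan',
             'mac=00:16:3e:16:01:02,bridge=br_dmz',
             'mac=00:16:3e:16:01:03,bridge=br_mgmt' ]
)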

>> the xen documentation being an awful mess
>
> A lot of it is outdated. A big cleanup would be useful there.

Yes, it tells you lots of things that turn out not to work, and it
confuses you even more, until you no longer know what to do because
nothing works.

>> an awful lot of complexity required
>
> There is a logic to it.

If there is, it continues to escape me.

>> Just why can't you? ZFS apparently can do such things --- yet what's
>> the difference in performance of ZFS compared to hardware raid?
>> Software raid with MD makes for quite a slowdown.
>
> What do you consider "hardware raid" in this comparison?

A decent hardware raid controller, like an HP Smart Array P800 or an
IBM ServeRAID 8k --- of course, they are outdated, yet they work well.

> Most so-called hardware raid cards depend heavily on the host CPU to do all
> the calculations, and the code used is extremely inefficient.
> The Linux built-in software raid layer ALWAYS outperforms those cards.

You mean the fake controllers? I wouldn't use those.

> The true hardware raid cards have their own calculation chips to do the heavy
> lifting. Those actually stand a chance to outperform the linux software raid
> layer. It depends on the spec of the host CPU and what you use the system for.

With all CPUs, relatively slow and relatively fast ones alike, I notice
an awful sluggishness with software raid which hardware raid simply
doesn't have. A benchmark might not capture or even show this
sluggishness, yet it is there.
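
(A crude way to at least put numbers on the raw part of it; the path is
an example, and oflag=direct bypasses the page cache so the array
itself gets measured:

  dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct conv=fsync

Resync throughput is throttled separately via
/proc/sys/dev/raid/speed_limit_min and speed_limit_max, which trade
rebuild speed against foreground I/O; a rebuilding md array feels
sluggish either way.)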

> ZFS and BTRFS run fully on the host CPU, but have some additional logic built-
> in which allows them to generally outperform hardware raid.

I can't tell for sure yet; so far, it seems that they do better than md
raid. Btrfs needs some more development ... ZFS with SSD cache is
probably hard to beat.
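
(A sketch of what that setup looks like; the device names are examples,
and in practice you'd use /dev/disk/by-id paths:

  # three-disk raidz pool
  zpool create tank raidz sda sdb sdc
  # an SSD as L2ARC read cache
  zpool add tank cache sdd
  # a second SSD as separate log device for synchronous writes
  zpool add tank log sde
)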

> I could do with a hardware controller which could off-load all the
> heavy lifting for the RAID-Z calculations from the CPU. And could the
> work for deduplication also be done that way?

Yes, I've already been wondering why they don't make hardware ZFS
controllers. There doesn't seem to be much point in making "classical"
hardware raid controllers when ZFS has so many advantages over them.

>> > the only issue with bridges is that if eth0 is in the bridge, if you try
>> > to use eth0 directly with for example an IP address things go a bit
>> > weird, so you have to use br0 instead
>> > so don't do that.
>>
>> Yes, it's very confusing.
>
> It's just using a different name. Once it's configured, the network layer of the
> OS handles it for you.
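
(In concrete terms, the quoted advice comes down to this; names and the
address are examples:

  ip link add br0 type bridge
  ip link set eth0 master br0
  ip link set eth0 up
  ip link set br0 up
  # the address goes on the bridge, not on the enslaved port:
  ip addr add 192.168.0.1/24 dev br0
)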

I understand things by removing abstractions. When you remove all the
abstractions from a bridge, there isn't anything left. A network card
you can take into your hands and look at; you can plug and unplug the
wire(s); these cards usually have fancy lights to show you whether
there's a connection, and the lights even blink when there's network
traffic. A bridge doesn't exist, has nothing, and shows you nothing.
It's not understandable because it isn't anything, no matter what it's
called or how it's handled.


--
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us. Finally, this fear has become reasonable.