Gentoo Archives: gentoo-user

From: "J. Roeleveld" <joost@××××××××.org>
To: gentoo-user@l.g.o
Subject: Re: [gentoo-user] installing Gentoo in a xen VM
Date: Tue, 30 Dec 2014 18:05:15
Message-Id: 1809172.js1BIefuKq@andromeda
In Reply to: Re: [gentoo-user] installing Gentoo in a xen VM by lee
On Monday, December 29, 2014 02:55:49 PM lee wrote:
> thegeezer <thegeezer@×××××××××.net> writes:
> > On 08/12/14 22:17, lee wrote:
> >> "J. Roeleveld" <joost@××××××××.org> writes:
> >>> create 1 bridge per physical network port
> >>> add the physical ports to the respective bridges
> >>
> >> That tends to make the ports disappear, i. e. become unusable, because
> >> the bridge swallows them.
> >
> > and if you pass the device then it becomes unusable to the host
>
> The VM uses it instead, which is what I wanted :)
>
> >>> pass virtual NICs to the VMs which are part of the bridges.
> >>
> >> Doesn't that create more CPU load than passing the port? And at some
> >> point, you may saturate the bandwidth of the port.
> >
> > some forward planning is needed. obviously if you have two file servers
> > using the same bridge and that bridge only has one physical port and the
> > SAN is not part of the host then you might run into trouble. however,
> > you can use bonding in various ways to group connections -- and in this
> > way you can have a virtual nic that actually has 2x 1GB bonded devices,
> > or if you choose to upgrade at a later stage you can start putting in
> > 10GbE cards and the virtual machine sees nothing different, just access
> > is faster.
> > on the flip side you can have four or more relatively low bandwidth
> > requirement virtual machines running on the same host through the same
> > single physical port
> > think of the bridge as an "internal, virtual, network switch"... you
> > wouldn't load up a switch with 47 high bandwidth requirement servers and
> > then create a single uplink to the SAN / other network without seriously
> > considering bonding or partitioning in some way to reduce the 47into1
> > bottleneck, and the same is true of the virtual-switch (bridge)
> >
> > the difference is that you need to physically be there to repatch
> > connections or to add a new switch when you run out of ports. these
> > limitations are largely overcome.
>
> That all makes sense; my situation is different, though. I plugged a
> dual port card into the server and wanted to use one of the ports for
> another internet connection and the other one for a separate network,
> with firewalling and routing in between. You can't keep the traffic
> separate when it all goes over the same bridge, can you?

Not if it goes over the same bridge. But as they are virtual, you can make as
many as you need.
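
For example, to keep the two networks on the dual-port card apart, give each
physical port its own bridge. A minimal sketch with iproute2 (assuming the two
ports show up as eth1 and eth2; the bridge names are only examples):
***
# one bridge per physical port, so the two networks never share a path
ip link add name br_wan2 type bridge
ip link add name br_lan2 type bridge

# enslave each physical port to its own bridge
ip link set eth1 master br_wan2
ip link set eth2 master br_lan2

# bring everything up
ip link set br_wan2 up
ip link set eth1 up
ip link set br_lan2 up
ip link set eth2 up
***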

> And the file server could get it's own physical port --- not because
> it's really needed but because it's possible. I could plug in another
> dual-port card for that and experiment with bonding.

How many slots do you have for all those cards?
And don't forget there is a bandwidth limit on the PCI-bus.
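
For a rough idea of the numbers: a legacy 32-bit/33 MHz PCI bus tops out at
about 133 MB/s shared by every card on the bus, while a single gigabit port can
move roughly 125 MB/s on its own. Two or three busy GbE ports on plain PCI will
starve each other; PCIe gives each slot its own lanes (roughly 250 MB/s per lane
for PCIe 1.x, double that for 2.0), so this is mostly a concern on older boards.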

> However, I've changed plans and intend to use a workstation as a hybrid
> system to reduce power consumption and noise, and such a setup has other
> advantages, too. I'll put Gentoo on it and probably use containers for
> the VMs. Then I can still use the server for experiments and/or run
> distcc on it when I want to.

Most people use a low-power machine as a server and use the fast machine as a
workstation to keep power consumption and noise down.

> >> The only issue I have with passing the port is that the kernel module
> >> must not be loaded from the initrd image. So I don't see how fighting
> >> with the bridges would make things easier.
> >
> > vif=[ 'mac=de:ad:be:ef:00:01,bridge=br0' ]
> >
> > am i missing where the fight is ?
>
> setting up the bridges

Really simple; there are plenty of guides around, including how to configure it
with netifrc (which is installed by default on Gentoo).
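
As a rough idea of what that looks like in /etc/conf.d/net (interface name and
address are only examples; see the Gentoo wiki for the full story):
***
# /etc/conf.d/net  -- sketch only, needs net-misc/bridge-utils installed
config_eth0="null"            # the physical port itself carries no IP
bridge_br0="eth0"             # br0 contains eth0
config_br0="192.168.1.2/24"   # the dom0 address lives on the bridge
rc_net_br0_need="net.eth0"

# then create and enable the service:
# ln -s net.lo /etc/init.d/net.br0
# rc-update add net.br0 default
***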

> no documentation about in which order a VM will see the devices

Same goes for physical devices. Use udev rules to name the interfaces
logically based on the MAC address:
***
# cat 70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:16:01:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="lan"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:16:01:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="dmz"
***
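
If you want to check a rule like that without rebooting, udev can be told to
re-read the rules and show what it would do:
***
udevadm control --reload-rules      # re-read the rules files
udevadm test /sys/class/net/eth0    # dry-run, prints the NAME= it would apply
***
An interface that is already up keeps its old name until it is re-created
(reboot the domU or detach and re-attach the vif).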

> a handful of bridges and VMs

Only 1 bridge per network segment is needed.

> a firewall/router VM with it's passed-through port for pppoe and three
> bridges

Not difficult, had that for years till I moved the router to a separate machine.
(Needed something small to fit the room where it lives.)
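
For reference, the network part of such a domU config is only a few lines. A
sketch (the MACs, bridge names and PCI address are made up; the passed-through
port has to be bound to pciback on the dom0 first):
***
# firewall/router domU -- relevant lines only
vif = [ 'mac=de:ad:be:ef:00:01,bridge=br_lan',
        'mac=de:ad:be:ef:00:02,bridge=br_dmz',
        'mac=de:ad:be:ef:00:03,bridge=br_guest' ]
pci = [ '01:00.0' ]   # the physical port the domU uses for pppoe
***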

> the xen documentation being an awful mess

A lot of it is outdated. A big cleanup would be useful there.

> an awful lot of complexity required

There is a logic to it. If you use the basic Xen install, you need to do every
layer yourself.
You could also opt for a more ready-made product, like XCP, VMware ESX, ...
Those will do more for you, but also hide the interesting details to the point
of being annoying.
A bit like using Ubuntu or Red Hat instead of Gentoo.

> Guess what, I still haven't found out how to actually back up and
> restore a VM residing in an LVM volume. I find it annoying that LVM
> doesn't have any way of actually copying a LV. It could be so easy if
> you could just do something like 'lvcopy lv_source
> other_host:/backups/lv_source_backup' and 'lvrestore
> other_host:/backups/lv_source_backup vg_target/lv_source' --- or store
> the copy of the LV in a local file somewhere.

LVs are block devices. How do you make a backup of an entire hard drive?
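
The same tools you would use on a whole disk work on an LV: snapshot it and
stream it with dd. A sketch, with made-up VG/LV names and sizes:
***
# back up an LV to another host
lvcreate -s -n vm1_snap -L 5G vg0/vm1      # snapshot, so the copy is consistent
dd if=/dev/vg0/vm1_snap bs=4M | gzip | ssh other_host 'cat > /backups/vm1.img.gz'
lvremove vg0/vm1_snap

# restore: create an LV of at least the original size and write the image back
# lvcreate -n vm1 -L 20G vg_target
# ssh other_host 'cat /backups/vm1.img.gz' | gunzip | dd of=/dev/vg_target/vm1 bs=4M
***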

> Just why can't you? ZFS apparently can do such things --- yet what's
> the difference in performance of ZFS compared to hardware raid?
> Software raid with MD makes for quite a slowdown.

What do you consider "hardware raid" in this comparison?
Most so-called hardware raid cards depend heavily on the host CPU to do all
the calculations, and the code used is extremely inefficient.
The Linux built-in software raid layer ALWAYS outperforms those cards.
135
136 The true hardware raid cards have their own calculation chips to do the heavy
137 lifting. Those actually stand a chance to outperform the linux software raid
138 layer. It depends on the spec of the host CPU and what you use the system for.
139
140 ZFS and BTRFS runs fully on the host CPU, but has some additional logic built-
141 in which allows it to generally outperform hardware raid.
142
143 I could do with a hardware controller which can be used to off-load all the
144 heavy lifting for the RAIDZ-calculations away from the CPU. And if the stuff
145 for the deduplication could also be done that way?

> > the only issue with bridges is that if eth0 is in the bridge, if you try
> > to use eth0 directly with for example an IP address things go a bit
> > weird, so you have to use br0 instead
> > so don't do that.
>
> Yes, it's very confusing.

It's just using a different name. Once it's configured, the network layer of the
OS handles it for you.

--
Joost

Replies

Subject Author
Re: [gentoo-user] installing Gentoo in a xen VM Rich Freeman <rich0@g.o>
Re: [gentoo-user] installing Gentoo in a xen VM lee <lee@××××××××.de>