On Wednesday, 17 June 2020 07:32:10 BST J. Roeleveld wrote:
> On Wednesday, June 17, 2020 7:42:30 AM CEST n952162 wrote:
> > On 06/17/20 06:48, J. Roeleveld wrote:
> > > On Tuesday, June 16, 2020 11:08:23 PM CEST n952162 wrote:
> > >> On 06/16/20 22:36, J. Roeleveld wrote:
> <snipped>
>
> > > I have not come across MS Hyper-V outside of small businesses that
> > > need some local VMs. These companies tend to put all their
> > > infrastructure with one of the big cloud-VM providers (like AWS,
> > > Azure, Google, ...).
> > >
> > > --
> > > Joost
> >
> > Thank you for this excellent survey/summary. It tells me that vbox is
> > good for my current usage, but I should start exposing myself to Xen as
> > a possible migration path.
>
> I would actually suggest reading up on both Xen and KVM and trying both
> on spare machines. See which best fits your requirements, and also see
> whether the existing management tools actually do things in a way you can
> work with.
>
> My systems have evolved over the past 25-odd years, and I started using
> Xen to reduce the number of physical systems I had running. At the time,
> VMware was expensive, KVM didn't exist yet, and for a few years after it
> appeared it was missing some important features (I'm not sure whether
> these exist yet; I have not found anything about them for KVM):
> - limit the memory footprint of the host during boot
> - dedicate CPU core(s) to the host
>
> Limiting the memory size is important, because several parts of the
> kernel (and userspace) base their memory settings on this amount. This is
> really noticeable when the host thinks it has 384GB available while 370GB
> is passed to VMs.
>
> Dedicating CPU cores exclusively to the host means the host will always
> have CPU resources available. This is necessary because all the
> context switching is handled by the host, and if this stalls, the whole
> environment is impacted.
>
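(For reference, from my own reading rather than from Joost's setup: with Xen, both of those can be requested on the hypervisor command line at boot. The option names below are real Xen parameters; the sizes and the GRUB variable name are placeholders that vary by distribution:)

```shell
# Xen hypervisor boot options, e.g. set via GRUB (values are examples only):
#   dom0_mem=4096M,max:4096M  - cap and pre-allocate the host (dom0) memory
#   dom0_max_vcpus=2          - limit dom0 to a fixed number of vCPUs
#   dom0_vcpus_pin            - pin those vCPUs to physical cores
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M dom0_max_vcpus=2 dom0_vcpus_pin"
```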
> For a lab system, I was also missing the ability to save the full state
> of a VM as a snapshot. All the howtos and guides I can find online only
> talk about making a snapshot of the disks, not of the memory as well.
> Especially if you are used to VirtualBox, you will notice this issue.
> When only the disk is snapshotted, restoring the snapshot puts the VM in
> the state it would be in if you had literally pulled the plug.
>
> For KVM, I have found a few hints that this was planned, but I have not
> found anything more about it. Virt-manager does not (last time I looked)
> support Xen's ability to store the memory when creating snapshots either,
> which is why I don't use it even for my lab/testing server.
|
As far as I know, QEMU with KVM can take snapshots of the current state of
RAM, disk(s) and CPU - it can take snapshots of images while they are
online.

https://wiki.qemu.org/Features/Snapshots2
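For completeness, a rough sketch of how such a full checkpoint (RAM + device state + disk) can be taken: `savevm` is a real QEMU monitor command and the `virsh` subcommands exist in libvirt, but the VM and snapshot names here are made up:

```shell
# From the QEMU monitor of a running guest (internal snapshot into qcow2):
#   (qemu) savevm before-upgrade
#   (qemu) info snapshots
#   (qemu) loadvm before-upgrade

# Or, for a libvirt-managed guest, while it is running:
virsh snapshot-create-as myvm before-upgrade    # full system checkpoint
virsh snapshot-list myvm
virsh snapshot-revert myvm before-upgrade
```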
|
However, I've only taken snapshots of qcow2 images after shutting down the
VM. These work as advertised, and they are quite handy as temporary backups
before major updates/upgrades.
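Those offline snapshots can be managed with `qemu-img`; the subcommands below are real, while the file and snapshot names are placeholders:

```shell
qemu-img snapshot -c pre-upgrade gentoo.qcow2   # create internal snapshot
qemu-img snapshot -l gentoo.qcow2               # list snapshots
qemu-img snapshot -a pre-upgrade gentoo.qcow2   # apply (revert to) snapshot
qemu-img snapshot -d pre-upgrade gentoo.qcow2   # delete snapshot
```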
|

> As for tips/tricks (the below works for Xen, but should also work with
> KVM):
>
> The way I create a new Gentoo VM is simply to create a new block device
> (either LVM or ZFS), do all the initial steps in a chroot on the host,
> and when it comes to the first reboot, unmount the filesystems, hook the
> device up to a new VM and start that.
>
> Because of this, I can update the host as follows:
> - create new "partitions" for the host system
> - install the latest versions and migrate the config across
> - reboot into the new host
>
> If all goes fine, I can clean up the "old" partitions and prepare them
> for next time. If there are issues, I have a working "old" version I can
> quickly revert to.
>
> --
> Joost
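For what it's worth, my reading of that block-device workflow, sketched with LVM (the volume group, names and sizes are invented, and the step of writing the VM config is omitted):

```shell
lvcreate -L 20G -n gentoo-vm1 vg0        # new block device for the guest
mkfs.ext4 /dev/vg0/gentoo-vm1
mount /dev/vg0/gentoo-vm1 /mnt/gentoo
# ... unpack stage3, chroot, install kernel and bootloader as usual ...
umount /mnt/gentoo
# then attach /dev/vg0/gentoo-vm1 as the new VM's disk and boot it
```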
|
I've wanted to migrate a qemu qcow2 image file or two of different OSes,
all currently stored on an ext4 partition on my desktop, to a dedicated
partition on the disk. Would this be possible, and how? Would I need to
convert the qcow2 to a raw image?
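(In case it helps frame the question: I'd guess that writing the image directly onto the partition would involve a conversion like the one below - `qemu-img convert` and its flags are real, the device name is a placeholder - but I'd be glad to hear whether that is the right approach:)

```shell
# convert the qcow2 image to raw, writing it directly onto the partition
qemu-img convert -f qcow2 -O raw gentoo.qcow2 /dev/sdb3
# the VM definition would then use /dev/sdb3 as a raw disk instead of a file
```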