Category archives: virtual machine

kvm xml configuration examples

 

<domain type='kvm'>
<name>win7-32</name>
<memory>1048576</memory>
<currentMemory>1048576</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch='x86_64' machine='pc'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='localtime'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/windows-virtio/win7-32.img'/>
<target dev='hda' bus='ide'/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<mac address="00:16:3e:5d:aa:a9"/>
</interface>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'/>
</devices>
</domain>
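
With the XML above saved to a file, the guest can be registered and started with virsh (a minimal sketch; the file name win7-32.xml is an assumption):

virsh define win7-32.xml
virsh start win7-32
virsh vncdisplay win7-32   # shows which VNC display to connect to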


virtio_net question

The target topology is as follows:

On the host, tap0 and eth1 are bridged together via br0, and tap0 corresponds to the guest's eth0. br0 on the HOST is configured with 192.168.1.2/24 as the host address, and the guest's eth0 is configured with 192.168.1.5/24.

The guest is started with the following command:
qemu-system-aarch64 -machine virt -cpu cortex-a57 -nographic -smp 1 -m 4096 \
-global virtio-blk-device.scsi=off -device virtio-scsi-device,id=scsi \
-drive file=ubuntu-core-14.04.1-core-arm64.img,id=coreimg,cache=unsafe,if=none -device scsi-hd,drive=coreimg \
-kernel  vmlinuz-3.13.0-55-generic \
-initrd  initrd.img-3.13.0-55-generic  \
-netdev tap,id=mynet -device virtio-net-device,netdev=mynet \
--append "console=ttyAMA0 root=/dev/sda"

After boot, the eth0 NIC is visible inside the guest, using the virtio_net driver.

Pinging the physical host from the guest fails.

Capturing packets inside the guest shows that the ARP requests receive no reply.

But capturing on tap0 on the physical host shows that the host has received the ARP requests and replied to them.

So the ARP replies are present on tap0, yet the guest's eth0 never sees them. Is my configuration wrong, or is the start command wrong? tap0 has received the reply packets, so why does the guest's eth0 not get them, and where are they being dropped?

[Problem solved]
Normal network communication only works when vhost is enabled:
vhost=on
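
As a minimal sketch, only the -netdev option of the start command above changes (everything else stays the same):

-netdev tap,id=mynet,vhost=on -device virtio-net-device,netdev=mynet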


Centos kvm+ceph

  • Centos kvm+ceph
  • I. Installing KVM on CentOS 6.5
  • 1. disable selinux
  • 2. Confirm Intel virtualization support
  • 3. Install the required packages
  • 4. Set up bridged networking
  • 5. Run a KVM instance (this step only verifies that the installation succeeded)
  • 6. Connect to KVM
  • II. Installing Ceph on CentOS (the Firefly release)
  • Prepare the machines
  • Install the admin node
  • Install the other nodes
  • III. Using Ceph with KVM
  • Create an OSD pool (the container for block devices)
  • Grant an account read/write access to the pool
  • Create an image in the pool with qemu-img
  • Verify the image was created
  • Create a virtual machine with KVM

Centos kvm+ceph

On CentOS 7, Ceph cannot be installed from the public repos; the package dependencies are not satisfied.
On CentOS 6 it can be installed, but the stock kernel does not support rbd, so the kernel must be updated:
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install kernel-lt -y

I. Installing KVM on CentOS 6.5

1. disable selinux

vi /etc/selinux/config
reboot
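
In /etc/selinux/config, the relevant line is:

SELINUX=disabled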

2. Confirm Intel virtualization support

egrep '(vmx|svm)' --color=always /proc/cpuinfo
Empty output means virtualization is not supported.

3. Install the required packages

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
yum install virt-manager libvirt qemu-kvm openssh-askpass kvm python-virtinst

service libvirtd start
chkconfig libvirtd on

The emulator is usually qemu-kvm, but it may be something else.
/usr/libexec/qemu-kvm -M ? lists the supported machine types (in my case, starting the VM failed with machine type rhel6.5 not supported; changing rhel6.5 to pc via virsh edit ... fixed it).
/usr/libexec/qemu-kvm -drive format=? lists the supported drive formats (rbd must be in the list; if it is not, install a build that supports it, or build from source: git clone git://git.qemu.org/qemu.git; ./configure --enable-rbd. The build may not produce a qemu-kvm binary but something like qemu-system-x86_64 instead, in which case point the emulator in the domain configuration to the compiled executable).

4. Set up bridged networking

yum install bridge-utils
vi /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE="br0"
NM_CONTROLLED="no"
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.100
PREFIX=24
GATEWAY=192.168.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System br0"

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT=yes
TYPE="Ethernet"
UUID="73cb0b12-1f42-49b0-ad69-731e888276ff"
HWADDR=00:1E:90:F3:F0:02
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
BRIDGE=br0

/etc/init.d/network restart

5. Run a KVM instance (this step only verifies that the installation succeeded)

virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm

6. Connect to KVM

If you need to install a GUI:
yum -y groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts"
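
A minimal sketch of connecting to the test guest from step 5 (assuming it was named vm10 and uses VNC graphics as above; virt-manager was installed earlier, virt-viewer may need installing separately):

virsh -c qemu:///system list --all
virt-manager
# or, with virt-viewer, from this or another machine:
virt-viewer -c qemu+ssh://root@<kvm-host>/system vm10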

II. Installing Ceph on CentOS (the Firefly release)

Prepare the machines

One admin/deploy machine: admin
One monitor: node1
Two OSD data machines: node2, node3
One client that uses Ceph RBD: ceph-client

Create a ceph user with sudo privileges on every machine; all subsequent commands are run as that user.
Disable SELinux on every machine.
Disable "Defaults requiretty" on every machine (sudo visudo).
Check iptables and other firewall settings on every machine; inter-node communication uses ports such as 22, 6789, and 6800, so make sure it is not blocked.

Install the admin node

Add the repo:
sudo vim /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

I replaced the baseurl with baseurl=http://ceph.com/rpm-firefly/el6/noarch

Run sudo yum update && sudo yum install ceph-deploy

Let admin log in to the other machines with an SSH key:
ssh-keygen
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
ssh-copy-id ceph@node3

Install the other nodes

On the admin machine:
mkdir my-cluster
cd my-cluster

Initialize the configuration:
ceph-deploy new node1
This creates the configuration file ceph.conf in the current directory. Edit ceph.conf and:
add osd pool default size = 2, because we only have 2 OSDs;
add rbd default format = 2, which makes format 2 the default RBD image format and enables image cloning;
add journal dio = false.
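
For reference, a sketch of what the end of the [global] section might look like after these edits (the settings generated by ceph-deploy, such as fsid and mon host, are left untouched):

[global]
# settings generated by ceph-deploy stay as they are
osd pool default size = 2
rbd default format = 2
journal dio = false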

Install Ceph on all nodes:
ceph-deploy install admin node1 node2 node3

Initialize the monitor node:
ceph-deploy mon create-initial node1

Initialize the OSD nodes:
ssh node2
sudo mkdir /var/local/osd0
exit
ssh node3
sudo mkdir /var/local/osd1
exit

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
(These two commands should finish within a few seconds; if they hang until the 300-second timeout, check whether a firewall is interfering.)

Copy the configuration to every node:
ceph-deploy admin admin-node node1 node2 node3

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health
ceph status
You want to see the active+clean state.

III. Using Ceph with KVM

Create an OSD pool (the container for block devices)

ceph osd pool create libvirt-pool 128 128

Grant an account read/write access to the pool

Assume the account we use is libvirt (the admin account has all permissions by default, so no setup is needed if you use it).
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'

Create an image in the pool with qemu-img

qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 10G
At this step I hit "Unknown file format 'rbd'", which happens when an older qemu-img does not support rbd.
My qemu-img version number was high enough, though, so I suspect my package was built without the rbd option. In the end I had to force-install the lower-versioned http://ceph.com/packages/qemu-kvm/centos/x86_64/qemu-img-0.12.1.2-2.355.el6.2.cuttlefish.x86_64.rpm, which does support it.

Verify the image was created

rbd -p libvirt-pool ls

Create a virtual machine with KVM

1. Create a virtual machine with virsh or virt-manager.
You need an ISO or image under /var/lib/libvirt/images/.
I named the VM test. For the CD-ROM, pick an ISO such as debian.iso; do not add a hard disk.

2. virsh edit test

Modify the VM's configuration to use RBD storage.
Find
<devices>

and add the following after it:
<disk type='network' device='disk'>
<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
<host name='{monitor-host}' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>

3. Create the credentials for accessing Ceph
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.libvirt secret</name>
</usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
<uuid of secret is output here>
Save the key of the libvirt user:
ceph auth get-key client.libvirt | sudo tee client.libvirt.key
Save the generated UUID.

sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

virsh edit test

Add the auth element:
...
</source>
<auth username='libvirt'>
<secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
</auth>

4. Start the virtual machine and install the OS.
Meanwhile, virsh edit test
and change the configured boot device from cdrom to hd.

5. When the installation finishes, reboot the VM; it will boot from vda, i.e. from RBD.

6. Next, you can use rbd snap, rbd clone, virsh, and guestfs to build and use VM templates.
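
A minimal sketch of the snapshot/clone side of that workflow, assuming the installed image libvirt-pool/new-libvirt-image is used as the template (cloning needs format 2, which was set earlier):

rbd snap create libvirt-pool/new-libvirt-image@template
rbd snap protect libvirt-pool/new-libvirt-image@template
rbd clone libvirt-pool/new-libvirt-image@template libvirt-pool/test2-disk
rbd -p libvirt-pool ls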

 

virt-install

virt-install(1) - Linux man page

Name

virt-install - provision new virtual machines

Synopsis

virt-install [ OPTION ]...

Description

virt-install is a command line tool for creating new KVM , Xen, or Linux container guests using the "libvirt" hypervisor management library. See the EXAMPLES section at the end of this document to quickly get started.

virt-install tool supports both text based & graphical installations, using VNC or SDL graphics, or a text serial console. The guest can be configured to use one or more virtual disks, network interfaces, audio devices, physical USB or PCI devices, among others.

The installation media can be held locally or remotely on NFS , HTTP , FTP servers. In the latter case "virt-install" will fetch the minimal files necessary to kick off the installation process, allowing the guest to fetch the rest of the OS distribution as needed. PXE booting, and importing an existing disk image (thus skipping the install phase) are also supported.

Given suitable command line arguments, "virt-install" is capable of running completely unattended, with the guest 'kickstarting' itself too. This allows for easy automation of guest installs. An interactive mode is also available with the --prompt option, but this will only ask for the minimum required options.

Options

Most options are not required. Minimum requirements are --name, --ram, guest storage (--disk, --filesystem or --nodisks), and an install option.

-h, --help

Show the help message and exit

--connect=CONNECT

Connect to a non-default hypervisor. The default connection is chosen based on the following rules:

xen

If running on a host with the Xen kernel (checks against /proc/xen)

qemu:///system

If running on a bare metal kernel as root (needed for KVM installs)

qemu:///session

If running on a bare metal kernel as non-root

It is only necessary to provide the "--connect" argument if this default prioritization is incorrect, eg if wanting to use QEMU while on a Xen kernel.

General Options

General configuration parameters that apply to all types of guest installs.

-n NAME , --name=NAME

Name of the new guest virtual machine instance. This must be unique amongst all guests known to the hypervisor on the connection, including those not currently active. To re-define an existing guest, use the virsh(1) tool to shut it down ('virsh shutdown') & delete ('virsh undefine') it prior to running "virt-install".

-r MEMORY , --ram=MEMORY

Memory to allocate for guest instance in megabytes. If the hypervisor does not have enough free memory, it is usual for it to automatically take memory away from the host operating system to satisfy this allocation.

--arch=ARCH

Request a non-native CPU architecture for the guest virtual machine. If omitted, the host CPU architecture will be used in the guest.

--machine=MACHINE

The machine type to emulate. This will typically not need to be specified for Xen or KVM , but is useful for choosing machine types of more exotic architectures.

-u UUID , --uuid=UUID

UUID for the guest; if none is given a random UUID will be generated. If you specify UUID , you should use a 32-digit hexadecimal number. UUID are intended to be unique across the entire data center, and indeed world. Bear this in mind if manually specifying a UUID

--vcpus=VCPUS[,maxvcpus=MAX][,sockets=#][,cores=#][,threads=#]

Number of virtual cpus to configure for the guest. If 'maxvcpus' is specified, the guest will be able to hotplug up to MAX vcpus while the guest is running, but will start up with VCPUS. CPU topology can additionally be specified with sockets, cores, and threads. If values are omitted, the rest will be autofilled preferring sockets over cores over threads.

--cpuset=CPUSET

Set which physical cpus the guest can use. "CPUSET" is a comma separated list of numbers, which can also be specified in ranges or cpus to exclude. Example:

0,2,3,5     : Use processors 0,2,3 and 5

1-5,^3,8    : Use processors 1,2,4,5 and 8

If the value 'auto' is passed, virt-install attempts to automatically determine an optimal cpu pinning using NUMA data, if available.

--numatune=NODESET,[mode=MODE]

Tune NUMA policy for the domain process. Example invocations

--numatune 1,2,3,4-7

--numatune \"1-3,5\",mode=preferred

Specifies the numa nodes to allocate memory from. This has the same syntax as the "--cpuset" option. mode can be one of 'interleave', 'preferred', or 'strict' (the default). See 'man 8 numactl' for information about each mode. The nodeset string must use escaped-quotes if specifying any other option.

--cpu MODEL[,+feature][,-feature][,match=MATCH][,vendor=VENDOR]

Configure the CPU model and CPU features exposed to the guest. The only required value is MODEL, which is a valid CPU model as listed in libvirt's cpu_map.xml file. Specific CPU features can be specified in a number of ways: using one of libvirt's feature policy values force, require, optional, disable, or forbid, or with the shorthand '+feature' and '-feature', which equal 'force=feature' and 'disable=feature' respectively. Some examples:

--cpu core2duo,+x2apic,disable=vmx

Expose the core2duo CPU model, force enable x2apic, but do not expose vmx

--cpu host

Expose the host CPUs configuration to the guest. This enables the guest to take advantage of many of the host CPUs features (better performance), but may cause issues if migrating the guest to a host without an identical CPU .

--description

Human readable text description of the virtual machine. This will be stored in the guests XML configuration for access by other applications.

--security type=TYPE[,label=LABEL][,relabel=yes|no]

Configure domain security driver settings. Type can be either 'static' or 'dynamic'. 'static' configuration requires a security LABEL . Specifying LABEL without TYPE implies static configuration. To have libvirt automatically apply your static label, you must specify relabel=yes.

Installation Method options

-c CDROM , --cdrom=CDROM

File or device use as a virtual CD-ROM device for fully virtualized guests. It can be path to an ISO image, or to a CDROM device. It can also be a URL from which to fetch/access a minimal boot ISO image. The URLs take the same format as described for the "--location" argument. If a cdrom has been specified via the "--disk" option, and neither "--cdrom" nor any other install option is specified, the "--disk" cdrom is used as the install media.

-l LOCATION , --location=LOCATION

Distribution tree installation source. virt-install can recognize certain distribution trees and fetches a bootable kernel/initrd pair to launch the install. With libvirt 0.9.4 or later, network URL installs work for remote connections. virt-install will download kernel/initrd to the local machine, and then upload the media to the remote host. This option requires the URL to be accessible by both the local and remote host. The "LOCATION" can take one of the following forms:

DIRECTORY

Path to a local directory containing an installable distribution image

nfs:host:/path or nfs://host/path

An NFS server location containing an installable distribution image

http://host/path

An HTTP server location containing an installable distribution image

ftp://host/path

An FTP server location containing an installable distribution image

Some distro specific url samples:

Fedora/Red Hat Based

http://download.fedoraproject.org/pub/fedora/linux/releases/10/Fedora/i386/os/

Debian/Ubuntu

http://ftp.us.debian.org/debian/dists/etch/main/installer-amd64/

Suse

http://download.opensuse.org/distribution/11.0/repo/oss/

Mandriva

ftp://ftp.uwsg.indiana.edu/linux/mandrake/official/2009.0/i586/

--pxe

Use the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process.

--import

Skip the OS installation process, and build a guest around an existing disk image. The device used for booting is the first device specified via "--disk" or "--filesystem".

--init=INITPATH

Path to a binary that the container guest will init. If a root "--filesystem" has been specified, virt-install will default to /sbin/init, otherwise will default to /bin/sh.

--livecd

Specify that the installation media is a live CD and thus the guest needs to be configured to boot off the CDROM device permanently. It may be desirable to also use the "--nodisks" flag in combination.

-x EXTRA , --extra-args=EXTRA

Additional kernel command line arguments to pass to the installer when performing a guest install from "--location". One common usage is specifying an anaconda kickstart file for automated installs, such as --extra-args "ks=http://myserver/my.ks"

--initrd-inject=PATH

Add PATH to the root of the initrd fetched with "--location". This can be used to run an automated install without requiring a network hosted kickstart file:--initrd-inject=/path/to/my.ks --extra-args "ks=file:/my.ks"

--os-type=OS_TYPE

Optimize the guest configuration for a type of operating system (ex. 'linux', 'windows'). This will attempt to pick the most suitable ACPI & APIC settings, optimally supported mouse drivers, virtio, and generally accommodate other operating system quirks. By default, virt-install will attempt to auto detect this value from the install media (currently only supported for URL installs). Autodetection can be disabled with the special value 'none'. See "--os-variant" for valid options.

--os-variant=OS_VARIANT

Further optimize the guest configuration for a specific operating system variant (ex. 'fedora8', 'winxp'). This parameter is optional, and does not require an "--os-type" to be specified. By default, virt-install will attempt to auto detect this value from the install media (currently only supported for URL installs). Autodetection can be disabled with the special value 'none'. If the special value 'list' is passed, virt-install will print the full list of variant values and exit. The printed format is not a stable interface, DO NOT PARSE IT. If the special value 'none' is passed, no os variant is recorded and OS autodetection is disabled. Values for some recent OS options are:

win7 : Microsoft Windows 7

vista : Microsoft Windows Vista

winxp64 : Microsoft Windows XP (x86_64)

winxp : Microsoft Windows XP

win2k8 : Microsoft Windows Server 2008

win2k3 : Microsoft Windows Server 2003

freebsd8 : FreeBSD 8.x

generic : Generic

debiansqueeze : Debian Squeeze

debianlenny : Debian Lenny

fedora16 : Fedora 16

fedora15 : Fedora 15

fedora14 : Fedora 14

mes5.1 : Mandriva Enterprise Server 5.1 and later

mandriva2010 : Mandriva Linux 2010 and later

rhel6 : Red Hat Enterprise Linux 6

rhel5.4 : Red Hat Enterprise Linux 5.4 or later

rhel4 : Red Hat Enterprise Linux 4

sles11 : Suse Linux Enterprise Server 11

sles10 : Suse Linux Enterprise Server

ubuntuoneiric : Ubuntu 11.10 (Oneiric Ocelot)

ubuntunatty : Ubuntu 11.04 (Natty Narwhal)

ubuntumaverick : Ubuntu 10.10 (Maverick Meerkat)

ubuntulucid : Ubuntu 10.04 (Lucid Lynx)

ubuntuhardy : Ubuntu 8.04 LTS (Hardy Heron)

Use '--os-variant list' to see the full OS list

--boot=BOOTOPTS

Optionally specify the post-install VM boot configuration. This option allows specifying a boot device order, permanently booting off kernel/initrd with optional kernel arguments, and enabling a BIOS boot menu (requires libvirt 0.8.3 or later). --boot can be specified in addition to other install options (such as --location, --cdrom, etc.) or can be specified on its own. In the latter case, behavior is similar to the --import install option: there is no 'install' phase, the guest is just created and launched as specified. Some examples:

--boot cdrom,fd,hd,network,menu=on

Set the boot device priority as first cdrom, first floppy, first harddisk, network PXE boot. Additionally enable BIOS boot menu prompt.

--boot kernel=KERNEL,initrd=INITRD,kernel_args="console=/dev/ttyS0"

Have guest permanently boot off a local kernel/initrd pair, with the specified kernel options.

Storage Configuration

--disk=DISKOPTS

Specifies media to use as storage for the guest, with various options. The general format of a disk string is

--disk opt1=val1,opt2=val2,...

To specify media, the command can either be:

--disk /some/storage/path,opt1=val1

or explicitly specify one of the following arguments:

path

A path to some storage media to use, existing or not. Existing media can be a file or block device. If installing on a remote host, the existing media must be shared as a libvirt storage volume. Specifying a non-existent path implies attempting to create the new storage, and will require specifying a 'size' value. If the base directory of the path is a libvirt storage pool on the host, the new storage will be created as a libvirt storage volume. For remote hosts, the base directory is required to be a storage pool if using this method.

pool

An existing libvirt storage pool name to create new storage on. Requires specifying a 'size' value.

vol

An existing libvirt storage volume to use. This is specified as 'poolname/volname'.

Other available options:

device

Disk device type. Value can be 'cdrom', 'disk', or 'floppy'. Default is 'disk'. If a 'cdrom' is specified, and no install method is chosen, the cdrom is used as the install media.

bus

Disk bus type. Value can be 'ide', 'scsi', 'usb', 'virtio' or 'xen'. The default is hypervisor dependent since not all hypervisors support all bus types.

perms

Disk permissions. Value can be 'rw' (Read/Write), 'ro' (Readonly), or 'sh' (Shared Read/Write). Default is 'rw'

size

size (in GB ) to use if creating new storage

sparse

whether to skip fully allocating newly created storage. Value is 'true' or 'false'. Default is 'true' (do not fully allocate). The initial time taken to fully allocate the guest virtual disk (sparse=false) will usually be balanced by faster install times inside the guest. Thus use of this option is recommended to ensure consistently high performance and to avoid I/O errors in the guest should the host filesystem fill up.

cache

The cache mode to be used. The host pagecache provides cache memory. The cache value can be 'none', 'writethrough', or 'writeback'. 'writethrough' provides read caching. 'writeback' provides read and write caching.

format

Image format to be used if creating managed storage. For file volumes, this can be 'raw', 'qcow2', 'vmdk', etc. See format types in <http://libvirt.org/storage.html> for possible values. This is often mapped to the driver_type value as well. With libvirt 0.8.3 and later, this option should be specified if reusing an existing disk image, since libvirt does not autodetect storage format as it is a potential security issue. For example, if reusing an existing qcow2 image, you will want to specify format=qcow2, otherwise the hypervisor may not be able to read your disk image.

driver_name

Driver name the hypervisor should use when accessing the specified storage. Typically does not need to be set by the user.

driver_type

Driver format/type the hypervisor should use when accessing the specified storage. Typically does not need to be set by the user.

io

Disk IO backend. Can be either "threads" or "native".

error_policy

How guest should react if a write error is encountered. Can be one of "stop", "none", or "enospace"

serial

Serial number of the emulated disk device. This is used in linux guests to set /dev/disk/by-id symlinks. An example serial number might be: WD-WMAP9A966149

See the examples section for some uses. This option deprecates "--file", "--file-size", and "--nonsparse".
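
As an illustrative combination of the options above (path, size, format, bus, and cache are all documented here; the file name is an assumption):

--disk path=/var/lib/libvirt/images/demo.qcow2,size=8,format=qcow2,bus=virtio,cache=none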

--filesystem

Specifies a directory on the host to export to the guest. The most simple invocation is:

--filesystem /source/on/host,/target/point/in/guest

Which will work for recent QEMU and linux guest OS or LXC containers. For QEMU, the target point is just a mounting hint in sysfs, so will not be automatically mounted. The following explicit options can be specified:

type

The type of the source directory. Valid values are 'mount' (the default) or 'template' for OpenVZ templates.

mode

The access mode for the source directory from the guest OS . Only used with QEMU and type=mount. Valid modes are 'passthrough' (the default), 'mapped', or 'squash'. See libvirt domain XML documentation for more info.

source

The directory on the host to share.

target

The mount location to use in the guest.

--nodisks

Request a virtual machine without any local disk storage, typically used for running 'Live CD ' images or installing to network storage (iSCSI or NFS root).

-f DISKFILE , --file=DISKFILE

This option is deprecated in favor of "--disk path=DISKFILE".

-s DISKSIZE , --file-size=DISKSIZE

This option is deprecated in favor of "--disk ...,size=DISKSIZE,..."

--nonsparse

This option is deprecated in favor of "--disk ...,sparse=false,..."

Networking Configuration

-w NETWORK , --network=NETWORK,opt1=val1,opt2=val2

Connect the guest to the host network. The value for "NETWORK" can take one of 3 formats:

bridge=BRIDGE

Connect to a bridge device in the host called "BRIDGE". Use this option if the host has static networking config & the guest requires full outbound and inbound connectivity to/from the LAN . Also use this if live migration will be used with this guest.

network=NAME

Connect to a virtual network in the host called "NAME". Virtual networks can be listed, created, deleted using the "virsh" command line tool. In an unmodified install of "libvirt" there is usually a virtual network with a name of "default". Use a virtual network if the host has dynamic networking (eg NetworkManager), or using wireless. The guest will be NATed to the LAN by whichever connection is active.

user

Connect to the LAN using SLIRP . Only use this if running a QEMU guest as an unprivileged user. This provides a very limited form of NAT .

If this option is omitted a single NIC will be created in the guest. If there is a bridge device in the host with a physical interface enslaved, that will be used for connectivity. Failing that, the virtual network called "default" will be used. This option can be specified multiple times to setup more than one NIC. Other available options are:

model

Network device model as seen by the guest. Value can be any nic model supported by the hypervisor, e.g.: 'e1000', 'rtl8139', 'virtio', ...

mac

Fixed MAC address for the guest; If this parameter is omitted, or the value "RANDOM" is specified a suitable address will be randomly generated. For Xen virtual machines it is required that the first 3 pairs in the MAC address be the sequence '00:16:3e', while for QEMU or KVM virtual machines it must be '52:54:00'.
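
An illustrative invocation combining these options (the bridge name br0 is an assumption):

--network bridge=br0,model=virtio,mac=RANDOM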

--nonetworks

Request a virtual machine without any network interfaces.

-b BRIDGE , --bridge=BRIDGE

This parameter is deprecated in favour of "--network bridge=bridge_name".

-m MAC , --mac=MAC

This parameter is deprecated in favour of "--network NETWORK,mac=12:34..."

Graphics Configuration

If no graphics option is specified, "virt-install" will default to '--graphics vnc' if the DISPLAY environment variable is set, otherwise '--graphics none' is used.

--graphics TYPE ,opt1=arg1,opt2=arg2,...

Specifies the graphical display configuration. This does not configure any virtual hardware, just how the guest's graphical display can be accessed. Typically the user does not need to specify this option, virt-install will try and choose a useful default, and launch a suitable connection. General format of a graphical string is

--graphics TYPE,opt1=arg1,opt2=arg2,...

For example:

--graphics vnc,password=foobar

The supported options are:

type

The display type. This is one of:

vnc

Setup a virtual console in the guest and export it as a VNC server in the host. Unless the "port" parameter is also provided, the VNC server will run on the first free port number at 5900 or above. The actual VNC display allocated can be obtained using the "vncdisplay" command to "virsh" (or virt-viewer(1) can be used which handles this detail for the user).

sdl

Setup a virtual console in the guest and display an SDL window in the host to render the output. If the SDL window is closed the guest may be unconditionally terminated.

spice

Export the guest's console using the Spice protocol. Spice allows advanced features like audio and USB device streaming, as well as improved graphical performance.

Using spice graphic type will work as if those arguments were given:

--video qxl --channel spicevmc

none

No graphical console will be allocated for the guest. Fully virtualized guests (Xen FV or QEmu/KVM) will need to have a text console configured on the first serial port in the guest (this can be done via the --extra-args option). Xen PV will set this up automatically. The command 'virsh console NAME' can be used to connect to the serial device.

port

Request a permanent, statically assigned port number for the guest console. This is used by 'vnc' and 'spice'

tlsport

Specify the spice tlsport.

listen

Address to listen on for VNC/Spice connections. Default is typically 127.0.0.1 (localhost only), but some hypervisors allow changing this globally (for example, the qemu driver default can be changed in /etc/libvirt/qemu.conf). Use 0.0.0.0 to allow access from other machines. This is used by 'vnc' and 'spice'

keymap

Request that the virtual VNC console be configured to run with a specific keyboard layout. If the special value 'local' is specified, virt-install will attempt to configure to use the same keymap as the local system. A value of 'none' specifically defers to the hypervisor. Default behavior is hypervisor specific, but typically is the same as 'local'. This is used by 'vnc'

password

Request a VNC password, required at connection time. Beware, this info may end up in virt-install log files, so don't use an important password. This is used by 'vnc' and 'spice'

passwordvalidto

Set an expiration date for password. After the date/time has passed, all new graphical connections are denied until a new password is set. This is used by 'vnc' and 'spice'. The format for this value is YYYY-MM-DDTHH:MM:SS, for example 2011-04-01T14:30:15
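
An illustrative graphics string combining the options above (the listen address and keymap mirror the XML example at the top of this page):

--graphics vnc,listen=0.0.0.0,keymap=en-us,password=foobar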

--vnc

This option is deprecated in favor of "--graphics vnc,..."

--vncport=VNCPORT

This option is deprecated in favor of "--graphics vnc,port=PORT,..."

--vnclisten=VNCLISTEN

This option is deprecated in favor of "--graphics vnc,listen=LISTEN,..."

-k KEYMAP , --keymap=KEYMAP

This option is deprecated in favor of "--graphics vnc,keymap=KEYMAP,..."

--sdl

This option is deprecated in favor of "--graphics sdl,..."

--nographics

This option is deprecated in favor of "--graphics none"

--noautoconsole

Don't automatically try to connect to the guest console. The default behaviour is to launch a VNC client to display the graphical console, or to run the "virsh" "console" command to display the text console. Use of this parameter will disable this behaviour.

Virtualization Type options

Options to override the default virtualization type choices.

-v, --hvm

Request the use of full virtualization, if both para & full virtualization are available on the host. This parameter may not be available if connecting to a Xen hypervisor on a machine without hardware virtualization support. This parameter is implied if connecting to a QEMU based hypervisor.

-p, --paravirt

This guest should be a paravirtualized guest. If the host supports both para & full virtualization, and neither this parameter nor the "--hvm" are specified, this will be assumed.

--container

This guest should be a container type guest. This option is only required if the hypervisor supports other guest types as well (so for example this option is the default behavior for LXC and OpenVZ, but is provided for completeness).

--virt-type

The hypervisor to install on. Example choices are kvm, qemu, xen, or kqemu. Available options are listed via 'virsh capabilities' in the <domain> tags.

--accelerate

Prefer KVM or KQEMU (in that order) if installing a QEMU guest. This behavior is now the default, and this option is deprecated. To install a plain QEMU guest, use '--virt-type qemu'

--noapic

Force disable APIC for the guest.

--noacpi

Force disable ACPI for the guest.

Device Options

--host-device=HOSTDEV

Attach a physical host device to the guest. Some example values for HOSTDEV:

--host-device pci_0000_00_1b_0

A node device name via libvirt, as shown by 'virsh nodedev-list'

--host-device 001.003

USB by bus, device (via lsusb).

--host-device 0x1234:0x5678

USB by vendor, product (via lsusb).

--host-device 1f.01.02

PCI device (via lspci).

--soundhw MODEL

Attach a virtual audio device to the guest. MODEL specifies the emulated sound card model. Possible values are ich6, ac97, es1370, sb16, pcspk, or default. 'default' will be AC97 if the hypervisor supports it, otherwise it will be ES1370 .This deprecates the old boolean --sound model (which still works the same as a single '--soundhw default')

--watchdog MODEL[,action=ACTION]

Attach a virtual hardware watchdog device to the guest. This requires a daemon and device driver in the guest. The watchdog fires a signal when the virtual machine appears to be hung. ACTION specifies what libvirt will do when the watchdog fires. Values are

reset

Forcefully reset the guest (the default)

poweroff

Forcefully power off the guest

pause

Pause the guest

none

Do nothing

shutdown

Gracefully shutdown the guest (not recommended, since a hung guest probably won't respond to a graceful shutdown)

MODEL is the emulated device model: either i6300esb (the default) or ib700. Some examples:

Use the recommended settings:

--watchdog default

Use the i6300esb with the 'poweroff' action:

--watchdog i6300esb,action=poweroff

--parallel=CHAROPTS

--serial=CHAROPTS

Specifies a serial device to attach to the guest, with various options. The general format of a serial string is

--serial type,opt1=val1,opt2=val2,...

--serial and --parallel devices share all the same options, unless otherwise noted. Some of the types of character device redirection are:

--serial pty

Pseudo TTY . The allocated pty will be listed in the running guests XML description.

--serial dev,path=HOSTPATH

Host device. For serial devices, this could be /dev/ttyS0. For parallel devices, this could be /dev/parport0.

--serial file,path=FILENAME

Write output to FILENAME .

--serial pipe,path=PIPEPATH

Named pipe (see pipe(7))

--serial tcp,host=HOST:PORT,mode=MODE,protocol=PROTOCOL

TCP net console. MODE is either 'bind' (wait for connections on HOST:PORT) or 'connect' (send output to HOST:PORT), default is 'connect'. HOST defaults to '127.0.0.1', but PORT is required. PROTOCOL can be either 'raw' or 'telnet' (default 'raw'). If 'telnet', the port acts like a telnet server or client. Some examples:

Connect to localhost, port 1234:

--serial tcp,host=:1234

Wait for connections on any address, port 4567:

--serial tcp,host=0.0.0.0:4567,mode=bind

Wait for telnet connection on localhost, port 2222. The user could then connect interactively to this console via 'telnet localhost 2222':

--serial tcp,host=:2222,mode=bind,protocol=telnet

--serial udp,host=CONNECT_HOST:PORT,bind_host=BIND_HOST:BIND_PORT

UDP net console. HOST:PORT is the destination to send output to (default HOST is '127.0.0.1', PORT is required). BIND_HOST:BIND_PORT is the optional local address to bind to (default BIND_HOST is 127.0.0.1, but is only set if BIND_PORT is specified). Some examples:

Send output to default syslog port (may need to edit /etc/rsyslog.conf accordingly):

--serial udp,host=:514

Send output to remote host 192.168.10.20, port 4444 (this output can be read on the remote host using 'nc -u -l 4444'):

--serial udp,host=192.168.10.20:4444

--serial unix,path=UNIXPATH,mode=MODE

Unix socket, see unix(7). MODE has similar behavior and defaults as --serial tcp,mode=MODE

--channel

Specifies a communication channel device to connect the guest and host machine. This option uses the same options as --serial and --parallel for specifying the host/source end of the channel. Extra 'target' options are used to specify how the guest machine sees the channel. Some of the types of character device redirection are:

--channel SOURCE ,target_type=guestfwd,target_address=HOST:PORT

Communication channel using QEMU usermode networking stack. The guest can connect to the channel using the specified HOST:PORT combination.

--channel SOURCE ,target_type=virtio[,name=NAME]

Communication channel using virtio serial (requires 2.6.34 or later host and guest). Each instance of a virtio --channel line is exposed in the guest as /dev/vport0p1, /dev/vport0p2, etc. NAME is optional metadata, and can be any string, such as org.linux-kvm.virtioport1. If specified, this will be exposed in the guest at /sys/class/virtio-ports/vport0p1/NAME

--channel spicevmc,target_type=virtio[,name=NAME]

Communication channel for QEMU spice agent, using virtio serial (requires 2.6.34 or later host and guest). NAME is optional metadata, and can be any string, such as the default com.redhat.spice.0 that specifies how the guest will see the channel.

--console

Connect a text console between the guest and host. Certain guest and hypervisor combinations can automatically set up a getty in the guest, so an out of the box text login can be provided (target_type=xen for xen paravirt guests, and possibly target_type=virtio in the future). Example:

--console pty,target_type=virtio

Connect a virtio console to the guest, redirected to a PTY on the host. For supported guests, this exposes /dev/hvc0 in the guest. See http://fedoraproject.org/wiki/Features/VirtioSerial for more info. virtio console requires libvirt 0.8.3 or later.

--video=VIDEO

Specify what video device model will be attached to the guest. Valid values for VIDEO are hypervisor specific, but some options for recent kvm are cirrus, vga, qxl, or vmvga (vmware).

--smartcard=MODE[,OPTS]

Configure a virtual smartcard device. Mode is one of host, host-certificates, or passthrough. Additional options are:

type

Character device type to connect to on the host. This is only applicable for passthrough mode.

An example invocation:

--smartcard passthrough,type=spicevmc

Use the smartcard channel of a SPICE graphics device to pass smartcard info to the guest

See "http://libvirt.org/formatdomain.html#elementsSmartcard" for complete details.

Miscellaneous Options

--autostart

Set the autostart flag for a domain. This causes the domain to be started on host boot up.

--print-xml

If the requested guest has no install phase (--import, --boot), print the generated XML instead of defining the guest. By default this WILL do storage creation (can be disabled with --dry-run). If the guest has an install phase, you will need to use --print-step to specify exactly what XML output you want. This option implies --quiet.

--print-step

Acts similarly to --print-xml, except requires specifying which install step to print XML for. Possible values are 1, 2, 3, or all. Stage 1 is typically booting from the install media, and stage 2 is typically the final guest config booting off hard disk. Stage 3 is only relevant for windows installs, which by default have a second install stage. This option implies --quiet.

--noreboot

Prevent the domain from automatically rebooting after the install has completed.

--wait=WAIT

Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shutdown), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely, a value of 0 triggers the same results as noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.

--force

Prevent interactive prompts. If the intended prompt was a yes/no prompt, always say yes. For any other prompts, the application will exit.

--dry-run

Proceed through the guest creation process, but do NOT create storage devices, change host device configuration, or actually teach libvirt about the guest. virt-install may still fetch install media, since this is required to properly detect the OS to install.

--prompt

Specifically enable prompting for required information. Default prompting is off (as of virtinst 0.400.0)

--check-cpu

Check that the number of virtual cpus requested does not exceed physical CPUs and warn if they do.

-q, --quiet

Only print fatal error messages.

-d, --debug

Print debugging information to the terminal when running the install process. The debugging information is also stored in "$HOME/.virtinst/virt-install.log" even if this parameter is omitted.

Examples

Install a Fedora 13 KVM guest with virtio accelerated disk/network, creating a new 8GB storage file, installing from media in the host's CDROM drive, auto launching a graphical VNC viewer

# virt-install \

--connect qemu:///system \

--virt-type kvm \

--name demo \

--ram 500 \

--disk path=/var/lib/libvirt/images/demo.img,size=8 \

--graphics vnc \

--cdrom /dev/cdrom \

--os-variant fedora13

Install a Fedora 9 plain QEMU guest, using LVM partition, virtual networking, booting from PXE , using VNC server/viewer

# virt-install \

--connect qemu:///system \

--name demo \

--ram 500 \

--disk path=/dev/HostVG/DemoVM \

--network network=default \

--virt-type qemu \

--graphics vnc \

--os-variant fedora9

Install a guest with a real partition, with the default QEMU hypervisor for a different architecture using SDL graphics, using a remote kernel and initrd pair:

# virt-install \

--connect qemu:///system \

--name demo \

--ram 500 \

--disk path=/dev/hdc \

--network bridge=eth1 \

--arch ppc64 \

--graphics sdl \

--location http://download.fedora.redhat.com/pub/fedora/linux/core/6/x86_64/os/

Run a Live CD image under Xen fullyvirt, in diskless environment

# virt-install \

--hvm \

--name demo \

--ram 500 \

--nodisks \

--livecd \

--graphics vnc \

--cdrom /root/fedora7live.iso

Run /usr/bin/httpd in a linux container guest ( LXC ). Resource usage is capped at 512 MB of ram and 2 host cpus:

# virt-install \

--connect lxc:/// \

--name httpd_guest \

--ram 512 \

--vcpus 2 \

--init /usr/bin/httpd

Install a paravirtualized Xen guest, with 500 MB of RAM, 5 GB of disk, and Fedora Core 6 from a web server, in text-only mode, with old style --file options:

# virt-install \

--paravirt \

--name demo \

--ram 500 \

--file /var/lib/xen/images/demo.img \

--file-size 6 \

--graphics none \

--location http://download.fedora.redhat.com/pub/fedora/linux/core/6/x86_64/os/

Create a guest from an existing disk image 'mydisk.img' using defaults for the rest of the options.

# virt-install \

--name demo \

--ram 512 \

--disk /home/user/VMs/mydisk.img \

--import

Test a custom kernel/initrd using an existing disk image, manually specifying a serial device hooked to a PTY on the host machine.

# virt-install \

--name mykernel \

--ram 512 \

--disk /home/user/VMs/mydisk.img \

--boot kernel=/tmp/mykernel,initrd=/tmp/myinitrd,kernel_args="console=ttyS0" \

--serial pty

Authors

Written by Daniel P. Berrange, Hugh Brock, Jeremy Katz, Cole Robinson and a team of many other contributors. See the AUTHORS file in the source distribution for the complete list of credits.

 

links:

https://linux.die.net/man/1/virt-install
https://libvirt.org/formatdomain.html#elementsDevices
https://www.linux-kvm.org/page/Processor_support

##################################

CPU LIST:

x86           qemu64  QEMU Virtual CPU version 1.5.3
x86           phenom  AMD Phenom(tm) 9550 Quad-Core Processor
x86         core2duo  Intel(R) Core(TM)2 Duo CPU     T7700  @ 2.40GHz
x86            kvm64  Common KVM processor
x86           qemu32  QEMU Virtual CPU version 1.5.3
x86            kvm32  Common 32-bit KVM processor
x86          coreduo  Genuine Intel(R) CPU           T2600  @ 2.16GHz
x86              486
x86          pentium
x86         pentium2
x86         pentium3
x86           athlon  QEMU Virtual CPU version 1.5.3
x86             n270  Intel(R) Atom(TM) CPU N270   @ 1.60GHz
x86      cpu64-rhel6  QEMU Virtual CPU version (cpu64-rhel6)
x86           Conroe  Intel Celeron_4x0 (Conroe/Merom Class Core 2)
x86           Penryn  Intel Core 2 Duo P9xxx (Penryn Class Core 2)
x86          Nehalem  Intel Core i7 9xx (Nehalem Class Core i7)
x86     Nehalem-IBRS  Intel Core i7 9xx (Nehalem Core i7, IBRS update)
x86         Westmere  Westmere E56xx/L56xx/X56xx (Nehalem-C)
x86    Westmere-IBRS  Westmere E56xx/L56xx/X56xx (IBRS update)
x86      SandyBridge  Intel Xeon E312xx (Sandy Bridge)
x86 SandyBridge-IBRS  Intel Xeon E312xx (Sandy Bridge, IBRS update)
x86        IvyBridge  Intel Xeon E3-12xx v2 (Ivy Bridge)
x86   IvyBridge-IBRS  Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS)
x86          Haswell  Intel Core Processor (Haswell)
x86     Haswell-IBRS  Intel Core Processor (Haswell, IBRS)
x86        Broadwell  Intel Core Processor (Broadwell)
x86   Broadwell-IBRS  Intel Core Processor (Broadwell, IBRS)
x86   Skylake-Client  Intel Core Processor (Skylake)
x86 Skylake-Client-IBRS  Intel Core Processor (Skylake, IBRS)
x86   Skylake-Server  Intel Xeon Processor (Skylake)
x86 Skylake-Server-IBRS  Intel Xeon Processor (Skylake, IBRS)
x86       Opteron_G1  AMD Opteron 240 (Gen 1 Class Opteron)
x86       Opteron_G2  AMD Opteron 22xx (Gen 2 Class Opteron)
x86       Opteron_G3  AMD Opteron 23xx (Gen 3 Class Opteron)
x86       Opteron_G4  AMD Opteron 62xx class CPU
x86       Opteron_G5  AMD Opteron 63xx class CPU
x86             EPYC  AMD EPYC Processor
x86        EPYC-IBPB  AMD EPYC Processor (with IBPB)
x86             host  KVM processor with all supported host features 
(only available in KVM mode)
Recognized CPUID flags:
pbe ia64 tm ht ss sse2 sse fxsr mmx acpi ds clflush pn pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc pse de vme fpu
hypervisor rdrand f16c avx osxsave xsave aes tsc-deadline popcnt movbe x2apic sse4.2|sse4_2 sse4.1|sse4_1 dca pcid pdcm xtpr cx16 fma cid ssse3 tm2 est smx vmx ds_cpl monitor dtes64 pclmulqdq|pclmuldq pni|sse3
avx512vl avx512bw sha-ni avx512cd avx512er avx512pf clwb clflushopt pcommit avx512ifma smap adx rdseed avx512dq avx512f mpx rtm invpcid erms bmi2 smep avx2 hle bmi1 fsgsbase
avx512-vpopcntdq ospke pku avx512vbmi
ssbd arch-facilities stibp spec-ctrl avx512-4fmaps avx512-4vnniw
3dnow 3dnowext lm|i64 rdtscp pdpe1gb fxsr_opt|ffxsr mmxext nx|xd syscall
perfctr_nb perfctr_core topoext tbm nodeid_msr tce fma4 lwp wdt skinit xop ibs osvw 3dnowprefetch misalignsse sse4a abm cr8legacy extapic svm cmp_legacy lahf_lm
ibpb
pmm-en pmm phe-en phe ace2-en ace2 xcrypt-en xcrypt xstore-en xstore
kvm_pv_unhalt kvm_pv_eoi kvm_steal_time kvm_asyncpf kvmclock kvm_mmu kvm_nopiodelay kvmclock
pfthreshold pause_filter decodeassists flushbyasid vmcb_clean tsc_scale nrip_save svm_lock lbrv npt
xsaves xgetbv1 xsavec xsaveopt
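
A list like the one above is what the emulator itself prints when asked for its CPU models; as a sketch, using the binary path from the KVM section earlier (the exact path depends on the installation):

/usr/libexec/qemu-kvm -cpu ?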

########################################

device

'cdrom', 'disk', or 'floppy'.

target bus
"ide", "scsi", "virtio", "xen", "usb", "sata", or "sd"

rbd
<target dev="hda" bus="ide"/>
<target dev="sda" bus="sata"/>
<target dev="sda" bus="scsi"/>
<target dev='vda' bus='virtio'/>

hda
<target dev="sda" bus="ide"/>

lun
<target dev='sda' bus='scsi'/>

iSCSI
<target dev='vda' bus='virtio'/>


Virtualized networking modes

1. Isolated mode (host-only): the virtual machines form a network among themselves; in this mode they cannot communicate with the host or with any other network, as if the VMs were simply plugged into a single switch.
2. Routed mode: as if the VMs were connected to a router; the router (the physical NIC) forwards all their traffic, but the source address is not changed.
3. NAT mode: in routed mode the VMs can reach other hosts, but replies from those hosts may never make it back to the VMs; NAT mode translates the source address to the router's (physical NIC's) address, so other hosts know which host the packets come from. This is commonly used in Docker environments (see the sketch after this list).
4. Bridged mode: a virtual NIC is created on the host to act as the host's own interface, while the physical NIC acts as a switch.
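
As a minimal sketch of NAT mode, libvirt normally ships a NAT-mode virtual network called "default"; assuming libvirt is installed as in the KVM section above, it can be inspected and enabled with:

virsh net-list --all
virsh net-start default
virsh net-autostart default
virsh net-dumpxml default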

------------------------------------------------------------------

+ Isolated model

+ Routed model and NAT model

+ Bridged mode

----------------------------------------------------------------------

Comparison diagrams:

---------------------------------------------------------------------

Bridging example:

The bridge:

vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8

The physical NIC:

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
HWADDR=00:0C:29:3B:3F:6F
UUID=dfd0cxde-5054-4c81-abe6-e7958f31549d
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
#IPADDR=192.168.1.10
#NETMASK=255.255.255.0
#GATEWAY=192.168.1.1
#DNS1=192.168.1.1
#DNS2=8.8.8.8
# bind eth0 to the bridge interface br0
BRIDGE=br0

Check the bridge status:

brctl show
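
With the bridge up, a guest can be attached to it, for example with virt-install as in the KVM section above (the names and ISO path here are only illustrative):

virt-install --connect qemu:///system -n vm-br0 -r 1024 --disk path=/var/lib/libvirt/images/vm-br0.img,size=10 -c /path/to/install.iso --graphics vnc --network bridge=br0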


Reference:

https://www.cnblogs.com/hukey/p/6436211.html


Starting VMware in the background

Taken from the web:

 

Starting VMware with its GUI consumes a lot of resources, so starting it in the background is what most people prefer. Here is a brief introduction to a few commonly used commands:

 

Open a DOS (cmd) window and run the following commands:

Change to the VMware installation directory: cd C:\Program Files (x86)\VMware\VMware Workstation
Start: vmrun -T ws start "C:\ubuntu/Ubuntu.vmx" nogui

[Note: C:\ubuntu/Ubuntu.vmx is the Ubuntu VM you created in VMware; the same applies below.]
Check whether it started: tasklist|findstr vmware
[Note: this is the equivalent of ps -ef|grep "vmware" on Unix systems.]

Graceful shutdown: vmrun stop "C:\ubuntu/Ubuntu.vmx" soft
Forced power-off: vmrun stop "C:\ubuntu/Ubuntu.vmx" hard
Suspend: vmrun suspend "C:\ubuntu/Ubuntu.vmx" hard | soft

List running VMs: vmrun list

These are the operations we use most often, but typing them every time is tedious; a .bat script makes things much easier. Here it is:

 

::start vmware
@echo s:start vmware p:stop vmware
@set /p select=Input:
@if "%select%" == "s" goto start
@if "%select%" == "p" goto stop
@echo invalid input
@pause
@exit

:stop
     cd C:\Program Files (x86)\VMware\VMware Workstation
     vmrun stop "C:\ubuntu/Ubuntu.vmx" hard
     @echo stop succeed!
     pause
     exit
:start
     cd C:\Program Files (x86)\VMware\VMware Workstation
     vmrun list
     vmrun -T ws start "C:\ubuntu/Ubuntu.vmx" nogui
     tasklist|findstr vmware
     @echo start succeed!
     pause
     exit

A B/S-architecture virtualization management system for KVM

The emergence of cloud computing has provided a completely new direction for both academia and industry in information technology. Virtualization, the key technology of the IaaS layer of cloud computing, has also developed rapidly in recent years. IBM began researching virtualization in the 1960s so that mainframe resources could be shared by many users. After decades of development the technology has matured, producing products such as VMware, Virtual PC, Xen, and KVM (Kernel-based Virtual Machine). Compared with other virtualization products, KVM's greatest advantage is that it is completely open source. KVM is kernel-based full virtualization and performs well in efficiency comparisons with other products, but its management systems are still C/S only, and the inherent drawbacks of the C/S architecture waste system resources. Users must also install management software before they can do any management work, and different management tools place different requirements on the operating system and hardware. By contrast, a B/S management system is much more flexible: any terminal with a browser can be used for management, which improves management efficiency and reduces client-side resource usage.

  This article implements a B/S-architecture virtualization management system based on KVM. By calling KVM's libvirt development interface and using J2EE technology, a system administrator can log in through a browser from anywhere and manage virtual machines and virtual storage. Finally, practical tests demonstrate the advantages of the B/S architecture.

  1. System components

  The KVM-based virtualization management system consists of four modules: the client, the web server, the virtualization server cluster, and the shared storage server. A client is any Internet-capable device with a browser; its operating system can be Linux, Windows, or even an embedded OS. The web server runs the Tomcat web container on Linux or Windows, and the J2EE project that performs the management work is deployed inside the container. The machines in the virtualization server cluster all run Linux, the KVM virtualization software, and the libvirt interface, and have passwordless SSH connections to the web server. The shared storage server is a disk array running FreeNAS that provides data storage services to the virtualization cluster over the IP network.

  The user logs in through a JSP page and performs operations; the operation parameters are passed to the web server, which establishes an SSH connection to the target server, manages the virtualization cluster by calling the libvirt interface, and presents the results to the user through JSP pages. The user can also invoke the SPICE plugin to view a virtual machine's console graphically. The overall system structure is shown in Figure 1.


Figure 1: System architecture

  1.1 KVM (Kernel-based Virtual Machine)

  KVM is kernel-based virtualization. It relies on the Linux kernel and its performance is excellent, close to that of a bare-metal operating system. Thanks to its performance and its open-source nature it has been widely recognized by the industry and has grown considerably in recent years. KVM implements full virtualization through software emulation: the guest operating system's I/O instructions are handed to QEMU running on the host operating system (Linux), QEMU translates them into I/O operations on the host, and the host operating system then calls its drivers to access the hardware. Emulating the hardware this way is relatively simple to implement and fairly efficient.

  1.2 SPICE (Simple Protocol for Independent Computing Environments)

  SPICE is an open-source remote desktop protocol that can be used to deploy virtual desktops on servers, remote computers, thin clients, and similar devices. It is comparable to Microsoft's Remote Desktop Protocol and supports Windows and Linux.

  SPICE has two parts: the SPICE server and the SPICE client. After the SPICE server is installed and configured inside a virtual machine, a client can connect to that VM remotely through the SPICE client plugin and operate it through a graphical interface.

  1.3 The libvirt interface

  Libvirt is an open-source virtualization library written in C that provides a programming interface for managing different virtualization back ends and management tools in a single, uniform way. It works with mainstream virtualization stacks including Xen, KVM, and VMware, and offers bindings for Java, Python, C/C++, and other languages. The common Linux virtualization management tools virt-manager and virt-install are both built on libvirt.

  1.4 FreeNAS

  FreeNAS is free NAS (network-attached storage) server software that turns the disks of an ordinary PC or server into a network storage server. It is based on FreeBSD, Samba, and PHP, supports the FTP/NFS/RSYNC/CIFS/AFP/UNISON/SSH protocols and a web-based configuration tool, and can even be installed on removable storage, which makes it flexible to use and widely adopted for building network storage servers.

  2. J2EE project architecture

  The core of the system is the J2EE project that performs the management work and interacts with the user. Its architecture is shown in Figure 2.


Figure 2: J2EE project architecture

  The client tier handles the interaction between the user and the system: the user logs in through a web browser and submits operation parameters through the pages. The browser receives the user's requests and displays the information returned by the server.

  The Struts controller receives and processes user requests, forwards them to the business logic tier, then receives the results from that tier and returns them to the client tier for display.

  The business logic tier is the core of the J2EE project. It receives the user requests passed on by Struts, calls the underlying libvirt API to perform the corresponding operations, and presents the results to the user through the web tier.

  The J2EE project that manages the virtualization is deployed in the Tomcat container of the central server (the web server), which should be on the same LAN as the virtualization cluster to guarantee connection speed. The user logs in to the management system through the front-end JSP pages; after receiving the user's request parameters, the web server establishes an SSH connection to the target server and the user can then carry out the management operations. When the web server receives the user's operation parameters, it converts them into the parameters of an XML file in the format accepted by the libvirt interface and sends them to the target server, which calls the libvirt interface to execute the operation. The concrete flow is as follows:

  The user submits request parameters: after logging in successfully, the user can perform four kinds of management: user management, storage management, virtual machine management, and cluster management. User management allows changing the login password (changing it regularly improves security); storage management allows adding or deleting resource pools and viewing, deleting, or creating virtual disks; virtual machine management allows viewing and modifying VM information and viewing the VM's graphical console via SPICE; cluster management allows viewing the physical machines in the cluster and migrating a VM to another physical machine. User operations are submitted to the web server as forms.

  The web server converts the request parameters and sends them to the target server: the web server receives the user's request parameters through a servlet, converts them into strings, assembles them into an XML-format libvirt configuration file, establishes an SSH connection to the target server, and passes the requested operation to the target server through that configuration file.

  The target server performs the operation and returns the result to the web server: after receiving the configuration file parameters, the target server calls the libvirt interface with them, performs the operation, and returns the result to the web server.

  The web server returns the result to the user: after receiving the result, the web server wraps it and passes it to the front-end page for display. If the user's request was to view a virtual machine graphically, the web server passes the VM's parameters to the client, and the client then connects to the target server directly.

  3. System functions

  After logging in, the user can perform four kinds of management work: user management, storage management, virtual machine management, and cluster management. The functional structure of the system is shown in Figure 3.


Figure 3: System functional structure

  3.1 User management

  User names and passwords are encrypted and stored in a configuration file. When a user submits a login request, the web server reads the configuration file and verifies the credentials; once verified, the user can perform the management operations within their privileges. Users can change their password regularly to improve security. The superuser can manage users, including adding and deleting users and assigning privileges, but cannot manage the virtualization cluster directly. This separation of privileges improves the security of the system.

  3.2 Virtual storage management

  Before creating a virtual disk, a resource pool must be created first, and the virtual disk is then created inside that pool. A virtual disk is the storage space provided to a virtual machine, a virtual logical disk located on the shared storage. After logging in, the storage management page offers the following operations: creating and deleting resource pools, and creating and deleting virtual disks. When creating or deleting a virtual disk, the disk name is passed to the web server as a parameter; the creation form contains the disk name and size and is submitted to a servlet for processing.

  3.3 Virtual machine management

  When a virtual machine is created, KVM assigns it a UUID (universally unique identifier) that uniquely identifies it. Every time KVM starts it also assigns each VM an ID, and either the UUID or the ID can be used to locate a VM. Virtual machine management covers: creating and deleting VMs, viewing and modifying VM configuration, starting, suspending, and shutting down VMs, exporting VM templates, creating VMs from templates, and displaying the VM console (SPICE). This system passes the VM's ID as the parameter: the user sends the operation and the VM ID to the web server, the web server wraps them and passes them to the target server, receives the result of the execution, and passes it back to the user for display.

  When SPICE is used to view a VM graphically, the system first checks whether the SPICE plugin is installed correctly; if not, the user is prompted to download and install it. If it is installed, then once the view command is intercepted the SPICE plugin is launched from the web page, the VM's parameters are passed to it, and after entering the password the user can see the VM's console.

  3.4 Cluster management

  Cluster management includes physical host management and cluster scheduling. Physical host management covers adding physical machines to and removing them from the virtualization cluster, collecting physical machine information via SNMP (Simple Network Management Protocol), and maintaining the list of active physical machines. Cluster scheduling is implemented through VM migration using a central task-scheduling algorithm: based on the host status information that the management node (the web server) collects via SNMP, some of the VMs on an overloaded physical machine are migrated to other machines, and the VMs on a lightly loaded machine can be migrated away so that the machine can be shut down to save power.


    4. Key techniques

    4.1 Wrapping an SSH connection class

    SSH connections are established in many places in the system, so wrapping the connection logic in a class effectively reduces the amount of code and improves readability.

    The wrapper needs two classes from the org.libvirt package: Connect and LibvirtException. The wrapper class is as follows:

import org.libvirt.Connect;
import org.libvirt.LibvirtException;

public class SSHConnect {

    private static final String SSHURL1 = "qemu+ssh://";
    private static final String SSHURL2 = "/system";

    // Open a libvirt connection to the target host over SSH.
    public static Connect getSSHConnect(String userIp, boolean flag) {
        Connect conn = null;
        try {
            conn = new Connect(SSHURL1 + userIp + SSHURL2, flag);
        } catch (LibvirtException e) {
            e.printStackTrace();
            System.out.println(e.getError());
            return null;
        }
        return conn;
    }

    // Close the connection if it was opened.
    public static void close(Connect conn) {
        if (conn != null) {
            try {
                conn.close();
            } catch (LibvirtException e) {
                e.printStackTrace();
                System.out.println(e.getError());
            }
        }
    }
}

  4.2 Ajax (Asynchronous JavaScript and XML)



    Since different users may manage the same server at the same time, this system always reads data from the server in real time, which requires Ajax.

  Ajax, i.e. asynchronous JavaScript and XML, is a web development technique for building interactive web applications. It works by inserting an intermediate layer, the Ajax engine, between the user and the server, making user operations and server responses asynchronous: some operations are handled directly by Ajax without submitting data to the server, and when the user needs new data the Ajax engine submits the request to the server on the user's behalf and sends the result back to the user. With the Ajax engine, almost every operation feels responsive, with no waiting for page reloads and no page jumps. Its working principle is shown in Figure 4.

 


Figure 4: How Ajax works

  4.3 Viewing virtual machines with SPICE

  When creating a virtual machine, choose spice as the graphics device, add a SPICE server and set its port number, and choose the QXL video device as the graphics card. After receiving a request to view the graphical console, the web server first determines the client operating system and checks whether SPICE is installed correctly; if it is not, the user is prompted to download and install the SPICE package for their operating system.

  If SPICE is installed correctly, it is launched from the web page and the relevant parameters are passed in, which opens the virtual machine's graphical console.

  4.4 Providing an application-like view

  To make the management steps clearer and to match users' habits, the system provides a user interface similar to a desktop application. Each operation is split into steps, and one JSP page is designed for each step; a draggable layer is built with CSS+div, iframes are used to load the JSP page for each step into the layer, and JavaScript controls switching the layer's content and caching the data, while data that must be exchanged with the server promptly is fetched in real time with Ajax. When an operation is complete, the form data is merged and submitted.

  5. Performance tests

  Taking VMware's vCenter Server/vClient as an example, we compared its performance with this system's; the test environment was a cluster of 6 servers. A PC was used as the client: a dual-core 2.9 GHz CPU and 2 GB of RAM, with the Windows XP system itself using about 350 MB of memory. The system resource usage in each case is shown in Figures 5 and 6.

 


Figure 5: VMware client resource usage


Figure 6: B/S-architecture client resource usage

  As the figures show, the C/S client's CPU usage is about 50% with roughly 350 MB of memory in use, while the client of the B/S system averages about 20% CPU with roughly 190 MB of memory in use. The management node has two 64-bit CPUs at 2.5 GHz, 8 GB of RAM, and a 100 GB disk. Table 1 shows its system resource usage when Hyper-V is installed and when it acts as the web server.

 

Table 1: Comparison of the system resources consumed by the management node


  This shows that the B/S-architecture management system greatly reduces system resource consumption. The advantage becomes even more pronounced as the cluster grows, so it is a valuable reference for deploying large-scale virtualization clusters.

  6. Conclusion

  The KVM-based B/S-architecture virtualization management system moves virtualization management from the C/S model to the B/S model: users no longer need to install a client before managing the virtualization cluster but can simply log in through a browser, and the management work is no longer tied to a particular client machine or a particular network environment. Because the web server replaces the management center server, the system consumes fewer resources and management becomes more flexible. The system has been tested on Windows and Linux and runs well.