Evolution of virtual infrastructure with Hyper-V
Juraj Sucik, Slavomir Kubacka
Internet Services Group, CERN IT
Let's continue
- 2006: Microsoft Virtual Server 2005
- 2008: Hyper-V, SCVMM 2008
- Sep 2009: Hyper-V 2.0, SCVMM 2008 R2
Hyper-V Features
- Hypervisor feature of Windows Server 2008
- 32-bit and 64-bit virtual machines
- Up to 4 CPUs per VM
- Max 32 GB of memory per VM
- Snapshots
- Failover clustering
- Scriptable interface (WMI; see the sketch after this list)
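A minimal sketch of that scriptable interface, assuming a Hyper-V host reachable as "hyperv01" (a placeholder name): list the virtual machines defined on the host through the root\virtualization WMI namespace.

  # List Hyper-V virtual machines on a host via WMI; the Caption filter
  # drops the entry that represents the host itself.
  Get-WmiObject -ComputerName "hyperv01" -Namespace "root\virtualization" `
                -Class Msvm_ComputerSystem |
    Where-Object { $_.Caption -eq "Virtual Machine" } |
    Select-Object ElementName, EnabledState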
SCVMM 2008 Features
- Enterprise management solution
- Windows PowerShell API (see the sketch after this list)
- V2V and P2V capabilities
- Web portal
- Intelligent placement
- Library and templates
- Delegated management roles
- Job history
- Support for highly available VMs
- VM migration
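A hedged sketch of the Windows PowerShell API, assuming the VMM 2008 administrator console (which installs the VMM snap-in) and a VMM server named "vmmserver" (placeholder): connect and list the managed virtual machines.

  # Load the VMM snap-in, connect to the VMM server and list its VMs.
  Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
  Get-VMMServer -ComputerName "vmmserver" | Out-Null
  Get-VM | Select-Object Name, Status, VMHost | Format-Table -AutoSize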
Hyper-V Infrastructure
System Architecture
(diagram: the CERN Virtual Infrastructure web interface and SOAP services sit on an application-management layer that drives Microsoft Virtual Machine Manager; around it are the VMM admin console, Windows PowerShell, OS maintenance, backups and the LAN DB; a client sketch for the SOAP services follows)
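Illustrative only: one way a client could talk to the SOAP services from PowerShell. The WSDL URL is a hypothetical placeholder, not the actual CVI endpoint, and no CVI method names are assumed.

  # Build a proxy from the (placeholder) WSDL and inspect which operations it exposes.
  $cvi = New-WebServiceProxy -Uri "https://cvi-soap.example.cern.ch/Service.asmx?WSDL"
  $cvi | Get-Member -MemberType Method | Select-Object Name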
CERN Virtual Infrastructure
Enhancements
Hyper-V 2.0 Features
- Live migration
- Cluster Shared Volumes (CSV): enable multiple nodes in a cluster to access a single shared LUN; dynamic I/O redirection (see the sketch after this list)
- Network optimizations: TCP/IP traffic in a VM can be offloaded to a physical NIC on the host computer
- Processor compatibility mode: allows live migration across different CPU versions within the same processor family
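A minimal sketch, assuming the WS2008 R2 FailoverClusters PowerShell module on a cluster node and that CSV has already been enabled on the cluster; the disk resource name is a placeholder.

  # Turn a clustered disk into a Cluster Shared Volume so every node can
  # reach the LUN holding the VHDs, then list the CSVs and their state.
  Import-Module FailoverClusters
  Add-ClusterSharedVolume -Name "Cluster Disk 1"
  Get-ClusterSharedVolume | Format-Table Name, State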
Hyper-V 2.0 Features (continued)
- Logical processor support: support for 32 logical processors on the host computer
- Hot add/remove storage: add and remove VHD disks on a running VM without requiring a reboot (see the sketch after this list)
- Second Level Address Translation (SLAT): leverages new processor features to improve performance and reduce load on the Windows hypervisor
- Better SMP support for Linux
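A hedged sketch of hot-adding a disk to a running VM through the VMM PowerShell API; the VM name, bus/LUN position and size are placeholders, and hot add applies to the virtual SCSI controller only.

  # Attach a new dynamically expanding VHD (size in MB) to the running VM's
  # SCSI controller; no guest reboot is required.
  $vm = Get-VM -Name "lxvm001"
  New-VirtualDiskDrive -VM $vm -SCSI -Bus 0 -LUN 1 -Dynamic -Size 20480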
SCVMM 2008 R2 Features
- Manage WS 2008 R2 Hyper-V
- Live migration: detects whether live migration can be done (see the sketch after this list)
- Maintenance mode: placement of new VMs not allowed; existing VMs are migrated off or saved
- Multiple VMs per LUN using CSV: supports the CSV feature of Hyper-V 2.0
- V2P feature
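A hedged sketch of requesting a migration through the SCVMM 2008 R2 PowerShell API; the VM and host names are placeholders, and VMM itself decides whether the transfer can be a live migration.

  # Ask VMM to move a VM to another managed host; VMM selects live migration
  # when the source and target hosts support it.
  $vm     = Get-VM -Name "lxvm001"
  $target = Get-VMHost -ComputerName "hyperv02"
  Move-VM -VM $vm -VMHost $target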
SCVMM 2008 R2 Features (continued)
- SAN-related enhancements: promote a non-HA VM to an HA VM by migrating it to a clustered host, and vice versa to "demote" the VM
- Network optimizations: if enabled, VMM configures the VM to use VMQ or TCP Chimney, if available on the host
- Rapid provisioning: avoids copying the VHD from the library
- VDI integration
Why Migration?
- Maintenance reasons
- Load balancing
- Green IT
- Fast migration
- SOAP interface
Live Migration
- No dropped network connections
- No perceived loss of service
- Cluster Shared Volumes facilitate live migration
- Leverages failover clustering (see the sketch after this list)
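A minimal sketch, again assuming the FailoverClusters module: live-migrate a clustered VM to another node. The VM group name and node name are placeholders.

  # Live-migrate the clustered VM "lxvm001" to the node "hyperv02"; clients
  # keep their network connections throughout the move.
  Import-Module FailoverClusters
  Move-ClusterVirtualMachineRole -Name "lxvm001" -Node "hyperv02"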
Quick vs. Live Migration
Quick Migration (Windows Server 2008 Hyper-V):
1. Save state: create the VM on the target; write the VM memory to shared storage
2. Move virtual machine: move storage connectivity from the source host to the target host via Ethernet
3. Restore state and run: read the VM memory from shared storage, restore it on the target, and run
(diagram: VM moving from Host 1 to Host 2)
Live Migration (WS08 R2 Hyper-V):
1. VM state/memory transfer: create the VM on the target; move memory pages from the source to the target via Ethernet
2. Final state transfer and virtual machine restore: pause the virtual machine; move storage connectivity from the source host to the target host via Ethernet
3. Un-pause and run
(diagram: VM moving from Host 1 to Host 2)
VMware vs. Hyper-V R2

Aspect               vSphere 4   Hyper-V R2
# CPU cores          64          64
Memory               1 TB        2 TB
# nodes in cluster   32          16
# virtual CPUs       8           4
# guests per host    256         192
Virtual memory       256 GB      64 GB
Hot-add disk         Yes         SCSI only
VM move              Live        Live
# of snapshots       32          50
HA via clustering    Yes         Yes
Market share         44%         23%

Source: Login, USENIX Magazine, Oct 2009
Hyper-V Linux VM
- RHEL supported as guest OS
- Open-source drivers (GPL) in kernel 2.6.32
- CPU benchmark: HEP-SPEC
(chart: HEP-SPEC results for Hyper-V 1.0 vs. Hyper-V 2.0 on an 8-core physical host and on 1-core 1 GB, 2-core 2 GB and 4-core 4 GB VMs)
Linux in VM
- Time synchronization
- Kernel parameters, e.g. notsc, divider=10
- Virtual serial console
- Admin privileges
- Linux templates
(pie chart: operating systems running in VMs, split between Linux 32-bit, Linux 64-bit, Windows 32-bit and Windows 64-bit, with shares of 57%, 31%, 6% and 6%)
Consolidation vs. batch

Aspect             Service consolidation   Batch virtualization
Scale (machines)   100                     1000
CPU usage          Little                  High
Hardware           Reliable                Cheap
Services           Critical                Non-critical
Migration          Live                    Not required
VM lifetime        Long                    Limited
ELFms Integration
(diagram: the ELFms fabric-management tools HMS, AIMS and Lemon call the CVI SOAP services through a Perl SOAP client; the SOAP services and application-management layer drive Microsoft Virtual Machine Manager via Windows PowerShell, alongside the VMM admin console, OS maintenance, backups and the LAN DB)
Experiment use case
- VOBox service: dedicated servers for experiments, 222 and growing rapidly!
CC Virtualization Future
- Consolidation of servers on the critical power supply, as critical power is very limited
- Development resources for IT-FIO
What's next?
- CERN fabric management integration: LEAF, Lemon, Quattor, SLS
- Integrate Hyper-V drivers with SLC
- Rapid provisioning
Virtual Desktop Infrastructure
(diagram: office PCs and thin clients connect through a connection broker to the Computer Centre, which hosts blade PCs, a Terminal Services cluster and Hyper-V servers with virtual desktops)
VDI Use Cases
- Propose virtual desktop self-service for experiment developers, as an alternative to dual-boot and to Terminal Services
- Evaluate a thin-client technology (e.g. the Jack PC thin client), which could be a solution for public computers and basic office users
Conclusion
- Latest editions of Hyper-V and SCVMM in production
- Better Linux support
- Live migration
- Integration with CERN IT services and fabric-management tools
- Visit our website, CERN Virtual Infrastructure: http://cern.ch/cvi