LDOM Discovery Day v2.2a


Post on 06-Apr-2018


TRANSCRIPT

  • 8/3/2019 LDOM Discovery DayV2.2a

    1/102

    LDom Discovery Day

    UK Systems Practice

    Sun Microsystems Ltd.


    Shift Happens...

Requires...
> More performance
> More IP-based services
> More pervasive security
> More network bandwidth
> More network threads

All of these requirements must be delivered within increasingly constrained space and power envelopes

    More...

    The Swelling Network Tide


    Agenda

Welcome & Introduction

Understanding the Technology
> Server Virtualisation
> Chip Multi-Threading
> Logical Domains
> Native & Branded Solaris Containers
> Sun xVM

Total Cost of Ownership Benefits

Coffee & Tea Break

Demonstration of LDoms in detail
- Creation / Deletion
- Resource Allocation / Re-allocation
- Rapid deployments (cloning)


    Agenda

Demonstration of LDoms in action
> Building and cloning a Glassfish application
> Load test and resource re-allocation using a content management application built on a SAMP stack
> Oracle log shipping between LDoms across systems
> Solaris 8 containers with a legacy Oracle database
> Solaris 8 to Solaris 10 binary compatibility
> Zeus Application Delivery Controller


    Understanding the Technology

    Server Virtualisation


Addressing Virtualisation Challenges

[Diagram: a spectrum of virtualisation approaches, trending from isolation (left) to flexibility (right):
> Hard Partitions — Dynamic System Domains (multiple OS)
> Virtual Machines — Sun xVM (LDoms & xVM Server), VirtualBox, VMware, Microsoft Hyper-V (multiple OS)
> OS Virtualisation — Solaris Containers, Solaris Containers for Linux Apps, Solaris 8 & 9 Containers (single OS)
> Resource Mgmt. — Solaris Resource Manager (single OS)
Example workloads spread across the approaches: app server, database, calendar server, web server, Sun Ray server, mail server, file server, identity server]


Systems Virtualisation Landscape

> SPARC — Dynamic System Domains: Enterprise Systems (SunFire, M-Series)
> SPARC — Logical Domains: CMT-based Systems (T-Series)
> Solaris — Containers: all SPARC & x86 based Systems
> VMware — VI3 (ESX): certified x86 based Systems
> Microsoft — Hyper-V: certified x86 based Systems
> Linux — XEN: certified x86 based Systems
> Solaris — xVM Server: certified x86 based Systems


    Understanding the Technology

    Chip Multi-Threading (CMT)


[Chart: relative performance (log scale, 1 to 10000), 1980-2005. CPU frequency doubles every 2 years; DRAM speeds double less than every 6 years. The widening gap is the Memory Bottleneck]


Single Threaded Performance

Single threading: over time, a thread alternates between compute (C) and memory latency (M), with up to 85% of cycles spent waiting for memory.

> US-IV+: 25% compute, 75% waiting
> Intel: 15% compute, 85% waiting

HURRY UP AND WAIT!


Comparing Modern CPU Design Techniques

Instruction Level Parallelism offers limited headroom: doubling clock speed from 1 GHz to 2 GHz only shrinks the compute (C) phases, not the dominant memory latency (M) phases.

Thread Level Parallelism provides greater performance efficiency: overlapping threads hide memory latency (TLP time saved).


Chip Multi-Threaded (CMT) Performance

UltraSPARC-T1 single core: four hardware threads interleave, so while one thread stalls on memory latency (M) the core computes (C) on another, keeping the pipeline busy.

T2000, T1000 and T6300 servers


n cores per processor x m strands per core = n x m threads per processor

T2000, T1000, T6300 have 1x US-T1 socket = 8 cores x 4 threads x 1 socket = 32 threads

    T5220, T5120, T6320 have 1x US-T2 socket = 8 cores x 8 threads x 1 socket = 64 threads

    T5240, T5140, T6340 have 2x US-T2 sockets = 8 cores x 8 threads x 2 sockets = 128 threads

    T5440 has 4x US-T2 sockets = 8 cores x 8 threads x 4 sockets = 256 threads
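The n x m arithmetic above can be checked directly; a quick shell sketch of the four configurations:

```shell
# threads per processor = cores per socket x threads per core x sockets
echo "T2000 (US-T1):  $((8 * 4 * 1)) threads"   # 32
echo "T5220 (US-T2):  $((8 * 8 * 1)) threads"   # 64
echo "T5240 (US-T2+): $((8 * 8 * 2)) threads"   # 128
echo "T5440 (US-T2+): $((8 * 8 * 4)) threads"   # 256
```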

CMP — chip multiprocessing
FG-MT — fine-grained multithreading
CMT — chip multithreading

    Chip Multi-Threading (CMT)


Industry's Most Highly Threaded Servers
Maximum Threading = Higher Throughput, Greater Energy & Space Efficiency

> T1000 1U / T2000 2U — UltraSPARC T1 based, 1 socket (8 cores), up to 32 threads
> T5120 1U / T5220 2U — UltraSPARC T2 based, 1 socket (8 cores), up to 64 threads
> T5140 1U / T5240 2U — UltraSPARC T2 Plus based, 2-socket servers (each 8 cores), up to 128 threads
> T5440 4U — UltraSPARC T2 Plus based, 4-socket server (each 8 cores), 256 threads
> T6300 / T6320 / T6340 — UltraSPARC T1, T2 & T2 Plus blades: 1 socket (8 cores), 1 socket (8 cores) & 2 sockets (8 cores each)


Introducing UltraSPARC T1

[Diagram: 8 cores (C1-C8) and a shared FPU connected through a crossbar (Xbar) to 4x L2$ banks and 4x DDR-2 SDRAM channels; J-BUS I/O at 3.2GB/sec (V880: 1.2GB/sec); each core has a Sys I/F buffer switch]

> SPARC V9 implementation means binary compatible
> Up to 8 cores with 4 threads per core to provide up to 32 simultaneous threads
> All cores connected through a 134GB/sec crossbar switch
> High-bandwidth 12-way associative 3MB Level-2 cache on chip
> 4x DDR2 channels (23GB/s total): 5,750MB/sec per controller, 1,437MB/sec per DIMM; 1.8 Volts (DDR = 2.5 Volts)
> US-T1 power: < 79W!
> ~300M transistors, 90nm chip
> T2000 = 325W, 2U; V880 = 3000W, 17U


Introducing UltraSPARC T2

[Diagram: 8 cores (C1-C8), each with its own FPU, connected through a crossbar (Xbar) to 8x L2$ banks and 4x dual FB-DIMM channels (32-64 DIMMs, 50GB/sec read / 42GB/sec write); PCI-Ex x8 @ 2.5GHz and an NIU (E-net+) with 2x 10GE Ethernet, 2.5Gb/sec bi-directional per lane]

> SPARC V9 implementation means binary compatible
> Up to 8 cores with 8 threads per core to provide up to 64 simultaneous threads
> 2x execution units per core
> Dedicated floating-point unit per core allows for non-blocking FP threads
> Integrated/enhanced crypto co-processor per core
> High-bandwidth (8-bank) 16-way associative 4MB Level-2 cache on chip
> 4x FB-DIMM channels (50GB/s read, 42GB/sec write), up to 64 DIMMs supported
> Power: < 80W!
> 65nm chip (T1 is 90nm); 16 integer execution units (T1 has 8); 16 instructions/clock cycle (T1 is 8); 8 integer pipelines (T1 has 6)
> E10K = 9600W (half the performance of US-T2)


    Checking Application Fit with cooltst

De-risks investment decisions
> Measures floating point content
> Measures number of active LWPs

    Download at http://cooltools.sunsource.net

    Sample Output:

    Peak thread utilization at 2007-10-01 20:41:15

Corresponding file name 1191296475
CPU utilization 24.5%
Command Xorg
PID/LWPID 627/1
Thread utilization 12%

    Advice

Floating Point GREEN
Observed floating point content was not excessive for an UltraSPARC T1 processor. Floating point content is not a limitation for UltraSPARC T2.

Parallelism GREEN
Observed parallelism was adequate for an UltraSPARC T1/T2 processor to be effectively utilized.


Benchmarks and Cool Tools

Application Benchmarks on T-Series servers
http://www.sun.com/servers/coolthreads/benchmarks/index.jsp (e.g. SAP, Lotus Notes, Java etc.)

ISV Endorsements for T-Series servers
http://www.sun.com/servers/coolthreads/testimonials/isv.jsp

Tuning resources for T-Series servers
http://www.sun.com/servers/coolthreads/tnb/applications.jsp (CoolThreads Tuning and Resources)

CoolTools Suite for T-Series servers, available at http://www.opensparc.net/cooltools/index.html

Cooltst at http://cooltools.sunsource.net/cooltst/ to determine whether a running workload on a UNIX server is suitable for the T-Series servers

For T-Series application development there is:
> Sun Studio 12 (optimising C, C++ and Fortran compilers with NetBeans IDE and other performance tools)
> Sun Application Porting Assistant (a static source code analysis and code scanning tool that identifies incompatible APIs between Linux and Solaris)
> GCC4SS (C, C++ compiler for apps that are normally compiled with gcc)
> BIT (Binary Improvement Tool works directly with SPARC binaries to instrument, optimize, and analyze them for performance or code coverage)
> SPOT (Simple Performance Optimisation Tool produces a report on the performance of an application; the spot report contains detailed information about various common conditions that impact performance)
> Faban (benchmark framework: consolidation of benchmark development and management knowledge and experience to aid the development and running of benchmarks)
> Solaris Grid Compiler


Cool Tools

For T-Series tuning and debugging there is:
> ATS (Automatic Tuning and Troubleshooting System, a binary reoptimization and recompilation tool that can be used for tuning and troubleshooting applications)
> Corestat (for the online monitoring of core utilisation)
> Discover (Sun Memory Error Discovery Tool detects programming errors related to the allocation and use of program memory at runtime)
> Thread Analyzer (analyses the execution of a multi-threaded program and checks for multi-threaded programming errors such as "data races" and "deadlocks")

For T-Series deployment there is:
> CoolTuner (automatically tunes T-Series servers, applying patches and setting system parameters to best-practice recommendations, with self auto-update feature)
> Cool Stack (optimised open source software stack for apps such as Apache, MySQL, Perl, PHP, Squid and Tomcat)
> Consolidation Tool (simplifies the task of consolidating multiple applications on the T-Series servers using Solaris Containers)

For T-Series architecture exploration there is:
> SHADE (a fast SPARC instruction set simulator that is used to perform a variety of analysis functions on SPARC executables)
> RST Trace Tool (a trace format for SPARC instruction-level traces)

Sun Studio 12 Compilers and Tools: Sun Studio 12 software is the premier development environment for the Solaris operating system.


    Understanding the Technology

    Logical Domains (LDom)


    Consolidation

    Logical Domain Benefits

Run multiple virtual machines simultaneously on a single platform
> Secure consolidation of different operating environments
> Increase utilisation of CoolThreads architecture

Domains can communicate & serve each other
> Virtual data center in a box

Minimise/eliminate need for OS upgrades with new platforms
> Reduce customer qualification costs, and protect software investments


    Concepts of Logical Domains (LDoms)

SPARC / CMT based virtualisation technology

Each Logical Domain:
> Appears as a fully independent server
> Has a unique OS install and configuration in all ways
> Configurable CPU, disk, memory and I/O resources

Up to:
> 32 LDoms on T2000 (UltraSPARC T1)
> 64 LDoms on T5220 (UltraSPARC T2)
> 128 LDoms on T5240 (UltraSPARC T2 Plus)
> 128 LDoms on T5440 (UltraSPARC T2 Plus)

Dynamic resource allocation

Isolation via hardware/firmware

[Diagram: UltraSPARC T2 — full crossbar connecting cores C0-C7 (each with FPU) to L2$ banks and FB-DIMM channels; PCI-Ex x8 @ 2.5GHz and NIU (E-net+) with 2x 10GE Ethernet]


    Hypervisor Support

Hypervisor firmware is responsible for:
> Maintaining separation (e.g. visible hardware parts) between domains
> Providing Logical Domain Channels (LDCs) so domains can communicate with each other
> The mechanism by which domains can be virtually networked with each other, or provide services to each other
> MMU maps RAM into domains' address spaces, and a protocol lets hypervisor and domains queue and dequeue messages

Uses extensions built into a sun4v CPU
> Is an integral component of the shipping systems
> Not installed as part of a software distribution


    Control Domain

Configuration platform for managing server and domains
> Allows monitoring and re-configuration of domains
> Interfaces with the hypervisor to set up access rule sets
> Administers the constraints engine and resource mapping

Runs the LDom Manager software
> One Manager per host Hypervisor
> Controls Hypervisor and all its Logical Domains
> Exposes control interfaces: ldm command and ldmd daemon
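As a sketch of the ldm workflow from the control domain — the domain name ldg1, device names and resource sizes below are illustrative, not from the slides:

```shell
# Create a guest domain, assign CPU, memory and virtual I/O,
# then bind it to physical resources and start it.
ldm add-domain ldg1
ldm add-vcpu 8 ldg1
ldm add-memory 4G ldg1
ldm add-vnet vnet0 primary-vsw0 ldg1
ldm add-vdisk vdisk0 vol0@primary-vds0 ldg1
ldm bind-domain ldg1
ldm start-domain ldg1
ldm list            # show domain states and bound resources
```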


    One Manager per host Hypervisor> Controls Hypervisor and all its Logical Domains

    Exposes control interfaces> CLI

    > WS-MAN> XML

    Maps Logical Domains to physical resources> Constraint engine

    > Heuristic binding of Logical Domains to resources> Assists with performance optimisation

    > Assists in event of failures / blacklisting

    LDom Manager


LDom Evolution

1.1 (December 2008)
> Virtual I/O Dynamic Reconfiguration
> no need to reboot when adding/removing storage
> no need to reboot when adding/removing vnets
> VLAN support
> use VLANs with guest domains; VLAN tagging supported
> Virtual Disk Failover
> a traffic manager in guest LDoms
> Single Disk Slice enhancements
> NIU hybrid I/O implemented
> Fault Management Architecture (FMA) I/O improves I/O reporting
> Enhanced XML v3 interface for monitoring and controlling LDoms
> Power management feature in T2 and T2+ chips saves power when all of the threads on a core are idle

Firmware required for 1.1:
> UltraSPARC T2 Plus: 7.2.x
> UltraSPARC T2: 7.2.x
> UltraSPARC T1: 6.7.x

Required software patches:
> Solaris 10 5/08: 137111-09
> Solaris 10 8/07: 137111-09
> Solaris 10 11/06: 137111-09


    I/O Domain

Provides virtual device services to other domains
> Networking — virtual switches
> Storage — virtual disk servers
> Serial — virtual console concentrator

Multiple I/O domains can exist with shared or sole access to system facilities

Owns the physical I/O and provides I/O facilities to itself and other guest domains
> Allows I/O load separation and redundancy within domains deployed on a platform


    I/O Virtualisation

Paravirtualised model
> Frontend device / backend service architecture
> Bi-directional, point-to-point channel
> Separate transmit & receive queues




Redundant Network

[Diagram: a Control/I/O Domain with physical NICs e1000g0-e1000g11 beneath the Hypervisor, serving four guests:
> Guest LDom #1 — IPMP in the Control Domain: IPMP0 group over physical NICs, VSW0 to VNET0
> Guest LDom #2 — IPMP in the Guest LDom: VSW1/VSW2 to VNET1/VNET2, grouped as IPMP1
> Guest LDom #3 — Link Aggregation: DLA over physical NICs, VSW3 to VNET3
> Guest LDom #4 — Redundant IPMP: VSW4/VSW5 to VNET4/VNET5, grouped as IPMP2]


    Redundant Network

IPMP across 2 virtual switches

Create a second virtual switch in the control domain:
> ldm add-vswitch net-dev=e1000g1 primary-vsw1 primary

Add a second network interface in the guest domain:
> ldm add-vnet vnet2 primary-vsw1

In the guest domain make an IPMP group of the two interfaces:
> ifconfig vnet0 group ipmp1
> ifconfig vnet1 group ipmp1

Alternatively, create an IPMP group in the control domain and connect that to the virtual switch:
> ifconfig e1000g1 group ipmp2
> ifconfig e1000g2 group ipmp2
> ldm add-vswitch net-dev=ipmp2 primary-vsw2 primary


    Redundant Network

Data link administration (in Solaris 10) also provides link aggregation:

> # dladm create-aggr -d e1000g1 -d e1000g3 -d e1000g4 agg_link1
> # dladm show-aggr
> key: agg_link1 (0x0001) policy: L4 address: 0:14:4f:97:87:69 (auto)
> device address speed duplex link state
> e1000g1 0:14:4f:97:87:69 1000 Mbps full up attached
> e1000g3 0:14:4f:97:87:6b 1000 Mbps full up attached
> e1000g4 0:15:17:3a:92:18 1000 Mbps full up attached

> # ldm add-vsw net-dev=agg_link1 primary-vsw0 primary
> # ifconfig primary-vsw0 plumb 10.1.1.110
> # ifconfig e1000g0 unplumb
> # dladm add-aggr -d e1000g0 agg_link1
> # mv /etc/hostname.e1000g0 /etc/hostname.vsw0


    Virtual Subsystems

Virtual devices abstract physical devices

Inter-domain I/O via Logical Domain Channels (LDCs) configured in the Control domain through the hypervisor

Virtual devices are:
> CPUs
> Memory
> Crypto cores
> Network switches
> NICs
> Disk servers
> Disks
> Consoles
> A virtual terminal server (vntsd)


    Resource Reconfiguration

Ability to grow or shrink compute capacity of an LDom on demand

Simply add / remove:
> Cores / threads — dynamic
> Memory — delayed reconfiguration
> I/O — dynamic

Improve utilisation by balancing resources between LDoms
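Rebalancing with the ldm CLI might look like this (domain names ldg1/ldg2 and sizes are illustrative):

```shell
# Thread changes are dynamic; memory changes are a delayed
# reconfiguration and take effect at the guest's next reboot.
ldm remove-vcpu 4 ldg1     # shrink ldg1 by 4 threads
ldm add-vcpu 4 ldg2        # grow ldg2 by the same 4 threads
ldm set-memory 2G ldg1     # delayed reconfiguration
```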


    Warm Migration 1 - Initialisation

[Diagram: System A runs a Guest Domain (vdsk, vnet) over its Control Domain (vsw, vds); System B has only a Control Domain (vsw, vds). Both ldmd daemons are connected over the network and share a virtual disk backend (NFS file or shared disk)]

    ldmd on System A checks with ldmd on System B if migration is possible


    Warm Migration 2 New Guest Creation

[Diagram: as before, but System B now also has a bound Guest Domain (vdsk, vnet) with memory and 1 CPU]

    ldmd on System B creates and binds a similar domain with 1 CPU


    Warm Migration 3 Shrink Source Guest

[Diagram: the source guest on System A is reduced to 1 CPU; the target guest on System B remains bound with memory and 1 CPU]

    ldmd on system A removes all but one CPUs on the source guest


    Warm Migration 4 State Transfer

[Diagram: memory and CPU state are transferred over the network from the suspended source guest on System A to the target guest on System B]

    ldmd on system A suspends the last CPU and transfers state


    Warm Migration 5 Target Guest Resume

[Diagram: the target guest on System B resumes with 1 CPU; the source guest on System A is suspended]

    ldmd on System B resumes the target guest with one cpu


    Warm Migration 6 Completion & Cleanup

[Diagram: the source guest on System A has been destroyed; the guest now runs on System B with its full memory and CPU allocation]

ldmd on System B adds the other CPUs; ldmd on System A destroys the source guest


Cold and Live Migration

Cold and Live Migration can migrate between different system and CPU types

Warm Migration requires the same system and CPU type

Cold Migration operation is fast

Live Migration requires OS support (aka cooperative guest support)

Time to migrate a domain is largely determined by:
> Type of migration being performed
> Network speed
> Size of guest image (Warm Migration)


    Virtual I/O Dynamic Reconfiguration

Add/remove virtual I/O services and devices without rebooting
> vds, vsw, vdisk, vnet, vcc

    No CLI changes but effect is immediate

    Examples:

    # ldm add-vdisk vdiskN diskN@primary-vds0 ldg1

    # ldm add-vnet vnetN primary-vsw0 ldg1

vdiskN and vnetN are immediately available in domain ldg1

A device cannot be removed if it is in use
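The reverse operations follow the same pattern (names as in the examples above); removal fails while the guest still has the device in use:

```shell
ldm remove-vdisk vdiskN ldg1   # fails if vdiskN is in use in ldg1
ldm remove-vnet vnetN ldg1
```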


    Network Hybrid I/O

Network virtualised I/O path:
> Guest domain - service domain - physical NIC

Network hybrid I/O path:
> Guest domain - physical NIC (except broadcast and multicast)

Better performance and scalability
> No overhead of the service domain virtual switch

Hardware requirements:
> UltraSPARC T2 based system
> 10Gb Ethernet XAUI adapter (nxge interface)
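Hybrid mode is requested per vnet; a sketch, with domain and device names illustrative:

```shell
# mode=hybrid asks for dedicated NIU DMA channels for this vnet;
# the virtual switch must be backed by the nxge (XAUI) interface.
ldm add-vsw net-dev=nxge0 primary-vsw0 primary
ldm add-vnet mode=hybrid vnet0 primary-vsw0 ldg1
```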


    Network Hybrid I/O 1

    A non-hybrid vnet sends/receives all packets through the service domain

[Diagram: the Service Domain's vsw attaches to nxge0 on the XAUI adapter; a non-hybrid vnet in one Guest Domain reaches the physical network via an LDC through the Service Domain, with a hybrid vnet in another Guest Domain alongside. All domains sit on the Hypervisor]


    Network Hybrid I/O 3

    A hybrid vnet sends/receives unicast packets directly to/from the NIU card using dedicatedDMA channels

[Diagram: as before, but the hybrid vnet bypasses the Service Domain, using dedicated DMA channels straight to the NIU on the XAUI adapter; the non-hybrid vnet still goes through the Service Domain's vsw]


VLAN (802.1q) Support

Adds VLAN support to virtual network I/O
> Ethernet packet switching based on VLAN IDs

Support added to vsw and vnet
> vnet and vsw can now service multiple subnets

Features similar to a physical switch with VLAN support: untagged and tagged modes; VLAN IDs are assigned with the ldm CLI

Untagged mode:
> Associate a port-vlan-id (PVID) with a vnet/vsw interface

Tagged mode:
> Associate VLAN ID(s) with a vnet/vsw interface
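Assignment with the ldm CLI might look like this (IDs, interface and domain names illustrative):

```shell
# Untagged: traffic for PVID 20 reaches the guest untagged.
ldm add-vnet pvid=20 vnet0 primary-vsw0 ldg1
# Tagged: the guest sees 802.1q tags for VLANs 21 and 22.
ldm add-vnet vid=21,22 vnet1 primary-vsw0 ldg1
```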


    Virtual Disk Failover

    Disk multipathing between different service domains

CLI:

# ldm add-vdsdev mpgroup=foo /path/to/disk/backend/from/primary/domain disk@primary-vds0

# ldm add-vdsdev mpgroup=foo /path/to/disk/backend/from/alternate/domain disk@alternate-vds0

# ldm add-vdisk disk disk@primary-vds0 guest


    Virtual Disk Failover 1

[Diagram: the Guest Domain's vdisk (vdc) reaches the shared virtual disk backend through two virtual disk servers, both exporting in mpgroup=foo: primary-vds0 in Service Domain 1 (primary) over LDC 1 (active channel), and alternate-vds0 in Service Domain 2 (alternate) over LDC 2 (backup channel)]


    Virtual Disk Failover 2

[Diagram: as before, but LDC 1 is down; the guest's vdc now uses LDC 2 to alternate-vds0 as the active channel]

Service Domain 1 / LDC 1 down: the guest switches to the other LDC channel


    Other Features

Single-slice disk enhancements
> Ability to install Solaris on a single-slice disk
> Single-slice disks are now visible with format(1M)

Power management
> Ability to power off unused CPU cores

LDC VIO shared memory for DRing
> Improved virtual network I/O performance
> Requires a Solaris patch

iostat(1M) support in guest domains


    Understanding the Technology

Native & Branded Solaris Containers


Solaris Containers Summary

Solaris 10 technology providing OS virtualisation

Support multiple, isolated application environments in one OS instance

Software-based solution, therefore:
> No application changes or recompilation
> No additional hardware requirements
> No licensing or support fees

A combination of:
> Zones
> Resource Management

Branded extension to zones technology
> Enables Solaris Containers to assume different OS personalities
> Solaris 8, 9 & Linux Containers
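Creating a native zone takes only a few commands; a minimal sketch (zone name and path illustrative):

```shell
# Define, install and boot a native Solaris 10 zone.
zonecfg -z web1 'create; set zonepath=/zone/web1; set autoboot=true'
zoneadm -z web1 install
zoneadm -z web1 boot
zlogin -C web1      # attach to the zone console
```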


Containers Block Diagram

[Diagram: a global zone (v880-room2-rack5-1; 129.76.1.12) hosting four non-global zones, each with its own zone root, zcons console, zoneadmd, /usr, and virtual interfaces carved from the physical NICs (hme0, ce0, ce1):
> dns1 zone (dnsserver1) — core services (inetd), login services (SSH sshd), network services (named); zone root /zone/dns1
> Sol 8 zone (Solaris 8) — Sol 8 core services (NIS, xinetd, autofs), login services (SSH sshd, telnetd), Sol 8 user apps (OpenSSH, acroread, MATLAB, yum, pandora); zone root /zone/sol8
> web1 zone (foo.org) — network services (Apache, Tomcat), core services (inetd), login services (SSH sshd); zone root /zone/web1
> web2 zone (bar.net) — network services (IWS), core services (inetd); zone root /zone/web2
The global zone provides core services (inetd, rpcbind, sshd, ...), platform administration (syseventd, devfsadm, ifconfig, metadb, ...), zone management (zonecfg(1M), zoneadm(1M), zlogin(1), ...) and remote admin/monitoring (SNMP, SunMC, WBEM). CPU resources are split into pool1 (4 CPU, FSS with shares 10/30/60) and pool2 (4 CPU)]


Containers Today

Fair-Share Scheduler
> Guarantees a minimum portion of CPU to a zone
> Conflict-based enforcement

Dedicated CPUs
> Specifies a quantity of available CPU to a zone
> Requires the use of temporary resource pools
> Configured using the zonecfg command

CPU Caps
> Hard limit on allocated CPU to a zone

RAM cap (zone-aware rcapd)
> Enforces a maximum amount of physical memory
> Configured and enforced in the global zone

Swap cap
> Specifies a maximum amount of swap space available to a zone, not a region of swap disk
> Configured and enforced by the global zone

Locked-memory cap
> Limits amount of memory that is specifically marked 'not eligible for paging'

    Multiple Stacks/IP Instances


In an exclusive-IP zone:
> All IP packets enter or leave through the zone's NIC(s)
> DHCPv4 and IPv6 stateless address auto-configuration work
> Routing can be configured for that zone
> IP Multipathing can be configured if the zone has >1 NICs
> ndd can be used to set TCP/UDP/SCTP/IP/ARP parameters

Limitations:
> Reliance on GLDv3 means no initial support for 'legacy' NICs (ce, hme, qfe, eri)

Providing Even More Isolation

    Solaris 8 or 9 Containers


Physical to Virtual (P2V)
Using Containers to migrate to Solaris 10

[Diagram: an existing Solaris 8 or 9 server (OS, database and application) is migrated into a branded Solaris 8 or 9 Container on a Solaris 10 system (M-series or T2000/T5120/T5220); the Solaris 10 global zone keeps kernel features such as ZFS, DTrace and FMA]
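A P2V migration with the solaris8 brand boils down to three commands. A sketch, assuming the Solaris 8 Containers product is installed and using hypothetical host, zone and archive names:

```shell
# 1. On the Solaris 8 source system, capture a flash archive:
flarcreate -S -n s8-box /net/nfshost/flars/s8-box.flar

# 2. On the Solaris 10 host, create a solaris8-branded zone:
zonecfg -z s8zone 'create -t SUNWsolaris8; set zonepath=/zones/s8zone'

# 3. Install the zone from the archive (-u runs sys-unconfig on the image)
#    and boot it:
zoneadm -z s8zone install -u -a /net/nfshost/flars/s8-box.flar
zoneadm -z s8zone boot
```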

    High Level Functional Comparison


Solaris Containers
> Available everywhere Solaris runs: UltraSPARC II and up, x86/x64
> Far smaller CPU, disk and memory footprint than full OS images
> Extremely efficient, suitable for large-N deployments
> SRM- or pool-managed sub-CPU resource management
> Some restrictions (such as running an NFS server in a zone)
> Single kernel, with implications for patching

Logical Domains
> Available on CMT SPARC only
> As many domains as there are CPU threads
> For workloads applicable to T-series servers
> Flexible resource allocation
> Each domain runs an independent OS: can be at different levels and independently patched
> Efficient implementation, with less overhead than other virtual machines, but still an entire OS instance


    Understanding the Technology

    xVM Infrastructure


Open Virtualisation for Desktop to Datacentre

> Open developer Virtualisation platform
> Manage heterogeneous datacentres
> Enterprise-class hypervisor
> Only VDI with choice: Windows, OpenSolaris and Linux, delivered securely

Sun xVM Infrastructure Overview


Sun xVM Infrastructure
A complete solution for virtualising and managing your datacentre

xVM Ops Center manages physical and virtual platforms:
> Inventory*, Discovery*, H/W Monitoring*
> Patch lifecycle, Firmware Mgr, O/S Provision, App Provision
> xVM Server Mgr

For x86: xVM Server on Sun / 3rd-party x86 platforms, running Solaris, Windows, Linux and VDI guests
For SPARC: CMT-based SPARC platforms, running Solaris and Linux guests (Linux on CMT is not directly supported by Sun)

* Not automated on all platforms today; manual intervention may be needed
Will be released as a Software Appliance

SPARC & x86 Systems


Manage Heterogeneous Datacenters

    Sun xVM Roadmap


Timeline: 4QCY07 → 1QCY08 → 2QCY08 → 3QCY08

xVM Server
> xVM Early Access: hypervisor-optimised distro, management interfaces
> First release of xVM Server available in OpenSolaris
> xVM Server FCS

xVM Ops Center
> First Alpha customer installs
> 1.0 FCS
> Additional platforms, performance, install/upgrade automation
> Active xVM Server Mgmt for x86


Total Cost of Ownership (TCO) Benefits


Production, Dev, Test
Virtualised Approach with Multiple LDOMs


System 1
> Production LDOM
> Failover LDOM

System 2
> Business Continuity LDOM
> User Acceptance Test LDOM
> Multiple Development LDOMs
> Multiple Test LDOMs

If System 1 fails, then the Test & Dev LDOMs contract to allow the UAT LDOM to become a Business Continuity copy of the Production LDOM.
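The contract/expand step is a pair of ldm calls from the control domain. A sketch with hypothetical domain names (dev1, uat):

```shell
# Contract a dev domain and grow UAT with the freed CPU threads:
ldm set-vcpu 4 dev1     # vCPU changes are dynamic on a running domain
ldm set-vcpu 24 uat
# Memory reallocation (ldm set-memory) requires a domain reboot on
# early LDoms releases.
```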

    Virtualised Approach with Multiple LDOMs

    TCO Savings


[Diagram: OLD — a Sun Rack 900-38 holding 3 x V490 and 5 x Netra 240; NEW — a Sun Rack 900-38 holding 2 x T5220]

OLD: 3 x V490 (4 x 1.8GHz USIV+, 32GB memory), 5 x V245 (2 x 1.5GHz USIIIi, 2GB memory)
NEW: 2 x T5220 (8-core 1.4GHz, 64GB memory)

                  OLD           NEW           Saving
List Price        170,450       43,800        x4
Space             25 RU         4 RU          x6
Power             5,355 Watts   1,110 Watts   x5
3 Year Support    58,356        19,984        x3
Cores             34            16            x2
M Values          540,500       576,000       =

Try out today's latest technology before you cut a P.O.
The cost of infrastructure software is as low as it goes.


We Make it Easy
> Download Software: Free!
> Go to sun.com, Join a Community, Try and Buy
> Sun Developer Network, Sun Advisory Panel, Inner Circle, Executive Boardroom
> Promotions, Customer Stories, Sun Store, Jonathan's Blog
> 60 Days Risk-Free


    Coffee & Tea Break


Demonstration of LDoms in detail

    Logical Domain Setup


Control Domain setup
> Services required for the control domain

Guest Domain setup

Demo
> Scripts
> Guest Domain: creation, deletion, resource allocation, rapid deployment, migration
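The control-domain services and a basic guest creation can be sketched as follows, using hypothetical names and LDoms 1.0-era ldm syntax:

```shell
# Control domain services: virtual disk service, console concentrator,
# virtual switch; then shrink the control domain and save the config:
ldm add-vds primary-vds0 primary
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
ldm add-vsw net-dev=e1000g0 primary-vsw0 primary
ldm set-vcpu 4 primary
ldm add-config initial        # persist to the service processor

# Create and start a guest domain:
ldm add-domain ldom1
ldm set-vcpu 8 ldom1
ldm set-memory 8g ldom1
ldm add-vnet vnet0 primary-vsw0 ldom1
ldm add-vdsdev /ldoms/ldom1.img vol1@primary-vds0   # file-backed disk
ldm add-vdisk vdisk0 vol1@primary-vds0 ldom1
ldm bind ldom1
ldm start ldom1
```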


    ZFS overview


128-bit file system

Simple to use
> Two commands only
> zpool to create and manage pools of storage (zpool create mypool c1t0d0)
> zfs to create and manage file systems (zfs create mypool/terry)

Self-healing capabilities

Snapshot and cloning capability
> Quick and cheap
> A snapshot is a read-only copy of the original file system and only holds the deltas
> A clone is a writeable copy (but still references the original)
> Allows an easy way to create and clone both LDOMs and zones
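The snapshot/clone workflow behind rapid guest deployment can be sketched in four commands (hypothetical pool and dataset names):

```shell
zpool create ldompool c1t0d0
zfs create ldompool/golden                  # holds the master boot image
zfs snapshot ldompool/golden@install        # cheap, read-only point-in-time copy
zfs clone ldompool/golden@install ldompool/ldom2   # writeable clone for a new guest
# The clone shares unmodified blocks with the snapshot, so each new
# guest image costs almost no extra space or time to create.
```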


Workloads on LDom Platform
Real World Applications


1. Enterprise Middleware Solutions
> Glassfish J2EE Reference Application Server
> Built as if using a cloned image

2. Suitability for Web 2.0 and the SAMP Stack
> Drupal Content Management System built on Solaris, Apache, MySQL and PHP
> Allocate additional (vCPU) resources on the fly

3. RAS with an Enterprise Database
> Oracle 10.2.0.3 RDBMS and Dataguard to a standby DB

4. Migration of legacy apps to Solaris 10
> with Solaris 8 Containers

5. Solaris Binary Compatibility
> Solaris 10 Container running the same app as a Solaris 8 Container

    Virtualisation Implementation


    Virtualising Devices


2. Web 2.0/SAMP Stack
Testing with Drupal


    2. Web 2.0/SAMP Highlights


Highlights
> Demonstrates a leading CMS, Drupal, built on SAMP: Solaris, Apache, MySQL and PHP/Perl/Python (Cool Stack, optimised for CMT)
> Similar to the LAMP stack (Linux, Apache, MySQL and PHP/Perl/Python); free and open!
> LDom running under stress, as in the real world
> Additional resources added dynamically
> A complex stack can be put together in minutes using Cool Stack
> Deployed to great effect on CMT servers at very little expense
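The "resources added dynamically" step is a single ldm call from the control domain. A sketch with a hypothetical guest name:

```shell
# Add 8 CPU threads to a running guest while it is under load:
ldm add-vcpu 8 samp-ldom
# Inside the guest, psrinfo shows the new CPUs online without a reboot.
```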

    3. Oracle Testing


    3. Oracle Highlights


Highlights
> Oracle 10.2.0.3 RDBMS runs just as it does on a native SPARC/Solaris system
> DataGuard: logs shipped in the normal manner
> The tools work in the normal manner: Oracle Enterprise Manager
> A test harness (iGenOLTP) that simulates real workload
> Demonstrates that the following work with LDoms: Sun Java System Directory Server, Tomcat servlet engine

NB: iGenOLTP is built on SLAMD, which is constructed from an open LDAP server such as Sun Java System Directory Server and a J2EE servlet engine such as Tomcat - see http://www.slamd.org

4. & 5. Virtualising Legacy Apps
Utilising Containers


    4. & 5. Container Demo Highlights


Highlights
> Legacy App: Oracle 8.0.5, de-supported by Oracle for a number of years
> Two options:
> Use binary compatibility and deploy on Solaris 10
> Use Solaris 8 Containers


Zeus
Application Delivery Controller

    Who are Zeus Technology



Zeus Technology develops Internet Application Traffic Management software, aka Application Delivery Controllers (ADC).

> Corporate HQ in Cambridge, UK
> US Headquarters in Mountain View, CA
> Over 10 years' experience in network and web application delivery
> Over 1,300 deployments of ZXTM (Zeus Extensible Traffic Manager)
> Many global-brand customers

Steve Webb
VP Strategic Accounts
Phone: +44 1223 525000
Cell: +44 7973 122784
[email protected]
www.zeus.com

Why do you need an Application Delivery Controller?


To make your networked and web-enabled applications faster, more reliable, more secure and easier to manage.

MORE RELIABLE: Load Balancing, Fault Tolerance, Monitoring, Bandwidth Shaping, Request Rate Shaping
FASTER: SSL and XML offload, Content Compression, HTTP Caching, HTTP Multiplexing, TCP Offload
MORE SECURE: Server Isolation, Traffic Filtering, Traffic Scrubbing, DoS Protection, Application Protection
EASIER TO MANAGE: Deployment of apps, Visualisation via powerful GUI, Reporting and alerting, Control API

    The Logical View


Web servers: Apache, IIS, Sun, Zeus, lighttpd etc.

App servers: Tomcat, JBoss, JES, Glassfish, WebLogic, WebSphere, OAS, .NET, PHP, Ruby on Rails

Database servers: MySQL, SQL Server, Oracle etc.

    Solaris LDOM Advantage 1


[Diagram: two CMT servers, each divided into four LDoms (LDOM 1-4) running ZXTM, Web Svr, App Svr and DB Svr respectively]

Load-balanced, fault-tolerant SAMP cluster in just 2U of rack space (saving space, power and heat)

    Solaris LDOM Advantage 2

Traffic partitioned through 4 * ADC clusters (with each cluster running active-active) on just 2 * CMT servers


[Diagram: two CMT servers, each divided into four LDoms (LDOM 1-4), with each LDom hosting its own ZXTM instance]

    Summary

    Software ADCs can be deployed like any other application:



> Load-balanced SAMP stack, all on a single Sun CMT server. Scale up and out as your business grows.
> Multiple ADC clusters on just 2 * Sun CMT servers, enabling application traffic partitioning for heightened security and to protect against any possible application-induced crashes.
> Making network and web applications run faster, more reliably and more securely, and making them easier to manage, all with maximum flexibility of deployment.

    Try ZXTM yourself


ZXTM Desktop Evaluator runs as a virtual appliance on any Windows or Linux laptop/desktop. Request a managed evaluation when you are ready.

Useful references

Presentation and LDom demo scripts

    > http://uk.sun.com/discoveryday


    Sun Logical Domains Wiki> http://wikis.sun.com/display/SolarisLogicalDomains/Home

    Sun.com logical domains page (link to S/W and docs )> http://www.sun.com/servers/coolthreads/ldoms/index.xml

    Sun Virtualisation Training Courses

    > http://uk.sun.com/training/catalog/operating_systems.virtualization.xml

OpenSolaris LDoms site
> http://www.opensolaris.org/os/community/ldoms

Blueprint: Beginners Guide to LDoms 1.0
> http://wikis.sun.com/display/BluePrints/Beginners+Guide+to+LDoms+1.0

Blueprint: Using Logical Domains and CoolThreads Technology
> http://wikis.sun.com/display/BluePrints/Using+Logical+Domains+and+CoolThreads+Technology


    LDom Discovery Day

UK Systems Practice
Sun Microsystems Ltd.