aix



A good backup of the operating system and all user data on the system is a MUST before initiating an OS update.

1. Check the current status of the file sets:
   #lppchk -v      (it gives no output if all file sets are in a consistent state; if it reports a missing requisite, fulfill it before moving ahead)
2. Place all the file sets corresponding to the ML into a directory (ex: /tmp/ibm/MLXX).
3. cd to that directory (ex: cd /tmp/ibm/MLXX).
4. Run the following command to prepare the table of contents:
   #inutoc .
5. Run the following command to install the bos.rte.install file set:
   #installp -acXgd . bos.rte.install
6. Run smitty update_all
   - for the path, enter .
   - for PREVIEW, say yes
   - accept the license agreements
7. The preview run will take some time and should end with "OK" at the top; if it reports an error (failed status), rectify the cause it points to.
8. Go back to the previous menu (press F3 or Esc+3), set PREVIEW to no, and run the update. It will run for some time and should end with "OK" at the top; if it reports an error (failed status), rectify the cause it points to.
9. Check the current OS level:
   #oslevel -r      (it should show the updated ML level; if updated to MLXX on AIX 5.3, it shows 5300-XX)
10. Repeat steps 1-8 for service packs, step by step (replace ML with SP at each step).
11. Run the following command to check the current OS level including the service pack:
    #oslevel -s      (it should show the updated SP level; if updated to SPYY: 5300-XX-YY)
12. Run the following command once again at the end:
    #lppchk -v      (it gives no output if all file sets are in a consistent state)
13. You are done with the ML update.
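As a non-interactive alternative to the smitty update_all step, newer AIX levels also ship the install_all_updates command; the line below is only a sketch (verify the flags against the man page on your AIX level), with /tmp/ibm/MLXX standing in for the directory that holds the ML file sets:

#install_all_updates -Y -d /tmp/ibm/MLXX      (-d names the update directory, -Y accepts the license agreements; add -p first for a preview-only run)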


1. Check that you have at least 8 MB of free space in /tmp:
   #df -g /tmp
2. Run the snap command:
   #snap -gbc      (this creates a file named snap.pax.Z in the /tmp/ibmsupt directory)
3. FTP this file in binary mode to your PC:
   ->ftp server_ip
   ->cd /tmp/ibmsupt
   ->bin
   ->prompt
   ->mget snap.pax.Z
   ->bye
   (this brings the file into the current directory on your PC)
4. Mail this file to IBM support.
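Before starting the FTP transfer, it helps to confirm that the archive was actually created and to note its size (assuming the default /tmp/ibmsupt output directory):

#ls -l /tmp/ibmsupt/snap.pax.Z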

Very good stuff for you all.

Setting partition-availability priorities for your managed system

To avoid shutting down mission-critical workloads when your server firmware deconfigures a failing processor, you

can use the Hardware Management Console (HMC) to set partition-availability priorities for the logical partitions on

your managed system. A logical partition with a failing processor can acquire a replacement processor from logical

partitions with a lower partition-availability priority. The acquisition of a replacement processor allows the logical

partition with the higher partition-availability priority to continue running after a processor failure.

To set partition-availability priorities for your managed system using the HMC, follow these steps:

1.    In the navigation pane, open Systems Management and click Servers.

2.    In the work pane, select the managed system whose partition-availability priorities you want to set, click

the Tasks button, and select Configuration > Partition Availability Priority.

3.    Select the logical partitions whose partition-availability priority you want to set, set Availability priority to

the partition-availability priority value that you want to use for all selected logical partitions, and click OK.

You can enter any value from 0 to 255 into Availability priority, or you can select one of the preset choices.

All selected logical partitions are set to the same partition-availability priority value.

4.    Repeat this procedure for other logical partitions to set the partition-availability priority for those logical

partitions.
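If you prefer the HMC command line, the same setting can be viewed and changed roughly as follows; this is a sketch with placeholder names (sys1, lpar1), and the lpar_avail_priority attribute should be checked against your HMC level:

lssyscfg -r lpar -m sys1 -F name,lpar_avail_priority      (list the current partition-availability priorities on managed system sys1)
chsyscfg -r lpar -m sys1 -i "name=lpar1,lpar_avail_priority=191"      (set partition lpar1 to priority 191)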

 Activating a system profile

You can activate many logical partitions at a time by using the Hardware Management Console (HMC) to activate a

system profile. A system profile is an ordered list of partition profiles. When you activate a system profile, the

managed system attempts to activate the partition profiles in the system profile in the order in which the partition

profiles are listed.


Restriction: You cannot activate a system profile that contains partition profiles that specify shared memory.

To activate a system profile using the HMC, follow these steps:

1.    In the navigation pane, open Systems Management and click Servers.

2.    In the work pane, select the managed system, click the Tasks button, and choose Configuration > Manage

System Profiles.

3.    Select the system profile and click Activate.

4.    Select the desired activation settings for the system profile and click Continue.
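The HMC command line offers an equivalent; a sketch with placeholder names sys1 and sysprof1:

lssyscfg -r sysprof -m sys1      (list the system profiles defined on managed system sys1)
chsysstate -m sys1 -r sysprof -n sysprof1 -o on      (activate system profile sysprof1)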

 

Shared processors

Shared processors are physical processors whose processing capacity is shared among multiple logical partitions.

The ability to divide physical processors and share them among multiple logical partitions is known as the Micro-Partitioning™ technology.

Note: For some models, the Micro-Partitioning technology is an option for which you must obtain and enter a PowerVM™ Editions activation code.

By default, all physical processors that are not dedicated to specific logical partitions are grouped together in a

shared processor pool. You can assign a specific amount of the processing capacity in this shared processor pool to

each logical partition that uses shared processors. Some models allow you to use the HMC to configure multiple

shared processor pools. These models have a default shared processor pool that contains all the processors that do

not belong to logical partitions that use dedicated processors or logical partitions that use other shared processor

pools. The other shared processor pools on these models can be configured with a maximum processing unit value

and a reserved processing unit value. The maximum processing unit value limits the total number of processing units

that can be used by the logical partitions in the shared processor pool. The reserved processing unit value is the

number of processing units that are reserved for the use of uncapped logical partitions within the shared processor

pool.

You can assign partial processors to a logical partition that uses shared processors. A minimum of 0.10 processing

units can be configured for any logical partition that uses shared processors. Processing units are a unit of measure

for shared processing power across one or more virtual processors. One shared processing unit on one virtual

processor accomplishes approximately the same work as one dedicated processor.

Some server models allow logical partitions to use only a portion of the total active processors on the managed

system, so you are not always able to assign the full processing capacity of the managed system to logical partitions.

This is particularly true for server models with one or two processors, where a large portion of processor resources is

used as overhead. The System Planning Tool (SPT) shows how many shared processors are available for logical

partitions to use on each server model, so use the SPT to validate your logical partition plan.

On HMC-managed systems, shared processors are assigned to logical partitions using partition profiles.
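For illustration, a shared-processor partition profile could be created from the HMC command line roughly as follows; this is only a sketch with placeholder names (sys1, lpar2) and a trimmed attribute list, so check the mksyscfg documentation for the attributes required on your HMC level:

mksyscfg -r lpar -m sys1 -i "name=lpar2,profile_name=default,lpar_env=aixlinux, \
proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.5,max_proc_units=2.0, \
min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=128, \
min_mem=1024,desired_mem=4096,max_mem=8192"      (memory values are in MB)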


Logical partitions that use shared processors can have a sharing mode of capped or uncapped. An uncapped logical

partition is a logical partition that can use more processor power than its assigned processing capacity. The amount

of processing capacity that an uncapped logical partition can use is limited only by the number of virtual processors

assigned to the logical partition or the maximum processing unit allowed by the shared processor pool that the

logical partition uses. In contrast, a capped logical partition is a logical partition that cannot use more processor

power than its assigned processing units.

For example, logical partitions 2 and 3 are uncapped logical partitions, and logical partition 4 is a capped logical

partition. Logical partitions 2 and 3 are each assigned 3.00 processing units and four virtual processors. Logical

partition 2 currently uses only 1.00 of its 3.00 processing units, but logical partition 3 currently has a workload

demand that requires 4.00 processing units. Because logical partition 3 is uncapped and has four virtual processors,

the server firmware automatically allows logical partition 3 to use 1.00 processing units from logical partition 2.

This increases the processing power for logical partition 3 to 4.00 processing units. Soon afterwards, logical

partition 2 increases its workload demand to 3.00 processing units. The server firmware therefore automatically

returns 1.00 processing units to logical partition 2 so that logical partition 2 can use its full, assigned processing

capacity once more. Logical partition 4 is assigned 2.00 processing units and three virtual processors, but currently

has a workload demand that requires 3.00 processing units. Because logical partition 4 is capped, logical partition 4

cannot use any unused processing units from logical partitions 2 or 3. However, if the workload demand of logical

partition 4 decreases below 2.00 processing units, logical partitions 2 and 3 could use any unused processing units

from logical partition 4.

By default, logical partitions that use shared processors are capped logical partitions. You can set a logical partition

to be an uncapped logical partition if you want the logical partition to use more processing power than its assigned

amount.
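From within an AIX partition, the entitlement and the capped or uncapped mode can be checked with lparstat (field names can vary slightly by AIX level):

#lparstat -i | grep -E "Mode|Entitled Capacity|Variable Capacity Weight"
(Mode shows Capped or Uncapped; Entitled Capacity is the assigned processing units)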

Although an uncapped logical partition can use more processor power than its assigned processing capacity, the

uncapped logical partition can never use more processing units than its assigned number of virtual processors. Also,

the logical partitions that use a shared processor pool can never use more processing units than the maximum

processing units configured for the shared processor pool.

If multiple uncapped logical partitions need additional processor capacity at the same time, the server can distribute

the unused processing capacity to all uncapped logical partitions. This distribution process is determined by the

uncapped weight of each of the logical partitions.

Uncapped weight is a number in the range of 0 through 255 that you set for each uncapped logical partition in the

shared processor pool. On the HMC, you can choose from any of the 256 possible uncapped weight values. By

setting the uncapped weight (255 being the highest weight), any available unused capacity is distributed to

contending logical partitions in proportion to the established value of the uncapped weight. The default uncapped

weight value is 128. When you set the uncapped weight to 0, no unused capacity is distributed to the logical

partition.

Uncapped weight is only used where there are more virtual processors ready to consume unused resources than there

are physical processors in the shared processor pool. If no contention exists for processor resources, the virtual

processors are immediately distributed across the logical partitions independent of their uncapped weights. This can

result in situations where the uncapped weights of the logical partitions do not exactly reflect the amount of unused

capacity.


For example, logical partition 2 has one virtual processor and an uncapped weight of 100. Logical partition 3 also

has one virtual processor, but an uncapped weight of 200. If logical partitions 2 and 3 both require additional

processing capacity, and there is not enough physical processor capacity to run both logical partitions, logical

partition 3 receives two additional processing units for every additional processing unit that logical partition 2

receives. If logical partitions 2 and 3 both require additional processing capacity, and there is enough physical

processor capacity to run both logical partitions, logical partition 2 and 3 receive an equal amount of unused

capacity. In this situation, their uncapped weights are ignored.
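To make the proportional sharing concrete, here is a small worked example under the stated weights (an illustration only, assuming 1.50 unused processing units are available and both partitions can consume their shares):

share of logical partition 2 = 1.50 x 100 / (100 + 200) = 0.50 processing units
share of logical partition 3 = 1.50 x 200 / (100 + 200) = 1.00 processing units

That is, logical partition 3 receives two additional processing units for every one that logical partition 2 receives, matching its 2:1 weight advantage.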

The server distributes unused capacity among all of the uncapped shared processor partitions that are configured on

the server, regardless of the shared processor pools to which they are assigned. For example, you configure logical

partition 1 to the default shared processor pool. You configure logical partition 2 and logical partition 3 to a different

shared processor pool. All three logical partitions compete for the same unused physical processor capacity in the

server, even though they belong to different shared processor pools.

 

Virtual processors

A virtual processor is a representation of a physical processor core to the operating system of a logical partition that uses shared processors.

When you install and run an operating system on a server that is not partitioned, the operating system calculates the number of operations that it can perform concurrently by counting the number of processors on the server. For example, if you install an operating system on a server that has eight processors, and each processor can perform two operations at a time, the operating system can perform 16 operations at a time. In the same way, when you install and run an operating system on a logical partition that uses dedicated processors, the operating system calculates the number of operations that it can perform concurrently by counting the number of dedicated processors that are assigned to the logical partition. In both cases, the operating system can easily calculate how many operations it can perform at a time by counting the whole number of processors that are available to it.

However, when you install and run an operating system on a logical partition that uses shared processors, the operating system cannot calculate a whole number of operations from the fractional number of processing units that are assigned to the logical partition. The server firmware must therefore represent the processing power available to the operating system as a whole number of processors. This allows the operating system to calculate the number of concurrent operations that it can perform. A virtual processor is a representation of a physical processor to the operating system of a logical partition that uses shared processors.

The server firmware distributes processing units evenly among the virtual processors assigned to a logical partition. For example, if a logical partition has 1.80 processing units and two virtual processors, each virtual processor has 0.90 processing units supporting its workload.

There are limits to the number of processing units that you can have for each virtual processor. The minimum number of processing units that you can have for each virtual processor is 0.10 (or ten virtual processors for every processing unit). The maximum number of processing units that you can have for each virtual processor is always 1.00. This means that a logical partition cannot use more processing units than the number of virtual processors that it is assigned, even if the logical partition is uncapped.


A logical partition generally performs best if the number of virtual processors is close to the number of processing units available to the logical partition. This lets the operating system manage the workload on the logical partition effectively. In certain situations, you might be able to increase system performance slightly by increasing the number of virtual processors. If you increase the number of virtual processors, you increase the number of operations that can run concurrently. However, if you increase the number of virtual processors without increasing the number of processing units, the speed at which each operation runs will decrease. The operating system also cannot shift processing power between processes if the processing power is split between many virtual processors.

On HMC-managed systems, virtual processors are assigned to logical partitions using partition profiles.
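Virtual processors and processing units can also be adjusted dynamically from the HMC command line; a sketch with placeholder names sys1 and lpar2, assuming the partition is DLPAR-capable:

chhwres -r proc -m sys1 -o a -p lpar2 --procs 1 --procunits 0.5      (add 1 virtual processor and 0.5 processing units)
chhwres -r proc -m sys1 -o r -p lpar2 --procs 1      (remove 1 virtual processor)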

 

Abstract: What is the difference between Active Memory Expansion and Active Memory Sharing?

 

Active Memory Expansion:

* Effectively gives more memory capacity to the partition using compression/decompression of the contents in true memory
* It is a feature code, quantity 1 configured for each server (no software needs to be configured); for example, POWER 740 = FC4794 = POWER ACTIVE MEMORY EXPANSION
* Supported on AIX partitions only
* Requires AIX 6.1 or later and POWER7 hardware

Active Memory Sharing:

* Moves memory from one partition to another
* Best fit when one partition is not busy while another partition is busy
* PowerVM hardware feature and PowerVM software required
* Supported on AIX, IBM i, and Linux partitions
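For Active Memory Expansion sizing, AIX 6.1 and later include the amepat planning tool; a minimal run could look like this (60 is a monitoring duration in minutes, chosen here only as an example):

#amepat 60      (monitors the workload and reports possible memory gains and CPU cost for several expansion factors)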

 

 

Abstract: I wonder if you could confirm how the new "Intelligent Thread" technology in POWER7 is different from the Processor Folding that has been available for some time with SMT. Is this just an expansion to include SMT 4, or is it more than that?

 

Virtual Processor Folding is the process by which the AIX kernel (not the Power Hypervisor) stops dispatching to virtual processors when they are unnecessary. It lessens the impact of the tendency to pre-allocate larger numbers of VPs in LPARs to cope with spikes in workload, activation of new physical processors, or moving LPARs onto a system with more cores. http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/virtual_proc_mngmnt_part.htm
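On AIX, folding behaviour is governed by the schedo tunable vpm_xvcpus; for example (a sketch, and changes should be made with care):

#schedo -o vpm_xvcpus      (show the current folding threshold; the default is 0, and -1 disables folding)
#schedo -o vpm_xvcpus=2      (keep 2 extra virtual processors unfolded beyond the computed requirement)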


Intelligent Threads is the process by which the system (if enabled) varies the SMT mode between 1, 2, and 4 threads depending on workload, resulting in more or fewer logical processors.

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FQ128785


Thus, when you say "Processor Folding that has been available for some time with SMT", you're not quite correct as processor folding is not related to SMT. When you say "Intelligent Thread technology in POWER7 ... an expansion to include SMT 4" then that is essentially correct, the expansion being from the SMT1 and SMT2 in both POWER5 and POWER6.

The visual way to think of this is that virtual processor folding happens at the virtual processor level, while Intelligent Threads operates at the logical processor level.
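The SMT mode itself can be inspected and changed from AIX with smtctl; on POWER7 with a suitable AIX level something like the following applies (a sketch, so verify the supported -t values on your system):

#smtctl      (show the current SMT mode and the logical processors per virtual processor)
#smtctl -t 4 -w now      (switch to SMT4 immediately; -w boot defers the change to the next reboot)
#smtctl -m off      (disable SMT, i.e., single-threaded mode)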

 


 

Abstract: What are the limitations on POWER p of the FC 5735 adapter when attaching tape drives and disk drives in an AIX environment?

 

On POWER p it is best to use one port to attach disk drives and one port to attach tape drives when connecting to the FC 5735. Mixing disks and tapes on a port may slow down tape backups if you are doing tape backups and heavy disk operations.

 

Abstract: What adapter can be used as an FCoE Converged Network Adapter on the PS701?

 

(#8275) QLogic 2-port 10 Gb Converged Network Adapter (CFFh)

The QLogic 2-port 10 Gb Converged Network Adapter (CFFh) for IBM BladeCenter offers robust 8 Gb Fibre Channel storage connectivity and 10 Gb networking over a single Converged Enhanced Ethernet (CEE) link. Because this adapter combines the functions of a network interface card and a host bus adapter on a single converged adapter, clients can realize potential benefits in cost, power, cooling, and data center footprint by deploying less hardware.

Features: the expansion card has the following features:

* Combo Form Factor (CFFh) PCI Express 2.0 x8 adapter
* Communication module: QLogic ISP8112
* Support for up to two CEE HSSMs in a BladeCenter H or HT chassis
* Support for 10 Gb Converged Enhanced Ethernet (CEE)
* Support for Fibre Channel over Converged Enhanced Ethernet (FCoCEE)
* Full hardware offload for FCoCEE protocol processing
* Support for IPv4 and IPv6
* Support for SAN boot over CEE, PXE boot, and iSCSI boot
* Support for Wake on LAN
* Support for BladeCenter Open Fabric Manager for BIOS, UEFI, and FCode

Stateless offload features include:

* IP, TCP, and UDP checksum offloads
* Large and Giant Send Offload (LSO, GSO)
* Receive Side Scaling (RSS)
* Header-data split
* Interrupt coalescing
* NetQueue

Note: VIOS attachment requires VIOS 2.1.3.0, or later. Refer to Software Requirements for specifics.

Attributes provided: 8 Gb Fibre Channel connectivity and 10 Gb CEE port
Attributes required: Available PCI slot
For 8406-71Y: (#8275)
Minimum required: 0
Maximum allowed: 2 (Initial order maximum: 2)
OS level required:
* AIX Version 5.3 supported
* AIX Version 6.1 supported
* IBM i 6.1 supported
* IBM i 7.1 supported
* SUSE Linux supported
* Red Hat Linux supported
Refer to Software Requirements for specific OS levels supported.
Initial Order/MES/Both/Supported: Both
CSU: Yes
Return parts MES: No

The QLogic 2-port 10 Gb Converged Network Adapter requires at least one 10 Gb Ethernet Pass-Thru Module or BNT 10-port 10 Gb Ethernet Switch Module to be installed in a BladeCenter H or BladeCenter HT chassis.

For more information, see the following publications:
* QLogic 2-port 10 Gb Converged Network Adapter (CFFh) at-a-glance guide: http://www.redbooks.ibm.com/abstracts/tips0716.html
* QLogic 2-port 10 Gb Converged Network Adapter (CFFh) Installation and User's Guide (you will need to download the "publication release" EXE file and run it to unpack the PDF)

 

 

Abstract: If I trunk together multiple ports on the same FC 5717 network card, can I expect more than 1 Gbps of total speed through that trunk, or is the card itself limited to a maximum total throughput of 1 Gbps?

 

As for FC 5717 performance, each port is capable of running at 1 Gbit per second. The effective payload rate at user level (what a user-level program would see) is the following. This is with MTU 1500 and two TCP sessions per port; a single TCP session per port would be slightly slower, about 930 Mbits vs 940 Mbits for example.

1 port: 940 Mbits or 112 MBytes/sec
2 ports: 1880 Mbits or 224 MBytes/sec
3 ports: 2820 Mbits or 336 MBytes/sec
4 ports: 3761 Mbits or 448 MBytes/sec

That is with a dedicated adapter, without EtherChannel.

If you use EtherChannel (port aggregation), a lot depends on how the traffic is hashed across the links in the channel. We recommend the src-dest-port hashing option, which hashes a given TCP connection to a specific link. That keeps packets in order for any given TCP connection, so any single TCP session will only run on one link of the channel; it takes multiple TCP sessions to take advantage of the channel. Also, the switch does the hashing inbound to AIX, so you have to select similar hashing there.

This adapter is the first 4-port adapter we have had that can run all 4 ports at speed. This applies to streaming workloads moving larger packets at the default MTU of 1500.
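For reference, an EtherChannel using the recommended src-dest-port hashing can be created on AIX roughly as follows (a sketch; ent0 and ent1 are placeholder adapters, and smitty etherchannel offers the same options interactively):

#mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0,ent1 -a mode=8023ad -a hash_mode=src_dst_port
(the switch ports must be configured with a matching aggregation mode and hash policy)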

 

 

Abstract: Can customers who have APV (Advanced POWER Virtualization) installed on their server upgrade this server to the PowerVM/Enterprise version?

 

No, the customer cannot upgrade to the PowerVM/Enterprise version. APV is basically the same software as the PowerVM Standard software license. The customer can convert their 5765-G31 and 5765-G34 software licenses shown below to the 5765-PVS PowerVM/Standard license. However, there is no upgrade path to the PowerVM/Enterprise version. Older installed servers such as the POWER5 9117-570 with its APV feature (FC7942) installed cannot upgrade to the PowerVM/Enterprise version.

9117-570 with this hardware feature and software licenses installed:
FC7942 Advanced POWER Virtualization - Standard
5765-G31 Partition Load Manager
5765-G34 Virtual I/O Server

Software licenses above can convert to this PowerVM/Standard software license only: 5765-PVS PowerVM Standard Edition


1. INTRODUCTION TO PARTITIONING
o What is a partition, LPAR, PPAR
o List partition characteristics
o List the benefits of using partitions
o Software licensing; what is the POWER Hypervisor and its function
o Hardware Management Console (HMC) and its usage
o What is DLPAR; processor concepts, Virtual I/O, PowerVM

2. HARDWARE OVERVIEW
o What is p series hardware
o Discuss P4, P5, P6, P7 hardware
o What features are implemented in each series
o I/O drawer options
o What are location codes; the physical location code convention
o How to see AIX location codes
o The service processor and its role in the system
o Advanced System Management Interface

3. HARDWARE MANAGEMENT CONSOLE
o What is the HMC? Its configuration; HMC hardware and software
o Taking you through the HMC application window
o How to configure a new HMC; user and network management in the HMC
o Configuring the firewall in the HMC
o How to connect a managed node to the HMC
o Real-time issues related to the HMC and managed node
o What are the important AIX tasks the HMC manages

4. Integrated Virtual Ethernet Adapter
o Discuss IVE in POWER6
o Discuss the Host Ethernet Adapter and how it is used
o Configure an HEA adapter
o How IVE compares with SEA

5. LPAR and Managed Node Power Management
o Managed system power state types
o How they are used in real time
o Managed system power on/off options
o What is partition standby, system profile, auto start
o How to power off a managed system
o Real-time scenarios related to power on/off
o How to create an LPAR; usage of memory, CPU, and other resources for an LPAR
o What are I/O resources
o Creating a logical partition
o Guidelines for creating an LPAR in real time
o Activating a partition and its options during activation
o Terminal window; partition shutdown and restart options
o Creating, modifying, copying, and deleting profiles
o Profile backup
o Real-time issues and troubleshooting

6. DYNAMIC RESOURCE ALLOCATION
o What is DLPAR and how it works
o What resources can be changed with DLPAR
o Memory, I/O slots; prerequisites and steps for DLPAR
o Commands during DLPAR
o Command-line options for DLPAR
o Scenarios and issues in DLPAR
o HMC and LPAR communication and authentication
o RSCT and RMC usage in DLPAR
o Troubleshooting DLPAR issues

7. Processor Concepts
o What are dedicated and shared processors
o New features in P6 related to processors
o What is a shared processor pool
o Capped and uncapped partitions
o What are virtual processors, what they do, new features in P6
o The CPU folding concept; what is SMT (Simultaneous Multi-Threading)
o When SMT is used in real time
o Options in SMT

8. VIRTUAL I/O
o What is virtual I/O
o Why virtualize I/O; what are the benefits of virtualized I/O
o What virtual devices we use in VIO; discuss virtual LANs
o What is virtual Ethernet; the role of the POWER Hypervisor Ethernet switch
o What are virtual Ethernet adapters and how we use them
o Creating a virtual Ethernet adapter

9. VIRTUAL I/O SERVER
o Discuss the Virtual I/O Server
o The role of the Virtual I/O Server
o Creating a Virtual I/O Server and installing the VIOS software
o What is a Shared Ethernet Adapter; the role of the SEA
o Trunking of adapters
o What is virtual SCSI; creating virtual SCSI server and client adapters
o What are storage pools, LV pools, and FB pools
o How client-side disks are seen
o Discuss backing storage discovery

10. SOFTWARE INSTALLATION
o How to install LPARs; various installation options
o What is the SMS menu and how it is used in real time
o Setting up NIM; how to install an LPAR through NIM
o Issues during network boot and troubleshooting tips

11. HMC AND MANAGED NODE MAINTENANCE
o What is Backup Critical Console Data and its importance
o How often we back up CCD in real time
o How to restore CCD in real time
o HMC patch management; where to download updates
o How to upgrade the HMC software version
o Different options to upgrade
o How we use the HMC to update managed node firmware
o Real-time steps and precautions
o Discuss the HMC role today in the sysadmin job