2013-03-05 22:57:30

One of the problems with mechanical storage disks is that they are single threaded: only one process can access the data on a disk at a time. OS/390 managed this by allocating a Unit Control Block (UCB) to each storage device. If several applications wanted to access a device at the same time, the IO operations were queued up by the IO supervisor. In performance terms, this queuing is called IOSQ delay. 
IOSQ is a particular problem for disks with several very active datasets, so storage administrators allocated busy databases, RACF files, HSM control files, page and spool datasets and so on to their own dedicated volumes. This required careful monitoring, and manual effort to change. 
The principle behind IOSQ is illustrated below. Four applications are trying to access a JBOD disk (JBOD, Just a Bunch Of Disks, is the opposite of RAID: a single simple disk assembly). When an application turns green in the animation, it is successfully getting to the disk; the other applications are queued, waiting for their turn. 
[Animation: IOSQ queuing against a JBOD disk]

Modern disk subsystems do not have this physical restriction. They are usually RAID, so data is spread over several physical disks. Also, all write IOs go to solid state cache, and over 90% of read IOs come from cache. It is possible to schedule concurrent IOs to a cache as it has several access paths, but OS/390 was unaware of the physical implementation behind its virtual disks: the UCB architecture still said that only one IO was allowed to a disk. Parallel Access Volumes (PAV) were introduced to fix this. The concept behind PAV is that every disk has its normal 'base' UCB and also a number of 'alias' UCBs, all of which connect to the same logical disk. This means that it is possible to schedule concurrent IOs to a disk, one through the base UCB and the rest through alias UCBs. If there are enough alias UCBs available, IOSQ should not happen.
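The queuing benefit of alias UCBs can be sketched with a toy model. The Python snippet below is purely illustrative (the arrival rate, service time and UCB counts are invented, not measured from any real system): each IO must grab a free UCB before it can run, so with one base UCB requests queue behind each other, while a base plus three aliases drives the same workload with almost no IOSQ-style wait.

 import random

 def simulate(n_ucbs, n_requests=10000, utilisation=0.8, service_time=1.0):
     """Toy queueing model: each IO needs one free UCB (base or alias)
     for service_time; if every UCB is busy, the IO waits (IOSQ)."""
     random.seed(42)                     # reproducible run
     free_at = [0.0] * n_ucbs            # when each UCB next becomes free
     clock = total_wait = 0.0
     for _ in range(n_requests):
         # Poisson arrivals sized so a lone UCB runs at ~80% utilisation
         clock += random.expovariate(utilisation / service_time)
         ucb = min(range(n_ucbs), key=free_at.__getitem__)  # earliest free
         start = max(clock, free_at[ucb])                   # queue if busy
         total_wait += start - clock
         free_at[ucb] = start + service_time
     return total_wait / n_requests

 print("average wait, base UCB only:    %.2f" % simulate(1))
 print("average wait, base + 3 aliases: %.2f" % simulate(4))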

There are three flavours of PAV: STATIC, DYNAMIC and HyperPAV. 
STATIC means you specify how many PAV aliases each base (the real UCB) can have, and that number is then fixed. 
[Illustration: static PAV]

DYNAMIC means you define just a pool of aliases, and Workload Manager decides how many aliases each base needs, depending on how busy the virtual disk is and how important the application is. However, Workload Manager can take a while to work out that a disk is busy, so dynamic PAV will not eliminate IOSQ completely. 
Dynamic PAV needs fewer aliases than static PAV and performs better, as more aliases are available for the busy disks. 
[Illustration: dynamic PAV]

Dynamic PAV has two problems: it uses up a lot of UCB addresses, and it takes Workload Manager a while to notice that a disk is busy and needs more aliases. HyperPAV is designed to fix both. 
HyperPAV keeps all its aliases in a pool and assigns one to a volume only when it is needed to service an IO. It does not use WLM to decide when to allocate an alias. Each HyperPAV host can also use the same alias to service a different base, which means fewer aliases are needed. 
[Illustration: HyperPAV]
HyperPAV therefore requires fewer aliases per base; I've seen a ratio of one alias to four bases work well (so a 64-volume LCU would need only 16 aliases), but your requirement will depend on your workload. 
HyperPAV is especially useful if you are planning to use EAV volumes.

Invoking PAV

If you set MIH times in SYS1.PARMLIB(IECIOSxx), then IBM recommends that you do not set them for PAV alias devices.
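As an illustration (the device range below is invented, chosen to line up with the F000 base and F060 alias addresses used in the query examples further down), an IECIOSxx member following that recommendation might set MIH for the base devices and simply not mention the alias range:

 MIH TIME=00:30,DEV=(F000-F05F)

The aliases at F060 and above are left off every MIH statement.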

The following steps are needed to invoke Static PAV on an ESS disk subsystem

  • Define PAV aliases in HCD, associated with each DASD subsystem. Base definitions are added as type 3390B, and aliases as type 3390A (see the IOCP sketch after this list)
  • Through StorWatch on an ESS DASD subsystem, define a number of PAV aliases for every disk. 
    Select 'S/390 Storage' to bring the configuration up. 
    Highlight the LCU you want from the list presented. You may have to scroll down to find it; the screen has scrolling windows within scrolling windows, which can be a bit confusing. 
    Select the 'Configure LCU' button, then change the 'Parallel access volumes' button to Enabled, and set the 'PAV Starting Address' window to the correct level as discussed in the  section. 
    Highlight 'volumes', then select them all by holding the shift key down while scrolling down the list. Then go to the ADD window, and select how many aliases you want to add to each base unit. 
    The total number must match the HCD. See the  for some hints. Other DASD subsystems will have equivalent definition methods.

and that is about it.
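For reference, the HCD work corresponds to IOCP statements roughly like the two below. This is a hedged sketch rather than a real configuration: the device counts and CUNUMBR are invented, reusing the F000 base and F060 alias ranges from the query examples later on. The point is simply that bases are defined as UNIT=3390B and aliases as UNIT=3390A.

 IODEVICE ADDRESS=(F000,096),CUNUMBR=(0E10),UNIT=3390B
 IODEVICE ADDRESS=(F060,032),CUNUMBR=(0E10),UNIT=3390A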

The following steps are needed to invoke Dynamic PAV

  • Define PAV aliases in HCD, as above
  • Through StorWatch on an ESS DASD subsystem, define a number of PAV aliases for every disk. You should need fewer aliases than with static PAV
  • Set WLMPAV=YES in the HCD device definitions, and run Workload Manager in Goal mode, so Workload Manager moves the aliases around as required

To set up HyperPAV, you need to

  • Define the aliases in HCD
  • Authorise HyperPAV on your disk subsystem
  • Add the aliases to your disk subsystem
  • Add HYPERPAV=YES in SYS1.PARMLIB(IECIOSxx)

You can enable HyperPAV dynamically, but IBM recommends that you do this at a quiet time, with no other configuration work running on the DS8K. You use the command

 SETIOS HYPERPAV=YES 

You can check if HyperPAV is active with the command

D IOS,HYPERPAV

RESPONSE=SP00                        
 IOS098I 15.37.01 HYPERPAV DATA 109  
 HYPERPAV MODE IS SET TO YES     

HyperPAV requires z/OS 1.8 or higher, although fixes are available for earlier z/OS releases. It is supported by the latest IBM, EMC and HDS (including Sun and HP) devices, but usually needs a chargeable code upgrade. It will only work with FICON channels; ESCON is not supported.

Be sure to define the same types of PAV on the same range of volumes for each LPAR.

Querying PAV status

From SDSF, use the devserv command /DS QPAV,uuuu,nn where uuuu is the starting unit address, and nn is the number of units you want to display. If you display a base address, you'll see something like

DS QPAV,F000,4 
IEE459I 10.25.49 DEVSERV QPAVS 459  
     HOST                             SUBSYSTEM
 CONFIGURATION                      CONFIGURATION 
 -------------                   ------------------
 UNIT                                  UNIT    UA 
 NUM. UA  TYPE        STATUS     SSID  ADDR.   TYPE
 ---- --  ----        ------     ----  ----    -----
 F000 00  BASE                   0E10   00     BASE
 F001 01  BASE                   0E10   01     BASE
 F002 02  BASE                   0E10   02     BASE
 F003 03  BASE                   0E10   03     BASE

If you display an alias address, you'll see

DS QPAV,F060,4 
IEE459I 10.26.48 DEVSERV QPAVS 714 
     HOST                             SUBSYSTEM 
 CONFIGURATION                      CONFIGURATION
 -------------                   --------------------
 UNIT                                  UNIT    UA 
 NUM. UA  TYPE        STATUS     SSID  ADDR.   TYPE 
 ---- --  ----        ------     ----  ----    --------
 F060 60  ALIAS-F017             0E10   60     ALIAS-17     
 F061 61  ALIAS-F018             0E10   61     ALIAS-18    
 F062 62  ALIAS-F00B             0E10   62     ALIAS-0B
 F063 63  ALIAS-F005             0E10   63     ALIAS-05

To find out what aliases are active to a volume, use the command DS QPAV,uuuu,VOLUME. An example of dynamic PAV in action is:

DS QPAV,DC0A,VOLUME                                          
IEE459I 10.43.46 DEVSERV QPAVS 363                           
     HOST                             SUBSYSTEM              
 CONFIGURATION                      CONFIGURATION            
 -------------                   --------------------        
 UNIT                                  UNIT    UA            
 NUM. UA  TYPE        STATUS     SSID  ADDR.   TYPE          
 ---- --  ----        ------     ----  ----    ------------  
 DC0A 0A  BASE                   E30C   0A     BASE          
 DC47 47  ALIAS-DC0A             E30C   47     ALIAS-0A      
 DC4B 4B  ALIAS-DC0A             E30C   4B     ALIAS-0A      
 DC50 50  ALIAS-DC0A             E30C   50     ALIAS-0A      

The output from the same commands with HyperPAV active looks like

RESPONSE=SP00                                             
 IEE459I 16.00.34 DEVSERV QPAVS 055                       
      HOST                             SUBSYSTEM          
  CONFIGURATION                      CONFIGURATION        
 ---------------                  ---------------------   
  UNIT                                  UNIT    UA        
  NUM. UA  TYPE        STATUS     SSID  ADDR.   TYPE      
 ----- --  ----        ------     ----  ----    ----------
 05BA9 A9  BASE-H                 B134   A9     BASE      
 15BFD FD  ALIAS-H                B134   FD               
 ****      2 DEVICE(S) MET THE SELECTION CRITERIA           


D M=DEV(400F)

RESPONSE=SP00                                                     
IEE174I 15.27.53 DISPLAY M 871                                   
DEVICE 400F   STATUS=ONLINE                                      
CHP                     16   19   1A   26   27   3B   43   54
ENTRY LINK ADDRESS      4C0F 4C8F 4C1F 4C9F 4D4F 4DCF 4D5F 4DDF
DEST LINK ADDRESS       7D0B 7D8B 7D1B 7D9B 7C4B 7CCB 7C5B 7CDB
PATH ONLINE             N    Y    N    Y    N    Y    N    Y
CHP PHYSICALLY ONLINE   N    Y    N    Y    N    Y    N    Y
PATH OPERATIONAL        N    Y    N    Y    N    Y    N    Y
MANAGED                 N    N    N    N    N    N    N    N
CU NUMBER               2000 2000 2000 2000 2000 2000 2000 2000
MAXIMUM MANAGED CHPID(S) ALLOWED:  0
DESTINATION CU LOGICAL ADDRESS = 36
SCP CU ND      = 002107.900.IBM.75.0000002AX701.0320
SCP TOKEN NED  = 002107.900.IBM.75.0000002AX701.3720
SCP DEVICE NED = 002107.900.IBM.75.0000002AX701.372F
HYPERPAV ALIASES CONFIGURED = 32

An RMF Snapshot report with both dynamic PAV and HyperPAV active looks like this

                                    
15:51:26 I=43%   DEV                 ACTV   RESP  IOSQ
STG GRP VOLSER   NUM  PAV     LCU    RATE   TIME  TIME
                                              
TEMP3   T5E20A   E20A  3      020A   657.2   1.2  0.0
INFSAS  SXCE60   6DCC  1.0H   029F   617.1   0.7  0.0
TEMP3   S5E231   E231  2      020A   338.8   1.9  0.0
TEMP3   T5FD67   3BBA  1.0H   00E3   162.1   1.8  0.0
OPAC5   SAA55F   5BA8  1.0H   0294   81.74   0.5  0.0
RXZ22   SAC99B   2827  1.0H   02D4   80.38   0.9  0.0
TEMP3   T5FD3B   34C7  1.0H   025C   74.78   2.7  0.0
INFO    SXE21E   E21E  2      020A   68.66   6.1  2.8

There are three volumes on the ESS using dynamic PAV and five on the DS8K using HyperPAV, as indicated by the 1.0H in the PAV column. One ESS volume has two aliases and a bit of IOSQ wait, but all the HyperPAV volumes have one alias and no wait.

Multiple Allegiance (MA)

PAV addresses queuing issues for IOs coming from the same CPU or LPAR. If a disk was shared between several CPUs or LPARs, the owning LPAR put a hardware reserve on the disk to prevent others from interfering. Multiple Allegiance removes that requirement and lets the storage controller manage cross-system IOs. 
You do not set up MA, or switch it on; if your disk subsystems are MA capable, it just happens.

Reposted from:

In IBM mainframe operating systems of the OS/360 and successors line, a Unit Control Block (UCB) is a memory structure, or a control block, that describes any single input/output peripheral device (unit), or an exposure (alias), to the operating system. Certain data within the UCB also instructs the Input/Output Supervisor (IOS) to use certain closed subroutines in addition to normal IOS processing for additional physical device control.

A similar concept in Unix-like systems is a kernel's devinfo structure, addressed by a combination of major and minor number through a device node.

During initial program load (IPL) of current[1] MVS systems, the Nucleus Initialization Program (NIP) reads the necessary information from the I/O Definition File (IODF) and uses it to build the UCBs. The UCBs are stored in system-owned memory, in the Extended System Queue Area (ESQA). After IPL completes, UCBs are owned by Input/Output Support. Some of the information stored in the UCB is: device type (disk, tape, printer, terminal, etc.), address of the device (such as 1002), subchannel identifier and device number, channel path ID (CHPID) which defines the path to the device, for some devices the volume serial number (VOLSER), and a large amount of other information, including OS Job Management data.

While the contents of the UCB have changed as MVS evolved, the concept has not: it is a representation to the channel command processor of an external device. Inside every UCB is a representation of a subchannel information block, which is used with the SSCH assembler instruction[2] (placed in the IRB for input, or in the ORB for output) to start a chain of channel commands, known as CCWs. CCWs are queued onto the UCB with the STARTIO macro interface,[3] although that reference does not discuss the STARTIO macro, as that macro instruction is not an IBM-supported interface, notwithstanding the fact that the interface has remained the same for at least the past three decades. The STARTIO interface will either start the operation immediately, should the Channel Queue be empty, or it will queue the request on the Channel Queue for deferred execution. Such deferred execution will be initiated immediately when the request is at the head of the queue and the device becomes available, even if another program is in control at that instant. Such is the basic design of IOS.

The UCB evolved to be an anchor holding information and states about the device. The UCB[4] currently has five areas used for an external interface: the Device Class Extension, UCB Common Extension, UCB Prefix Stub, UCB Common Segment and UCB Device Dependent Segment. Other areas are for internal use only. This information can be read and used to determine information about the device.
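As a conceptual illustration only (the field names below are invented and this is nowhere near the real UCB mapping), a UCB can be pictured as a record carrying device identity, pathing and queuing state:

 from dataclasses import dataclass, field
 from typing import List, Optional

 @dataclass
 class ToyUCB:
     """Illustrative stand-in for a UCB; real field names and layout differ."""
     device_number: int               # e.g. 0x1002
     device_type: str                 # "disk", "tape", "printer", ...
     chpids: List[int]                # channel path IDs to the device
     volser: Optional[str] = None     # volume serial, for DASD
     busy: bool = False               # at most one active I/O per UCB
     channel_queue: list = field(default_factory=list)  # deferred requests

 # A base exposure and a PAV alias exposure of the same logical volume:
 base  = ToyUCB(0x1000, "disk", chpids=[0x16, 0x19], volser="PROD01")
 alias = ToyUCB(0x1001, "disk", chpids=[0x16, 0x19], volser="PROD01")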

In the earliest implementations of this OS, the UCBs (foundations and extensions) were assembled during SYSGEN and were located within the first 64 KB of the system area, as the I/O device lookup table consisted of 16-bit Q-type (i.e., relocatable) addresses. Subsequent enhancements allowed the extensions to be above the 64 KB line, thereby saving space for additional UCB foundations below the line while preserving the architecture of the UCB lookup table (which converts a CUu to a UCB foundation address).

Handling parallel I/O operations

UCBs were introduced in the 1960s with OS/360. At that time a device addressed by a UCB was typically a moving-head hard disk drive or a tape drive, with no internal cache. Without cache, the device was usually grossly outperformed by the mainframe's channel processor, so there was no reason to execute multiple input/output operations at the same time: these would be impossible for the device to physically handle. In 1968 IBM introduced the 2305-1 and 2305-2 fixed-head disks, which had 8 exposures (alias addresses) per disk, and the OS/360 support provided a UCB per exposure in order to permit multiple concurrent channel programs.

Workload Manager and UCBs

When originally implemented, the operating system had no real way to determine whether a waiting I/O was more or less important than any other waiting I/O; I/Os to a device were handled first in, first out. Workload Manager (WLM) was introduced in MVS/ESA 5.1, and OS/390 added "smart" I/O queuing. It allowed the operating system, using information provided to WLM by the systems programmer, to determine which waiting I/Os were more, or less, important than other waiting I/Os. WLM would then, in a sense, move a waiting I/O further up, or down, in the queue, so that when the device was no longer busy, the most important waiting I/O would get the device next. WLM improved the I/O response to a device for the more important work being processed. However, there was still the limit of a single I/O to a single UCB/device at any one time.

Parallel Access Volumes (PAVs)

As mentioned before, only one set of channel commands or I/O could be run at one time. This was fine in the 1960s, when CPUs were slow and I/O could only be processed as fast as CPUs could drive it. As systems matured and CPU speed greatly surpassed I/O capacity, access to the device, serialized at the UCB level, became a serious bottleneck.

Parallel Access Volumes (PAV) allow UCBs to clone themselves so that multiple I/Os can run simultaneously. With appropriate support from the DASD hardware, PAV provides support for more than one I/O to a single device at a time. For backwards compatibility reasons, operations are still serialized below the UCB level, but PAV allows the definition of additional UCBs to the same logical device, each using an additional alias address. For example, a DASD device at base address 1000 could have alias addresses of 1001, 1002 and 1003. Each of these alias addresses would have its own UCB. Since there are now four UCBs for a single device, four concurrent I/Os are possible. Writes to the same extent, an area of the disk assigned to one contiguous area of a file, are still serialized, but other reads and writes occur simultaneously. In the first version of PAV, the disk controller assigns a PAV to a UCB. In the second version of PAV processing, WLM (Workload Manager) re-assigns a PAV to new UCBs from time to time. In the third version of PAV processing, with the DS8000 controller series, each I/O uses any available PAV with the UCB it needs.

The net effect of PAVs is to decrease the IOSQ time component of disk response time, often to zero. As of 2007, the only restrictions on PAV are the number of alias addresses, 255 per base address, and the overall number of devices per logical control unit, 256 counting bases plus aliases (so, for example, an LCU with 64 base addresses can hold at most 192 aliases).

Static versus dynamic PAVs

There are two types of PAV alias address, static and dynamic. A static alias address is defined, in both the DASD hardware and z/OS, to refer to one specific base address. Dynamic means that the number of alias addresses assigned to a specific base address fluctuates based on need. The management of these dynamic aliases is left to WLM, running in goal mode (which is always the case with supported levels of z/OS). On most systems that implement PAV there is usually a mixture of both PAV types: one, perhaps two, static aliases defined for each base UCB, and a bunch of dynamic aliases for WLM to manage as it sees fit.

As WLM watches over the I/O activity in the system, it determines whether a high-importance workload is being delayed by high contention for a specific PAV-enabled device; specifically, for a disk device, the base and alias UCBs must be insufficient to eliminate IOS Queue time. If there is high contention, WLM will try to move aliases from another base address to this device, if it estimates that doing so would help the workload achieve its goals more readily.

Another trigger may be that certain performance goals, as specified by WLM service classes, are not being met. WLM will then look for alias UCBs that are processing work for less important tasks (service classes) and, if appropriate, re-associate those aliases with the base addresses associated with the more important work.

HyperPAVs

WLM's actions in moving aliases from one disk device to another take a few seconds to show their effect. For many situations this is not fast enough. HyperPAVs are much more responsive, because they acquire a UCB from a pool for the duration of a single I/O operation before returning it to the pool; there is no delay waiting for WLM to react.

Further, because with HyperPAV a UCB is acquired only for the duration of a single I/O, a smaller number of UCBs is required to service the same workload compared to dynamic PAVs. For large z/OS images UCBs can be a scarce resource, so HyperPAVs can provide some relief in this regard.
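A minimal sketch of that difference, with invented names and no claim to match IBM's implementation: a HyperPAV alias is checked out of a shared pool for exactly one I/O and then returned, instead of staying bound to a base until WLM decides otherwise.

 from contextlib import contextmanager

 class AliasPool:
     """Toy HyperPAV model: an alias serves a base for one I/O only."""
     def __init__(self, alias_numbers):
         self.free = list(alias_numbers)

     @contextmanager
     def borrow(self):
         if not self.free:
             raise RuntimeError("no alias free: the I/O queues (IOSQ)")
         alias = self.free.pop()      # acquire any free alias from the pool
         try:
             yield alias              # drive one I/O through this alias
         finally:
             self.free.append(alias)  # returned at once, never left bound

 pool = AliasPool([0x1001, 0x1002, 0x1003])
 with pool.borrow() as alias:
     print("I/O to base 1000 via alias %04X" % alias)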

