Veritas Volume Manager 5.0 Administrator's Guide
HP-UX 11i v3
First Edition
Manufacturing Part Number: 5992-3942
May 2008
Legal Notices
© Copyright 2008 Hewlett-Packard Development Company, L.P. Publication Date: 2008. Confidential computer software. Valid license from HP required for possession, use, or copying.
Contents

Chapter 1  Understanding Veritas Volume Manager
    VxVM and the operating system ............................................ 19
    How data is stored
    DCO volume versioning .................................................... 68
    FastResync limitations ................................................... 74
    Hot-relocation
    Taking a disk offline ................................................... 118
    Renaming a disk ......................................................... 119
    Reserving disks
    Displaying the status of the DMP path restoration thread ................ 161
    Displaying information about the DMP error-handling thread .............. 162
    Configuring array policy modules
    Creating subdisks ....................................................... 215
    Displaying subdisk information .......................................... 216
    Moving subdisks
    Creating a concatenated-mirror volume ................................... 249
    Creating a volume with a version 0 DCO volume ........................... 250
    Creating a volume with a version 20 DCO volume
    Adding a RAID-5 log using vxplex ........................................ 283
    Removing a RAID-5 log ................................................... 284
    Resizing a volume
    Adding a snapshot to a cascaded snapshot hierarchy ...................... 337
    Refreshing an instant snapshot .......................................... 337
    Reattaching an instant snapshot
Chapter 12  Administering hot-relocation
    How hot-relocation works ................................................ 380
    Partial disk failure mail messages
    Converting a disk group from shared to private .......................... 424
    Moving objects between disk groups ...................................... 424
    Splitting disk groups
    Running a rule .......................................................... 447
    Identifying configuration problems using Storage Expert ................. 449
    Recovery time
    Dirty region logging guidelines ......................................... 515
    Striping guidelines ..................................................... 515
    RAID-5 guidelines
Chapter 1
Understanding Veritas Volume Manager

Veritas™ Volume Manager (VxVM) by Symantec is a storage management subsystem that allows you to manage physical disks as logical devices called volumes.

■ Volume snapshots
■ FastResync
■ Hot-relocation
■ Volume sets

Further information on administering Veritas Volume Manager may be found in the following documents.
VxVM and the operating system
VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system.

How VxVM handles storage management
VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

VxVM writes identification information on physical disks under VxVM control (VM disks). VxVM disks can be identified even after physical disk disconnection or system outages.
Figure 1-2 How VxVM presents the disks in a disk array as volumes to the operating system

Multipathed disk arrays
Some disk arrays provide multiple ports to access their disk devices.

The Device Discovery service enables you to add support dynamically for new disk arrays. This operation, which uses a facility called the Device Discovery Layer (DDL), is achieved without the need for a reboot.

Figure 1-3 Example configuration for disk enclosures connected via a Fibre Channel hub or switch

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure.

In High Availability (HA) configurations, redundant-loop access to storage can be implemented by connecting independent controllers on the host to separate hubs with independent paths to the enclosures, as shown in Figure 1-4.
See "Disk device naming in VxVM" on page 78 and "Changing the disk-naming scheme" on page 91 for details of the standard and the enclosure-based naming schemes, and how to switch between them.

■ Subdisks (each representing a specific region of a disk) are combined to form plexes
■ Volumes are composed of one or more plexes

Figure 1-5 shows the connections between Veritas Volume Manager virtual objects and how they relate to physical disks.

Other objects are used by Veritas Volume Manager, such as data change objects (DCOs) and cache objects, to provide extended functionality.
Figure 1-6 VM disk example

Subdisks
A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks.

Figure 1-8 Example of three subdisks assigned to one VM disk

Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks.

You can organize data on subdisks to form a plex by using the following methods:
■ concatenation
■ striping (RAID-0)
Note: You can use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes. These volumes are very similar to the traditional VxVM volumes that are described in this chapter.

In Figure 1-11, a volume, vol06, with two data plexes is mirrored. Each plex of the mirror contains a complete copy of the volume data.
Volume layouts in VxVM
A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which maps to subdisks.

Layout methods
Data in virtual objects is organized to create volumes by using the following layout methods:
■ Concatenation and spanning

Figure 1-12 Example of concatenation

You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
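The lookup that a concatenated plex performs can be sketched as a simple offset-to-subdisk mapping. This is an illustrative model only, not VxVM code; the subdisk names and lengths below are invented for the example.

```python
# Sketch: how a concatenated plex could map a plex offset to a
# (subdisk, offset-within-subdisk) pair. Subdisks are appended end to
# end, so the first subdisk whose range contains the offset wins.

def locate(offset, subdisks):
    """subdisks is an ordered list of (name, length_in_blocks) tuples."""
    start = 0
    for name, length in subdisks:
        if offset < start + length:
            return name, offset - start
        start += length
    raise ValueError("offset beyond end of plex")

# Example: three subdisks of 100, 50, and 200 blocks.
plex = [("disk01-01", 100), ("disk01-02", 50), ("disk02-01", 200)]
print(locate(120, plex))  # block 120 falls in the second subdisk
```

Running this shows that plex block 120 resolves to block 20 of the hypothetical subdisk disk01-02, since the first subdisk covers blocks 0-99.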
Figure 1-13 Example of spanning

Caution: Spanning a plex across multiple disks increases the chance that a disk failure results in failure of the assigned volume. Use mirroring or RAID-5 (both described later) to reduce the risk that a single disk failure results in a volume failure.

Striping (RAID-0)

Note: You need a full license to use this feature.

Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important.

Figure 1-14 Striping across three columns

A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
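The round-robin placement of stripe units across columns can be sketched as arithmetic on the volume offset. The stripe-unit size and column count below are example parameters, not VxVM defaults.

```python
# Sketch: map a volume offset to (column, stripe, offset-within-column)
# for a striped (RAID-0) layout. Stripe units are assigned to columns
# left to right, wrapping to the next stripe after the last column.

def stripe_map(offset, stripe_unit, ncols):
    unit = offset // stripe_unit        # which stripe unit overall
    column = unit % ncols               # column holding that unit
    stripe = unit // ncols              # which stripe (row) it is in
    within = stripe * stripe_unit + offset % stripe_unit
    return column, stripe, within

# Example: 128-block stripe units over three columns.
print(stripe_map(300, 128, 3))  # offset 300 lands in column 2, stripe 0
```

Consecutive offsets thus rotate through the columns, which is what spreads sequential I/O across all the disks in the plex.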
Figure 1-15 shows a striped plex with three equal-sized, single-subdisk columns. There is one column per physical disk. This example shows three subdisks that occupy all of the space on the VM disks.

Figure 1-16 Example of a striped plex with concatenated subdisks per column

See "Creating a striped volume" on page 253 for information on how to create a striped volume.
Mirroring (RAID-1)

Note: You need a full license to use this feature with disks other than the root disk.

Mirroring uses multiple mirrors (plexes) to duplicate the information contained in a volume.

Figure 1-17 Mirrored-stripe volume laid out on six disks

See "Creating a mirrored-stripe volume" on page 254 for information on how to create a mirrored-stripe volume. The layout type of the data plexes in a mirror can be concatenated or striped.

Figure 1-18 Striped-mirror volume laid out on six disks

See "Creating a striped-mirror volume" on page 254 for information on how to create a striped-mirror volume.
Figure 1-19 How the failure of a single disk affects mirrored-stripe and striped-mirror volumes

Compared to mirrored-stripe volumes, striped-mirror volumes are more tolerant of disk failure, and recovery time is shorter.

Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods. Mirroring provides data redundancy by maintaining multiple complete copies of the data in a volume.
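RAID-5's alternative to full copies is parity: the XOR of the data blocks in a stripe. Any one lost block is the XOR of the parity with the surviving blocks. A minimal sketch of that arithmetic (not VxVM code; the block contents are arbitrary examples):

```python
# Sketch: RAID-5 style XOR parity over equal-sized data blocks.

def parity(blocks):
    """Return the byte-wise XOR of a list of equal-length bytes objects."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x0f\x00", b"\x10\x10"]
p = parity(data)
# Reconstruction: a lost block is the XOR of the parity with the
# surviving blocks of the same stripe.
recovered = parity([p, data[0], data[2]])
print(recovered == data[1])  # True
```

This is why RAID-5 needs only one extra column of storage per stripe, at the cost of the read-modify-write cycle needed to keep parity current.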
parity stripe. Figure 1-21 shows the row and column arrangement of a traditional RAID-5 array.

Figure 1-21 Traditional RAID-5 array

This traditional array structure supports growth by adding more rows per column.

Figure 1-22 Veritas Volume Manager RAID-5 array

Note: Mirroring of RAID-5 volumes is not supported.

See "Creating a RAID-5 volume" on page 256 for information on how to create a RAID-5 volume.

Figure 1-23 Left-symmetric layout

For each stripe, data is organized starting to the right of the parity stripe unit. In the figure, data organization for the first stripe begins at P0 and continues to stripe units 0-3.
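The rotation of the parity unit in a left-symmetric layout can be sketched as a one-line function. This follows the common left-symmetric convention (parity starts in the rightmost column and shifts one column left per stripe, wrapping around); consult Figure 1-23 for the exact placement VxVM uses.

```python
# Sketch: which column holds parity for each stripe in a left-symmetric
# RAID-5 layout with ncols columns (an assumption of the common
# convention, not a transcription of the VxVM figure).

def parity_column(stripe, ncols):
    return (ncols - 1 - stripe) % ncols

cols = 5
for s in range(5):
    row = ["P" if c == parity_column(s, cols) else "D" for c in range(cols)]
    print(s, " ".join(row))
```

Rotating the parity like this spreads parity I/O evenly across all columns instead of concentrating it on one dedicated parity disk.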
Note: Failure of more than one column in a RAID-5 plex detaches the volume. The volume is no longer allowed to satisfy read or write requests. Once the failed columns have been recovered, it may be necessary to recover user data from backups.

Logs are associated with a RAID-5 volume by being attached as log plexes. More than one log plex can exist for each RAID-5 volume, in which case the log areas are mirrored. See "Adding a RAID-5 log" on page 283 for information on how to add a RAID-5 log to a RAID-5 volume.

Figure 1-25 Example of a striped-mirror layered volume

Figure 1-25 illustrates the structure of a typical layered volume. It shows subdisks with two columns, built on underlying volumes with each volume internally mirrored.
plex (for example, resizing the volume, changing the column width, or adding a column). System administrators can manipulate the layered volume structure for troubleshooting or other operations (for example, to place data on specific disks).

Online relayout

Note: You need a full license to use this feature.

Online relayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume.

The amount of temporary space that is required is usually 10% of the size of the volume, from a minimum of 50MB up to a maximum of 1GB. For volumes smaller than 50MB, the temporary space required is the same as the size of the volume.
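The temporary-space rule above can be expressed directly as a small function, which makes the bounds easy to check:

```python
# Sketch of the relayout temporary-space rule stated above: 10% of the
# volume size, clamped to [50 MB, 1024 MB]; volumes smaller than 50 MB
# need temporary space equal to their own size.

def relayout_tmp_mb(vol_mb):
    if vol_mb < 50:
        return vol_mb
    return min(max(vol_mb * 0.10, 50), 1024)

print(relayout_tmp_mb(200))    # 10% would be 20 MB, so the 50 MB floor applies
print(relayout_tmp_mb(20480))  # 10% would be 2048 MB, capped at 1024 MB
```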
(shown by the shaded area) decreases the overall storage space that the volume requires.

Figure 1-27 Example of relayout of a RAID-5 volume to a striped volume

■ Change a volume to a RAID-5 volume (add parity).

Figure 1-30 Example of increasing the stripe width for the columns in a volume

For details of how to perform online relayout operations, see "Performing online relayout" on page 294.

■ The number of mirrors in a mirrored volume cannot be changed using relayout.
■ Only one relayout may be applied to a volume at a time.
Volume resynchronization
When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly.

Resynchronization of data in the volume is done in the background. This allows the volume to be available for use while recovery is taking place. The process of resynchronization can impact system performance.
becomes the least recently accessed for writes. This allows writes to the same region to be written immediately to disk if the region's log bit is set to dirty.
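The core idea of dirty region logging can be sketched as a bitmap keyed by region number: a write first sets its region's bit, and after a crash only the dirty regions need mirror resynchronization. This is a toy model, not the on-disk VxVM log format; the region size and method names are invented.

```python
# Sketch: a toy dirty-region log. One bit (here, one set entry) covers
# a fixed-size region of the volume.

class DirtyRegionLog:
    def __init__(self, region_len):
        self.region_len = region_len   # blocks covered by one log bit
        self.dirty = set()             # region numbers whose bit is set

    def before_write(self, offset):
        # The log bit must be set (and flushed) before the data write
        # reaches any mirror, so a crash can never leave an unlogged
        # region inconsistent.
        self.dirty.add(offset // self.region_len)

    def regions_to_recover(self):
        return sorted(self.dirty)

drl = DirtyRegionLog(region_len=1024)
drl.before_write(100)
drl.before_write(900)    # same region as the first write: no new bit
drl.before_write(5000)
print(drl.regions_to_recover())  # [0, 4]
```

Because many writes share one region bit, the log stays small while still bounding crash recovery to the regions that were actually being written.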
SmartSync recovery accelerator
The SmartSync feature of Veritas Volume Manager increases the availability of mirrored volumes by only resynchronizing changed data. (The process of resynchronizing mirrored databases is also sometimes referred to as resilvering.)

Redo log volume configuration
A redo log is a log of changes to the database data. Because the database does not maintain changes to the redo logs, it cannot provide information about which sections require resilvering.
Figure 1-31 Volume snapshot as a point-in-time image of a volume

The traditional type of volume snapshot in VxVM is of the third-mirror break-off type. This name comes from its implementation, where a snapshot plex (or third mirror) is added to a mirrored volume.

mirror snapshots, such as immediate availability and easier configuration and administration. You can also use the third-mirror break-off usage model with full-sized snapshots, where this is necessary for write-intensive applications.

FastResync

Note: You need a Veritas FlashSnap or FastResync license to use this feature.

The FastResync feature (previously called Fast Mirror Resynchronization, or FMR) performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized).
snapshot is taken, it can be accessed independently of the volume from which it was taken. In a clustered VxVM environment with shared access to storage.

Availability (HA) environment requires the full resynchronization of a mirror when it is reattached to its parent volume.

Version 0 DCO volume layout
In VxVM releases 3.2 and 3.5, the DCO object only managed information about the FastResync maps. These maps track writes to the original volume and to each of up to 32 snapshot volumes since the last snapshot operation.
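The payoff of those per-volume maps at reattach time can be sketched as simple set arithmetic: only regions written in either the original volume or the snapshot since the snapshot was taken must be copied. The map representation below (sets of region numbers) is an illustration of the idea, not the DCO on-disk format.

```python
# Sketch: FastResync-style reattach. Each map records the regions
# modified since the snapshot operation; their union is all that needs
# resynchronizing, instead of the whole volume.

def regions_to_resync(orig_map, snap_map):
    return sorted(orig_map | snap_map)

# Regions 2 and 7 changed on the original, 7 and 9 on the snapshot.
print(regions_to_resync({2, 7}, {7, 9}))  # [2, 7, 9]
```

This is what turns a reattach from a full-volume copy into a copy proportional to the amount of change.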
(by default) are used either for tracking writes to snapshots, or as copymaps. The size of the DCO volume is determined by the size of the regions that are tracked, and by the number of per-volume maps.

Figure 1-32 Mirrored volume with persistent FastResync enabled

To create a traditional third-mirror snapshot or an instant (copy-on-write) snapshot, the vxassist snapstart or vxsnap make operation respectively is performed on the volume.

Note: Space-optimized instant snapshots do not require additional full-sized plexes to be created. Instead, they use a storage cache that typically requires only 10% of the storage that is required by full-sized snapshots.

Note: The vxsnap reattach, dis and split operations are not supported for instant space-optimized snapshots.

See "Administering volume snapshots" on page 303, and the vxsnap(1M) and vxassist(1M) manual pages for more information.
different effects on the map that FastResync uses to track changes to the original volume:
■ For a version 20 DCO volume, the size of the map is increased and the size of the region that is tracked by each bit in the map stays the same.

association. However, in such a case, you can use the vxplex snapback command with the -f (force) option to perform the snapback.

Note: This restriction only applies to traditional snapshots.

and availability characteristics of the underlying volumes. For example, file system metadata could be stored on volumes with higher redundancy, and user data on volumes with better performance.
Chapter 2
Administering disks

This chapter describes the operations for managing disks used by the Veritas Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks.

and /dev/rdisk directories. To maintain backward compatibility, HP-UX also creates legacy devices in the /dev/dsk and /dev/rdsk directories. VxVM recreates disk devices for all paths in the operating system's hardware device tree as metadevices (DMP nodes) in the /dev/vx/dmp and /dev/vx/rdmp directories.

The syntax of a legacy device name is c#t#d#, where c# represents a controller on a host bus adapter, t# is the target controller ID, and d# identifies a disk on the target controller.
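That naming convention is easy to take apart mechanically. A small sketch (not a VxVM utility; it ignores any optional suffix a legacy name may carry):

```python
# Sketch: split an HP-UX legacy device name of the form c#t#d# into its
# controller, target, and disk numbers.

import re

def parse_legacy_name(name):
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)", name)
    if not m:
        raise ValueError("not a legacy c#t#d# name: %s" % name)
    return tuple(int(x) for x in m.groups())

print(parse_legacy_name("c0t4d0"))  # (0, 4, 0)
```

So c0t4d0 names disk 0 on target 4 of controller 0, matching the description above.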
Private and public disk regions
Most VM disks have two regions:

private region    A small area where configuration information is stored. A disk header label, configuration records for VxVM objects (such as volumes, plexes and subdisks), and an intent log for the configuration database are stored here.

auto    When the vxconfigd daemon is started, VxVM obtains a list of known disk device addresses from the operating system and configures disk access records for them automatically.

Discovering and configuring newly added disk devices
When you physically connect new disks to a host or when you zone new Fibre Channel devices to a host.
Alternatively, you can specify a ! prefix character to indicate that you want to scan for all devices except those that you specify.

Adding support for a new disk array
The following example illustrates how to add support for a new disk array named vrtsda.

See "Changing device naming for TPD-controlled enclosures" on page 94 for information on how to change the form of TPD device names that are displayed by VxVM.
This command displays the vendor ID (VID), product IDs (PIDs) for the arrays, array types (for example, A/A or A/P), and array names. The following is sample output.
# vxddladm listsupport libname=libvxfujitsu

Listing supported disks in the DISKS category
To list disks that are supported in the DISKS (JBOD) category, use the following command:

[length=serialno_length] [policy=ap]
where vendorid and productid are the VID and PID values that you found from the previous step. For example, vendorid might be FUJITSU, IBM, or SEAGATE.
For more information, enter the command vxddladm help addjbod, or see the vxddladm(1M) and vxdmpadm(1M) manual pages.

Removing disks from the DISKS category
To remove disks from the DISKS (JBOD) category, use the vxddladm command with the rmjbod keyword.

■ Enclosure information is not available to VxVM. This can reduce the availability of any disk groups that are created using such devices.
■ EFI disks that are under the control of HP-UX native multipathing cannot be initialized as foreign disks.

■ If the disk was previously in use by the LVM subsystem, you can preserve existing data while still letting VxVM take control of the disk. This is accomplished using conversion.
Alternatively, you can change the naming scheme from the command line. The following commands select enclosure-based and operating system-based naming, respectively:

# vxdmpadm getlungroup dmpnodename=disk25
VxVM vxdmpadm ERROR V-5-1-10910 Invalid da-name
# vxdmpadm getlungroup dmpnodename=Disk_11
NAME STA

Changing device naming for TPD-controlled enclosures

Note: This feature is available only if the default disk-naming scheme is set to use operating system-based naming, and the TPD-controlled enclosure does not contain fabric disks.

■ Persistent simple or nopriv disks in the boot disk group
■ Persistent simple or nopriv disks in non-boot disk groups

These procedures use the vxdarestore utility to handle errors in persistent simple and nopriv disks that arise from changing to the enclosure-based naming scheme.
3 If you want to use enclosure-based naming, use vxdiskadm to add a non-persistent simple disk to the bootdg disk group, change back to the enclosure-based naming scheme.

Displaying and changing default disk layout attributes
To display or change the default values for initializing disks, select menu item 21 (Change/display the default disk layout) from the vxdiskadm main menu.

disks available for use as replacement disks. More than one disk or pattern may be entered at the prompt.
3 To continue with the operation, enter y (or press Return) at the following prompt:
Here are the disks selected.

A site tag is usually applied to disk arrays or enclosures, and is not required unless you want to use the Remote Mirror feature. If you enter y to choose to add a site tag, you are prompted for the site name at step 11.

vxdiskadm then proceeds to add the disks.
Adding disk device device_name to disk group disk_group_name with disk name disk_name.

Note: To bring LVM disks under VxVM control, use the Migration Utilities.
Note: If you are adding an uninitialized disk, warning and error messages are displayed on the console during the vxdiskadd command.

VxVM root disk volume restrictions
Volumes on a bootable VxVM root disk have the following configuration restrictions:
■ All volumes on the root disk must be in the disk group that you choose to be the bootdg disk group.

Booting root volumes

Note: At boot time, the system firmware provides you with a short time period during which you can manually override the automatic boot process and select an alternate boot device.
Note: The -b option to vxcp_lvmroot uses the setboot command to define c0t4d0 as the primary boot device. If this option is not specified, the primary boot device is not changed.

Note: You may want to keep the LVM root disk in case you ever need a boot disk that does not depend on VxVM being present on the system. However, this may require that you update the contents of the LVM root disk in parallel with changes that you make to the VxVM root disk.

Adding swap volumes to a VxVM rootable system
To add a swap volume to an HP-UX system with a VxVM root disk:
1 Initialize the disk that is to be used to hold the swap volume.
Removing a persistent dump volume

Caution: The system will not boot correctly if you delete a dump volume without first removing it from the crash dump configuration.

Use this procedure to remove a dump volume from the crash dump configuration.

Any volumes on the device should only be grown after the device itself has first been grown. Otherwise, storage other than the device may be used to grow the volumes, or the volume resize may fail if no free storage is available.

Removing disks

Note: You must disable a disk group as described in "Disabling a disk group" on page 207 before you can remove the last disk in that group. Alternatively, you can destroy the disk group as described in "Destroying a disk group" on page 208.
Continue with operation? [y,n,q,?] (default: y)
The vxdiskadm utility removes the disk from the disk group and displays the following success message:
VxVM INFO V-5-2-268 Removal of disk mydg01 is complete.
You can now remove the disk or leave it on your system as a replacement.

Removing a disk with no subdisks
To remove a disk that contains no subdisks from its disk group, run the vxdiskadm program and select item

Removing and replacing disks
To replace a disk
1 Select menu item 3 (Remove a disk for replacement) from the vxdiskadm main menu.
The following devices are available as replacements:
c0t1d0
You can choose one of these disks now, to replace mydg02.

VxVM NOTICE V-5-2-158 Disk replacement completed successfully.
9 At the following prompt, indicate whether you want to remove another disk (y) or return to the vxdiskadm main menu (n).

c0t1d0 c1t1d0
You can choose one of these disks to replace mydg02. Choose "none" to initialize another disk to replace mydg02.
8 After using the vxdiskadm command to replace one or more failed disks in a VxVM cluster, run the following command on all the cluster nodes:
# vxdctl enable
Then run the following command on the master node:
# vxreattach -r accessname
where accessname is the disk access name (such as c0t1d0).

vxdiskadm enables the specified device.
3 At the following prompt, indicate whether you want to enable another device (y) or return to the vxdiskadm main menu (n).
Renaming a disk
If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type.

The vxassist command overrides the reservation and creates a 20 megabyte volume on mydg03. However, the command:
# vxassist -g mydg make vol04 20m
does not use mydg03, even if there is no free space on any other disk.

Displaying disk information with vxdiskadm
Displaying disk information shows you which disks are initialized, to which disk groups they belong, and the disk status.
Controlling Powerfail Timeout
Powerfail Timeout is an attribute of a SCSI disk connected to an HP-UX host. It is used to detect and handle I/O on non-responding disks. See the pfto(7) man page.

Enabling or disabling PFTO
To enable or disable PFTO on a disk, use the following command:
$ vxdisk -g dg_name set disk_name pftostate={enabled|disabled}
Chapter 3
Administering dynamic multipathing (DMP)

The dynamic multipathing (DMP) feature of Veritas Volume Manager (VxVM) provides greater availability, reliability, and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors.

For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs.

Figure 3-1 How DMP represents multiple physical paths to a disk as one node

As described in "Enclosure-based naming" on page 23, VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs.
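The metanode idea in Figure 3-1 can be sketched as a tiny model: several physical paths stand behind one device node, and I/O moves to an alternate path when the current one fails. This is an illustration of path failover only, not DMP's real policy engine; the path names are invented.

```python
# Sketch: a toy DMP-style metanode holding several physical paths to
# the same disk, with failover to the next enabled path.

class DmpNode:
    def __init__(self, paths):
        self.paths = list(paths)       # all physical paths, in order
        self.enabled = set(paths)      # paths currently usable

    def fail_path(self, path):
        self.enabled.discard(path)

    def pick_path(self):
        for p in self.paths:
            if p in self.enabled:
                return p
        raise IOError("all paths to the DMP node have failed")

node = DmpNode(["c1t99d0", "c2t99d0"])
print(node.pick_path())    # c1t99d0
node.fail_path("c1t99d0")
print(node.pick_path())    # I/O continues on c2t99d0
```

Applications keep addressing the single metanode throughout; the path switch underneath is invisible to them, which is the point of the design.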
128 Administering dyn amic multipathing (DMP) How DMP works See “ Changing the disk-naming scheme ” on page 91 for details of how to change the naming scheme that VxVM uses for disk devices.
129 Administe ring dynamic multipathing (DMP) How DMP works DMP is also informed when a connection is repaired or restored, and when you add or remove devices after the system ha s been fully booted (provided that the operating system recognizes the devices correctly).
DMP coexistence with HP-UX native multipathing
The HP-UX 11i v3 release includes support for native multipathing, which can coexist with DMP.
131 Administe ring dynamic multipathing (DMP) How DMP works 3 Rest art all the v olumes in each disk gr oup: # vxvol -g diskgroup startall The output from the vxdisk list command now shows only HP-UX .
132 Administering dyn amic multipathing (DMP) How DMP works and under the new naming scheme as : # vxdisk list DEVICE TYPE DISK GROUP STATUS disk155 auto:LVM - - LVM disk156 auto:LVM - - LVM disk224 a.
Enabling or disabling controllers with shared disk groups
Prior to release 5.0, VxVM did not allow enabling or disabling of paths or controllers connected to a disk that is part of a shared Veritas Volume Manager disk group.
◆ Select option 1 to exclude all paths through the specified controller from the view of VxVM. These paths remain in the disabled state until the next reboot, or until the paths are re-included.
? Display help about menu
?? Display help about the menuing system
q Exit from menus
◆ Select option 1 to make all paths through a specified controller visible to VxVM.
Enabling and disabling I/O for controllers and storage processors
DMP allows you to turn off I/O for a controller or the array port of a storage processor so that you can perform administrative operations.
137 Administe ring dynamic multipathing (DMP) Displaying DMP database information Displaying DMP database information You can use the vxdmpadm command to list DMP database information and perform other administrative tasks.
Displaying the paths to a disk device
tag: c1t0d3 type: simple hostid: zort disk: name=mydg04 id=962923652.
Administering DMP using vxdmpadm
The vxdmpadm utility is a command-line administrative interface to the DMP feature of VxVM. You can use the vxdmpadm utility to perform the following tasks.
140 Administering dyn amic multipathing (DMP) Administering DMP us ing vxdmpadm The physical path is spec ified by argument to the nodename attribute, which must be a valid pat h listed in the /dev/rdsk directory.
141 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm For A/P arrays in which the I/O policy is set to singleac tive , only one path is shown as ENABLED(A) .
142 Administering dyn amic multipathing (DMP) Administering DMP us ing vxdmpadm operations being disabled on that controller by using the vxdmpadm disable command.
143 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm NAME ENCLR-NAME ARRAY-PORT-ID pWWN ============================================================== c2t66d0 HDS9500V0 1A 20.
144 Administering dyn amic multipathing (DMP) Administering DMP us ing vxdmpadm Gathering and displa ying I/O statistics You can use the vxdmpadm iostat command to gather and display I/O statistics for a specified DMP node, enclosure, path or controller.
145 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm c2t115d0 87 0 44544 0 0.001200 0.000000 c3t115d0 0 0 0 0 0.000000 0.000000 c2t103d0 87 0 44544 0 0.007315 0.000000 c3t103d0 0 0 0 0 0.000000 0.000000 c2t102d0 87 0 44544 0 0.
146 Administering dyn amic multipathing (DMP) Administering DMP us ing vxdmpadm c3t115d0 0 0 0 0 0.000000 0.000000 cpu usage = 59us per cpu memory = 4096b OPERATIONS BYTES AVG TIME(ms) PATHNAME READS WRITES READS WRITES READS WRITES c3t115d0 0 0 0 0 0.
147 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm ■ primary Defines a path as being the primary path for an Active/Passi ve disk array.
Note: Starting with release 4.1 of VxVM, I/O policies are recorded in the file /etc/vx/dmppolicy.info, and are persistent across reboots of the system. Do not edit this file yourself.
You can use the size argument to the partitionsize attribute to specify the partition size. The partition size in blocks is adjustable in powers of 2 from 2 up to 2^31, as illustrated in the table below. The default value for the partition size is 1024 blocks (1MB).
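The constraint that the partition size must be a power of 2 in the range 2 to 2^31 can be sketched with a small shell calculation. This is purely illustrative and not part of VxVM; whether DMP rounds a non-conforming value up or to the nearest power of 2 is not specified here, so this sketch simply rounds up.

```shell
# Illustrative only: clamp a requested partitionsize value to a power
# of 2 between 2 and 2^31, the range the attribute accepts.
round_partitionsize() {
    req=$1
    p=2
    # double until p reaches the requested size or the 2^31 ceiling
    while [ "$p" -lt "$req" ] && [ "$p" -lt 2147483648 ]; do
        p=$((p * 2))
    done
    echo "$p"
}

round_partitionsize 1000    # rounds up to 1024 blocks (the 1MB default)
```

For example, a request of 1000 blocks would land on 1024 blocks, which is also the default value.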
■ minimumq
This policy sends I/O on paths that have the minimum number of outstanding I/O requests in the queue for a LUN. This is suitable for low-end disks or JBODs where a significant track cache does not exist.
# vxdmpadm setattr arrayname DISK iopolicy=singleactive
Scheduling I/O on the paths of an Asymmetric Active/Active array
152 Administering dyn amic multipathing (DMP) Administering DMP us ing vxdmpadm # dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null & By running the vxdmpadm iostat command to display the DMP statistics for the device, it can be seen that all I/O is being directed to one path, c5t4d15 : # vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2 .
153 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm c4t2d15 1086 0 1086 0 0.390424 0.000000 c4t3d15 1048 0 1048 0 0.391221 0.
The disable operation fails if it is issued to a controller that is connected to the root disk through a single path, and there are no root disk mirrors configured on alternate paths.
155 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm For a system with a volume mirrored acro ss 2 controllers on one HBA, set up the configuration as follows: 1 Disable the .
Configuring the response to I/O failures
By default, DMP is configured to retry a failed I/O request up to 5 times for a single path.
The following example configures time-bound recovery for the enclosure enc0, and sets the value of iotimeout to 60 seconds.
158 Administering dyn amic multipathing (DMP) Administering DMP us ing vxdmpadm The following example shows how to disabl e I/O throttling for the paths to the enclosure enc0 : # vxdmpadm setattr encl.
Displaying recoveryoption values
The following example shows the vxdmpadm getattr command being used to display the recoveryoption option values that are set on an enclosure.
Configuring DMP path restoration policies
DMP maintains a kernel thread that re-examines the condition of paths at a specified interval. The type of analysis that is performed on the paths depends on the checking policy that is configured.
The interval attribute must be specified for this policy. The default number of cycles between running the check_all policy is 10. The interval attribute specifies how often the path restoration thread examines the paths.
Displaying information about the DMP error-handling thread
To display information about the kernel thread that handles DMP errors, use the following command:
# vxdmpadm stat errord
One daemon should be shown as running.
163 Administe ring dynamic multipathing (DMP) Administering DMP using vxdmpadm Note: By default, DMP uses the most recent APM that is available. Specify the -u option instead of the -a option if you want to force DMP to use an earlier version of the APM.
Chapter 4
Creating and administering disk groups
This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group.
166 Creating and admi nistering disk gr oups As system administrator, you can create additional disk groups to arrange your system’s disks for different purposes. Ma ny systems do not use more than one disk group, unless they have a large nu mber of disks.
Specifying a disk group to commands
Note: Most VxVM commands require superuser or equivalent privileges.
Many VxVM commands allow you to specify a disk group using the -g option.
Rules for determining the default disk group
It is recommended that you use the -g option to specify a disk group to VxVM commands that accept this option.
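The search order that applies when no -g option is given can be sketched as plain shell logic. This is an illustrative sketch only: it assumes the VXVM_DEFAULTDG environment variable that VxVM consults, and uses a plain DEFAULTDG variable as a stand-in for the value configured with vxdctl defaultdg.

```shell
# Illustrative resolution order for the default disk group.
# $1 stands for the value of an explicit -g option, if any.
resolve_diskgroup() {
    if [ -n "$1" ]; then
        echo "$1"                  # 1. an explicit -g option always wins
    elif [ -n "$VXVM_DEFAULTDG" ]; then
        echo "$VXVM_DEFAULTDG"     # 2. environment variable (assumed name)
    elif [ -n "$DEFAULTDG" ]; then
        echo "$DEFAULTDG"          # 3. stand-in for 'vxdctl defaultdg'
    else
        echo "bootdg"              # 4. fall back to the boot disk group
    fi
}
```

For example, `resolve_diskgroup mydg` returns mydg regardless of any configured default, matching the recommendation to pass -g explicitly.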
169 Creating and administering disk groups Displaying disk gro up information If bootdg is specified as the argument to th is command, the default disk group is set to be the same as the currently defined system-wide boot disk group. If nodg is specified as the argument to the vxdctl defaultdg command, the default disk group is undefined.
170 Creating and admi nistering disk gr oups Creating a disk group flags: online ready private autoconfig autoimport imported diskid: 963504891.1070.bass dgname: newdg dgid: 963504895.
A disk group must have at least one disk associated with it. A new disk group can be created when you use menu item 1 (Add or initialize one or more disks) of the vxdiskadm command to add disks to VxVM control, as described in "Adding a disk to VxVM" on page 97.
Removing a disk from a disk group
Note: Before you can remove the last disk from a disk group, you must disable the disk group as described in "Disabling a disk group" on page 207.
■ There is not enough space on the remaining disks.
■ Plexes or striped subdisks cannot be allocated on different disks from existing plexes or striped subdisks in the volume.
174 Creating and admi nistering disk gr oups Importing a disk group Enter name of disk group [<group>,list,q,?] (default: list) newdg 5 At the f ollowing pr ompt , enter y if you int end to remo ve the disks in this disk gr oup: VxVM INFO V-5-2-377 The requested operation is to disable access to the removable disk group named newdg.
175 Creating and administering disk groups Handling disk s with duplicated identifiers Enable access to (import) a disk group Menu: VolumeManager/Disk/EnableDiskGroup Use this operation to enable access to a disk group. This can be used as the final part of moving a disk group from one system to another.
176 Creating and admi nistering disk gr oups Handling disks with dupl icated identifiers compared with the UDID that is set in the disk’s private region. If the UDID values do n ot match, the udid_mismatch flag is set on the disk. This flag can be viewed with the vxdisk list command.
177 Creating and administering disk groups Handling disk s with duplicated identifiers # vxdg -o useclonedev=on [ -o updateid ] import mydg Note: This form of the command allows only cloned disks to be imported. All non-cloned disks remain unimported.
178 Creating and admi nistering disk gr oups Handling disks with dupl icated identifiers To check which disks in a disk group contain copies of this configuration information, use the vxdg listmeta command: # vxdg [-q] listmeta diskgroup The -q option can be specified to suppress detailed configuration information from being display ed.
179 Creating and administering disk groups Handling disk s with duplicated identifiers These tags can be viewed by using the vxdisk listtag command: # vxdisk listtag DEVICE NAME VALUE TagmaStore-USP0_.
180 Creating and admi nistering disk gr oups Handling disks with dupl icated identifiers To import the cloned disks, they must be assigned a new disk group name, and their UDIDs must be updated: # vxd.
181 Creating and administering disk groups Handling disk s with duplicated identifiers DEVICE TYPE DISK GROUP STATUS EMC0_1 auto:cdsdisk EMC0_1 mydg online EMC0_27 auto:cdsdisk - - online udid_mismatc.
182 Creating and admi nistering disk gr oups Handling disks with dupl icated identifiers As the cloned disk EMC0_15 is not tagged as t1 , it is not imported. Note that the state of the imported clon ed disks has changed from online udid_mismatch to online clone_disk .
Renaming a disk group
Only one disk group of a given name can exist per system. It is not possible to import or deport a disk group when the target system already has a disk group of the same name.
dgid: 774226267.1025.tweety
Note: In this example, the administrator has chosen to name the boot disk group as rootdg. The ID of this disk group is 774226267.1025.tweety.
185 Creating and administering disk groups Moving disk groups between systems You can also move a disk by using the vxdiskadm command. Select item 3 (Remove a disk) from the main menu, and then select item 1 (Add or initialize a disk) .
Caution: The purpose of the lock is to ensure that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time.
187 Creating and administering disk groups Moving disk groups between systems The following error message indicates a recoverable error. VxVM vxdg ERROR V-5-1-587 Disk group groupname : import failed:.
minor numbers near the top of this range to allow for temporary device number remapping in the event that a device minor number collision may still occur. VxVM reserves the range of minor numbers from 0 to 999 for use with volumes in the boot disk group.
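The reserved range described above can be checked before choosing a base minor number for a new disk group. The following is an illustrative shell check, not a VxVM command; it only encodes the 0 to 999 reservation stated in this section.

```shell
# Illustrative: reject a proposed base minor number that falls inside
# the 0-999 range that VxVM reserves for boot disk group volumes.
check_base_minor() {
    if [ "$1" -le 999 ]; then
        echo "reserved"    # would clash with the boot disk group range
    else
        echo "ok"
    fi
}

check_base_minor 500     # reserved
check_base_minor 38000   # ok
```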
189 Creating and administering disk groups Moving disk groups between systems reminor operation on the nodes that are in th e cluster to resolve the conflict. In a cluster where more than one node is joined, use a base minor number whic h does not conflict on any node.
190 Creating and admi nistering disk gr oups Handling conflicti ng configuration copies You can use the foll owing command to discover the maximum number of volumes that are supported by VxVM on a Linux host: # cat /proc/sys/vxvm/vxio/vol_max_volumes 4079 See the vxdg (1M) manual page for more information.
191 Creating and administering disk groups Handling confli cting configuration copies Figure 4-1 T ypical arrangement of a 2-node campus c luster A serial split brain condition typically arises in a cluster when a private (non- shared) disk group is imported on Node 0 with Node 1 configured as the failover node.
192 Creating and admi nistering disk gr oups Handling conflicti ng configuration copies for the disks in their copies of the config uration database, and also in each disk’s private region, are updated separately on that host.
193 Creating and administering disk groups Handling confli cting configuration copies ■ If the other disks wer e also imported on another host, no disk can be consider ed to hav e a definitive c opy of the conf igurat ion database. The figur e below illustr ates how th is condition ca n arise for tw o disks.
194 Creating and admi nistering disk gr oups Handling conflicti ng configuration copies The following section, “ Correcting conflicting configuration information ,” describes how to fix this condition.
195 Creating and administering disk groups Reorgani zing the contents of disk group s In this example, the disk group has four disks, and is split so that two disks appear to be on each side of the split.
■ To perform online maintenance and upgrading of fault-tolerant systems that can be split into separate hosts for this purpose, and then rejoined.
197 Creating and administering disk groups Reorgani zing the contents of disk group s imported disk gr oup exists with the same name as the tar get disk gr oup. An exist ing deported disk group is destr oy ed if it has the same name as the tar get disk gr oup (as is the c ase for the vxdg init com ma n d ).
198 Creating and admi nistering disk gr oups Reorganizing th e contents of disk groups Figure 4-6 Disk group join operation These operations are performed on VxVM objects such as disks or top-level volumes, and include all component objects such as sub-volumes, plexes and subdisks.
199 Creating and administering disk groups Reorgani zing the contents of disk group s must recover the disk group manually as described in the section “Recovery from Incomplete Disk Group Moves” in the chapter “Recov ery from Hardware Failure” of the Veritas Volume Manager Troubleshooting Guide .
200 Creating and admi nistering disk gr oups Reorganizing th e contents of disk groups within st orage pools ma y not be split or moved. See the Veritas Storage Foundation Intelligent Storage Provisioni ng Administrator’s Guide fo r a descript ion of ISP and stor age pools.
201 Creating and administering disk groups Reorgani zing the contents of disk group s plexes were placed on the same disks as the data plexes for convenience when performing disk group split and move operations.
Figure 4-7 Examples of disk groups that can and cannot be split (diagram of volumes, data plexes, and snapshot plexes)
Moving objects between disk groups
To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command:
# vxdg [-o expand] [-o override|verify] move sourcedg targetdg object ...
204 Creating and admi nistering disk gr oups Reorganizing th e contents of disk groups For example, the following output from vxprint shows the contents of disk groups rootdg and mydg : # vxprint Disk.
205 Creating and administering disk groups Reorgani zing the contents of disk group s Disk group: mydg TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0 dg mydg mydg - - - - - - dm mydg07 c1t99d0.
206 Creating and admi nistering disk gr oups Reorganizing th e contents of disk groups The output from vxprint after the split shows the new disk group, mydg : # vxprint Disk group: rootdg TY NAME ASS.
207 Creating and administering disk groups Disabling a disk group Disk group: mydg TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0 dg mydg mydg - - - - - - dm mydg05 c1t96d0 - 17678493 - - - - .
Destroying a disk group
The vxdg command provides a destroy option that removes a disk group from the system and frees the disks in that disk group for reinitialization:
# vxdg destroy diskgroup
Caution: This command destroys all data on the disks.
209 Creating and administering disk groups Upgrading a disk group becomes incompatible with earlier releases of VxVM that do not support the new version.
210 Creating and admi nistering disk gr oups Upgrading a disk group Importing the disk group of a previous ver sion on a Veritas Volume Manager system prevents the use of features intr oduced since that version was released.
211 Creating and administering disk groups Upgrading a disk group To list the version of a di sk group, use this command: # vxdg list dgname You can also determine the disk group version by using the vxprint command with the -l format option.
To create a disk group with a previous version, specify the -T version option to the vxdg init command. For example, to create a disk group with version 120 that can be imported by a system running VxVM 4.
213 Creating and administering disk groups Backing up an d restoring di sk group conf iguration da ta For more information about how to use vxdctl , refer to the vxdctl (1M) manual page.
Using vxnotify to monitor configuration changes
Chapter 5
Creating and administering subdisks
This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a Veritas Volume Manager (VxVM) configuration that are required to create plexes and volumes.
Note: Most VxVM commands require superuser or equivalent privileges.
Note: As for all VxVM commands, the default size unit is s, representing a sector. Add a suffix, such as k for kilobyte, m for megabyte, or g for gigabyte, to change the unit of size.
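The effect of these suffixes can be sketched with a small converter. This is illustrative only: it assumes a 1024-byte sector, which is an assumption for the example rather than a constant stated in this section.

```shell
# Illustrative: convert a vxassist-style size string (default unit s,
# i.e. sectors; k, m, g suffixes) into sectors, assuming 1024-byte
# sectors. Not a VxVM utility.
to_sectors() {
    size=$1
    num=${size%[skmg]}               # strip a trailing unit letter, if any
    case "$size" in
        *k) echo "$num" ;;                   # 1 KB equals one 1024-byte sector
        *m) echo $((num * 1024)) ;;          # megabytes
        *g) echo $((num * 1024 * 1024)) ;;   # gigabytes
        *)  echo "$num" ;;                   # bare number or s suffix: sectors
    esac
}

to_sectors 20m    # a 20-megabyte size expressed in 1024-byte sectors
```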
Moving subdisks
Moving a subdisk copies the disk space contents of a subdisk onto one or more other subdisks. If the subdisk being moved is associated with a plex, then the data stored on the original subdisk is copied to the new subdisks.
For example, to split subdisk mydg03-02, with size 2000 megabytes, into subdisks mydg03-02, mydg03-03, mydg03-04 and mydg03-05, each with 500 megabytes
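The arithmetic behind such an even split can be checked with a trivial shell calculation (illustrative only, not a VxVM command):

```shell
# Illustrative: size of each piece when a subdisk is split into N equal
# parts, e.g. the 2000 MB subdisk in the example split four ways.
split_size() {
    total_mb=$1
    pieces=$2
    echo $((total_mb / pieces))
}

split_size 2000 4    # each of the four resulting subdisks gets 500 MB
```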
219 Creating and administering subdisks Associatin g subdisks wit h plexes Subdisks can also be associated with a pl ex that already exists. To associate one or more subdisks with an existing plex, use the following command: # vxsd [-g diskgroup ] assoc plex subdisk1 [ subdisk2 subdisk3 .
220 Creating and admini stering subdis ks Associatin g log su bdisks If the volume is enabled, the associatio n operation regenerates data that belongs on the subdisk. Otherwise, it is mark ed as stale and is recovered w hen the volume is started. Associating log subdisks Note: The version 20 DCO volume layout incl udes space for a DRL.
221 Creating and administering subdisks Dissociating su bdisks from plexes Dissociating subdisks from plexes To break an established conn ection betw een a subdisk and the plex to which it belongs, the subdisk is dissociated from the plex. A subdisk is dissociated when the subdisk is removed or used in anoth er plex.
222 Creating and admini stering subdis ks Changing subdi sk attributes ■ putil n ■ tutil n ■ len ■ comment The putil n field attributes are maintained on reboot; tutil n fields are temporary and are not retained on reboot. VxVM sets the putil0 and tutil0 utility fields.
Chapter 6
Creating and administering plexes
This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume.
224 Creating and administering plexes Creating a striped plex Creating a striped plex To create a striped plex, you must specif y additional attributes.
225 Creating and administering plexes Displaying plex information VxVM utilities use plex states to: ■ indicate w hether volume c ontents hav e been initialized t o a known state ■ determine if a .
EMPTY plex state
Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized.
IOFAIL plex state
The IOFAIL plex state is associated with persistent state logging.
227 Creating and administering plexes Displaying plex information SNAPTMP plex state The SNAPTMP plex state is used during a vxassist sn apstart operation when a snapshot is being prepared on a volume.
228 Creating and administering plexes Displaying plex information TEMPRMSD plex state The TEMPRMSD plex state is used by vxassist when attaching new data plexes to a volume. If the synchronizatio n operation does not complete, the plex and its subdisks are removed.
229 Creating and administering plexes Attaching and associating plexes Plex kernel states The plex kerne l state indicates the accessibility of the plex to the volume driver which monitors it. Note: No user intervention is required to se t these states; they are maintained internally.
Note: You can also use the command vxassist mirror volume to add a data plex as a mirror to an existing volume.
Taking plexes offline
Once a volume has been created and placed online (ENABLED), VxVM can temporarily disconnect plexes from the volume.
231 Creating and administering plexes Detaching plexes Detaching plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex [-g diskgroup ] det ple x For exam.
232 Creating and administering plexes Moving plex es If the vxinfo command shows that the volume is unstartable (see “Listing Unstartable Volumes” in the section “Recovery from Hardware Failure.
Copying volumes to plexes
This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled. The plex cannot be associated with any other volume.
234 Creating and administering plexes Changing plex attributes Alternatively, you can first dissociate the plex and subdisks, and then remove them with the following commands: # vxplex [-g diskgroup ] dis plex # vxedit [-g diskgroup ] -r rm plex When used together, these commands produce the same result as the vxplex -o rm dis command.
Chapter 7
Creating volumes
This chapter describes how to create volumes in Veritas Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration.
Types of volume layouts
VxVM allows you to create volumes with the following layout types:
Concatenated
A volume whose subdisks are arranged both sequentially and contiguously within a plex.
237 Creating volumes T ypes o f volume layouts Supported volume logs and maps Veritas Volume Manager supports the use of several types of logs and maps with volumes: ■ F astResyn c Maps are used t o perform q uick and efficient r esync hronization of mirr ors ( see “ FastResync ” on page 66 f or details ).
238 Creating volume s Creating a volume Ref er to the f ollowing sect ions for inf ormation on cr eating a v olume on which DRL is enabled: ■ “ Creating a volume with di rty region logging enabled ” on page 252 for creating a volume wi th DRL log plexes.
239 Creating volumes Using v xassist 3 Associate plex es with the volume using vxmake vol ; see “ Creating a volume using vxmake ” on pag e 258. 4 Initializ e the volume using vxvol start or vxvol init zero ; see “ Initializing and starting a volume created using vxmake ” on page 261.
240 Creating volume s Using vxass ist ■ Oper ations result in a set of co nfig uration changes that either suc ceed or fail as a gr oup, rather than individually . System cr ashes or other interrupt ions do not leave int ermediate states that you hav e to clean up.
241 Creating volumes Using v xassist The section, “ Creating a volume on any disk ” on page 243 describes the simplest way to create a vol ume with default attr ibutes.
242 Creating volume s Discovering the maximum siz e of a volume max_nstripe=8 min_nstripe=2 # for RAID-5, by default create between 3 and 8 stripe columns max_nraid5stripe=8 min_nraid5stripe=3 # by de.
To discover the value in blocks of the alignment that is set on a disk group, use this command:
# vxprint -g diskgroup -G -F %align
By default, vxassist automatically rounds up the volume size and attribute size values to a multiple of the alignment value.
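The round-up can be sketched as a one-line shell calculation. This is illustrative only; in practice vxassist performs the rounding itself, and the alignment value would come from the vxprint command above rather than a plain parameter.

```shell
# Illustrative: round a size up to the next multiple of the disk group
# alignment, mirroring what vxassist does automatically.
round_to_align() {
    size=$1
    align=$2
    echo $(( (size + align - 1) / align * align ))
}

round_to_align 1000 16    # 1000 rounded up to a multiple of 16 is 1008
```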
244 Creating volume s Creating a volume on specifi c disks Creating a volume on specific disks VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. If you wa nt a vo lume to be created on specific disks, you must designate those disks to VxVM.
Specifying ordered allocation of storage to volumes
Ordered allocation gives you complete control of space allocation. It requires that the number of disks that you specify to the vxassist command must match the number of disks that are required to create a volume.
246 Creating volume s Creating a volume on specifi c disks Figure 7-2 Example of using ord ered alloca tion to create a striped-mirror volume Additional ly, you can use the col_switch attribute to specify how to concatenate space on the disks into columns.
247 Creating volumes Creating a volume o n specific disks Figure 7-3 Example of using c oncatenated disk space to create a mi rrored- stri pe vo lum e Other storage specification classes for co ntrollers, enclosures, targets and trays can be used with ordered allocation .
248 Creating volume s Creating a volume on specifi c disks Figure 7-4 Example of storage allocation us ed to create a mirrored-stripe volume across controllers For other ways in whic h you c an control ho w vxassist lays out mirrored volumes across controllers, see “ Mirroring across targets, controllers or enclosures ” on page 255.
Creating a mirrored volume
Note: You need a full license to use this feature.
A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors.
250 Creating volume s Creating a volu me with a version 0 DCO volu me # vxassist [-b] [-g diskgroup ] make volume length layout=concat-mirror [nmirror= number ] Creating a volume with a version 0 DCO volume If a data change object (D CO) and DCO vo lume are associated with a volume, this allows Persistent FastResync to be used with the volume.
251 Creating volumes Creating a volume with a vers ion 0 DCO volume # vxdg list diskgr oup To upgrade a disk group to vers ion 90, use the following command: # vxdg -T 90 upgrade diskgroup For more information, see “ Upgrading a disk group ” on page 208.
Creating a volume with a version 20 DCO volume
To create a volume with an attached version 20 DCO object and volume:
1 Ensure that the disk group has been upgraded to the latest version.
253 Creating volumes Creating a striped volume Dirty region logging (DRL), if enable d, speeds recovery of mirrored volumes after a system crash. To enable DRL on a volume that is created within a dis.
254 Creating a striped volume You can specify the disks on which the volumes are to be created by including the disk names on the command line.
255 Mirroring across targets, controllers or enclosures for the attribute stripe-mirror-col-split-trigger-pt that is defined in the vxassist defaults file. If there are multiple subdisks per column, you can choose to mirror each subdisk individually instead of each column.
256 Creating a RAID-5 volume See “Specifying ordered allocation of storage to volumes” on page 245 for a description of other ways in which you can control how volumes are laid out on the specified storage.
257 Creating tagged volumes RAID-5 logs can be concatenated or striped plexes, and each RAID-5 log associated with a RAID-5 volume has a complete copy of the logging information for the volume. To support concurrent access to the RAID-5 array, the log should be several times the stripe size of the RAID-5 plex.
258 Creating a volume using vxmake Tag names and tag values are case-sensitive character strings of up to 256 characters. Tag names can consist of letters (A through Z and a through z), numbers (0 through 9), dashes (-), underscores (_) or periods (.).
259 If each column in a RAID-5 plex is to be created from multiple subdisks which may span several physical disks, you can specify to which column each subdisk should be added.
260 Initializing and starting a volume The following sample description file defines a volume, db, with two plexes, db-01 and db-02: #rty #name #options sd mydg03-01 disk=mydg03 .
261 As an alternative to the -b option, you can specify the init=active attribute to make a new volume immediately available for use.
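As a sketch of the init=active alternative described above (disk group and volume name are illustrative), the attribute is passed on the vxassist command line:

```shell
# Create a mirrored volume that is immediately available for use,
# skipping the initial synchronization. Use this only if the volume's
# contents will be written by the application before they are read.
vxassist -g mydg make volquick 2g layout=mirror nmirror=2 init=active
```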
262 Accessing a volume As soon as a volume has been created and initialized, it is available for use as a virtual disk partition by the operating system for the creation of a file system, or by application programs such as relational databases and other data management software.
Chapter 8 Administering volumes This chapter describes how to perform common maintenance tasks on volumes in Veritas Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, and changing the layout of volumes without taking them offline.
264 Displaying volume information You can use the vxprint command to display information about how a volume is configured.
265 # vxprint -g mydg -t voldef This is example output from this command: V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE v voldef .
266 INVALID volume state The contents of an instant snapshot volume no longer represent a true point-in-time image of the original volume. NEEDSYNC volume state The volume requires a resynchronization operation the next time it is started.
267 Note: No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all volumes are ENABLED. The following volume kernel states are defined: DETACHED volume kernel state Maintenance is being performed on the volume.
268 Any tasks started by the utilities invoked by vxrecover also inherit its task ID and task tag, so establishing a parent-child task relationship. For more information about the utilities that support task tagging, see their respective manual pages.
269 generated when the task completes. When this occurs, the state of the task is printed as EXITED. pause Puts a running task in the paused state, causing it to suspend operation. resume Causes a paused task to continue operation.
270 Stopping a volume Stopping a volume renders it unavailable to the user, and changes the volume kernel state from ENABLED or DETACHED to DISABLED. If the volume cannot be disabled, it remains in its current state.
271 Starting a volume Starting a volume makes it available for use, and changes the volume state from DISABLED or DETACHED to ENABLED. To start a DISABLED or DETACHED volume, use the following command: # vxvol [-g diskgroup] start volume
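A minimal sketch of the starting commands discussed here (disk group and volume names are illustrative):

```shell
# Start a single DISABLED or DETACHED volume:
vxvol -g mydg start vol01

# Alternatively, vxrecover -s recovers and then starts all
# startable volumes in the disk group:
vxrecover -g mydg -s
```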
272 Mirroring all volumes To mirror all volumes in a disk group to available disk space, use the following command: # /etc/vx/bin/vxmirror -g diskgroup.
273 You can choose to mirror volumes from disk mydg02 onto any available disk space, or you can choose to mirror onto a specific disk. To mirror to a specific disk, select the name of that disk. To mirror to any available disk space, select "any".
274 Adding logs and maps to volumes This command removes the mirror vol01-02 and all associated subdisks. This is equivalent to entering the following separate commands: # vxpl.
275 Preparing a volume for DRL and instant snapshots Note: This procedure describes how to add a version 20 data change object (DCO) and DCO volume to a volume that you previously created in a disk group with a version number of 110 or greater.
276 Note: The vxsnap prepare command automatically enables Persistent FastResync on the volume. Persistent FastResync is also set automatically on any snapshots that are generated from a volume on which this feature is enabled.
277 If required, you can use the vxassist move command to relocate DCO plexes to different disks.
278 Determining if DRL is enabled on a volume To determine if DRL (configured using a version 20 DCO volume) is enabled on a.
279 Upgrading existing volumes to use version 20 DCOs To re-enable DRL on a volume, enter this command: # vxvol [-g diskgroup] set drl=on volume To re-enable sequential DRL.
280 # vxdg list diskgroup To upgrade a disk group to the latest version, use the following command: # vxdg upgrade diskgroup For more information, see “Upgrading a disk group” on page 208.
281 Adding traditional DRL logging to a mirrored volume subsequently create from the snapshot plexes. For example, specify ndcomirs=5 for a volume with 3 data plexes and 2 snapshot plexes. The value of the regionsize attribute specifies the size of the tracked regions in the volume.
282 where each bit represents one region in the volume. For example, the size of the log would need to be 20K for a 10GB volume with a region size of 64 kilobytes.
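The log-sizing arithmetic above (one bit per tracked region, rounded up to whole bytes) can be checked with ordinary shell arithmetic; this is an illustrative calculation, not a VxVM command:

```shell
# 10 GB volume, 64 KB region size:
vol_bytes=$((10 * 1024 * 1024 * 1024))
region_bytes=$((64 * 1024))

# One DRL bit per region, rounding both divisions up:
regions=$(( (vol_bytes + region_bytes - 1) / region_bytes ))
log_bytes=$(( (regions + 7) / 8 ))
echo "$log_bytes"    # 20480 bytes = 20K, as stated above
```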
283 Adding a RAID-5 log Note: You need a full license to use this feature. Only one RAID-5 plex can exist per RAID-5 volume. Any additional plexes become RAID-5 log plexes, which are used to log information about data and parity being written to the volume.
284 Removing a RAID-5 log To identify the plex of the RAID-5 log, use the following command: # vxprint [-g diskgroup] -ht volume where volume is the name of the RAID-5 volume. For a RAID-5 log, the output lists a plex with a STATE field entry of LOG.
285 The vxassist command also allows you to specify an increment by which to change the volume’s size. Caution: If you use vxassist or vxvol to resize a volume, do not shrink it below the size of the file system which is located on it.
286 ■ Resizing a volume with a usage type other than FSGEN or RAID5 can result in loss of data. If such an operation is required, use the -f option to forcibly resize such a volume. ■ You cannot resize a volume that contains plexes with different layout types.
287 Note: If specified, the -b option makes growing the volume a background task. For example, to extend volcat by 100 sectors, use the following command: # .
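The grow and shrink operations described in this section can be sketched as follows (volume name and sizes are illustrative; remember the caution about never shrinking a volume below the size of the file system on it):

```shell
# Grow volcat by 100 sectors:
vxassist -g mydg growby volcat 100

# Grow volcat to an absolute length of 2000 sectors:
vxassist -g mydg growto volcat 2000

# Shrink volcat by 100 sectors -- only safe if the file system
# on the volume has already been shrunk first:
vxassist -g mydg shrinkby volcat 100
```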
288 Setting tags on volumes Note: The vxvol set len command cannot increase the size of a volume unless the needed space is available in the plexes of the volume. When the size of a volume is reduced using the vxvol set len command, the freed space is not released into the disk group’s free space pool.
289 # vxassist -g mydg settag myvol "dbvol=table space 1" Dotted tag hierarchies are understood by the list operation. For example, the listing for tag=a.b includes all volumes that have tag names that start with a.
290 For example, to set the policy for vol01 to read preferentially from the plex vol01-02, use the following command: # vxvol -g mydg rdpol prefer vol01 vol01-02
291 Moving volumes from a VM disk To move volumes from a disk 1 Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu.
292 Enabling FastResync on a volume Note: The recommended method for enabling FastResync on a volume with a version 20 DCO is to use the vxsnap prepare command as described in “Preparing a volume for DRL and instant snapshots” on page 275.
293 Note: To use FastResync with a snapshot, FastResync must be enabled before the snapshot is taken, and must remain enabled until after the snapback is completed.
294 Performing online relayout Note: You need a full license to use this feature. You can use the vxassist relayout command to reconfigure the layout of a volume without taking it offline.
295 Permitted relayout transformations The tables below give details of the relayout operations that are possible for each type of source storage layout. Table 8-2 Supported relayout transformations for concatenated volumes. Relayout from concat: to concat — No.
296 Table 8-4 Supported relayout transformations for RAID-5 volumes. Relayout from raid5: to concat — Yes; to concat-mirror — Yes; to mirror-concat — No (use vxassist convert after relayout to concatenated-mirror volume instead).
297 Table 8-6 Supported relayout transformations for mirrored-stripe volumes. Relayout from mirror-stripe: to concat — Yes; to concat-mirror — Yes; to mirror-concat — No (use vxassist convert after relayout to concatenated-mirror volume instead).
298 Specifying a non-default layout You can specify one or more relayout options to change the default layout configuration. Examples of these options are: ncol=number Specifies the number of columns.
299 Viewing the status of a relayout Online relayout operations take some time to perform. You can use the vxrelayout command to obtain information about the status of a relayout operation.
300 inserts a delay of 1000 milliseconds (1 second) between copying each 10-megabyte region: # vxrelayout -g mydg -o bg,slow=1000,iosize=10m start vol04 The default delay and region size values are 250 milliseconds and 1 megabyte respectively.
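The slow and iosize attributes trade relayout speed against application impact. The total delay they insert can be estimated with a quick calculation (a rough model only; it ignores the copy time itself):

```shell
# Inserted delay for a 1 GB (1024 MB) volume at the defaults
# (250 ms pause after each 1 MB region):
default_s=$(( 1024 * 250 / 1000 ))
echo "$default_s"    # 256 seconds of inserted delay

# With the example settings above (1000 ms pause per 10 MB region),
# rounding the region count up:
regions=$(( (1024 + 9) / 10 ))
example_s=$(( regions * 1000 / 1000 ))
echo "$example_s"    # 103 seconds
```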
301 Converting between layered and non-layered volumes When the relayout has completed, use the vxassist convert command to change the resulting layered striped-mirror volume .
Chapter 9 Administering volume snapshots Veritas Volume Manager (VxVM) provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot. You can also take a snapshot of a volume set as described in “Creating instant snapshots of volume sets” on page 334.
304 Note: A volume snapshot represents the data that exists in a volume at a given point in time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system.
305 Traditional third-mirror break-off snapshots The traditional third-mirror break-off volume snapshot model that is supported by the vxassist command is shown in Figure 9-1.
306 its data plexes. The snapshot volume contains a copy of the original volume’s data at the time that you took the snapshot. If more than one snapshot mirror is used, the snapshot volume is itself mirrored.
307 Full-sized instant snapshots Full-sized instant snapshots are a variation on the third-mirror volume snapshot model that make a snapshot volume available for access as soon as the snapshot plexes have been created.
308 volume are updated, its original contents are gradually relocated to the snapshot volume. If desired, you can additionally select to perform either a background (non-blocking) or foreground (blocking) synchronization of the snapshot volume.
309 Space-optimized instant snapshots Volume snapshots, such as those described in “Traditional third-mirror break-off snapshots.
310 Emulation of third-mirror break-off snapshots As for instant snapshots, space-optimized snapshots use a copy-on-write mechanism to make them immediately available for use when they are first created, or when their data is refreshed.
311 Linked break-off snapshot volumes ■ Use the vxsnap make command with the sync=yes and type=full attributes specified to create the snapshot volume, and then use the vxsnap syncwait command to wait for synchronization of the snapshot volume to complete.
312 to recover the mirror volume in the same way as for a DISABLED volume. See “Starting a volume” on page 271. If you resize (that is, grow or shrink) a volume, all its ACTIVE linked mirror volumes are also resized at the same time.
313 to read data from an older snapshot that does not exist in that snapshot, it is obtained by searching recursively up the hierarchy of more recent snapshots.
314 Figure 9-5 Creating a snapshot of a snapshot Even though the arrangement of the snapshots in this figure appears similar to the snapshot hierarchy shown in “Snapshot cascade” on page 312, the relationship between the snapshots is not recursive.
315 Figure 9-6 Using a snapshot of a snapshot to restore a database If you have configured snapshots in this way, you may wish to make one or more of the snapshots into independent volumes.
316 Figure 9-7 Dissociating a snapshot volume ■ vxsnap split dissociates a snapshot and its dependent snapshots from its parent volume. The snapshot volume that is to be split must have been fully synchronized from its parent volume.
317 Figure 9-8 Splitting snapshots Creating multiple snapshots To make it easier to create snapshots of several volumes at the same time, both the vxsnap make and vxassist snapshot commands accept more than one volume name as their argument.
318 Restoring the original volume from a snapshot Figure 9-9 Resynchronizing an original volume from a snapshot Note: The original volume must not be in use during a snapback operation that specifies the option -o resyncfromreplica to resynchronize the volume from a snapshot.
319 Creating instant snapshots Note: You need a full license to use this feature. VxVM allows you to make instant snapshots of volumes by using the vxsnap command.
320 You can create instant snapshots of volume sets by replacing volume names with volume set names in the vxsnap command. For more information, see “Creating instant snapshots of volume sets” on page 334.
321 Preparing to create instant and break-off snapshots To prepare a volume for the creation of instant and break-off snapshots 1 Use the.
322 created, and it must also have the same region size. See “Creating a volume for use as a full-sized instant or linked break-off snapshot” on page 323 for details.
323 Note: All space-optimized snapshots that share the cache must have a region size that is equal to or an integer multiple of the region size set on the cache. Snapshot creation also fails if the original volume’s region size is smaller than the cache’s region size.
324 4 Use the vxassist command to create a volume, snapvol, of the required size and redundancy, together with a version 20 DC.
325 For space-optimized instant snapshots that share a cache object, the specified region size must be greater than or equal to the region size specified for the cache object. See “Creating a shared cache object” on page 322 for details.
326 For example, to create the space-optimized instant snapshot, snap4myvol, of the volume, myvol, in the disk group, mydg, on the disk.
327 Creating and managing full-sized instant snapshots Note: Full-sized instant snapshots are not suitable for write-intensive volumes (such as for database redo logs) because the copy-on-write mechanism may degrade the performance of the volume.
328 If required, you can use the following command to test if the synchronization of a volume is complete: # vxprint [-g diskgroup] -F%incomplete snapvol This command returns the value off if synchronization of the volume, snapvol, is complete; otherwise, it returns the value on.
329 ■ Dissociate the snapshot volume entirely from the original volume. This may be useful if you want to use the copy for other purposes such as testing or report generation. If desired, you can delete the dissociated volume.
330 If you specify the -b option to the vxsnap addmir command, you can use the vxsnap snapwait command to wait for synchronization of the s.
331 synchronization was already in progress on the snapshot, this operation may result in large portions of the snapshot having to be resynchronized. See “Refreshing an instant snapshot” on page 337 for details.
332 [mirdg=snapdg] The optional mirdg attribute can be used to specify the snapshot volume’s current disk group, snapdg. The -b option can be used to perform the synchronization in the background.
333 Note: This operation is not possible if the linked volume and snapshot are in different disk groups. ■ Reattach the snapshot volume with the original volume. See “Reattaching a linked break-off snapshot volume” on page 339 for details.
334 In this example, snapvol1 is a full-sized snapshot that uses a prepared volume, snapvol2 is a space-optimized snapshot that uses a prepared cache, and snapvol3 is a break-off full-sized snapshot that is formed from plexes of the original volume.
335 VOLUME INDEX LENGTH KSTATE CONTEXT svol_0 0 204800 ENABLED - svol_1 1 409600 ENABLED - svol_2 2 614400 ENABLED - A full-sized instant sna.
336 Adding snapshot mirrors to a volume If you are going to create a full-sized break-off snapshot volume, you can use the following command.
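Putting the pieces of this section together, a break-off snapshot workflow can be sketched as follows (the volume, snapshot and plex names are illustrative, and the plex name must match the mirror actually added):

```shell
# Add a snapshot mirror to myvol and wait for it to synchronize:
vxsnap -g mydg -b addmir myvol
vxsnap -g mydg snapwait myvol

# Break off the synchronized plex as a full-sized snapshot volume
# (myvol-02 is a hypothetical plex name):
vxsnap -g mydg make source=myvol/newvol=snapmyvol/plex=myvol-02
```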
337 Note: This command is similar in usage to the vxassist snapabort command. If a volume set name is specified instead of a volume, a mirror is removed from each volume in the volume set.
338 To disable resynchronization, specify the syncing=no attribute. This attribute is not supported for space-optimized snapshots. Note: The snapshot being refreshed must not be open to any application.
339 snapwait command (but not vxsnap syncwait) to wait for the resynchronization of the reattached plexes to complete, as shown here: # vxsn.
340 syncwait) to wait for the resynchronization of the reattached volume to complete, as shown here: # vxsnap -g snapdg snapwait myvol mir.
341 snapshots remain, snapvol may be dissociated. The snapshot hierarchy is then adopted by snapvol’s parent volume. Note: To be usable after dissociation, the snapshot volume and any snapshots in the hierarchy must have been fully synchronized.
342 Note: The topmost snapshot volume in the hierarchy must have been fully synchronized for this command to succeed. Snapshots that are lower down in the hierarchy need not have been fully resynchronized.
343 Alternatively, you can use the vxsnap list command, which is an alias for the vxsnap -n print command: # vxsnap [-g diskgroup] [-l] [-v.
344 See the vxsnap(1M) manual page for more information about using the vxsnap print and vxsnap list commands. Controlling instant snapshot synchronization Note: Synchronization of the contents of a snapshot with its original volume is not possible for space-optimized instant snapshots.
345 instant snapshot” on page 338 and “Reattaching a linked break-off snapshot volume” on page 339 for details.
346 Tuning the autogrow attributes of a cache The highwatermark, autogrowby and maxautogrow attributes determine how the VxVM cache daemon.
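The interaction of these attributes can be modelled with a simplified calculation (illustrative only — the real cache daemon also handles shrinking and snapshot deletion):

```shell
# A 1000-block cache that is 950 blocks (95%) full, with a 90%
# high watermark, autogrowby of 200 blocks, and maxautogrow of 2000:
size=1000; used=950; hwm=90; growby=200; maxauto=2000

# Grow while usage exceeds the watermark, never beyond maxautogrow:
while [ "$size" -lt "$maxauto" ] && [ $((used * 100)) -gt $((size * hwm)) ]
do
    size=$((size + growby))
    if [ "$size" -gt "$maxauto" ]; then size=$maxauto; fi
done
echo "$size"    # 1200: one growth step brings usage below the watermark
```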
347 Caution: Ensure that the cache is sufficiently large, and that the autogrow attributes are configured correctly for your needs.
348 Creating traditional third-mirror break-off snapshots VxVM provides third-mirror break-off snapshot images of volume devices using vxassist and other commands.
349 creating the snapshot mirror is long in contrast to the brief amount of time that it takes to create the snapshot volume. The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror.
350 It is also possible to make a snapshot plex from an existing plex in a volume. See “Converting a plex into a snapshot plex” on page 351 for details. 2 Choose a suitable time to create a snapshot.
351 Note: Dissociating or removing the snapshot volume loses the advantage of fast resynchronization if FastResync was enabled.
352 To convert an existing plex into a snapshot plex in the SNAPDONE state for a volume on which Non-Persistent .
353 plexes are snapped back. This task resynchronizes the data in the volume so that the plexes are consistent.
354 2 Use the vxassist mirror command to create mirrors of the existing snapshot volume and its DCO volume:.
355 Displaying snapshot information The vxassist snapprint command displays the associations between the original.
356 Adding a version 0 DCO and DCO volume Note: The procedure described in this section adds a DCO log volume that has a version 0 layout as introduced in VxVM 3.
357 3 Use the following command to add a DCO and DCO volume to the existing volume: # vxassist [-g diskgroup] addlog volume.
358 the volume named vol1 (the TUTIL0 and PUTIL0 columns are omitted for clarity): TY NAME ASSOC KSTATE LENGTH PLOFFS STATE .
359 This form of the command dissociates the DCO object from the volume but does not destroy it or the DCO volume. If the -o rm option is specified, the DCO object, DCO volume and its plexes, and any snap objects are also removed.
Chapter 10 Creating and administering volume sets This chapter describes how to use the vxvset command to create and administer volume sets in Veritas Volume Manager (VxVM). Volume sets enable the use of the Multi-Volume Support feature with Veritas File System (VxFS).
362 Creating a volume set ■ Volume sets can be used in place of volumes with the following vxsnap operations on instant snapshots: addmir, dis, make, prepare, reattach, refresh, restore, rmmir, split, syncpause, syncresume, syncstart, syncstop, syncwait, and unprepare.
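A minimal volume-set workflow using the vxvset operations covered in this chapter (disk group, set and volume names are illustrative):

```shell
# Create a volume set containing an existing volume:
vxvset -g mydg make set1 vol1

# Add a second volume to the set (use -f if snapshots are involved):
vxvset -g mydg addvol set1 vol2

# List the set's component volumes:
vxvset -g mydg list set1
```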
363 Listing details of volume sets Caution: The -f (force) option must be specified if the volume being added, or any volume in the volume set, is either a snapshot or the parent of a snapshot.
364 Removing a volume from a volume set # vxvset -g mydg list set1 VOLUME INDEX LENGTH KSTATE CONTEXT vol1 0 12582912 DISABLED - vol2 1 12582912 DISABLED - vol.
365 Raw device node access to component volumes Caution: Writing directly to or reading from the raw device node of a component volume of a volume set should only be performed if it is known that the volume's data will not otherwise change during the period of access.
366 value of the makedev attribute is currently set to on. The access mode is determined by the current setting of the compvol_access attribute.
367 The syntax for setting the compvol_access attribute on a volume set is: # vxvset [-g diskgroup] [-f] set c.
Chapter 11 Configuring off-host processing Off-host processing allows you to implement the following activities: Data backup As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline.
370 Implementing off-host processing solutions Off-host processing is made simpler by using linked break-off snapshots, which are described in “Linked break-off snapshot volumes” on page 311.
371 ■ Implementing decision support These applications use the Persistent FastResync feature of VxVM in conjunction with linked break-off snapshots. Note: A volume snapshot represents the data that exists in a volume at a given point in time.
372 Note: If the volume was created under VxVM 4.0 or a later release, and it is not associated with a new-style DCO object and DCO volume, follow the procedure described in “Preparing a volume for DRL and instant snapshots” on page 275.
373 If a database spans more than one volume, you can specify all the volumes and their snapshot volumes using one command.
374 # vxsnap -g snapvoldg reattach snapvol source=vol sourcedg=volumedg For example, to reattach the snapshot volumes .
375 This command returns on if FastResync is enabled; otherwise, it returns off.
376 8 On the primary host, if you temporarily suspended updates to a volume in step 6, release all the database tables from hot backup mode.
377 For example, to reattach the snapshot volumes svol1, svol2 and svol3: # vxsnap -g sdg reattach svol1 source=vol1 so.
Chapter 12 Administering hot-relocation If a volume has a disk I/O failure (for example, the disk has an uncorrectable error), Veritas Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume.
380 How hot-relocation works Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects.
381 spares (marked spare) in the disk group where the failure occurred. It then relocates the subdisks to use this space.
382 Figure 12-1 Example of hot-relocation for a subdisk in a RAID-5 volume.
383 Partial disk failure mail messages If hot-relocation is enabled when a plex or disk is detached by a failure, mail indicating the failed objects is sent to root. If a partial disk failure occurs, the mail identifies the failed plexes.
384 Complete disk failure mail messages If a disk fails completely and hot-relocation is enabled, the mail message lists the disk that failed and all plexes that use the disk.
does not take place. If relocation is not possible, the system administrator is notified and no further action is taken. From the eligible disks, hot-relocation attempts to use the disk that is "closest" to the failed disk.

After a successful relocation, remove and replace the failed disk as described in "Removing and replacing disks" on page 112.

Marking a disk as a hot-relocation spare

Hot-relocation allows the system to react automatically to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected VxVM objects and data.
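A minimal sketch of the operation described above. The disk group and disk names are hypothetical; the vxedit form matches the command summary in Appendix A of this guide:

```shell
# Mark disk mydg01 as a hot-relocation spare in disk group mydg.
vxedit -g mydg set spare=on mydg01

# Spare disks are flagged in the disk listing; check the STATUS column.
vxdisk list
```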
electronic mail. After successful relocation, you may want to replace the failed disk.

To use vxdiskadm to exclude a disk from hot-relocation use

1 Select menu item 15 (Exclude a disk from hot-relocation use) from the vxdiskadm main menu.

Enter disk name [<disk>,list,q,?] mydg01

The following confirmation is displayed:

V-5-2-932 Making mydg01 in mydg available for hot-relocation use is complete.
Moving and unrelocating subdisks

Volume home Subdisk mydg02-03 relocated to mydg05-01, but not yet recovered.

Before you move any relocated subdisks, fix or replace the disk that failed (as described in "Removing and replacing disks" on page 112).

See "Moving and unrelocating subdisks using vxassist" on page 392 and "Moving and unrelocating subdisks using vxunreloc" on page 392.

Moving and unrelocating subdisks using vxassist

You can use the vxassist command to move and unrelocate subdisks.
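Both approaches can be sketched briefly. The volume and disk names are hypothetical; the vxunreloc form matches the example in Appendix A, and some shells require the `!` to be quoted or escaped:

```shell
# Move the storage that volume home currently uses on mydg05
# to mydg02 (excluding mydg05 from the new allocation).
vxassist -g mydg move home \!mydg05 mydg02

# Or move all hot-relocated subdisks back to their original disk.
vxunreloc -g mydg mydg02
```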
without using the original offsets. Refer to the vxunreloc(1M) manual page for more information.

Examining which subdisks were hot-relocated from a disk

If a subdisk was hot-relocated more than once due to multiple disk failures, it can still be unrelocated back to its original location.

If the system goes down after the new subdisks are created on the destination disk, but before all the data has been moved, re-execute vxunreloc when the system has been rebooted.

Alternatively, you can use the following command:

# nohup /etc/vx/bin/vxrelocd root user1 user2 &

See the vxrelocd(1M) manual page for more information.
Chapter 13
Administering cluster functionality

A cluster consists of a number of hosts or nodes that share a set of disks. The cluster functionality of Veritas Volume Manager (CVM) allows up to 16 nodes in a cluster to simultaneously access and manage a set of disks under VxVM control (VM disks).
Overview of cluster volume management

enabled, all the nodes in the cluster can share VxVM objects such as shared disk groups. Private disk groups are supported in the same way as in a non-clustered environment.

membership. Each node starts up independently and has its own cluster monitor plus its own copies of the operating system and VxVM with support for cluster functionality.

Figure 13-1 Example of a 4-node cluster

To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.

Private and shared disk groups

Two types of disk groups are defined: private and cluster-shareable. In a cluster, most disk groups are shared.

A cluster-shareable disk group is available as long as at least one node is active in the cluster. The failure of a cluster node does not affect access by the remaining active nodes.

The following table summarizes the allowed and conflicting activation modes for shared disk groups. Shared disk groups can be automatically activated in any mode during disk group creation or during manual or auto-import.
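A hedged sketch of setting an activation mode explicitly on one node. The disk group name and mode are examples only; confirm the supported mode names and the exact vxdg form against the vxdg(1M) manual page:

```shell
# Activate the shared disk group mydg for shared writes on this node
# (mode name is an assumption taken from typical CVM activation modes).
vxdg -g mydg set activation=sharedwrite
```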
Note: The activation mode of a disk group controls volume I/O from different nodes in the cluster. It is not possible to activate a disk group on a given node if it is activated in a conflicting mode on another node in the cluster.

policy. However, in some cases, it is not desirable to have all nodes react in this way to I/O failure. To address this, an alternate way of responding to I/O failures, known as the local detach policy, was introduced in release 3.

Local detach policy

Caution: Do not use the local detach policy if you use the VCS agents that monitor the cluster functionality ...

Disk group failure policy

The local detach policy by itself is insufficient to determine the desired behavior if the master node loses access to all disks that contain copies of the configuration database and logs.

Guidelines for choosing detach and failure policies

In most cases it is recommended that you use the global detach policy.

The default settings for the detach and failure policies are global and dgdisable respectively.
Cluster initialization and configuration

Before any nodes can join a new cluster for the first time, you must supply certain configuration information during cluster monitor setup.

During cluster reconfiguration, VxVM suspends I/O to shared disks. I/O resumes when the reconfiguration completes. Applications may appear to freeze for a short time during reconfiguration.
Table 13-5 Node abort messages

cannot find disk on slave node: Missing disk or bad disk on the slave node.
cannot obtain configuration data: The node cannot read the configuration data due to an error such as disk failure.
See the vxclustadm(1M) manual page for more information about vxclustadm and for examples of its usage.

Volume reconfiguration

Volume reconfiguration is the process of creating, changing, and removing VxVM objects such as disk groups, volumes and plexes.
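As a brief illustration of the utility named above (the output depends on the cluster; verify the subcommand against the vxclustadm(1M) manual page referenced in the text):

```shell
# Show the mapping between CVM cluster node IDs and host names.
vxclustadm nidmap
```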
When an error occurs, such as when a check on a slave fails or a node leaves the cluster, the error is returned to the utility and a message is sent to the console on the master node to identify on which node the error occurred.

stopped, volume reconfiguration cannot take place. Other nodes can join the cluster if the vxconfigd daemon is not running on the slave nodes.

Note: The -r reset option to vxconfigd restarts the vxconfigd daemon and recreates all states from scratch. This option cannot be used to restart vxconfigd while a node is joined to a cluster because it causes cluster information to be discarded.

Note: Once shutdown succeeds, the node has left the cluster. It is not possible to access the shared volumes until the node joins the cluster again. Since shutdown can be a lengthy process, other reconfiguration can take place while shutdown is in progress.

Multiple host failover configurations

corrupted. Similar corruption can also occur if a file system or database on a raw disk partition is accessed concurrently by two hosts, so this problem is not limited to Veritas Volume Manager.

For details on how to clear locks and force an import, see "Moving disk groups between systems" on page 185 and the vxdg(1M) manual page.
Administering VxVM in cluster environments

The following sections describe the administration of VxVM's cluster functionality.

Note: Most VxVM commands require superuser or equivalent privileges.

Determining if a disk is shareable

The vxdisk utility manages VxVM disks.

The following is example output for the command vxdg list group1 on the master:

Group:  group1
dgid:   774222028...

Caution: The operating system cannot tell if a disk is shared. To protect data integrity when dealing with disks that can be accessed by multiple systems, use the correct designation when adding a disk to a disk group.

■ Some of the nodes to which disks in the disk group are attached are not currently in the cluster, so the disk group cannot access all of its disks.

You can join two private disk groups on any cluster node where those disk groups are imported. If the source disk group and the target disk group are both shared, you must perform the join on the master node.

Setting the disk group failure policy on a shared disk group

Note: The disk group failure policy for a shared disk group can only be set on the master node.

Multiple opens by the same node are also supported. Any attempts by other nodes to open the volume fail until the final close of the volume by the node that opened it.

Upgrading the cluster protocol version

Note: The cluster protocol version can only be updated on the master node.
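A hedged sketch of the upgrade step described above. This guide chunk does not show the command itself, so the subcommand below is an assumption; confirm it against the vxdctl(1M) manual page before use:

```shell
# On the master node: upgrade the cluster protocol version to the
# highest version supported by all nodes (assumed vxdctl subcommand).
vxdctl upgrade
```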
This command produces output similar to the following:

            OPERATIONS        BLOCKS          AVG TIME(ms)
TYP NAME    READ   WRITE      READ   WRITE    READ  WRITE
vol vol1    2421       0    600000       0    99...
Chapter 14
Administering sites and remote mirrors

In a Remote Mirror configuration (also known as a campus cluster or stretch cluster), the hosts and storage of a cluster that would usually be located in one place are instead divided between two or more sites.

If a disk group is configured across the storage at the sites, and inter-site communication is disrupted, there is a possibility of a serial split brain condition ...

To enhance read performance, VxVM will service reads from the plexes at the local site where an application is running if the siteread read policy is set on a volume. Writes are written to plexes at all sites.

Configuring sites for hosts and disks

Note: The Remote Mirror feature requires that the Site Awareness license has been installed on all hosts at all sites that are participating in the configuration.

Configuring site consistency on a disk group

The -f option allows the requirement to be removed if the site is detached or offline.

Setting the siteread policy on a volume

To turn on the site consistency requirement for an existing volume, use the following form of the vxvol command:

# ...
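The vxvol command elided above can be sketched as follows. The disk group and volume names are hypothetical; siteconsistent is the attribute named in the surrounding text, but verify the exact form against the vxvol(1M) manual page:

```shell
# Turn on the site consistency requirement for an existing volume.
vxvol -g mydg set siteconsistent=on vol01
```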
Site-based allocation of storage to volumes

Note: If the Site Awareness license is installed on all the hosts in the Remote Mirror configuration, and site consistency is enabled on a volume, the vxassist command attempts to allocate storage across the sites that are registered to a disk group.

Examples of storage allocation using sites

The examples in the following table demonstrate how to use site names with the vxassist command to allocate storage.

Making an existing disk group site consistent

To make an existing disk group site consistent

1 Ensure that ...

Fire drill — testing the configuration

Caution: To avoid potential loss of service or data, it is recommended that you do not use these procedures on a live system.

site state to ACTIVE, and initiates recovery of the plexes. When all the plexes have been recovered, the plexes are put into the ACTIVE state.

Note: vxsited does not try to reattach a site that you have explicitly detached by using the vxdg detachsite command.

Recovery from a loss of site connectivity

If the network links between the sites are disrupted, the application environments may continue to run in parallel, and this may lead to inconsistencies between the disk group configuration copies at the sites.

at the other sites. When the storage comes back online, you can use the following commands to reattach a site and ...
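The reattachment commands referred to above can be sketched as follows. The site and disk group names are hypothetical; vxdg detachsite is named earlier in this chapter, and reattachsite plus a vxrecover pass are its assumed counterparts, so verify both against the vxdg(1M) and vxrecover(1M) manual pages:

```shell
# Reattach the site to the disk group, then recover its plexes.
vxdg -g mydg reattachsite site1
vxrecover -g mydg
```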
Chapter 15
Using Storage Expert

About Storage Expert

System administrators often find that gathering and interpreting data about large and complex configurations can be a difficult task. Veritas Storage Expert is designed to help in diagnosing configuration problems with VxVM.

How Storage Expert works

Storage Expert components include a set of rule scripts and a rules engine. The rules engine runs the scripts and produces ASCII output, which is organized and archived by Storage Expert's report generator.

See "Rule definitions and attributes" on page 456.

Discovering what a rule does

To obtain details about what a rule does, use the info keyword ...

# vxse_dg1 -g mydg run

VxVM vxse:vxse_dg1 INFO V-5-1-5511 vxse_vxdg1 - RESULTS
----------------------------------------------------------
vxse_dg1 PASS...
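Building on the run example above, the other rule keywords mentioned in this chapter (info, list, check) follow the same pattern. The rule name is just an example; verify keyword behavior against the Storage Expert documentation:

```shell
# Describe what the rule checks.
vxse_dg1 info

# Show the rule's attributes and display their default values.
vxse_dg1 list
vxse_dg1 check
```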
■ A value specified on the command line.
■ A value specified in a user-defined defaults file.
■ A value in the /etc/default/vxse file that has not been commented out.
Checking for large mirror volumes without a dirty region log (vxse_drl1)

To check whether large mirror volumes (larger than 1GB) have an associated dirty region log (DRL), run rule vxse_drl1.
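If the rule flags a large mirrored volume without a DRL, one way to add one is sketched below. The disk group and volume names are hypothetical; the addlog form is standard vxassist usage, but confirm it against the vxassist(1M) manual page:

```shell
# Run the rule, then add a DRL log to a volume it flagged.
vxse_drl1 -g mydg run
vxassist -g mydg addlog vol01 logtype=drl
```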
A mirror of the RAID-5 log protects against loss of data due to the failure of a single disk. You are strongly advised to mirror the log if vxse_raid5log3 reports that the log of a large RAID-5 volume does not have a mirror.

Checking the number of configuration copies in a disk group (vxse_dg5)

To find out whether a disk group has only a single VxVM configured disk, run rule vxse_dg5.

See "Creating and administering disk groups" on page 165.

■ volumes needing recovery

See "Reattaching plexes" on page 231.
See "Starting a volume" on page 271.
See the Veritas Volume Manager Troubleshooting Guide.

Checking the number of columns in striped volumes (vxse_stripes2)

The default values for the number of columns in a striped plex are 16 and 3.

Checking the system name (vxse_host)

Rule vxse_host can be used to confirm that the system name in the file /etc/vx/volboot is the same as the name that was assigned to the system when it was booted.
Rule definitions and attributes

You can use the info keyword to show a description of a rule.
See "Discovering what a rule does" on page 447.

Table 15-1 lists the available rule definitions, and rule attributes and their default values.

You can use the list and check keywords to show what attributes are available for a rule and to display the default values of these attributes.
See "Running a rule" on page 447.

Table 15-2 lists the available rule attributes and their default values.
Table 15-2 Rule attributes and default attribute values (rule / attribute / default value: description)

vxse_dc_failures: no user-configurable variables.

vxse_dg1 / max_disks_per_dg / 250: maximum number of disks in a disk group.

vxse_mirstripe / large_mirror_size / 1g (1GB): large mirror-stripe threshold size; warn if a mirror-stripe volume is larger than this.

vxse_mirstripe / nsd_threshold / 8: large mirror-stripe number-of-subdisks threshold.

vxse_redundancy / volume_redundancy / 0: volume redundancy check. A value of 2 performs a mirror redundancy check, a value of 1 performs a RAID-5 redundancy check, and the default value of 0 performs no redundancy check.

vxse_volplex: no user-configurable variables.
Chapter 16
Performance monitoring and tuning

Veritas Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately.

Performance guidelines

Striping

Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel. Striped plexes improve access performance for both read and write operations.
Combining mirroring and striping

Note: You need a full license to use this feature.

Mirroring and striping can be used together to achieve a significant improvement in performance when there are multiple I/O streams.

Volume read policies

To help optimize performance for different types of volumes, VxVM supports the following read policies on data plexes:

■ round — a round-robin read policy, where all plexes in the volume take turns satisfying read requests to the volume.
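Setting a read policy can be sketched as follows. The disk group, volume, and plex names are hypothetical; vxvol rdpol is the usual interface, but confirm the policy names against the vxvol(1M) manual page:

```shell
# Use round-robin reads across all plexes of vol01.
vxvol -g mydg rdpol round vol01

# Or prefer a particular plex (for example, one on faster storage).
vxvol -g mydg rdpol prefer vol01 vol01-02
```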
Performance monitoring

Note: To improve performance for read-intensive workloads, you can attach up to 32 data plexes to the same volume. However, this would usually be an ineffective use of disk space for the gain in read performance.

Tracing volume operations

Use the vxtrace command to trace operations on specified volumes, kernel I/O object types or devices. The vxtrace command either prints kernel I/O errors or I/O trace records to the standard output or writes the records to a file in binary format.

an operation makes it possible to measure the impact of that particular operation. The following is an example of output produced using the vxstat command:

            OPERATIONS        BLOCKS          AVG TIME(ms)
TYP NAME    READ   WRITE      READ   WRITE    READ  WRITE
vol blop       0       0         0       0    0...

Such output helps to identify volumes with an unusually large number of operations or excessive read or write times.
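For example, vxstat output saved to a file can be filtered with standard tools to surface busy volumes. The sample data below is hypothetical, and the field positions assume the column layout shown above (fields 7 and 8 are the average read and write times in milliseconds):

```shell
# Hypothetical vxstat output captured to a file.
cat > /tmp/vxstat.out <<'EOF'
vol archive   865   807   5722   3809  32.5  24.0
vol home     2980  5287   6504  10550  37.7 221.1
vol local   49477 49230 507892 204975  28.5  33.5
EOF

# List volumes whose average write time exceeds 50 ms, slowest first.
awk '$1 == "vol" && $8 > 50 { print $2, $8 }' /tmp/vxstat.out | sort -k2 -rn
# → home 221.1
```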
If two volumes (other than the root volume) on the same disk are busy, move them so that each is on a different disk.

writes where mirroring can improve performance depends greatly on the disks, the disk controller, whether multiple controllers can be used, and the speed of the system bus.

Tuning VxVM

Tuning guidelines for large systems

On smaller systems (with fewer than a hundred disk drives), tuning is unnecessary and VxVM is capable of adopting reasonable defaults for all configuration parameters.

To set the number of configuration copies for a new disk group, use the nconfig operand with the vxdg init command (see the vxdg(1M) manual page for details). You can also change the number of copies for an existing group by using the vxedit set command (see the vxedit(1M) manual page).
Tunable parameters

The following sections describe specific tunable parameters.

dmp_cache_open

If set to on, the first open of a device that is performed by an array support library (ASL) is cached.

The value of this tunable is changed by using the vxdmpadm settune command.

dmp_health_time

DMP detects intermittently failing paths, and prevents I/O requests from being sent on them. The value of dmp_health_time represents the time in seconds for which a path must stay healthy.
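A sketch of the settune interface named above. The values are examples only, and gettune is the assumed counterpart for reading a current value; verify both subcommands against the vxdmpadm(1M) manual page:

```shell
# Require a path to stay healthy for 60 seconds before it is reused.
vxdmpadm settune dmp_health_time=60

# Inspect the current value (assumed gettune subcommand).
vxdmpadm gettune dmp_health_time
```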
increasing the value of this tunable. For example, for the HDS 9960 A/A array, the optimal value is between 14 and 16 for an I/O activity pattern that consists mostly of sequential reads or writes.

Note: This parameter only affects the behavior of the balanced I/O policy.

dmp_restore_policy

The DMP restore policy, which can be set to 0 (CHECK_ALL), 1 (CHECK_DISABLED), 2 (CHECK_PERIODIC), or 3 (CHECK_ALTERNATE). The value of this tunable is only changeable by using the vxdmpadm start restore command.

dmp_stat_interval

The time interval between gathering DMP statistics. The default and minimum value is 1 second.

Since the region size must be the same on all nodes in a cluster for a shared volume, the value of the vol_fmr_logsz tunable on the master node overrides the tunable values on the slave nodes, if these values are different.

performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests.

volcvm_smartsync

If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups. See "SmartSync recovery accelerator" on page 62 for more information.

voliomem_maxpool_sz

The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system.

tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool. Increasing this size can allow additional tracing to be performed at the expense of system memory usage.

Note: The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.
Appendix A
Commands summary

This appendix summarizes the usage and purpose of important commonly-used commands in Veritas Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book.

other commands and scripts, and which are not intended for general use, are not located in /opt/VRTS/bin and do not have manual pages.

vxinfo [-g diskgroup] [volume ...]
Displays information about the accessibility and usability of volumes. See "Listing Unstartable Volumes" in the Veritas Volume Manager Troubleshooting Guide.

vxdiskadd [devicename ...]
Adds a disk specified by device name. See "Using vxdiskadd to place a disk under control of VxVM" on page 101.
Example:
# vxdiskadd c0t1d0

vxedit [-g diskgroup] rename olddisk newdisk
Renames a disk under control of VxVM.

vxedit [-g diskgroup] set spare=on|off diskname
Adds or removes a disk from the pool of hot-relocation spares. See "Marking a disk as a hot-relocation spare" on page 387. See "Removing a disk from use as a hot-relocation spare" on page 388.
Table A-3 Creating and administering disk groups

vxdg [-s] init diskgroup [diskname=]devicename
Creates a disk group using a pre-initialized disk. See "Creating a disk group" on page 170. See "Creating a shared disk group" on page 422.

vxdg [-o expand] listmove sourcedg targetdg object ...
Lists the objects potentially affected by moving a disk group. See "Listing objects potentially affected by a move" on page 200.
Example:
# vxdg -o expand listmove mydg newdg myvol1

vxdg [-o expand] move sourcedg targetdg object ...

vxrecover -g diskgroup -sb
Starts all volumes in an imported disk group. See "Moving disk groups between systems" on page 185.
Example:
# vxrecover -g mydg -sb

vxdg destroy diskgroup
Destroys a disk group and releases its disks.

vxsd [-g diskgroup] assoc plex subdisk1:0 ... subdiskM:N-1
Adds subdisks to the ends of the columns in a striped or RAID-5 volume. See "Associating subdisks with plexes" on page 218.
Example:
# vxsd -g mydg assoc vol01-01 mydg10-01:0 mydg11-01:1 mydg12-01:2

vxsd [-g diskgroup] mv oldsubdisk newsubdisk ...

vxunreloc [-g diskgroup] original_disk
Relocates subdisks to their original disks. See "Moving and unrelocating subdisks using vxunreloc" on page 392.
Example:
# vxunreloc -g mydg mydg01

vxsd [-g diskgroup] dis subdisk
Dissociates a subdisk from a plex.

vxmake [-g diskgroup] plex plex layout=stripe|raid5 stwidth=W ncolumn=N sd=subdisk1[,subdisk2,...]
Creates a striped or RAID-5 plex.

vxplex [-g diskgroup] cp volume newplex
Copies a volume onto a plex. See "Copying volumes to plexes" on page 233.
Example:
# vxplex -g mydg cp vol02 vol03-01

vxmend [-g diskgroup] fix clean plex
Sets the state of a plex in an unstartable volume to CLEAN.
vxassist -b [-g diskgroup] make volume length [layout=layout] [attributes]
Creates a volume. See "Creating a volume on any disk" on page 243.

vxassist -b [-g diskgroup] make volume length layout=mirror mirror=ctlr [attributes]
Creates a volume with mirrored data plexes on separate controllers. See "Mirroring across targets, controllers or enclosures" on page 255.

Table A-7 Administering volumes

vxassist [-g diskgroup] mirror volume [attributes]
Adds a mirror to a volume.

vxsnap [-g diskgroup] prepare volume [drl=on|sequential|off]
Prepares a volume for instant snapshots and for DRL logging. See "Preparing a volume for DRL and instant snapshots" on page 275.

vxmake [-g diskgroup] cache cache_object cachevolname=volume [regionsize=size]
Creates a cache object for use by space-optimized instant snapshots.

vxsnap [-g diskgroup] unprepare volume
Removes support for instant snapshots and DRL logging from a volume. See "Removing support for DRL and instant snapshots from a volume" on page 279.

vxassist [-g diskgroup] convert volume [layout=layout] [convert_options]
Converts between a layered volume and a non-layered volume layout. See "Converting between layered and non-layered volumes" on page 300.

vxtask pause task
Suspends operation of a task. See "Using the vxtask command" on page 269.
Example:
# vxtask pause mytask

vxtask -p [-g diskgroup] list
Lists all paused tasks. See "Using the vxtask command" on page 269.
Online manual pages

Manual pages are organized into three sections:
■ Section 1M — administrative commands
■ Section 4 — file formats
■ Section 7 — device driver interfaces

vxconfigd: Veritas Volume Manager configuration daemon.
vxconfigrestore: Restore disk group configuration.
vxcp_lvmroot: Copy LVM root disk onto new Veritas Volume Manager root disk.
vxdarestore: Restore simple or nopriv disk access records.

vxmend: Mend simple problems in configuration records.
vxmirror: Mirror volumes on a disk or control default mirroring.
vxnotify: Display Veritas Volume Manager configuration events.
vxpfto: Set Powerfail Timeout (pfto).

Section 4 — file formats

Manual pages in section 4 describe the format of files that are used by Veritas Volume Manager.

Section 7 — device driver interfaces

Manual pages in section 7 describe the interfaces to Veritas Volume Manager devices.
Appendix B

Configuring Veritas Volume Manager

This appendix provides guidelines for setting up efficient storage management after installing the Veritas Volume Manager software.

Optional Setup Tasks
■ Place the root disk under VxVM control and mirror it to create an alternate boot disk.
■ Designate hot-relocation spare disks in each disk group.

groups. Storage pools are only required if you intend to use the ISP feature of VxVM.

Guidelines for configuring storage

A disk failure can cause loss of data on the failed disk and loss of access to your system.

■ Leave the Veritas Volume Manager hot-relocation feature enabled. See "Hot-relocation guidelines" on page 516 for details.

Mirroring guidelines

Refer to the following guidelines when using mirroring.

Dirty region logging guidelines

Dirty region logging (DRL) can speed up recovery of mirrored volumes following a system crash. When DRL is enabled, Veritas Volume Manager keeps track of the regions within a volume that have changed as a result of writes to a plex.
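A minimal sketch of enabling DRL when a mirrored volume is created (mydg, datavol, and the 10g size are placeholder values, not from this guide, and the commands assume a live VxVM installation):

```shell
# Create a two-way mirrored volume with a DRL log from the start
# (placeholder names and size).
vxassist -g mydg make datavol 10g layout=mirror nmirror=2 logtype=drl

# Verify the volume, its data plexes, and its log plex.
vxprint -g mydg -ht datavol
```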
■ If more than one plex of a mirrored volume is striped, configure the same stripe-unit size for each striped plex.
■ Where possible, distribute the subdisks of a striped volume across drives connected to different controllers and buses.

The hot-relocation feature is enabled by default. The associated daemon, vxrelocd, is automatically started during system startup. Refer to the following guidelines when using hot-relocation.

subdisks to determine whether they should be relocated to more suitable disks to regain the original performance benefits.

Configuring shared disk groups

This section describes how to configure shared disks in a cluster. If you are installing Veritas Volume Manager for the first time or adding disks to an existing cluster, you need to configure new shared disks.

If dirty region logs exist, ensure they are active. If not, replace them with larger ones. To display the shared flag for all the shared disk groups, use the following command:

# vxdg list

The disk groups are now ready to be shared.
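One way the import step might be sketched, assuming a running cluster and using mydg as a placeholder disk group name (these commands require a live VxVM cluster installation):

```shell
# Confirm this node's cluster role; shared imports are done on
# the master node.
vxdctl -c mode

# Deport the disk group, then re-import it as shared.
vxdg deport mydg
vxdg -s import mydg

# A shared disk group is listed with a state that includes "shared".
vxdg list
```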
Glossary

Active/Active disk arrays
This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation.

cluster
A set of hosts (each termed a node) that share a set of disks.

cluster manager
An externally provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership.

maintained in the DCO volume. Otherwise, the DRL is allocated to an associated subdisk called a log subdisk.

disabled path
A path to a disk that is not available for I/O. A path can be disabled due to real hardware failures or if the user has used the vxdmpadm disable command on that controller.

An alternative term for a disk name.

disk media record
A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name.

disk name
A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03.

An area of a disk under VxVM control that is not allocated to any subdisk or reserved for use by any other VxVM object.

free subdisk
A subdisk that is not associated with any plex and has an empty putil[0] field.

hostid
A string that identifies a host to VxVM.

Where there are multiple physical access paths to a disk connected to a system, the disk is called multipathed. Any software residing on the host (for example, the DMP driver) that hides this fact from the user is said to provide multipathing functionality.

A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO volume on disk. Also see data change object (DCO).

The disk containing the root file system. This disk may be under VxVM control.

root file system
The initial file system mounted as part of the UNIX kernel startup sequence.

root partition
The disk region on which the root file system resides.

A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk).

Storage Area Network (SAN)
A networking paradigm that provides

A virtual disk, representing an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of from one to 32 plexes.
Index

Symbols
/dev/vx/dmp directory 126
/dev/vx/rdmp directory 126
/etc/default/vxassist file 241, 390
/etc/default/vxdg defaults file 403
/etc/default/vxdg file 171
/etc/default/vxdisk file 81, 97
/etc/default/vxse file 448
/etc/fstab file 290
/etc/volboot file 212
/etc/vx/darecs file 212
/etc/vx/disk.
ndcomirror 251, 252, 357 ndcomirs 275, 321 newvol 330 nmirror 330 nomanual 146 nopreferred 146 plex 234 preferred priority 146 primary 147 putil 222, 234 secondary 147 sequential DRL 252 s.
clusters activating disk groups 403 activating shared disk groups 425 activation modes for shared disk groups 402 benefits 397 checking cluster protocol version 427 cluster-shareable di.
crash dumps using VxVM volumes for 107 Cross-platform Data Sharing (CDS) alignment constraints 242 disk format 81 CVM cluster functionality of VxVM 397 D d# 20, 78 data change object DCO 69
A/P-C 126 A/PF 126 A/PF-C 126 A/PG 126 A/PG-C 126 Active/Active 126 Active/Passive 125 adding disks to DISKS category 87 adding vendor-supplied support package 84 Asymmetric Active/Active 1.
serial split brain condition 190 setting connectivity policies in clusters 425 setting default disk group 168 setting failure policies in clusters 426 setting number of configuration cop.
spares 388 removing from VxVM control 112, 172 removing tags from 178 removing with subdisks 111, 112 renaming 119 replacing 112 replacing removed 115 reserving for special purposes 11.
dmp_scsi_timeout tunable 478 dmp_stat_interval tunable 479 DRL adding log subdisks 220 adding logs to mirrored volumes 281 checking existence of 450 checking existence of mirror 450 creat.
use with snapshots 66 fastresync attribute 251, 252, 293 file systems growing using vxresize 285 shrinking using vxresize 285 unmounting 290 fire drill defined 432 testing 440 firmware upgrading 154 FMR.
initialization of disks 90 instant snapshots backing up multiple volumes 333 cascaded 312 creating backups 319 creating for volume sets 334 creating full-sized 327 creating space-optimized.
LUN group failover 126 LUN groups displaying details of 140 LUNs idle 477 M maps adding to volumes 274 usage with volumes 237 master node defined 400 discovering 420 maxautogrow attribute
plex attribute 234 renaming disks 119 subdisk 29 subdisk attribute 221 VM disk 29 volume 31 naming scheme changing for disks 91 changing for TPD enclosures 94 for disk devices 78 native mul.
hot spots identified by I/O traces 472 impact of number of disk group configuration copies 473 improving for instant snapshot synchronization 345 load balancing in DMP 129 mirrored volume.
condition flags 228 converting to snapshot 351 copying 233 creating 223 creating striped 224 defined 30 detaching from volumes temporarily 231 disconnecting from volumes 230 displaying info.
performance of 466 prefer 289 round 289 select 289 siteread 289, 433, 434, 436 split 289 read-only mode 402 readonly mode 402 RECOVER plex condition 228 recovery checkpoint interval 479 I/.
read policy 289 rules attributes 458 checking attribute values 447 checking disk group configuration copies 451 checking disk group version number 451 checking for full disk group configur.
siteconsistent attribute 435 siteread read policy 289, 433, 434, 436 sites reattaching 440 size units 216 slave nodes defined 400 SmartSync 62 disabling on shared disk groups 482 enabling
standby path attribute 147 states for plexes 224 of link objects 311 volume 265 statistics gathering 128 storage ordered allocation of 245, 251, 257 storage attributes and volume layout 2.
physical disk placement 513 putil attribute 222 RAID-5 failure of 380 RAID-5 plex, configuring 516 removing from VxVM 221 restrictions on moving 217 specifying different offsets for unre.
vol_default_iodelay 479 vol_fmr_logsz 68, 479 vol_max_vol 480 vol_maxio 480 vol_maxioctl 480 vol_maxparallelio 481 vol_maxspecialio 481 vol_subdisk_num 481 volcvm_smartsync 482 voldrl_max_dr.
DETACHED 267 DISABLED 267 ENABLED 267 volume length, RAID-5 guidelines 516 volume resynchronization 59 volume sets adding volumes to 362 administering 361 controlling access to raw device n.
effect of growing on FastResync maps 73 enabling FastResync on 292 enabling FastResync on new 251 excluding storage from use by vxassist 244 finding maximum size of 242 finding out maximum
zeroing out contents of 261 vxassist adding a log subdisk 220 adding a RAID-5 log 283 adding DCOs to volumes 357 adding DRL logs 281 adding mirrors to volumes 230, 271 adding sequential DR.
reattaching version 0 DCOs to volumes 359 removing version 0 DCOs from volumes 358 vxdctl checking cluster protocol version 427 managing vxconfigd 212 setting a site tag 434 setting defa.
vxdisk scandisks rescanning devices 82 scanning devices 82 vxdiskadd adding disks to disk groups 171 creating disk groups 171 placing disks under VxVM control 101 vxdiskadm Add or initialize.
removing instant snapshots 341 removing plexes 234 removing snapshots from a cache 347 removing subdisks from VxVM 221 removing volumes 290 renaming disks 119 reserving disks 119 VxFS file s.
vxse_dg2 rule to check disk group configuration copies 451 vxse_dg3 rule to check on disk config size 451 vxse_dg4 rule to check disk group version number 451 vxse_dg5 rule to check number.
moving subdisks after hot-relocation 392 restarting after errors 394 specifying different offsets for unrelocated subdisks 393 unrelocating subdisks after hot-relocation 392 unrelocati.