US20020019909A1 - Method and apparatus for managing virtual storage devices in a storage system - Google Patents
Method and apparatus for managing virtual storage devices in a storage system
- Publication number
- US20020019909A1 (application US09/774,299)
- Authority
- US
- United States
- Prior art keywords
- processor
- logical volumes
- virtual volume
- storage system
- host computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0607—Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Definitions
- the present invention is directed to a method and apparatus for managing virtual storage devices in a storage system.
- An example of such a system is shown in FIG. 1, and includes a host computer 1 and a storage system 3.
- the storage system typically includes a plurality of storage devices on which data is stored.
- the storage system includes a plurality of disk drives 5 a - b , and a plurality of disk controllers 7 a - 7 b that respectively control access to the disk drives 5 a and 5 b .
- the storage system 3 further includes a plurality of storage bus directors 9 that control communication with the host computer 1 over communication buses 17 .
- the storage system 3 further includes a cache 11 to provide improved storage system performance.
- when the host computer 1 executes a read from the storage system 3, the storage system 3 may service the read from the cache 11 (when the data is stored in the cache), rather than from one of the disk drives 5 a - 5 b, to execute the read more efficiently.
- similarly, when the host computer 1 executes a write to the storage system 3, the corresponding storage bus director 9 can execute the write to the cache 11.
- the write can be destaged asynchronously, in a manner transparent to the host computer 1 , to the appropriate one of the disk drives 5 a - 5 b .
- the storage system 3 includes an internal bus 13 over which the storage bus directors 9 , disk controllers 7 a - 7 b and the cache 11 communicate.
- the host computer 1 includes a processor 16 and one or more host bus adapters 15 that each controls communication between the processor 16 and the storage system 3 via a corresponding one of the communication buses 17 .
- the host computer 1 can include multiple processors.
- Each bus 17 can be any of a number of different types of communication links, with the host bus adapter 15 and the storage bus directors 9 being adapted to communicate using an appropriate protocol for the communication bus 17 coupled therebetween.
- each of the communication buses 17 can be implemented as a SCSI bus, with the directors 9 and adapters 15 each being a SCSI driver.
- communication between the host computer 1 and the storage system 3 can be performed over a Fibre Channel fabric.
- each path includes a host bus adapter 15, a bus 17 and a storage bus director 9 in FIG. 1.
- each of the host bus adapters 15 has the ability to access each of the disk drives 5 a - b , through the appropriate storage bus director 9 and disk controller 7 a - b . It should be appreciated that providing such multi-path capabilities enhances system performance, in that multiple communication operations between the host computer 1 and the storage system 3 can be performed simultaneously.
- the phrase open system is intended to indicate a non-mainframe environment, such that the host computer 1 employs commodity based hardware available from multiple vendors and runs a commodity-based operating system that is also available from multiple vendors.
- intelligent storage systems such as the storage system 3 shown in FIG. 1 have only recently been used with open systems. Thus, problems have been encountered in implementing an open computer system that includes multiple paths to an intelligent storage system.
- conventional host computers 1 in an open system will not recognize that multiple paths have been formed to the same storage device within the storage system.
- the operating system on the host computer 1 will view the storage system 3 as having four times its actual number of disk drives 5 a - b , since four separate paths are provided to each of disk drives 5 a - b .
- conventional host computers in an open system have, as explained below, included an additional mapping layer, below the file system or logical volume manager (LVM), to reduce the number of storage devices (e.g., disk drives 5 a - b ) visible at the application layer to the number of storage devices that actually exist on the storage system 3 .
- FIG. 2 is a schematic representation of a number of mapping layers that may exist in a known multi-path computer system such as the one shown in FIG. 1.
- the system includes an application layer 21 which includes application programs executing on the processor 16 of the host computer 1 .
- the application layer 21 will generally refer to storage locations used thereby with a label or identifier such as a file name, and will have no knowledge about where the file is physically stored on the storage system 3 (FIG. 1).
- a file system and/or a logical volume manager (LVM) 23 that maps the label or identifier specified by the application layer 21 to a logical volume that the host computer perceives to correspond directly to a physical device address (e.g., the address of one of the disk drives 5 a - b ) within the storage system 3 .
- a multi-path mapping layer 25 that maps the logical volume address specified by the file system/LVM layer 23 , through a particular one of the multiple system paths, to the logical volume address to be presented to the storage system 3 .
- the multi-path mapping layer 25 not only specifies a particular logical volume address, but also specifies a particular one of the multiple system paths to access the specified logical volume.
- if the storage system 3 were not an intelligent storage system, the logical volume address specified by the multi-pathing layer 25 would identify a particular physical device (e.g., one of disk drives 5 a - b ) within the storage system 3.
- the storage system itself may include a further mapping layer 27 , such that the logical volume address passed from the host computer 1 may not correspond directly to an actual physical device (e.g., a disk drive 5 a - b ) on the storage system 3 . Rather, a logical volume specified by the host computer 1 can be spread across multiple physical storage devices (e.g., disk drives 5 a - b ), or multiple logical volumes accessed by the host computer 1 can be stored on a single physical storage device.
- the multi-path mapping layer 25 performs two functions. First, it controls which of the multiple system paths is used for each access by the host computer 1 to a logical volume. Second, the multi-path mapping layer 25 also reduces the number of logical volumes visible to the file system/LVM layer 23 . In particular, for a system including X paths between the host computer 1 and the storage system 3 , and Y logical volumes defined on the storage system 3 , the host bus adapters 15 see X times Y logical volumes. However, the multi-path mapping layer 25 reduces the number of logical volumes made visible to the file system/LVM layer 23 to equal only the Y distinct logical volumes that actually exist on the storage system 3 .
- the operating system executing on the processor 16 in the host computer 1 is required to manage (e.g., at the multi-path mapping layer 25) a number of logical volumes that is equal to the number of logical volumes that the host computer 1 would perceive the storage system 3 as storing if multi-pathing were not employed (Y in the example above), multiplied by the number of paths (e.g., X in the example above and four in FIG. 1) between the host computer 1 and the storage system 3.
- assuming the storage system 3 includes a total of twenty disk drives 5 a - b that each corresponds directly to a single logical volume, and the four paths 17 between the host computer 1 and the storage system 3, the operating system on the processor 16 would need to manage eighty logical volumes.
- a unique label is generated for each independent path to a logical volume.
- four unique labels will be generated, each specifying a different path (e.g., through an adapter 15 , a bus 17 and a director 9 ) to the logical volume.
- These unique labels are used during multi-path operation to identify through which path an operation on the host computer 1 directed to a particular logical volume is to be executed.
- FIG. 3 is a conceptual representation of the manner in which complexity is introduced into the host computer 1 due to the use of multiple paths P1-P4.
- the storage system 3 includes twenty logical volumes 51 , labeled LV1-LV20.
- the host computer 1 includes four separate groups of labels 53 - 56 for each group of logical volumes LV1-LV20. These groups of labels are identified as P1LV1-P1LV20, P2LV1-P2LV20, P3LV1-P3LV20 and P4LV1-P4LV20 to indicate that there are four separate paths (i.e., P1-P4) to each of the groups of logical volumes LV1-LV20.
- the multi-path mapping layer 25 (FIG. 2) consolidates the four groups of labels 53 - 56 to represent only the twenty unique logical volumes LV1-LV20 at 59 , so that the file system/LVM layer 23 sees the correct number of logical volumes actually present on the storage system 3 .
- the operating system for a typical processor 16 maintains a number of resources to manage the target devices that it recognizes as coupled to the adapters 15 at the host computer. For many processors, particularly when the host computer 1 is an open system, these resources are limited. Thus, there is a constraint on the number of target devices that the operating system will support (e.g., the operating system will simply not boot if the total number of target devices exceeds the number supported). For example, the NT operating system has a limit of approximately four-hundred target devices, and thirty-two target devices per path.
- a second related problem is that multiplying the number of logical volumes by the number of paths can result in an extremely large number of target devices to be managed by the operating system, which can result in an extremely long boot time when initializing the host computer 1 .
- even if the operating system on the processor 16 includes a satisfactorily large limit on the total number of target devices that can be supported, the implementation of the multi-path system in the manner described above can result in extremely long boot times for the host computer 1.
- One illustrative embodiment of the invention is directed to a method of managing a plurality of logical volumes in a computer system, the computer system including a processor and a storage system coupled to the processor, the storage system including at least one storage device, the storage system storing the plurality of logical volumes on the at least one storage device.
- the method comprises steps of: (A) combining, in the storage system, at least two of the plurality of logical volumes into a virtual volume that includes the at least two of the plurality of logical volumes; (B) presenting the virtual volume to the processor as a single logical volume; and (C) presenting the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
- Another illustrative embodiment of the invention is directed to a storage system for use in a computer system including a processor coupled to the storage system.
- the storage system comprises at least one storage device to store a plurality of logical volumes; and a controller to combine at least two of the plurality of logical volumes into a virtual volume that includes the at least two of the plurality of logical volumes, to present the virtual volume to the processor as a single logical volume, and to further present the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
- a further illustrative embodiment of the invention is directed to a host computer for use in a computer system including a storage system coupled to the host computer.
- the storage system includes at least one storage device to store a plurality of logical volumes, combines at least two of the plurality of logical volumes into a virtual volume and presents the virtual volume to the processor as a single logical volume.
- the host computer comprises a processor and means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
- Another illustrative embodiment of the invention is directed to a multi-path computer system comprising: a processor; a storage system including at least one storage device to store a plurality of logical volumes, the plurality of logical volumes including at least Y logical volumes; and a plurality of paths coupling the processor to the storage system, the plurality of paths including X paths coupling the processor to the storage system.
- the processor is capable of accessing each of the Y logical volumes through each of the X paths, and wherein the processor includes Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
- a further illustrative embodiment of the invention is directed to a host computer for use in a multi-path computer system including a storage system having at least one storage device to store a plurality of logical volumes, the plurality of logical volumes including at least Y logical volumes.
- the multi-path computer system further includes X paths coupling the host computer to the storage system.
- the host computer comprises a processor capable of accessing each of the Y logical volumes through each of the X paths, the processor including Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
- FIG. 1 is a block diagram of an exemplary multi-path computing system on which aspects of the present invention can be implemented;
- FIG. 2 is a schematic representation of a number of mapping layers that exist in a known multi-path computing system
- FIG. 3 is a conceptual illustration of the manner in which logical volumes are managed in a prior art multi-path computing system
- FIG. 4 is a conceptual illustration of the manner in which logical volumes are managed according to a virtual volume aspect of the present invention.
- FIG. 5 is a schematic representation of a number of mapping layers that can be employed to implement the virtual volume aspect of the present invention.
- an improved method and apparatus for implementing a multi-path system is provided.
- the logical volumes implemented on the storage system (e.g., storage system 3 of FIG. 1) are merged into a relatively small number of larger virtual volumes that are presented to the host computer. In this manner, the number of target devices that the operating system on the host computer must manage is significantly reduced.
- the storage system can also provide information to the host computer that enables it to deconstruct each of the larger virtual volumes in a manner described below.
- the aspects of the present invention are employed with an open system, and with a storage device that includes a plurality of disk drives.
- the present invention is not limited in this respect.
- the present invention can be employed with any type of storage system (e.g., tape drives, etc.) and is not limited to use with a disk drive storage system.
- the aspects of the present invention discussed below are particularly advantageous for use in connection with open systems, the present invention is not limited in this respect, as aspects of the present invention can also be employed in a mainframe environment.
- FIG. 4 is a conceptual diagram of the manner in which a virtual volume is employed in accordance with one exemplary embodiment of the present invention, using the same example described above in connection with FIG. 1, wherein the computer system includes four paths (P1-P4) between the host computer 1 and the storage system 3 , and wherein the storage system includes twenty logical volumes (i.e., LV1-LV20).
- multiple logical volumes LV1-LV20 are combined in the storage system 3 into a larger virtual volume 61 , labeled in FIG. 4 as VV1.
- the single virtual volume 61 then is presented to the host computer 1 over each of the four paths P1-P4, rather than having the twenty logical volumes LV1-LV20 that make up VV1 presented separately to the host computer over each of these paths. Therefore, the host computer 1 sees only four target devices 63 - 66 , respectively labeled as P1VV1 through P4VV1 in FIG. 4 to indicate that the virtual volume VV1 is visible over each of the paths P1-P4.
- in the host computer 1, the four target volumes 63 - 66 are combined to form a single representation of the virtual volume 61 (i.e., VV1) to reflect that the same virtual volume is perceived by the host computer 1 over each of the paths P1-P4.
- This consolidation process is similar to that performed by the multi-path mapping layer 25 in the known system discussed above in connection with FIG. 2.
- the storage system 3 also provides the host computer 1 with information relating to the structure of the virtual volume VV1. This information enables the host computer 1 to deconstruct the virtual volume into the logical volumes LV1-LV20 that comprise it. The deconstructed logical volumes LV1-LV20 can then be presented to the file system/LVM layer 23 (FIG. 5 discussed below) in much the same manner as would be done if no multi-pathing or virtual volume mapping were employed and the logical volumes LV1-LV20 within the storage system 3 were simply presented over a single path to the host computer 1 .
- the host computer is able to deconstruct the virtual volume and thereafter access the logical volumes LV1-LV20 independently, rather than having to access all of the logical volumes LV1-LV20 together as part of the virtual volume VV1.
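- By way of a non-limiting illustration, the following Python sketch shows the kind of host-side translation that such a deconstruction makes possible; the descriptor format and every identifier below are invented for this sketch and are not taken from the specification.

```python
# Hypothetical structure information reported by the storage system for VV1:
# (member logical volume, starting block within VV1, length in blocks).
VV1_LAYOUT = [(f"LV{i}", (i - 1) * 2048, 2048) for i in range(1, 21)]

def to_virtual_address(member: str, block: int) -> int:
    """Translate an access to `block` of a member logical volume into the
    corresponding block address within the virtual volume VV1."""
    for name, start, length in VV1_LAYOUT:
        if name == member:
            if not 0 <= block < length:
                raise ValueError(f"block {block} outside {name}")
            return start + block
    raise KeyError(f"{member} is not part of VV1")

# The host can therefore address LV5 independently, even though only VV1
# is presented over each of the paths P1-P4.
print(to_virtual_address("LV5", 10))   # 8202
```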
- FIG. 5 is a schematic representation of a number of mapping layers that may exist in a multi-path computer system that employs the virtual volume aspect of the present invention described in connection with FIG. 4.
- a computer system employing the virtual volume aspect of the present invention may include an application layer 21 , a file system and/or LVM layer 23 , and a storage system mapping layer 27 that each performs functions similar to those described above in connection with FIG. 2.
- in addition, the storage system includes a virtual volume mapping layer 71 that performs the function of mapping between the larger virtual volume 61 (i.e., VV1) and the logical volumes LV1-LV20 that comprise it.
- the virtual volume mapping layer 71 performs the function of combining the twenty logical volumes LV1-LV20 into the virtual volume VV1.
- a virtual volume mapping layer 73 is also provided in the host computer 1 to map between the virtual volume and the logical volumes that comprise it.
- the mapping layer 73 will make use of the information provided by the storage system 3 (FIG. 1) as to the structure of the virtual volume to deconstruct the virtual volume into the logical volumes that comprise it.
- the system also includes a multi-path mapping layer 25 that is similar in many respects to that described in connection with FIG. 2, and that maps between the multiple target devices 63 - 66 (FIG. 4) corresponding to the multiple paths P1-P4 and the single reconstructed representation of the virtual volume 61 .
- the virtual volume aspect of the present invention provides a number of advantages when used in conjunction with a multi-path computer system such as that shown in FIG. 1.
- the virtual volume aspect of the present invention significantly reduces the number of target devices (e.g., 63 - 66 in FIG. 4) that must be managed by the operating system on the host computer 1 .
- This can significantly reduce the initialization time for the computer system, since as described above, the necessity of managing a large number of target devices can significantly slow down the boot time of the system.
- reducing the number of target devices that the operating system of the host computer 1 must manage greatly increases flexibility in possible system configurations, particularly for host computers with operating systems that have strict constraints on the number of target devices that can be managed.
- the virtual volume aspect of the present invention can enable the use of a greater number of paths between the host computer 1 and the storage system 3 , a greater number of logical volumes provided per path, a greater total number of logical volumes on the storage system 3 that are useable by the host computer 1 , or all of the above.
- the advantages in employing the virtual volume aspect of the present invention increase in proportion to the number of logical volumes and/or the number of paths employed in the multi-path computing system.
- for example, in a system employing eight paths and one hundred twenty logical volumes, employing the known system illustrated in FIG. 3 requires that the operating system on the host computer initialize nine hundred sixty (960) distinct target device labels. It has been found that initializing such a system can take approximately five hours. This is a significant increase in the boot time for the system over what would be required if multiple paths were not employed between the host computer 1 and the storage system 3.
- a disincentive is provided to implementing a multi-path system using the known system shown in FIG. 3.
- in contrast, when employing the virtual volume aspect of the present invention shown in FIG. 4, if all one hundred twenty (120) of the logical volumes are combined into a single virtual volume, the operating system on the host computer 1 need only create eight distinct target device labels to support the multi-path configuration, which has a de minimis impact on the initialization time of the system.
- using the virtual volume aspect of the present invention enables a multi-path system to be implemented without significantly increasing the system boot time.
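- The label counts quoted above follow directly from the path and volume counts; the short Python comparison below (the helper name is invented) makes the reduction explicit.

```python
def boot_time_labels(paths: int, volumes: int, use_virtual_volume: bool) -> int:
    """Number of target device labels the operating system must create at boot.
    With a single virtual volume, only one target is visible per path."""
    return paths * (1 if use_virtual_volume else volumes)

print(boot_time_labels(8, 120, use_virtual_volume=False))  # 960 labels
print(boot_time_labels(8, 120, use_virtual_volume=True))   # 8 labels
```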
- the virtual volume aspect of the present invention will also result in the generation of a unique label for the single representation of the virtual volume 61 (i.e., VV1) in the host computer, as well as unique labels for the deconstructed logical volumes LV1-LV20 that comprise it.
- These labels are respectively created by the multi-path mapping layer 25 (FIG. 5) and the virtual volume mapping layer 73 , and are not created by the operating system at boot time, so that the creation of these labels does not increase the initialization time for the system.
- the total number of labels created within the host computer 1 when employing the virtual volume aspect of the present invention is not significantly greater than in a system wherein multi-pathing was not employed.
- the number of additional labels created as compared to a single-path system is simply equal to the number of multiple paths employed, plus one when a single virtual volume is created for all of the logical volumes in the system.
- the use of the virtual volume aspect of the present invention also significantly reduces the number of target device addresses that the operating system on the host computer must support for a system including multiple paths. As mentioned above, this enables greater flexibility in terms of system configuration with respect to the number of logical volumes and multiple paths that can be supported. This is particularly important for multi-path systems that include relatively large numbers of multiple paths. In this respect, it is contemplated that the aspects of the present invention can be employed in a multi-path system that includes more than simply two paths, and that can include three, four or any greater number (e.g., thirty-two or more) of paths.
- in one embodiment, the virtual volume aspect of the present invention is employed to provide the host computer 1 with the capability of dynamically changing the configuration of the storage system 3 without rebooting the host computer 1.
- in the known system, each target device (e.g., a disk drive 5 a - b in FIG. 1 or a logical volume LV1 in FIG. 3) must be recognized and managed by the operating system on the host computer.
- in contrast, a target device (e.g., LV2 in FIG. 4) can be added to or removed from the storage system 3 without requiring that the host computer 1 be rebooted, because the target devices are managed by the virtual volume mapping layer 73 (FIG. 5) rather than by the operating system.
- the virtual volume aspect of the present invention also provides a significant advantage in a system wherein the resources of the storage system 3 are shared by two or more host computers.
- An example of such a system might employ a Fibre Channel fabric to connect each of multiple host computers to the storage system.
- in such a system, certain target devices (e.g., logical volumes) within the storage system 3 typically are dedicated to a subset (e.g., a single one) of the host computers, such that access to those target devices is denied from the other host computers coupled to the storage system.
- some type of volume configuration management is typically employed to partition the target devices on the storage system 3 into subsets with different access privileges.
- the above-described virtual volume techniques can be employed to group together the subsets of the target devices that are to be managed in the same way (i.e., which share common access privileges amongst the one or more host computers) into a single virtual volume. In this manner, access from the multiple host computers to the target devices can be managed by a volume configuration management scheme that deals only with a small number of virtual volumes, thereby simplifying this management process.
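- As a non-limiting sketch of that simplification, the Python fragment below (the host names, volume names and access table are all invented) groups logical volumes that share the same access privileges into one virtual volume each, so that configuration management authorizes hosts against a handful of virtual volumes rather than against every logical volume.

```python
from collections import defaultdict

# Hypothetical assignment of logical volumes to the hosts allowed to use them.
ACCESS = {
    "LV1": frozenset({"hostA"}),          "LV2": frozenset({"hostA"}),
    "LV3": frozenset({"hostA", "hostB"}), "LV4": frozenset({"hostA", "hostB"}),
    "LV5": frozenset({"hostB"}),          "LV6": frozenset({"hostB"}),
}

# Group volumes that are managed the same way into one virtual volume each.
groups: dict[frozenset, list[str]] = defaultdict(list)
for lv, hosts in ACCESS.items():
    groups[hosts].append(lv)

virtual_volumes = {f"VV{i}": (hosts, lvs)
                   for i, (hosts, lvs) in enumerate(groups.items(), start=1)}

def may_access(host: str, vv: str) -> bool:
    """Volume configuration management now checks one virtual volume,
    not each of its member logical volumes."""
    hosts, _ = virtual_volumes[vv]
    return host in hosts

for name, (hosts, lvs) in virtual_volumes.items():
    print(name, sorted(hosts), lvs)
print(may_access("hostB", "VV1"))   # False -- VV1 is dedicated to hostA
```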
- the virtual volume aspect of the present invention is not limited to presenting all of the logical volumes within the storage system 3 to the host computer 1 in a single virtual volume.
- the benefits of the virtual volume aspect of the present invention can be achieved by presenting a virtual volume to the host computer that includes less than the entire set of logical volumes supported by the storage system 3 .
- Including any number of two or more logical volumes in a virtual volume provides the advantages discussed above by reducing the number of target device labels that the operating system on the host computer 1 must support.
- a virtual volume can be created for a subset of the logical volumes included on the storage system 3 , while other logical volumes on the storage system can be presented directly to the host computer 1 , without being included in a larger virtual volume.
- multiple virtual volumes can be presented to the host computer 1 simultaneously, with each virtual volume corresponding to a subset of the logical volumes within the storage system 3 .
- distinct virtual volumes can be created for different volume groups within the storage system 3 , wherein each volume group can be associated with a particular application executing on the host computer 1 .
- the virtual volume aspect of the present invention can be implemented in any of numerous ways, and the present invention is not limited to any particular method of implementation.
- the virtual volumes are created using a “metavolume” that is created in the cache 11 in a storage system such as that shown in FIG. 1.
- the SYMMETRIX line of disk arrays available from EMC Corporation, Hopkinton, Mass. supports the creation of metavolumes in a storage system cache such as the cache 11 shown in FIG. 1.
- a metavolume is employed to concatenate together a plurality of logical volumes (or “hyper-volumes” as discussed below) to form a single large metavolume that looks to the host computer 1 like a single logical volume.
- a virtual volume in accordance with the present invention can be implemented, for example, by forming a metavolume in the cache 11 of the storage system 3 , with the metavolume including each of the logical volumes to be included in the virtual volume.
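- A metavolume, as used here, behaves like a concatenation with an offset table; the Python sketch below illustrates the idea only and is not EMC's implementation; the member list, sizes and function name are invented.

```python
import bisect

# Hypothetical members of a metavolume, in concatenation order, with sizes.
MEMBERS = [("LV1", 2048), ("LV2", 2048), ("HYPER3", 1024), ("LV4", 2048)]

# Precompute the starting offset of each member within the metavolume.
starts, total = [], 0
for _, size in MEMBERS:
    starts.append(total)
    total += size

def locate(meta_block: int) -> tuple[str, int]:
    """Map a block address within the metavolume to (member, block in member),
    which is where the storage system would actually service the I/O."""
    if not 0 <= meta_block < total:
        raise ValueError("address outside the metavolume")
    i = bisect.bisect_right(starts, meta_block) - 1
    name, _ = MEMBERS[i]
    return name, meta_block - starts[i]

print(total)            # 7168 blocks presented as one volume
print(locate(5000))     # ('HYPER3', 904)
```

- In the embodiment described above, such a concatenation would be formed in the cache 11, and the members could be logical volumes or hyper-volumes.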
- although metavolume technology provides a convenient way of implementing a virtual volume, the present invention is not limited in this respect, and a virtual volume can be implemented in numerous other ways.
- similarly, although the metavolume technology conveniently makes use of the cache 11 to form a metavolume, the present invention is not limited to employing a cache to form the virtual volume, and is not even limited to use with a storage system that includes a cache.
- although the metavolume technology provides a useful way to implement the virtual volume according to the present invention, there is a significant difference between a metavolume and a virtual volume.
- the metavolume is presented to the host computer 1 simply as a single large logical volume.
- the host computer has no knowledge about the structure of the metavolume (i.e., of what logical volumes or hyper-volumes make-up the metavolume), and therefore, the host computer simply treats the metavolume as a single large logical volume.
- the virtual volume aspect of the present invention provides not only a large concatenated volume to the host computer, but also provides the host computer with information relating to the structure of the virtual volume, so that the host computer 1 can deconstruct the virtual volume in the manner shown conceptually in FIG. 4 and can access each of its constituent logical volumes independently.
- the virtual volume can be formed by concatenating together not only logical volumes and/or hyper-volumes, but also metavolumes that will form a subset of the larger virtual volume and which will not be deconstructed by the host computer.
- any metavolumes that make up the virtual volume will remain intact.
- the host computer will be provided with information relating to the structure of the virtual volume, so that the host computer 1 can deconstruct the virtual volume into each of its constituent logical volumes, hyper-volumes and metavolumes. However, the host computer will not be provided with information concerning the internal structure of any metavolumes that make up the virtual volume.
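- One way to picture that structure information is a descriptor whose members are either individually addressable logical volumes or hyper-volumes, or metavolumes reported only as opaque extents; the descriptor format below is purely hypothetical.

```python
# Hypothetical structure descriptor for a virtual volume whose third member
# is itself a metavolume. The host learns the metavolume's name and size but
# nothing about its internal composition, so it stays intact.
VV_STRUCTURE = [
    {"name": "LV1",   "kind": "logical",    "blocks": 2048},
    {"name": "HYP7",  "kind": "hyper",      "blocks": 1024},
    {"name": "META2", "kind": "metavolume", "blocks": 8192},  # opaque to the host
]

def host_visible_units(structure):
    """Units the host may address independently after deconstruction.
    Metavolume members are returned as single, undivided units."""
    return [(m["name"], m["blocks"]) for m in structure]

print(host_visible_units(VV_STRUCTURE))
# [('LV1', 2048), ('HYP7', 1024), ('META2', 8192)]
```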
- each of the layers in the system shown in FIG. 5 can be implemented in numerous ways.
- the present invention is not limited to any particular manner of implementation.
- the application 21 and file system/LVM 23 layers are typically implemented in software that is stored in a memory (not shown) in the host computer and is executed on the processor 16 .
- the virtual volume mapping layer 73 and the multi-path mapping layer 25 can also be implemented in this manner.
- the mapping layers 25 , 73 can be implemented in the host bus adapters 15 .
- the adapters can each include a processor (not shown) that can execute software or firmware to implement the mapping layers 25 , 73 .
- the virtual volume mapping layer 71 can be implemented in the storage bus directors 9 in the storage system.
- the directors can each include a processor (not shown) that can execute software or firmware to implement the mapping layer 71 .
- the storage system mapping layer 27 can be implemented in the storage bus directors 9 or disk controllers 7 a - b in the storage system.
- the disk controllers 7 a - b can each include a processor (not shown) that can execute software or firmware to implement the mapping layer 27 .
- the virtual volume is created by concatenating together not only logical volumes that each corresponds to a physical storage device (e.g., disk drives 5 a - b in FIG. 1) in the storage system 3, but also "hyper-volumes".
- many storage systems support the splitting of a single physical storage device such as a disk drive into two or more logical storage devices or drives, referred to as hyper-volumes by EMC Corporation, and as LUNs in conventional RAID array terminology.
- the use of hyper-volumes is advantageous in that it facilitates management of the hyper-volumes within the storage system 3 , and in particular within the cache 11 .
- the cache 11 will typically include a particular number of cache slots dedicated to each logical volume or hyper-volume. By employing hyper-volumes, the cache 11 can manage a smaller volume of information, which may result in fewer collisions within the cache slots dedicated to each hyper-volume than might occur if the cache was organized using larger volume boundaries.
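- As a rough sketch of the splitting idea (the drive name, block counts and slot counts below are invented), a single physical drive can be carved into fixed-size hyper-volumes, and cache slots can then be accounted per hyper-volume rather than per whole drive.

```python
def split_into_hypervolumes(disk_name: str, disk_blocks: int,
                            hyper_blocks: int) -> list[tuple[str, int, int]]:
    """Carve one physical drive into fixed-size hyper-volumes.
    Returns (hyper-volume name, starting block on the drive, length)."""
    hypers = []
    start = 0
    index = 1
    while start < disk_blocks:
        length = min(hyper_blocks, disk_blocks - start)
        hypers.append((f"{disk_name}-HYP{index}", start, length))
        start += length
        index += 1
    return hypers

hypers = split_into_hypervolumes("disk5a", disk_blocks=16384, hyper_blocks=4096)
print(hypers)   # four hyper-volumes of 4096 blocks each

# With smaller units, cache slots can be dedicated per hyper-volume,
# which tends to spread activity across more slot groups.
cache_slots = {name: 64 for name, _, _ in hypers}   # invented slot count
print(cache_slots)
```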
- the virtual volume aspect of the present invention is employed to provide tremendous flexibility at the host computer 1 with respect to the manner in which the storage system 3 can be configured. This flexibility is enhanced further through the use of hyper-volumes to form a virtual volume.
- the virtual volume can be formed by a concatenation of numerous hyper-volumes that are not constrained to correspond to an entire one of the physical storage devices (e.g., disk drives 5 a - b ).
- the virtual volume can be formed by a concatenation of numerous smaller hyper-volumes, and the information regarding the structure of the virtual volume can be passed to the host computer 1 .
- the host computer 1 has the capability of dynamically changing the configuration of the storage system 3 in any of numerous ways.
- the host computer 1 can add or delete hyper-volumes to a particular volume set visible by the file system/LVM layer 23 , can change the size of the volume visible at that layer, etc.
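- Because the members are tracked by the virtual volume mapping layer 73 rather than registered with the operating system at boot, such a change amounts to an update of that layer's table; the toy Python class below (all names invented) sketches the idea.

```python
class VirtualVolumeMap:
    """Toy stand-in for the host's virtual volume mapping layer (layer 73):
    it tracks the members of a virtual volume and can change them on the fly,
    without any new operating-system target device being registered."""

    def __init__(self, members: dict[str, int]):
        self.members = dict(members)          # member name -> size in blocks

    @property
    def visible_size(self) -> int:
        return sum(self.members.values())

    def add_member(self, name: str, blocks: int) -> None:
        self.members[name] = blocks           # volume set grows, no reboot

    def remove_member(self, name: str) -> None:
        del self.members[name]                # volume set shrinks, no reboot

vv = VirtualVolumeMap({"HYP1": 4096, "HYP2": 4096})
print(vv.visible_size)        # 8192
vv.add_member("HYP3", 2048)   # reconfigure the storage system dynamically
print(vv.visible_size)        # 10240
```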
- the storage system 3 is provided with the capability of presenting a representation to the host computer 1 of any type of configuration desired, including a representation of the storage system as including a small number of relatively large virtual volumes, while the storage system 3 is able to manage much smaller volumes of data (e.g., hyper-volumes) internally to maximize the efficiency of the storage system 3 .
Abstract
A method and apparatus for managing a plurality of logical volumes in a computer system, the computer system including a processor and a storage system coupled to the processor, the storage system including at least one storage device, the storage system storing the plurality of logical volumes on the at least one storage device. At least two of the plurality of logical volumes are combined in the storage system into a virtual volume that is presented to the processor as a single logical volume. The storage system also presents the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes. Another aspect is directed to a multi-path computer system including a processor, a storage system including at least one storage device to store Y logical volumes, and X paths coupling the processor to the storage system. The processor is capable of accessing each of the Y logical volumes through each of the X paths, and includes Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
Description
- The present invention is directed to a method and apparatus for managing virtual storage devices in a storage system.
- Many computer systems include one or more host computers and one or more storage systems that store data used by the host computers. An example of such a system is shown in FIG. 1, and includes a host computer 1 and a storage system 3. The storage system typically includes a plurality of storage devices on which data is stored. In the exemplary system shown in FIG. 1, the storage system includes a plurality of disk drives 5 a-b, and a plurality of disk controllers 7 a-7 b that respectively control access to the disk drives 5 a and 5 b. The storage system 3 further includes a plurality of storage bus directors 9 that control communication with the host computer 1 over communication buses 17. The storage system 3 further includes a cache 11 to provide improved storage system performance. In particular, when the host computer 1 executes a read from the storage system 3, the storage system 3 may service the read from the cache 11 (when the data is stored in the cache), rather than from one of the disk drives 5 a-5 b, to execute the read more efficiently. Similarly, when the host computer 1 executes a write to the storage system 3, the corresponding storage bus director 9 can execute the write to the cache 11. Thereafter, the write can be destaged asynchronously, in a manner transparent to the host computer 1, to the appropriate one of the disk drives 5 a-5 b. Finally, the storage system 3 includes an internal bus 13 over which the storage bus directors 9, disk controllers 7 a-7 b and the cache 11 communicate.
- The host computer 1 includes a processor 16 and one or more host bus adapters 15 that each controls communication between the processor 16 and the storage system 3 via a corresponding one of the communication buses 17. It should be appreciated that rather than a single processor 16, the host computer 1 can include multiple processors. Each bus 17 can be any of a number of different types of communication links, with the host bus adapter 15 and the storage bus directors 9 being adapted to communicate using an appropriate protocol for the communication bus 17 coupled therebetween. For example, each of the communication buses 17 can be implemented as a SCSI bus, with the directors 9 and adapters 15 each being a SCSI driver. Alternatively, communication between the host computer 1 and the storage system 3 can be performed over a Fibre Channel fabric.
- As shown in the exemplary system of FIG. 1, some computer systems employ multiple paths for communicating between the host computer 1 and the storage system 3 (e.g., each path includes a host bus adapter 15, a bus 17 and a storage bus director 9 in FIG. 1). In many such systems, each of the host bus adapters 15 has the ability to access each of the disk drives 5 a-b, through the appropriate storage bus director 9 and disk controller 7 a-b. It should be appreciated that providing such multi-path capabilities enhances system performance, in that multiple communication operations between the host computer 1 and the storage system 3 can be performed simultaneously.
- Although the provision of multiple paths between the host computer 1 and the storage system 3 provides for improved system performance, it also results in some increased system complexity, particularly in so-called “open systems”. As used herein, the phrase open system is intended to indicate a non-mainframe environment, such that the host computer 1 employs commodity based hardware available from multiple vendors and runs a commodity-based operating system that is also available from multiple vendors. Unlike the mainframe environment, intelligent storage systems such as the storage system 3 shown in FIG. 1 have only recently been used with open systems. Thus, problems have been encountered in implementing an open computer system that includes multiple paths to an intelligent storage system.
- For example, conventional host computers 1 in an open system will not recognize that multiple paths have been formed to the same storage device within the storage system. Referring to the illustrative system of FIG. 1, the operating system on the host computer 1 will view the storage system 3 as having four times its actual number of disk drives 5 a-b, since four separate paths are provided to each of disk drives 5 a-b. To address this problem, conventional host computers in an open system have, as explained below, included an additional mapping layer, below the file system or logical volume manager (LVM), to reduce the number of storage devices (e.g., disk drives 5 a-b) visible at the application layer to the number of storage devices that actually exist on the storage system 3.
- FIG. 2 is a schematic representation of a number of mapping layers that may exist in a known multi-path computer system such as the one shown in FIG. 1. The system includes an application layer 21 which includes application programs executing on the processor 16 of the host computer 1. The application layer 21 will generally refer to storage locations used thereby with a label or identifier such as a file name, and will have no knowledge about where the file is physically stored on the storage system 3 (FIG. 1). Below the application layer 21 is a file system and/or a logical volume manager (LVM) 23 that maps the label or identifier specified by the application layer 21 to a logical volume that the host computer perceives to correspond directly to a physical device address (e.g., the address of one of the disk drives 5 a-b) within the storage system 3. Below the file system/LVM layer 23 is a multi-path mapping layer 25 that maps the logical volume address specified by the file system/LVM layer 23, through a particular one of the multiple system paths, to the logical volume address to be presented to the storage system 3. Thus, the multi-path mapping layer 25 not only specifies a particular logical volume address, but also specifies a particular one of the multiple system paths to access the specified logical volume.
- If the storage system 3 were not an intelligent storage system, the logical volume address specified by the multi-pathing layer 25 would identify a particular physical device (e.g., one of disk drives 5 a-b) within the storage system 3. However, for an intelligent storage system such as that shown in FIG. 1, the storage system itself may include a further mapping layer 27, such that the logical volume address passed from the host computer 1 may not correspond directly to an actual physical device (e.g., a disk drive 5 a-b) on the storage system 3. Rather, a logical volume specified by the host computer 1 can be spread across multiple physical storage devices (e.g., disk drives 5 a-b), or multiple logical volumes accessed by the host computer 1 can be stored on a single physical storage device.
- It should be appreciated from the foregoing that the multi-path mapping layer 25 performs two functions. First, it controls which of the multiple system paths is used for each access by the host computer 1 to a logical volume. Second, the multi-path mapping layer 25 also reduces the number of logical volumes visible to the file system/LVM layer 23. In particular, for a system including X paths between the host computer 1 and the storage system 3, and Y logical volumes defined on the storage system 3, the host bus adapters 15 see X times Y logical volumes. However, the multi-path mapping layer 25 reduces the number of logical volumes made visible to the file system/LVM layer 23 to equal only the Y distinct logical volumes that actually exist on the storage system 3.
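- By way of a non-limiting illustration, the following Python sketch (all names are invented and do not appear in the specification) shows both functions: it collapses the X times Y per-path device labels down to the Y distinct logical volumes, and it picks a path for each access.

```python
from itertools import cycle

# Hypothetical per-path labels as seen by the host bus adapters:
# one label per (path, logical volume) pair, i.e. X * Y entries.
paths = ["P1", "P2", "P3", "P4"]                      # X = 4 paths
logical_volumes = [f"LV{i}" for i in range(1, 21)]    # Y = 20 volumes
per_path_labels = [(p, lv) for p in paths for lv in logical_volumes]
assert len(per_path_labels) == len(paths) * len(logical_volumes)  # 80

# Function 1 of the multi-path layer: reduce the visible devices to the
# Y distinct logical volumes that actually exist on the storage system.
visible_to_file_system = sorted({lv for _, lv in per_path_labels},
                                key=lambda s: int(s[2:]))
assert len(visible_to_file_system) == 20

# Function 2: choose which path carries each access (round-robin here,
# purely as an example of a path-selection policy).
path_picker = cycle(paths)

def route(volume: str) -> tuple[str, str]:
    """Return the (path, volume) pair used for one access to `volume`."""
    return next(path_picker), volume

print(route("LV7"))   # e.g. ('P1', 'LV7')
print(route("LV7"))   # e.g. ('P2', 'LV7') -- a different path next time
```

- A real multi-path driver would typically base the path choice on load or availability; round-robin is used here only to keep the sketch short.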
- In a known multi-pathing system as described above in connection with FIGS. 1-2, the operating system executing on the processor 16 in the host computer 1 is required to manage (e.g., at the multi-path mapping layer 25) a number of logical volumes that is equal to the number of logical volumes that the host computer 1 would perceive the storage system 3 as storing if multi-pathing were not employed (Y in the example above), multiplied by the number of paths (e.g., X in the example above and four in FIG. 1) between the host computer 1 and the storage system 3. Referring to the illustrative system of FIG. 1, assuming the storage system 3 includes a total of twenty disk drives 5 a-b that each corresponds directly to a single logical volume, and the four paths 17 between the host computer 1 and the storage system 3, the operating system on the processor 16 would need to manage eighty logical volumes. In this respect, a unique label is generated for each independent path to a logical volume. Thus, for each of the twenty logical volumes present on the storage system 3, four unique labels will be generated, each specifying a different path (e.g., through an adapter 15, a bus 17 and a director 9) to the logical volume. These unique labels are used during multi-path operation to identify through which path an operation on the host computer 1 directed to a particular logical volume is to be executed.
- FIG. 3 is a conceptual representation of the manner in which complexity is introduced into the host computer 1 due to the use of multiple paths P1-P4. In the example shown in FIG. 3, the storage system 3 includes twenty logical volumes 51, labeled LV1-LV20. As shown in FIG. 3, the host computer 1 includes four separate groups of labels 53-56 for each group of logical volumes LV1-LV20. These groups of labels are identified as P1LV1-P1LV20, P2LV1-P2LV20, P3LV1-P3LV20 and P4LV1-P4LV20 to indicate that there are four separate paths (i.e., P1-P4) to each of the groups of logical volumes LV1-LV20. Finally, as shown in FIG. 3, the multi-path mapping layer 25 (FIG. 2) consolidates the four groups of labels 53-56 to represent only the twenty unique logical volumes LV1-LV20 at 59, so that the file system/LVM layer 23 sees the correct number of logical volumes actually present on the storage system 3.
- The manner in which the known multi-path system described above is implemented presents two independent but related problems. First, it should be appreciated that the operating system for a typical processor 16 maintains a number of resources to manage the target devices that it recognizes as coupled to the adapters 15 at the host computer. For many processors, particularly when the host computer 1 is an open system, these resources are limited. Thus, there is a constraint on the number of target devices that the operating system will support (e.g., the operating system will simply not boot if the total number of target devices exceeds the number supported). For example, the NT operating system has a limit of approximately four-hundred target devices, and thirty-two target devices per path. It should be immediately apparent that implementing the multi-path system in the manner described above places severe limitations on the type of system that can be configured. For example, since the total number of target devices that the operating system must support is equal to the number of actual logical volumes multiplied by the number of paths in the above-described system, a trade off is encountered between the total number of paths and the total number of logical volumes that can be employed. Although in the example described above the number of paths is equal to four and the number of logical volumes is equal to twenty, it should be appreciated that in an actual system, it is generally desirable to employ significantly more paths (e.g., thirty-two or greater) and significantly more logical volumes. In fact, it is often desirable to employ a system including a number of logical volumes and a number of paths that, when multiplied together, would greatly exceed the limit of four hundred imposed by the NT operating system. Thus, implementing the multi-path system in the manner described above places limitations on both the number of actual logical volumes that can be employed in the system, and the number of paths that can be employed.
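- The trade-off described above can be made concrete with a small feasibility check; the 400-device and thirty-two-per-path figures are the approximate NT limits quoted above, and the helper name below is invented.

```python
def fits_conventional_limits(num_paths: int, num_volumes: int,
                             max_targets: int = 400,
                             max_per_path: int = 32) -> bool:
    """In the conventional scheme every path exposes every volume,
    so the operating system must track num_paths * num_volumes targets."""
    return (num_volumes <= max_per_path
            and num_paths * num_volumes <= max_targets)

print(fits_conventional_limits(4, 20))    # True:  80 targets
print(fits_conventional_limits(8, 120))   # False: 960 targets, 120 > 32 per path
print(fits_conventional_limits(32, 20))   # False: 640 targets exceed 400
```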
- A second related problem is that multiplying the number of logical volumes by the number of paths can result in an extremely large number of target devices to be managed by the operating system, which can result in an extremely long boot time when initializing the host computer 1. Thus, even if the operating system on the processor 16 includes a satisfactorily large limit on the total number of target devices that can be supported, the implementation of the multi-path system in the manner described above can result in extremely long boot times for the host computer 1.
- One illustrative embodiment of the invention is directed to a method of managing a plurality of logical volumes in a computer system, the computer system including a processor and a storage system coupled to the processor, the storage system including at least one storage device, the storage system storing the plurality of logical volumes on the at least one storage device. The method comprises steps of: (A) combining, in the storage system, at least two of the plurality of logical volumes into a virtual volume that includes the at least two of the plurality of logical volumes; (B) presenting the virtual volume to the processor as a single logical volume; and (C) presenting the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
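- The three steps can be pictured with a minimal storage-side sketch in Python; the class names, field names and sizes below are illustrative only and do not correspond to any actual product interface.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    name: str          # e.g. "LV3"
    blocks: int        # size of the member logical volume

@dataclass
class VirtualVolume:
    name: str
    members: list[Extent]

    @property
    def blocks(self) -> int:
        # (B) what the processor sees: one logical volume of the total size
        return sum(m.blocks for m in self.members)

    def structure_info(self) -> list[tuple[str, int, int]]:
        # (C) information that lets the processor deconstruct the volume:
        # (member name, starting offset within the virtual volume, length)
        info, offset = [], 0
        for m in self.members:
            info.append((m.name, offset, m.blocks))
            offset += m.blocks
        return info

# (A) combine two or more logical volumes into a virtual volume
vv1 = VirtualVolume("VV1", [Extent(f"LV{i}", 2048) for i in range(1, 21)])
print(vv1.blocks)                 # presented as a single 40960-block volume
print(vv1.structure_info()[:2])   # [('LV1', 0, 2048), ('LV2', 2048, 2048)]
```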
- Another illustrative embodiment of the invention is directed to a storage system for use in a computer system including a processor coupled to the storage system. The storage system comprises at least one storage device to store a plurality of logical volumes; and a controller to combine at least two of the plurality of logical volumes into a virtual volume that includes the at least two of the plurality of logical volumes, to present the virtual volume to the processor as a single logical volume, and to further present the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
- A further illustrative embodiment of the invention is directed to a host computer for use in a computer system including a storage system coupled to the host computer. The storage system includes at least one storage device to store a plurality of logical volumes, combines at least two of the plurality of logical volumes into a virtual volume and presents the virtual volume to the processor as a single logical volume. The host computer comprises a processor and means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
- Another illustrative embodiment of the invention is directed to a multi-path computer system comprising: a processor; a storage system including at least one storage device to store a plurality of logical volumes, the plurality of logical volumes including at least Y logical volumes; and a plurality of paths coupling the processor to the storage system, the plurality of paths including X paths coupling the processor to the storage system. The processor is capable of accessing each of the Y logical volumes through each of the X paths, and wherein the processor includes Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
- A further illustrative embodiment of the invention is directed to a host computer for use in a multi-path computer system including a storage system having at least one storage device to store a plurality of logical volumes, the plurality of logical volumes including at least Y logical volumes. The multi-path computer system further includes X paths coupling the host computer to the storage system. The host computer comprises a processor capable of accessing each of the Y logical volumes through each of the X paths, the processor including Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
- FIG. 1 is a block diagram of an exemplary multi-path computing system on which aspects of the present invention can be implemented;
- FIG. 2 is a schematic representation of a number of mapping layers that exist in a known multi-path computing system;
- FIG. 3 is a conceptual illustration of the manner in which logical volumes are managed in a prior art multi-path computing system;
- FIG. 4 is a conceptual illustration of the manner in which logical volumes are managed according to a virtual volume aspect of the present invention; and
- FIG. 5 is a schematic representation of a number of mapping layers that can be employed to implement the virtual volume aspect of the present invention.
- In accordance with one illustrative embodiment of the present invention, an improved method and apparatus for implementing a multi-path system is provided. In one embodiment of the present invention, the logical volumes implemented on the storage system (e.g.,
storage system 3 of FIG. 1) are merged into a relatively small number of larger virtual volumes that are presented to the host computer. In this manner, the number of target devices that the operating system on the host computer must manage is significantly reduced. The storage system can also provide information to the host computer that enables it to deconstruct each of the larger virtual volumes in a manner described below.
- In the examples discussed below, the aspects of the present invention are employed with an open system, and with a storage device that includes a plurality of disk drives. However, it should be appreciated that the present invention is not limited in this respect. The present invention can be employed with any type of storage system (e.g., tape drives, etc.) and is not limited to use with a disk drive storage system. Similarly, although the aspects of the present invention discussed below are particularly advantageous for use in connection with open systems, the present invention is not limited in this respect, as aspects of the present invention can also be employed in a mainframe environment.
- FIG. 4 is a conceptual diagram of the manner in which a virtual volume is employed in accordance with one exemplary embodiment of the present invention, using the same example described above in connection with FIG. 1, wherein the computer system includes four paths (P1-P4) between the
host computer 1 and the storage system 3, and wherein the storage system includes twenty logical volumes (i.e., LV1-LV20). In accordance with one embodiment of the present invention, multiple logical volumes LV1-LV20 are combined in the storage system 3 into a larger virtual volume 61, labeled in FIG. 4 as VV1. The single virtual volume 61 then is presented to the host computer 1 over each of the four paths P1-P4, rather than having the twenty logical volumes LV1-LV20 that make up VV1 presented separately to the host computer over each of these paths. Therefore, the host computer 1 sees only four target devices 63-66, respectively labeled as P1VV1 through P4VV1 in FIG. 4 to indicate that the virtual volume VV1 is visible over each of the paths P1-P4. In the host computer 1, the four target devices 63-66 are combined to form a single representation of the virtual volume 61 (i.e., VV1) to reflect that the same virtual volume is perceived by the host computer 1 over each of the paths P1-P4. This consolidation process is similar to that performed by the multi-path mapping layer 25 in the known system discussed above in connection with FIG. 2.
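Purely as an illustration of the target-device arithmetic described above, the Python sketch below contrasts the per-path, per-volume targets of FIG. 3 with the per-path virtual volume targets of FIG. 4. The function names and label format are hypothetical, not part of the patented implementation.

```python
# Illustrative model only; names are hypothetical, not the patent's implementation.

def targets_without_virtual_volume(paths, logical_volumes):
    """FIG. 3 style: one target device per (path, logical volume) pair."""
    return [f"{path}{lv}" for path in paths for lv in logical_volumes]

def targets_with_virtual_volume(paths, virtual_volume_name):
    """FIG. 4 style: one target device per path for the single virtual volume."""
    return [f"{path}{virtual_volume_name}" for path in paths]

paths = [f"P{i}" for i in range(1, 5)]              # P1-P4
logical_volumes = [f"LV{i}" for i in range(1, 21)]  # LV1-LV20

print(len(targets_without_virtual_volume(paths, logical_volumes)))  # 80 targets
print(targets_with_virtual_volume(paths, "VV1"))  # ['P1VV1', 'P2VV1', 'P3VV1', 'P4VV1']
```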
- The storage system 3 also provides the host computer 1 with information relating to the structure of the virtual volume VV1. This information enables the host computer 1 to deconstruct the virtual volume into the logical volumes LV1-LV20 that comprise it. The deconstructed logical volumes LV1-LV20 can then be presented to the file system/LVM layer 23 (FIG. 5, discussed below) in much the same manner as would be done if no multi-pathing or virtual volume mapping were employed and the logical volumes LV1-LV20 within the storage system 3 were simply presented over a single path to the host computer 1. Thus, although the logical volumes LV1-LV20 are presented to the host computer 1 as a single virtual volume VV1, the host computer is able to deconstruct the virtual volume and thereafter access the logical volumes LV1-LV20 independently, rather than having to access all of the logical volumes LV1-LV20 together as part of the virtual volume VV1.
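As a rough sketch of the kind of structure information involved (the descriptor format shown here is an assumption for illustration; the patent does not prescribe one), host-side deconstruction can be pictured as splitting the virtual volume's block range back into per-logical-volume extents:

```python
# Hypothetical sketch of deconstructing a virtual volume from structure
# information supplied by the storage system; field names are assumptions.

def deconstruct(virtual_volume_blocks, structure):
    """Split a virtual volume, described as ordered (name, block_count)
    extents, back into per-logical-volume block ranges."""
    volumes = {}
    offset = 0
    for name, block_count in structure:
        volumes[name] = range(offset, offset + block_count)
        offset += block_count
    assert offset == virtual_volume_blocks, "structure must cover the whole virtual volume"
    return volumes

# VV1 built from LV1-LV20, each 1000 blocks here purely for illustration.
structure = [(f"LV{i}", 1000) for i in range(1, 21)]
lv_ranges = deconstruct(20_000, structure)
print(lv_ranges["LV7"])   # range(6000, 7000): LV7 occupies blocks 6000-6999 of VV1
```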
- FIG. 5 is a schematic representation of a number of mapping layers that may exist in a multi-path computer system that employs the virtual volume aspect of the present invention described in connection with FIG. 4. As with the known system described above in connection with FIG. 2, a computer system employing the virtual volume aspect of the present invention may include an application layer 21, a file system and/or LVM layer 23, and a storage system mapping layer 27 that each performs functions similar to those described above in connection with FIG. 2. In addition, within the storage system 3 is a virtual volume mapping layer 71 that performs the function of mapping between the larger virtual volume 61 (i.e., VV1) and the logical volumes LV1-LV20 that comprise it. For example, in the example discussed above, the virtual volume mapping layer 71 performs the function of combining the twenty logical volumes LV1-LV20 into the virtual volume VV1. A virtual volume mapping layer 73 is also provided in the host computer 1 to map between the virtual volume and the logical volumes that comprise it. For example, the mapping layer 73 will make use of the information provided by the storage system 3 (FIG. 1) as to the structure of the virtual volume to deconstruct the virtual volume into the logical volumes that comprise it. Finally, the system also includes a multi-path mapping layer 25 that is similar in many respects to that described in connection with FIG. 2, and that maps between the multiple target devices 63-66 (FIG. 4) corresponding to the multiple paths P1-P4 and the single reconstructed representation of the virtual volume 61.
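A minimal sketch, under assumed interfaces, of how the host-side layers of FIG. 5 might cooperate: the multi-path layer merges the per-path targets into one representation of VV1 and spreads I/O across the paths, while the host-side virtual volume layer maps an I/O on a deconstructed logical volume to a block offset within VV1. All class and method names here are hypothetical illustrations, not the patent's implementation.

```python
# Hypothetical host-side composition of the FIG. 5 layers; names are assumptions.
import itertools

class MultiPathLayer:
    """Merges the per-path targets (P1VV1..P4VV1) into one representation and
    round-robins I/O across the available paths."""
    def __init__(self, per_path_targets):
        self.targets = per_path_targets
        self._rr = itertools.cycle(per_path_targets)

    def submit(self, block, data):
        target = next(self._rr)               # pick a path for this I/O
        return (target, block, data)          # stand-in for issuing the request

class HostVirtualVolumeLayer:
    """Maps I/O on a deconstructed logical volume to an offset inside VV1."""
    def __init__(self, structure, multipath):
        self.offsets, off = {}, 0
        for name, blocks in structure:
            self.offsets[name] = off
            off += blocks
        self.multipath = multipath

    def write(self, lv_name, lv_block, data):
        return self.multipath.submit(self.offsets[lv_name] + lv_block, data)

mp = MultiPathLayer(["P1VV1", "P2VV1", "P3VV1", "P4VV1"])
vv_layer = HostVirtualVolumeLayer([(f"LV{i}", 1000) for i in range(1, 21)], mp)
print(vv_layer.write("LV3", 10, b"payload"))  # e.g. ('P1VV1', 2010, b'payload')
```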
- It should be appreciated that the virtual volume aspect of the present invention provides a number of advantages when used in conjunction with a multi-path computer system such as that shown in FIG. 1. In particular, the virtual volume aspect of the present invention significantly reduces the number of target devices (e.g., 63-66 in FIG. 4) that must be managed by the operating system on the host computer 1. This can significantly reduce the initialization time for the computer system, since as described above, the necessity of managing a large number of target devices can significantly slow down the boot time of the system. In addition, reducing the number of target devices that the operating system of the host computer 1 must manage greatly increases flexibility in possible system configurations, particularly for host computers with operating systems that have strict constraints on the number of target devices that can be managed. In particular, by reducing the total number of target devices that the operating system of the host computer 1 must support, the virtual volume aspect of the present invention can enable the use of a greater number of paths between the host computer 1 and the storage system 3, a greater number of logical volumes provided per path, a greater total number of logical volumes on the storage system 3 that are useable by the host computer 1, or all of the above.
- The advantages of using the virtual volume aspect of the present invention in connection with a multi-path system should be immediately apparent from the foregoing, and are highlighted by a comparison of the conceptual illustrations of the prior art system of FIG. 3 and the virtual volume system of FIG. 4. For the illustrative example shown wherein the storage system includes twenty logical volumes and the computer system includes four paths, the prior art system illustrated in FIG. 3 creates eighty distinct labels, represented at 53-56, for the eighty target devices that the host computer 1 perceives as available over its four paths P1-P4. By contrast, using the virtual volume aspect of the present invention shown in FIG. 4, the host computer creates only four labels for four distinct target devices 63-66. As discussed above, this reduction in the number of target device labels can significantly reduce the boot or initialization time of the host computer 1, and can further enable the system to boot with and use a greater number of paths and/or logical volumes in the multi-path computing system.
- It should be appreciated that the advantages of employing the virtual volume aspect of the present invention increase in proportion to the number of logical volumes and/or the number of paths employed in the multi-path computing system. For example, for an exemplary system such as that shown in FIG. 1 that employs one hundred twenty (120) logical volumes and eight separate paths between the host computer and the storage system, employing the known system illustrated in FIG. 3 requires that the operating system on the host computer initialize nine hundred sixty (960) distinct target device labels. It has been found that initializing such a system can take approximately five hours. This is a significant increase in the boot time for the system over what would be required if multiple paths were not employed between the host computer 1 and the storage system 3. Thus, a disincentive is provided to implementing a multi-path system using the known system shown in FIG. 3. Conversely, using the virtual volume aspect of the present invention shown in FIG. 4, if all one hundred twenty (120) of the logical volumes are combined into a single virtual volume, the operating system on the host computer 1 need only create eight distinct target device labels to support the multi-path configuration, which has a de minimis impact on the initialization time of the system. Thus, using the virtual volume aspect of the present invention enables a multi-path system to be implemented without significantly increasing the system boot time.
- It should be appreciated from the foregoing that in addition to the unique labels generated for the target devices 63-66 in the illustrative example of FIG. 4, the virtual volume aspect of the present invention will also result in the generation of a unique label for the single representation of the virtual volume 61 (i.e., VV1) in the host computer, as well as unique labels for the deconstructed logical volumes LV1-LV20 that comprise it. These labels are respectively created by the multi-path mapping layer 25 (FIG. 5) and the virtual volume mapping layer 73, and are not created by the operating system at boot time, so that the creation of these labels does not increase the initialization time for the system. In addition, even with the creation of these additional labels, it should be appreciated that the total number of labels created within the host computer 1 when employing the virtual volume aspect of the present invention is not significantly greater than in a system wherein multi-pathing was not employed. For example, for the illustrative system discussed above in which one hundred twenty (120) logical volumes are provided along with eight paths, if a single virtual volume were created to include all one hundred twenty (120) logical volumes, a total of eight unique target device addresses would be generated by the operating system, a single virtual volume identifier would be created by the multi-path mapping layer 25, and then one hundred twenty (120) unique labels would be created for the deconstructed logical volumes by the virtual volume mapping layer 73. In this respect, the number of additional labels created as compared to a single-path system is simply equal to the number of multiple paths employed, plus one, when a single virtual volume is created for all of the logical volumes in the system.
- In addition to the reduction in the initialization time for the system, it should be appreciated that the use of the virtual volume aspect of the present invention also significantly reduces the number of target device addresses that the operating system on the host computer must support for a system including multiple paths. As mentioned above, this enables greater flexibility in terms of system configuration with respect to the number of logical volumes and multiple paths that can be supported. This is particularly important for multi-path systems that include relatively large numbers of multiple paths. In this respect, it is contemplated that the aspects of the present invention can be employed in a multi-path system that includes more than simply two paths, and that can include three, four or any greater number (e.g., thirty-two or more) of paths.
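To make the label accounting above concrete, the short sketch below (illustrative arithmetic only; the function name is hypothetical) tallies the labels created in the three cases discussed: a single-path system, a multi-path system without virtual volumes, and a multi-path system with a single virtual volume.

```python
# Label accounting for X paths and Y logical volumes; illustrative arithmetic only.
def label_counts(x_paths, y_volumes):
    single_path = y_volumes                   # one label per logical volume
    multipath_no_vv = x_paths * y_volumes     # one label per (path, volume) pair
    multipath_single_vv = (
        x_paths                               # OS target labels, one per path
        + 1                                   # merged representation of the virtual volume
        + y_volumes                           # deconstructed logical-volume labels
    )
    return single_path, multipath_no_vv, multipath_single_vv

print(label_counts(8, 120))   # (120, 960, 129): only X + 1 more labels than single-path
```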
- In accordance with another embodiment of the present invention, the virtual volume aspect of the present invention is employed to provide the
host computer 1 with the capability of dynamically changing the configuration of the storage system 3 without rebooting the host computer 1. It should be appreciated that in conventional systems, adding or removing a target device (e.g., a disk drive 5 a-c in FIG. 1 or a logical volume LV1 in FIG. 3) from the storage system 3 requires that the host computer be rebooted, because each target device is managed directly by the operating system of the processor 16. Conversely, in accordance with the virtual volume aspect of the invention, it is only the virtual volumes (e.g., 63-66 in FIG. 4) that are managed by the operating system. The target devices are managed by the virtual volume mapping layer 73 (FIG. 5). Thus, in accordance with one aspect of the present invention, a target device (e.g., LV2 in FIG. 4) can be added to or removed from the storage system 3 without requiring that the host computer 1 be rebooted, because the target devices are managed by the virtual volume mapping layer 73 (FIG. 5) rather than by the operating system.
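One hypothetical way to picture this is sketched below: the operating system's view (one target per path) never changes, while logical volumes are added to or removed from a host-side virtual volume map at run time. The class and method names are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch: the OS only tracks the per-path virtual volume targets,
# so adding or removing a logical volume only updates this mapping layer and
# no host reboot is modeled.
class VirtualVolumeMap:
    def __init__(self, structure):
        self.structure = list(structure)      # ordered (name, block_count) extents

    def add_volume(self, name, block_count):
        self.structure.append((name, block_count))

    def remove_volume(self, name):
        self.structure = [(n, b) for n, b in self.structure if n != name]

    def visible_volumes(self):
        return [n for n, _ in self.structure]

vmap = VirtualVolumeMap([("LV1", 1000), ("LV2", 1000)])
vmap.add_volume("LV21", 500)      # storage reconfigured; OS target count unchanged
vmap.remove_volume("LV2")
print(vmap.visible_volumes())     # ['LV1', 'LV21']
```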
- The virtual volume aspect of the present invention also provides a significant advantage in a system wherein the resources of the storage system 3 are shared by two or more host computers. An example of such a system might employ a Fibre Channel fabric to connect each of multiple host computers to the storage system. In such a system, certain target devices (e.g., logical volumes) within the storage system 3 typically are dedicated to a subset (e.g., a single one) of the host computers, such that access to those target devices is denied to the other host computers coupled to the storage system. Thus, some type of volume configuration management is typically employed to partition the target devices on the storage system 3 into subsets with different access privileges. Multi-path systems complicate this volume configuration management because access to the target devices within the storage system 3 must be managed across each of the multiple paths. In accordance with one aspect of the present invention, the above-described virtual volume techniques can be employed to group together the subsets of the target devices that are to be managed in the same way (i.e., which share common access privileges amongst the one or more host computers) into a single virtual volume. In this manner, access from the multiple host computers to the target devices can be managed by a volume configuration management scheme that deals only with a small number of virtual volumes, thereby simplifying this management process.
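As an illustration of managing access at virtual-volume granularity (a sketch under assumed names; actual volume configuration management products differ), access privileges can be recorded once per virtual volume rather than once per target device and path:

```python
# Illustrative only: volume configuration management done per virtual volume
# rather than per (target device, path). Names are hypothetical.
class VolumeConfigManager:
    def __init__(self):
        self.acl = {}                              # virtual volume -> set of allowed hosts

    def grant(self, virtual_volume, host):
        self.acl.setdefault(virtual_volume, set()).add(host)

    def allowed(self, virtual_volume, host):
        return host in self.acl.get(virtual_volume, set())

mgr = VolumeConfigManager()
mgr.grant("VV1", "hostA")          # LV1-LV20 managed as one unit via VV1
print(mgr.allowed("VV1", "hostA")) # True
print(mgr.allowed("VV1", "hostB")) # False: denied for every constituent volume at once
```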
- The virtual volume aspect of the present invention is not limited to presenting all of the logical volumes within the storage system 3 to the host computer 1 in a single virtual volume. In this respect, it should be appreciated that the benefits of the virtual volume aspect of the present invention can be achieved by presenting a virtual volume to the host computer that includes less than the entire set of logical volumes supported by the storage system 3. Including any number of two or more logical volumes in a virtual volume provides the advantages discussed above by reducing the number of target device labels that the operating system on the host computer 1 must support. Thus, it is contemplated that a virtual volume can be created for a subset of the logical volumes included on the storage system 3, while other logical volumes on the storage system can be presented directly to the host computer 1, without being included in a larger virtual volume. Furthermore, it is also contemplated that multiple virtual volumes can be presented to the host computer 1 simultaneously, with each virtual volume corresponding to a subset of the logical volumes within the storage system 3. For example, distinct virtual volumes can be created for different volume groups within the storage system 3, wherein each volume group can be associated with a particular application executing on the host computer 1.
- The virtual volume aspect of the present invention can be implemented in any of numerous ways, and the present invention is not limited to any particular method of implementation. In accordance with one embodiment of the present invention, the virtual volumes are created using a “metavolume” that is created in the cache 11 in a storage system such as that shown in FIG. 1. The SYMMETRIX line of disk arrays available from EMC Corporation, Hopkinton, Mass., supports the creation of metavolumes in a storage system cache such as the cache 11 shown in FIG. 1. A metavolume is employed to concatenate together a plurality of logical volumes (or “hyper-volumes” as discussed below) to form a single large metavolume that looks to the host computer 1 like a single logical volume. Thus, a virtual volume in accordance with the present invention can be implemented, for example, by forming a metavolume in the cache 11 of the storage system 3, with the metavolume including each of the logical volumes to be included in the virtual volume.
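Conceptually, the concatenation can be pictured as in the sketch below, which resolves a block address within the large volume to the constituent volume that holds it. This is an illustration of the general idea only, not EMC's metavolume implementation, and the names used are hypothetical.

```python
# A sketch of concatenation in the storage system: map a block address within
# the virtual (meta) volume back to the constituent volume that holds it.
import bisect

class Metavolume:
    def __init__(self, members):                  # members: ordered (name, block_count)
        self.names, self.starts, off = [], [], 0
        for name, blocks in members:
            self.names.append(name)
            self.starts.append(off)
            off += blocks
        self.total_blocks = off

    def locate(self, vv_block):
        """Return (member volume, block within that member) for a VV1 block."""
        idx = bisect.bisect_right(self.starts, vv_block) - 1
        return self.names[idx], vv_block - self.starts[idx]

meta = Metavolume([(f"LV{i}", 1000) for i in range(1, 21)])
print(meta.locate(2010))   # ('LV3', 10)
```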
- Although the metavolume technology provides a convenient way of implementing a virtual volume, it should be appreciated that the present invention is not limited in this respect, and that a virtual volume can be implemented in numerous other ways. For example, although the metavolume technology conveniently makes use of the cache 11 to form a metavolume, it should be appreciated that the present invention is not limited to employing a cache to form the virtual volume, and is not even limited to use with a storage system that includes a cache. In addition, it should be appreciated that although the metavolume technology provides a useful way to implement the virtual volume according to the present invention, there is a significant difference between a metavolume and a virtual volume. In particular, in known systems that have implemented a metavolume, the metavolume is presented to the host computer 1 simply as a single large logical volume. The host computer has no knowledge about the structure of the metavolume (i.e., of what logical volumes or hyper-volumes make up the metavolume), and therefore, the host computer simply treats the metavolume as a single large logical volume. By contrast, as described above, the virtual volume aspect of the present invention provides not only a large concatenated volume to the host computer, but also provides the host computer with information relating to the structure of the virtual volume, so that the host computer 1 can deconstruct the virtual volume in the manner shown conceptually in FIG. 4 and can access each of its constituent logical volumes independently.
- It should be appreciated that when metavolume technology is employed to form a virtual volume, the virtual volume can be formed by concatenating together not only logical volumes and/or hyper-volumes, but also metavolumes that will form a subset of the larger virtual volume and which will not be deconstructed by the host computer. Thus, when the virtual volume is deconstructed by the host computer, any metavolumes that make up the virtual volume will remain intact. The host computer will be provided with information relating to the structure of the virtual volume, so that the
host computer 1 can deconstruct the virtual volume into each of its constituent logical volumes, hyper-volumes and metavolumes. However, the host computer will not be provided with information concerning the internal structure of any metavolumes that make up the virtual volume. - It should be appreciated that each of the layers in the system shown in FIG. 5 can be implemented in numerous ways. The present invention is not limited to any particular manner of implementation. The
application 21 and file system/LVM 23 layers are typically implemented in software that is stored in a memory (not shown) in the host computer and is executed on the processor 16. The virtual volume mapping layer 73 and the multi-path mapping layer 25 can also be implemented in this manner. Alternatively, the mapping layers 25, 73 can be implemented in the host bus adapters 15. For example, the adapters can each include a processor (not shown) that can execute software or firmware to implement the mapping layers 25, 73. The virtual volume mapping layer 71 can be implemented in the storage bus directors 9 in the storage system. For example, the directors can each include a processor (not shown) that can execute software or firmware to implement the mapping layer 71. Finally, the storage system mapping layer 27 can be implemented in the storage bus directors 9 or disk controllers 7 a-b in the storage system. For example, the disk controllers 7 a-b can each include a processor (not shown) that can execute software or firmware to implement the mapping layer 27.
- In accordance with a further embodiment of the present invention, the virtual volume is created by concatenating together not only logical volumes that each correspond to a physical storage device (e.g., disk drives 5 a-b in FIG. 1) in the
storage system 3, but also by concatenating together “hyper-volumes”. Many storage systems support the splitting of a single physical storage device such as a disk drive into two or more logical storage devices or drives, referred to as hyper-volumes by EMC Corporation, and as LUNs in conventional RAID array terminology. The use of hyper-volumes is advantageous in that it facilitates management of the hyper-volumes within the storage system 3, and in particular within the cache 11. In this respect, it should be appreciated that the cache 11 will typically include a particular number of cache slots dedicated to each logical volume or hyper-volume. By employing hyper-volumes, the cache 11 can manage a smaller volume of information, which may result in fewer collisions within the cache slots dedicated to each hyper-volume than might occur if the cache were organized using larger volume boundaries.
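As a simple illustration of carving a physical drive into hyper-volumes (the sizes, names, and splitting scheme below are assumptions for illustration; the patent does not prescribe any particular scheme):

```python
# Hypothetical sketch of splitting one physical drive into hyper-volumes that
# can then be concatenated, together with hyper-volumes from other drives,
# into a virtual volume.
def split_into_hypervolumes(drive_name, drive_blocks, hyper_blocks):
    """Carve a physical drive into equally sized hyper-volumes (last may be short)."""
    hypers = []
    for i, start in enumerate(range(0, drive_blocks, hyper_blocks)):
        size = min(hyper_blocks, drive_blocks - start)
        hypers.append((f"{drive_name}-hv{i}", size))
    return hypers

hvs = split_into_hypervolumes("disk5a", 4000, 1500)
print(hvs)   # [('disk5a-hv0', 1500), ('disk5a-hv1', 1500), ('disk5a-hv2', 1000)]
```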
- In accordance with another embodiment of the present invention, the virtual volume aspect of the present invention is employed to provide tremendous flexibility at the host computer 1 with respect to the manner in which the storage system 3 can be configured. This flexibility is enhanced further through the use of hyper-volumes to form a virtual volume. In this respect, the virtual volume can be formed by a concatenation of numerous hyper-volumes that are not constrained to correspond to an entire one of the physical storage devices (e.g., disk drives 5 a-b). Thus, the virtual volume can be formed by a concatenation of numerous smaller hyper-volumes, and the information regarding the structure of the virtual volume can be passed to the host computer 1. In accordance with one embodiment of the present invention, the host computer 1 has the capability of dynamically changing the configuration of the storage system 3 in any of numerous ways. Thus, the host computer 1 can add hyper-volumes to, or delete hyper-volumes from, a particular volume set visible to the file system/LVM layer 23, can change the size of the volume visible at that layer, etc. In addition, the storage system 3 is provided with the capability of presenting a representation to the host computer 1 of any type of configuration desired, including a representation of the storage system as including a small number of relatively large virtual volumes, while the storage system 3 is able to manage much smaller volumes of data (e.g., hyper-volumes) internally to maximize the efficiency of the storage system 3.
- Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The invention is limited only as defined by the following claims and the equivalents thereto.
Claims (53)
1. A method of managing a plurality of logical volumes in a computer system, the computer system including a processor and a storage system coupled to the processor, the storage system including at least one storage device, the storage system storing the plurality of logical volumes on the at least one storage device, the method comprising steps of:
(A) combining, in the storage system, at least two of the plurality of logical volumes into a virtual volume that includes the at least two of the plurality of logical volumes;
(B) presenting the virtual volume to the processor as a single logical volume; and
(C) presenting the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
2. The method of claim 1 , wherein the computer system is a multi-path computer system including a plurality of paths coupling the processor to the storage system, and wherein the step (B) includes a step of presenting the virtual volume to the processor over each of the plurality of paths.
3. The method of claim 1 , wherein the computer system is a multi-path computer system including at least three paths coupling the processor to the storage system, and wherein the step (B) includes a step of presenting the virtual volume to the processor over each of the at least three paths.
4. The method of claim 1 , wherein the storage system includes a cache, and wherein the step (A) includes a step of combining the at least two of the plurality of logical volumes into the virtual volume in the cache.
5. The method of claim 1 , wherein the step (A) includes steps of:
subdividing the at least one storage device to form each of the at least two of the plurality of logical volumes, so that each of the at least two of the plurality of logical volumes is a hyper-volume; and
combining the hyper-volumes into the virtual volume.
6. The method of claim 1 , wherein:
the step (A) includes a step of combining, in the storage system, a first pair of the plurality of logical volumes into a first virtual volume that includes the first pair of the plurality of logical volumes, and a step of combining, in the storage system, a second pair of the plurality of logical volumes into a second virtual volume that includes the second pair of the plurality of logical volumes;
the step (B) includes a step of presenting each of the first and second virtual volumes to the processor as a single logical volume; and
the step (C) includes a step of presenting the processor with information that enables the processor to deconstruct the first and second virtual volumes, respectively, into the first and second pairs of the plurality of logical volumes.
7. The method of claim 1 , further including a step of:
(D) deconstructing, in the processor, the virtual volume into the at least two of the plurality of logical volumes.
8. The method of claim 7 , further including a step of:
(E) independently accessing, from the processor, the at least two of the plurality of logical volumes.
9. The method of claim 1 , wherein the computer system is an open computer system, and wherein the method further includes a step of:
(D) deconstructing, in the processor, the virtual volume into the at least two of the plurality of logical volumes.
10. The method of claim 1 , wherein the computer system is a multi-path computer system including X paths coupling the processor to the storage system, wherein the step (A) includes a step of combining Y of the plurality of logical volumes into the virtual volume, wherein the processor is capable of accessing each of the Y logical volumes through each of the X paths, and wherein the method further includes a step of:
generating Z unique target address identifiers corresponding to the Y logical volumes, wherein Z is less than X times Y.
11. The method of claim 10 , further including a step of accessing, from the processor, each of the Y logical volumes through each of the X paths.
12. The method of claim 4 , wherein the step (A) includes steps of:
subdividing the at least one storage device to form each of the at least two of the plurality of logical volumes, so that each of the at least two of the plurality of logical volumes is a hyper-volume; and
combining the hyper-volumes into the virtual volume.
13. The method of claim 1 , further including a step of merging, in the processor, the plurality of presentations of the virtual volume over the plurality of paths to form a single representation of the virtual volume in the processor.
14. The method of claim 13 , further including a step of:
(D) deconstructing, in the processor, the single representation of the virtual volume into the at least two of the plurality of logical volumes.
15. A storage system for use in a computer system including a processor coupled to the storage system, the storage system comprising:
at least one storage device to store a plurality of logical volumes; and
a controller to combine at least two of the plurality of logical volumes into a virtual volume that includes the at least two of the plurality of logical volumes, to present the virtual volume to the processor as a single logical volume, and to further present the processor with information that enables the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
16. The storage system of claim 15 , wherein the storage system includes a plurality of ports for use in a multi-path computer system including a plurality of paths coupling the processor to the storage system, and wherein the controller presents the virtual volume to the processor over each of the plurality of ports.
17. The storage system of claim 15 , wherein the storage system includes at least three ports for use in a multi-path computer system including at least three paths coupling the processor to the storage system, and wherein the controller presents the virtual volume to the processor over each of the three ports.
18. The storage system of claim 15 , further including a cache, and wherein the controller combines, in the cache, the at least two of the plurality of logical volumes into the virtual volume.
19. The storage system of claim 15 , wherein the at least one storage device includes a plurality of storage devices, wherein the storage system further includes means for subdividing one of the plurality of storage devices to form each of the at least two of the plurality of logical volumes, so that each of the at least two of the plurality of logical volumes is a hyper-volume, and wherein the controller combines the hyper-volumes into the virtual volume.
20. The storage system of claim 15 , wherein the controller includes:
means for combining a first pair of the plurality of logical volumes into a first virtual volume that includes the first pair of the plurality of logical volumes, and for combining a second pair of the plurality of logical volumes into a second virtual volume that includes the second pair of the plurality of logical volumes;
means for presenting each of the first and second virtual volumes to the processor as a single logical volume; and
means for presenting the processor with information that enables the processor to deconstruct the first and second virtual volumes, respectively, into the first and second pairs of the plurality of logical volumes.
21. The storage system of claim 15 , in combination with the processor to form the computer system, wherein the processor includes means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
22. The combination of claim 21 , wherein the processor includes means for independently accessing the at least two of the plurality of logical volumes.
23. The combination of claim 21 , wherein the computer system is an open computer system.
24. The storage system of claim 15 , in combination with the processor to form the computer system, wherein the computer system is a multi-path computer system including X paths coupling the processor to the storage system, wherein the controller combines Y of the plurality of logical volumes into the virtual volume, wherein the processor is capable of accessing each of the Y logical volumes through each of the X paths, and wherein the processor generates Z unique target address identifiers corresponding to the Y logical volumes, wherein Z is less than X times Y.
25. The combination of claim 24 , wherein the processor includes means for accessing each of the Y logical volumes through each of the X paths.
26. The storage system of claim 18 , wherein the at least one storage device includes a plurality of storage devices, wherein the storage system further includes means for subdividing one of the plurality of storage devices to form each of the at least two of the plurality of logical volumes, so that each of the at least two of the plurality of logical volumes is a hyper-volume, and wherein the controller combines the hyper-volumes into the virtual volume.
27. The storage system of claim 16 , in combination with the processor to form the computer system, wherein the processor includes means for merging the plurality of presentations of the virtual volume received over the plurality of paths to form a single representation of the virtual volume in the processor.
28. The combination of claim 27 , wherein the processor further includes means for deconstructing the single representation of the virtual volume into the at least two of the plurality of logical volumes.
29. The storage system of claim 15 , wherein the at least one storage device is a disk drive.
30. A host computer for use in a computer system including a storage system coupled to the host computer, wherein the storage system includes at least one storage device to store a plurality of logical volumes, wherein the storage system combines at least two of the plurality of logical volumes into a virtual volume and presents the virtual volume to the processor as a single logical volume, the host computer comprising:
a processor; and
means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
31. The host computer of claim 30 , further including means for independently accessing the at least two of the plurality of logical volumes.
32. The host computer of claim 30 , wherein the computer system is an open computer system.
33. The host computer of claim 30 , wherein the computer system is a multi-path computer system including X paths coupling the host computer to the storage system, wherein the storage system combines Y of the plurality of logical volumes into the virtual volume, wherein the host computer is capable of accessing each of the Y logical volumes through each of the X paths, and wherein the host computer includes means for generating Z unique target address identifiers corresponding to the Y logical volumes, wherein Z is less than X times Y.
34. The host computer of claim 33 , further including means for accessing each of the Y logical volumes through each of the X paths.
35. The host computer of claim 30 , wherein the computer system is a multi-path computer system including a plurality of paths coupling the host computer to the storage system, wherein the host computer includes a plurality of ports for respectively coupling to the plurality of paths, and wherein the host computer receives a presentation of the virtual volume from the storage system over each of the plurality of ports.
36. The host computer of claim 35 , further including means for merging the plurality of presentations of the virtual volume received over the plurality of ports to form a single representation of the virtual volume.
37. The host computer of claim 36 , wherein the means for deconstructing the virtual volume operates upon the single representation of the virtual volume.
38. The host computer of claim 30 , wherein the computer system is a multi-path computer system including at least three paths coupling the host computer to the storage system, wherein the host computer includes at least three ports for respectively coupling to the at least three paths, and wherein the host computer receives a presentation of the virtual volume from the storage system over each of the three ports.
39. The host computer of claim 30 , wherein the storage system presents the host computer with information relating to a structure of the virtual volume, and wherein the means for deconstructing the virtual volume uses the information relating to the structure of the virtual volume to determine the manner in which the virtual volume is to be deconstructed.
40. A multi-path computer system comprising:
a processor;
a storage system including at least one storage device to store a plurality of logical volumes, the plurality of logical volumes including at least Y logical volumes; and
a plurality of paths coupling the processor to the storage system, the plurality of paths including X paths coupling the processor to the storage system;
wherein the processor is capable of accessing each of the Y logical volumes through each of the X paths, and wherein the processor includes Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
41. The multi-path computer system of claim 40 , wherein the storage system includes a controller to combine at least two of the Y logical volumes into a virtual volume, to present the virtual volume to the processor as a single logical volume, and to further present the processor with information to enable the processor to deconstruct the virtual volume into the at least two of the plurality of logical volumes.
42. The multi-path computer system of claim 41 , wherein the controller presents the virtual volume to the processor over each of the X paths.
43. The multi-path computer system of claim 42 , wherein the processor includes means for merging the plurality of presentations of the virtual volume received over the X paths to form a single representation of the virtual volume in the processor.
44. The multi-path computer system of claim 41 , wherein the processor includes means for independently accessing the at least two of the plurality of logical volumes.
45. The multi-path computer system of claim 40 , wherein the computer system is an open computer system.
46. The multi-path computer system of claim 41 , wherein the processor includes means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
47. The multi-path computer system of claim 40 , wherein the at least one storage device is a disk drive.
48. A host computer for use in a multi-path computer system including a storage system having at least one storage device to store a plurality of logical volumes, the plurality of logical volumes including at least Y logical volumes, the multi-path computer system further including X paths coupling the host computer to the storage system, the host computer comprising:
a processor capable of accessing each of the Y logical volumes through each of the X paths, the processor including Z unique target address identifiers identifying the Y logical volumes, wherein Z is less than X times Y.
49. The host computer of claim 48 , wherein the storage system includes a controller that combines at least two of the Y logical volumes into a virtual volume, presents the virtual volume to the processor as a single logical volume, and presents the processor with information relating to a structure of the single logical volume, and wherein the processor includes means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
50. The host computer of claim 49 , wherein the controller presents the virtual volume to the processor over each of the X paths, and wherein the processor includes means for merging the plurality of presentations of the virtual volume received over the X paths to form a single representation of the virtual volume in the processor.
51. The host computer of claim 49 , wherein the processor includes means for independently accessing the at least two of the plurality of logical volumes.
52. The host computer of claim 49, wherein the processor includes means for deconstructing the virtual volume into the at least two of the plurality of logical volumes.
53. The host computer of claim 48, wherein the host computer is an open computer system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/774,299 US20020019909A1 (en) | 1998-06-30 | 2001-01-30 | Method and apparatus for managing virtual storage devices in a storage system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/107,617 US6216202B1 (en) | 1998-06-30 | 1998-06-30 | Method and apparatus for managing virtual storage devices in a storage system |
US09/774,299 US20020019909A1 (en) | 1998-06-30 | 2001-01-30 | Method and apparatus for managing virtual storage devices in a storage system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/107,617 Continuation US6216202B1 (en) | 1998-06-30 | 1998-06-30 | Method and apparatus for managing virtual storage devices in a storage system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020019909A1 (en) | 2002-02-14 |
Family
ID=22317511
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/107,617 Expired - Lifetime US6216202B1 (en) | 1998-06-30 | 1998-06-30 | Method and apparatus for managing virtual storage devices in a storage system |
US09/774,299 Abandoned US20020019909A1 (en) | 1998-06-30 | 2001-01-30 | Method and apparatus for managing virtual storage devices in a storage system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/107,617 Expired - Lifetime US6216202B1 (en) | 1998-06-30 | 1998-06-30 | Method and apparatus for managing virtual storage devices in a storage system |
Country Status (1)
Country | Link |
---|---|
US (2) | US6216202B1 (en) |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6542909B1 (en) * | 1998-06-30 | 2003-04-01 | Emc Corporation | System for determining mapping of logical objects in a computer system |
US6567811B1 (en) * | 1999-07-15 | 2003-05-20 | International Business Machines Corporation | Method and system to merge volume groups on a UNIX-based computer system |
US20040220960A1 (en) * | 2003-04-30 | 2004-11-04 | Oracle International Corporation | Determining a mapping of an object to storage layer components |
US20050071559A1 (en) * | 2003-09-29 | 2005-03-31 | Keishi Tamura | Storage system and storage controller |
US6889309B1 (en) * | 2002-04-15 | 2005-05-03 | Emc Corporation | Method and apparatus for implementing an enterprise virtual storage system |
JP2006040026A (en) * | 2004-07-28 | 2006-02-09 | Hitachi Ltd | Load balancing computer system, route setting program and method thereof |
US7127545B1 (en) | 2003-11-19 | 2006-10-24 | Veritas Operating Corporation | System and method for dynamically loadable storage device I/O policy modules |
JP2006293459A (en) * | 2005-04-06 | 2006-10-26 | Hitachi Ltd | Load balancing computer system, route setting program and method thereof |
US20070198602A1 (en) * | 2005-12-19 | 2007-08-23 | David Ngo | Systems and methods for resynchronizing information |
US20070198722A1 (en) * | 2005-12-19 | 2007-08-23 | Rajiv Kottomtharayil | Systems and methods for granular resource management in a storage network |
US20080005468A1 (en) * | 2006-05-08 | 2008-01-03 | Sorin Faibish | Storage array virtualization using a storage block mapping protocol client and server |
CN100375040C (en) * | 2002-07-30 | 2008-03-12 | 维瑞泰斯操作公司 | Storage management bridges |
US7383294B1 (en) | 1998-06-30 | 2008-06-03 | Emc Corporation | System for determining the mapping of logical objects in a data storage system |
US20080147878A1 (en) * | 2006-12-15 | 2008-06-19 | Rajiv Kottomtharayil | System and methods for granular resource management in a storage network |
US20080155316A1 (en) * | 2006-10-04 | 2008-06-26 | Sitaram Pawar | Automatic Media Error Correction In A File Server |
US20110010518A1 (en) * | 2005-12-19 | 2011-01-13 | Srinivas Kavuri | Systems and Methods for Migrating Components in a Hierarchical Storage Network |
US20110238621A1 (en) * | 2010-03-29 | 2011-09-29 | Commvault Systems, Inc. | Systems and methods for selective data replication |
US8463751B2 (en) | 2005-12-19 | 2013-06-11 | Commvault Systems, Inc. | Systems and methods for performing replication copy storage operations |
US8489656B2 (en) | 2010-05-28 | 2013-07-16 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US8504515B2 (en) | 2010-03-30 | 2013-08-06 | Commvault Systems, Inc. | Stubbing systems and methods in a data replication environment |
US8656218B2 (en) | 2005-12-19 | 2014-02-18 | Commvault Systems, Inc. | Memory configuration for data replication system including identification of a subsequent log entry by a destination computer |
US8666942B2 (en) | 2008-12-10 | 2014-03-04 | Commvault Systems, Inc. | Systems and methods for managing snapshots of replicated databases |
US8706993B2 (en) | 2004-04-30 | 2014-04-22 | Commvault Systems, Inc. | Systems and methods for storage modeling and costing |
US8725698B2 (en) | 2010-03-30 | 2014-05-13 | Commvault Systems, Inc. | Stub file prioritization in a data replication system |
US8726242B2 (en) | 2006-07-27 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
US8725980B2 (en) | 2004-04-30 | 2014-05-13 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
US8793221B2 (en) | 2005-12-19 | 2014-07-29 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US9495382B2 (en) | 2008-12-10 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods for performing discrete data replication |
US10176036B2 (en) | 2015-10-29 | 2019-01-08 | Commvault Systems, Inc. | Monitoring, diagnosing, and repairing a management database in a data storage management system |
US10275320B2 (en) | 2015-06-26 | 2019-04-30 | Commvault Systems, Inc. | Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation |
US10379988B2 (en) | 2012-12-21 | 2019-08-13 | Commvault Systems, Inc. | Systems and methods for performance monitoring |
US10831591B2 (en) | 2018-01-11 | 2020-11-10 | Commvault Systems, Inc. | Remedial action based on maintaining process awareness in data storage management |
US11042318B2 (en) | 2019-07-29 | 2021-06-22 | Commvault Systems, Inc. | Block-level data replication |
US11449253B2 (en) | 2018-12-14 | 2022-09-20 | Commvault Systems, Inc. | Disk usage growth prediction system |
US11809285B2 (en) | 2022-02-09 | 2023-11-07 | Commvault Systems, Inc. | Protecting a management database of a data storage management system to meet a recovery point objective (RPO) |
US12056018B2 (en) | 2022-06-17 | 2024-08-06 | Commvault Systems, Inc. | Systems and methods for enforcing a recovery point objective (RPO) for a production database without generating secondary copies of the production database |
Families Citing this family (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000512416A (en) * | 1997-03-12 | 2000-09-19 | ストーリッジ テクノロジー コーポレーション | Virtual tape data storage subsystem attached to network |
US6658526B2 (en) * | 1997-03-12 | 2003-12-02 | Storage Technology Corporation | Network attached virtual data storage subsystem |
US6393540B1 (en) | 1998-06-30 | 2002-05-21 | Emc Corporation | Moving a logical object from a set of source locations to a set of destination locations using a single command |
US6883063B2 (en) | 1998-06-30 | 2005-04-19 | Emc Corporation | Method and apparatus for initializing logical objects in a data storage system |
US6591356B2 (en) * | 1998-07-17 | 2003-07-08 | Roxio, Inc. | Cluster buster |
US6457139B1 (en) * | 1998-12-30 | 2002-09-24 | Emc Corporation | Method and apparatus for providing a host computer with information relating to the mapping of logical volumes within an intelligent storage system |
US6449652B1 (en) * | 1999-01-04 | 2002-09-10 | Emc Corporation | Method and apparatus for providing secure access to a computer system resource |
US6931440B1 (en) * | 1999-04-21 | 2005-08-16 | Emc Corporation | Method and apparatus for dynamically determining whether access to a resource connected to a computer has changed and determining how to access the resource with a new identifier |
US7483967B2 (en) * | 1999-09-01 | 2009-01-27 | Ximeta Technology, Inc. | Scalable server architecture based on asymmetric 3-way TCP |
US6493729B2 (en) * | 1999-09-23 | 2002-12-10 | International Business Machines Corporation | Method and system to administer mirrored filesystems |
US6983330B1 (en) * | 1999-12-29 | 2006-01-03 | Emc Corporation | Method and apparatus for using multiple paths for processing out of band commands |
US7792923B2 (en) * | 2000-10-13 | 2010-09-07 | Zhe Khi Pak | Disk system adapted to be directly attached to network |
US6751719B1 (en) * | 2000-10-26 | 2004-06-15 | International Business Machines Corporation | Method and an apparatus to dynamically order features and to resolve conflicts in a multiple-layer logical volume management environment |
JP4105398B2 (en) * | 2001-02-28 | 2008-06-25 | 株式会社日立製作所 | Information processing system |
JP2002288108A (en) * | 2001-03-28 | 2002-10-04 | Hitachi Ltd | External storage device |
EP1588452A2 (en) * | 2001-07-16 | 2005-10-26 | Kim, Han Gyoo, Hongik university computer engineering | Scheme for dynamically connecting i/o devices through network |
US20050149682A1 (en) * | 2001-10-09 | 2005-07-07 | Han-Gyoo Kim | Virtual multiple removable media jukebox |
US7007152B2 (en) * | 2001-12-28 | 2006-02-28 | Storage Technology Corporation | Volume translation apparatus and method |
US7076690B1 (en) | 2002-04-15 | 2006-07-11 | Emc Corporation | Method and apparatus for managing access to volumes of storage |
JP2003316521A (en) * | 2002-04-23 | 2003-11-07 | Hitachi Ltd | Storage controller |
US7043614B2 (en) * | 2002-07-11 | 2006-05-09 | Veritas Operating Corporation | Storage services and systems |
US7707151B1 (en) | 2002-08-02 | 2010-04-27 | Emc Corporation | Method and apparatus for migrating data |
KR100532842B1 (en) * | 2002-08-17 | 2005-12-05 | 삼성전자주식회사 | Image recording/reproducing apparatus capable of reducing waste of hard disk drive space |
JP4318902B2 (en) * | 2002-10-15 | 2009-08-26 | 株式会社日立製作所 | Storage device system control method, storage device system, and program |
US20040078521A1 (en) * | 2002-10-17 | 2004-04-22 | International Business Machines Corporation | Method, apparatus and computer program product for emulating an iSCSI device on a logical volume manager |
US7546482B2 (en) * | 2002-10-28 | 2009-06-09 | Emc Corporation | Method and apparatus for monitoring the storage of data in a computer system |
US20040088284A1 (en) * | 2002-10-31 | 2004-05-06 | John Gourlay | Extraction of information as to a volume group and logical units |
JP4139675B2 (en) * | 2002-11-14 | 2008-08-27 | 株式会社日立製作所 | Virtual volume storage area allocation method, apparatus and program thereof |
US7263593B2 (en) | 2002-11-25 | 2007-08-28 | Hitachi, Ltd. | Virtualization controller and data transfer control method |
US7376764B1 (en) | 2002-12-10 | 2008-05-20 | Emc Corporation | Method and apparatus for migrating data in a computer system |
JP2004259079A (en) | 2003-02-27 | 2004-09-16 | Hitachi Ltd | Data processing system |
US7080221B1 (en) | 2003-04-23 | 2006-07-18 | Emc Corporation | Method and apparatus for managing migration of data in a clustered computer system environment |
US7093088B1 (en) | 2003-04-23 | 2006-08-15 | Emc Corporation | Method and apparatus for undoing a data migration in a computer system |
US7263590B1 (en) | 2003-04-23 | 2007-08-28 | Emc Corporation | Method and apparatus for migrating data in a computer system |
US7805583B1 (en) | 2003-04-23 | 2010-09-28 | Emc Corporation | Method and apparatus for migrating data in a clustered computer system environment |
US7415591B1 (en) | 2003-04-23 | 2008-08-19 | Emc Corporation | Method and apparatus for migrating data and automatically provisioning a target for the migration |
JP4386694B2 (en) * | 2003-09-16 | 2009-12-16 | 株式会社日立製作所 | Storage system and storage control device |
US7219201B2 (en) * | 2003-09-17 | 2007-05-15 | Hitachi, Ltd. | Remote storage disk control device and method for controlling the same |
US7457880B1 (en) * | 2003-09-26 | 2008-11-25 | Ximeta Technology, Inc. | System using a single host to receive and redirect all file access commands for shared data storage device from other hosts on a network |
AU2004286660B2 (en) * | 2003-10-27 | 2011-06-16 | Hitachi Vantara, LLC | Policy-based management of a redundant array of independent nodes |
JP4451118B2 (en) * | 2003-11-18 | 2010-04-14 | 株式会社日立製作所 | Information processing system, management apparatus, logical device selection method, and program |
JP2005165702A (en) * | 2003-12-03 | 2005-06-23 | Hitachi Ltd | Device connection method for cluster storage |
US7664836B2 (en) * | 2004-02-17 | 2010-02-16 | Zhe Khi Pak | Device and method for booting an operation system for a computer from a passive directly attached network device |
US20050193017A1 (en) * | 2004-02-19 | 2005-09-01 | Han-Gyoo Kim | Portable multimedia player/recorder that accesses data contents from and writes to networked device |
US20060069884A1 (en) * | 2004-02-27 | 2006-03-30 | Han-Gyoo Kim | Universal network to device bridge chip that enables network directly attached device |
US7149859B2 (en) * | 2004-03-01 | 2006-12-12 | Hitachi, Ltd. | Method and apparatus for data migration with the efficient use of old assets |
JP4672282B2 (en) * | 2004-05-07 | 2011-04-20 | 株式会社日立製作所 | Information processing apparatus and control method of information processing apparatus |
US7131027B2 (en) | 2004-07-09 | 2006-10-31 | Hitachi, Ltd. | Method and apparatus for disk array based I/O routing and multi-layered external storage linkage |
US7746900B2 (en) * | 2004-07-22 | 2010-06-29 | Zhe Khi Pak | Low-level communication layers and device employing same |
US7328287B1 (en) | 2004-07-26 | 2008-02-05 | Symantec Operating Corporation | System and method for managing I/O access policies in a storage environment employing asymmetric distributed block virtualization |
US7657581B2 (en) * | 2004-07-29 | 2010-02-02 | Archivas, Inc. | Metadata management for fixed content distributed data storage |
US7278000B2 (en) * | 2004-08-03 | 2007-10-02 | Hitachi, Ltd. | Data migration with worm guarantee |
US7860943B2 (en) * | 2004-08-23 | 2010-12-28 | Zhe Khi Pak | Enhanced network direct attached storage controller |
US20060067356A1 (en) * | 2004-08-23 | 2006-03-30 | Han-Gyoo Kim | Method and apparatus for network direct attached storage |
US7849257B1 (en) | 2005-01-06 | 2010-12-07 | Zhe Khi Pak | Method and apparatus for storing and retrieving data |
JP2006195712A (en) * | 2005-01-13 | 2006-07-27 | Hitachi Ltd | Storage control device, logical volume management method, and storage device |
US8996841B2 (en) * | 2008-02-06 | 2015-03-31 | Compellent Technologies | Hypervolume data storage object and method of data storage |
JP2009223442A (en) * | 2008-03-13 | 2009-10-01 | Hitachi Ltd | Storage system |
US8706959B1 (en) * | 2009-06-30 | 2014-04-22 | Emc Corporation | Virtual storage machine |
US8521686B2 (en) * | 2009-07-13 | 2013-08-27 | Vmware, Inc. | Concurrency control in a file system shared by application hosts |
US8683120B2 (en) * | 2011-03-28 | 2014-03-25 | Hitachi, Ltd. | Method and apparatus to allocate area to virtual volume |
US10152530B1 (en) | 2013-07-24 | 2018-12-11 | Symantec Corporation | Determining a recommended control point for a file system |
US10474372B1 (en) * | 2014-11-07 | 2019-11-12 | Amazon Technologies, Inc. | Optimizing geometry based on workload characteristics |
US9887978B2 (en) | 2015-06-23 | 2018-02-06 | Veritas Technologies Llc | System and method for centralized configuration and authentication |
US10757104B1 (en) | 2015-06-29 | 2020-08-25 | Veritas Technologies Llc | System and method for authentication in a computing system |
CN115617256A (en) * | 2021-07-12 | 2023-01-17 | 戴尔产品有限公司 | Moving virtual volumes in storage nodes of a storage cluster based on a determined likelihood of specifying a virtual machine boot condition |
CN114968128A (en) * | 2022-07-28 | 2022-08-30 | 云宏信息科技股份有限公司 | Qcow 2-based virtual disk mapping method, system and medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5129088A (en) * | 1987-11-30 | 1992-07-07 | International Business Machines Corporation | Data processing method to create virtual disks from non-contiguous groups of logically contiguous addressable blocks of direct access storage device |
US5148432A (en) * | 1988-11-14 | 1992-09-15 | Array Technology Corporation | Arrayed disk drive system and method |
US5379391A (en) * | 1991-03-01 | 1995-01-03 | Storage Technology Corporation | Method and apparatus to access data records in a cache memory by multiple virtual addresses |
JP3686457B2 (en) * | 1995-08-31 | 2005-08-24 | 株式会社日立製作所 | Disk array subsystem |
US5819310A (en) * | 1996-05-24 | 1998-10-06 | Emc Corporation | Method and apparatus for reading data from mirrored logical volumes on physical disk drives |
US5897661A (en) * | 1997-02-25 | 1999-04-27 | International Business Machines Corporation | Logical volume manager and method having enhanced update capability with dynamic allocation of storage and minimal storage of metadata information |
US5983316A (en) * | 1997-05-29 | 1999-11-09 | Hewlett-Packard Company | Computing system having a system node that utilizes both a logical volume manager and a resource monitor for managing a storage pool
US5973690A (en) * | 1997-11-07 | 1999-10-26 | Emc Corporation | Front end/back end device visualization and manipulation |
- 1998-06-30: US US09/107,617 patent/US6216202B1/en not_active Expired - Lifetime
- 2001-01-30: US US09/774,299 patent/US20020019909A1/en not_active Abandoned
Cited By (98)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030130986A1 (en) * | 1998-06-30 | 2003-07-10 | Tamer Philip E. | System for determining the mapping of logical objects in a data storage system |
US6542909B1 (en) * | 1998-06-30 | 2003-04-01 | Emc Corporation | System for determining mapping of logical objects in a computer system |
US6938059B2 (en) | 1998-06-30 | 2005-08-30 | Emc Corporation | System for determining the mapping of logical objects in a data storage system |
US7383294B1 (en) | 1998-06-30 | 2008-06-03 | Emc Corporation | System for determining the mapping of logical objects in a data storage system |
US6567811B1 (en) * | 1999-07-15 | 2003-05-20 | International Business Machines Corporation | Method and system to merge volume groups on a UNIX-based computer system |
US6889309B1 (en) * | 2002-04-15 | 2005-05-03 | Emc Corporation | Method and apparatus for implementing an enterprise virtual storage system |
CN100375040C (en) * | 2002-07-30 | 2008-03-12 | 维瑞泰斯操作公司 | Storage management bridges |
US20040220960A1 (en) * | 2003-04-30 | 2004-11-04 | Oracle International Corporation | Determining a mapping of an object to storage layer components |
US7577675B2 (en) * | 2003-04-30 | 2009-08-18 | Oracle International Corporation | Determining a mapping of an object to storage layer components |
US7441095B2 (en) | 2003-09-29 | 2008-10-21 | Hitachi, Ltd. | Storage system and storage controller |
US20060242363A1 (en) * | 2003-09-29 | 2006-10-26 | Keishi Tamura | Storage system and storage controller |
US20050071559A1 (en) * | 2003-09-29 | 2005-03-31 | Keishi Tamura | Storage system and storage controller |
FR2860312A1 (en) * | 2003-09-29 | 2005-04-01 | Hitachi Ltd | Storage system and storage controller
US7493466B2 (en) | 2003-09-29 | 2009-02-17 | Hitachi, Ltd. | Virtualization system for virtualizing disks drives of a disk array system |
US7127545B1 (en) | 2003-11-19 | 2006-10-24 | Veritas Operating Corporation | System and method for dynamically loadable storage device I/O policy modules |
US7694063B1 (en) | 2003-11-19 | 2010-04-06 | Symantec Operating Corporation | System and method for dynamically loadable storage device I/O policy modules |
US10282113B2 (en) | 2004-04-30 | 2019-05-07 | Commvault Systems, Inc. | Systems and methods for providing a unified view of primary and secondary storage resources |
US9164692B2 (en) | 2004-04-30 | 2015-10-20 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
US9111220B2 (en) | 2004-04-30 | 2015-08-18 | Commvault Systems, Inc. | Systems and methods for storage modeling and costing |
US8725980B2 (en) | 2004-04-30 | 2014-05-13 | Commvault Systems, Inc. | System and method for allocation of organizational resources |
US9405471B2 (en) | 2004-04-30 | 2016-08-02 | Commvault Systems, Inc. | Systems and methods for storage modeling and costing |
US10901615B2 (en) | 2004-04-30 | 2021-01-26 | Commvault Systems, Inc. | Systems and methods for storage modeling and costing |
US11287974B2 (en) | 2004-04-30 | 2022-03-29 | Commvault Systems, Inc. | Systems and methods for storage modeling and costing |
US8706993B2 (en) | 2004-04-30 | 2014-04-22 | Commvault Systems, Inc. | Systems and methods for storage modeling and costing |
JP2006040026A (en) * | 2004-07-28 | 2006-02-09 | Hitachi Ltd | Load balancing computer system, route setting program and method thereof |
JP4643198B2 (en) * | 2004-07-28 | 2011-03-02 | 株式会社日立製作所 | Load balancing computer system, route setting program and method thereof |
JP4609848B2 (en) * | 2005-04-06 | 2011-01-12 | 株式会社日立製作所 | Load balancing computer system, route setting program and method thereof |
JP2006293459A (en) * | 2005-04-06 | 2006-10-26 | Hitachi Ltd | Load balancing computer system, route setting program and method thereof |
US8725694B2 (en) | 2005-12-19 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for performing replication copy storage operations |
US9002799B2 (en) | 2005-12-19 | 2015-04-07 | Commvault Systems, Inc. | Systems and methods for resynchronizing information |
US20110010518A1 (en) * | 2005-12-19 | 2011-01-13 | Srinivas Kavuri | Systems and Methods for Migrating Components in a Hierarchical Storage Network |
US20070198602A1 (en) * | 2005-12-19 | 2007-08-23 | David Ngo | Systems and methods for resynchronizing information |
US20100312979A1 (en) * | 2005-12-19 | 2010-12-09 | Srinivas Kavuri | Systems and Methods for Migrating Components in a Hierarchical Storage Network |
US11132139B2 (en) | 2005-12-19 | 2021-09-28 | Commvault Systems, Inc. | Systems and methods for migrating components in a hierarchical storage network |
US8463751B2 (en) | 2005-12-19 | 2013-06-11 | Commvault Systems, Inc. | Systems and methods for performing replication copy storage operations |
US20070198797A1 (en) * | 2005-12-19 | 2007-08-23 | Srinivas Kavuri | Systems and methods for migrating components in a hierarchical storage network |
US20070198722A1 (en) * | 2005-12-19 | 2007-08-23 | Rajiv Kottomtharayil | Systems and methods for granular resource management in a storage network |
US10133507B2 (en) | 2005-12-19 | 2018-11-20 | Commvault Systems, Inc | Systems and methods for migrating components in a hierarchical storage network |
US9971657B2 (en) | 2005-12-19 | 2018-05-15 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US8572330B2 (en) | 2005-12-19 | 2013-10-29 | Commvault Systems, Inc. | Systems and methods for granular resource management in a storage network |
US9916111B2 (en) | 2005-12-19 | 2018-03-13 | Commvault Systems, Inc. | Systems and methods for migrating components in a hierarchical storage network |
US8656218B2 (en) | 2005-12-19 | 2014-02-18 | Commvault Systems, Inc. | Memory configuration for data replication system including identification of a subsequent log entry by a destination computer |
US8655850B2 (en) | 2005-12-19 | 2014-02-18 | Commvault Systems, Inc. | Systems and methods for resynchronizing information |
US8661216B2 (en) | 2005-12-19 | 2014-02-25 | Commvault Systems, Inc. | Systems and methods for migrating components in a hierarchical storage network |
US9639294B2 (en) | 2005-12-19 | 2017-05-02 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US20100153338A1 (en) * | 2005-12-19 | 2010-06-17 | David Ngo | Systems and Methods for Resynchronizing Information |
US9448892B2 (en) | 2005-12-19 | 2016-09-20 | Commvault Systems, Inc. | Systems and methods for migrating components in a hierarchical storage network |
US20070260834A1 (en) * | 2005-12-19 | 2007-11-08 | Srinivas Kavuri | Systems and methods for migrating components in a hierarchical storage network |
US9313143B2 (en) | 2005-12-19 | 2016-04-12 | Commvault Systems, Inc. | Systems and methods for granular resource management in a storage network |
US9298382B2 (en) | 2005-12-19 | 2016-03-29 | Commvault Systems, Inc. | Systems and methods for performing replication copy storage operations |
US9208210B2 (en) | 2005-12-19 | 2015-12-08 | Commvault Systems, Inc. | Rolling cache configuration for a data replication system |
US8793221B2 (en) | 2005-12-19 | 2014-07-29 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US9152685B2 (en) | 2005-12-19 | 2015-10-06 | Commvault Systems, Inc. | Systems and methods for migrating components in a hierarchical storage network |
US8935210B2 (en) | 2005-12-19 | 2015-01-13 | Commvault Systems, Inc. | Systems and methods for performing replication copy storage operations |
US9020898B2 (en) | 2005-12-19 | 2015-04-28 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7653832B2 (en) * | 2006-05-08 | 2010-01-26 | Emc Corporation | Storage array virtualization using a storage block mapping protocol client and server |
US20080005468A1 (en) * | 2006-05-08 | 2008-01-03 | Sorin Faibish | Storage array virtualization using a storage block mapping protocol client and server |
US9003374B2 (en) | 2006-07-27 | 2015-04-07 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
US8726242B2 (en) | 2006-07-27 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
US7890796B2 (en) * | 2006-10-04 | 2011-02-15 | Emc Corporation | Automatic media error correction in a file server |
US20080155316A1 (en) * | 2006-10-04 | 2008-06-26 | Sitaram Pawar | Automatic Media Error Correction In A File Server |
US20110004683A1 (en) * | 2006-12-15 | 2011-01-06 | Rajiv Kottomtharayil | Systems and Methods for Granular Resource Management in a Storage Network |
US20080147878A1 (en) * | 2006-12-15 | 2008-06-19 | Rajiv Kottomtharayil | System and methods for granular resource management in a storage network |
US9047357B2 (en) | 2008-12-10 | 2015-06-02 | Commvault Systems, Inc. | Systems and methods for managing replicated database data in dirty and clean shutdown states |
US8666942B2 (en) | 2008-12-10 | 2014-03-04 | Commvault Systems, Inc. | Systems and methods for managing snapshots of replicated databases |
US9396244B2 (en) | 2008-12-10 | 2016-07-19 | Commvault Systems, Inc. | Systems and methods for managing replicated database data |
US9495382B2 (en) | 2008-12-10 | 2016-11-15 | Commvault Systems, Inc. | Systems and methods for performing discrete data replication |
US20110238621A1 (en) * | 2010-03-29 | 2011-09-29 | Commvault Systems, Inc. | Systems and methods for selective data replication |
US8504517B2 (en) | 2010-03-29 | 2013-08-06 | Commvault Systems, Inc. | Systems and methods for selective data replication |
US8868494B2 (en) | 2010-03-29 | 2014-10-21 | Commvault Systems, Inc. | Systems and methods for selective data replication |
US8725698B2 (en) | 2010-03-30 | 2014-05-13 | Commvault Systems, Inc. | Stub file prioritization in a data replication system |
US9483511B2 (en) | 2010-03-30 | 2016-11-01 | Commvault Systems, Inc. | Stubbing systems and methods in a data replication environment |
US9002785B2 (en) | 2010-03-30 | 2015-04-07 | Commvault Systems, Inc. | Stubbing systems and methods in a data replication environment |
US8504515B2 (en) | 2010-03-30 | 2013-08-06 | Commvault Systems, Inc. | Stubbing systems and methods in a data replication environment |
US8589347B2 (en) | 2010-05-28 | 2013-11-19 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US8572038B2 (en) | 2010-05-28 | 2013-10-29 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US8745105B2 (en) | 2010-05-28 | 2014-06-03 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US8489656B2 (en) | 2010-05-28 | 2013-07-16 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US10379988B2 (en) | 2012-12-21 | 2019-08-13 | Commvault Systems, Inc. | Systems and methods for performance monitoring |
US10275320B2 (en) | 2015-06-26 | 2019-04-30 | Commvault Systems, Inc. | Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation |
US12147312B2 (en) | 2015-06-26 | 2024-11-19 | Commvault Systems, Inc. | Incrementally accumulating in-process performance data into a data stream in a secondary copy operation |
US11983077B2 (en) | 2015-06-26 | 2024-05-14 | Commvault Systems, Inc. | Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation |
US11301333B2 (en) | 2015-06-26 | 2022-04-12 | Commvault Systems, Inc. | Incrementally accumulating in-process performance data and hierarchical reporting thereof for a data stream in a secondary copy operation |
US11474896B2 (en) | 2015-10-29 | 2022-10-18 | Commvault Systems, Inc. | Monitoring, diagnosing, and repairing a management database in a data storage management system |
US10248494B2 (en) | 2015-10-29 | 2019-04-02 | Commvault Systems, Inc. | Monitoring, diagnosing, and repairing a management database in a data storage management system |
US10853162B2 (en) | 2015-10-29 | 2020-12-01 | Commvault Systems, Inc. | Monitoring, diagnosing, and repairing a management database in a data storage management system |
US10176036B2 (en) | 2015-10-29 | 2019-01-08 | Commvault Systems, Inc. | Monitoring, diagnosing, and repairing a management database in a data storage management system |
US11200110B2 (en) | 2018-01-11 | 2021-12-14 | Commvault Systems, Inc. | Remedial action based on maintaining process awareness in data storage management |
US11815993B2 (en) | 2018-01-11 | 2023-11-14 | Commvault Systems, Inc. | Remedial action based on maintaining process awareness in data storage management |
US10831591B2 (en) | 2018-01-11 | 2020-11-10 | Commvault Systems, Inc. | Remedial action based on maintaining process awareness in data storage management |
US11449253B2 (en) | 2018-12-14 | 2022-09-20 | Commvault Systems, Inc. | Disk usage growth prediction system |
US11941275B2 (en) | 2018-12-14 | 2024-03-26 | Commvault Systems, Inc. | Disk usage growth prediction system |
US11709615B2 (en) | 2019-07-29 | 2023-07-25 | Commvault Systems, Inc. | Block-level data replication |
US11042318B2 (en) | 2019-07-29 | 2021-06-22 | Commvault Systems, Inc. | Block-level data replication |
US11809285B2 (en) | 2022-02-09 | 2023-11-07 | Commvault Systems, Inc. | Protecting a management database of a data storage management system to meet a recovery point objective (RPO) |
US12045145B2 (en) | 2022-02-09 | 2024-07-23 | Commvault Systems, Inc. | Protecting a management database of a data storage management system to meet a recovery point objective (RPO) |
US12248375B2 (en) | 2022-02-09 | 2025-03-11 | Commvault Systems, Inc. | Resiliency of a data storage system by protecting its management database to meet a recovery point objective (RPO) |
US12056018B2 (en) | 2022-06-17 | 2024-08-06 | Commvault Systems, Inc. | Systems and methods for enforcing a recovery point objective (RPO) for a production database without generating secondary copies of the production database |
Also Published As
Publication number | Publication date |
---|---|
US6216202B1 (en) | 2001-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6216202B1 (en) | Method and apparatus for managing virtual storage devices in a storage system | |
US6629189B1 (en) | Method and apparatus for managing target devices in a multi-path computer system | |
US7082497B2 (en) | System and method for managing a moveable media library with library partitions | |
US20030126225A1 (en) | System and method for peripheral device virtual functionality overlay | |
US7167951B2 (en) | Intelligent controller accessed through addressable virtual space | |
US7596637B2 (en) | Storage apparatus and control method for the same, and computer program product | |
US7689803B2 (en) | System and method for communication using emulated LUN blocks in storage virtualization environments | |
US6618798B1 (en) | Method, system, program, and data structures for mapping logical units to a storage space comprises of at least one array of storage units | |
US7032070B2 (en) | Method for partial data reallocation in a storage system | |
US6272571B1 (en) | System for improving the performance of a disk storage device by reconfiguring a logical volume of data in response to the type of operations being performed | |
US20020029319A1 (en) | Logical unit mapping in a storage area network (SAN) environment | |
US6510491B1 (en) | System and method for accomplishing data storage migration between raid levels | |
US20030126395A1 (en) | System and method for partitioning a storage area network associated data library employing element addresses | |
US20030123274A1 (en) | System and method for intermediating communication with a moveable media library utilizing a plurality of partitions | |
US20020091828A1 (en) | Computer system and a method of assigning a storage device to a computer | |
US7536503B1 (en) | Methods and systems for preserving disk geometry when migrating existing data volumes | |
US8972656B1 (en) | Managing accesses to active-active mapped logical volumes | |
KR20110093998A (en) | Active-Active Failover for Direct-Connect Storage Systems | |
JP2001142648A (en) | Computer system and device assignment method | |
US6356979B1 (en) | System and method for selectively presenting logical storage units to multiple host operating systems in a networked computing system | |
US6438648B1 (en) | System apparatus and method for managing multiple host computer operating requirements in a data storage system | |
US6851023B2 (en) | Method and system for configuring RAID subsystems with block I/O commands and block I/O path | |
KR20010098429A (en) | System and method for multi-layer logical volume creation and management | |
JP2004355638A (en) | Computer system and device allocation method | |
US7406578B2 (en) | Method, apparatus and program storage device for providing virtual disk service (VDS) hints based storage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION