US20030221060A1 - Managing data in a multi-level raid storage array - Google Patents
Managing data in a multi-level raid storage array
- Publication number
- US20030221060A1 (application US10/154,870)
- Authority
- US
- United States
- Prior art keywords
- data
- level
- array
- raid
- raid level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0653—Monitoring storage devices or systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0608—Saving storage space on storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0613—Improving I/O performance in relation to throughput
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Description
- The present disclosure relates to multi-level RAID storage arrays, and more particularly, to managing data within such arrays to optimize storage efficiency and performance.
- Multi-level RAID (redundant array of independent disks) storage arrays provide better performance and storage efficiency than single level RAID storage arrays by exploiting advantages of different RAID levels combined within the same array. Multi-level RAID arrays employ two or more RAID levels such as RAID level 1 and RAID level 5 that coexist on the same set of disks within the array. Generally, different RAID levels provide different benefits of performance versus storage efficiency. For example, RAID level 1 provides low storage efficiency because disks are mirrored for data redundancy, while RAID level 5 provides higher storage efficiency by creating and storing parity information on one disk that provides redundancy for data stored on a number of disks. However, RAID level 1 provides faster performance under random data writes than RAID level 5 because RAID level 1 does not require the multiple read operations that are necessary in RAID level 5 for recreating parity information when data is being updated (i.e. written) to a disk.
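- The trade-off described above can be made concrete with a little arithmetic. The sketch below is illustrative only and is not part of the patent; it assumes an N-disk RAID 5 group and simply counts the disk operations behind a single small random write.

```python
def storage_efficiency(raid_level: int, disks: int) -> float:
    """Fraction of raw capacity available for user data."""
    if raid_level == 1:
        return 0.5                    # every block is mirrored on a second disk
    if raid_level == 5:
        return (disks - 1) / disks    # one disk's worth of parity per stripe
    raise ValueError("only RAID 1 and RAID 5 are modeled here")

def small_write_ios(raid_level: int) -> int:
    """Disk I/Os needed to service one small random host write."""
    if raid_level == 1:
        return 2                      # write the block to both mirror copies
    if raid_level == 5:
        return 4                      # read old data + old parity, write new data + new parity
    raise ValueError("only RAID 1 and RAID 5 are modeled here")

if __name__ == "__main__":
    for level in (1, 5):
        print(f"RAID {level}: efficiency={storage_efficiency(level, 8):.2f}, "
              f"I/Os per small write={small_write_ios(level)}")
```

- With eight disks, for example, RAID 5 exposes 7/8 of the raw capacity versus 1/2 for RAID 1, but it needs four disk operations per small random write instead of two, which is why RAID level 1 is treated as the higher performing level and RAID level 5 as the more storage-efficient one.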
- Multi-level RAID arrays migrate data between different RAID levels within the array to maximize the benefits of performance and storage efficiency offered by the different RAID levels. Generally, active data (i.e., data most recently written) is migrated from a lower performing RAID level to a higher performing RAID level, while inactive data (i.e., data least recently written) is migrated from a lower storage-efficient RAID level to a higher storage-efficient RAID level. Migration of data from a lower performance RAID level to a higher performance RAID level is called “promotion”. Migration of data from a higher performance RAID level to a lower performance RAID level is called “demotion”. Thus, for a multi-level RAID array employing RAID levels 1 and 5, for example, active data is promoted to RAID level 1 from RAID level 5, and inactive data is demoted from RAID level 1 to RAID level 5.
- Although data migration between RAID levels in a multi-level RAID array generally helps to optimize performance and storage efficiency of the array, past methods of migrating data have several disadvantages. One disadvantage is that promotions are performed in the foreground while the array is servicing a write request. Foreground promotions inflate write response times because a data write must wait for the full promotion process to conclude before it can be considered finished.
- A promotion (i.e., data migration) involves several operations. Assuming, for example, that a multi-level RAID array employs RAID level 1 and RAID level 5, the promotion process first requires that the block of data being updated by a data write be read from RAID level 5 (i.e., the slower performing RAID level) into memory on the array. The data write is then written to RAID level 1 (i.e., the faster performing RAID level). The size of the data block being promoted is typically much larger than the size of the original data write. Such foreground promotions, which include an extra read and write of the data block being promoted, incur a penalty that makes the service time for the data write much longer than it would be without the promotion. As mentioned in the previous paragraph, data write response times are inflated because the foreground promotion process has to wait for the extra read and write of a block of data that is typically larger than the size of the data write.
- Another disadvantage with past methods of migrating data between RAID levels in a multi-level RAID array is that background demotions are not integrated with foreground promotions. Therefore, when a higher (i.e., faster) performing RAID level (e.g., RAID level 1) runs out of space, a promotion triggers a disruptive foreground demotion process to make space available in the higher performing RAID level by demoting data to a lower performing RAID level. Under these circumstances, the promotion process described above cannot take place until a demotion process occurs.
- In an example demotion process, data is read from the higher performing RAID level 1 and merged into the block of data in memory that is to be written to the lower performing RAID level 5. The parity of the RAID level 5 block of data to be written is then computed, and the data block and the parity are written to RAID level 5. This process is called a “read, modify, write”. The size of the data block being demoted from RAID level 1 to RAID level 5 is typically much larger than the size of the original data write that instigated the demotion. Thus, the demotion process not only incurs time penalties for the “read, modify, write” process, but also incurs additional time penalties because the data block being demoted is typically larger in size than the original data write. Therefore, data write requests can result in response times that are orders of magnitude longer than would otherwise be necessary for a simple data write.
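- For readers unfamiliar with the parity arithmetic behind the "read, modify, write" sequence, the sketch below shows the XOR update a RAID level 5 controller performs when one block of a stripe is overwritten. It is a simplified illustration rather than code from the patent; the function and variable names are invented.

```python
def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Return the new parity for a RAID 5 stripe after one data block changes.

    new_parity = old_parity XOR old_data XOR new_data, which is why a small
    RAID 5 update costs two reads (old data, old parity) and two writes
    (new data, new parity) on top of moving a block that is typically larger
    than the original host write.
    """
    if not (len(old_data) == len(old_parity) == len(new_data)):
        raise ValueError("blocks in a stripe must be the same size")
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Example: three-block stripe; overwrite block 0 and recompute the parity.
blocks = [b"\x0f" * 4, b"\xf0" * 4, b"\x55" * 4]
parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))
new_block0 = b"\xaa" * 4
parity = raid5_small_write(blocks[0], parity, new_block0)
blocks[0] = new_block0
assert parity == bytes(a ^ b ^ c for a, b, c in zip(*blocks))  # parity still covers the stripe
```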
- Accordingly, the need exists for a way to manage data in a multi-level RAID storage array that overcomes the penalties associated with current data migration methods and that optimally exploits advantages in performance and storage efficiency inherent to different RAID levels combined within an array.
- A system and methods implement non-disruptive data migration processes in a multi-level RAID (redundant array of independent disks) storage array. Newly written data is initially written to a lower performing RAID level. The data migration processes promote recently written data from the lower performing RAID level to a higher performing RAID level and demote older data from the higher performing RAID level to the lower performing RAID level. The migration processes operate in the background during times when the array utilization rate is low so that there is little or no adverse impact on the speed with which foreground host I/O (input/output) requests are processed by the array.
- In one embodiment, a data migration process begins with a search for migratable data. Migratable data is found by comparing the age of data in a low performing RAID level with the age of data in a higher performing RAID level. If the search yields migratable data, the process continues by checking the utilization rate of the array. If the utilization rate is low, indicating the array is not very busy, then the migratable data is migrated to appropriate RAID levels within the array. If migratable data is not found or if the utilization rate is too high, the migration process terminates and is reinitiated at a later time.
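- A minimal sketch of this control flow is given below. It is an illustration of the logic just described, not the patent's firmware: the per-block last-update timestamps, the utilization query, and the threshold value are all assumptions made for the example.

```python
UTILIZATION_THRESHOLD = 0.30   # assumed: migrate only when the array is less than 30% busy

def run_migration_pass(low_level, high_level, array_utilization):
    """One background migration pass: find migratable data, then migrate if the array is idle enough.

    `low_level` and `high_level` map block IDs to last-update timestamps;
    `array_utilization()` returns the current utilization as a 0.0-1.0 fraction.
    """
    if not low_level or not high_level:
        return "no migratable data"

    newest_low = max(low_level, key=low_level.get)     # most recently updated block in the lower RAID level
    oldest_high = min(high_level, key=high_level.get)  # least recently updated block in the higher RAID level

    # Migratable only if the lower level holds data newer than the oldest data above it.
    if low_level[newest_low] <= high_level[oldest_high]:
        return "no migratable data"

    if array_utilization() >= UTILIZATION_THRESHOLD:
        return "array busy, retry later"

    # Promote the active block and demote the stale one (swap their homes).
    high_level[newest_low] = low_level.pop(newest_low)
    low_level[oldest_high] = high_level.pop(oldest_high)
    return f"promoted {newest_low}, demoted {oldest_high}"
```

- If no migratable data is found or the array is too busy, the pass simply returns and is re-triggered later, mirroring the terminate-and-reinitiate behavior described above.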
- In another embodiment, a data promotion process determines if blocks of data in a lower performing RAID level are promotion candidates. Promotion candidates are determined by which blocks of data have been most recently updated. If a promotion candidate is found, a utilization check process is initiated to monitor the utilization rate of the array. If the utilization rate falls below a threshold level, indicating the array is not very busy, then promotion candidates are promoted from the lower performing RAID level to a higher performing RAID level.
- In still another embodiment, a data demotion process determines if blocks of data in a higher performing RAID level are demotion candidates. Demotion candidates are determined by comparing a last update time to an update time threshold, such as 24 hours. Thus, a block of data in the higher performing RAID level that has not been updated within the past 24 hours is considered a demotion candidate. If a demotion candidate is found, the utilization rate of the array is checked as previously discussed, and demotion candidates are demoted from the higher performing RAID level to a lower performing RAID level when the utilization rate indicates that the array is not busy.
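- As a concrete illustration of the threshold test just described (using the 24-hour figure from the text), a demotion-candidate check might look like the following sketch; the block-metadata layout is assumed for the example.

```python
from datetime import datetime, timedelta

UPDATE_TIME_THRESHOLD = timedelta(hours=24)   # example value from the text; tunable in practice

def find_demotion_candidates(high_level_blocks, now=None):
    """Return IDs of blocks in the higher performing RAID level not updated within the threshold."""
    now = now or datetime.now()
    return [block_id for block_id, last_update in high_level_blocks.items()
            if now - last_update > UPDATE_TIME_THRESHOLD]

# Example: one stale block and one recently written block.
now = datetime.now()
blocks = {"blk-17": now - timedelta(hours=30), "blk-42": now - timedelta(minutes=5)}
print(find_demotion_candidates(blocks, now))   # ['blk-17']
```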
- The same reference numbers are used throughout the drawings to reference like components and features.
- FIG. 1 illustrates a system environment that is suitable for managing data in a multi-level RAID storage array.
- FIG. 2 is a block diagram illustrating in greater detail, a particular embodiment of a host computer device and a multi-level RAID storage array as might be implemented in the system environment of FIG. 1.
- FIG. 3 is a block diagram illustrating in greater detail, another embodiment of a host computer device and a multi-level RAID storage array as might be implemented in the system environment of FIG. 1.
- FIG. 4 is a flow diagram illustrating an example method of migrating data between levels in a multi-level RAID storage array such as that illustrated in FIG. 2.
- FIG. 5 is a continuation of the flow diagram of FIG. 4.
- FIG. 6 is a continuation of the flow diagram of FIG. 4.
- FIG. 7 is a flow diagram illustrating an example method for promoting data from a lower performing RAID level to a higher performing RAID level in a multi-level RAID storage array such as that illustrated in FIG. 3.
- FIG. 8 is a flow diagram illustrating an example method for demoting data from a higher performing RAID level to a lower performing RAID level in a multi-level RAID storage array such as that illustrated in FIG. 3.
- A system and methods implemented within a multi-level RAID (redundant array of independent disks) storage array operate to initially write data to a lower performing RAID level within the array. In addition, data is migrated between lower and higher performing RAID levels via data migration processes that function as background processes. Benefits of the disclosed system and methods include a non-disruptive environment for servicing host I/O (input/output) requests. Array response times are significantly reduced by not allowing initial data writes to interfere with higher performing RAID levels and by migrating data between lower and higher performing RAID levels in the background when the array is less busy servicing host I/O requests.
- Exemplary System Environment for Managing Data in a Multi-Level RAID Storage Array
- FIG. 1 illustrates a system environment 100 suitable for managing data in a multi-level RAID storage array. The system 100 includes arrayed storage device 102 operatively coupled to host device(s) 104 through network 106. The network connection 106 can include, for example, a LAN (local area network), a WAN (wide area network), an intranet, the Internet, a fiber optic cable link, a direct connection, or any other suitable communication link. Host device(s) 104 can be implemented as a variety of general purpose computing devices including, for example, a personal computer (PC), a laptop computer, a server, a Web server, and other devices configured to communicate with arrayed storage device 102.
- Although embodiments of arrayed storage device 102 are disclosed herein below as multi-level RAID storage arrays, the arrayed storage device 102 is not limited in this regard. Accordingly, this disclosure is applicable to other configurations of arrayed storage components as currently exist or as might exist in the future that include different array architectures for the general purpose of fault-tolerance and performance/storage trade-offs similar to those provided by currently available RAID levels. Therefore, arrayed storage device 102 more generally refers to a plurality of storage components/devices operatively coupled in an array for the general purpose of increasing storage performance. Storage performance goals typically include mass storage, low cost per stored megabyte, high input/output performance, and high data availability through redundancy and fault tolerance. Storage components/devices operatively coupled within arrayed storage devices 102 may include devices such as magnetic disk drives, tape drives, optical read/write disk drives, solid state disks and the like. Such storage components are generally well known in the art of data storage technology.
- In addition, arrayed storage devices 102 as disclosed herein are virtual storage array devices that include a virtual memory storage feature. Thus, the virtual storage arrays 102 presently disclosed provide a layer of address mapping indirection between host 104 addresses and the actual physical addresses where host 104 data is stored within the virtual storage array 102. Address mapping indirection uses pointers that make it possible to move data around to different physical locations within the array 102 in a way that is transparent to the host 104.
- As an example of virtual memory storage in a RAID array 102, a host device 104 may store data at host address H5 which the host 104 thinks is pointing to a physical location on disk #2, sector #56, of virtual RAID storage array 102. However, the virtual RAID storage array 102 might relocate the host data to an entirely different physical location (e.g., disk #8, sector #27) within the array 102 and update a pointer (i.e., layer of address indirection) so that the pointer always points to the host data. The host device 104 will continue accessing the data at the same host address H5, but will not know that the data has actually been moved to a new physical location within the virtual RAID storage array 102.
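- This address-mapping indirection is what later allows blocks to be promoted and demoted without the host noticing. The toy sketch below illustrates such a mapping table; the class and method names are invented for the example and do not come from the patent.

```python
class VirtualBlockMap:
    """Maps stable host addresses to relocatable physical locations (disk, sector)."""

    def __init__(self):
        self._map = {}                         # host address -> (disk, sector)

    def write(self, host_addr, disk, sector):
        self._map[host_addr] = (disk, sector)

    def relocate(self, host_addr, new_disk, new_sector):
        """Move the data's physical home; the host address never changes."""
        if host_addr not in self._map:
            raise KeyError(host_addr)
        self._map[host_addr] = (new_disk, new_sector)

    def resolve(self, host_addr):
        return self._map[host_addr]

# The host keeps using address "H5" while the array moves the data.
vmap = VirtualBlockMap()
vmap.write("H5", disk=2, sector=56)
vmap.relocate("H5", new_disk=8, new_sector=27)   # e.g., after a promotion or demotion
print(vmap.resolve("H5"))                        # (8, 27)
```

- Exemplary Embodiment for Managing Data in a Multi-Level RAID Storage Array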
- FIG. 2 is a block diagram illustrating a particular embodiment of a host computer device 104 and an arrayed storage device 102 as might be implemented in the system environment 100 of FIG. 1. The arrayed storage device 102 of FIG. 1 is embodied in FIG. 2 as a virtual multi-level RAID storage array 102. Host device 104 is embodied generally as a computer such as a personal computer (PC), a laptop computer, a server, a Web server, or other computer device configured to communicate with multi-level RAID storage array 102.
- Host device 104 typically includes a processor 200, a volatile memory 202 (i.e., RAM), and a nonvolatile memory 204 (e.g., ROM, hard disk, floppy disk, CD-ROM, etc.). Nonvolatile memory 204 generally provides storage of computer readable instructions, data structures, program modules and other data for host device 104. Host device 104 may implement various application programs 206 stored in memory 204 and executed on processor 200 that create or otherwise access data to be transferred via network connection 106 to RAID storage array 102 for storage and subsequent retrieval. Such applications 206 might include software programs implementing, for example, word processors, spread sheets, browsers, multimedia players, illustrators, computer-aided design tools and the like. Thus, host device 104 provides a regular flow of data I/O requests to be serviced by virtual multi-level RAID storage array 102.
- Multi-level RAID array 102 is generally designed to provide continuous data storage and data retrieval for computer devices such as host device(s) 104, and to do so regardless of various fault conditions that may occur. Thus, RAID array 102 typically includes redundant subsystems such as controllers 208(A) and 208(B) and power and cooling subsystems 210(A) and 210(B) that permit continued access to the RAID array 102 even during a failure of one of the subsystems. In addition, RAID array 102 typically provides hot-swapping capability for array components (i.e. the ability to remove and replace components while the array 102 remains online) such as controllers 208(A) and 208(B), power/cooling subsystems 210(A) and 210(B), and disk drives 214 in the array of disks 212.
- Disk drives 214 of multi-level RAID array 102 are illustrated in FIG. 2 as disk drives 214(A) and 214(B). This illustration is intended to convey that disk drives 214 employ a plurality of RAID levels. Specifically, disk drives 214(A) employ a higher performing RAID level such as RAID level 1, and disk drives 214(B) employ a lower performing RAID level such as RAID level 5. However, this illustration is in no way intended to indicate that RAID levels are distributed across disk drives 214 in such a uniform or exclusive manner. The illustration of a higher performing RAID level as disk drives 214(A) and a lower performing RAID level as disk drives 214(B) is done for purposes of discussion only. As is known in the art, there are various ways of employing and distributing multiple RAID levels across disks in a multi-level RAID array.
- Controllers 208(A) and 208(B) on
RAID array 102 mirror each other and are generally configured to redundantly store and access data on disk drives 214. Thus, controllers 208(A) and 208(B) perform tasks such as attaching validation tags to data before saving it to disk drives 214 and checking the tags to ensure data from a disk drive 214 is correct before sending it back to host device 104. Controllers 208(A) and 208(B) also tolerate faults such as disk drive 214 failures by recreating data that may be lost during such failures.
- Controllers 208 on RAID array 102 typically include I/O processor(s) such as FC (fiber channel) I/O processor(s) 216, main processor(s) 218, nonvolatile (NV) RAM 220, nonvolatile memory 222 (e.g., ROM), and one or more ASICs (application specific integrated circuits) such as memory control ASIC 224. NV RAM 220 is typically supported by a battery backup (not shown) that preserves data in NV RAM 220 in the event power is lost to controller(s) 208. Nonvolatile memory 222 generally provides storage of computer readable instructions, data structures, program modules and other data for RAID storage array 102.
- Accordingly, nonvolatile memory 222 includes firmware 226, data migration module 228, and array utilization module 230. Firmware 226 is generally configured to execute on processor(s) 218 and support normal disk array 102 operations. Firmware 226 is also typically configured to handle various fault scenarios that may arise in RAID array 102. As more fully discussed herein below, migration module 228 is configured to execute on processor(s) 218 and manage data updates such that they are initially written to the lower performing RAID level 214(B). Migration module 228 is additionally configured to migrate or relocate data between RAID levels on disk drives 214(A) and 214(B) in conjunction with utilization module 230.
- FC I/O processor(s) 216 receives data and commands from host device 104 via network connection 106. FC I/O processor(s) 216 communicate with main processor(s) 218 through standard protocols and interrupt procedures to transfer data and commands to redundant controller 208(B) and generally move data between NV RAM 220 and various disk drives 214 to ensure that data is stored redundantly.
- Memory control ASIC 224 generally controls data storage and retrieval, data manipulation, redundancy management, and the like through communications between mirrored controllers 208(A) and 208(B). Memory controller ASIC 224 handles tagging of data sectors being striped to disks 214 in the array of disks 212 and writes parity information across the disk drives 214. Data striping and parity checking are well-known to those skilled in the art. Memory control ASIC 224 also typically includes internal buffers (not shown) that facilitate testing of memory 222 to ensure that all regions of mirrored memory (i.e. between mirrored controllers 208(A) and 208(B)) are compared to be identical and checked for ECC (error checking and correction) errors on a regular basis. Memory control ASIC 224 notifies processor 218 of these and other errors it detects. Firmware 226 is configured to manage errors detected by memory control ASIC 224 in a tolerant manner which may include, for example, preventing the corruption of array 102 data or working around a detected error/fault through a redundant subsystem to prevent the RAID array 102 from crashing.
- As indicated above,
migration module 228 is configured to manage incoming data updates such that they are initially written to the lower performing RAID level 214(B). This prevents disruption of foreground processes that are servicing host 104 I/O requests. However, migration module 228 and utilization module 230 are also configured to migrate data between RAID levels on disk drives 214(A) and 214(B) as foreground processes abate.
- More specifically, migration module 228 determines if there is data in the higher performing RAID level 214(A) and/or the lower performing RAID level 214(B) that is "migratable data". If there is migratable data, utilization module 230 executes to determine the utilization rate of the RAID array 102. In general, utilization module 230 executes to determine how busy the RAID array 102 is in handling host 104 I/O requests. Utilization module 230 informs migration module 228 if an appropriate time comes when migratable data can be migrated without disrupting the servicing of host 104 I/O requests.
- In the current embodiment of FIG. 2, migration module 228 includes time and/or event information 232 also stored in memory 222. The time/event information 232 is used to trigger the execution or initiation of migration module 228. Thus, migration module 228 may execute periodically as dictated by a time factor (e.g., every 5 seconds) stored in time/event information 232. Migration module 228 may also be initiated based on the occurrence of a particular event as identified by event information 232. Such an event may include, for example, the conclusion of a particular process configured to run on processor 218. Time and event factors stored as time/event information 232 are variables that may be tunable by a user or system administrator.
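- A small sketch of how such time- and event-based triggering might be wired together is shown below. The five-second period comes from the example in the text; the class, its methods, and the event hook are assumptions made for the illustration.

```python
import threading

class MigrationTrigger:
    """Kicks off a migration pass periodically or when a registered event fires."""

    def __init__(self, migration_pass, period_seconds=5.0):
        self._migration_pass = migration_pass      # callable that runs one background pass
        self._period = period_seconds              # tunable time factor
        self._timer = None

    def start(self):
        self._timer = threading.Timer(self._period, self._on_timer)
        self._timer.daemon = True
        self._timer.start()

    def _on_timer(self):
        self._migration_pass()
        self.start()                               # re-arm for the next period

    def on_event(self, event_name):
        # Event factor: e.g., called when some other array process concludes.
        self._migration_pass()

trigger = MigrationTrigger(lambda: print("migration pass"), period_seconds=5.0)
trigger.start()   # in a long-running controller process this fires every period
```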
- Upon activation, migration module 228 searches for migratable data by comparing the age of data in a low performing RAID level with the age of data in a higher performing RAID level. In general, migration module 228 operates to keep the most active data in the higher performing RAID level 214(A). The most active data is data that has been the most recently written to or updated. Thus, a comparison typically includes comparing update times for blocks of data in the lower performing RAID level 214(B) with update times for blocks of data in the higher performing RAID level 214(A). Blocks of data in the lower performing RAID level 214(B) that have been more recently updated than blocks of data in the higher performing RAID level 214(A) are migratable data. Specifically, these blocks of data will be "promoted", or migrated from the lower performing RAID level 214(B) to the higher performing RAID level 214(A). Conversely, blocks of data in the higher performing RAID level 214(A) that have not been updated as recently as blocks of data in the lower performing RAID level 214(B), are also migratable data. Specifically, these blocks of data will be "demoted", or migrated from the higher performing RAID level 214(A) to the lower performing RAID level 214(B).
- As will be recognized by those skilled in the art, additional methods for determining migratable data may exist. For example, migratable data might be determined by comparing the age of updated data with a threshold value. Thus, the above described method of comparing recently updated data blocks between lower and higher performing RAID levels is not meant to limit the manner by which migration module 228 may determine migratable data.
- If migration module 228 does not find any migratable data, the migration process terminates. The process begins again when migration module 228 is triggered as discussed above by time/event information 232.
- As mentioned above,
utilization module 230 executes to determine the utilization rate of the RAID array 102 once migration module 228 locates migratable data. In general, utilization module 230 executes to determine how busy the RAID array 102 is in handling host 104 I/O requests. Utilization module 230 informs migration module 228 if an appropriate time comes when migratable data can be migrated without disrupting the servicing of host 104 I/O requests. Utilization module 230 may terminate the migration process if the utilization rate of the RAID array 102 is not conducive for non-disruptive data migration. Again, the process will be reinitiated when migration module 228 is triggered as discussed above by time/event information 232.
- Utilization module 230 monitors the overall utilization rate of RAID array 102 in order to determine the least disruptive time to migrate data between higher 214(A) and lower 214(B) performing RAID levels. Thus, utilization module 230 is responsible for ensuring that data migration occurs as a background process that does not interfere with foreground tasks related to servicing host 104 I/O requests. If performed as a foreground task, data migration might otherwise defeat the general purpose of reducing the overall time to service host 104 I/O requests.
- There are various ways utilization module 230 might monitor the utilization rate of a virtual RAID storage array 102. As an example, an optical fiber channel (not shown) is typically used to couple controllers 208 to array of disks 212. The optical fiber channel may have a maximum data transfer rate of 100 megabytes per second. A decrease in the utilization rate of the optical fiber channel generally indicates that host device 104 I/O requests have diminished, leaving excess capacity on the optical fiber channel that can be used for other tasks without adversely impacting host I/O requests. Thus, utilization module 230 monitors the optical fiber channel to determine when the utilization rate drops below a certain utilization threshold 234. Utilization module 230 then notifies migration module 228 that data migration can proceed. As indicated above, there are other components that may be monitored as a way of indicating the general utilization rate of RAID array 102. Use of an optical fiber channel as described above is just one example.
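- The optical fiber channel example above amounts to comparing observed link throughput against a threshold. The sketch below shows that check; the 100 megabytes per second figure is taken from the text, while the sampling interface and the threshold value are assumed.

```python
LINK_CAPACITY_MB_S = 100.0        # maximum data transfer rate from the example above
UTILIZATION_THRESHOLD = 0.30      # assumed stand-in for "utilization threshold 234"

def utilization(observed_mb_per_s: float) -> float:
    """Fraction of the channel's capacity currently in use."""
    return min(observed_mb_per_s / LINK_CAPACITY_MB_S, 1.0)

def migration_may_proceed(observed_mb_per_s: float) -> bool:
    """True when host I/O has dropped enough that background migration is non-disruptive."""
    return utilization(observed_mb_per_s) < UTILIZATION_THRESHOLD

print(migration_may_proceed(85.0))   # False: channel is busy with host I/O
print(migration_may_proceed(12.0))   # True: plenty of idle capacity for migration
```

- Additional Exemplary Embodiments for Managing Data in a Multi-Level RAID Storage Array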
- FIG. 3 is a block diagram illustrating additional embodiments of a host computer device 104 and an arrayed storage device 102 as might be implemented in the system environment 100 of FIG. 1. Like the embodiment of FIG. 2, the arrayed storage device 102 is embodied as a virtual multi-level RAID storage array 102, and the host device 104 is embodied generally as a computer device. Host device 104 is configured as described above with respect to the FIG. 2 embodiment. RAID storage array 102 is also configured as described above with respect to the FIG. 2 embodiment, except that a promotion module 300 and a demotion module 302 are stored in memory 222 instead of migration module 228.
- In the FIG. 3 embodiment, promotion module 300 and demotion module 302 together perform the same general tasks as those described above with respect to the migration module 228 of FIG. 2. Therefore, incoming data updates are managed such that they are initially written to the lower performing RAID level 214(B). However, promotion module 300 and demotion module 302 can be configured to operate separately from one another. In addition, promotion module 300 and demotion module 302 determine which data to migrate between higher 214(A) and lower 214(B) performing RAID levels differently.
- Promotion module 300 is triggered as described above by time/event information 232. Upon activation, promotion module 300 looks in lower performing RAID level 214(B) for recently updated data blocks that are promotion candidates. A promotion candidate is a block of data that has been recently updated. Such a data block therefore contains active data and should be promoted to the higher performing RAID level 214(A). A promotion candidate may be a block of data that has been updated within a certain prior period of time as determined by a threshold value, or it may simply be a block of data that has been updated at some previous time but not yet promoted to the higher performing RAID level 214(A).
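- Either promotion criterion mentioned above (updated within a recency threshold, or updated at all since the last promotion pass) reduces to a simple filter over block metadata, as in the sketch below; the metadata layout and the threshold value are assumptions made for the example.

```python
from datetime import datetime, timedelta

def find_promotion_candidates(low_level_blocks, last_promotion_pass,
                              recency_threshold=timedelta(hours=1)):
    """Return block IDs in the lower performing RAID level that qualify for promotion.

    `low_level_blocks` maps block ID -> last update time. A block qualifies if it
    was updated since the previous promotion pass, or within `recency_threshold`
    of now (both criteria appear in the text; the threshold value is assumed).
    """
    now = datetime.now()
    return [block_id for block_id, last_update in low_level_blocks.items()
            if last_update > last_promotion_pass or now - last_update <= recency_threshold]

# Example: a block written ten minutes ago qualifies; a two-day-old block does not.
now = datetime.now()
blocks = {"blk-3": now - timedelta(minutes=10), "blk-9": now - timedelta(days=2)}
print(find_promotion_candidates(blocks, last_promotion_pass=now - timedelta(hours=6)))  # ['blk-3']
```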
- Utilization module 230 operates as described above when a promotion candidate is found. That is, utilization module 230 monitors the utilization rate of RAID array 102 in order to determine the least disruptive time to promote data from the lower performing RAID level 214(B) to the higher performing RAID level 214(A). Utilization module 230 may terminate the promotion process if the utilization rate of the RAID array 102 is not conducive to non-disruptive data promotion. However, promotion will be reinitiated when promotion module 300 is triggered again by time/event information 232.
- Demotion module 302 is also triggered as described above by time/event information 232. Upon activation, demotion module 302 looks in the higher performing RAID level 214(A) for data blocks that have not been written to or updated within a certain period of time as determined by a preset threshold value. Data blocks that have not been written to or updated within such a time are not considered to be active data, and are therefore demotion candidates that should be demoted to the lower performing RAID level 214(B).
- Promotion 300 and demotion 302 modules may be triggered to execute concurrently, or they may be triggered by different time/event mechanisms to execute separately.
- Exemplary Methods for Managing Data in a Multi-Level RAID Storage Array
- Example methods for managing data in a multi-level
RAID storage array 102 will now be described with primary reference to FIGS. 4-8. The methods apply generally to the exemplary embodiments of system 100 as discussed above with respect to FIGS. 1-3. The elements of the described methods may be performed by any appropriate means, such as by the execution of processor-readable instructions defined on a processor-readable medium, such as a disk, a ROM or other such memory device.
- FIGS. 4-6 are flow diagrams that show examples of general methods for migrating data between RAID levels in a multi-level RAID storage array such as that illustrated in FIG. 2. At block 400 of FIG. 4, data is initially written to a lower performing RAID level within a multi-level RAID storage array 102. At block 401, a data migration process is initiated based on a time or event factor. The process begins with a search for migratable data as shown at block 402.
- A method of searching for migratable data is illustrated by the flow diagram of FIG. 5 beginning at block 500. Referring now to FIG. 5, at block 500, the newest or most recently updated data is located in the lower performing RAID level. At block 502, the oldest or least recently updated data is located in the higher performing RAID level. At block 504, a determination is made as to whether the newest data from the lower performing RAID level is more recent than the oldest data from the higher performing RAID level. If it is not, then there is no migratable data present in the RAID storage array 102 as indicated at block 506. However, if the newest data from the lower performing RAID level is more recent than the oldest data from the higher performing RAID level, then both the newest data and the oldest data are migratable. The newest data is migratable to the higher performing RAID level and the oldest data is migratable to the lower performing RAID level as indicated at block 508.
- Referring again to FIG. 4, at block 404 a determination is made as to whether migratable data is present in the RAID storage array 102. If there is no migratable data present, the migration process terminates as indicated by block 406. The migration process will be initiated again based on a time or event factor as shown in block 401.
- If there is migratable data present in the RAID storage array 102, an array utilization check process is initiated at block 408. At block 410, a determination is made as to whether the array utilization is above a minimum threshold value. If the array utilization is above the minimum threshold, the data migration process can terminate as indicated at block 412. Again, the migration process will be reinitiated based on a time or event factor as shown in block 401.
- If the array utilization is not above the minimum threshold, migratable data is migrated to appropriate RAID levels within the RAID storage array 102. Referring to FIG. 6, migration then proceeds with migrating (i.e., promoting) migratable data from the lower performing RAID level to the higher performing RAID level as shown at block 600. Migratable data is also demoted from the higher performing RAID level to the lower performing RAID level as shown at block 602.
- FIG. 7 is a flow diagram that shows an example method for promoting data within a multi-level RAID storage array 102 such as that illustrated in FIG. 3. At block 700 of FIG. 7, data is initially written to a lower performing RAID level within a multi-level RAID storage array 102. At block 701, a data promotion process is initiated based on a time or event factor. The promotion process begins at block 702 with determining whether there are recently updated data blocks in a lower performing RAID level that are candidates for promotion to a higher performing RAID level. Promotion candidates are generally the most recently updated data blocks and can be determined either by how recently an update has been written to a data block or simply by whether a data block has been updated since a prior promotion process was executed.
- At block 704 a determination is made as to whether a promotion candidate is present in the RAID storage array 102. If there is no promotion candidate present, the promotion process terminates as indicated by block 706. The promotion process will be initiated again based on a time or event factor as shown in block 701.
- If there is a promotion candidate present in the RAID storage array 102, an array utilization check process is initiated at block 708. At block 710, a determination is made as to whether the array utilization is above a minimum threshold value. If the array utilization is above the minimum threshold, the data promotion process can terminate as indicated at block 712. Again, the promotion process will be reinitiated based on a time or event factor as shown in block 701.
- If the array utilization is not above the minimum threshold, promotion candidates are promoted from the lower performing RAID level to the higher performing RAID level at block 714.
- FIG. 8 is a flow diagram that shows an example method for demoting data within a multi-level RAID storage array 102 such as that illustrated in FIG. 3. At block 800 of FIG. 8, data is initially written to a lower performing RAID level within a multi-level RAID storage array 102. At block 801, a data demotion process is initiated based on a time or event factor. The demotion process begins at block 802 with determining whether there are data blocks in a higher performing RAID level that are candidates for demotion to a lower performing RAID level. Demotion candidates are generally the least recently updated data blocks in a higher performing RAID level that have not been written to within a certain period of time as determined by a preset threshold value.
- At block 804 a determination is made as to whether a demotion candidate is present in the RAID storage array 102. If there is no demotion candidate present, the demotion process terminates as indicated by block 806. The demotion process will be initiated again based on a time or event factor as shown in block 801.
- If there is a demotion candidate present in the RAID storage array 102, an array utilization check process is initiated at block 808. At block 810, a determination is made as to whether the array utilization is above a minimum threshold value. If the array utilization is above the minimum threshold, the data demotion process can terminate as indicated at block 812. Again, the demotion process will be reinitiated based on a time or event factor as shown in block 801.
- If the array utilization is not above the minimum threshold, demotion candidates are demoted from the higher performing RAID level to the lower performing RAID level at block 814.
- The methods of promoting and demoting data within a multi-level RAID storage array 102 as illustrated respectively in FIGS. 7 and 8 may be performed concurrently or separately depending on how time and/or event mechanisms are configured to control these methods.
- Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.
- Additionally, while one or more methods have been disclosed by means of flow diagrams and text associated with the blocks of the flow diagrams, it is to be understood that the blocks do not necessarily have to be performed in the order in which they were presented, and that an alternative order may result in similar advantages.
Claims (45)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/154,870 US6898667B2 (en) | 2002-05-23 | 2002-05-23 | Managing data in a multi-level raid storage array |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/154,870 US6898667B2 (en) | 2002-05-23 | 2002-05-23 | Managing data in a multi-level raid storage array |
Publications (2)
Publication Number | Publication Date |
---|---|
US20030221060A1 true US20030221060A1 (en) | 2003-11-27 |
US6898667B2 US6898667B2 (en) | 2005-05-24 |
Family
ID=29548966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/154,870 Expired - Lifetime US6898667B2 (en) | 2002-05-23 | 2002-05-23 | Managing data in a multi-level raid storage array |
Country Status (1)
Country | Link |
---|---|
US (1) | US6898667B2 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060056412A1 (en) * | 2004-09-14 | 2006-03-16 | Gregory Page | Optimization of routing forwarding database in a network processor |
US20060271608A1 (en) * | 2005-05-24 | 2006-11-30 | Yanling Qi | Methods and systems for automatically identifying a modification to a storage array |
US20070050589A1 (en) * | 2005-08-26 | 2007-03-01 | Hitachi, Ltd. | Data migration method |
US20070083482A1 (en) * | 2005-10-08 | 2007-04-12 | Unmesh Rathi | Multiple quality of service file system |
US7392356B1 (en) * | 2005-09-06 | 2008-06-24 | Symantec Corporation | Promotion or demotion of backup data in a storage hierarchy based on significance and redundancy of the backup data |
US20080168304A1 (en) * | 2006-12-06 | 2008-07-10 | David Flynn | Apparatus, system, and method for data storage using progressive raid |
US7441079B2 (en) | 2006-03-21 | 2008-10-21 | International Business Machines Corporation | Data location management in high density packaging |
US20100281213A1 (en) * | 2009-04-29 | 2010-11-04 | Smith Gary S | Changing the redundancy protection for data associated with a file |
US20100293348A1 (en) * | 2009-05-13 | 2010-11-18 | Samsung Electronics Co., Ltd. | Apparatus and method of rearranging data and nonvolitile data storage apparatus |
US7849352B2 (en) | 2003-08-14 | 2010-12-07 | Compellent Technologies | Virtual disk drive system and method |
US7886111B2 (en) * | 2006-05-24 | 2011-02-08 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US20110072227A1 (en) * | 2009-09-22 | 2011-03-24 | Emc Corporation | Performance improvement of a capacity optimized storage system using a performance segment storage system and a segment storage system |
US8046537B2 (en) * | 2005-07-15 | 2011-10-25 | International Business Machines Corporation | Virtualization engine and method, system, and computer program product for managing the storage of data |
US8468292B2 (en) | 2009-07-13 | 2013-06-18 | Compellent Technologies | Solid state drive data storage system and method |
US20140297983A1 (en) * | 2013-03-29 | 2014-10-02 | Fujitsu Limited | Method of arranging data, information processing apparatus, and recording medium |
US9146851B2 (en) | 2012-03-26 | 2015-09-29 | Compellent Technologies | Single-level cell and multi-level cell hybrid solid state drive |
US9489150B2 (en) | 2003-08-14 | 2016-11-08 | Dell International L.L.C. | System and method for transferring data between different raid data storage types for current data and replay data |
US20210294536A1 (en) * | 2017-11-13 | 2021-09-23 | Weka.IO LTD | Tiering Data Strategy for a Distributed Storage System |
CN114546272A (en) * | 2022-02-18 | 2022-05-27 | 山东云海国创云计算装备产业创新中心有限公司 | Method, system, apparatus and storage medium for fast universal RAID demotion to RAID5 |
US20240256162A1 (en) * | 2023-01-27 | 2024-08-01 | Dell Products L.P. | Storage Management System and Method |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7139846B1 (en) * | 2003-09-30 | 2006-11-21 | Veritas Operating Corporation | Computer system and method for performing low impact backup operations |
US7418548B2 (en) * | 2003-11-18 | 2008-08-26 | Intel Corporation | Data migration from a non-raid volume to a raid volume |
US11327674B2 (en) * | 2012-06-05 | 2022-05-10 | Pure Storage, Inc. | Storage vault tiering and data migration in a distributed storage network |
US12061519B2 (en) | 2005-09-30 | 2024-08-13 | Purage Storage, Inc. | Reconstructing data segments in a storage network and methods for use therewith |
US20070214313A1 (en) * | 2006-02-21 | 2007-09-13 | Kalos Matthew J | Apparatus, system, and method for concurrent RAID array relocation |
US20080183963A1 (en) * | 2007-01-31 | 2008-07-31 | International Business Machines Corporation | System, Method, And Service For Providing A Generic RAID Engine And Optimizer |
US7958058B2 (en) * | 2007-03-02 | 2011-06-07 | International Business Machines Corporation | System, method, and service for migrating an item within a workflow process |
US8224782B2 (en) * | 2008-09-29 | 2012-07-17 | Hitachi, Ltd. | System and method for chunk based tiered storage volume migration |
US8650145B2 (en) * | 2008-10-07 | 2014-02-11 | Hewlett-Packard Development Company, L.P. | Creating snapshots of data using a selected one of different snapshot algorithms |
US8719495B2 (en) * | 2010-03-30 | 2014-05-06 | Lenovo (Singapore) Pte. Ltd. | Concatenating a first raid with a second raid |
WO2012053026A1 (en) * | 2010-10-18 | 2012-04-26 | Hitachi, Ltd. | Data storage apparatus and power control method therefor |
US12141459B2 (en) | 2012-06-05 | 2024-11-12 | Pure Storage, Inc. | Storage pool tiering in a storage network |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5392244A (en) * | 1993-08-19 | 1995-02-21 | Hewlett-Packard Company | Memory systems with data storage redundancy management |
-
2002
- 2002-05-23 US US10/154,870 patent/US6898667B2/en not_active Expired - Lifetime
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5392244A (en) * | 1993-08-19 | 1995-02-21 | Hewlett-Packard Company | Memory systems with data storage redundancy management |
Cited By (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9047216B2 (en) | 2003-08-14 | 2015-06-02 | Compellent Technologies | Virtual disk drive system and method |
US8020036B2 (en) | 2003-08-14 | 2011-09-13 | Compellent Technologies | Virtual disk drive system and method |
US8321721B2 (en) | 2003-08-14 | 2012-11-27 | Compellent Technologies | Virtual disk drive system and method |
US7962778B2 (en) | 2003-08-14 | 2011-06-14 | Compellent Technologies | Virtual disk drive system and method |
US7945810B2 (en) | 2003-08-14 | 2011-05-17 | Compellent Technologies | Virtual disk drive system and method |
US7941695B2 (en) | 2003-08-14 | 2011-05-10 | Compellent Technolgoies | Virtual disk drive system and method |
US10067712B2 (en) | 2003-08-14 | 2018-09-04 | Dell International L.L.C. | Virtual disk drive system and method |
US8473776B2 (en) | 2003-08-14 | 2013-06-25 | Compellent Technologies | Virtual disk drive system and method |
US9489150B2 (en) | 2003-08-14 | 2016-11-08 | Dell International L.L.C. | System and method for transferring data between different raid data storage types for current data and replay data |
US9436390B2 (en) | 2003-08-14 | 2016-09-06 | Dell International L.L.C. | Virtual disk drive system and method |
US8555108B2 (en) | 2003-08-14 | 2013-10-08 | Compellent Technologies | Virtual disk drive system and method |
US8560880B2 (en) | 2003-08-14 | 2013-10-15 | Compellent Technologies | Virtual disk drive system and method |
US9021295B2 (en) | 2003-08-14 | 2015-04-28 | Compellent Technologies | Virtual disk drive system and method |
US7849352B2 (en) | 2003-08-14 | 2010-12-07 | Compellent Technologies | Virtual disk drive system and method |
US7706302B2 (en) * | 2004-09-14 | 2010-04-27 | Alcatel Lucent | Optimization of routing forwarding database in a network processor |
US20060056412A1 (en) * | 2004-09-14 | 2006-03-16 | Gregory Page | Optimization of routing forwarding database in a network processor |
US7840755B2 (en) | 2005-05-24 | 2010-11-23 | Lsi Corporation | Methods and systems for automatically identifying a modification to a storage array |
US20060271608A1 (en) * | 2005-05-24 | 2006-11-30 | Yanling Qi | Methods and systems for automatically identifying a modification to a storage array |
US9258364B2 (en) * | 2005-07-15 | 2016-02-09 | International Business Machines Corporation | Virtualization engine and method, system, and computer program product for managing the storage of data |
US8046537B2 (en) * | 2005-07-15 | 2011-10-25 | International Business Machines Corporation | Virtualization engine and method, system, and computer program product for managing the storage of data |
US7640407B2 (en) | 2005-08-26 | 2009-12-29 | Hitachi, Ltd. | Data migration method |
US20070050589A1 (en) * | 2005-08-26 | 2007-03-01 | Hitachi, Ltd. | Data migration method |
US20080209104A1 (en) * | 2005-08-26 | 2008-08-28 | Hitachi, Ltd. | Data Migration Method |
US7373469B2 (en) * | 2005-08-26 | 2008-05-13 | Hitachi, Ltd. | Data migration method |
US7392356B1 (en) * | 2005-09-06 | 2008-06-24 | Symantec Corporation | Promotion or demotion of backup data in a storage hierarchy based on significance and redundancy of the backup data |
US7469326B1 (en) * | 2005-09-06 | 2008-12-23 | Symantec Corporation | Promotion or demotion of backup data in a storage hierarchy based on significance and redundancy of the backup data |
US8438138B2 (en) * | 2005-10-08 | 2013-05-07 | Oracle International Corporation | Multiple quality of service file system using performance bands of storage devices |
US20070083482A1 (en) * | 2005-10-08 | 2007-04-12 | Unmesh Rathi | Multiple quality of service file system |
US20090228535A1 (en) * | 2005-10-08 | 2009-09-10 | Unmesh Rathi | Multiple quality of service file system using performance bands of storage devices |
US7441079B2 (en) | 2006-03-21 | 2008-10-21 | International Business Machines Corporation | Data location management in high density packaging |
US8230193B2 (en) | 2006-05-24 | 2012-07-24 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US10296237B2 (en) | 2006-05-24 | 2019-05-21 | Dell International L.L.C. | System and method for raid management, reallocation, and restripping |
US9244625B2 (en) | 2006-05-24 | 2016-01-26 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US7886111B2 (en) * | 2006-05-24 | 2011-02-08 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US8412979B2 (en) | 2006-12-06 | 2013-04-02 | Fusion-Io, Inc. | Apparatus, system, and method for data storage using progressive raid |
US11640359B2 (en) | 2006-12-06 | 2023-05-02 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US8412904B2 (en) | 2006-12-06 | 2013-04-02 | Fusion-Io, Inc. | Apparatus, system, and method for managing concurrent storage requests |
US20110179225A1 (en) * | 2006-12-06 | 2011-07-21 | Fusion-Io, Inc. | Apparatus, system, and method for a shared, front-end, distributed raid |
US7934055B2 (en) | 2006-12-06 | 2011-04-26 | Fusion-io, Inc | Apparatus, system, and method for a shared, front-end, distributed RAID |
US11960412B2 (en) | 2006-12-06 | 2024-04-16 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US20080256183A1 (en) * | 2006-12-06 | 2008-10-16 | David Flynn | Apparatus, system, and method for a front-end, distributed raid |
US8019940B2 (en) * | 2006-12-06 | 2011-09-13 | Fusion-Io, Inc. | Apparatus, system, and method for a front-end, distributed raid |
US11847066B2 (en) | 2006-12-06 | 2023-12-19 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US20080168304A1 (en) * | 2006-12-06 | 2008-07-10 | David Flynn | Apparatus, system, and method for data storage using progressive raid |
US8601211B2 (en) | 2006-12-06 | 2013-12-03 | Fusion-Io, Inc. | Storage system with front-end controller |
US11573909B2 (en) | 2006-12-06 | 2023-02-07 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US8214591B2 (en) | 2006-12-06 | 2012-07-03 | Fusion-Io, Inc. | Apparatus, system, and method for a front-end, distributed raid |
US20080256292A1 (en) * | 2006-12-06 | 2008-10-16 | David Flynn | Apparatus, system, and method for a shared, front-end, distributed raid |
US8015440B2 (en) | 2006-12-06 | 2011-09-06 | Fusion-Io, Inc. | Apparatus, system, and method for data storage using progressive raid |
US8195877B2 (en) | 2009-04-29 | 2012-06-05 | Hewlett Packard Development Company, L.P. | Changing the redundancy protection for data associated with a file |
US20100281213A1 (en) * | 2009-04-29 | 2010-11-04 | Smith Gary S | Changing the redundancy protection for data associated with a file |
US20100293348A1 (en) * | 2009-05-13 | 2010-11-18 | Samsung Electronics Co., Ltd. | Apparatus and method of rearranging data and nonvolitile data storage apparatus |
US8825942B2 (en) * | 2009-05-13 | 2014-09-02 | Samsung Electronics Co., Ltd. | Apparatus and method of rearranging data and nonvolatile data storage apparatus |
US8819334B2 (en) | 2009-07-13 | 2014-08-26 | Compellent Technologies | Solid state drive data storage system and method |
US8468292B2 (en) | 2009-07-13 | 2013-06-18 | Compellent Technologies | Solid state drive data storage system and method |
US20110071980A1 (en) * | 2009-09-22 | 2011-03-24 | Emc Corporation | Performance improvement of a capacity optimized storage system including a determiner |
US20110072227A1 (en) * | 2009-09-22 | 2011-03-24 | Emc Corporation | Performance improvement of a capacity optimized storage system using a performance segment storage system and a segment storage system |
US8447726B2 (en) | 2009-09-22 | 2013-05-21 | Emc Corporation | Performance improvement of a capacity optimized storage system including a determiner |
US9875028B2 (en) | 2009-09-22 | 2018-01-23 | EMC IP Holding Company LLC | Performance improvement of a capacity optimized storage system including a determiner |
US10013167B2 (en) * | 2009-09-22 | 2018-07-03 | EMC IP Holding Company LLC | Performance improvement of a capacity optimized storage system using a performance segment storage system and a segment storage system |
US9141300B2 (en) | 2009-09-22 | 2015-09-22 | Emc Corporation | Performance improvement of a capacity optimized storage system using a performance segment storage system and a segment storage system |
US20110072226A1 (en) * | 2009-09-22 | 2011-03-24 | Emc Corporation | Snapshotting of a performance storage system in a system for performance improvement of a capacity optimized storage system |
US20160034200A1 (en) * | 2009-09-22 | 2016-02-04 | Emc Corporation | Performance improvement of a capacity optimized storage system using a performance segment storage system and a segment storage system |
US8677052B2 (en) * | 2009-09-22 | 2014-03-18 | Emc Corporation | Snapshotting of a performance storage system in a system for performance improvement of a capacity optimized storage system |
US9146851B2 (en) | 2012-03-26 | 2015-09-29 | Compellent Technologies | Single-level cell and multi-level cell hybrid solid state drive |
US20140297983A1 (en) * | 2013-03-29 | 2014-10-02 | Fujitsu Limited | Method of arranging data, information processing apparatus, and recording medium |
US20210294536A1 (en) * | 2017-11-13 | 2021-09-23 | Weka.IO LTD | Tiering Data Strategy for a Distributed Storage System |
US11656803B2 (en) * | 2017-11-13 | 2023-05-23 | Weka.IO Ltd. | Tiering data strategy for a distributed storage system |
CN114546272A (en) * | 2022-02-18 | 2022-05-27 | 山东云海国创云计算装备产业创新中心有限公司 | Method, system, apparatus and storage medium for fast universal RAID demotion to RAID5 |
US20240256162A1 (en) * | 2023-01-27 | 2024-08-01 | Dell Products L.P. | Storage Management System and Method |
US12056378B1 (en) * | 2023-01-27 | 2024-08-06 | Dell Products L.P. | Storage management system and method |
Also Published As
Publication number | Publication date |
---|---|
US6898667B2 (en) | 2005-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6898667B2 (en) | Managing data in a multi-level raid storage array | |
US6912635B2 (en) | Distributing workload evenly across storage media in a storage array | |
US6330642B1 (en) | Three interconnected raid disk controller data processing system architecture | |
US7058764B2 (en) | Method of adaptive cache partitioning to increase host I/O performance | |
US6857057B2 (en) | Virtual storage systems and virtual storage system operational methods | |
KR100211788B1 (en) | Failure prediction for disk arrays | |
US6182198B1 (en) | Method and apparatus for providing a disc drive snapshot backup while allowing normal drive read, write, and buffering operations | |
EP0727745B1 (en) | Cache memory control apparatus and method | |
EP0718766B1 (en) | Method of operating a disk drive array | |
US5809224A (en) | On-line disk array reconfiguration | |
US7962783B2 (en) | Preventing write corruption in a raid array | |
US5574851A (en) | Method for performing on-line reconfiguration of a disk array concurrent with execution of disk I/O operations | |
US7975168B2 (en) | Storage system executing parallel correction write | |
US6799244B2 (en) | Storage control unit with a volatile cache and a non-volatile backup cache for processing read and write requests | |
US8726070B2 (en) | System and method for information handling system redundant storage rebuild | |
US7702852B2 (en) | Storage system for suppressing failures of storage media group | |
US7600152B2 (en) | Configuring cache memory from a storage controller | |
US20040205297A1 (en) | Method of cache collision avoidance in the presence of a periodic cache aging algorithm | |
JP3409859B2 (en) | Control method of control device | |
US20080082744A1 (en) | Storage system having data comparison function | |
JP2009075759A (en) | Storage device and data management method in storage device | |
US20120011326A1 (en) | Storage system and method for changing configuration of cache memory for storage system | |
JP2007156597A (en) | Storage device | |
US20100115310A1 (en) | Disk array apparatus | |
JPH09282104A (en) | Method for improving data storage performance of storage device and device therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HEWLETT-PACKARD COMPANY, COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:UMBERGER, DAVID K.;NAVARRO, GUILLERMO;CONDEL, JONATHAN;REEL/FRAME:013134/0121 Effective date: 20020513 |
|
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., COLORAD Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:013776/0928 Effective date: 20030131 Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.,COLORADO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:013776/0928 Effective date: 20030131 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
AS | Assignment |
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001 Effective date: 20151027 |
|
FPAY | Fee payment |
Year of fee payment: 12 |