US20060206665A1 - Accelerated RAID with rewind capability - Google Patents
Accelerated RAID with rewind capability
- Publication number: US20060206665A1 (application US11/433,152)
- Authority: US (United States)
- Prior art keywords: data, cache, log, area, data set
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F11/1076: Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/1441: Saving, restoring, recovering or retrying at system level; resetting or repowering
- G06F11/2066: Redundant persistent mass storage by mirroring; optimisation of the communication load
- G06F11/2069: Redundant persistent mass storage by mirroring; management of state, configuration or failover
- G06F2211/1004: Adaptive RAID, i.e. RAID system adapts to changing circumstances, e.g. RAID1 becomes RAID5 as disks fill up
- G06F2211/103: Hybrid, i.e. RAID systems with parity comprising a mix of RAID types
- Y10S707/99953: File or database maintenance; recoverability
- Y10S707/99955: File or database maintenance; archiving or backup
Definitions
- The hybrid RAID system 16 is an improvement over a conventional RAID stripe without a RAID mirror, since according to the present invention the most recently written data is likely in the log 26 stored in the mirror area 20, which provides a faster read than a stripe.
- The hybrid RAID system provides the equivalent of RAID mirror performance for all writes and for most reads, since the most recently written data is the most likely to be read back.
- The RAID stripe 22 is only accessed to retrieve data not found in the log cache 26 stored in the RAID mirror 20, whereby the hybrid RAID system 16 essentially provides the performance of a RAID mirror at the cost effectiveness of a RAID stripe.
- If the stripe 22 is written to as a foreground process (e.g., in real time), there is a write performance penalty (i.e., the host is waiting for an acknowledgement that the write is complete). The log cache 26 permits avoidance of such real-time writes to the stripe 22.
- Because the disk array 17 is divided into two logical data areas (i.e., a mirrored log write area 20 and a striped read area 22), using a mirror configuration for log writes avoids the write performance penalty of a stripe.
- If the mirror area 20 is sufficiently large to hold all log writes that occur during periods of peak activity, updates to the stripe area 22 can be performed in the background.
- The mirror area 20 is essentially a write cache, and writing the log 26 to the mirror area 20 with background writes to the stripe area 22 allows the hybrid subsystem 16 to match mirror performance at stripe-like cost.
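- The write path just described can be sketched in a few lines of Python. The model below is illustrative only; the class and method names (HybridRaidModel, host_write) are invented, and mirroring, parity generation and persistence are all omitted. What it shows is the essential shape: host writes are appended to a log and acknowledged, while a background task later applies them to their final stripe addresses.

```python
import queue
import threading

class HybridRaidModel:
    """Toy model of the hybrid organization: a mirrored write log staged into a striped data set."""

    def __init__(self):
        self.log = []                 # stands in for the mirrored log area 20
        self.stripe = {}              # stands in for the striped data set 24
        self.pending = queue.Queue()  # logged writes awaiting background flush
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def host_write(self, addr, data):
        """Append to the log and return immediately; no stripe I/O in the host's path."""
        self.log.append((addr, data))   # sequential append (mirrored in a real array)
        self.pending.put((addr, data))  # stripe update deferred to the background
        return "ack"

    def _flush_loop(self):
        """Background task: move logged writes to their final stripe addresses."""
        while True:
            addr, data = self.pending.get()
            self.stripe[addr] = data    # a real stripe would also update parity here
            self.pending.task_done()
```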
- To further enhance performance, a cache memory (e.g., RAM write cache 32, FIG. 5A) is added in front of the log cache 26, wherein incoming host blocks are first written to the RAM write cache 32 quickly and the host is acknowledged (step 138). The host perceives a faster write cycle than is possible if the data were written to disk while the host waited for an acknowledgement.
- The host data in the RAM write cache 32 is copied sequentially to the log 26 in the mirror area 20 (i.e., the disk mirror write cache) (step 140), and the log data is later copied to the data set 24 in the stripe area 22 (i.e., the disk stripe data set), e.g. as a background process (step 142). Sequential writes to the disk mirror write cache 26 and random writes to the disk stripe data set 24 provide fast sequential writes.
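- A sketch of this front end is shown below, with invented names and no real I/O. The point is the ordering: the host is acknowledged as soon as the block lands in RAM, and the copy to the mirrored log happens later, sequentially.

```python
import threading
from collections import OrderedDict

class RamWriteCache:
    """Toy RAM write cache placed in front of the mirrored log (illustrative only)."""

    def __init__(self, log_appender):
        self.blocks = OrderedDict()       # most recently written host blocks
        self.lock = threading.Lock()
        self.log_appender = log_appender  # callable that appends (addr, data) to the log

    def write(self, addr, data):
        with self.lock:
            self.blocks[addr] = data      # fast RAM write
        return "ack"                      # host sees completion before any disk I/O

    def stage_to_log(self):
        """Copy cached blocks sequentially to the mirrored log, e.g. as a background step."""
        with self.lock:
            batch, self.blocks = self.blocks, OrderedDict()
        for addr, data in batch.items():
            self.log_appender(addr, data)
```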
- A flashback module 34 (backup module) can be added to the disk array 17 to protect the RAM cache data according to the present invention. Without the module 34, write data would not be secure until stored at its destination address on disk.
- The module 34 includes a non-volatile memory 36, such as flash memory, and a battery 38.
- During normal operation, the battery 38 is trickle charged from an external power source 40 (step 150). Should any power failure then occur, the battery 38 provides the RAID controller 30 with sufficient power (step 152) to transfer the contents of the RAM write cache 32 to the flash memory 36 (step 154). Upon restoration of power, the contents of the flash memory 36 are transferred back to the RAM write cache 32, and normal operations resume (step 156). This allows acknowledging the host write request (command) once the data is written in the RAM cache 32 (which is faster than writing it to the mirror disks).
- The flashback module 34 can also be moved to another hybrid subsystem 16 to restore data from the flash memory 36.
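- A toy model of that power-failure sequence is given below. The class and method names are invented for illustration, and battery management and the actual transfer to flash are of course not modeled.

```python
class FlashbackModel:
    """Toy flashback module: dump the RAM cache to non-volatile memory on power loss,
    restore it on power-up (illustrative only)."""

    def __init__(self):
        self.flash = {}  # stands in for the non-volatile memory 36

    def on_power_failure(self, ram_cache):
        # the battery keeps the controller alive just long enough for this copy
        self.flash = dict(ram_cache)

    def on_power_restored(self):
        # contents go back to the RAM cache and normal operation resumes
        restored = dict(self.flash)
        self.flash.clear()
        return restored

# usage sketch
module = FlashbackModel()
module.on_power_failure({100: b"dirty block"})
ram_cache = module.on_power_restored()
```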
- Writes can be accumulated in the RAM cache 32 and written to the mirrored disk log file 26 sequentially (e.g., in the background).
- Write data should be transferred to disk as quickly as possible. Since the sequential throughput of a hard disk drive is substantially better than its random performance, the fastest way to transfer data from the RAM write cache 32 to disk is via the log file 26 (i.e., a sequence of address/data pairs as above) in the mirror area 20. This is because, when writing a data block to the mirror area 20, the data block is written to two different disk drives. Depending on the physical disk addresses of the incoming blocks from the host, the disk drives of the mirror 20 may be accessed randomly. However, as a log file is written sequentially based on entries in time, the blocks are written to the log file in a sequential manner, regardless of their actual physical location in the data set 24 on the disk drives.
- Data requested by the host 29 from the RAID subsystem 16 can be in the RAM write cache 32, in the log cache area 26 in the mirror area 20, or in the general purpose stripe area 22.
- Upon a host read request, a determination is made if the requested data is in the RAM cache 32 (step 162), and if so, the requested data is transferred to the host 29 from the RAM cache 32 (step 164).
- Otherwise, a determination is made if the requested data is in the write log file 26 in the mirror area 20 (step 166), and if so, the requested data is transferred to the host from the log 26 (step 168). If the requested data is not in the log 26, then the data set 24 in the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 169).
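- The read path amounts to a three-level lookup, newest copy first. In the minimal sketch below, dictionaries and a list stand in for the RAM cache 32, the log 26 and the stripe 22; the step numbers in the comments refer to the flowchart steps cited above.

```python
def read_block(addr, ram_cache, log_cache, stripe):
    """Return the newest copy of the block at addr (illustrative only)."""
    if addr in ram_cache:                          # steps 162/164
        return ram_cache[addr]
    for logged_addr, data in reversed(log_cache):  # steps 166/168, newest entry first
        if logged_addr == addr:
            return data
    return stripe.get(addr)                        # step 169
```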
- Since the data in the mirror area 20 is replicated, twice the number of actuators are available to pursue read data requests, effectively doubling responsiveness. While this mirror benefit is generally recognized, the benefit may be enhanced here because the mirror does not contain random data but rather data that has recently been written. As discussed, because the likelihood that data will be read is probably inversely related to the time since the data was written, the mirror area 20 may be more likely to contain the desired data. A further acceleration can be realized if the data is read back in the same order it was written, regardless of the potential randomness of the final data addresses, since the mirror area 20 stores data in the written order and a read in that order creates a sequential stream.
- Read performance of the subsystem 16 can be further enhanced. In a conventional RAID subsystem, one of the disk drives in the array can be reserved as a spare disk drive ("hot spare"), wherein if one of the other disk drives in the array should fail, the hot spare is used to take the place of that failed drive.
- According to the present invention, read performance can be further enhanced by pressing a disk drive normally used as a hot spare in the disk array 17 into temporary service in the hybrid RAID subsystem 16. FIG. 6A shows the hybrid RAID subsystem 16 of FIG. 3A, further including a hot spare disk drive 18a (i.e., drive6) according to the present invention.
- The status of the hot spare 18a is determined (step 170), and upon detecting that the hot spare 18a is lying dormant (i.e., not being used as a failed device replacement) (step 172), the hot spare 18a is used to replicate the data in the mirrored area 20 of the hybrid RAID subsystem 16 (step 174). Then, upon receiving a read request from the host (step 176), it is determined if the requested data is in the hot spare 18a and the mirror area 20 (step 178).
- If so, a copy of the requested data is provided to the host from the hot spare 18a with minimum latency, or from the mirror area 20 if faster (step 180). Otherwise, a copy of the requested data is provided to the host from the mirror area 20 or the stripe area 22 (step 182). Thereafter, it is determined if the hot spare 18a is required to replace a failed disk drive (step 184). If not, the process goes back to step 176; otherwise, the hot spare 18a is used to replace the failed disk drive (step 186).
- Should a disk drive fail, the hot spare 18a can immediately be delivered to take the place of that failed disk drive without increasing exposure to data loss from a single disk drive failure. For example, if drive1 fails, drive0 and drive2-drive5 can start using the spare drive6 and rebuild drive6 to contain the data of drive1 prior to failure. However, while all the disk drives 18 of the array 17 are working properly, the replication of the mirror area 20 makes the subsystem 16 more responsive to read requests by allowing the hot spare 18a to supplement the mirror area 20.
- Further, the hot spare 18a may be able to provide multiple redundant data copies for a further performance boost. For example, if the hot spare 18a matches the capacity of the mirrored area 20 of the array 17, the mirrored area data can be replicated twice on the hot spare 18a. For example, in the hot spare 18a the data can be arranged such that it is replicated on each concentric disk track (i.e., one half of a track contains a copy of that which is on the other half of that track). In that case, the rotational latency of the hot spare 18a in response to random requests is effectively halved (i.e., smaller read latency).
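- One possible read-routing policy for such an arrangement is sketched below. The decision inputs (whether the spare is online or busy) are assumptions made for illustration, not conditions recited here.

```python
def choose_read_source(in_log_cache, spare_online, spare_busy, mirror_busy):
    """Pick a source for a read when the hot spare replicates the mirror area (illustrative)."""
    if not in_log_cache:
        return "stripe area"      # data not in the log cache lives in the stripe
    if spare_online and not spare_busy:
        return "hot spare"        # replicated copy, potentially lower rotational latency
    if not mirror_busy:
        return "mirror area"
    return "hot spare" if spare_online else "mirror area"
```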
- FIG. 6C shows an example block diagram of a hybrid RAID subsystem 16 including a RAID controller 30 that implements the hybrid RAID data organization of FIG. 6A for seven disk drives (drive0-drive6), wherein drive6 is the hot spare 18a.
- Referring to drive0-drive1 in FIG. 6C, for example, M0 data is in drive0 and is duplicated in drive1, whereby drive1 protects drive0.
- In addition, M0 data is written to the spare drive6 using replication, such that if requested M0 data is in the write log 26 in the mirror area 20, it can be read back from drive0, drive1, or the spare drive6. Since M0 data is replicated twice in drive6, drive6 appears to have a high r.p.m. because, as described, replication lowers read latency. Spare drive6 can be configured to store all the mirrored blocks in a replicated fashion, similar to that for M0 data, to improve the read performance of the hybrid subsystem 16.
- As noted, the hot spare 18a can replicate the mirror area 20 twice. If the hot spare 18a includes a replication of the mirror area, the hot spare 18a can be removed from the subsystem 16 and backed up. The backup can be performed off-line, without using network bandwidth. A new baseline could be created from the hot spare 18a.
- To create the new baseline, the backup can be restored from tape to a secondary disk array, and then all writes from the log file 26 are written to the stripe 22 of the secondary disk array.
- The writes need not be applied in temporal order but can be optimized to minimize the time between reads of the hot spare and/or writes to the secondary array.
- The stripe of the secondary array is then in the same state as that of the primary array, as of the time the hot spare was removed from the primary array.
- FIG. 7A shows a block diagram of an embodiment of a hybrid RAID subsystem 16 implementing said hybrid RAID data organization, and further including a hot spare 18a as a redundant mirror and a flashback module 34, according to the present invention.
- Writing to the log 26 in the mirror area 20, together with the flashback module 34, removes the write performance penalty normally associated with replication on a mirror.
- Replication on a mirror involves adding a quarter rotation to all writes: when the target track is acquired, the average latency to one of the replicated sectors is one quarter rotation, but half a rotation is needed to write the other sector. Since the average latency on a standard mirror is half a rotation, an additional quarter rotation is required for writes.
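- To put rough numbers on this, the short calculation below assumes a 7,200 rpm drive; the rotation rate is chosen for illustration and is not a figure from the disclosure.

```python
# Average rotational latencies, in milliseconds, for an assumed 7,200 rpm drive.
rpm = 7200
full_rotation_ms = 60_000 / rpm               # about 8.33 ms per revolution

plain_mirror_write = 0.50 * full_rotation_ms  # standard mirror: half a rotation on average
replicated_write   = 0.75 * full_rotation_ms  # quarter rotation to the first replicated
                                              # sector, plus half a rotation for the second
replicated_read    = 0.25 * full_rotation_ms  # nearest of the two replicated sectors

print(plain_mirror_write, replicated_write, replicated_read)  # ~4.2, ~6.3, ~2.1 ms
```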
- FIG. 7B shows a block diagram of another embodiment of the hybrid RAID subsystem 16 of FIG. 7A, wherein the flashback module 34 is part of the data organization manager 28 that includes the RAID controller 30.
- FIG. 8A shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention in an example block device environment such as a storage area network (SAN) 42.
- FIG. 8B shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention as network attached storage (NAS) in a network 44. In the network 44, connected devices exchange files; as such, a file server 46 is positioned in front of the hybrid RAID subsystem 16.
- The file server portion of a NAS device can therefore be simplified with a focus solely on file service, while data integrity is provided by the hybrid RAID subsystem 16.
- As mentioned, the mirror area 20 acts as a temporary store for the log cache 26, prior to storing the write data in its final location in the stripe 22.
- Prior to purging the data from the mirror area 20, the log 26 can be written sequentially to an archival storage medium such as tape. Then, to return to a prior state of the data set, if a baseline backup of the entire RAID subsystem stripe 22 is created just before the log files are archived, each successive state of the RAID subsystem 16 can be recreated by re-executing the write requests within the archived log files.
- In this manner, any earlier state of the stripe 22 of the RAID subsystem 16 can be recreated (i.e., infinite roll-back or rewind). This is beneficial e.g. in allowing recovery from user error such as accidentally erasing a file, in allowing recovery from a virus infection, etc.
- To return the data set to its state at a selected prior time, a copy of the data set 24 created at a back-up time prior to the selected time is obtained (step 190), and a copy of the cache log 26 associated with said data set copy is obtained (step 192). Said associated cache log 26 includes entries 26a (FIG. 4A), each including updated data, the address in the data set where the updated data is to be stored, and a corresponding time stamp. Each data block in each entry of said associated cache log 26 is time-sequentially transferred to the corresponding block address in the data set copy, until a time stamp indicating said selected time is reached in an entry 26a of the associated cache log (step 194).
- The present invention further provides for compressing the data in the log 26 stored in the mirror area 20 of the hybrid RAID system 16, for cost effectiveness. Compression is not employed in a conventional RAID subsystem because of variability in data redundancy. For example, suppose a given data block is to be read, modified and rewritten. If the read data consumes the entire data block and the modified data does not contain as much redundancy as did the original data, then the compressed modified data cannot fit in the data block on disk.
- A read/modify/write operation is not a valid operation in the mirror area 20 in the present invention, because the mirror area 20 contains a sequential log file of writes. While a given data block may be read from the mirror area 20, after any modification the writing of the data block is appended to the existing log file stream 26, not overwritten in place. Because of this, variability in compression is not an issue in the mirror area 20. Modern compression techniques can e.g. halve the size of typical data, whereby the use of compression in the mirror area 20 effectively e.g. doubles its size. This allows doubling the mirror area size, or cutting the actual mirror area size in half, without reducing capacity relative to a mirror area without compression. The compression technique can similarly be applied to the RAM write cache 32.
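- A minimal sketch of the idea follows, assuming zlib compression and a simple (timestamp, address, length, payload) record per entry; the record layout is invented for illustration.

```python
import zlib

def append_compressed(log, timestamp, addr, data):
    """Compress a log entry and append it; nothing is ever rewritten in place,
    so a poorly compressing block simply consumes more log space."""
    log.append((timestamp, addr, len(data), zlib.compress(data)))

def read_entry(entry):
    timestamp, addr, orig_len, payload = entry
    data = zlib.decompress(payload)
    assert len(data) == orig_len
    return timestamp, addr, data
```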
- The data in the RAID subsystem 16 may be replicated to a system 16a (FIG. 7B) at a remote location.
- Conventionally, the remote system 16a may not be called upon except in the event of an emergency in which the primary RAID subsystem 16 is shut down.
- However, the remote system 16a can provide further added value in the case of the present invention.
- The primary RAID subsystem 16 sends the data in the log file 26 in the mirror area 20 to the remote subsystem 16a, wherein in this example the remote subsystem 16a comprises a hybrid RAID subsystem according to the present invention. If the log file data is compressed, the transmission time to the remote system 16a can be reduced.
- Further, the remote subsystem 16a can be the source of parity information for the primary subsystem 16.
- In the remote subsystem 16a, in the process of writing data from the mirror area to its final address on the stripe in the subsystem 16a, the associated parity data is generated.
- The remote subsystem 16a can then send the parity data (preferably compressed) to the primary subsystem 16, which can then avoid generating the parity data itself, accelerating the transfer process for a given data block between the mirror and the stripe areas in the primary subsystem 16.
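- The remote-side idea can be sketched as follows, under simplifying assumptions: logged writes are grouped into fixed-size groups of equal-length blocks and the XOR parity of each full group is returned to the primary. Real stripe geometry, addressing and compression are omitted.

```python
from functools import reduce

def remote_parity_for_log(log_entries, group_size=4):
    """While applying logged writes to its own stripe, the remote side computes the
    XOR parity of each full group of data blocks for the primary (illustrative only)."""
    parity_blocks = []
    for i in range(0, len(log_entries) - group_size + 1, group_size):
        group = [data for _addr, data in log_entries[i:i + group_size]]
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), group)
        parity_blocks.append(parity)
    return parity_blocks
```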
- The present invention goes beyond standard RAID by protecting data integrity, not just providing device reliability. Infinite roll-back provides protection during the window of vulnerability between backups. A hybrid mirror/stripe data organization results in improved performance. With the addition of the flashback module 34, a conventional RAID mirror is outperformed at a cost which approaches that of a stripe. Further performance enhancement is attained with replication on an otherwise dormant hot spare, and that hot spare can be used by a host-less appliance to generate a new baseline backup.
- The present invention can be implemented in various data processing systems such as Enterprise systems, networks, SAN, NAS, and medium and small systems (e.g., in a personal computer, a write log is used and data is transferred to the user data set in the background).
- As used herein, "host" and "host system" refer to any source of information that is in communication with the hybrid RAID subsystem for transferring data to, and from, the hybrid RAID subsystem.
Description
- The present invention relates to data protection in data storage devices, and in particular to data protection in disk arrays.
- Storage devices of various types are utilized for storing information such as in computer systems. Conventional computer systems include storage devices such as disk drives for storing information managed by an operating system file system. With decreasing costs of storage space, an increasing amount of data is stored on individual disk drives. However, in case of disk drive failure, important data can be lost. To alleviate this problem, some fault-tolerant storage devices utilize an array of redundant disk drives (RAID).
- In typical data storage systems including storage devices such as primary disk drives, the data stored on the primary storage devices is backed-up to secondary storage devices such as tape, from time to time. However, any change to the data on the primary storage devices before the next back-up, can be lost if one or more of the primary storage devices fail.
- True data protection can be achieved by keeping a log of all writes to a storage device, on a data block level. In one example, a user data set and a write log are maintained, wherein the data set has been completely backed up and thereafter a log of all writes is maintained. The backed-up data set and the write log allow returning to any state of the data set prior to its current state, by restoring the backed-up (baseline) data set and then executing all writes from that log up until the desired time.
- To protect the log file itself, RAID configured disk arrays provide protection against data loss by protecting against a single disk drive failure. Protecting the log file stream using RAID has been achieved by either a RAID mirror (known as RAID-1), shown by example in FIG. 1, or a RAID stripe (known as RAID-5), shown by example in FIG. 2. In the RAID mirror 10 including several disk drives 12, two disk drives store the data of one independent disk drive. In the RAID stripe 14, n+1 disk drives 12 are required to store the data of n independent disk drives (e.g., in FIG. 2, a stripe of five disk drives stores the data of four independent disk drives). The example RAID mirror 10 in FIG. 1 includes an array of eight disk drives 12 (e.g., drive0-drive7), wherein each disk drive 12 has e.g. 100 GB capacity. In each disk drive 12, half the capacity is used for user data, and the other half for mirror data. As such, the user data capacity of the disk array 10 is 400 GB and the other 400 GB is used for mirror data. In this example mirror configuration, drive1 protects drive0 data (M0), drive2 protects drive1 data (M1), etc. If drive0 fails, then the data M0 in drive1 can be used to recreate data M0 in drive0, and the data M7 in drive7 can be used to recreate data M7 of drive0. As such, no data is lost in case of a single disk drive failure.
- Referring back to FIG. 2, a RAID stripe configuration effectively groups capacity from all but one of the disk drives in the disk array 14 and writes the parity (XOR) of that capacity on the remaining disk drive (or across multiple drives as shown). In the example of FIG. 2, the disk array 14 includes five disk drives 12 (e.g., drive0-drive4), each disk drive 12 having e.g. 100 GB capacity, divided into 5 sections. The blocks S0-S3 in the top portions of drive0-drive3 are for user data, and a block of drive4 is for parity data (i.e., the XOR of S0-S3). In this example, the RAID stripe capacity is 400 GB for user data and 100 GB for parity data. The parity area is distributed among the disk drives 12 as shown. Spreading the parity data across the disk drives 12 allows spreading the task of reading the parity data over several disk drives as opposed to just one disk drive. Writing on a disk drive in a stripe configuration requires that the disk drive holding parity be read, a new parity calculated, and the new parity written over the old parity. This requires a disk revolution and increases the write latency. The increased write latency decreases the throughput of the storage device 14.
- On the other hand, the RAID mirror configuration ("mirror") allows writing the log file stream to disk faster than the RAID stripe configuration ("stripe"). A mirror is faster than a stripe since, in the mirror, each write activity is independent of other write activities, in that the same block can be written to the mirroring disk drives at the same time. However, a mirror configuration requires that the capacity to be protected be matched on another disk drive. This is costly as the capacity to be protected must be duplicated, requiring double the number of disk drives. A stripe reduces such capacity overhead to 1/n, where n is the number of disk drives in the disk drive array. As such, protecting data with parity across multiple disk drives makes a stripe slower than a mirror, but more cost effective.
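- The parity relationship described above can be shown concretely. The sketch below uses assumed 4-byte blocks purely for illustration: it computes the XOR parity of blocks S0-S3, reconstructs a lost block from the survivors plus parity, and shows why a stripe write must first read the old parity.

```python
def xor_blocks(*blocks):
    """Bytewise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Assumed 4-byte blocks purely for illustration (S0-S3 and their parity).
s0, s1, s2, s3 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd", b"\x00\xff\x00\xff"
parity = xor_blocks(s0, s1, s2, s3)

# Reconstruction after losing the drive holding S2:
assert xor_blocks(s0, s1, s3, parity) == s2

# Read-modify-write of S1: new parity = old parity XOR old S1 XOR new S1,
# which is why the drive holding parity must be read before the write completes.
new_s1 = b"\x11\x22\x33\x44"
new_parity = xor_blocks(parity, s1, new_s1)
assert new_parity == xor_blocks(s0, new_s1, s2, s3)
```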
- There is, therefore, a need for a method and system providing cost effective data protection with better data read/write performance than a conventional RAID system. There is also a need for such a system to provide the capability of returning to a desired previous data state.
- The present invention satisfies these needs. In one embodiment, the present invention provides a method for storing data in a fault-tolerant storage subsystem having an array of failure independent data storage units, by dividing the storage area on the storage units into a hybrid of a logical mirror area (i.e., RAID mirror) and a logical stripe area (i.e., RAID stripe). When storing data in the mirror area, the data is duplicated by keeping a duplicate copy of the data on a pair of storage units, and when storing data in the stripe area, the data is stored as stripes of blocks, including data blocks and associated error-correction blocks.
- In one version of the present invention, a log file stream is maintained as a log cache in the RAID mirror area for writing data from a host to the storage subsystem, and then data is transferred from the log file in the RAID mirror area to the final address in the RAID stripe area, preferably as a background task. In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host.
- To further enhance performance, according to the present invention, a memory cache (RAM cache) is added in front of the log cache, wherein incoming host blocks are first written to RAM cache quickly and the host is acknowledged. The host perceives a faster write cycle than is possible if the data were written to a data storage unit while the host waited for an acknowledgement. This further enhances the performance of the above hybrid RAID subsystem.
- While the data is en-route to a data storage unit through the RAM cache, power failure can result in data loss. As such, according to another aspect of the present invention, a flashback module (backup module) is added to the subsystem to protect the RAM cache data. The flashback module includes a non-volatile memory, such as flash memory, and a battery. During normal operations, the battery is trickle charged. Should any power failure then occur, the battery provides power to transfer the contents of the RAM cache to the flash memory. Upon restoration of power, the flash memory contents are transferred back to the RAM cache, and normal operations resume.
- Read performance is further enhanced by pressing a data storage unit (e.g., disk drive) normally used as a spare data storage unit (“hot spare”) in the array, into temporary service in the hybrid RAID system. In a conventional RAID subsystem, any hot spare lies dormant but ready to take over if one of the data storage units in the array should fail. According to the present invention, rather than lying dormant, the hot spare can be used to replicate the data in the mirrored area of the hybrid RAID subsystem. Should any data storage unit in the array fail, this hot spare could immediately be delivered to take the place of that failed data storage unit without increasing exposure to data loss from a single data storage unit failure. However, while all the data storage units of the array are working properly, the replication of the mirror area would make the array more responsive to read requests by allowing the hot spare to supplement the mirror area.
- The mirror area acts as a temporary store for the log, prior to storing the write data in its final location in the stripe area. In another version of the present invention, prior to purging the data from the mirror area, the log can be written sequentially to an archival storage medium such as tape. If a baseline backup of the entire RAID subsystem stripe is created just before the log files are archived, each successive state of the RAID subsystem can be recreated by re-executing the write requests within the archived log files. This would allow any earlier state of the stripe of the RAID subsystem to be recreated (i.e., infinite roll-back or rewind). This is beneficial in allowing recovery from e.g. user error such as accidentally erasing a file, from a virus infection, etc.
- As such, the present invention provides a method and system of providing cost effective data protection with improved data read/write performance relative to a conventional RAID system, and also provides the capability of returning to a desired previous data state.
- These and other features, aspects and advantages of the present invention will become understood with reference to the following description, appended claims and accompanying figures where:
- FIG. 1 shows a block diagram of an example disk array configured as a RAID mirror;
- FIG. 2 shows a block diagram of an example disk array configured as a RAID stripe;
- FIG. 3A shows a block diagram of an example hybrid RAID data organization in a disk array according to an embodiment of the present invention;
- FIG. 3B shows an example flowchart of an embodiment of the steps of data storage according to the present invention;
- FIG. 3C shows a block diagram of an example RAID subsystem logically configured as hybrid RAID stripe and mirror, according to the hybrid RAID data organization of FIG. 3A;
- FIG. 4A shows an example data set and a log of updates to the data set after a back-up;
- FIG. 4B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
- FIG. 4C shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
- FIG. 5A shows another block diagram of the disk array of FIGS. 3A and 3B, further including a flashback module according to the present invention;
- FIG. 5B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
- FIG. 5C shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
- FIG. 6A shows a block diagram of another example hybrid RAID data organization in a disk array including a hot spare used as a temporary RAID mirror according to the present invention;
- FIG. 6B shows an example flowchart of another embodiment of the steps of data storage according to the present invention;
- FIG. 6C shows a block diagram of an example RAID subsystem logically configured as the hybrid RAID data organization of FIG. 6A that further includes a hot spare used as a temporary RAID mirror;
- FIG. 7A shows a block diagram of another disk array including a hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention;
- FIG. 7B shows a block diagram of another disk array including a hybrid RAID data organization using stripe and mirror configurations, and further including a hot spare as a redundant mirror and a flashback module, according to the present invention;
- FIG. 8A shows an example of utilizing a hybrid RAID subsystem in a storage area network (SAN), according to the present invention;
- FIG. 8B shows an example of utilizing a hybrid RAID subsystem as network attached storage (NAS), according to the present invention; and
- FIG. 8C shows an example flowchart of another embodiment of the steps of data storage according to the present invention.
- Referring to FIG. 3A, an example fault-tolerant storage subsystem 16 having an array of failure independent data storage units 18, such as disk drives, using a hybrid RAID data organization according to an embodiment of the present invention is shown. The data storage units 18 can be other storage devices, such as e.g. optical storage devices, DVD-RAM, etc. As discussed, protecting data with parity across multiple disk drives makes a RAID stripe slow but cost effective. A RAID mirror provides better data transfer performance because the target sector is simultaneously written on two disk drives, but requires that the capacity to be protected be matched on another disk drive. A RAID stripe reduces such capacity to 1/n, where n is the number of drives in the disk array, but in a RAID stripe both the target and the parity sector must be read and then written, causing write latency.
- In the example of FIG. 3A, an array 17 of six disk drives 18 (e.g., drive0-drive5) is utilized for storing data from, and reading data back to, a host system, and is configured to include both a RAID mirror data organization and a RAID stripe data organization according to the present invention. In the disk array 17, the RAID mirror ("mirror") configuration provides a performance advantage when transferring data to the disk drives 18 using e.g. a log file stream approach, and the RAID stripe ("stripe") configuration provides cost effectiveness by using the stripe organization for general purpose storage of user data sets.
- Referring to the example steps in the flowchart of FIG. 3B, according to an embodiment of the present invention, this is achieved by dividing the capacity of the disk array 17 of FIG. 3A into at least two areas (segments), including a mirror area 20 and a stripe area 22 (step 100). A data set 24 is maintained in the stripe area 22 (step 102), and an associated log file/stream 26 is maintained in the mirror area 20 (step 104). The log file 26 is maintained as a write log cache in the mirror area 20, such that upon receiving a write request from a host, the host data is written to the log file 26 (step 106), and then the data is transferred from the log file 26 in the mirror area 20 to a final address in the data set 24 in the stripe area 22 (preferably performed as a background task) (step 108). In doing so, the aforementioned write latency performance penalty associated with writes to a RAID stripe can be masked from the host. Preferably, the log is backed up to tape continually or on a regular basis (step 110). The above steps are repeated as write requests arrive from the host. The disk array 17 can include additional hybrid RAID mirror and RAID stripe configured areas according to the present invention.
- Referring to FIG. 3C, the example hybrid RAID subsystem 16 according to the present invention further includes a data organization manager 28 having a RAID controller 30 that implements the hybrid data organization of FIG. 3A on the disk array 17 (e.g., an array of N disk drives 18). In the example of FIG. 3C, an array 17 of N=6 disk drives (drive0-drive5, e.g. 100 GB each) is configured such that portions of the capacity of the disk drives 18 are used as a RAID mirror for the write log cache 26 and write log cache mirror data 27 (i.e., M0-M5). And the remaining portions of the capacity of the disk drives 18 are used as a RAID stripe for user data (e.g., S0-S29) and parity data (e.g., XOR0-XOR29). In this example, 400 GB of user data is stored in the hybrid RAID subsystem 16, compared to the same capacity in the RAID mirror 10 of FIG. 1 and the RAID stripe 14 of FIG. 2. The subsystem 16 communicates with a host 29 via a host interface 31. Other numbers of disk drives, and disk drives with different storage capacities, can also be used in the RAID subsystem 16 of FIG. 3C, according to the present invention.
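One self-consistent way to arrive at the 400 GB figure, offered only as an illustration: the patent does not spell out the per-drive split, so the 20 GB mirror share per drive assumed below is a hypothetical value chosen to make the arithmetic work out.

```python
# Assumed split: each 100 GB drive donates 20 GB to the mirror area and 80 GB to a
# single-parity stripe.  With six drives, the stripe then holds 5 x 80 GB = 400 GB
# of user data, and the mirrored log area holds 6 x 20 GB / 2 = 60 GB of log.
drives, per_drive_gb, mirror_share_gb = 6, 100, 20
stripe_user_gb = (drives - 1) * (per_drive_gb - mirror_share_gb)  # 400 GB user data
mirror_log_gb = drives * mirror_share_gb // 2                     # 60 GB usable log
print(stripe_user_gb, mirror_log_gb)
```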
- FIG. 4A shows an example user data set 24 and a write log 26, wherein the data set 24 has been completely backed up at e.g. midnight and thereafter a log 26 of all writes has been maintained (e.g., at times t1-t6). In this example, each write log entry 26a includes updated data (udata), the address (addr) in the data set where the updated data is to be stored, and a corresponding time stamp (ts). The data set at each time t1-t6 is also shown in FIG. 4A. The backed-up data set 24 and the write log 26 allow returning to the state of the data set 24 at any time before the current state of the data set (e.g., the state at time t6), by restoring the backed-up (baseline) data set 24 and then executing all writes from that log 26 up until that time. For example, if data for address addr=0 (e.g., logical block address 0) were updated at time t2, but then corrupted at time t5, then the data for addr=0 as of time t2 can be retrieved by restoring the baseline backup and running the write log through time t2. The log file 26 is first written in the RAID mirror area 20 and then data is transferred from the log file 26 in the RAID mirror area 20 to the final address in the RAID stripe area 22 (preferably as a background task), according to the present invention.
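A minimal sketch of the point-in-time read just described, assuming the log entries carry the ts/addr/udata fields of FIG. 4A; the function name and the dict-based baseline are illustrative assumptions.

```python
def block_as_of(baseline: dict, log: list, addr: int, t: float):
    """Return the contents of block 'addr' as of time t by starting from the
    baseline backup and replaying logged writes with ts <= t in time order."""
    value = baseline.get(addr)
    for entry in sorted(log, key=lambda e: e["ts"]):
        if entry["ts"] > t:
            break
        if entry["addr"] == addr:
            value = entry["udata"]
    return value
```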
- As the write log 26 may grow large, it is preferably offloaded to secondary storage devices such as tape drives, to free up disk space to log more changes to the data set 24. As such, the disk array 17 (FIG. 3C) is used as a write log cache in a three-step process: (1) when the host needs to write data to a disk, rather than writing to the final destination in a disk drive, that data is first written to the log 26, satisfying the host; (2) then, when the disk drive is not busy, that data from the log 26 is transferred to the final destination data set on the disk drive, transparent to the host; and (3) the log data is backed up to e.g. tape to free up storage space to log new data from the host. The log and the final destination data are maintained in a hybrid RAID configuration as described.
- Referring to the example steps in the flowchart of FIG. 4B, upon receiving a host read request (step 120), a determination is made whether the requested data is in the write log 26, maintained as a cache in the mirror area 20 (i.e., a cache hit) (step 122), and if so, the requested data is transferred to the host from the log 26 (step 124). Statistically, since recently written data is more likely to be read back than previously written data, there is a tradeoff such that the larger the log area, the higher the probability that the requested data is in the log 26 (in the mirror area 20). When reading multiple blocks from the mirror area 20, different blocks can be read from different disk drives simultaneously, increasing read performance. In step 122, if there is no log cache hit, then the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 126). Stripe read performance is inferior to a mirror, but not as dramatically as write performance is inferior.
- As such, the stripe area 22 is used for flushing the write log data, thereby permanently storing the data set in the stripe area 22, and is also used to read data blocks that are not in the write log cache 26 in the mirror area 20. The hybrid RAID system 16 is an improvement over a conventional RAID stripe without a RAID mirror, since according to the present invention the most recently written data is likely in the log 26 stored in the mirror area 20, which provides a faster read than a stripe. The hybrid RAID system provides the equivalent of RAID mirror performance for all writes and for most reads, since the most recently written data is the most likely to be read back. As such, the RAID stripe 22 is only accessed to retrieve data not found in the log cache 26 stored in the RAID mirror 20, whereby the hybrid RAID system 16 essentially provides the performance of a RAID mirror at the cost effectiveness of a RAID stripe.
- Therefore, if the stripe 22 is written to as a foreground process (e.g., in real time), then there is a write performance penalty (i.e., the host is waiting for an acknowledgement that the write is complete). The log cache 26 permits avoidance of such real-time writes to the stripe 22. Because the disk array 17 is divided into two logical data areas (i.e., a mirrored log write area 20 and a striped read area 22), using a mirror configuration for log writes avoids the write performance penalty of a stripe. Provided the mirror area 20 is sufficiently large to hold all log writes that occur during periods of peak activity, updates to the stripe area 22 can be performed in the background. The mirror area 20 is essentially a write cache, and writing the log 26 to the mirror area 20 with background writes to the stripe area 22 allows the hybrid subsystem 16 to match mirror performance at stripe-like cost.
- Referring to the example steps in the flowchart of FIG. 4C, to further enhance performance according to the present invention, a cache memory (e.g., RAM write cache 32, FIG. 5A) is added in front of the log cache 26 in the disk array 17 (step 130), and, as above, the data set 24 and the log file 26 are maintained in the stripe area 22 and the mirror area 20, respectively (steps 132, 134). Upon receiving host write requests (step 136), incoming host blocks are first written to the RAM write cache 32 quickly and the host is acknowledged (step 138). The host perceives a faster write cycle than is possible if the data were written to disk while the host waited for an acknowledgement. This enhances the performance of a conventional RAID system and further enhances the performance of the above hybrid RAID subsystem 16. The host data in the RAM write cache 32 is copied sequentially to the log 26 in the mirror area 20 (i.e., the disk mirror write cache) (step 140), and the log data is later copied to the data set 24 in the stripe area 22 (i.e., the disk stripe data set), e.g. as a background process (step 142). Sequential writes to the disk mirror write cache 26, with the random writes to the disk stripe data set 24 deferred to the background, provide fast sequential write handling.
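A minimal sketch of the three-stage write path of FIG. 4C (steps 136-142), again using in-memory stand-ins for the RAM write cache, the mirrored log, and the striped data set; the names are illustrative assumptions.

```python
class ThreeStageWritePath:
    def __init__(self):
        self.ram_cache = []   # RAM write cache 32 (volatile, fastest)
        self.disk_log = []    # log 26 in the mirror area (sequential disk writes)
        self.data_set = {}    # data set 24 in the stripe area (random disk writes)

    def host_write(self, addr: int, data: bytes) -> str:
        self.ram_cache.append((addr, data))
        return "ack"                      # step 138: host acknowledged immediately

    def stage_to_mirror_log(self) -> None:
        # Step 140: background sequential copy from the RAM cache to the mirrored log.
        self.disk_log.extend(self.ram_cache)
        self.ram_cache.clear()

    def flush_to_stripe(self) -> None:
        # Step 142: background copy from the log to final stripe addresses.
        for addr, data in self.disk_log:
            self.data_set[addr] = data
        self.disk_log.clear()
```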
- However, a power failure while the data is en route to disk (e.g., to the write log cache on disk) through the RAM write cache 32 can result in data loss, because RAM is volatile. Therefore, as shown in the example block diagram of another embodiment of a hybrid RAID subsystem 16 in FIG. 5A, a flashback module 34 (backup module) can be added to the disk array 17 to protect RAM cache data according to the present invention. Without the module 34, write data would not be secure until stored at its destination address on disk.
- The module 34 includes a non-volatile memory 36, such as Flash memory, and a battery 38. Referring to the example steps in the flowchart of FIG. 5B, during normal operations the battery 38 is trickle charged from an external power source 40 (step 150). Should any power failure then occur, the battery 38 provides the RAID controller 30 with sufficient power (step 152) to transfer the contents of the RAM write cache 32 to the flash memory 36 (step 154). Upon restoration of power, the contents of the flash memory 36 are transferred back to the RAM write cache 32, and normal operations resume (step 156). This allows acknowledging the host write request (command) once the data is written in the RAM cache 32 (which is faster than writing it to the mirror disks). Should a failure of an element of the RAID subsystem 16 preclude resumption of normal operations, the flashback module 34 can be moved to another hybrid subsystem 16 to restore data from the flash memory 36. With the flashback module 34 protecting the RAM write cache 32 against power loss, writes can be accumulated in the RAM cache 32 and written to the mirrored disk log file 26 sequentially (e.g., in the background).
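A minimal sketch of the flashback-module behavior of FIG. 5B, assuming simple lists stand in for the volatile RAM write cache and the module's flash memory; the class and method names are illustrative assumptions.

```python
class FlashbackModule:
    def __init__(self):
        self.flash = []                     # non-volatile memory 36

    def on_power_failure(self, ram_cache: list) -> None:
        # Steps 152-154: battery power is used to dump the RAM write cache to flash.
        self.flash = list(ram_cache)

    def on_power_restored(self) -> list:
        # Step 156: the saved contents repopulate the RAM write cache and are cleared
        # from flash so normal operation can resume.
        restored, self.flash = self.flash, []
        return restored
```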
- To minimize the size (and the cost) of the RAM write cache 32 (and thus the corresponding size and cost of the flash memory 36 in the flashback module 34), write data should be transferred to disk as quickly as possible. Since the sequential throughput of a hard disk drive is substantially better than its random performance, the fastest way to transfer data from the RAM write cache 32 to disk is via the log file 26 (i.e., the sequence of address/data pairs described above) in the mirror area 20. This is because, when writing a data block to the mirror area 20, the data block is written to two different disk drives. Depending on the physical disk addresses of the incoming blocks from the host to be written, the disk drives of the mirror 20 may be accessed randomly. However, as a log file is written sequentially based on entries in time, the blocks are written to the log file in a sequential manner, regardless of their actual physical location in the data set 24 on the disk drives.
- In the above hybrid RAID system architecture according to the present invention, data requested by the host 29 from the RAID subsystem 16 can be in the RAM write cache 32, in the log cache area 26 in the mirror area 20, or in the general purpose stripe area 22. Referring to the example steps in the flowchart of FIG. 5C, upon receiving a host read request (step 160), a determination is made whether the requested data is in the RAM cache 32 (step 162), and if so, the requested data is transferred to the host 29 from the RAM cache 32 (step 164). If the requested data is not in the RAM cache 32, then a determination is made whether the requested data is in the write log file 26 in the mirror area 20 (step 166), and if so, the requested data is transferred to the host from the log 26 (step 168). If the requested data is not in the log 26, then the data set 24 in the stripe area 22 is accessed to retrieve the requested data to provide to the host (step 169).
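A minimal sketch of the read precedence of FIG. 5C (steps 160-169): RAM write cache first, then the log cache in the mirror area, then the stripe-area data set. The container layouts are illustrative assumptions carried over from the earlier sketches.

```python
def host_read(addr: int, ram_cache: dict, disk_log: list, data_set: dict):
    if addr in ram_cache:                  # steps 162/164: hit in the RAM cache
        return ram_cache[addr]
    for entry in reversed(disk_log):       # steps 166/168: newest log entry wins
        if entry["addr"] == addr:
            return entry["udata"]
    return data_set.get(addr)              # step 169: fall back to the stripe area
```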
- Since data in the mirror area 20 is replicated, twice the number of actuators are available to pursue read data requests, effectively doubling responsiveness. While this mirror benefit is generally recognized, the benefit may be enhanced here because the mirror does not contain random data, but rather data that has recently been written. As discussed, because the likelihood that data will be read generally decreases with the time since the data was written, the mirror area 20 may be more likely to contain the desired data. A further acceleration can be realized if the data is read back in the same order it was written, regardless of the potential randomness of the final data addresses, since the mirror area 20 stores data in the written order and a read in that order creates a sequential stream.
- According to another aspect of the present invention, the read performance of the subsystem 16 can be further enhanced. In a conventional RAID system, one of the disk drives in the array can be reserved as a spare disk drive (a “hot spare”), wherein if one of the other disk drives in the array should fail, the hot spare is used to take the place of that failed drive. According to the present invention, read performance can be further enhanced by pressing a disk drive normally used as a hot spare in the disk array 17 into temporary service in the hybrid RAID subsystem 16. FIG. 6A shows the hybrid RAID subsystem 16 of FIG. 3A, further including a hot spare disk drive 18a (i.e., drive6) according to the present invention.
- Referring to the example steps in the flowchart of FIG. 6B, according to the present invention, the status of the hot spare 18a is determined (step 170), and upon detecting that the hot spare 18a is lying dormant (i.e., not being used as a failed-device replacement) (step 172), the hot spare 18a is used to replicate the data in the mirrored area 20 of the hybrid RAID subsystem 16 (step 174). Then, upon receiving a read request from the host (step 176), it is determined whether the requested data is in the hot spare 18a and the mirror area 20 (step 178). If so, a copy of the requested data is provided to the host from the hot spare 18a with minimum latency, or from the mirror area 20 if faster (step 180). Otherwise, a copy of the requested data is provided to the host from the mirror area 20 or the stripe area 22 (step 182). Thereafter, it is determined whether the hot spare 18a is required to replace a failed disk drive (step 184). If not, the process goes back to step 176; otherwise the hot spare 18a is used to replace the failed disk drive (step 186).
- As such, in FIG. 6A, should any disk drive 18 in the array 17 fail, the hot spare 18a can immediately be delivered to take the place of that failed disk drive without increasing exposure to data loss from a single disk drive failure. For example, if drive1 fails, drive0 and drive2-drive5 can start using the spare drive6 and rebuild drive6 to contain the data of drive1 prior to the failure. However, while all the disk drives 18 of the array 17 are working properly, the replication of the mirror area 20 would make the subsystem 16 more responsive to read requests by allowing the hot spare 18a to supplement the mirror area 20.
- Depending upon the size of the mirrored area 20, the hot spare 18a may be able to provide multiple redundant data copies for a further performance boost. For example, if the hot spare 18a matches the capacity of the mirrored area 20 of the array 17, the mirrored area data can then be replicated twice on the hot spare 18a. For example, in the hot spare 18a the data can be arranged so that the data is replicated on each concentric disk track (i.e., one half of a track contains a copy of that which is on the other half of that track). In that case, the rotational latency of the hot spare 18a in response to random requests is effectively halved (i.e., smaller read latency).
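A back-of-the-envelope check of the halved-latency claim: with k evenly spaced copies of a sector on a track, the expected wait for the nearest copy is one 2k-th of a rotation. This is a generic rotational-latency model, not a formula from the patent, and the 7200 r.p.m. figure is only an example.

```python
def expected_rotational_latency_ms(rpm: float, copies_per_track: int = 1) -> float:
    """Expected rotational latency when each sector appears copies_per_track times,
    evenly spaced around the track."""
    rotation_period_ms = 60_000.0 / rpm
    return rotation_period_ms / (2 * copies_per_track)

print(expected_rotational_latency_ms(7200, 1))  # ~4.17 ms, conventional layout
print(expected_rotational_latency_ms(7200, 2))  # ~2.08 ms, replicated half-tracks
```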
- As such, the hot spare 18a is used to make the mirror area 20 of the hybrid RAID subsystem 16 faster. FIG. 6C shows an example block diagram of a hybrid RAID subsystem 16 including a RAID controller 30 that implements the hybrid RAID data organization of FIG. 6A for seven disk drives (drive0-drive6), wherein drive6 is the hot spare 18a. Considering drive0-drive1 in FIG. 6C, for example, M0 data is in drive0 and is duplicated in drive1, whereby drive1 protects drive0. In addition, M0 data is written to the spare drive6 using replication, such that if requested M0 data is in the write log 26 in the mirror area 20, it can be read back from drive0, drive1, or the spare drive6. Since M0 data is replicated twice in drive6, drive6 appears to have a higher r.p.m., because, as described, replication lowers read latency. The spare drive6 can be configured to store all the mirrored blocks in a replicated fashion, similar to that for M0 data, to improve the read performance of the hybrid subsystem 16.
- Because a hot spare disk drive should match the capacity of the other disk drives in the disk array (the primary array), and since in this example the mirror area data (M0-M5) is half the capacity of a disk drive 18, the hot spare 18a can replicate the mirror area 20 twice. If the hot spare 18a includes a replication of the mirror area, the hot spare 18a can be removed from the subsystem 16 and backed up. The backup can be performed off-line, without using network bandwidth. A new baseline could be created from the hot spare 18a.
- If, for example, a full backup of the disk array has previously been made to tape, and the hot spare 18a contains all writes since that backup, then the backup can be restored from tape to a secondary disk array and then all writes from the log file 26 can be written to the stripe 22 of the secondary disk array. To speed this process, only the most recent update to a given block need be written. The order of writes need not take place in a temporal order, but can be optimized to minimize the time between reads of the hot spare and/or writes to the secondary array. The stripe of the secondary array is then in the same state as that of the primary array as of the time the hot spare was removed from the primary array. Backing up the secondary array to tape at this point creates a new baseline, which can then be updated with newer hot spares over time to create newer baselines, facilitating fast emergency restores. Such new baseline creation can be done without a host, but rather with an appliance including a disk array and a tape drive. If the new baseline tape backup fails, the process can revert to the previous baseline and a tape backup of the hot spare.
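A minimal sketch of the new-baseline step just described: restore the old baseline to a secondary array, then apply only the most recent logged update for each block. The dict-based "array" and the function name are illustrative assumptions.

```python
def build_new_baseline(old_baseline: dict, log_entries: list) -> dict:
    """Combine a restored tape baseline with the writes held on the removed hot
    spare, keeping only the newest update per block address."""
    latest = {}
    for entry in sorted(log_entries, key=lambda e: e["ts"]):
        latest[entry["addr"]] = entry["udata"]   # later entries overwrite older ones
    secondary = dict(old_baseline)               # state restored from the tape backup
    secondary.update(latest)                     # apply one write per touched block
    return secondary                             # matches the primary stripe's state
```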
- FIG. 7A shows a block diagram of an embodiment of a hybrid RAID subsystem 16 implementing said hybrid RAID data organization, and further including a hot spare 18a as a redundant mirror and a flashback module 34, according to the present invention. Writing to the log 26 in the mirror area 20, together with the flashback module 34, removes the write performance penalty normally associated with replication on a mirror. Replication on a mirror involves adding a quarter rotation to all writes: when the target track is acquired, the average latency to one of the replicated sectors is one quarter rotation, but half a rotation is needed to write the other sector. Since the average latency on a standard mirror is half a rotation, an additional quarter rotation is required for writes. With the flashback module 34, acknowledgment of write non-volatility to the host can occur upon receipt of the write in the RAM write cache 32 in the RAID controller 30. Writes from the RAM write cache 32 to the disk log file write cache 26 occur in the background during periods of non-peak activity. By writing sequentially to the log file 26, the likelihood of such non-peak activity is greatly increased. FIG. 7B shows a block diagram of another embodiment of the hybrid RAID subsystem 16 of FIG. 7A, wherein the flashback module 34 is part of the data organization manager 28 that includes the RAID controller 30.
- Another embodiment of a hybrid RAID subsystem 16 according to the present invention provides data block service and can be used as any block device (e.g., a single disk drive, RAID, etc.). Such a hybrid RAID subsystem can be used in any system wherein a device operating at the data block level can be used. FIG. 8A shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention in an example block-level environment such as a storage area network (SAN) 42. In a SAN, connected devices exchange data blocks.
- FIG. 8B shows an example of utilizing an embodiment of a hybrid RAID subsystem 16 according to the present invention as network attached storage (NAS) in a network 44. In NAS, connected devices exchange files; as such, a file server 46 is positioned in front of the hybrid RAID subsystem 16. The file server portion of a NAS device can be simplified with a focus solely on file service, while data integrity is provided by the hybrid RAID subsystem 16.
- The present invention provides further example enhancements to the hybrid RAID subsystem, described herein below. As mentioned, the mirror area 20 (FIG. 3A) acts as a temporary store for the log cache 26, prior to storing the write data in its final location in the stripe 22. Before purging the data from the temporary mirror 20, the log 26 can be written sequentially to an archival storage medium such as tape. Then, to return to a prior state of the data set, if a baseline backup of the entire RAID subsystem stripe 22 is created just before the log files are archived, each successive state of the RAID subsystem 16 can be recreated by re-executing the write requests within the archived log file system. This would allow any earlier state of the stripe 22 of the RAID subsystem 16 to be recreated (i.e., infinite roll-back or rewind). This is beneficial, e.g., in allowing recovery from a user error such as accidentally erasing a file, in allowing recovery from a virus infection, etc. Referring to the example steps in the flowchart of FIG. 8C, to recreate a state of the data set 24 in the stripe 22 at a selected time, a copy of the data set 24 created at a back-up time prior to the selected time is obtained (step 190), and a copy of the cache log 26 associated with said data set copy is obtained (step 192). Said associated cache log 26 includes entries 26a (FIG. 4A) created time-sequentially immediately subsequent to said back-up time. Each data block in each entry of said associated cache log 26 is time-sequentially transferred to the corresponding block address in the data set copy, until a time stamp indicating said selected time is reached in an entry 26a of the associated cache log (step 194).
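A minimal sketch of the rewind procedure of FIG. 8C (steps 190-194), assuming the baseline copy is keyed by block address and the archived cache log is a time-ordered list of ts/addr/udata entries; the names are illustrative assumptions.

```python
def rewind_data_set(baseline_copy: dict, archived_log: list, selected_time: float) -> dict:
    restored = dict(baseline_copy)               # step 190: copy of the backed-up data set
    for entry in archived_log:                   # step 192: associated cache log entries
        if entry["ts"] > selected_time:          # step 194: stop once the time stamp of
            break                                #   the selected time is reached
        restored[entry["addr"]] = entry["udata"] # replay each block to its address
    return restored
```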
- The present invention further provides for compressing the data in the log 26 stored in the mirror area 20 of the hybrid RAID system 16, for cost effectiveness. Compression is not employed in a conventional RAID subsystem because of variability in data redundancy. For example, suppose a given data block is to be read, modified, and rewritten. If the read data consumes the entire data block and the modified data does not contain as much redundancy as did the original data, then the compressed modified data cannot fit in the data block on disk.
- However, a read/modify/write operation is not a valid operation in the mirror area 20 in the present invention, because the mirror area 20 contains a sequential log file of writes. While a given data block may be read from the mirror area 20, after any modification the writing of the data block would be appended to the existing log file stream 26, not overwritten in place. Because of this, variability in compression is not an issue in the mirror area 20. Modern compression techniques can, e.g., halve the size of typical data, whereby the use of compression in the mirror area 20 effectively doubles its size. This allows doubling the effective mirror area size, or cutting the actual mirror area size in half, without reducing capacity relative to a mirror area without compression. The compression technique can similarly be applied to the RAM write cache 32.
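A minimal sketch of compressing log entries as they are appended to the mirror-area log: because the log is append-only, an entry that compresses poorly never has to fit back into a fixed-size slot. zlib and the JSON/hex framing below are stand-ins chosen for the example, not mechanisms named by the patent.

```python
import json
import time
import zlib

def append_compressed(log: list, addr: int, data: bytes) -> None:
    entry = {"ts": time.time(), "addr": addr, "udata": data.hex()}
    log.append(zlib.compress(json.dumps(entry).encode()))  # appended, never rewritten

def read_entry(log: list, index: int) -> dict:
    entry = json.loads(zlib.decompress(log[index]).decode())
    entry["udata"] = bytes.fromhex(entry["udata"])
    return entry
```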
- For additional data protection, in another version of the present invention, the data in the RAID subsystem 16 may be replicated to a system 16a (FIG. 7B) at a remote location. The remote system 16a may not be called upon except in the event of an emergency in which the primary RAID subsystem 16 is shut down. However, the remote system 16a can provide further added value in the case of the present invention. In particular, the primary RAID subsystem 16 sends data in the log file 26 in the mirror area 20 to the remote subsystem 16a, wherein in this example the remote subsystem 16a comprises a hybrid RAID subsystem according to the present invention. If the log file data is compressed, the transmission time to the remote system 16a can be reduced. Since the load on the remote subsystem 16a is less than that on the primary subsystem 16 (i.e., the primary subsystem 16 responds to both read and write requests, whereas the remote subsystem 16a need only respond to writes), the remote subsystem 16a can be the source of parity information for the primary subsystem 16. As such, within the remote subsystem 16a, in the process of writing data from the mirror area to its final address on the stripe in the subsystem 16a, the associated parity data is generated. The remote subsystem 16a can then send the parity data (preferably compressed) to the primary subsystem 16, which can then avoid generating parity data itself, accelerating the transfer process for a given data block between the mirror and the stripe areas in the primary subsystem 16.
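A minimal sketch of the parity-offload idea: the remote subsystem computes the XOR parity for a stripe while flushing its own copy of the log, and returns that parity so the primary can skip the computation. The stripe framing and function name are illustrative assumptions.

```python
from functools import reduce

def stripe_parity(stripe_units):
    """XOR a list of equal-length data units (bytes) into a single parity unit."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripe_units)

# Usage: parity computed on the remote subsystem, then shipped back (preferably
# compressed) and written alongside the data units on the primary's stripe.
parity = stripe_parity([b"\x01\x02", b"\x10\x20", b"\x0f\x0f"])
```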
- The present invention goes beyond standard RAID by protecting data integrity, not just providing device reliability. Infinite roll-back provides protection during the window of vulnerability between backups. A hybrid mirror/stripe data organization results in improved performance. With the addition of the flashback module 34, a conventional RAID mirror is outperformed at a cost which approaches that of a stripe. Further performance enhancement is attained with replication on an otherwise dormant hot spare, and that hot spare can be used by a host-less appliance to generate a new baseline backup.
- The present invention can be implemented in various data processing systems such as enterprise systems, networks, SAN, NAS, and medium and small systems (e.g., in a personal computer, a write log is used and data is transferred to the user data set in the background). As such, in the description herein, the “host” and “host system” refer to any source of information that is in communication with the hybrid RAID system for transferring data to, and from, the hybrid RAID subsystem.
- The present invention has been described in considerable detail with reference to certain preferred versions thereof; however, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
Claims (34)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/433,152 US20060206665A1 (en) | 2002-09-20 | 2006-05-13 | Accelerated RAID with rewind capability |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/247,859 US7076606B2 (en) | 2002-09-20 | 2002-09-20 | Accelerated RAID with rewind capability |
US11/433,152 US20060206665A1 (en) | 2002-09-20 | 2006-05-13 | Accelerated RAID with rewind capability |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/247,859 Division US7076606B2 (en) | 2002-09-20 | 2002-09-20 | Accelerated RAID with rewind capability |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060206665A1 true US20060206665A1 (en) | 2006-09-14 |
Family
ID=31946445
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/247,859 Expired - Fee Related US7076606B2 (en) | 2002-09-20 | 2002-09-20 | Accelerated RAID with rewind capability |
US11/433,152 Abandoned US20060206665A1 (en) | 2002-09-20 | 2006-05-13 | Accelerated RAID with rewind capability |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/247,859 Expired - Fee Related US7076606B2 (en) | 2002-09-20 | 2002-09-20 | Accelerated RAID with rewind capability |
Country Status (3)
Country | Link |
---|---|
US (2) | US7076606B2 (en) |
EP (1) | EP1400899A3 (en) |
JP (1) | JP2004118837A (en) |
US11768763B2 (en) | 2020-07-08 | 2023-09-26 | Pure Storage, Inc. | Flash secure erase |
US11693596B2 (en) | 2020-08-13 | 2023-07-04 | Seagate Technology Llc | Pre-emptive storage strategies to reduce host command collisions |
CN112015340B (en) * | 2020-08-25 | 2024-05-03 | 实时侠智能控制技术有限公司 | Nonvolatile data storage structure and storage method |
US11513974B2 (en) | 2020-09-08 | 2022-11-29 | Pure Storage, Inc. | Using nonce to control erasure of data blocks of a multi-controller storage system |
US11681448B2 (en) | 2020-09-08 | 2023-06-20 | Pure Storage, Inc. | Multiple device IDs in a multi-fabric module storage system |
US12153818B2 (en) | 2020-09-24 | 2024-11-26 | Pure Storage, Inc. | Bucket versioning snapshots |
US11487455B2 (en) | 2020-12-17 | 2022-11-01 | Pure Storage, Inc. | Dynamic block allocation to optimize storage system performance |
US12067282B2 (en) | 2020-12-31 | 2024-08-20 | Pure Storage, Inc. | Write path selection |
US12093545B2 (en) | 2020-12-31 | 2024-09-17 | Pure Storage, Inc. | Storage system with selectable write modes |
US11847324B2 (en) | 2020-12-31 | 2023-12-19 | Pure Storage, Inc. | Optimizing resiliency groups for data regions of a storage system |
US12229437B2 (en) | 2020-12-31 | 2025-02-18 | Pure Storage, Inc. | Dynamic buffer for storage system |
US11614880B2 (en) | 2020-12-31 | 2023-03-28 | Pure Storage, Inc. | Storage system with selectable write paths |
US12061814B2 (en) | 2021-01-25 | 2024-08-13 | Pure Storage, Inc. | Using data similarity to select segments for garbage collection |
US11630593B2 (en) | 2021-03-12 | 2023-04-18 | Pure Storage, Inc. | Inline flash memory qualification in a storage system |
US12099742B2 (en) | 2021-03-15 | 2024-09-24 | Pure Storage, Inc. | Utilizing programming page size granularity to optimize data segment storage in a storage system |
US11507597B2 (en) | 2021-03-31 | 2022-11-22 | Pure Storage, Inc. | Data replication to meet a recovery point objective |
US11832410B2 (en) | 2021-09-14 | 2023-11-28 | Pure Storage, Inc. | Mechanical energy absorbing bracket apparatus |
US11994723B2 (en) | 2021-12-30 | 2024-05-28 | Pure Storage, Inc. | Ribbon cable alignment apparatus |
US12204788B1 (en) | 2023-07-21 | 2025-01-21 | Pure Storage, Inc. | Dynamic plane selection in data storage system |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5297258A (en) * | 1991-11-21 | 1994-03-22 | Ast Research, Inc. | Data logging for hard disk data storage systems |
US5392244A (en) * | 1993-08-19 | 1995-02-21 | Hewlett-Packard Company | Memory systems with data storage redundancy management |
US5504883A (en) * | 1993-02-01 | 1996-04-02 | Lsc, Inc. | Method and apparatus for insuring recovery of file control information for secondary storage systems |
US5649152A (en) * | 1994-10-13 | 1997-07-15 | Vinca Corporation | Method and system for providing a static snapshot of data stored on a mass storage system |
US5835953A (en) * | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating |
US5960451A (en) * | 1997-09-16 | 1999-09-28 | Hewlett-Packard Company | System and method for reporting available capacity in a data storage system with variable consumption characteristics |
US6098128A (en) * | 1995-09-18 | 2000-08-01 | Cyberstorage Systems Corporation | Universal storage management system |
US6148368A (en) * | 1997-07-31 | 2000-11-14 | Lsi Logic Corporation | Method for accelerating disk array write operations using segmented cache memory and data logging |
US6170063B1 (en) * | 1998-03-07 | 2001-01-02 | Hewlett-Packard Company | Method for performing atomic, concurrent read and write operations on multiple storage devices |
US6223252B1 (en) * | 1998-05-04 | 2001-04-24 | International Business Machines Corporation | Hot spare light weight mirror for raid system |
US6247149B1 (en) * | 1997-10-28 | 2001-06-12 | Novell, Inc. | Distributed diagnostic logging system |
US20020156971A1 (en) * | 2001-04-19 | 2002-10-24 | International Business Machines Corporation | Method, apparatus, and program for providing hybrid disk mirroring and striping |
US6567889B1 (en) * | 1997-12-19 | 2003-05-20 | Lsi Logic Corporation | Apparatus and method to provide virtual solid state disk in cache memory in a storage controller |
US20030200473A1 (en) * | 1990-06-01 | 2003-10-23 | Amphus, Inc. | System and method for activity or event based dynamic energy conserving server reconfiguration |
US6674447B1 (en) * | 1999-12-06 | 2004-01-06 | Oridus, Inc. | Method and apparatus for automatically recording snapshots of a computer screen during a computer session for later playback |
US6704838B2 (en) * | 1997-10-08 | 2004-03-09 | Seagate Technology Llc | Hybrid data storage and reconstruction system and method for a data storage device |
US6718434B2 (en) * | 2001-05-31 | 2004-04-06 | Hewlett-Packard Development Company, L.P. | Method and apparatus for assigning raid levels |
US20040139128A1 (en) * | 2002-07-15 | 2004-07-15 | Becker Gregory A. | System and method for backing up a computer system |
- 2002
  - 2002-09-20 US US10/247,859 patent/US7076606B2/en not_active Expired - Fee Related
- 2003
  - 2003-08-20 EP EP03255140A patent/EP1400899A3/en not_active Withdrawn
  - 2003-09-17 JP JP2003324785A patent/JP2004118837A/en active Pending
- 2006
  - 2006-05-13 US US11/433,152 patent/US20060206665A1/en not_active Abandoned
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030200473A1 (en) * | 1990-06-01 | 2003-10-23 | Amphus, Inc. | System and method for activity or event based dynamic energy conserving server reconfiguration |
US5297258A (en) * | 1991-11-21 | 1994-03-22 | Ast Research, Inc. | Data logging for hard disk data storage systems |
US5504883A (en) * | 1993-02-01 | 1996-04-02 | Lsc, Inc. | Method and apparatus for insuring recovery of file control information for secondary storage systems |
US5392244A (en) * | 1993-08-19 | 1995-02-21 | Hewlett-Packard Company | Memory systems with data storage redundancy management |
US5649152A (en) * | 1994-10-13 | 1997-07-15 | Vinca Corporation | Method and system for providing a static snapshot of data stored on a mass storage system |
US5835953A (en) * | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating |
US6073222A (en) * | 1994-10-13 | 2000-06-06 | Vinca Corporation | Using a virtual device to access data as it previously existed in a mass data storage system |
US6085298A (en) * | 1994-10-13 | 2000-07-04 | Vinca Corporation | Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer |
US6098128A (en) * | 1995-09-18 | 2000-08-01 | Cyberstorage Systems Corporation | Universal storage management system |
US6148368A (en) * | 1997-07-31 | 2000-11-14 | Lsi Logic Corporation | Method for accelerating disk array write operations using segmented cache memory and data logging |
US5960451A (en) * | 1997-09-16 | 1999-09-28 | Hewlett-Packard Company | System and method for reporting available capacity in a data storage system with variable consumption characteristics |
US6704838B2 (en) * | 1997-10-08 | 2004-03-09 | Seagate Technology Llc | Hybrid data storage and reconstruction system and method for a data storage device |
US6247149B1 (en) * | 1997-10-28 | 2001-06-12 | Novell, Inc. | Distributed diagnostic logging system |
US6567889B1 (en) * | 1997-12-19 | 2003-05-20 | Lsi Logic Corporation | Apparatus and method to provide virtual solid state disk in cache memory in a storage controller |
US6170063B1 (en) * | 1998-03-07 | 2001-01-02 | Hewlett-Packard Company | Method for performing atomic, concurrent read and write operations on multiple storage devices |
US6223252B1 (en) * | 1998-05-04 | 2001-04-24 | International Business Machines Corporation | Hot spare light weight mirror for raid system |
US6674447B1 (en) * | 1999-12-06 | 2004-01-06 | Oridus, Inc. | Method and apparatus for automatically recording snapshots of a computer screen during a computer session for later playback |
US20020156971A1 (en) * | 2001-04-19 | 2002-10-24 | International Business Machines Corporation | Method, apparatus, and program for providing hybrid disk mirroring and striping |
US6718434B2 (en) * | 2001-05-31 | 2004-04-06 | Hewlett-Packard Development Company, L.P. | Method and apparatus for assigning raid levels |
US20040139128A1 (en) * | 2002-07-15 | 2004-07-15 | Becker Gregory A. | System and method for backing up a computer system |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8555108B2 (en) | 2003-08-14 | 2013-10-08 | Compellent Technologies | Virtual disk drive system and method |
US10067712B2 (en) | 2003-08-14 | 2018-09-04 | Dell International L.L.C. | Virtual disk drive system and method |
US9489150B2 (en) | 2003-08-14 | 2016-11-08 | Dell International L.L.C. | System and method for transferring data between different raid data storage types for current data and replay data |
US9436390B2 (en) | 2003-08-14 | 2016-09-06 | Dell International L.L.C. | Virtual disk drive system and method |
US9047216B2 (en) | 2003-08-14 | 2015-06-02 | Compellent Technologies | Virtual disk drive system and method |
US9021295B2 (en) | 2003-08-14 | 2015-04-28 | Compellent Technologies | Virtual disk drive system and method |
US8560880B2 (en) | 2003-08-14 | 2013-10-15 | Compellent Technologies | Virtual disk drive system and method |
US9244625B2 (en) | 2006-05-24 | 2016-01-26 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US7886111B2 (en) | 2006-05-24 | 2011-02-08 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US8230193B2 (en) | 2006-05-24 | 2012-07-24 | Compellent Technologies | System and method for raid management, reallocation, and restriping |
US10296237B2 (en) | 2006-05-24 | 2019-05-21 | Dell International L.L.C. | System and method for raid management, reallocation, and restripping |
US8214684B2 (en) | 2007-05-04 | 2012-07-03 | International Business Machines Corporation | Incomplete write protection for disk array |
US20080276124A1 (en) * | 2007-05-04 | 2008-11-06 | Hetzler Steven R | Incomplete write protection for disk array |
US20090204758A1 (en) * | 2008-02-13 | 2009-08-13 | Dell Products, Lp | Systems and methods for asymmetric raid devices |
US20090303630A1 (en) * | 2008-06-10 | 2009-12-10 | H3C Technologies Co., Ltd. | Method and apparatus for hard disk power failure protection |
US8819478B1 (en) * | 2008-06-30 | 2014-08-26 | Emc Corporation | Auto-adapting multi-tier cache |
US20100037017A1 (en) * | 2008-08-08 | 2010-02-11 | Samsung Electronics Co., Ltd | Hybrid storage apparatus and logical block address assigning method |
US9619178B2 (en) * | 2008-08-08 | 2017-04-11 | Seagate Technology International | Hybrid storage apparatus and logical block address assigning method |
US20110225353A1 (en) * | 2008-10-30 | 2011-09-15 | Robert C Elliott | Redundant array of independent disks (raid) write cache sub-assembly |
US20100161883A1 (en) * | 2008-12-24 | 2010-06-24 | Kabushiki Kaisha Toshiba | Nonvolatile Semiconductor Memory Drive and Data Management Method of Nonvolatile Semiconductor Memory Drive |
US20120151133A1 (en) * | 2010-12-13 | 2012-06-14 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US9286000B2 (en) | 2010-12-13 | 2016-03-15 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US8949524B2 (en) | 2010-12-13 | 2015-02-03 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US8543760B2 (en) * | 2010-12-13 | 2013-09-24 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US9547452B2 (en) | 2010-12-13 | 2017-01-17 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US8458397B2 (en) * | 2010-12-13 | 2013-06-04 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US20120272005A1 (en) * | 2010-12-13 | 2012-10-25 | International Business Machines Corporation | Saving log data using a disk system as primary cache and a tape library as secondary cache |
US8856427B2 (en) | 2011-06-08 | 2014-10-07 | Panasonic Corporation | Memory controller and non-volatile storage device |
CN105068760A (en) * | 2013-10-18 | 2015-11-18 | 华为技术有限公司 | Data storage method, data storage apparatus and storage device |
US9996421B2 (en) | 2013-10-18 | 2018-06-12 | Huawei Technologies Co., Ltd. | Data storage method, data storage apparatus, and storage device |
Also Published As
Publication number | Publication date |
---|---|
US20040059869A1 (en) | 2004-03-25 |
EP1400899A2 (en) | 2004-03-24 |
JP2004118837A (en) | 2004-04-15 |
EP1400899A3 (en) | 2011-04-06 |
US7076606B2 (en) | 2006-07-11 |
Similar Documents
Publication | Title |
---|---|
US7076606B2 (en) | Accelerated RAID with rewind capability | |
US7055058B2 (en) | Self-healing log-structured RAID | |
US8904129B2 (en) | Method and apparatus for backup and restore in a dynamic chunk allocation storage system | |
US6523087B2 (en) | Utilizing parity caching and parity logging while closing the RAID5 write hole | |
US9448886B2 (en) | Flexible data storage system | |
AU710907B2 (en) | Expansion of the number of drives in a raid set while maintaining integrity of migrated data | |
US7054960B1 (en) | System and method for identifying block-level write operations to be transferred to a secondary site during replication | |
US7904679B2 (en) | Method and apparatus for managing backup data | |
US7975168B2 (en) | Storage system executing parallel correction write | |
US6067635A (en) | Preservation of data integrity in a raid storage device | |
US6766491B2 (en) | Parity mirroring between controllers in an active-active controller pair | |
US20030120864A1 (en) | High-performance log-structured RAID | |
US8078906B2 (en) | Grid storage system and method of operating thereof | |
US8356292B2 (en) | Method for updating control program of physical storage device in storage virtualization system and storage virtualization controller and system thereof | |
WO2024148865A1 (en) | Secure storage method, apparatus and device, and non-volatile readable storage medium | |
US6922752B2 (en) | Storage system using fast storage devices for storing redundant data | |
US7069382B2 (en) | Method of RAID 5 write hole prevention | |
US20030120869A1 (en) | Write-back disk cache management | |
US20070033356A1 (en) | System for Enabling Secure and Automatic Data Backup and Instant Recovery | |
US7293048B2 (en) | System for preserving logical object integrity within a remote mirror cache | |
CN112596673B (en) | Multiple-active multiple-control storage system with dual RAID data protection | |
US20100037023A1 (en) | System and method for transferring data between different raid data storage types for current data and replay data | |
US20100146206A1 (en) | Grid storage system and method of operating thereof | |
CN118779146A (en) | Data storage method, device, medium and product | |
US20240231707A9 (en) | Storage system and storage control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: QUANATUM CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ORSELY, TIM;REEL/FRAME:017895/0995
Effective date: 20020831
AS | Assignment |
Owner name: CREDIT SUISSE, NEW YORK
Free format text: SECURITY AGREEMENT;ASSIGNORS:QUANTUM CORPORATION;ADVANCED DIGITAL INFORMATION CORPORATION;CERTANCE HOLDINGS CORPORATION;AND OTHERS;REEL/FRAME:019605/0159
Effective date: 20070712
Owner name: CREDIT SUISSE,NEW YORK
Free format text: SECURITY AGREEMENT;ASSIGNORS:QUANTUM CORPORATION;ADVANCED DIGITAL INFORMATION CORPORATION;CERTANCE HOLDINGS CORPORATION;AND OTHERS;REEL/FRAME:019605/0159
Effective date: 20070712
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
AS | Assignment |
Owner name: QUANTUM INTERNATIONAL, INC., WASHINGTON
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007
Effective date: 20120329
Owner name: CERTANCE (US) HOLDINGS, INC., WASHINGTON
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007
Effective date: 20120329
Owner name: ADVANCED DIGITAL INFORMATION CORPORATION, WASHINGTON
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007
Effective date: 20120329
Owner name: QUANTUM CORPORATION, WASHINGTON
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007
Effective date: 20120329
Owner name: CERTANCE, LLC, WASHINGTON
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007
Effective date: 20120329
Owner name: QUANTUM INTERNATIONAL, INC., WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007 Effective date: 20120329 Owner name: CERTANCE (US) HOLDINGS, INC., WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007 Effective date: 20120329 Owner name: ADVANCED DIGITAL INFORMATION CORPORATION, WASHINGT Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007 Effective date: 20120329 Owner name: QUANTUM CORPORATION, WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007 Effective date: 20120329 Owner name: CERTANCE, LLC, WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007 Effective date: 20120329 Owner name: CERTANCE HOLDINGS CORPORATION, WASHINGTON Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE, CAYMAN ISLANDS BRANCH (FORMERLY KNOWN AS CREDIT SUISSE), AS COLLATERAL AGENT;REEL/FRAME:027968/0007 Effective date: 20120329 |