US9542321B2 - Slice-based random access buffer for data interleaving - Google Patents
Slice-based random access buffer for data interleaving
- Publication number
- US9542321B2 US9542321B2 US14/260,463 US201414260463A US9542321B2 US 9542321 B2 US9542321 B2 US 9542321B2 US 201414260463 A US201414260463 A US 201414260463A US 9542321 B2 US9542321 B2 US 9542321B2
- Authority
- US
- United States
- Prior art keywords
- sector
- slices
- sectors
- data
- random access
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/084—Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0608—Saving storage space on storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0613—Improving I/O performance in relation to throughput
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G06F2003/0691—
-
- G06F2003/0692—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1041—Resource optimization
- G06F2212/1044—Space efficiency improvement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/15—Use in a specific computing environment
- G06F2212/152—Virtualized environment, e.g. logically partitioned system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/16—General purpose computing application
- G06F2212/165—Mainframe system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/21—Employing a record carrier using a specific recording technology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/22—Employing cache memory using specific memory technology
- G06F2212/222—Non-volatile memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/22—Employing cache memory using specific memory technology
- G06F2212/224—Disk storage
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/25—Using a specific main memory architecture
- G06F2212/251—Local memory within processor subsystem
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/26—Using a specific storage system architecture
- G06F2212/261—Storage comprising a plurality of storage devices
- G06F2212/262—Storage comprising a plurality of storage devices configured as RAID
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/28—Using a specific disk cache architecture
- G06F2212/281—Single cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/28—Using a specific disk cache architecture
- G06F2212/282—Partitioned cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/31—Providing disk cache in a specific location of a storage system
- G06F2212/312—In storage controller
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/46—Caching storage objects of specific type in disk cache
- G06F2212/462—Track or segment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/6042—Allocation of cache space to multiple users or processors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/70—Details relating to dynamic memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7203—Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
Definitions
- the disclosure relates to interleaving data within a communication channel such as, but not limited to, a read or write channel of a hard disk drive.
- each super-sector group may contain a different number of logical sectors. For example, a super-sector may include from 1 to 16 logical sectors.
- Maintaining a continuous data stream can be problematic because every media sector in a super-sector group includes a portion of each of the logical sectors. Until all media sectors in a super-sector are output, the next super-sector's logical sectors cannot be input, resulting in a gap between the super-sectors.
- One method of accounting for the discontinuity is to use a ping pong structure with dual super-sector buffers.
- a gapless output can be created using dual super-sector buffers, but memory size must be doubled (e.g. from 16 sectors to 32 sectors in size), resulting in increased silicon area cost.
- a system includes a slice divider, a random access buffer, and a label buffer.
- the slice divider is configured to receive incoming data sectors of a super-sector and further configured to divide the incoming data sectors into slices.
- the random access buffer is then configured to store the slices of the incoming data sectors in free memory slots, where a free memory slot is identified by a status flag associated with a logical address of the free memory slot.
- the label buffer is configured to store labels associated with the slices of the incoming data sectors in a sequence based upon an interleaving scheme.
- a processor in communication with the random access buffer and the label buffer is configured to read out media sectors corresponding to the super-sector, where the media sectors include interleaved data slices read out from the memory slots of the random access buffer in order of the sequence of labels stored by the label buffer.
- the corresponding memory slots are freed up for incoming slices of the next super-sector. Accordingly, a continuous (gapless) output stream can be created because the random access buffer is refilled with slices of the next super-sector as soon as current super-sector slices are read out (i.e. the buffers are updated on a slice-by-slice basis).
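To make the summary above concrete, the following minimal Python sketch models the random access buffer as a pool of labeled slots with status flags, together with a label buffer that records which slot each output position must be read from. The class and method names (SlotBuffer, store_slice, read_slice) and the dictionary-based label buffer are illustrative assumptions, not structures disclosed in the patent.

```python
class SlotBuffer:
    """Toy model of the slice-based random access buffer (illustrative only)."""

    def __init__(self, num_slots):
        self.data = [None] * num_slots   # one slice per slot
        self.free = [True] * num_slots   # status flag per slot (True = free)
        self.label_buffer = {}           # output position -> slot label

    def store_slice(self, slice_data, output_position):
        """Write an incoming slice into the first free slot (searched by label order).

        output_position is where the slice belongs in the interleaved output
        stream; it is supplied by the interleaving scheme, not by this class.
        """
        for label, is_free in enumerate(self.free):
            if is_free:
                self.data[label] = slice_data
                self.free[label] = False
                self.label_buffer[output_position] = label
                return label
        raise RuntimeError("no free slot available")

    def read_slice(self, output_position):
        """Read the slice for an output position and immediately free its slot."""
        label = self.label_buffer.pop(output_position)
        slice_data = self.data[label]
        self.free[label] = True          # slot can now take a slice of the next super-sector
        return slice_data
```

Reading output positions in ascending order yields the interleaved media-sector stream, and each read frees a slot for an incoming slice of the next super-sector, which is what enables the continuous (gapless) output described above.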
- FIG. 1A is a block diagram illustrating a storage device, in accordance with an embodiment of the disclosure.
- FIG. 1B is a block diagram illustrating spreader input and output data streams with gaps between super-sectors.
- FIG. 1C is a block diagram illustrating spreader input and output data streams without gaps between super-sectors.
- FIG. 2A is a block diagram illustrating a system for interleaving data, in accordance with an embodiment of the disclosure.
- FIG. 2B is a block diagram illustrating a plurality of independently accessible slots of a random access buffer, in accordance with an embodiment of the disclosure.
- FIG. 2C is a block diagram illustrating a plurality of independently accessible slots of a random access buffer, wherein the memory slots are being read out and refilled with new data slices, in accordance with an embodiment of the disclosure.
- FIG. 3 is a block diagram illustrating a storage system including a cluster of storage devices, in accordance with an embodiment of the disclosure.
- FIG. 4 is a flow diagram illustrating a method of interleaving data, in accordance with an embodiment of the disclosure.
- FIG. 5 is a flow diagram illustrating a method of de-interleaving data, in accordance with an embodiment of the disclosure.
- FIGS. 1A through 5 illustrate various embodiments of a system and method for interleaving data utilizing a slice-based random access buffer to accommodate a dynamic range of super-sector sizes, sector lengths, and slice sizes.
- using the slice-based random access buffer and associated system architecture described herein, a totally gapless output can be created with only one super-sector buffer (as opposed to dual alternating super-sector buffers). This is possible with only a slightly larger super-sector buffer and a label buffer for recording the slot of the random access buffer that is being written to.
- the footprint and material cost remains less than the dual-buffer architecture.
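The area argument can be illustrated with a rough back-of-the-envelope calculation. All of the numbers below (sector size, label width, slice count, staging-buffer size) are assumptions chosen only to show the shape of the comparison; they are not taken from the patent.

```python
# Hypothetical sizing comparison (all figures are assumptions).
SECTOR_BYTES = 4096        # assumed sector size
SECTORS_PER_SUPER = 16     # assumed maximum super-sector size
SLICES_PER_SECTOR = 15     # assumed slicing, giving 16 * 15 = 240 slots
LABEL_BYTES = 2            # assumed storage per slot label

dual_buffer = 2 * SECTORS_PER_SUPER * SECTOR_BYTES          # ping-pong architecture
label_buffer = SECTORS_PER_SUPER * SLICES_PER_SECTOR * LABEL_BYTES
slice_buffer = 2 * (SECTOR_BYTES // SLICES_PER_SECTOR + 1)  # small 2-slice staging buffer
slice_based = SECTORS_PER_SUPER * SECTOR_BYTES + label_buffer + slice_buffer

print(dual_buffer, slice_based)   # 131072 vs 66564 bytes under these assumptions
```

Even with the label buffer and a small slice staging buffer added, the single super-sector buffer stays well under the doubled footprint of the ping-pong approach.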
- a data storage device 100 is illustrated in accordance with an embodiment of the disclosure.
- the storage device 100 includes a storage controller 102 configured to store data sectors to a storage medium 110 (e.g. a magnetic platter) via a write path including an encoder 104 , a spreader 106 , and a writer 108 .
- the data sectors are encoded by the encoder block 104 (e.g. RLL encoder and/or LDPC encoder) and then transferred to the spreader 106 .
- the logical sectors feed into the sector spreader 106 for interleaving; after collecting enough logical sectors, media sectors corresponding to the input logical sectors of the super-sector group are output according to an interleaving scheme.
- the spreader 106 is configured to interleave the encoded data sectors and further configured to transfer the media sectors, which include the interleaved memory slices of the logical data sectors, to the writer 108 (e.g. a magnetic/optical recording head).
- the writer 108 is then configured to record the media sectors on the storage medium 110 .
- a reader 112 (e.g. magnetic/optical read head) is configured to read the interleaved data from the storage medium 110 and is further configured to transfer the interleaved data through a read channel.
- a despreader 114 is configured to de-interleave the media sectors and output logical data sectors, which are then sent through a Y-buffer 116 to a decoder block 118 (e.g. LDPC decoder).
- the write system requires a continuous data stream to the media in a track, which would be problematic in a typical single-buffer spreader architecture.
- for 16x-10x-12x-16x super-sector groups, with each sector being divided into 15 slices, the typical output is shown in FIG. 1B .
- in the notation SxLy, x identifies the super-sector and y the logical sector.
- the input and output streams illustrated in FIG. 1C are continuous (gapless) streams, as desired.
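The excerpt does not disclose the exact permutation used by the spreader, so the sketch below uses a simple cyclic scheme purely to illustrate how an interleaved, media-sector-ordered stream is formed from the SxLy slices (15 slices per logical sector, as in the FIG. 1B/1C example); interleave_positions is a hypothetical helper, not the patent's scheme.

```python
def interleave_positions(num_sectors, slices_per_sector=15):
    """Map output position p -> (logical_sector, slice_index) under a simple
    cyclic scheme (an assumption; not the patent's actual permutation)."""
    return [(p % num_sectors, p // num_sectors)
            for p in range(num_sectors * slices_per_sector)]

# A 4-sector (4x) super-sector group with 15 slices per logical sector.
stream = interleave_positions(4)
media_sector_0 = stream[0:15]           # media sector 0 = first 15 output positions
print(media_sector_0[:6])               # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1)]
```

Under this toy scheme every media sector draws slices from all four logical sectors, which is the property that forces the buffer to hold slices of every logical sector before the first media sector can be written out.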
- one method of generating a continuous output stream is to utilize at least two alternating super-sector buffers (sometimes referred to as “ping pong” buffer architecture). Using the dual-buffer architecture, a gapless output could be created, but memory size would be doubled (e.g. from 16 sectors wide to 32 sectors wide), thereby significantly increasing silicon area cost.
- An interleaver system 200 illustrated in FIG. 2A is another approach to achieving continuous (gapless) output with less cost than the dual-buffer architecture that is commonly used.
- one large random access buffer is internally divided into many slots which have unique labels. These slots are distinguished by buffer address and can be accessed out-of-order. Additionally, each slot has a status flag to indicate slot availability. The slots and respective statuses may be searched by label number order.
- once a slot's slice has been read out, the slot's status flag is reset to free so that it can receive the next incoming slice.
- as the random access buffer is read out, it is also updated at the same time (i.e. slots are emptied and refilled in parallel) so that the input and output streams are continuous.
- the system 200 includes a slice divider 202 , a random access buffer 204 , and a label buffer 212 .
- the slice divider 202 is configured to receive incoming data sectors of a super-sector and further configured to divide the incoming data sectors into slices.
- the random access buffer 204 is then configured to store the slices of the incoming data sectors in free memory slots.
- an address calculator 206 is configured to determine the logical block address of the slot for storing an incoming slice based upon a respective slot status flag 208 .
- the label buffer 212 is configured to store labels associated with the slices of the incoming data sectors in a sequence based upon an interleaving scheme, which may be programmed or embedded in interleaving logic 210 .
- the system 200 further includes a slice counter 214 configured to determine the beginning and end points of sectors or super-sector groups by counting the input/output slices.
- the slots of the random access buffer 204 are written and read by label and are totally independent.
- the slots are all the same size, for example, using the largest possible slice size if the sectors include differently sized slices.
- the incoming slices of a logical sector can be distributed to any free slot, and the slices are read out according to the output sequence stored in the label buffer 212 .
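A small sketch of the slice-division step under an assumed 4 KiB sector: splitting a sector into a fixed number of slices can leave the last slice shorter, which is why the slots are all sized for the largest possible slice. The function name and the sizes are illustrative assumptions.

```python
def divide_into_slices(sector, num_slices=15):
    """Split one sector into num_slices pieces (slice-divider sketch; illustrative).

    With unequal division the last slice is shorter than the rest, so the
    random access buffer sizes every slot for the largest possible slice.
    """
    slice_len = -(-len(sector) // num_slices)     # ceiling division = largest slice size
    return [sector[i:i + slice_len] for i in range(0, len(sector), slice_len)]

slices = divide_into_slices(bytes(4096))          # assumed 4 KiB sector
print(len(slices), len(slices[0]), len(slices[-1]))   # 15 274 260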
- the buffer updating happens all the time, as illustrated in FIG. 2C with buffer snapshots of incoming super-sector group S 3 and outgoing super-sector S 1 .
- media sectors corresponding to the current super-sector group are read out (e.g. by a processor that is in communication with the random access buffer and the label buffer), where the media sectors include interleaved data slices read out from the memory slots of the random access buffer 204 in order of the sequence of labels stored by the label buffer 212 .
- collecting “enough” sectors does not necessarily mean collecting all logical or media sectors of a super-sector group.
- media output can be started prior to receiving all logical sectors, and doing so can advantageously reduce the required memory size.
- “enough” logical or media sectors should be understood as generally referring to some predetermined, system-defined, or user-specified threshold number of sectors.
- the media sector output can also be delayed as needed (e.g. by a predetermined time interval) to ensure gapless output in the face of variably sized super-sectors without any additional cost for overspeed capability. That is, overspeed operation, such as being able to read/write the label or data memories at two times the read/write rate, could be used to eliminate the gaps but only at the cost of additional area and power.
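The start-up threshold and the resulting gapless behavior can be illustrated with a toy timing model in which one slice arrives per cycle and, once output has started, one slice is also read out per cycle. The threshold value and the cycle-accurate pacing are assumptions for illustration only.

```python
def simulate_startup(total_input_slices, start_threshold_slices):
    """Toy pacing model: hold off output until start_threshold_slices are
    buffered, then emit one slice per cycle while input continues.
    Illustrative only; the threshold and timing are assumptions."""
    buffered, started, emit_cycles = 0, False, []
    for cycle in range(total_input_slices):
        buffered += 1                        # one incoming slice this cycle
        if not started and buffered >= start_threshold_slices:
            started = True
        if started:
            buffered -= 1                    # one slice read out this cycle
            emit_cycles.append(cycle)
    # Once output starts, it never pauses: emitted cycles are consecutive.
    assert all(b - a == 1 for a, b in zip(emit_cycles, emit_cycles[1:]))
    return emit_cycles

cycles = simulate_startup(64 * 15, start_threshold_slices=16 * 15)
print(cycles[0])   # 239 -- output is suspended until 16 sectors' worth of slices arrive
```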
- the corresponding memory slots of the random access buffer 204 are freed up for incoming slices of the next super-sector.
- the slices in the buffer include four media sectors of super-sector S 1 and twelve media sectors of super-sector S 2 .
- the random access buffer 204 is configured to suspend outputting the first media sector (e.g. S 1 L 0 ) of a super-sector group (e.g. S 1 ) until all slots are full.
- the random access buffer output may be suspended until 16 logical sectors are stored in the random access buffer 204 , even if the first super-sector (e.g. S 1 ) is only 4x, to make sure no gaps result between super-sectors once super-sector output has commenced.
- as the media sectors (which include the interleaved slices of super-sector S 1 ) are read out, the corresponding slots are freed for the slices of super-sector S 3 .
- as shown, the incoming slices may be written to nonadjacent slots of the random access buffer 204 .
- when data slice S 1 L 0 9 (shown at T 0 ) is read out, the respective status flag is reset to free, and the slot is refilled at T 1 with the incoming slice S 3 L 0 0 of the next super-sector group (e.g. super-sector S 3 ).
- the random access buffer 204 keeps updating on a slice-by-slice basis.
- the slot that is freed when slice S 1 L 1 9 is read out is immediately refilled with the next slice S 3 L 0 1 . Accordingly, a continuous (gapless) output stream is created because the random access buffer 204 is refilled with slices of the next super-sector as soon as current super-sector slices are read out.
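The slice-by-slice refilling shown in FIG. 2C can be mimicked with a few lines of Python: each time a slice of the outgoing super-sector is read from a slot, that same slot is immediately marked free and rewritten with the next incoming slice of the following super-sector group. The slot ordering and slice names below are simplified placeholders, not the patent's notation.

```python
# Simplified snapshot: a buffer holding the slices of a 4x super-sector S1,
# refilled slice by slice with the slices of incoming super-sector S3.
slots = [f"S1L{s}_{k}" for s in range(4) for k in range(15)]
free = [False] * len(slots)
incoming = iter(f"S3L{s}_{k}" for s in range(4) for k in range(15))
outgoing = []

for label in range(len(slots)):        # read out in label-buffer order (simplified here)
    outgoing.append(slots[label])      # a slice such as S1L0_9 is read out ...
    free[label] = True                 # ... its slot's status flag is reset to free ...
    slots[label] = next(incoming)      # ... and the slot is refilled, e.g. with S3L0_0
    free[label] = False

print(outgoing[0], slots[0])           # S1L0_0 S3L0_0 -- output and refill proceed in lockstep
```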
- the system 200 may include a slice buffer that is approximately 1/240th the size of the random access buffer 204 . It is further contemplated that with reduced latency, no additional slice buffer would be required. Once slices are output from slots of the random access buffer 204 , the slots could be immediately filled in with incoming slices of the next sector. The write and read could happen at the same memory slot. However, a small (e.g. 1-slice or 2-slice sized) buffer will allow for simpler control logic. Accordingly, some embodiments of system 200 may further include a small slice buffer coupled to the random access buffer 204 .
- the interleaver system 200 may be incorporated into a transmitting portion of a communication channel.
- the interleaver system 200 is included in the spreader 106 of the write path of a data storage device 100 ( FIG. 1 ). Accordingly, some of the functions, operations, or steps described above may be executed by the storage controller 102 or any other processor or control unit included in or communicatively coupled with the data storage device 100 .
- a similarly structured de-interleaver system may be incorporated into a receiving end of the communication channel or network, such as in the despreader 114 of the read path of the data storage device 100 .
- a de-interleaver system includes components similar to those of the interleaver system 200 with the following differences.
- the slice divider 202 is instead configured to receive incoming media sectors and further configured to divide the incoming media sectors into slices.
- the random access buffer 204 is instead configured to store the slices of the incoming media sectors in the free memory slots, and the label buffer configured to store labels associated with the slices of the incoming media sectors in a sequence based upon a de-interleaving scheme.
- the de-interleaving scheme is programmed or embedded in de-interleaving logic (in place of the interleaving logic 210 ) and is based on the interleaving scheme of the interleaver system 200 (e.g. the reverse of the interleaving scheme).
- logical data sectors (i.e. the de-interleaved memory slices) belonging to a super-sector group are read out from the memory slots of the random access buffer 204 in order of the sequence of labels stored by the label buffer 212 .
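A sketch of the de-interleaver side, using the same kind of label buffer but filled according to the reverse mapping: incoming media-stream slices land in free slots in arrival order, while the label buffer records, for each logical output position, which slot holds it. The function output_position_of and the cyclic permutation in the usage example are assumptions standing in for the actual scheme, which is not disclosed in this excerpt.

```python
def deinterleave_with_labels(media_stream, output_position_of):
    """De-interleaver sketch (illustrative): slices of the incoming media
    stream are written to free slots, and the label buffer maps each logical
    output position to the slot that holds it (the reverse of interleaving)."""
    slots = [None] * len(media_stream)           # random access buffer
    label_buffer = [None] * len(media_stream)    # logical position -> slot label
    for arrival, slc in enumerate(media_stream):
        label = arrival                          # first free slot (all slots free here)
        slots[label] = slc
        label_buffer[output_position_of(arrival)] = label
    # Reading out in label-buffer order yields the logical (de-interleaved) order.
    return [slots[label] for label in label_buffer]

# Usage: undo the toy cyclic interleaving used earlier for a 4x super-sector.
n, k = 4, 15
interleaved = [f"L{p % n}_{p // n}" for p in range(n * k)]
logical = deinterleave_with_labels(interleaved, lambda p: (p % n) * k + p // n)
assert logical == [f"L{s}_{i}" for s in range(n) for i in range(k)]
```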
- FIG. 3 illustrates an embodiment of a data storage system 300 including a RAID configuration where the N devices making up a storage cluster 308 include one or more data storage devices 100 having a spreader block 106 (interleaver system 200 ) and/or despreader block 114 (de-interleaver system) with architectures as described above.
- the data storage system 300 further includes one or more nodes 302 or servers, each including a respective controller 304 .
- the controller 304 in each of the one or more nodes 302 may include a RAID-on-Chip (ROC) controller, a processor, or any other controller configured to access some or all of the N devices via a network 306 , such as one or more switches or expanders, directly or indirectly linking each controller 304 to the one or more storage devices 100 of the storage cluster 308 .
- FIGS. 4 and 5 illustrate a method 400 of interleaving data and a method 500 of de-interleaving data, respectively.
- computer-readable program instructions implementing the steps of method 400 or method 500 may be executed by at least one processor from a communicatively coupled carrier medium, or the steps may be carried out via any other hardware, firmware, or software such as, but not limited to, dedicated electronic circuitry, a programmable logic device, an application-specific integrated circuit (ASIC), a controller/microcontroller, a computing system and/or processor, or any combination thereof.
- methods 400 and 500 are not restricted to the embodiments of the interleaver system 200 and similarly structured de-interleaver system, which are described above, and can be alternatively manifested by any combination of systems and devices configured to carry out the following steps.
- a method 400 of interleaving data is illustrated according to an embodiment of the disclosure.
- incoming logical data sectors belonging to a super-sector are received, and at step 404 , the incoming sectors are divided into slices for interleaving.
- the slices are stored in free (possibly non-adjacent) memory slots of a random access buffer, and corresponding labels are stored in a label buffer in a sequence based upon an interleaving scheme.
- the method 400 proceeds to step 408 .
- media sectors including interleaved data slices are read out from the random access buffer in order of the sequence of corresponding labels stored by the label buffer (i.e. according to the interleaving scheme).
- the corresponding slot status flags are reset to free so that incoming slices of the next super-sector group can be allocated to the emptied slots.
- the label buffer is continually updated as well.
- a method 500 of de-interleaving data is illustrated according to an embodiment of the disclosure. As can be seen, the method 500 is based on similar principles as method 400 .
- incoming media sectors belonging to a super-sector are received, and at step 504 , the incoming media sectors are divided into slices for de-interleaving.
- the slices are stored in free (possibly non-adjacent) memory slots of a random access buffer, and corresponding labels are stored in a label buffer in a sequence based upon a de-interleaving scheme, which is at least partially based upon the interleaving scheme of method 400 (e.g. a reverse of the interleaving scheme).
- at step 508 , logical data sectors including de-interleaved (i.e. logically ordered) data slices are read out from the random access buffer in order of the sequence of corresponding labels stored by the label buffer (i.e. according to the de-interleaving scheme). As the logical sectors are read out, the corresponding slot status flags are reset to free so that incoming slices of the next super-sector group can be allocated to the emptied slots.
- the label buffer is continually updated as well.
- the data interleaving or de-interleaving architectures and associated methods described in the embodiments provided above are suitable for a variety of applications due to characteristics including, but not limited to: the capability of achieving a zero-stitch effect; the ability to dynamically accommodate any number of sectors in a super-sector group; the ability to accommodate any number of slices in a sector; and the ability to accommodate any sector size (e.g. 4k, 8k, etc.) and differently sized slices in a sector.
- because the label buffer is completely independent from the slicing, different slicing schemes can be implemented for different media zones, which may have different RLL algorithms.
- the architecture is very suitable for cold storage systems with single write, multi-pass read. Using the architecture in the write path increases bandwidth. Because the read path does not have the same bandwidth requirement (i.e. gaps can be tolerated), a simple super-sector buffer can be used in the despreader block, thus resulting in significant area and power savings.
- various functions, operations, or steps described throughout the present disclosure may be carried out by any combination of hardware, software, or firmware.
- various steps or functions are carried out by one or more of the following: electronic circuitry, logic gates, multiplexers, a programmable logic device, an application-specific integrated circuit (ASIC), a controller/microcontroller, or a computing system.
- a computing system may include, but is not limited to, a personal computing system, mainframe computing system, workstation, image computer, parallel processor, or any other device known in the art.
- the terms “controller” and “computing system” are broadly defined to encompass any device having one or more processors, which execute instructions from a carrier medium.
- the carrier medium may be a transmission medium, such as, but not limited to, a wire, cable, or wireless transmission link.
- the carrier medium may also include a non-transitory signal bearing medium or storage medium such as, but not limited to, a read-only memory, a random access memory, a magnetic or optical disk, a solid-state or flash memory device, or a magnetic tape.
- any embodiment of the disclosure manifested above as a system or method may include at least a portion of any other embodiment described herein.
- Those having skill in the art will appreciate that there are various embodiments by which systems and methods described herein can be implemented, and that the implementation will vary with the context in which an embodiment of the disclosure is deployed.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Error Detection And Correction (AREA)
Abstract
Description
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/260,463 US9542321B2 (en) | 2014-04-24 | 2014-04-24 | Slice-based random access buffer for data interleaving |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/260,463 US9542321B2 (en) | 2014-04-24 | 2014-04-24 | Slice-based random access buffer for data interleaving |
Publications (2)
Publication Number | Publication Date |
---|---|
US20160034393A1 US20160034393A1 (en) | 2016-02-04 |
US9542321B2 true US9542321B2 (en) | 2017-01-10 |
Family
ID=55180172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/260,463 Expired - Fee Related US9542321B2 (en) | 2014-04-24 | 2014-04-24 | Slice-based random access buffer for data interleaving |
Country Status (1)
Country | Link |
---|---|
US (1) | US9542321B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI718858B (en) * | 2020-02-03 | 2021-02-11 | 慧榮科技股份有限公司 | Data storage device and non-volatile memory control method |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9853920B2 (en) * | 2015-08-24 | 2017-12-26 | Cisco Technology, Inc. | Systems and methods for performing packet reorder processing |
TWI617138B (en) * | 2016-01-26 | 2018-03-01 | 晨星半導體股份有限公司 | Time de-interleaving circuit and method thereof |
US10340438B2 (en) * | 2017-11-28 | 2019-07-02 | International Business Machines Corporation | Laser annealing qubits for optimized frequency allocation |
US10706086B1 (en) * | 2018-03-12 | 2020-07-07 | Amazon Technologies, Inc. | Collaborative-filtering based user simulation for dialog systems |
CN115061640B (en) * | 2022-08-11 | 2022-12-02 | 深圳云豹智能有限公司 | Fault-tolerant distributed storage system, method, electronic equipment and medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070041050A1 (en) * | 2005-08-17 | 2007-02-22 | Bing-Yu Wang | Memory management method and system |
US20090216942A1 (en) * | 2008-02-23 | 2009-08-27 | Broadcom Corporation | Efficient memory management for hard disk drive (hdd) read channel |
US20090300234A1 (en) * | 2008-05-28 | 2009-12-03 | Fujitsu Limited | Buffer control method and storage apparatus |
US20090319749A1 (en) * | 2007-04-20 | 2009-12-24 | Fujitsu Limited | Program, apparatus and method for storage management |
US8151035B2 (en) | 2004-12-16 | 2012-04-03 | Sandisk Technologies Inc. | Non-volatile memory and method with multi-stream updating |
US20120284482A1 (en) * | 2008-02-04 | 2012-11-08 | Craig Sullender | Label Reuse Method and System for Connected Component Labeling |
US20140258591A1 (en) * | 2013-03-07 | 2014-09-11 | Kabushiki Kaisha Toshiba | Data storage and retrieval in a hybrid drive |
US20140281146A1 (en) * | 2013-03-15 | 2014-09-18 | Western Digital Technologies, Inc. | Compression and formatting of data for data storage systems |
US20150046678A1 (en) * | 2013-08-08 | 2015-02-12 | Linear Algebra Technologies Limited | Apparatus, systems, and methods for providing configurable computational imaging pipeline |
- 2014-04-24: US US14/260,463 patent/US9542321B2/en not_active Expired - Fee Related
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8151035B2 (en) | 2004-12-16 | 2012-04-03 | Sandisk Technologies Inc. | Non-volatile memory and method with multi-stream updating |
US20070041050A1 (en) * | 2005-08-17 | 2007-02-22 | Bing-Yu Wang | Memory management method and system |
US20090319749A1 (en) * | 2007-04-20 | 2009-12-24 | Fujitsu Limited | Program, apparatus and method for storage management |
US20120284482A1 (en) * | 2008-02-04 | 2012-11-08 | Craig Sullender | Label Reuse Method and System for Connected Component Labeling |
US20090216942A1 (en) * | 2008-02-23 | 2009-08-27 | Broadcom Corporation | Efficient memory management for hard disk drive (hdd) read channel |
US20090300234A1 (en) * | 2008-05-28 | 2009-12-03 | Fujitsu Limited | Buffer control method and storage apparatus |
US20140258591A1 (en) * | 2013-03-07 | 2014-09-11 | Kabushiki Kaisha Toshiba | Data storage and retrieval in a hybrid drive |
US20140281146A1 (en) * | 2013-03-15 | 2014-09-18 | Western Digital Technologies, Inc. | Compression and formatting of data for data storage systems |
US20150046678A1 (en) * | 2013-08-08 | 2015-02-12 | Linear Algebra Technologies Limited | Apparatus, systems, and methods for providing configurable computational imaging pipeline |
Non-Patent Citations (8)
Title |
---|
Chaichanavong U.S. Appl. No. 60/830,045 Reduced-complexity decoding algorithm for non-binary LDPC codes, filed Jul. 2006. |
Chen et al "Normalized Switch Schemes for Low Density Parity Check Codes," IMACS Multiconference on Computational Engineering in Systems Applications, Oct. 2006. |
Darabiha et al "Multi-Gbit/sec low density parity check decoders with reduced interconnect complexity," in Proc. IEEE International Symposium on Circuits and Systems 2005. |
Gunnam "Next Generation Iterative LDPC Solutions for Magnetic Recording Storage", Invited presentation at 42nd Asilomar Conference on Signals, Systems and Computers, Oct. 28. |
Malema et al., "Interconnection network for structured low-density parity-check decoders," Asia-Pacific Conference on Communications, Oct. 3-5, 2005, pp. 537-540. |
Matloff, "Memory Interleaving" University of California at Davis, Nov. 2003. |
Tarabel et al., "Further results on mapping functions," Information Theory Workshop, 2005 IEEE, pp. 5, Aug. 29-Sep. 1, 2005. |
Vancourt et al "Application-Specific Memory Interleaving Enables High Performance in FPGA-based Grid Computations" FCCM 2006. |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI718858B (en) * | 2020-02-03 | 2021-02-11 | 慧榮科技股份有限公司 | Data storage device and non-volatile memory control method |
US11366775B2 (en) | 2020-02-03 | 2022-06-21 | Silicon Motion, Inc. | Data storage device with an exclusive channel for flag checking of read data, and non-volatile memory control method |
US11526454B2 (en) | 2020-02-03 | 2022-12-13 | Silicon Motion, Inc. | Data storage device with an exclusive channel for flag checking of read data, and non-volatile memory control method |
US11550740B2 (en) | 2020-02-03 | 2023-01-10 | Silicon Motion, Inc. | Data storage device with an exclusive channel for flag checking of read data, and non-volatile memory control method |
Also Published As
Publication number | Publication date |
---|---|
US20160034393A1 (en) | 2016-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9542321B2 (en) | Slice-based random access buffer for data interleaving | |
US10496483B2 (en) | Method and apparatus for rebuilding redundant array of independent disks | |
US9933973B2 (en) | Systems and methods for data organization in storage systems using large erasure codes | |
US11209982B2 (en) | Controlling operation of a data storage system | |
US8996799B2 (en) | Content storage system with modified cache write policies | |
US9569109B2 (en) | Nonvolatile memory interface for metadata shadowing | |
US10170158B2 (en) | Variable scoping capability for physical tape layout diagnostic structure of tape storage device | |
US10600443B2 (en) | Sequential data storage with rewrite using dead-track detection | |
US20190258413A1 (en) | Logical format utilizing lateral encoding of data for storage on magnetic tape | |
GB2451549A (en) | Buffering data packet segments in a data buffer addressed using pointers stored in a pointer memory | |
CN111381775A (en) | System and method for quality of service assurance for multi-stream scenarios in hard disk drives | |
US20090216942A1 (en) | Efficient memory management for hard disk drive (hdd) read channel | |
TW201407968A (en) | Data processing system with retained sector reprocessing | |
US20140244926A1 (en) | Dedicated Memory Structure for Sector Spreading Interleaving | |
TW201407464A (en) | Data processing system with out of order transfer | |
US9208083B2 (en) | System and method to interleave memory | |
US20180173440A1 (en) | Low latency lateral decoding of data for retrieval from magnetic tape | |
US9564925B1 (en) | Pipelined architecture for iterative decoding of product codes | |
US10691376B2 (en) | Prioritized sourcing for efficient rewriting | |
US10229072B2 (en) | System and method for despreader memory management | |
US20080201524A1 (en) | System and method for increasing video server storage bandwidth | |
KR101547858B1 (en) | Systems and methods for symbol re-grouping decoding processing | |
US20130275717A1 (en) | Multi-Tier Data Processing | |
US9092368B2 (en) | Systems and methods for modified quality based priority scheduling during iterative data processing | |
US9058842B2 (en) | Systems and methods for gate aware iterative data processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LSI CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WU, ZHIWEI;LI, ZHIBIN;WORRELL, KURT J.;AND OTHERS;SIGNING DATES FROM 20140422 TO 20140424;REEL/FRAME:032746/0093 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LSI CORPORATION;REEL/FRAME:035390/0388 Effective date: 20140814 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001 Effective date: 20160201 Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001 Effective date: 20160201 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001 Effective date: 20170119 Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001 Effective date: 20170119 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047422/0464 Effective date: 20180509 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE PREVIOUSLY RECORDED AT REEL: 047422 FRAME: 0464. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:048883/0702 Effective date: 20180905 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: BROADCOM INTERNATIONAL PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED;REEL/FRAME:053771/0901 Effective date: 20200826 |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20210110 |