US20020141244A1 - Parallel erase operations in memory systems - Google Patents
- Publication number
- US20020141244A1
- Authority
- US
- United States
- Prior art keywords
- memory
- flash
- written
- entries
- flash memory
- Prior art date
- Legal status: Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
- G11C16/14—Circuits for erasing electrically, e.g. erase voltage switching circuits
- G11C16/16—Circuits for erasing electrically, e.g. erase voltage switching circuits for erasing blocks, e.g. arrays, words, groups
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/20—Employing a main memory using a specific memory technology
- G06F2212/202—Non-volatile memory
- G06F2212/2022—Flash memory
Definitions
- flash erase operations are significantly slower than flash read or write operations. Further, only one erase operation per flash memory chip can be active at a time.
- cache memories to speed up the performance of computer systems having slower access devices, such as flash memory.
- flash memory typically, a part of system RAM is used as a cache for temporarily holding the most recently accessed data from the flash memory system. The next time the data is needed, it may be obtained from the fast cache instead of the slow flash memory system.
- This technique works well in situations where the same data is repeatedly operated on. This is the case in most structures and programs since the computer tends to work within a small area of memory at a time in running a program.
- DMA direct-memory access
- a method of memory operation providing a memory, a cache containing a plurality of entries with a plurality of the entries to be written to memory, a detector for detecting in the cache the plurality of entries to be written to memory, and erasing a first portion of the memory to accommodate the plurality of entries to be written to memory and writing to the first portion of the memory the plurality of entries to be written to memory in which an erase operation is followed by a plurality of sequential write operations. Since the time taken by the flash erase and write operations affects the operating speed of the entire flash memory system, the present invention provides a way of substantially speeding up these operations.
- a memory system having a memory, a cache containing a plurality of entries with a plurality of the entries to be written to memory, a detector for detecting in the cache the plurality of entries to be written to memory, and a processor for erasing a first portion of the memory to accommodate the plurality of entries to be written to memory and writing to the first portion of the memory the plurality of entries to be written to memory in which an erase operation is followed by a plurality of sequential write operations. Since the time taken by the flash erase and write operations affects the operating speed of the entire flash memory system, the present invention provides a fast memory system.
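The method summarized above can be sketched as a small simulation: detect which cache entries are pending write-back, erase the target flash region once, then perform the writes sequentially. This is an illustrative model only; the patent describes hardware, and all class and function names here are invented.

```python
# Illustrative sketch (assumed names, not from the patent): one erase
# operation followed by a plurality of sequential write operations.

class FlashRegion:
    """A region of simulated flash: cells must be erased before writing."""
    def __init__(self, size):
        self.cells = [None] * size   # None models an erased cell
        self.erased = False

    def erase(self):
        self.cells = [None] * len(self.cells)
        self.erased = True

    def write(self, offset, value):
        if not self.erased:
            raise RuntimeError("flash must be erased before writing")
        self.cells[offset] = value

def flush_dirty_entries(cache, region):
    """Detect dirty entries, erase once, then write them back sequentially."""
    dirty = [(off, val) for off, (val, is_dirty) in cache.items() if is_dirty]
    if dirty:
        region.erase()                      # one erase ...
        for offset, value in dirty:         # ... then sequential writes
            region.write(offset, value)
            cache[offset] = (value, False)  # entry is now clean
    return len(dirty)
```

A cache entry is modeled as `(data, dirty_flag)`; only dirty entries are written back, mirroring the detector described in the summary.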
- FIG. 1 is a block diagram of a flash memory system in accordance with one embodiment of the present invention.
- FIG. 2 is a time chart showing conventional alternating flash erase and write operations, parallel flash erase and write operations of the present invention, and parallel-parallel flash erase and write operations in accordance with another embodiment of the present invention.
- FIG. 3 shows an example list of cache blocks that require write transactions.
- FIG. 4 shows example erase command sequences performed in response to certain cache blocks shown in FIG. 3 in accordance with one embodiment of the present invention.
- FIG. 5 shows example write command sequences performed in response to certain erase command sequences shown in FIG. 4 in accordance with one embodiment of the present invention.
- FIG. 6 shows write command sequences that may be asserted on the same flash bus by two different DMA controllers so as to interleave the write commands in accordance with a further embodiment of the present invention.
- FIG. 7 shows a method of performing a parallel-erase operation in accordance with one embodiment of the present invention.
- FIG. 8 shows a method of interleaving write command sequences in accordance with yet one embodiment of the present invention.
- FIG. 1 is a block diagram of a flash memory system 10 having at least one flash specific DMA controller coupled to each flash bus used in the system.
- two DMA controllers are used with DMA controller 12 coupled to flash bus 16 and DMA controller 14 coupled to flash bus 18 .
- the number of DMA controllers for each flash bus or the number of flash buses shown is not intended to limit the present invention in any way and may be increased to improve performance. Flash memory systems are known, such as that described in U.S. Pat. No. 5,822,251.
- local processor 20 sends high-level requests to flash chips 24 via local bus 22 .
- Each request is translated into a sequence of commands, address bytes and data transfers (“command sequence”) by either DMA controller 12 or 14 .
- DMA controller 12 or 14 in turn transfers the command sequence to a flash buffer chip (“buffer chip”) by using shared data/address/command lines that comprise the flash bus.
- Each buffer chip 26 is coupled to at least one bank of flash chips 24 via a buffer bus, such as buffer bus 40 and 42 .
- flash buses 16 and 18 also have lines for transmitting an encoded command. The encoded command is used to select and control a plurality of buffer chips 26 .
- each flash bus has an 8-bit portion destined for the buffer chips or for the flash memory chips 24 , and a 2-bit portion sent only to buffer chips 26 .
- Each buffer chip 26 buffers at least one bank of flash memory chips 24 and also serves as a protocol converter by using a protocol defined for the flash buses to transceive command sequences on the flash buses and by converting the protocol to another protocol expected by a flash memory chip, such as flash chip 114 a, 114 b, 114 c or 114 d .
- This could be as simple as converting flash bus commands to the appropriate sequence of signal transitions to a flash memory chip, or could involve translation of commands or addresses, or even more complex sequencing.
- the commands on the flash bus are kept similar to those expected by the flash chips to minimize the cost of conversion and thus keep buffer chips 26 simple.
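The buffer chip's protocol-conversion role described above can be sketched as a table-driven translator from flash-bus command sequences to chip-level signal transitions. The command encodings below are loosely modeled on common NAND conventions but are assumptions, not taken from the patent.

```python
# Hypothetical sketch of buffer-chip protocol conversion: flash-bus command
# bytes become the signal transitions a flash chip expects. Encodings and
# signal names are illustrative assumptions.

BUS_TO_CHIP = {
    0x60: ["CLE_HIGH", "CMD 0x60", "CLE_LOW"],   # block erase setup
    0xD0: ["CLE_HIGH", "CMD 0xD0", "CLE_LOW"],   # erase confirm
    0x80: ["CLE_HIGH", "CMD 0x80", "CLE_LOW"],   # page program setup
}

def convert(bus_sequence):
    """Translate a flash-bus command sequence to chip-level transitions."""
    transitions = []
    for kind, value in bus_sequence:
        if kind == "cmd":
            transitions.extend(BUS_TO_CHIP[value])
        elif kind == "addr":                      # address bytes pass through
            transitions.append(f"ALE ADDR 0x{value:02X}")
        else:
            raise ValueError(f"unknown item {kind!r}")
    return transitions
```

Because the bus commands are kept similar to what the flash chips expect, the translation stays a near-direct table lookup, which is the "keep buffer chips 26 simple" point made above.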
- DMA controllers 12 and 14 drive flash buses 16 and 18 , respectively. Flash buses 16 and 18 can operate at the same time, allowing flash operations to be initiated and processed in parallel.
- Each buffer chip 26 controls at least one bank of flash memory chips 24 .
- Each bank can be separately accessed, allowing flash chips to perform flash operations in parallel with flash chips in other banks.
- each flash chip within the same bank can also be separately accessed, allowing flash operations to be performed in parallel on more than one flash chip within the same bank of flash chips. Hence, not only can parallel flash operations be performed on different flash chips belonging to separate banks but also on flash chips belonging to the same bank of flash chips.
- Each buffer chip 26 may be coupled to any number, such as four, of banks of flash chips 24 although only two banks per buffer chip are shown in FIG. 1 to avoid over-complicating the present invention.
- Each bank has eight flash memory chips 24 although only four flash chips are shown, such as flash-chips 114 a through 114 d and 116 a through 116 d. Additional banks of flash memory chips can be added to an existing buffer bus, or modules of flash memory chips with a buffer chip can be coupled to a flash bus. The ability to add additional flash buses facilitates expansion since any number of buffer chips can be added. Buffer chips monitor flash operations performed by flash chips, permitting the buffer chips to indicate to DMA controllers 12 and 14 which flash chips 24 are busy and enabling the DMA controllers to perform additional flash operations to other flash chips.
- DMA controllers 12 and 14 may be contained in a single Application-Specific Integrated Circuit (ASIC) 28 .
- the ASIC 28 connects DMA controllers 12 and 14 to local bus 22 .
- the DMA controllers may be integrated in any chip in the flash memory system 10 , which facilitates data transfer between the flash chips and local bus 22 .
- one DMA controller may be integrated with each of buffer chips 26 instead of being integrated together in the ASIC 28 .
- Local bus 22 connects a cache 30 , local processor 20 , and an interface controller 32 , such as a small-computer system interface (SCSI), ATA/IDE, or another interface controller, to DMA controllers 12 and 14 .
- Host requests from a host 34 are received by interface controller 32 and driven onto local bus 22 .
- Local processor 20 responds to the host requests by storing host data into cache 30 for writes, or reading data from the flash memory chips 24 or from cache 30 .
- a read-only memory (ROM) 36 contains firmware code of routines that execute on local processor 20 to respond to host requests. Other system-maintenance routines are stored on ROM 36 , such as wear-leveling and copy-back routines.
- Cache 30 is under firmware control by local processor 20 , and thus the local processor's local memory 38 and cache 30 may share the same physical memory.
- Cache 30 is implemented using DRAM although this is not intended to limit the present invention in any way.
- Cache 30 is used as a cache for temporarily holding the most recently accessed data from flash memory system 10 . The next time the data is needed by the host 34 , it may be obtained from cache 30 instead of the relatively slower flash memory 25 . This technique works well in situations where the same data is repeatedly operated on as is the case in most structures and programs since host 34 tends to work within a small area of memory at a time in running a program.
- cache 30 has a cache size of 32 MB for a 256 MB flash memory.
- Local processor 20 also tracks data stored in cache 30 and can determine if the data, such as a cache block or cache sector/page, is “dirty”, or has been updated more recently than the copy of the data stored in flash memory 25 . This permits host 34 to use the most recent copy of the data and, when the dirty data is to be “victimized”, or replaced, with other data, the dirty data is first written to flash memory 25 so that any changes that were made to the dirty data will be preserved. This technique is well known to those skilled in the art. This cache coherency process permits local processor 20 to determine when dirty cache data is ready to be written to flash memory 25 .
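The dirty-tracking and victimization rule above — a modified entry must be written to flash before it is replaced — can be sketched as a minimal write-back cache. Names and the victim-selection policy are illustrative assumptions.

```python
# Minimal write-back cache sketch of the coherency rule described above:
# a dirty entry is flushed to the backing flash before it is victimized.

class WriteBackCache:
    def __init__(self, capacity, flash):
        self.capacity = capacity
        self.flash = flash          # backing store: dict of block -> data
        self.entries = {}           # block -> (data, dirty_flag)

    def write(self, block, data):
        if block not in self.entries and len(self.entries) >= self.capacity:
            self._victimize()
        self.entries[block] = (data, True)   # updated copy is dirty

    def _victimize(self):
        victim = next(iter(self.entries))    # simplistic choice (oldest entry)
        data, dirty = self.entries.pop(victim)
        if dirty:                            # preserve the changes in flash
            self.flash[victim] = data
```

In the patent's system this flush is exactly the moment a write transaction (erase then write) to flash memory 25 is scheduled.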
- Local processor 20 initiates write or read transactions by sending high-level commands to one of the DMA controllers 12 or 14 .
- DMA controller 12 or 14 then generates the corresponding command sequences.
- Many command sequences may be needed, such as for block reads and writes.
- a block read requires that many page read command sequences be performed, each sequence generally sending command and address bytes to flash memory chips 24 through the buffer chips 26 .
- Some flash chips also have a sequential read mode where command and address bytes need only be sent for the first page in a sequence.
- Local processor 20 uses DMA transfers to move data between one of the DMA controllers coupled to the flash buses and cache 30 .
- the DMA transfers may be performed by local processor 20 using program control or may be facilitated using at least one additional DMA controller (not shown) that is either integrated with local processor 20 or implemented separately and coupled to local bus 22 .
- Conventional flash memory chips, such as flash chips 114 a through 114 d and 116 a through 116 d , operate differently from DRAM devices in many respects. For instance, a flash block selected for a write transaction must first be erased before the block can be written with data. Performing and completing an erase cycle before performing a write cycle adds an additional delay.
- a time chart 50 is shown that depicts alternating erase and write cycles 52 that are performed in conventional flash memory. The erase cycles have flash erase times 61 through 64 , respectively, and the write cycles have write times 65 through 68 , respectively.
- each erase cycle takes approximately 4 ms to perform, while performing each write cycle takes approximately 3 ms.
- the total elapsed time to perform four write transactions is the first erase cycle time 61 plus the first write cycle time 65 plus the second erase cycle time 62 plus the second write cycle time 66 plus the third erase cycle time 63 plus the third write cycle time 67 plus the fourth erase cycle time 64 plus the fourth write cycle time 68 .
- the total time for four conventional erases and writes is 28 ms.
- the present invention minimizes the above described cumulative delays by determining which cache blocks need write transactions and which of the erase cycles associated with the write transactions can be performed before performing the write cycles associated with the erase cycles.
- the erase cycles are performed as a group (in “parallel”) before performing the write cycles. This approach reduces the total time to perform write transactions when compared to traditional methods.
- Performing erase cycles in parallel (“parallel erase operation”) may only be done using separate flash chips although flash chips belonging to the same bank of flash chips are considered as separate flash chips and may be erased as part of the parallel operation.
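Using the cycle times above (about 4 ms per erase, 3 ms per write, four transactions), a back-of-the-envelope model shows the saving the parallel approach targets. The parallel estimate is a simplification: it assumes the erase cycles (and then the write cycles, each on a separate flash chip) fully overlap, and treats the microsecond-scale command sequences on the bus as negligible.

```python
# Rough timing model for the figures in the text; the "parallel" case is an
# idealized assumption (full overlap, zero bus-command time), not a claim
# from the patent itself.

ERASE_MS, WRITE_MS, N = 4, 3, 4

# Conventional: erase and write cycles alternate, fully serialized.
conventional = N * (ERASE_MS + WRITE_MS)   # 4 * (4 + 3) = 28 ms

# Parallel erase: the four erase cycles overlap on separate chips, then the
# four write cycles (also on separate chips) overlap as well.
parallel = ERASE_MS + WRITE_MS             # about 7 ms
```

The 28 ms figure matches the conventional total stated above; the roughly 4x improvement is the motivation for grouping erases.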
- FIG. 3 shows a list of cache blocks that require write transactions.
- cache blocks 106 , 108 , 110 , and 112 require a write transaction to flash chips 114 a, 114 d , 116 a and 116 c , respectively.
- Cache blocks 118 and 120 require a write transaction to flash chips 114 a and 116 a , respectively, while cache blocks 122 and 124 require a write transaction to flash chips 114 c and 114 b , respectively.
- Cache blocks 106 , 108 , 110 and 112 qualify for a parallel erase operation since they require write transactions to separate flash chips.
- Cache blocks 118 and 120 also qualify for a parallel erase operation but cannot be performed in parallel with the parallel erase operation for cache blocks 106 , 108 , 110 and 112 because they require erase cycles to the same flash chips as cache blocks 106 and 110 . Consequently, in the present invention, either DMA controller 14 or local processor 20 , through program code, selects one set of cache blocks as eligible for write transactions that involve a parallel erase operation. Upon completion of all of the erase cycles by the respective flash chips, the DMA controller then transmits a series of write commands as a group to the erased flash chips via flash bus 18 . The erased flash chips then perform write cycles on the flash blocks that were erased.
- Write transactions are then performed on the next set of cache blocks that are eligible, such as cache blocks 118 and 120 , for a parallel erase operation.
- the cache blocks that are eligible for a group or parallel erase also include cache blocks 122 and 124 , as shown in FIG. 3. This set of cache blocks was not included with the first group of cache blocks because they are preceded by cache blocks 118 and 120 , which required erase cycles to be performed by the same flash chips used in the first group of cache blocks.
- the ideal number of dirty cache entries that triggers an erase is heuristically determined to achieve optimal performance, as would be evident to those skilled in the art.
- One factor in the determination is that a flash memory cell is currently capable of being cycled a limited number of times before the erases irrevocably damage the memory cell.
- one objective is to minimize the number of erases and another is to spread the erases over different flash memory chips 24 .
- the determination as to which of the cache blocks requiring a write transaction qualify for a group erase may be performed while the blocks are either in cache 30 or in a pending queue, which, in one embodiment of the present invention, is provided for each DMA controller coupled to a flash bus, such as DMA controller 12 and DMA controller 14 .
- the set of blocks selected is the first eligible group that will not cause the parallel erase cycles to be performed on the same flash chip at the same time, although this is not intended to limit the present invention in any way.
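The selection rule above — take, in queue order, the first group of pending write transactions whose target flash chips are all distinct — can be sketched as a short greedy pass. The data mirror FIG. 3; the function and its stop-at-first-conflict behavior are a simplified model, and an actual embodiment may split the remaining blocks into smaller groups.

```python
# Simplified sketch of first-eligible-group selection: walk the pending
# write transactions in order and take the longest prefix whose target
# flash chips are all distinct. A conflict ends the group, and the
# conflicting block and everything behind it wait for the next pass.

def first_eligible_group(pending):
    """pending: list of (cache_block, target_chip) in queue order."""
    group, used = [], set()
    for block, chip in pending:
        if chip in used:
            break            # same chip already claimed: stop, preserve order
        used.add(chip)
        group.append(block)
    return group

# Pending write transactions from the FIG. 3 example.
pending = [(106, "114a"), (108, "114d"), (110, "116a"), (112, "116c"),
           (118, "114a"), (120, "116a"), (122, "114c"), (124, "114b")]
```

On this queue the first pass selects cache blocks 106, 108, 110 and 112; blocks 118 and 120 conflict with chips 114a and 116a and so wait, holding back 122 and 124 behind them as the text describes.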
- FIG. 4 is a representation of the erase command sequences for cache blocks 106 , 108 , 110 and 112 that are performed by DMA controller 14 and the resulting erase cycles 126 , 128 , 130 and 132 performed by flash chips 114 a, 114 d, 116 a and 116 c , respectively, on the flash blocks (not shown) that correspond to cache blocks 106 , 108 , 110 and 112 in accordance with one embodiment of the present invention.
- FIG. 5 is a representation of the write command sequences for cache blocks 106 , 108 , 110 and 112 that are performed by DMA controller 14 and the resulting write cycles 134 , 136 , 138 and 140 performed by flash chips 114 a, 114 d, 116 a and 116 c on the flash blocks (not shown) that correspond to cache blocks 106 , 108 , 110 and 112 in accordance with one embodiment of the present invention.
- erase command sequences 100 a, 100 b, 100 c and 100 d directed at flash chips 114 a, 114 d , 116 a and 116 c, respectively, are launched in a sequential manner by DMA controller 14 on flash bus 18 , rendering flash bus 18 unavailable 133 during the launching of the commands.
- the erase command sequences are launched in response to the write transactions requested for cache blocks 106 , 108 , 110 and 112 .
- Flash chips 114 a, 114 d , 116 a and 116 c receive their respective erase command sequence and each consumes approximately four (4) milliseconds to perform an erase cycle in response to each erase command sequence.
- each erase command sequence takes only microseconds to complete, enabling the flash chips to perform the erase cycles as a group or in parallel.
- write command sequences 101 a, 101 b, 101 c and 101 d directed at flash chips 114 a, 114 d, 116 a and 116 c , respectively, are launched in a sequential manner by DMA controller 14 on flash bus 18 . Like the erase command sequences above, this renders flash bus 18 busy 142 during the launching of the commands.
- the write command sequences are launched as part of the write transactions requested for cache blocks 106 , 108 , 110 and 112 and after the erase cycle that corresponds to each write cycle has been completed. Flash chips 114 a, 114 d, 116 a and 116 c receive their respective write command sequence and each consumes approximately three (3) milliseconds to perform a write cycle in response to each write command sequence.
- although the erase and write commands are launched on flash bus 18 sequentially, the commands consume a fraction of the time that a flash device consumes when performing an erase or write operation, since asserting the command sequences on flash bus 18 takes microseconds rather than milliseconds.
- Performance is further enhanced because more than one DMA controller is provided. Each DMA controller is able to launch new flash operations simultaneously, allowing different flash chips to perform separate flash operations at the same time and to further increase parallelism.
- FIG. 6 shows write command sequences that may be asserted on the same flash bus by two different DMA controllers so as to interleave the write commands in accordance with a further embodiment of the present invention.
- write command sequences 101 a, 101 b, 101 c and 101 d shown in FIG. 5 are also shown in FIG. 6.
- Time line 142 represents write command sequences 144 a , 144 b , 144 c and 144 d that are asserted by another DMA controller on the same flash bus used by DMA controller 14 , which in this example is flash bus 18 .
- Write command sequences 144 a , 144 b, 144 c and 144 d are asserted only when the write command sequences asserted by DMA controller 14 are in an idle mode, such as after a write command sequence is received by a flash chip and DMA controller 14 is waiting to receive an acknowledgement from the flash chip that it completed a program cycle.
- Each write command sequence 144 a, 144 b , 144 c and 144 d is followed by a corresponding program command sequence 148 a , 148 b, 148 c and 148 d , respectively.
- the present invention permits a different DMA controller to assert a write command sequence on flash bus 18 that is then followed by a program sequence, such as program sequence 150 a .
- Program sequence 150 a preferably occurs during the same period as the occurrence of write command sequence 101 b.
- bandwidth efficiency on flash bus 18 may be improved since each write command sequence may be followed by another write command sequence.
- Write command sequences 144 a through 144 d correspond to previous erase command sequences (not shown).
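The interleaving described for FIG. 6 — a second DMA controller asserting its short write command sequences during the first controller's idle windows on the shared bus — can be sketched as a simple alternating merge. This is a scheduling model only; real arbitration on flash bus 18 would be driven by chip-busy status, which the sketch abstracts away.

```python
# Illustrative sketch (invented scheduling model): two DMA controllers share
# one flash bus. Each bus transaction takes microseconds while the addressed
# chip then programs for milliseconds, so the other controller's command
# sequences fit into those idle windows.

def interleave(seq_a, seq_b):
    """Merge two controllers' command sequences alternately onto one bus."""
    bus = []
    a, b = list(seq_a), list(seq_b)
    while a or b:
        if a:
            bus.append(("DMA-A", a.pop(0)))   # A asserts, then waits on chip
        if b:
            bus.append(("DMA-B", b.pop(0)))   # B uses A's idle window
    return bus
```

Each write command sequence is thus followed on the bus by another controller's write command sequence, which is the bandwidth-efficiency point made above.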
- local processor 20 or the DMA controller 14 detects which cache entries need to be written to flash memory, such as cache entries that are dirty, and causes an erase to be performed first to open up flash memory space on a schedule or on demand.
- An example of erasing on a schedule would be after a predetermined number of writes, and an example of erasing on demand would be when a cache entry is dirty but will be delayed in being written, yet the location of the stale entry in the flash memory is known so it can be erased. This latter technique also permits entries to be moved around in the flash memory to even the wear on the memory cells.
- the present inventors suggest, in yet another embodiment of the present invention, using at least one dedicated DMA controller for transferring data between the DMA controllers attached to the flash buses, such as DMA controllers 12 and 14 , and cache 30 .
- transfer is controlled by the dedicated DMA controller, obviating the need for control by a program in the local processor 20 and, at the end of the transfer, the relevant status could be posted to the local processor 20 .
- when local processor 20 sends a data transfer command to the dedicated DMA controller, it specifies management information that comprises a transfer start address and a transfer count for each of the transfer source and transfer destination. The area specified in this manner is transferred in sequence as one group of data blocks.
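The management information described above can be sketched as a small descriptor handed to the dedicated DMA controller, which then moves the blocks and posts status back without program control by local processor 20. The field names and status shape are assumptions for illustration.

```python
# Hypothetical descriptor for the dedicated DMA controller: a start address
# and transfer count for both the source and the destination. Names are
# invented; the patent only specifies the information content.

from dataclasses import dataclass

@dataclass
class DmaDescriptor:
    src_start: int      # transfer source start address
    src_count: int      # number of blocks to read from the source
    dst_start: int      # transfer destination start address
    dst_count: int      # number of blocks to write to the destination

def run_transfer(desc, src_mem, dst_mem):
    """Copy the specified blocks in sequence, then report completion status."""
    for i in range(desc.src_count):
        dst_mem[desc.dst_start + i] = src_mem[desc.src_start + i]
    return {"done": True, "blocks": desc.src_count}   # posted to the processor
```

The returned status dictionary stands in for the "relevant status" that would be posted to local processor 20 at the end of the transfer.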
- FIG. 7 is a process flow showing a method of performing a parallel erase operation in accordance with one embodiment of the present invention.
- the flash chips are erased approximately in parallel by asserting the erase command sequences for each write transaction in sequence on the flash bus.
- write command sequences are sequentially asserted on the flash bus. Note that each write command sequence includes a program command sequence.
- reference 206 may be performed by interleaving write command sequences that correspond to prior erase command sequences on the same flash bus as the write command sequences asserted in reference 204 .
Abstract
An apparatus for and method of memory operation having a memory, a cache containing a plurality of entries with a plurality of the entries to be written to memory, a detector for detecting in the cache the plurality of entries to be written to memory, and a processor for erasing a first portion of the memory to accommodate the plurality of entries to be written to memory and writing to the first portion of the memory the plurality of entries to be written to memory wherein an erase operation is followed by a plurality of sequential write operations.
Description
- This application is a Continuing Application claiming the benefit of U.S. patent application, Ser. No. 09/819,423, filed Mar. 27, 2001, which in turn claims priority of U.S. Provisional patent application serial No. 60/250,642, filed Nov. 30, 2000.
- The present invention relates generally to memory storage systems, and more particularly to flash memory systems.
- Computer systems have traditionally used hard disk systems with rotating magnetic disks as data storage media. However, disk drives are disadvantageous in that they are bulky and they require high precision moving mechanical parts. They are also not rugged and are prone to reliability problems, as well as consuming significant amounts of power.
- More recently these hard disk systems are being replaced by semiconductor systems. These semiconductor systems use electrically erasable programmable read-only-memory (EEPROM) technology as memory storage cells as a substitute for the hard-disk magnetic media. The EEPROMs have the capability of electrically erasing data stored on the memory and replacing it with other data. However, programming the EEPROM is relatively slow since input/output of data and addressing is in a serial format. Additionally, special “high” voltages are required when programming the EEPROM. Even further, EEPROMs are typically only available in relatively small memory sizes such as 8 Kbyte or 16 Kbyte sizes. As more and more non-volatile memory space is required at lower power consumption for portable electronic apparatus, alternatives to EEPROM are required.
- “Flash” EEPROM, also known as “flash memory”, has been the answer. Large regions of flash memory can be erased at one time which makes reprogramming flash memory faster than reprogramming EEPROM and which is the origin of the term “flash”. Additionally, it has lower stand-by power consumption than EEPROM. Also, in replacing hard disk systems, these flash memory systems are sometimes referred to as flash “disk” systems and similar descriptive terminology is used, even though no rotating magnetic disks are used.
- In the flash memory system, a plurality of flash memory chips are arranged in banks that share some of the control signals from a buffer chip. The flash memory chips are nonvolatile semiconductor-memory chips that retain data when power is no longer applied.
- The flash memory chips are divided into pages and blocks. A 64 Mbit flash chip typically has 512-byte pages, which happens to match the sector size for IDE and small-computer system interface (SCSI) hard disks. Rather than writing to or reading from just one word in the page, the entire page must be read or written at the same time; individual bytes cannot be written. Thus flash memory operations are inherently slow since an entire page must be read or written.
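The page arithmetic above can be checked with a short sketch; the sizes are the typical values quoted in the text, not taken from any particular datasheet:

```python
# Typical geometry quoted above: a 64 Mbit flash chip with 512-byte pages.
CHIP_BITS = 64 * 1024 * 1024      # 64 Mbit
PAGE_BYTES = 512                  # matches an IDE/SCSI disk sector

chip_bytes = CHIP_BITS // 8       # 8 MB of storage
pages_per_chip = chip_bytes // PAGE_BYTES

def pages_needed(n_bytes):
    """Writes touch whole pages, so round the byte count up to full pages."""
    return -(-n_bytes // PAGE_BYTES)   # ceiling division

# Writing even a single byte costs a full 512-byte page program.
assert pages_needed(1) == 1
assert pages_needed(513) == 2
```

With these values, one chip holds 16,384 pages, and any write smaller than a page still occupies an entire page operation.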
- Flash memory is also not truly random-access. While reads can be to random pages, writes require that memory cells must first be erased before information is placed in them; i.e., a write (or program) operation is always preceded by an erase operation.
- The erase operation is done in one of several ways. For example, in some flash memories, the entire chip is erased at one time. If not all the information in the chip is to be erased, the information must first be temporarily saved, and is usually written into another memory (typically a RAM). The information is then restored into the nonvolatile flash memory by programming back into the chip.
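The save/erase/restore sequence for chip-wide erase described above can be sketched as follows; the data structures are illustrative, not from the patent:

```python
# Sketch of chip-wide erase when only part of the data should go away:
# save the keepers to RAM, erase the whole chip at once, then program the
# saved data back into the chip. Structures here are illustrative.
def erase_partial(chip, keep_addrs):
    ram = {a: chip[a] for a in keep_addrs if a in chip}  # temporary save to RAM
    chip.clear()                                         # entire chip erased at one time
    chip.update(ram)                                     # program saved data back
    return chip

chip = {0: b"keep", 1: b"discard", 2: b"keep"}
erase_partial(chip, keep_addrs=[0, 2])
```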
- In other flash memories, the memory is divided into blocks that are each separately erasable, but only one at a time. By selecting the desired block and going through the erase sequence, the designated area is erased. While the need for temporary memory is reduced, erasing various areas of the memory still requires a time-consuming sequential approach.
- In still other flash memories, the memory is divided into sectors where all cells within each sector are erasable together. Each sector can be addressed separately and selected for erase.
- In even other flash memories, certain numbers of blocks are reserved to be pre-erased and a logical block address (LBA) to physical block address (PBA) translation must be performed.
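The logical-to-physical indirection described above, with a pool of pre-erased blocks, can be sketched as follows; all names and the map layout are illustrative, not from the patent:

```python
# Minimal sketch of LBA->PBA indirection with pre-erased reserve blocks:
# a logical write is redirected to a pre-erased physical block, and the
# old physical block becomes stale, to be reclaimed by a later erase.
class Translator:
    def __init__(self, spare_pbas):
        self.map = {}                      # logical block -> physical block
        self.spares = list(spare_pbas)     # pre-erased physical blocks
        self.to_erase = []                 # stale blocks to reclaim later

    def write(self, lba):
        """Redirect a logical write to a pre-erased physical block."""
        new_pba = self.spares.pop(0)
        old_pba = self.map.get(lba)
        if old_pba is not None:
            self.to_erase.append(old_pba)  # old copy becomes stale
        self.map[lba] = new_pba
        return new_pba

t = Translator(spare_pbas=[100, 101, 102])
t.write(lba=5)          # first write of LBA 5 lands in PBA 100
t.write(lba=5)          # update redirects to PBA 101; PBA 100 is now stale
```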
- While flash reads can be to random pages, flash writes require that larger regions, such as a sector, block, or chip be erased in a flash erase operation before a flash write can be performed. For example, in block erases, a block of 16 pages must be erased together, while all 512 bytes on a page must be written together.
- In all these flash memories, flash erase operations are significantly slower than flash read or write operations. Further, only one erase operation per flash memory chip can be active at a time.
- Since the time taken by the flash erase and the write operations affect the operating speed of the entire flash memory system, a way of speeding up these operations has been long sought, but has equally as long eluded those skilled in the art.
- Working from another direction, those skilled in the art have developed cache memories to speed up the performance of computer systems having slower access devices, such as flash memory. Typically, a part of system RAM is used as a cache for temporarily holding the most recently accessed data from the flash memory system. The next time the data is needed, it may be obtained from the fast cache instead of the slow flash memory system. This technique works well in situations where the same data is repeatedly operated on. This is the case in most structures and programs since the computer tends to work within a small area of memory at a time in running a program.
- Most conventional cache designs are read caches for speeding up reads from flash memory. In some cases, write caches are used for speeding up writes to flash memory. However, in the case of writes to flash memory systems, data is written directly to flash memory every time a write occurs, while being written into the cache at the same time. This is done because of concern for loss of updated data files in case of power loss. If the write data were stored only in the cache memory, which is a volatile memory, a loss of power would result in newly updated files being lost from the cache before the old data in nonvolatile flash memory had been updated. The system would then operate on the old data when these files are used in further processing. The need to write to flash memory every time is considered by those skilled in the art to defeat the benefits of the caching mechanism for writes. Read caching does not have this concern since the data that could be lost from cache has a backup in flash memory.
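The write-through policy described above, in which every write lands in both the volatile cache and the nonvolatile flash, can be sketched as follows; the class and names are illustrative:

```python
# Sketch of the write-through policy described above: every write goes to
# both the volatile cache and nonvolatile flash, so a power loss cannot
# strand an updated file in cache alone. Names are illustrative.
class WriteThroughCache:
    def __init__(self):
        self.cache = {}    # volatile RAM cache
        self.flash = {}    # nonvolatile flash

    def write(self, addr, data):
        self.cache[addr] = data
        self.flash[addr] = data    # immediate write-through (the slow part)

    def power_loss(self):
        self.cache.clear()         # volatile contents are lost

c = WriteThroughCache()
c.write(0x10, b"new data")
c.power_loss()                     # the flash copy survives
```

The cost, as the text notes, is that every write pays the slow flash path, which is what defeats the caching benefit for writes.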
- Those skilled in the art have also used direct-memory access (DMA) to facilitate data transfers. While DMA is efficient for transfers of raw data to a memory, flash memory chips also require command and address sequences to set up the relatively long flash operations. Unfortunately, DMA is not well suited to transfer addresses and commands since it is designed to transfer long strings of data beginning at a starting address through an ending address.
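The kind of transfer DMA is suited to, a contiguous string of data from a starting address through an ending address, can be sketched as follows; the memory model is illustrative:

```python
# Sketch of what DMA does well: copy a contiguous run of data from a
# start address for a given count without per-byte processor involvement.
# Command and address setup for flash, by contrast, is short and
# irregular, which is why it fits DMA poorly. Structures are illustrative.
def dma_transfer(mem, src_start, dst_start, count):
    for i in range(count):                 # one long sequential burst
        mem[dst_start + i] = mem[src_start + i]

mem = list(range(8)) + [0] * 8
dma_transfer(mem, src_start=0, dst_start=8, count=8)
```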
- Thus, those skilled in the art working from different directions have encountered what appears to be an insurmountable bottleneck in speeding up flash memory systems to match faster and faster host computer system processors.
- A method of memory operation is disclosed comprising: providing a memory; providing a cache containing a plurality of entries, with a plurality of the entries to be written to memory; detecting in the cache the plurality of entries to be written to memory; erasing a first portion of the memory to accommodate the plurality of entries to be written to memory; and writing to the first portion of the memory the plurality of entries to be written to memory, in which an erase operation is followed by a plurality of sequential write operations. Since the time taken by the flash erase and write operations affects the operating speed of the entire flash memory system, the present invention provides a way of substantially speeding up these operations.
- A memory system is disclosed having a memory; a cache containing a plurality of entries, with a plurality of the entries to be written to memory; a detector for detecting in the cache the plurality of entries to be written to memory; and a processor for erasing a first portion of the memory to accommodate the plurality of entries to be written to memory and writing to the first portion of the memory the plurality of entries to be written to memory, in which an erase operation is followed by a plurality of sequential write operations. Since the time taken by the flash erase and write operations affects the operating speed of the entire flash memory system, the present invention provides a fast memory system.
- The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.
- FIG. 1 is a block diagram of a flash memory system in accordance with one embodiment of the present invention; and
- FIG. 2 is a time chart showing conventional alternating flash erase and write operations, parallel flash erase and write operations of the present invention, and parallel-parallel flash erase and write operations in accordance with another embodiment of the present invention.
- FIG. 3 shows an example list of cache blocks that require write transactions.
- FIG. 4 shows example erase command sequences performed in response to certain cache blocks shown in FIG. 3 in accordance with one embodiment of the present invention.
- FIG. 5 shows example write command sequences performed in response to certain erase command sequences shown in FIG. 4 in accordance with one embodiment of the present invention.
- FIG. 6 shows write command sequences that may be asserted on the same flash bus by two different DMA controllers so as to interleave the write commands in accordance with a further embodiment of the present invention.
- FIG. 7 shows a method of performing a parallel-erase operation in accordance with one embodiment of the present invention.
- FIG. 8 shows a method of interleaving write command sequences in accordance with yet another embodiment of the present invention.
- FIG. 1 is a block diagram of a
flash memory system 10 having at least one flash-specific DMA controller coupled to each flash bus used in the system. In the embodiment shown, two DMA controllers are used, with DMA controller 12 coupled to flash bus 16 and DMA controller 14 coupled to flash bus 18. The number of DMA controllers for each flash bus or the number of flash buses shown is not intended to limit the present invention in any way and may be increased to improve performance. Flash memory systems are known, such as that described in U.S. Pat. No. 5,822,251. - In operation,
local processor 20 sends high-level requests to flash chips 24 via local bus 22. Each request is translated into a sequence of commands, address bytes, and data transfers (a “command sequence”) by either DMA controller 12 or DMA controller 14. Each buffer chip 26 is coupled to at least one bank of flash chips 24 via a buffer bus. The command sequences are carried on flash buses 16 and 18 to the flash memory chips 24, with a 2-bit portion sent only to the buffer chips 26. - Each
buffer chip 26 buffers at least one bank of flash memory chips 24 and also serves as a protocol converter: it uses a protocol defined for the flash buses to transceive command sequences on the flash buses, and converts that protocol to another protocol expected by the flash memory chips. This keeps the buffer chips 26 simple.
-
DMA controllers 12 and 14 drive flash buses 16 and 18, respectively, and each buffer chip 26 controls at least one bank of flash memory chips 24. Each bank can be separately accessed, allowing flash chips to perform flash operations in parallel with flash chips in other banks. In addition, each flash chip within the same bank can also be separately accessed, allowing flash operations to be performed in parallel on more than one flash chip within the same bank of flash chips. Hence, parallel flash operations can be performed not only on flash chips belonging to separate banks but also on flash chips belonging to the same bank. - Each
buffer chip 26 may be coupled to any number of banks of flash chips 24, such as four, although only two banks per buffer chip are shown in FIG. 1 to avoid over-complicating the present invention. Each bank has eight flash memory chips 24, although only four flash chips are shown, such as flash chips 114 a through 114 d and 116 a through 116 d. Additional banks of flash memory chips can be added to an existing buffer bus, or modules of flash memory chips with a buffer chip can be coupled to a flash bus. The ability to add additional flash buses facilitates expansion since any number of buffer chips can be added. Buffer chips monitor flash operations performed by flash chips, permitting the buffer chips to indicate to DMA controllers 12 and 14 the status of those flash operations.
-
DMA controllers 12 and 14 are integrated in ASIC 28, which connects DMA controllers 12 and 14 to local bus 22. In another embodiment, the DMA controllers are integrated in any chip in the flash memory system 10 that facilitates data transfer between the flash chips and local bus 22. For example, one DMA controller may be integrated with each of buffer chips 26 instead of being integrated together in the ASIC 28.
-
Local bus 22 connects a cache 30, local processor 20, and an interface controller 32, such as a small-computer system interface (SCSI), ATA/IDE, or another interface controller, to DMA controllers 12 and 14. Requests from host 34 are received by interface controller 32 and driven onto local bus 22. Local processor 20 responds to the host requests by storing host data into cache 30 for writes, or by reading data from the flash memory chips 24 or from cache 30. A read-only memory (ROM) 36 contains firmware code of routines that execute on local processor 20 to respond to host requests. Other system-maintenance routines are stored on ROM 36, such as wear-leveling and copy-back routines.
-
Cache 30 is under firmware control by local processor 20, and thus the local processor's local memory 38 and cache 30 may share the same physical memory. Cache 30 is implemented using DRAM, although this is not intended to limit the present invention in any way. Cache 30 is used as a cache for temporarily holding the most recently accessed data from flash memory system 10. The next time the data is needed by the host 34, it may be obtained from cache 30 instead of the relatively slower flash memory 25. This technique works well in situations where the same data is repeatedly operated on, as is the case in most structures and programs, since host 34 tends to work within a small area of memory at a time in running a program. In one embodiment, cache 30 has a cache size of 32 MB for a 256 MB flash memory.
-
Local processor 20 also tracks data stored in cache 30 and can determine if the data, such as a cache block or cache sector/page, is “dirty”, or has been updated more recently than the copy of the data stored in flash memory 25. This permits host 34 to use the most recent copy of the data and, when the dirty data is to be “victimized”, or replaced, with other data, the dirty data is first written to flash memory 25 so that any changes that were made to the dirty data will be preserved. This technique is well known to those skilled in the art. This cache coherency process permits local processor 20 to determine when dirty cache data is ready to be written to flash memory 25.
-
Local processor 20 initiates write or read transactions by sending high-level commands to one of the DMA controllers 12 and 14. The DMA controller translates each command into a command sequence and transfers data to or from flash memory chips 24 through the buffer chips 26. Some flash chips also have a sequential read mode where command and address bytes need only be sent for the first page in a sequence.
-
Local processor 20 uses DMA transfers to move data between one of the DMA controllers coupled to the flash buses and cache 30. The DMA transfers may be performed by local processor 20 using program control or may be facilitated using at least one additional DMA controller (not shown) that is either integrated with local processor 20 or implemented separately and coupled to local bus 22. - Conventional flash memory, such as
flash chips 114 a through 114 d and 116 a through 116 d, operates differently from DRAM devices in many respects. For instance, a flash block selected for a write transaction must first be erased before the block can be written with data. Performing and completing an erase cycle before performing a write cycle adds an additional delay. Referring to FIG. 2, a time chart 50 is shown that depicts alternating erase and write cycles 52 as performed in conventional flash memory. The erase cycles have flash erase times 61 through 64, respectively, and the write cycles have write times 65 through 68, respectively. In a typical flash chip used by the inventors listed herewith, each erase cycle takes approximately 4 ms to perform, while each write cycle takes approximately 3 ms. Thus, the total elapsed time to perform four write transactions, for example, is the first erase cycle time 61 plus the first write cycle time 65 plus the second erase cycle time 62 plus the second write cycle time 66 plus the third erase cycle time 63 plus the third write cycle time 67 plus the fourth erase cycle time 64 plus the fourth write cycle time 68. At 4 milliseconds per erase cycle and 3 milliseconds per write cycle, the total time for four conventional erases and writes is 28 ms. - The present invention minimizes the above-described cumulative delays by determining which cache blocks need write transactions and which of the erase cycles associated with those write transactions can be performed before the associated write cycles. The erase cycles are performed as a group (in “parallel”) before the write cycles are performed. This approach reduces the total time to perform write transactions when compared to traditional methods.
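Using the approximate cycle times above (illustrative figures from the text, not measurements of any particular device), the sequential and parallel totals for four write transactions can be compared in a short sketch:

```python
# Approximate cycle times quoted above; illustrative, not measured values.
ERASE_MS = 4   # per erase cycle
WRITE_MS = 3   # per write cycle
N = 4          # four write transactions

# Conventional operation: erase and write cycles strictly alternate.
sequential_ms = N * (ERASE_MS + WRITE_MS)

# Parallel erase operation: command sequences take only microseconds, so
# all four erase cycles overlap on separate chips, and the write cycles
# that follow overlap the same way; the group then costs roughly one
# erase time plus one write time (command-sequence time is ignored here).
parallel_ms = ERASE_MS + WRITE_MS
```

At these times the comparison is 28 ms for the alternating approach versus roughly 7 ms when the cycles overlap on separate chips.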
Performing erase cycles in parallel (a “parallel erase operation”) may only be done using separate flash chips, although flash chips belonging to the same bank of flash chips are considered separate flash chips and may be erased as part of the parallel operation.
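The eligibility rule in the surrounding text, that parallel erase cycles must target distinct flash chips, could be implemented along these lines; the pending-queue format, chip labels, and block numbers are illustrative, not taken from the figures:

```python
# Sketch of choosing the first group of pending write transactions whose
# erase cycles can run in parallel: no two may target the same flash chip.
# The queue format and labels here are illustrative.
def first_eligible_group(pending):
    """pending: list of (cache_block, flash_chip) pairs."""
    group, used_chips = [], set()
    for cache_block, chip in pending:
        if chip not in used_chips:      # a chip can run only one erase at a time
            group.append(cache_block)
            used_chips.add(chip)
    return group

pending = [(106, "chip0"), (108, "chip1"), (110, "chip2"),
           (112, "chip3"), (120, "chip0")]   # 120 collides with 106's chip
group = first_eligible_group(pending)        # 120 waits for the next group
```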
- For example, FIG. 3 shows a list of cache blocks that require write transactions. Specifically, cache blocks 106, 108, 110, and 112 require write transactions to flash chips 114 a, 114 d, 116 a, and 116 c, respectively. A detector, such as DMA controller 14 or the local processor through program code, selects a set of cache blocks as eligible for write transactions that involve a parallel erase operation. Upon completion of all of the erase cycles by the respective flash chips, the DMA controller then transmits a series of write commands as a group to the erased flash chips via flash bus 18. The erased flash chips then perform write cycles on the flash blocks that were erased.
- Write transactions are then performed on the next set of cache blocks that are eligible for a parallel erase operation, such as cache blocks 112 and 120.
- In yet another embodiment of the present invention, the ideal number of dirty cache entries that triggers an erase is heuristically determined to achieve optimal performance, as would be evident to those skilled in the art. One factor in the determination is that a flash memory cell is currently capable of being cycled only a limited number of times before the erases irrevocably damage the memory cell. Thus, one objective is to minimize the number of erases and another is to spread the erases over different flash memory chips 24.
- The determination as to which of the cache blocks requiring a write transaction qualify for a group erase may be performed while the blocks are either in
cache 30 or in a pending queue which, in one embodiment of the present invention, is provided for each DMA controller coupled to a flash bus, such as DMA controller 12 and DMA controller 14. In addition, the set of blocks selected is the first eligible group that will not cause the parallel erase cycles to be performed on the same flash chip at the same time, although this is not intended to limit the present invention in any way.
- FIG. 4 is a representation of the erase command sequences for cache blocks 106, 108, 110 and 112 that are performed by
DMA controller 14, and the resulting erase cycles performed by flash chips 114 a, 114 d, 116 a and 116 c, in accordance with one embodiment of the present invention. FIG. 5 is a representation of the write command sequences performed by DMA controller 14, and the resulting write cycles 134, 136, 138 and 140 performed by the flash chips on the flash blocks of flash chips 114 a, 114 d, 116 a and 116 c that correspond to cache blocks 106, 108, 110 and 112, in accordance with one embodiment of the present invention.
- As seen in FIG. 4, erase
command sequences are sequentially launched to flash chips 114 a, 114 d, 116 a and 116 c by DMA controller 14 on flash bus 18, rendering flash bus 18 unavailable 133 during the launching of the commands. The erase command sequences are launched in response to the write transactions requested for cache blocks 106, 108, 110 and 112. Flash chips 114 a, 114 d, 116 a and 116 c receive their respective erase command sequences, and each consumes approximately four (4) milliseconds to perform an erase cycle in response to each erase command sequence. Although the erase command sequences are asserted sequentially, each erase command sequence takes only microseconds to complete, enabling the flash chips to perform the erase cycles as a group, or in parallel.
- Referring to FIG. 5, write
command sequences are sequentially launched to flash chips 114 a, 114 d, 116 a and 116 c by DMA controller 14 on flash bus 18. Like the erase command sequences above, this renders flash bus 18 busy 142 during the launching of the commands. The write command sequences are launched as part of the write transactions requested for cache blocks 106, 108, 110 and 112, and after the erase cycle that corresponds to each write cycle has been completed. Flash chips 114 a, 114 d, 116 a and 116 c receive their respective write command sequences, and each consumes approximately three (3) milliseconds to perform a write cycle in response to each write command sequence.
- Thus, although the erase and write commands are launched on
flash bus 18 sequentially, the commands consume a fraction of the time that a flash device consumes when performing an erase or write operation, since asserting the command sequences on flash bus 18 takes microseconds rather than milliseconds. This permits the flash chips to perform the erase operations essentially in parallel, greatly reducing the cumulative time to complete write transactions when compared to the example discussed in FIG. 2. Performance is further enhanced because more than one DMA controller is provided. Each DMA controller is able to launch new flash operations simultaneously, allowing different flash chips to perform separate flash operations at the same time, further increasing parallelism.
- FIG. 6 shows write command sequences that may be asserted on the same flash bus by two different DMA controllers so as to interleave the write commands in accordance with a further embodiment of the present invention. To minimize over-complicating the discussion herein, write
command sequences only are shown. Time line 142 represents write command sequences asserted on the flash bus used by DMA controller 14, which in this example is flash bus 18. Write command sequences 144 a through 144 b are asserted while flash bus 18 and DMA controller 14 are in an idle mode, such as after a write command sequence has been received by a flash chip and DMA controller 14 is waiting to receive an acknowledgement from the flash chip that it has completed a program cycle. Each write command sequence includes a program command sequence asserted on flash bus 18 that is then followed by a program sequence, such as program sequence 150 a. Program sequence 150 a preferably occurs during the same period as the occurrence of write command sequence 101 b. Thus, by interleaving write command sequences asserted by different DMA controllers on the same flash bus, bandwidth efficiency on flash bus 18 may be improved, since each write command sequence may be followed by another write command sequence. Write command sequences 144 a through 144 b correspond to previous erase command sequences (not shown).
- In operation,
local processor 20 or the DMA controller 14 detects which cache entries need to be written to flash memory, such as cache entries that are dirty, and causes an erase to be performed first to open up flash memory space, either on a schedule or on demand. An example of erasing on a schedule would be after a predetermined number of writes; an example of erasing on demand would be when a cache entry is dirty but will be delayed in being written, while the location of the stale entry in the flash memory is known so that it can be erased. This latter technique also permits entries to be moved around in the flash memory to even out the wear on the memory cells.
- Increasing parallelism as described above has a cost in that the
local processor 20 will have to spend more and more of its time managing writes and less time performing other necessary operations, which will slow down the overall operation of the flash memory system 10. To relieve the local processor 20, the present inventors suggest, in yet another embodiment of the present invention, using at least one dedicated DMA controller for transferring data between the DMA controllers attached to flash buses, such as DMA controllers 12 and 14, and cache 30. Preferably, there is one dedicated DMA controller for each DMA controller attached to a flash bus. During data transfer, the transfer is controlled by the dedicated DMA controller, obviating the need for control by a program in the local processor 20; at the end of the transfer, the relevant status can be posted to the local processor 20. When local processor 20 sends a data transfer command to the dedicated DMA controller, it specifies management information that comprises a transfer start address and a transfer count for each of the transfer source and transfer destination. The area specified in this manner is transferred in sequence as one group of data blocks.
- FIG. 7 is a process flow showing a method of performing a parallel erase operation in accordance with one embodiment of the present invention.
- At
reference 200, a plurality of entries or cache blocks that require a write transaction involving erase cycles to different flash chips are detected. - At
reference 202, the flash chips are erased approximately in parallel by asserting the erase command sequences for each write transaction in sequence on the flash bus.
- At
reference 204, write command sequences are sequentially asserted on the flash bus. Note that each write command sequence includes a program command sequence.
- In yet a further embodiment of the present invention, as shown in FIG. 8,
reference 206 may be performed by interleaving write command sequences that correspond to prior erase command sequences on the same flash bus as the write command sequences asserted in reference 204.
- While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the included claims. All matters hitherto set forth or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
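The flow of FIG. 7 (references 200 through 204) can be summarized in a small end-to-end sketch: detect the entries bound for distinct chips, issue all erase commands first so the erase cycles overlap, then issue the write command sequences. The data structures and chip labels are illustrative, not from the patent:

```python
# End-to-end sketch of the FIG. 7 flow. "flash" maps an illustrative chip
# label to a (erased, data) pair for the targeted block.
def parallel_erase_then_write(dirty, flash):
    """dirty: {chip_label: data to write}; flash: {chip_label: (erased, data)}."""
    # Reference 200: detect entries needing write transactions.
    chips = list(dirty)
    # Reference 202: assert erase commands back-to-back; the long erase
    # cycles then overlap on the separate chips.
    for chip in chips:
        flash[chip] = (True, None)           # block erased
    # Reference 204: assert the write command sequences in turn.
    for chip in chips:
        erased, _ = flash[chip]
        assert erased                        # a write must follow an erase
        flash[chip] = (False, dirty[chip])   # block programmed
    return flash

flash = {"chipA": (False, b"stale"), "chipB": (False, b"stale")}
parallel_erase_then_write({"chipA": b"A", "chipB": b"B"}, flash)
```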
Claims (20)
1. A method of memory operation comprising:
providing a memory;
providing a cache containing a plurality of entries with a plurality of the entries to be written to memory;
detecting in the cache the plurality of entries to be written to memory;
erasing a first portion of the memory to accommodate the plurality of entries to be written to memory; and
writing to the first portion of the memory the plurality of entries to be written to memory wherein an erase operation is followed by a plurality of sequential write operations.
2. The method as claimed in claim 1 wherein the detecting uses circuitry selected from a group of circuitry consisting of a local processor, a direct memory access controller, a memory-specific direct memory access controller, and a combination thereof.
3. The method as claimed in claim 1 including:
detecting a second plurality of entries to be written to memory; and
writing to the first portion of the memory the second plurality of entries to be written to memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
4. The method as claimed in claim 1 including:
detecting a second plurality of entries to be written to memory;
erasing a second portion of the memory to accommodate the second plurality of entries to be written to memory; and
writing to the second portion of the memory the second plurality of entries to be written to memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
5. The method as claimed in claim 1 wherein the erasing is performed on a basis selected from a group consisting of a schedule, a demand, and a combination thereof.
6. A method of flash memory operation comprising:
providing a flash memory;
providing a cache containing a plurality of entries with a plurality of dirty entries to be written to flash memory;
detecting in the cache the plurality of dirty entries to be written to flash memory;
erasing a first portion of the flash memory to accommodate the plurality of dirty entries to be written to flash memory; and
writing to the first portion of the flash memory the plurality of dirty entries to be written to flash memory wherein an erase operation is followed by a plurality of sequential write operations.
7. The method as claimed in claim 6 wherein the detecting uses circuitry selected from a group of circuitry consisting of a local processor, a direct memory access controller, a flash-specific direct memory access controller, and a combination thereof.
8. The method as claimed in claim 6 including:
detecting a second plurality of dirty entries to be written to flash memory; and
writing to the first portion of the flash memory the second plurality of dirty entries to be written to flash memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
9. The method as claimed in claim 6 including:
detecting a second plurality of dirty entries to be written to flash memory;
erasing a second portion of the flash memory to accommodate the second plurality of dirty entries to be written to flash memory; and
writing to the second portion of the flash memory the second plurality of dirty entries to be written to flash memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
10. The method as claimed in claim 6 wherein the erasing is performed on a basis selected from a group consisting of a schedule, a demand, and a combination thereof.
11. A memory system comprising:
a memory;
a cache containing a plurality of entries with a plurality of the entries to be written to memory;
a detector for detecting in the cache the plurality of entries to be written to memory; and
a processor for erasing a first portion of the memory to accommodate the plurality of entries to be written to memory and for writing to the first portion of the memory the plurality of entries to be written to memory wherein an erase operation is followed by a plurality of sequential write operations.
12. The memory system as claimed in claim 11 wherein the detector uses circuitry selected from a group of circuitry consisting of a local processor, a direct memory access controller, a memory-specific direct memory access controller, and a combination thereof.
13. The memory system as claimed in claim 11 wherein:
the detector detects a second plurality of entries to be written to memory; and
the processor writes to the first portion of the memory the second plurality of entries to be written to memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
14. The memory system as claimed in claim 11 wherein:
the detector detects a second plurality of entries to be written to memory; and
the processor erases a second portion of the memory to accommodate the second plurality of entries to be written to memory and writes to the second portion of the memory the second plurality of entries to be written to memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
15. The memory system as claimed in claim 11 wherein the processor performs erases on a basis selected from a group consisting of a schedule, a demand, and a combination thereof.
16. A memory system of flash memory operation comprising:
a flash memory;
a cache containing a plurality of entries with a plurality of dirty entries to be written to flash memory;
a detector for detecting in the cache the plurality of dirty entries to be written to flash memory; and
a processor for erasing a first portion of the flash memory to accommodate the plurality of dirty entries to be written to flash memory and writing to the first portion of the flash memory the plurality of dirty entries to be written to flash memory wherein an erase operation is followed by a plurality of sequential write operations.
17. The memory system as claimed in claim 16 wherein the detector uses circuitry selected from a group of circuitry consisting of a local processor, a direct memory access controller, a flash-specific direct memory access controller, and a combination thereof.
18. The memory system as claimed in claim 16 wherein:
the detector detects a second plurality of dirty entries to be written to flash memory; and
the processor writes to the first portion of the flash memory the second plurality of dirty entries to be written to flash memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
19. The memory system as claimed in claim 16 wherein:
the detector detects a second plurality of dirty entries to be written to flash memory; and
the processor erases a second portion of the flash memory to accommodate the second plurality of dirty entries to be written to flash memory and writes to the second portion of the flash memory the second plurality of dirty entries to be written to flash memory wherein a plurality of erase operations followed by a plurality of sequential write operations is performed in parallel.
20. The memory system as claimed in claim 16 wherein the processor performs erases on a basis selected from a group consisting of a schedule, a demand, and a combination thereof.
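The claims above describe a flush pipeline: erase a first flash portion, write the dirty cache entries to it sequentially, and let the erase of a second portion proceed in parallel with those writes. The following is a minimal sketch of that flow, not an implementation from the patent; all names (`FlashBlock`, `flush_dirty_entries`) and the use of a thread to model the overlapped erase are assumptions for illustration.

```python
# Hedged sketch: erase-ahead overlapped with sequential cache-flush writes.
# FlashBlock and flush_dirty_entries are hypothetical names, not from the patent.
import threading

class FlashBlock:
    def __init__(self, size):
        self.size = size
        self.data = [None] * size
        self.erased = False

    def erase(self):
        # Block erase is the slow operation on real flash devices.
        self.data = [0xFF] * self.size
        self.erased = True

    def write(self, offset, value):
        assert self.erased, "flash must be erased before programming"
        self.data[offset] = value

def flush_dirty_entries(dirty_entries, blocks):
    """Write dirty cache entries block by block, erasing the next block
    in parallel with the sequential writes to the current one."""
    block_size = blocks[0].size
    chunks = [dirty_entries[i:i + block_size]
              for i in range(0, len(dirty_entries), block_size)]
    blocks[0].erase()                      # first erase must complete up front
    for i, chunk in enumerate(chunks):
        eraser = None
        if i + 1 < len(chunks):            # overlap: start erasing the next block
            eraser = threading.Thread(target=blocks[i + 1].erase)
            eraser.start()
        for offset, value in enumerate(chunk):   # sequential write operations
            blocks[i].write(offset, value)
        if eraser:
            eraser.join()                  # next block is erased before its writes

blocks = [FlashBlock(4) for _ in range(3)]
flush_dirty_entries(list(range(10)), blocks)
print(blocks[0].data)  # [0, 1, 2, 3]
print(blocks[2].data)  # [8, 9, 255, 255]
```

The point of the overlap is that the erase latency of block *N+1* is hidden behind the program time of block *N*, which is the parallelism the claims recite.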
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/157,541 US20020141244A1 (en) | 2000-11-30 | 2002-05-28 | Parallel erase operations in memory systems |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US25064200P | 2000-11-30 | 2000-11-30 | |
US09/819,423 US6529416B2 (en) | 2000-11-30 | 2001-03-27 | Parallel erase operations in memory systems |
US10/157,541 US20020141244A1 (en) | 2000-11-30 | 2002-05-28 | Parallel erase operations in memory systems |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/819,423 Continuation US6529416B2 (en) | 2000-11-30 | 2001-03-27 | Parallel erase operations in memory systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020141244A1 true US20020141244A1 (en) | 2002-10-03 |
Family
ID=26941028
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/819,423 Expired - Lifetime US6529416B2 (en) | 2000-11-30 | 2001-03-27 | Parallel erase operations in memory systems |
US10/157,541 Abandoned US20020141244A1 (en) | 2000-11-30 | 2002-05-28 | Parallel erase operations in memory systems |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/819,423 Expired - Lifetime US6529416B2 (en) | 2000-11-30 | 2001-03-27 | Parallel erase operations in memory systems |
Country Status (4)
Country | Link |
---|---|
US (2) | US6529416B2 (en) |
EP (1) | EP1410400A4 (en) |
AU (1) | AU2002245112A1 (en) |
WO (1) | WO2002057995A2 (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7190617B1 (en) * | 1989-04-13 | 2007-03-13 | Sandisk Corporation | Flash EEprom system |
US6763424B2 (en) * | 2001-01-19 | 2004-07-13 | Sandisk Corporation | Partial block data programming and reading operations in a non-volatile memory |
KR20040022451A (en) * | 2001-07-16 | 2004-03-12 | 유킹 렌 | Embedded software update system |
US7356641B2 (en) * | 2001-08-28 | 2008-04-08 | International Business Machines Corporation | Data management in flash memory |
US20030046482A1 (en) * | 2001-08-28 | 2003-03-06 | International Business Machines Corporation | Data management in flash memory |
US20040128414A1 (en) * | 2002-12-30 | 2004-07-01 | Rudelic John C. | Using system memory as a write buffer for a non-volatile memory |
US7603488B1 (en) * | 2003-07-15 | 2009-10-13 | Alereon, Inc. | Systems and methods for efficient memory management |
US7116584B2 (en) * | 2003-08-07 | 2006-10-03 | Micron Technology, Inc. | Multiple erase block tagging in a flash memory device |
DE102004040296B3 (en) * | 2004-08-19 | 2006-03-02 | Giesecke & Devrient Gmbh | Write data to a nonvolatile memory of a portable data carrier |
US7502256B2 (en) * | 2004-11-30 | 2009-03-10 | Siliconsystems, Inc. | Systems and methods for reducing unauthorized data recovery from solid-state storage devices |
DE102004058528B3 (en) * | 2004-12-04 | 2006-05-04 | Hyperstone Ag | Memory system for reading and writing logical sector, has logical sectors for communication with host system are buffered in sector buffers and assigned by direct-flash-access-units between sector buffers and flash memory chips |
US7882299B2 (en) * | 2004-12-21 | 2011-02-01 | Sandisk Corporation | System and method for use of on-chip non-volatile memory write cache |
US7362611B2 (en) * | 2005-08-30 | 2008-04-22 | Micron Technology, Inc. | Non-volatile memory copy back |
JP2008033412A (en) * | 2006-07-26 | 2008-02-14 | Hitachi Ltd | Computer system performance management method, management computer, and storage device |
JP4452261B2 (en) * | 2006-09-12 | 2010-04-21 | 株式会社日立製作所 | Storage system logical volume management method, logical volume management program, and storage system |
US8745315B2 (en) * | 2006-11-06 | 2014-06-03 | Rambus Inc. | Memory Systems and methods supporting volatile and wear-leveled nonvolatile physical memory |
US7589694B2 (en) * | 2007-04-05 | 2009-09-15 | Shakespeare Company, Llc | Small, narrow profile multiband antenna |
JP5075761B2 (en) | 2008-05-14 | 2012-11-21 | 株式会社日立製作所 | Storage device using flash memory |
JP5216463B2 (en) * | 2008-07-30 | 2013-06-19 | 株式会社日立製作所 | Storage device, storage area management method thereof, and flash memory package |
TWI424435B (en) * | 2009-08-31 | 2014-01-21 | Phison Electronics Corp | Method for giving program commands to flash memory chips, and controller and storage system using the same |
JP4746699B1 (en) * | 2010-01-29 | 2011-08-10 | 株式会社東芝 | Semiconductor memory device and control method thereof |
GB2491774B (en) | 2010-04-12 | 2018-05-09 | Hewlett Packard Development Co | Authenticating clearing of non-volatile cache of storage device |
US9772777B2 (en) * | 2015-04-27 | 2017-09-26 | Southwest Research Institute | Systems and methods for improved access to flash memory devices |
KR102602694B1 (en) * | 2015-12-15 | 2023-11-15 | 삼성전자주식회사 | Method for operating storage controller and method for operating storage device including same |
CN111863091B (en) * | 2019-04-29 | 2022-07-08 | 北京兆易创新科技股份有限公司 | Method and device for controlling erasing performance |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5703823A (en) * | 1994-03-22 | 1997-12-30 | International Business Machines Corporation | Memory device with programmable self-refreshing and testing methods therefore |
US5822251A (en) * | 1997-08-25 | 1998-10-13 | Bit Microsystems, Inc. | Expandable flash-memory mass-storage using shared buddy lines and intermediate flash-bus between device-specific buffers and flash-intelligent DMA controllers |
US5956743A (en) * | 1997-08-25 | 1999-09-21 | Bit Microsystems, Inc. | Transparent management at host interface of flash-memory overhead-bytes using flash-specific DMA having programmable processor-interrupt of high-level operations |
US6000006A (en) * | 1997-08-25 | 1999-12-07 | Bit Microsystems, Inc. | Unified re-map and cache-index table with dual write-counters for wear-leveling of non-volatile flash RAM mass storage |
US6842376B2 (en) * | 2000-11-08 | 2005-01-11 | Renesas Technology Corporation | Non-volatile semiconductor memory device for selectively re-checking word lines |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0618535B1 (en) * | 1989-04-13 | 1999-08-25 | SanDisk Corporation | EEPROM card with defective cell substitution and cache memory |
JP3594626B2 (en) * | 1993-03-04 | 2004-12-02 | 株式会社ルネサステクノロジ | Non-volatile memory device |
JP2000057039A (en) * | 1998-08-03 | 2000-02-25 | Canon Inc | Method and device for controlling access, file system and information processor |
CN1691331A (en) * | 1999-02-01 | 2005-11-02 | 株式会社日立制作所 | Semiconductor integrated circuit device |
2001
- 2001-03-27 US US09/819,423 patent/US6529416B2/en not_active Expired - Lifetime
- 2001-11-29 AU AU2002245112A patent/AU2002245112A1/en not_active Abandoned
- 2001-11-29 EP EP01993262A patent/EP1410400A4/en not_active Withdrawn
- 2001-11-29 WO PCT/US2001/048085 patent/WO2002057995A2/en not_active Application Discontinuation

2002
- 2002-05-28 US US10/157,541 patent/US20020141244A1/en not_active Abandoned
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7610438B2 (en) | 2000-01-06 | 2009-10-27 | Super Talent Electronics, Inc. | Flash-memory card for caching a hard disk drive with data-area toggling of pointers stored in a RAM lookup table |
US20070118688A1 (en) * | 2000-01-06 | 2007-05-24 | Super Talent Electronics Inc. | Flash-Memory Card for Caching a Hard Disk Drive with Data-Area Toggling of Pointers Stored in a RAM Lookup Table |
US20090124273A1 (en) * | 2006-04-25 | 2009-05-14 | Eberhard Back | Method and Arrangement for Providing at Least One Piece of Information to a User Mobile Communication Device |
US20070288686A1 (en) * | 2006-06-08 | 2007-12-13 | Bitmicro Networks, Inc. | Optimized placement policy for solid state storage devices |
US7506098B2 (en) * | 2006-06-08 | 2009-03-17 | Bitmicro Networks, Inc. | Optimized placement policy for solid state storage devices |
US20120260026A1 (en) * | 2007-04-25 | 2012-10-11 | Cornwell Michael J | Merging command sequences for memory operations |
US9075763B2 (en) * | 2007-04-25 | 2015-07-07 | Apple Inc. | Merging command sequences for memory operations |
US8959307B1 (en) | 2007-11-16 | 2015-02-17 | Bitmicro Networks, Inc. | Reduced latency memory read transactions in storage devices |
US10120586B1 (en) | 2007-11-16 | 2018-11-06 | Bitmicro, Llc | Memory transaction with reduced latency |
US10149399B1 (en) | 2009-09-04 | 2018-12-04 | Bitmicro Llc | Solid state drive with improved enclosure assembly |
US9135190B1 (en) | 2009-09-04 | 2015-09-15 | Bitmicro Networks, Inc. | Multi-profile memory controller for computing devices |
US20110161568A1 (en) * | 2009-09-07 | 2011-06-30 | Bitmicro Networks, Inc. | Multilevel memory bus system for solid-state mass storage |
US20130246694A1 (en) * | 2009-09-07 | 2013-09-19 | Bitmicro Networks, Inc. | Multilevel Memory Bus System For Solid-State Mass Storage |
US10133686B2 (en) | 2009-09-07 | 2018-11-20 | Bitmicro Llc | Multilevel memory bus system |
US8788725B2 (en) * | 2009-09-07 | 2014-07-22 | Bitmicro Networks, Inc. | Multilevel memory bus system for solid-state mass storage |
US8447908B2 (en) * | 2009-09-07 | 2013-05-21 | Bitmicro Networks, Inc. | Multilevel memory bus system for solid-state mass storage |
US9484103B1 (en) * | 2009-09-14 | 2016-11-01 | Bitmicro Networks, Inc. | Electronic storage device |
US9099187B2 (en) * | 2009-09-14 | 2015-08-04 | Bitmicro Networks, Inc. | Reducing erase cycles in an electronic storage device that uses at least one erase-limited memory device |
US8560804B2 (en) | 2009-09-14 | 2013-10-15 | Bitmicro Networks, Inc. | Reducing erase cycles in an electronic storage device that uses at least one erase-limited memory device |
US10082966B1 (en) | 2009-09-14 | 2018-09-25 | Bitmicro Llc | Electronic storage device |
US20110113186A1 (en) * | 2009-09-14 | 2011-05-12 | Bitmicro Networks, Inc. | Reducing erase cycles in an electronic storage device that uses at least one erase-limited memory device |
US9372755B1 (en) | 2011-10-05 | 2016-06-21 | Bitmicro Networks, Inc. | Adaptive power cycle sequences for data recovery |
US10180887B1 (en) | 2011-10-05 | 2019-01-15 | Bitmicro Llc | Adaptive power cycle sequences for data recovery |
US9043669B1 (en) | 2012-05-18 | 2015-05-26 | Bitmicro Networks, Inc. | Distributed ECC engine for storage media |
US9996419B1 (en) | 2012-05-18 | 2018-06-12 | Bitmicro Llc | Storage system with distributed ECC capability |
US9423457B2 (en) | 2013-03-14 | 2016-08-23 | Bitmicro Networks, Inc. | Self-test solution for delay locked loops |
US9977077B1 (en) | 2013-03-14 | 2018-05-22 | Bitmicro Llc | Self-test solution for delay locked loops |
US9798688B1 (en) | 2013-03-15 | 2017-10-24 | Bitmicro Networks, Inc. | Bus arbitration with routing and failover mechanism |
US9672178B1 (en) | 2013-03-15 | 2017-06-06 | Bitmicro Networks, Inc. | Bit-mapped DMA transfer with dependency table configured to monitor status so that a processor is not rendered as a bottleneck in a system |
US9858084B2 (en) | 2013-03-15 | 2018-01-02 | Bitmicro Networks, Inc. | Copying of power-on reset sequencer descriptor from nonvolatile memory to random access memory |
US9875205B1 (en) | 2013-03-15 | 2018-01-23 | Bitmicro Networks, Inc. | Network of memory systems |
US9916213B1 (en) | 2013-03-15 | 2018-03-13 | Bitmicro Networks, Inc. | Bus arbitration with routing and failover mechanism |
US9934045B1 (en) | 2013-03-15 | 2018-04-03 | Bitmicro Networks, Inc. | Embedded system boot from a storage device |
US9934160B1 (en) | 2013-03-15 | 2018-04-03 | Bitmicro Llc | Bit-mapped DMA and IOC transfer with dependency table comprising plurality of index fields in the cache for DMA transfer |
US10489318B1 (en) | 2013-03-15 | 2019-11-26 | Bitmicro Networks, Inc. | Scatter-gather approach for parallel data transfer in a mass storage system |
US9971524B1 (en) | 2013-03-15 | 2018-05-15 | Bitmicro Networks, Inc. | Scatter-gather approach for parallel data transfer in a mass storage system |
US10423554B1 (en) | 2013-03-15 | 2019-09-24 | Bitmicro Networks, Inc | Bus arbitration with routing and failover mechanism |
US9734067B1 (en) | 2013-03-15 | 2017-08-15 | Bitmicro Networks, Inc. | Write buffering |
US10013373B1 (en) | 2013-03-15 | 2018-07-03 | Bitmicro Networks, Inc. | Multi-level message passing descriptor |
US10210084B1 (en) | 2013-03-15 | 2019-02-19 | Bitmicro Llc | Multi-leveled cache management in a hybrid storage system |
US10042799B1 (en) | 2013-03-15 | 2018-08-07 | Bitmicro, Llc | Bit-mapped DMA transfer with dependency table configured to monitor status so that a processor is not rendered as a bottleneck in a system |
US9400617B2 (en) | 2013-03-15 | 2016-07-26 | Bitmicro Networks, Inc. | Hardware-assisted DMA transfer with dependency table configured to permit-in parallel-data drain from cache without processor intervention when filled or drained |
US9430386B2 (en) | 2013-03-15 | 2016-08-30 | Bitmicro Networks, Inc. | Multi-leveled cache management in a hybrid storage system |
US9501436B1 (en) | 2013-03-15 | 2016-11-22 | Bitmicro Networks, Inc. | Multi-level message passing descriptor |
US9720603B1 (en) | 2013-03-15 | 2017-08-01 | Bitmicro Networks, Inc. | IOC to IOC distributed caching architecture |
US10120694B2 (en) | 2013-03-15 | 2018-11-06 | Bitmicro Networks, Inc. | Embedded system boot from a storage device |
US9842024B1 (en) | 2013-03-15 | 2017-12-12 | Bitmicro Networks, Inc. | Flash electronic disk with RAID controller |
US10078604B1 (en) | 2014-04-17 | 2018-09-18 | Bitmicro Networks, Inc. | Interrupt coalescing |
US10055150B1 (en) | 2014-04-17 | 2018-08-21 | Bitmicro Networks, Inc. | Writing volatile scattered memory metadata to flash device |
US10042792B1 (en) | 2014-04-17 | 2018-08-07 | Bitmicro Networks, Inc. | Method for transferring and receiving frames across PCI express bus for SSD device |
US10025736B1 (en) | 2014-04-17 | 2018-07-17 | Bitmicro Networks, Inc. | Exchange message protocol message transmission between two devices |
US9811461B1 (en) | 2014-04-17 | 2017-11-07 | Bitmicro Networks, Inc. | Data storage system |
US9952991B1 (en) | 2014-04-17 | 2018-04-24 | Bitmicro Networks, Inc. | Systematic method on queuing of descriptors for multiple flash intelligent DMA engine operation |
US10552050B1 (en) | 2017-04-07 | 2020-02-04 | Bitmicro Llc | Multi-dimensional computer storage system |
Also Published As
Publication number | Publication date |
---|---|
EP1410400A4 (en) | 2006-04-26 |
WO2002057995A2 (en) | 2002-07-25 |
WO2002057995A3 (en) | 2003-05-01 |
US20020097594A1 (en) | 2002-07-25 |
EP1410400A2 (en) | 2004-04-21 |
US6529416B2 (en) | 2003-03-04 |
AU2002245112A1 (en) | 2002-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6529416B2 (en) | Parallel erase operations in memory systems | |
KR101300657B1 (en) | Memory system having nonvolatile memory and buffer memory and data read method thereof | |
US10572391B2 (en) | Methods and apparatus for implementing a logical to physical address mapping in a solid state drive | |
USRE49921E1 (en) | Memory device and controlling method of the same | |
CN100483366C (en) | Flash controller cache architecture | |
EP1242868B1 (en) | Organization of blocks within a nonvolatile memory unit to effectively decrease sector write operation time | |
US8966231B2 (en) | Modifying commands | |
US8341374B2 (en) | Solid state drive and related method of scheduling operations | |
US8356134B2 (en) | Memory device with non-volatile memory buffer | |
JP5728672B2 (en) | Hybrid memory management | |
US7076598B2 (en) | Pipeline accessing method to a large block memory | |
KR100610647B1 (en) | Mass storage device with direct execution control and storage | |
KR101056560B1 (en) | Method and device for programming buffer cache in solid state disk system | |
US20080294814A1 (en) | Flash Memory System with Management of Housekeeping Operations | |
US20050223158A1 (en) | Flash memory system with a high-speed flash controller | |
US20070028035A1 (en) | Storage device, computer system, and storage system | |
CN101479806B (en) | Method and apparatus for improving storage performance using a background erase | |
TW201517051A (en) | Hybrid solid-state memory system having volatile and non-volatile memory | |
JP2008524748A (en) | Data relocation in memory systems | |
US20080002469A1 (en) | Non-volatile memory | |
US20100287332A1 (en) | Data storing system, data storing method, executing device, control method thereof, control device, and control method thereof | |
KR20210063724A (en) | Memory system | |
CN112256203B (en) | Writing method, device, equipment, medium and system of FLASH memory | |
US11023370B2 (en) | Memory system having a plurality of memory chips and method for controlling power supplied to the memory chips | |
CN116610596B (en) | Memory device and data processing method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |