
WO1995001600A1 - Predictive disk cache system (Système d'antémémoire à disque prédictif) - Google Patents

Predictive disk cache system

Info

Publication number
WO1995001600A1
WO1995001600A1 PCT/US1994/007882
Authority
WO
WIPO (PCT)
Prior art keywords
cache
predictive
ram
read
disk
Prior art date
Application number
PCT/US1994/007882
Other languages
English (en)
Inventor
Pascal Dornier
Original Assignee
Oakleigh Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oakleigh Systems, Inc. filed Critical Oakleigh Systems, Inc.
Publication of WO1995001600A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with prefetch
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/31 - Providing disk cache in a specific location of a storage system
    • G06F 2212/311 - In host system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/60 - Details of cache memory
    • G06F 2212/6024 - History based prefetching

Definitions

  • The present invention is in the area of memory caching for digital mass-storage systems, and pertains in particular to a system for optimizing disk-intensive computer operations.
  • the invention uses a predictive cache to improve both actual and perceived performance.
  • A disk cache provides a place to store, in dynamic random access memory (DRAM) or static random access memory (SRAM), digital information on its way to or from a digital storage unit. DRAM and SRAM may be collectively referred to as random access memory (RAM).
  • the disk cache serves the memory requirements of a computer's central processing unit (CPU).
  • Disk caching software provides a temporary, fast path for transferring information used by computer software applications. For the computer to process information, both the software application and the data used by the application must reside in the computer's RAM. The data to be manipulated and altered by the application typically originates on media inside a digital storage device such as a hard disk drive, CD-ROM, floppy-optical, or floppy drive.
  • RAM is the first location that disk-cached information is sent to on its way to the central processing unit (CPU). All RAM is volatile, which means that all data, including cached data, is lost when the computer is turned off or re-booted. Disk caching software typically starts anew when the computer is restarted, and any previously established disk caches are emptied.
  • the typical mass digital storage device is a hard disk drive, and descriptions of caching systems in this specification will make use of descriptions of operations of hard disk drives.
  • Other types of storage drives may be used however, such as CD-ROM as mentioned above.
  • the peripheral disk drive can be the single most expensive component in a computer system.
  • the basic design of these drives consists of multiple layers of fast revolving magnetic disks with multiple magnetic reading and writing heads floating on a cushion of air a few millionths of an inch above each disk surface.
  • the critical engineering parameters of disk drive construction include: stepper motor actuation of the moving heads, an atmospherically controlled environment, and protection from shock due to external forces.
  • Each magnetically coated disk has matching concentric tracks, vertically aligned into cylinders, with interleaved sectors. This low-level formatting optimizes the clustering of fields for individual application programs, which increases access speeds by the read and write heads.
  • The primary purpose of a disk cache is to give the central processing unit faster access to data stored on the peripheral device by storing that data temporarily in RAM, either in main system RAM or in auxiliary disk-controller RAM. Without a RAM cache, information-processing performance is limited by the average access speeds of modern hard disk drives. The average access time of DRAM is as much as 200,000 times shorter than that of a hard disk drive, and because of the natural physical limitations of all the moving parts within a hard drive, the performance gap between RAM and hard-drive technology will likely widen in the future. Throughout the development of computer processing, CPU clock speeds have increased along with data transfer rates, corresponding to design improvements in bus architecture; these system improvements demand faster hard-drive operation. One logical, cost-effective solution is to invent better disk caching systems.
  • A disk cache intercepts requests for data issued by the host computer's operating system and first checks cache RAM to see if the requested file is already there. If it is not, the hard drive is accessed and a copy of the requested file is also placed in a predetermined cache RAM location, either in system RAM or in dedicated cache RAM on a disk controller device.
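The intercept-and-populate read path described above can be sketched minimally in Python. This is an illustrative sketch only; the `DiskCache` class and `read_sector` name are assumptions for the example, not from the patent.

```python
# Minimal sketch of the basic disk-cache read path: check cache RAM
# first, and on a miss fetch from the "disk" and keep a copy in cache.

class DiskCache:
    def __init__(self, disk):
        self.disk = disk          # stand-in for the drive: sector -> data
        self.cache = {}           # cache RAM: sector -> data
        self.hits = 0
        self.misses = 0

    def read_sector(self, sector):
        if sector in self.cache:      # cache hit: no disk access needed
            self.hits += 1
            return self.cache[sector]
        self.misses += 1              # cache miss: access the drive
        data = self.disk[sector]
        self.cache[sector] = data     # keep a copy for future requests
        return data

disk = {0: b"boot", 1: b"fat", 2: b"data"}
c = DiskCache(disk)
c.read_sector(1)            # miss: loaded from the drive
c.read_sector(1)            # hit: served from cache RAM
print(c.hits, c.misses)     # 1 1
```

The same structure applies whether the cache lives in system RAM or in dedicated RAM on a controller; only where `self.cache` resides changes.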
  • In a software cache, a resident program in the computer's main memory (RAM) manages the information.
  • a hardware disk cache uses a separate small processor and separate RAM to control data flow.
  • Disk cache design is best served by a balanced algorithm that takes into consideration the size of the cache and what to have residing in it at any point in time.
  • Reading from a hard drive accounts for about 90% of a disk cache's activity, while writing to a hard drive takes up about 10% of its duties.
  • An example of a state-of-the-art disk cache is a four-set-associative read-ahead algorithm combined with a defer-write or elevator-write algorithm.
  • The cache compares read requests against four fully associative mini-caches, each assigned to an area of the disk. If a requested file is not there, the cache reads ahead, copying the requested file along with predetermined adjacent sectors (or whole tracks, if the cache is big enough) until that mini-cache fills up.
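The four-region read-ahead scheme above can be sketched roughly as follows. The region count, read-ahead depth, disk size, and all names here are illustrative assumptions, not the patent's own parameters.

```python
# Sketch of four fully associative "mini-caches", each covering one
# quarter of the disk; a miss prefetches a few adjacent sectors.

NUM_SETS = 4
READ_AHEAD = 2        # assumed read-ahead depth, for illustration
DISK_SECTORS = 64

def set_for(sector):
    # Assign each quarter of the disk to its own mini-cache.
    return sector * NUM_SETS // DISK_SECTORS

mini_caches = [dict() for _ in range(NUM_SETS)]

def read(sector, disk):
    cache = mini_caches[set_for(sector)]
    if sector in cache:
        return cache[sector], True                 # hit
    # Miss: fetch the sector plus adjacent sectors into this mini-cache.
    for adj in range(sector, min(sector + 1 + READ_AHEAD, DISK_SECTORS)):
        cache[adj] = disk[adj]
    return cache[sector], False

disk = {n: f"sector-{n}" for n in range(DISK_SECTORS)}
_, hit1 = read(10, disk)   # miss: prefetches sectors 10..12
_, hit2 = read(11, disk)   # hit, thanks to the read-ahead
print(hit1, hit2)          # False True
```

The read-ahead pays off whenever accesses are sequential, which is exactly the pattern the defer-write/elevator-write side also exploits.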
  • Cache Hit Ratio: the percentage of requests found in the disk cache.
  • Hit Speed: how quickly a request is satisfied when the data is found in the cache.
  • Miss Speed: how quickly a request is satisfied when the data must be fetched from the disk.
  • a good disk caching program can decrease wait times in practically all cases, which in turn increases performance.
  • A 90% cache hit ratio makes the disk appear to operate roughly 10 times faster than it would without the cache.
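The 10x figure follows from the standard effective-access-time formula, assuming a cache hit costs essentially nothing compared with a disk access; a quick check:

```python
# Average access time with a cache:
#   t_avg = hit_ratio * t_hit + (1 - hit_ratio) * t_disk
# The apparent speedup is t_disk / t_avg.

def effective_speedup(hit_ratio, t_hit_ms, t_disk_ms):
    t_avg = hit_ratio * t_hit_ms + (1 - hit_ratio) * t_disk_ms
    return t_disk_ms / t_avg

# 90% hits, near-zero RAM latency versus an assumed 10 ms disk access:
print(round(effective_speedup(0.90, 0.0, 10.0)))   # -> 10
```

With a realistic nonzero hit time the speedup is somewhat less than 10x, which is why reducing miss time matters as much as raising the hit ratio.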
  • A predictive cache system for optimizing read/write operations from and to a non-volatile mass storage device connected on a host computer's bus comprises control means for operating the predictive disk cache system, RAM cache means in communication with the host computer's bus for temporary storage of files fetched from and to be written to the non-volatile mass storage device, and non-volatile sequence table means for storing sequence histories of the read/write operations from and to the non-volatile mass storage device.
  • the control means is configured to write the sequence histories to the non-volatile sequence table means during operation, and to follow sequences recorded thereon in performing the read/write operations.
  • the cache is implemented in system RAM and sequence tables are stored on the optimized disk drive.
  • hardware caches may be employed.
  • sequence tables are selectively associated with specific operating systems or application programs, and sequence information for loading start-up files is recorded in the sequence tables to facilitate start-up of the applications.
  • The predictive cache system can outperform existing disk cache systems in the following areas: 1) operate in conjunction with present disk-caching software, defaulting to a "random access mode" that establishes control routines for the highest possible rate of cache hits; 2) maintain the cache-hit-ratio performance level even in cluttered, fragmented hard-disk environments; 3) reduce the size, and therefore the cost, of the required disk cache, freeing up memory resources; 4) reduce seek time and miss time through established sequence patterns loaded into cache memory at system startup; 5) reduce overall component wear and power requirements by eliminating unnecessary disk-drive head movements; 6) maintain closer performance parity with advancements in CPU and bus-system design; 7) reduce start-up time for applications.
  • Fig. 1 is a block diagram of an embodiment of the present invention.
  • Fig. 2 is a diagram of steps as performed in an embodiment of the invention.
  • Fig. 3 is a logic flow diagram of operation according to an embodiment of the present invention.
  • the present invention is a predictive disk cache system for reducing execution time of hard drive-intensive operations, such as Windows startup, to improve computer system performance.
  • Most applications, such as Windows, typically start up in the same sequence, accessing a large number of disk files, with most files being accessed just once.
  • a conventional disk cache system such as SMARTDRV or caching disk adapters such as Mylex cannot improve performance beyond gains achieved by a simple read-ahead sequence as described above in the "Background" section.
  • Currently available disk caching systems have no predictive abilities to determine what data is required next by a particular software application.
  • Requested data is tracked for patterns, such as: which sector is most likely to be read after this one, and, if known, which is read after that, and so on.
  • the Predictive Cache System uses adaptive control routines to increase probabilities of a "cache hit". Tracking sequence data is accumulated in run time and is stored with updates in a sequence table in a storage device, and read into the Predictive Cache System memory on starting the computer. This ability, unlike conventional disk caches which start anew each time the system is turned off, significantly speeds up application loading procedures. Even when there are short durations of 100% disk accessing activities, predominantly on starting complex applications, the Predictive Cache System will improve performance by sequentially reading disk sectors to be loaded, in reordered, optimized sequence.
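A minimal sketch of this run-time sequence tracking and of persisting it across reboots, which is what distinguishes the scheme from conventional caches that start anew. The JSON file format and all function names are assumptions chosen for illustration, not the patent's implementation.

```python
# Record, for each sector read, which sector was requested next, and
# persist the resulting table so it survives a reboot.

import json
import os
import tempfile

def record_accesses(sectors, table=None):
    """Accumulate next-sector counts from an observed access stream."""
    table = table if table is not None else {}
    for cur, nxt in zip(sectors, sectors[1:]):
        table.setdefault(str(cur), {})
        table[str(cur)][str(nxt)] = table[str(cur)].get(str(nxt), 0) + 1
    return table

def save_table(table, path):
    # In the patent this write happens at very low priority, during
    # disk null times; here it is a plain file write.
    with open(path, "w") as f:
        json.dump(table, f)

def load_table(path):
    with open(path) as f:
        return json.load(f)

table = record_accesses([5, 9, 12, 5, 9, 30])
path = os.path.join(tempfile.mkdtemp(), "seqtable.json")
save_table(table, path)
restored = load_table(path)
print(restored["5"])    # {'9': 2}: sector 9 always followed sector 5
```

On the next start, `load_table` would seed the cache with the learned sequences instead of starting cold.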
  • the present embodiment of the invention has the ability to read large recognized blocks of needed data files before they are required.
  • sequence tables stored contain, for example, the following information for each allocation unit on the disk drive to be accessed.
  • Each allocation unit has a corresponding entry in the sequence table, in a permanent file preferably maintained on the drive accessed by the caching system.
  • The tables are updated within the definition of custom control routines. These updates are rewritten to the table on the drive at very low priority so they don't impact system performance.
  • Fig. 1 is a block diagram illustrating an embodiment of the present invention.
  • the operating control routines 19 for the Predictive Cache System reside in system RAM 17, in communication with system bus 15.
  • System CPU 13 also communicates on the bus, and a peripheral storage device 11, in this case a hard disk drive, resides on the bus as well.
  • Storage device 11 is the object of the Predictive Disk Cache system in this embodiment.
  • When the Predictive Cache System intercepts a request for data files from CPU 13, it checks its cache 21 first for the requested files. If the files are not in the cache, it updates its sequence table 23 in RAM and immediately forwards the request to storage device 11. Sequence table 23 in RAM is a temporary copy of a sequence table 25 written to drive 11, as described below.
  • the requested files are located and provided to CPU 13 for processing according to the algorithms of the application program.
  • the Predictive Cache System updates sequence table 25 on the drive.
  • the sequence table is updated first in cache RAM and then later on the storage device.
  • the table is identical at both locations 23 and 25 other than the differences related to the timed delay of multiple updates.
  • sequence table 25 comprises lists of startup files 27 in needed order for a particular application, and sequence information 29 for a particular sector n on device 11.
  • the Predictive Cache System using past requests can predict future needs in each allocation unit.
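Such prediction from past requests could, for example, follow the most frequently observed successor of each sector to build a prefetch chain. This is a hedged sketch under assumed table layout; the patent does not specify this exact algorithm.

```python
# Walk a next-sector count table, always taking the most frequently
# observed successor, to predict a chain of sectors worth prefetching.

def predict_chain(table, start, depth):
    chain, cur = [], start
    for _ in range(depth):
        successors = table.get(cur)
        if not successors:
            break                                   # no history: stop
        cur = max(successors, key=successors.get)   # most likely next sector
        chain.append(cur)
    return chain

# Toy history: sector 5 is usually followed by 9, and 9 by 12.
table = {5: {9: 3, 7: 1}, 9: {12: 2}}
print(predict_chain(table, 5, 4))   # [9, 12]
```

The chain would then be handed to the read-ahead path so the predicted sectors are already in cache RAM when the application asks for them.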
  • the sequence table is particular in this embodiment for a particular application, and at startup of any application, the proper sequence table is accessed and updated.
  • Fig. 2 is a flow diagram illustrating steps in disk caching according to an embodiment of the Predictive Cache System.
  • the flow diagram assumes the existing computer system already has a software and/or hardware cache installed.
  • the Predictive Cache System works in conjunction with the existing disk cache software.
  • a configuration utility (step 12) is provided for a user to set algorithm priorities and input associated system wide caches and hardware configurations. The user can return at any time to the configuration program, to change hardware or software addresses as well as to customize overall cache algorithms, such as: cache size, application shared memory allocation ratios; partitioning of the set associations, LAN network nodal assignments and priorities, and system wide defaults.
  • Options include a number of suggested control routines and their related intentions, each suited for a particular system use. Along with them, an associated chart gives the user a past performance "cache hit" ratio for each of the previously tried sequence table routines. Set-up also establishes batch files inside command routines at "boot- up", as well as establishing sub-directories needed for the Predictive Cache System. An optional utility benchmark program displays current performance data related to cache hits, hit speeds and miss speeds.
  • the computer is powered up with the Predictive Cache System already configured into the system.
  • The Predictive Cache System is loaded next at step 16, so it can manipulate existing disk-caching software batch routines. This assures optimization by not allocating valuable system memory to two or three caches (three in the case of both an existing hardware and software cache), and also saves redundant "hit and miss" times.
  • the Predictive Cache System scans for startup execution programs in the loading batch file (such as those typically found in the DOS system's AUTOEXEC.BAT) of the operating system.
  • a sequence table located on the drive is available, having been prepared according to previous operations. This file is now read for the order of operations to proceed. Startup files according to the sequence table are then loaded into cache memory to be compared to disk reading requests.
  • the accessing priorities are set from the sequence table in cache memory, then programs are loaded to system RAM accordingly.
  • The predetermined startup program files can be loaded quickly using an elevator-read routine established previously at function 16, when loaded at disk null times.
  • sequences established previously in the Predictive Cache System's sequence tables further facilitate orderly access to requested files according to relative locations on the hard disk.
  • the Predictive Cache System dumps them from cache RAM at step 24.
  • a new configuration of cache RAM allocations is assigned to individual portions of the disk drive. This best serves both cache RAM memory utilization and the predictive control routines.
  • At step 26 the Predictive Cache System proceeds as follows:
  • a flag at step 28 alerts the predictive disk caching system to default to an adaptive replacement of the least recently used (LRU) sequence set of data files.
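The least-recently-used fallback at step 28 can be sketched with Python's `OrderedDict`; this is an illustrative sketch of LRU replacement in general, not the patent's implementation.

```python
# LRU replacement: when the cache is full, evict whichever entry was
# touched longest ago. OrderedDict keeps entries in access order.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # sector -> data, oldest first

    def access(self, sector, data):
        if sector in self.entries:
            self.entries.move_to_end(sector)   # mark most recently used
        self.entries[sector] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the LRU entry

c = LRUCache(2)
c.access(1, "a")
c.access(2, "b")
c.access(1, "a")            # touch 1, so 2 becomes least recently used
c.access(3, "c")            # evicts sector 2
print(sorted(c.entries))    # [1, 3]
```

In the patent's scheme the unit evicted would be a whole sequence set of files rather than a single sector, but the replacement rule is the same.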
  • the system writes back all program requests within parameters of minimum head movement to include updates to the sequence tables at step 30.
  • the system can default at any time after step 24 to the previously established caching software (step 32) if a predetermined randomness prevails on overall system disk seeking.
  • Fig. 2 represents one embodiment of the invention. In other embodiments the order of steps may be different.
  • Fig. 3 is a logic flow diagram illustrating operations in an embodiment of the Predictive Cache System.
  • a request is received at function 41 and checked if it is a "read” or "write” request at decision 43. If the request is for a "read” file, the system checks the cache for the file at decision 45.
  • The cache can be a combination of the original system cache (for random-access requests) and the predictive cache, or the predictive cache alone; either may be selected in configuration. If the cache doesn't hold the requested file, the file is read immediately from the hard drive at function 47, at high priority. The requested file is noted in a predictive algorithm at function 48 and the cache sequence table is updated in RAM at function 49.
  • the system defaults to an existing disk cache routine at function 51. If at decision 53 the disk is not busy, the new sequence is written to the hard drive at function 57 during null periods of disk drive activity. If the disk is busy, control loops back and continues to check for busy until a window opens.
  • the file already resides in the disk cache at decision 45 then it is immediately read to the operating system at function 59 and the predictive cache algorithm is updated at function 61.
  • The system requests a read-ahead at decision 63 on a medium-priority level of disk operating activity. This is an operation of the Predictive Cache System according to the present embodiment. If, after a pre-selected number of time frames, the program isn't loaded to the cache, the system defaults and stops trying at function 73. When a priority window opens, the decision at 63 is yes, and the system checks how full the disk cache is at decision 65.
  • The Predictive Cache System can vary the size and number of files to dump with the size of the request from the sequence tables. If the cache is filling, the adaptive algorithm described above can select "blocks" of sequence files, or random blocks, based on the least recently used block at function 67. If there is room in the cache, the file or sequence of files is read into the cache at function 71.
  • The invention first checks hard-disk activity at decision 81 before writing to the disk at function 87. Data associated with write requests is written on an elevator-deferred basis in the write algorithm and is saved to cache RAM at function 83 until hard-disk activity is low. The system continues to check for disk activity and copies to the drive when a window opens. In an alternative embodiment, the priority for writing can be raised to guarantee system integrity in case of a power failure or system lock-up.
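The elevator-deferred write policy described here (buffer writes in cache RAM, then flush them in sector order during idle periods so the head sweeps in one direction) might be sketched as follows; all names are illustrative assumptions.

```python
# Deferred elevator writes: hold writes in cache RAM while the disk is
# busy, then flush them sorted by sector number when a window opens.

pending = {}   # sector -> data, held in cache RAM

def defer_write(sector, data):
    # Repeated writes to the same sector coalesce into one disk write.
    pending[sector] = data

def flush_when_idle(disk_busy, disk):
    """Flush pending writes in ascending sector order if the disk is idle."""
    if disk_busy:
        return []                 # keep waiting for a window
    order = sorted(pending)       # one sweep of the head, low to high
    for sector in order:
        disk[sector] = pending[sector]
    pending.clear()
    return order

disk = {}
defer_write(40, "x")
defer_write(7, "y")
defer_write(19, "z")
print(flush_when_idle(False, disk))   # [7, 19, 40]
```

Raising write priority, as the alternative embodiment suggests, would simply mean calling the flush even when `disk_busy` is true, trading throughput for durability.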
  • the Predictive Cache System can have features incorporated into existing hardware as well as software. On hardware devices, dedicated cache memory comes with disk drive controllers as well as with sophisticated disk drives. These hardware devices typically take advantage of advanced bus structures such as Extended Industry Standard Architecture (EISA), Video Electronics Standard Association (VESA) Local Bus and Peripheral Component Interconnect (PCI) bus. They also give the CPU full utilization of system RAM for best overall performance.
  • Another embodiment incorporates a predictive cache in an erasable programmable read-only memory (EPROM) device added on a bus structure of a computing device, to perform a specific caching operation every time the system is turned on.
  • The EPROM can also contain the optimizing device files needed by the start-up applications.
  • a Predictive Disk Cache system is provided to be used on a local area network (LAN).
  • the adaptive sequence tables to be used reflect the given size and typical program application use.
  • the LAN system also incorporates a unique "last-sector-read" for each node on the multi-user system to identify that particular user's sequence tables for future use.
  • the system can control predictive "writes" to the hard drive as a system-wide back-up feature and/or can be used for repetitive write operations.
  • a predictive disk cache system according to the invention may also be incorporated in an operating system for a computer.
  • the Predictive Cache System can be configured on existing hardware or can be incorporated into new optimizing storage devices and/or device controllers. It provides for a fast access to application execution files from start up.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A predictive disk cache system for use with a host computer tracks read and write requests at run time and builds sequence tables (23, 25) that are copied to non-volatile storage, such as a disk drive (11), of the cache system. In one embodiment, the predictive cache system implements a cache (21) in system RAM (17), updates the sequence tables in RAM, and copies the tables to the disk drive at low priority. At startup, the predictive disk cache system first loads any secondary application or execution system so as to maximize disk activity during a run period. After loading, the system accesses the previously established and recorded sequence tables and follows the sequences written therein when loading files from the disk drive.
PCT/US1994/007882 1993-07-02 1994-07-01 Systeme d'antememoire a disque predictif WO1995001600A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US8672293A 1993-07-02 1993-07-02
US08/086,722 1993-07-02

Publications (1)

Publication Number Publication Date
WO1995001600A1 (fr) 1995-01-12

Family

ID=22200461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/007882 WO1995001600A1 (fr) 1993-07-02 1994-07-01 Systeme d'antememoire a disque predictif

Country Status (1)

Country Link
WO (1) WO1995001600A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5668814A (en) * 1995-03-20 1997-09-16 Raychem Corporation Dual DDS data multiplexer
US6282204B1 (en) 1997-12-19 2001-08-28 Terayon Communication Systems, Inc. ISDN plus voice multiplexer system
WO2001075581A1 (fr) * 2000-03-31 2001-10-11 Intel Corporation Utilisation d'un journal d'acces pour operations de lecteur de disque
US6779058B2 (en) 2001-07-13 2004-08-17 International Business Machines Corporation Method, system, and program for transferring data between storage devices
EP1345113A3 (fr) * 2002-03-13 2008-02-06 Hitachi, Ltd. Serveur de gestion
EP1424628A3 (fr) * 2002-11-26 2008-08-27 Microsoft Corporation Fiabilité améliorée d'ordinateurs amorçables au réseau sans disques utilisant une antémémoire non volatile
EP3037961A1 (fr) * 2009-04-20 2016-06-29 Intel Corporation Démarrage d'un système d'exploitation d'un système au moyen d'une technique de lecture anticipée
TWI588824B (zh) * 2015-12-11 2017-06-21 捷鼎國際股份有限公司 加快在不連續頁面寫入資料之電腦系統及其方法

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4334289A (en) * 1980-02-25 1982-06-08 Honeywell Information Systems Inc. Apparatus for recording the order of usage of locations in memory
US4882642A (en) * 1987-07-02 1989-11-21 International Business Machines Corporation Sequentially processing data in a cached data storage system
US4980823A (en) * 1987-06-22 1990-12-25 International Business Machines Corporation Sequential prefetching with deconfirmation
US5093777A (en) * 1989-06-12 1992-03-03 Bull Hn Information Systems Inc. Method and apparatus for predicting address of a subsequent cache request upon analyzing address patterns stored in separate miss stack
US5146578A (en) * 1989-05-01 1992-09-08 Zenith Data Systems Corporation Method of varying the amount of data prefetched to a cache memory in dependence on the history of data requests
US5235697A (en) * 1990-06-29 1993-08-10 Digital Equipment Set prediction cache memory system using bits of the main memory address
US5257370A (en) * 1989-08-29 1993-10-26 Microsoft Corporation Method and system for optimizing data caching in a disk-based computer system
US5283884A (en) * 1991-12-30 1994-02-01 International Business Machines Corporation CKD channel with predictive track table
US5285527A (en) * 1991-12-11 1994-02-08 Northern Telecom Limited Predictive historical cache memory
US5287487A (en) * 1990-08-31 1994-02-15 Sun Microsystems, Inc. Predictive caching method and apparatus for generating a predicted address for a frame buffer
US5289581A (en) * 1990-06-29 1994-02-22 Leo Berenguel Disk driver with lookahead cache

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4334289A (en) * 1980-02-25 1982-06-08 Honeywell Information Systems Inc. Apparatus for recording the order of usage of locations in memory
US4980823A (en) * 1987-06-22 1990-12-25 International Business Machines Corporation Sequential prefetching with deconfirmation
US4882642A (en) * 1987-07-02 1989-11-21 International Business Machines Corporation Sequentially processing data in a cached data storage system
US5146578A (en) * 1989-05-01 1992-09-08 Zenith Data Systems Corporation Method of varying the amount of data prefetched to a cache memory in dependence on the history of data requests
US5093777A (en) * 1989-06-12 1992-03-03 Bull Hn Information Systems Inc. Method and apparatus for predicting address of a subsequent cache request upon analyzing address patterns stored in separate miss stack
US5257370A (en) * 1989-08-29 1993-10-26 Microsoft Corporation Method and system for optimizing data caching in a disk-based computer system
US5235697A (en) * 1990-06-29 1993-08-10 Digital Equipment Set prediction cache memory system using bits of the main memory address
US5289581A (en) * 1990-06-29 1994-02-22 Leo Berenguel Disk driver with lookahead cache
US5287487A (en) * 1990-08-31 1994-02-15 Sun Microsystems, Inc. Predictive caching method and apparatus for generating a predicted address for a frame buffer
US5285527A (en) * 1991-12-11 1994-02-08 Northern Telecom Limited Predictive historical cache memory
US5283884A (en) * 1991-12-30 1994-02-01 International Business Machines Corporation CKD channel with predictive track table

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5668814A (en) * 1995-03-20 1997-09-16 Raychem Corporation Dual DDS data multiplexer
US5978390A (en) * 1995-03-20 1999-11-02 Raychem Corporation Dual DDS data multiplexer
US6282204B1 (en) 1997-12-19 2001-08-28 Terayon Communication Systems, Inc. ISDN plus voice multiplexer system
WO2001075581A1 (fr) * 2000-03-31 2001-10-11 Intel Corporation Utilisation d'un journal d'acces pour operations de lecteur de disque
US6684294B1 (en) * 2000-03-31 2004-01-27 Intel Corporation Using an access log for disk drive transactions
US6779058B2 (en) 2001-07-13 2004-08-17 International Business Machines Corporation Method, system, and program for transferring data between storage devices
EP1345113A3 (fr) * 2002-03-13 2008-02-06 Hitachi, Ltd. Serveur de gestion
EP1424628A3 (fr) * 2002-11-26 2008-08-27 Microsoft Corporation Fiabilité améliorée d'ordinateurs amorçables au réseau sans disques utilisant une antémémoire non volatile
US7454653B2 (en) 2002-11-26 2008-11-18 Microsoft Corporation Reliability of diskless network-bootable computers using non-volatile memory cache
EP3037961A1 (fr) * 2009-04-20 2016-06-29 Intel Corporation Démarrage d'un système d'exploitation d'un système au moyen d'une technique de lecture anticipée
EP3037960A1 (fr) * 2009-04-20 2016-06-29 Intel Corporation Démarrage d'un système d'exploitation d'un système au moyen d'une technique de lecture anticipée
US10073703B2 (en) 2009-04-20 2018-09-11 Intel Corporation Booting an operating system of a system using a read ahead technique
TWI588824B (zh) * 2015-12-11 2017-06-21 捷鼎國際股份有限公司 加快在不連續頁面寫入資料之電腦系統及其方法

Similar Documents

Publication Publication Date Title
US6948033B2 (en) Control method of the cache hierarchy
US4875155A (en) Peripheral subsystem having read/write cache with record access
US6988165B2 (en) System and method for intelligent write management of disk pages in cache checkpoint operations
EP0077453B1 (fr) Sous-systèmes de mémoire à dispositifs de limitation des données contenues dans leurs antémémoires
JP3409859B2 (ja) 制御装置の制御方法
EP0848321B1 (fr) Méthode de migration de données
US4571674A (en) Peripheral storage system having multiple data transfer rates
EP0130349B1 (fr) Méthode de remplacement de blocs d'information et son application dans un système de traitement de données
EP0071719B1 (fr) Appareil de traitement de données comprenant un sous-système de mémoire à pagination
US5435004A (en) Computerized system and method for data backup
US6857047B2 (en) Memory compression for computer systems
US8285924B1 (en) Cache control system
EP0207288A2 (fr) Méthode et appareil d'initialisation d'un sous-système périphérique
US20030079087A1 (en) Cache memory control unit and method
US7085907B2 (en) Dynamic reconfiguration of memory in a multi-cluster storage control unit
US5694570A (en) Method and system of buffering data written to direct access storage devices in data processing systems
Cohen et al. Storage hierarchies
US20050251625A1 (en) Method and system for data processing with recovery capability
US5293618A (en) Method for controlling access to a shared file and apparatus therefor
JPH05303528A (ja) ライトバック式ディスクキャッシュ装置
US7437515B1 (en) Data structure for write pending
WO1995001600A1 (fr) Systeme d'antememoire a disque predictif
Menon et al. The IBM 3990 disk cache
JP4189342B2 (ja) ストレージ装置、ストレージコントローラ及びライトバックキャッシュ制御方法
US5845318A (en) Dasd I/O caching method and application including replacement policy minimizing data retrieval and storage costs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CN JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase