WO2001057675A1 - System and method for effectively utilizing a cache memory in an electronic device
- Publication number
- WO2001057675A1 (PCT/US2001/003025)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processor
- isochronous
- cache
- data
- storage segment
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
Definitions
- This invention relates generally to techniques for implementing memory devices, and relates more particularly to a system and method for effectively utilizing a cache memory in an electronic device.
- an electronic device may advantageously communicate with other electronic devices in an electronic interconnect to share data, thereby substantially increasing the capabilities and versatility of individual devices in the electronic interconnect.
- an electronic interconnect may be implemented in a home environment to enable flexible and beneficial sharing of data and device resources between various consumer electronic devices, such as personal computers, digital video disc (DVD) devices, digital set-top boxes for digital broadcasting, enhanced television sets, and audio reproduction systems.
- Interconnect size is also a factor that affects data transfer operations in an electronic device. Communications in an electronic interconnect typically become more complex as the number of individual devices or nodes increases. Assume that a particular device on an electronic interconnect is defined as a local device with local software elements, and other devices on the electronic interconnect are defined as remote devices with remote software elements. Accordingly, a local software module on the local device may need to transfer data to various remote software elements on remote devices across the electronic interconnect. However, successfully managing a substantial number of electronic devices across an interconnect may provide significant benefits to a system user.
- enhanced device capability to perform various advanced processing tasks may provide additional benefits to a system user, but may also place increased demands on the control and management of the various devices in the electronic interconnect.
- an enhanced electronic interconnect that effectively accesses, processes, and displays digital television programming may benefit from efficient interconnect communication techniques because of the large amount and complexity of the digital data involved. Due to growing demands on system processor resources and substantially increasing data magnitudes, it is apparent that developing new and effective methods for performing data transfer operations is a matter of importance for the related electronic technologies. Therefore, for all the foregoing reasons, implementing effective methods for performing data transfers in electronic devices remains a significant consideration for designers, manufacturers, and users of contemporary electronic devices.
- a system and method are disclosed for effectively utilizing cache memory in an electronic device.
- a processor sequentially executes program instructions of a device application.
- the foregoing program instructions may include one or more isochronous load instructions that instruct the processor to load time-sensitive isochronous data from a memory into a specific corresponding mapped location of a local cache.
- the processor may advantageously instruct the cache to create a marker for inclusion in a particular storage segment to indicate that information stored therein includes special information, such as isochronous data.
- the marker may prevent the cache from removing the marked isochronous data without the prior occurrence of predetermined rollout exception events.
- in one rollout exception, if a target location in the cache currently comprises a segment that includes initial isochronous data designated by a marker, and if another isochronous load instruction creates a conflict by mapping subsequent isochronous data from the source memory to the same target location in the cache, then the processor preferably may roll out the initial isochronous data to permit the subsequent isochronous data from the source memory to be marked and loaded into that particular segment of the cache.
- the device application may instruct the processor to rollout a selectable marked segment of the cache in response to various changes of status in the host electronic device. For example, if an isochronous process is aborted, then the corresponding isochronous data may no longer be required in the cache, and the device application may advantageously issue a rollout command to thereby optimize performance of the cache. Similarly, when a particular isochronous process is completed, a rollout exception may be provided in which the device application issues a rollout command to empty a corresponding marked segment of the cache.
- the device application may then advantageously be notified when the final or sixty-fourth byte of isochronous data is accessed from the marked segment and utilized.
- the device application may then issue a rollout command to return the cached isochronous data to a corresponding location in the source memory.
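To summarize the foregoing rollout behavior, the sketch below simply enumerates the exception events under which marked isochronous data may be displaced; it is only an illustration, and the C identifiers are assumptions rather than terminology from the disclosure.

```c
/* Illustrative sketch only; these identifiers are assumptions, not terms from the disclosure. */
typedef enum {
    ROLLOUT_NONE = 0,              /* no exception: marked isochronous data stays in the cache  */
    ROLLOUT_CONFLICTING_ISO_LOAD,  /* a later isochronous load maps to the same cache location  */
    ROLLOUT_PROCESS_ABORTED,       /* the isochronous process was aborted                       */
    ROLLOUT_PROCESS_COMPLETED,     /* the final byte of the marked segment has been utilized    */
    ROLLOUT_EXPLICIT_COMMAND       /* the device application issued a rollout command           */
} rollout_exception_t;

/* A segment whose marker is set may be emptied only when one of the exception events occurs. */
static int may_roll_out(int marker_set, rollout_exception_t event)
{
    return !marker_set || event != ROLLOUT_NONE;
}
```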
- the device application may advantageously issue various types of isochronous prefetch load instructions to facilitate efficient and timely completion of the isochronous process. For example, if the device application knows that a certain block of isochronous data must be moved to the cache, then the device application may issue an isochronous prefetch load instruction in advance to notify the source memory to transfer all or part of the foregoing block of isochronous data, rather than sending individual isochronous load instructions for each line of the block of isochronous data.
- isochronous prefetch load instructions may thus result in a more efficient and timely isochronous data transfer because the processor need not wait to complete the transfer of an individual line of isochronous data from the source memory before beginning the transfer of a subsequent line of isochronous data.
- the processor, source memory, and cache may execute the foregoing isochronous prefetch load instruction using any appropriate and effective technique that ensures that the transfer of any given portion of the isochronous data occurs prior to the designated time for utilizing that given portion of the isochronous data.
- the present invention thus advantageously provides effective and efficient techniques for utilizing a cache memory in an electronic device.
- FIG. 1 is a block diagram for one embodiment of an electronic interconnect, in accordance with the present invention.
- FIG. 2 is a block diagram for one embodiment of an exemplary device of FIG. 1, in accordance with the present invention.
- FIG. 3 is a diagram for one embodiment of the memory of FIG. 2, in accordance with the present invention.
- FIG. 4 is a diagram for one embodiment of the cache of FIG. 2, in accordance with the present invention.
- FIG. 5 is a diagram for one embodiment of a segment of the cache of FIG. 4, in accordance with the present invention.
- FIG. 6 is a block diagram illustrating a procedure for effectively utilizing a cache, in accordance with one embodiment of the present invention.
- FIGS. 7A and 7B are a flowchart of method steps for effectively utilizing a cache, in accordance with one embodiment of the present invention.
- the present invention relates to an improvement in electronic devices
- the following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements.
- Various modifications to the preferred embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments.
- the present invention is not intended to be limited to the embodiment shown, but is to be accorded the widest scope consistent with the principles and features described herein.
- the present invention comprises a system and method for effectively utilizing cache memory in an electronic device, and includes a processor that operates in response to a software program to insert a detectable marker in isochronous data that is stored into the cache memory.
- the marker may then be utilized to identify the isochronous data as special information that is protected from removal from the cache device without the occurrence of a predetermined rollout exception event.
- interconnect 110 preferably comprises, but is not limited to, a number of electronic devices 112 (device A 112(a) through device E 112(e)) and a root device 114.
- electronic interconnect 110 may readily be configured to include various other devices 112 or components that function in addition to, or instead of, those discussed in conjunction with the FIG. 1 embodiment.
- interconnect 110 may readily be connected and configured in any other appropriate and suitable manner.
- devices 112 of interconnect 110 may be implemented as any type of electronic device, including, but not limited to, personal computers, printers, digital video disc devices, television sets, audio systems, video cassette recorders, and set-top boxes for digital broadcasting.
- devices 112 preferably communicate with one another using a bus link 132.
- Bus link 132 preferably includes path 132(a), path 132(b), path 132(c), path 132(d), and path 132(e).
- device B 112(b) is coupled to device A 112(a) via path 132(a), and to root device 114 via path 132(b).
- bus link 132 is preferably implemented using an IEEE Std 1394-1995 Standard for a High Performance Serial Bus, which is hereby incorporated by reference.
- interconnect 110 may readily communicate and function using various other interconnect methodologies which are equally within the scope of the present invention.
- each device in electronic interconnect 110 may preferably communicate with any other device within interconnect 110.
- device E 112(e) may communicate with device B 112(b) by transmitting transfer data via cable 132(e) to device D 112(d), which then may transmit the transfer data via cable 132(d) to root device 114.
- root device 114 then may transmit the transfer data to device B 112(b) via cable 132(b).
- root device 114 preferably provides a master cycle start signal to synchronize isochronous processes for devices 112 in interconnect 110.
- any one of the interconnect devices 112 may be designated as the root device or cycle master.
- Device 112 preferably includes, but is not limited to, a processor 212, an input/ output (I/O) interface 214, a memory 216, a device bus 226, and a bus interface 220.
- processor 212, I/O interface 214, memory 216, and bus interface 220 preferably are each coupled to, and communicate via, common device bus 226.
- processor 212 may be implemented as any appropriate multipurpose microprocessor device.
- Memory 216 may be implemented as one or more appropriate storage devices, including, but not limited to, read-only memory, random-access memory, and various types of non-volatile memory, such as floppy disc devices or hard disc devices.
- I/O interface 214 preferably may provide an interface for communications with various compatible sources and/or destinations.
- bus interface 220 preferably provides an interface between device 112 and interconnect 110.
- bus interface 220 preferably communicates with other devices 112 on interconnect 110 via bus link 132.
- Bus interface 220 also preferably communicates with processor 212, I/O device 214, and memory 216 via common device bus 226.
- device 112 preferably includes the capability to perform various tasks that involve isochronous data and isochronous processes.
- Isochronous data typically includes information that is time-sensitive, and therefore requires deterministic transfer operations to guarantee delivery of the isochronous data in a timely manner. For example, video data that is intended for immediate display must arrive at the appropriate destination in a timely manner in order to prevent jitter or breakup of the corresponding image during display.
- device 112 preferably performs isochronous and other types of processing in segments of time called "cycles".
- Scheduling of isochronous processes typically requires a finite time period that is sometimes referred to as "overhead". As the cycle time period is reduced, the overhead becomes a more significant factor because of the reduced amount of time remaining to perform the actual isochronous transfer.
- the cycle time period may be in the proximity of 125 microseconds, with a cycle frequency of approximately eight kilohertz.
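These two figures are mutually consistent: a cycle period of approximately 125 microseconds corresponds to a cycle frequency of 1 / (125 × 10⁻⁶ s) = 8,000 cycles per second, or roughly eight kilohertz.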
- processor 212 preferably includes cache 230 which processor 212 may utilize to locally store information from memory 216 for rapid and convenient local access.
- cache 230 may be implemented in any other appropriate location and manner. The functionality and configuration of cache 230 is further discussed below in conjunction with FIGS. 5 through 7.
- memory 216 preferably includes, but is not limited to, device software 312, isochronous data 314, and non-isochronous data 316.
- memory 216 may readily include various other components in addition to, or instead of, those that are discussed in conjunction with the FIG. 3 embodiment.
- device software 312 includes software instructions that are preferably executed by processor 212 for performing various functions and operations by device 112.
- the particular nature and functionality of device software 312 preferably varies depending upon factors such as the type and purpose of the corresponding host device 112.
- Device software 312 may include various instructions that cause processor 212 to transfer portions of isochronous data 314 and/or non-isochronous data 316 bi-directionally between memory 216 and cache 230, in accordance with the present invention.
- the operation and utilization of device application 312 is further discussed below in conjunction with FIGS. 6 and 7.
- cache 230 preferably includes a location 1 (512(a)) through a location N (512(d)).
- cache 230 may preferably be implemented using a four-way associativity technique in which each location 512(a) through 512(d) preferably includes four separate segments into which processor 212 may selectively load information from address locations in memory 216. Therefore, location 1 (512(a)) preferably includes segments 514(a1), 514(a2), 514(a3), and 514(a4).
- location 2 (512(b)) preferably includes segments 514(b1), 514(b2), 514(b3), and 514(b4), and location 3 (512(c)) through location N (512(d)) are configured in the same manner.
- cache 230 may readily be configured to include various components, locations, and/or segments in addition to, or instead of, those shown in the FIG. 4 embodiment.
- each location 512 of cache 230 may include any desired number of storage segments 514.
- processor 212 may preferably utilize the four-way associativity technique for mapping and storing information from various address locations of memory 216 into cache 230.
- Memory 216 typically possesses a substantially larger storage capacity than the relatively smaller storage capacity of cache 230. Therefore, multiple storage location addresses from memory 216 may be mapped to the same location 512 of cache 230.
- each location 512 of cache 230 preferably includes a plurality of storage segments 514 to permit multiple memory locations from memory 216 to be stored into one location 512 of cache 230.
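The way many memory addresses come to share one cache location can be illustrated with a short index calculation; the constants below follow the four-way, sixty-four-byte-segment arrangement of FIGS. 4 and 5, but the number of locations and the function itself are assumptions made only for this sketch.

```c
#include <stdint.h>

#define SEGMENT_SIZE   64u   /* bytes per segment 514 (FIG. 5)                    */
#define NUM_LOCATIONS  128u  /* number of locations 512; an assumed value for N   */
#define NUM_WAYS       4u    /* four segments 514 per location 512 (four-way)     */

/* Every line of memory 216 whose index is congruent modulo NUM_LOCATIONS maps to
 * the same location 512 of cache 230, so many memory lines compete for only four
 * segments at that location. */
static uint32_t cache_location_for(uint32_t mem_addr)
{
    return (mem_addr / SEGMENT_SIZE) % NUM_LOCATIONS;
}
```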
- a problem may arise when all segments 514 of a given location 512 of cache 230 already contain data from memory 216, and processor 212 requires additional storage capacity at that location 512 to perform a time-critical isochronous process that includes transferring isochronous data 314 into cache 230 from memory 216.
- the present invention advantageously includes a technique for increasing deterministic performance of isochronous processes by supporting priority storage of isochronous data into cache 230.
- processor 212 may therefore mark a specific segment 514 of cache 230 to indicate that the contents of the marked segment 514 contain special information (such as isochronous data 314) that should not be removed or "rolled out" (returned to memory 216) to make room for other data unless certain specific exception conditions exist.
- the marking of a segment 514 and the identification of exception conditions for permitting a rollout are further discussed below in conjunction with FIGS. 5 through 7.
- Cache architectures and techniques are further discussed in IEEE Std 1596-1992, entitled "IEEE Standard for Scalable Coherent Interface (SCI)," which is hereby incorporated by reference.
- segment 514 preferably includes the capacity to store sixty-four bytes of information from memory 216.
- segment 514 may be implemented to store any desired amount or type of information from any appropriate source.
- the FIG. 5 embodiment preferably includes a marker 520 to indicate that segment 514 has been assigned a special status.
- marker 520 preferably indicates that segment 514 includes time-sensitive information that is required for the successful and timely performance of an isochronous process.
- marker 520 may include a digital "bit" that processor 212 preferably sets to a binary value of one to mark the corresponding segment 514 as isochronous information.
- segment 514 may likewise be marked using any other effective technique in order to indicate any desired and appropriate status condition.
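A minimal data layout consistent with FIG. 5 might look like the following; only the sixty-four-byte payload and the single marker bit come from the description, while the remaining field names are hypothetical.

```c
#include <stdint.h>

/* Hypothetical layout of one segment 514; field names are illustrative only. */
typedef struct {
    uint32_t tag;        /* identifies which line of memory 216 is cached here          */
    uint8_t  valid;      /* 1 = segment currently holds valid data                      */
    uint8_t  iso_marker; /* marker 520: 1 = protected, time-sensitive isochronous data  */
    uint8_t  data[64];   /* sixty-four bytes loaded from memory 216                     */
} cache_segment_t;

/* Setting the marker flags the segment so that ordinary replacement will skip it. */
static void mark_isochronous(cache_segment_t *seg)
{
    seg->iso_marker = 1;
}
```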
- processor 212 initially begins to sequentially access and execute program instructions of device application 312 via path 612.
- the foregoing program instructions of device application 312 may include one or more isochronous load instructions that direct processor 212 to load time-sensitive isochronous data 314 from memory 216 into a specific corresponding mapped location 512 of cache 230 via path 616.
- one address location of memory 216 preferably may be stored in a single segment 514 of the mapped location 512 of cache 230.
- processor 212 may advantageously instruct cache 230 to create a marker 520 for inclusion in that particular storage segment 514 of cache 230 to indicate that the information stored therein includes isochronous data 314.
- marker 520 may simply prevent cache 230 from rolling out the isochronous data 314 in the absence of specific instructions from processor 212.
- a number of rollout exceptions may be implemented to optimize the performance of cache 230.
- in one rollout exception, if a target location 512 in cache 230 currently comprises a segment 514 that includes initial isochronous data designated by an initial marker 520, and if another isochronous load instruction creates a conflict by mapping subsequent isochronous data 314 from memory 216 to the same target location 512 in cache 230, then processor 212 preferably may roll out the initial isochronous data to permit the subsequent isochronous data from memory 216 to be marked with a marker 520 and loaded into that particular segment 514. In addition, under certain conditions related to another rollout exception, device application 312 may instruct processor 212 to roll out a selectable marked segment 514 of cache 230 in response to various changes of status in device 112.
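One replacement policy that honors the marker is sketched below, reusing the cache_segment_t layout from the earlier sketch; this is an assumed policy for illustration, not the patented implementation: empty segments are reused first, unmarked segments are preferred as victims, and a marked segment is displaced only when a conflicting isochronous load leaves no alternative.

```c
/* Returns the way index (0..3) of the segment to roll out of a four-way location 512,
 * or -1 if no segment may be displaced.  'incoming_is_iso' is nonzero when the new
 * transfer was requested by an isochronous load instruction. */
static int choose_victim(const cache_segment_t way[4], int incoming_is_iso)
{
    for (int i = 0; i < 4; i++)        /* 1) reuse an empty segment if one exists   */
        if (!way[i].valid)
            return i;

    for (int i = 0; i < 4; i++)        /* 2) otherwise prefer an unmarked segment   */
        if (!way[i].iso_marker)        /*    (an LRU choice could refine this step) */
            return i;

    /* 3) all four segments are marked: only a conflicting isochronous load constitutes
     *    a rollout exception that may displace marked isochronous data.               */
    return incoming_is_iso ? 0 : -1;
}
```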
- for example, if an isochronous process is aborted, then the corresponding isochronous data may no longer be required in cache 230, and device application 312 may advantageously issue a rollout command to thereby optimize performance of cache 230.
- similarly, when a particular isochronous process is completed, a rollout exception may be provided in which device application 312 issues a rollout command to empty a corresponding marked segment 514 of cache 230.
- for example, if segment 514 includes sixty-four bytes of isochronous data, then device application 312 may advantageously determine when the final or sixty-fourth byte of isochronous data has been accessed and used from the marked segment 514.
- device application 312 may then issue a rollout command to return cached isochronous data to a corresponding location in memory 216.
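The completion exception could be tracked by the device application roughly as follows; the counter and the rollout callback are assumptions introduced only to illustrate the idea of flushing a marked segment once its sixty-fourth byte has been consumed.

```c
#define SEGMENT_BYTES 64u

/* Hypothetical per-segment bookkeeping kept by the device application. */
typedef struct {
    unsigned bytes_consumed;   /* how many of the sixty-four cached bytes have been used */
} iso_progress_t;

/* Called each time the application consumes 'n' bytes of the marked segment.  Once the
 * final (sixty-fourth) byte has been used, a rollout command is issued so the segment
 * becomes available for other data transfer operations. */
static void consume_iso_bytes(iso_progress_t *p, unsigned n, void (*issue_rollout)(void))
{
    p->bytes_consumed += n;
    if (p->bytes_consumed >= SEGMENT_BYTES) {
        issue_rollout();           /* rollout exception: segment fully utilized */
        p->bytes_consumed = 0;
    }
}
```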
- device application 312 may advantageously issue various types of isochronous prefetch load instructions to facilitate efficient and successful completion of the isochronous process.
- device application 312 may issue an isochronous prefetch load instruction in advance to notify memory 216 to transfer all or part of the foregoing block of isochronous data 314, rather than sending individual isochronous load instructions for each individual line of the block of isochronous data 314.
- isochronous prefetch load instructions may thus result in a more efficient and timely isochronous data transfer because processor 212 need not wait to complete the transfer of a line of isochronous data from memory 216 before beginning the transfer of a subsequent line of isochronous data from memory 216.
- Processor 212, memory 216, and cache 230 may execute the foregoing isochronous prefetch load instruction using any appropriate and effective technique that ensures that the transfer of a given portion of the isochronous data occurs prior to the designated time for processing or utilizing that given portion of the isochronous data.
- device application 312 may provide isochronous "hints" which a compiler program may translate into corresponding isochronous prefetch load instructions.
- device application 312 may include various prefetch parameters for calculating isochronous prefetch load instructions, or device application 312 may provide specific isochronous prefetch load instructions to processor 212 in appropriate predetermined situations.
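The difference between issuing one blocking isochronous load per line and issuing prefetch loads for an entire block can be sketched as below; iso_prefetch_load() is a hypothetical stand-in for whatever prefetch instruction encoding the processor actually provides.

```c
#include <stddef.h>
#include <stdint.h>

#define LINE_BYTES 64u

/* Hypothetical primitive: ask memory 216 to begin moving one sixty-four-byte line
 * toward cache 230 without stalling until the line arrives. */
void iso_prefetch_load(uintptr_t mem_addr);

/* Rather than one blocking load per line, the device application (or a compiler acting
 * on an isochronous "hint") issues prefetch loads for the whole block well ahead of the
 * time the data is needed, so the individual line transfers can overlap. */
static void prefetch_iso_block(uintptr_t block_start, size_t block_bytes)
{
    for (uintptr_t addr = block_start; addr < block_start + block_bytes; addr += LINE_BYTES)
        iso_prefetch_load(addr);
}
```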
- referring now to FIG. 7A, an initial portion of a flowchart of method steps for effectively utilizing cache 230 is shown, in accordance with one embodiment of the present invention.
- the FIG. 7A method steps illustrate an embodiment in which a specific target location 512 in cache 230 has no vacant segments 514 for storing additional information from memory 216.
- Processor 212 may therefore be required to perform a rollout procedure in order to empty a segment 514 and load the additional information from memory 216.
- the FIG. 7A and 7B embodiment is presented to illustrate certain principles and aspects of the present invention. However, in alternate embodiments, the present invention may readily be implemented by utilizing various steps and techniques in addition to, or instead of, those disclosed in conjunction with the FIG. 7A and 7B embodiment. Furthermore, in alternate embodiments, the FIG. 7A and 7B method steps may similarly occur in various sequences other than that discussed in conjunction with the FIG. 7A and 7B embodiment.
- initially, in step 720, processor 212 preferably receives a program instruction from a software program (such as device application 312), and responsively determines the type of the received program instruction. If the received instruction type is a load or store data instruction, then, in step 724, processor 212 determines whether the particular data specified in the load or store data instruction is already in cache 230.
- if the specified data is not already in cache 230, processor 212 determines whether the instruction received in foregoing step 720 is an isochronous load or store instruction. If the instruction is not an isochronous load or store instruction, then, in step 732, processor 212 preferably rolls out an unmarked segment 514 in the target location 512 of cache 230. In step 736, processor 212 then fetches and loads the transfer data from memory 216 into an appropriate segment 514 of cache 230, and the FIG. 7A process advances to FIG. 7B.
- however, if the instruction is an isochronous load or store instruction, then, in step 740, processor 212 preferably determines whether the isochronous data 314 (to be transferred between memory 216 and cache 230) is mapped to a target location 512 of cache 230 that includes a marked segment 514 (as designated by a marker 520). If the transfer data is not mapped to a target location 512 of cache 230 that includes a marked segment 514, then, in step 742, processor 212 preferably rolls out any segment 514 in the target location 512 of cache 230. The FIG. 7A process then advances to step 748.
- in foregoing step 740, if the isochronous data for transfer is mapped to a target location 512 of cache 230 that includes a marked segment 514, then, in step 744, processor 212 preferably rolls out the information in the marked segment 514 to create a vacant target segment 514.
- in step 748, processor 212 preferably fetches and loads the particular isochronous data from memory 216 into the vacant target segment 514 in cache 230.
- processor 212 may also advantageously mark the foregoing target segment 514 with a marker 520 to indicate its special status.
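The load path of FIG. 7A (steps 724 through 748), for the case in which the target location has no vacant segment, might be expressed as the following C sketch; every helper function is a hypothetical stand-in for a hardware action of processor 212, and only the step numbers quoted from the description are used in the comments.

```c
#include <stdint.h>

/* Hypothetical helpers; each stands in for a hardware action of processor 212. */
int  cache_hit(uintptr_t addr);                  /* step 724: data already in cache 230? */
int  target_has_marked_segment(uintptr_t addr);  /* step 740                             */
void rollout_unmarked_segment(uintptr_t addr);   /* step 732                             */
void rollout_any_segment(uintptr_t addr);        /* step 742                             */
void rollout_marked_segment(uintptr_t addr);     /* step 744                             */
void fetch_into_vacant_segment(uintptr_t addr);  /* steps 736 / 748                      */
void mark_segment(uintptr_t addr);               /* set marker 520 on the loaded segment */

static void handle_load_or_store(uintptr_t addr, int is_isochronous)
{
    if (cache_hit(addr))                          /* step 724 */
        return;                                   /* nothing needs to be transferred      */

    if (!is_isochronous) {
        rollout_unmarked_segment(addr);           /* step 732 */
        fetch_into_vacant_segment(addr);          /* step 736 */
        return;
    }

    if (target_has_marked_segment(addr))          /* step 740 */
        rollout_marked_segment(addr);             /* step 744 */
    else
        rollout_any_segment(addr);                /* step 742 */

    fetch_into_vacant_segment(addr);              /* step 748 */
    mark_segment(addr);                           /* protect the isochronous data         */
}
```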
- in foregoing step 720, if the instruction type comprises a "flush" instruction that designates a particular flushable marked segment 514 in cache 230, then, in step 776, processor 212 preferably rolls out the information in that particular flushable marked segment 514 in cache 230 to allow free access for other data transfer operations.
- the FIG. 7A process then advances to FIG. 7B.
- in foregoing step 720, if the instruction type comprises any instruction other than a "load data" instruction or a "flush" instruction, then the FIG. 7A process advances to step "B" of the FIG. 7B flowchart.
- referring now to FIG. 7B, processor 212 preferably executes any "other" program instruction that may be necessary as a result of foregoing step 720 of FIG. 7A. Then, in step 768, processor 212 preferably determines whether all information has been accessed and utilized in any of one or more finished segments 514 in cache 230 that are marked with marker 520.
- processor 212 may determine whether all information in a finished segment 514 has been used by monitoring whether the final storage location or address has been accessed and utilized from a marked segment 514. If all information has not been used in a segment 514 of cache 230, then the FIG. 7B process preferably advances to step 756. However, if all information has been used in a finished segment 514 of cache 230, then, in step 772, processor 212 preferably rolls out the information in that particular finished segment 514 of cache 230 to allow access to the finished segment 514 by various other data transfer operations.
- processor 212 preferably performs an update procedure on a program counter.
- processor 212 preferably fetches the next program instruction from the software program (such as device application 312), and the FIG. 7B process returns to the foregoing step 720 of FIG. 7A to analyze another program instruction.
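Taken together, FIGS. 7A and 7B describe an instruction-handling loop roughly like the sketch below; as before, the identifiers are illustrative assumptions, the load path is the one sketched above, and only step numbers quoted (or, for step 756, strongly implied) by the description appear in the comments.

```c
#include <stdint.h>

typedef enum { INSTR_LOAD_STORE, INSTR_FLUSH, INSTR_OTHER } instr_type_t;

typedef struct {
    instr_type_t type;            /* instruction type, determined in step 720        */
    uintptr_t    addr;            /* target address for a load/store or flush        */
    int          is_isochronous;  /* nonzero for isochronous load/store instructions */
} instruction_t;

/* Hypothetical helpers standing in for processor 212 behavior. */
instruction_t fetch_next_instruction(void);            /* fetch from device application 312 */
void          execute_other(const instruction_t *i);   /* any "other" instruction           */
void          flush_marked_segment(uintptr_t addr);    /* step 776                          */
void          rollout_finished_segments(void);         /* steps 768 and 772                 */
void          handle_load_or_store(uintptr_t, int);    /* FIG. 7A load path, sketched above */
void          update_program_counter(void);            /* program counter update (step 756) */

static void instruction_loop(void)
{
    for (;;) {
        instruction_t i = fetch_next_instruction();
        switch (i.type) {                                                       /* step 720 */
        case INSTR_LOAD_STORE: handle_load_or_store(i.addr, i.is_isochronous); break;
        case INSTR_FLUSH:      flush_marked_segment(i.addr);                   break;
        default:               execute_other(&i);                              break;
        }
        rollout_finished_segments();   /* roll out fully-utilized marked segments */
        update_program_counter();
    }
}
```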
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Record Information Processing For Printing (AREA)
Abstract
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2001233131A AU2001233131A1 (en) | 2000-02-02 | 2001-01-30 | System and method for effectively utilizing a cache memory in an electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US49687600A | 2000-02-02 | 2000-02-02 | |
US09/496,876 | 2000-02-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001057675A1 (fr) | 2001-08-09 |
Family
ID=23974551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2001/003025 WO2001057675A1 (fr) | System and method for effectively utilizing a cache memory in an electronic device |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU2001233131A1 (fr) |
TW (1) | TW502165B (fr) |
WO (1) | WO2001057675A1 (fr) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008076896A2 (fr) | 2006-12-15 | 2008-06-26 | Microchip Technology Incorporated | Configurable cache for a microprocessor
US7877537B2 (en) | 2006-12-15 | 2011-01-25 | Microchip Technology Incorporated | Configurable cache for a microprocessor |
US8255645B2 (en) | 2004-05-03 | 2012-08-28 | Microsoft Corporation | Non-volatile memory cache performance improvement |
US8909861B2 (en) | 2004-10-21 | 2014-12-09 | Microsoft Corporation | Using external memory devices to improve system performance |
US8914557B2 (en) | 2005-12-16 | 2014-12-16 | Microsoft Corporation | Optimizing write and wear performance for a memory |
US9032151B2 (en) | 2008-09-15 | 2015-05-12 | Microsoft Technology Licensing, Llc | Method and system for ensuring reliability of cache data and metadata subsequent to a reboot |
US9208095B2 (en) | 2006-12-15 | 2015-12-08 | Microchip Technology Incorporated | Configurable cache for a microprocessor |
US9361183B2 (en) | 2008-09-19 | 2016-06-07 | Microsoft Technology Licensing, Llc | Aggregation of write traffic to a data store |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7543116B2 (en) * | 2006-01-30 | 2009-06-02 | International Business Machines Corporation | Data processing system, cache system and method for handling a flush operation in a data processing system having multiple coherency domains |
US8631203B2 (en) | 2007-12-10 | 2014-01-14 | Microsoft Corporation | Management of external memory functioning as virtual cache |
US8032707B2 (en) | 2008-09-15 | 2011-10-04 | Microsoft Corporation | Managing cache data and metadata |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4928239A (en) * | 1986-06-27 | 1990-05-22 | Hewlett-Packard Company | Cache memory with variable fetch and replacement schemes |
EP0529217A1 (fr) * | 1991-08-24 | 1993-03-03 | Motorola, Inc. | Antémémoire en temps réel constituée d'une mémoire puce à double usage |
US5829028A (en) * | 1996-05-06 | 1998-10-27 | Advanced Micro Devices, Inc. | Data cache configured to store data in a use-once manner |
- 2001
- 2001-01-30 AU AU2001233131A patent/AU2001233131A1/en not_active Abandoned
- 2001-01-30 WO PCT/US2001/003025 patent/WO2001057675A1/fr active Application Filing
- 2001-03-02 TW TW090100140A patent/TW502165B/zh active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4928239A (en) * | 1986-06-27 | 1990-05-22 | Hewlett-Packard Company | Cache memory with variable fetch and replacement schemes |
EP0529217A1 (fr) * | 1991-08-24 | 1993-03-03 | Motorola, Inc. | Antémémoire en temps réel constituée d'une mémoire puce à double usage |
US5829028A (en) * | 1996-05-06 | 1998-10-27 | Advanced Micro Devices, Inc. | Data cache configured to store data in a use-once manner |
Non-Patent Citations (1)
Title |
---|
"CONDITIONAL LEAST-RECENTLY-USED DATA CACHE DESIGN TO SUPPORT MULTIMEDIA APPLICATIONS", IBM TECHNICAL DISCLOSURE BULLETIN,US,IBM CORP. NEW YORK, vol. 37, no. 2B, 1 February 1994 (1994-02-01), pages 387 - 389, XP000433887, ISSN: 0018-8689 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10216637B2 (en) | 2004-05-03 | 2019-02-26 | Microsoft Technology Licensing, Llc | Non-volatile memory cache performance improvement |
US8255645B2 (en) | 2004-05-03 | 2012-08-28 | Microsoft Corporation | Non-volatile memory cache performance improvement |
US9405693B2 (en) | 2004-05-03 | 2016-08-02 | Microsoft Technology Licensing, Llc | Non-volatile memory cache performance improvement |
US9317209B2 (en) | 2004-10-21 | 2016-04-19 | Microsoft Technology Licensing, Llc | Using external memory devices to improve system performance |
US9690496B2 (en) | 2004-10-21 | 2017-06-27 | Microsoft Technology Licensing, Llc | Using external memory devices to improve system performance |
US8909861B2 (en) | 2004-10-21 | 2014-12-09 | Microsoft Corporation | Using external memory devices to improve system performance |
US11334484B2 (en) | 2005-12-16 | 2022-05-17 | Microsoft Technology Licensing, Llc | Optimizing write and wear performance for a memory |
US8914557B2 (en) | 2005-12-16 | 2014-12-16 | Microsoft Corporation | Optimizing write and wear performance for a memory |
US9529716B2 (en) | 2005-12-16 | 2016-12-27 | Microsoft Technology Licensing, Llc | Optimizing write and wear performance for a memory |
US9208095B2 (en) | 2006-12-15 | 2015-12-08 | Microchip Technology Incorporated | Configurable cache for a microprocessor |
WO2008076896A2 (fr) | 2006-12-15 | 2008-06-26 | Microchip Technology Incorporated | Configurable cache for a microprocessor
US7966457B2 (en) | 2006-12-15 | 2011-06-21 | Microchip Technology Incorporated | Configurable cache for a microprocessor |
US7877537B2 (en) | 2006-12-15 | 2011-01-25 | Microchip Technology Incorporated | Configurable cache for a microprocessor |
WO2008076896A3 (fr) * | 2006-12-15 | 2008-08-07 | Microchip Tech Inc | Configurable cache for a microprocessor
US9032151B2 (en) | 2008-09-15 | 2015-05-12 | Microsoft Technology Licensing, Llc | Method and system for ensuring reliability of cache data and metadata subsequent to a reboot |
US10387313B2 (en) | 2008-09-15 | 2019-08-20 | Microsoft Technology Licensing, Llc | Method and system for ensuring reliability of cache data and metadata subsequent to a reboot |
US9361183B2 (en) | 2008-09-19 | 2016-06-07 | Microsoft Technology Licensing, Llc | Aggregation of write traffic to a data store |
US9448890B2 (en) | 2008-09-19 | 2016-09-20 | Microsoft Technology Licensing, Llc | Aggregation of write traffic to a data store |
US10509730B2 (en) | 2008-09-19 | 2019-12-17 | Microsoft Technology Licensing, Llc | Aggregation of write traffic to a data store |
Also Published As
Publication number | Publication date |
---|---|
AU2001233131A1 (en) | 2001-08-14 |
TW502165B (en) | 2002-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20020161941A1 (en) | System and method for efficiently performing a data transfer operation | |
US6430600B1 (en) | Data processing method and device | |
CN100538737C (zh) | Graphics processing unit pipeline multi-stage synchronization control processor and method thereof | |
JP4597553B2 (ja) | Computer processor and processing device | |
US7069373B2 (en) | USB endpoint controller flexible memory management | |
EP2284713A2 (fr) | Application programming interface for data transfer and bus management over a bus structure | |
JP5769093B2 (ja) | Direct memory access controller, method thereof, and computer program | |
EP1618466B1 (fr) | Computerized processing in which concurrently executing processes communicate via a FIFO buffer | |
US20130014114A1 (en) | Information processing apparatus and method for carrying out multi-thread processing | |
US6339427B1 (en) | Graphics display list handler and method | |
WO2001057675A1 (fr) | System and method for effectively utilizing a cache memory in an electronic device | |
US20240106754A1 (en) | Load Balancing Method for Multi-Thread Forwarding and Related Apparatus | |
JP4007572B2 (ja) | Method and apparatus for dispatching processing elements to program locations | |
US7552232B2 (en) | Speculative method and system for rapid data communications | |
US6728834B2 (en) | System and method for effectively implementing isochronous processor cache | |
EP3709163A1 (fr) | Appareil de traitement, procédé de traitement et programme lisible par ordinateur | |
US5386514A (en) | Queue apparatus and mechanics for a communications interface architecture | |
US6678761B2 (en) | Method and apparatus for budget development under universal serial bus protocol in a multiple speed transmission environment | |
US20050076177A1 (en) | Storage device control unit and method of controlling the same | |
US6598049B1 (en) | Data structure identifying method and recording medium | |
CN110445580B (zh) | Data transmission method and apparatus, storage medium, and electronic apparatus | |
WO2006035727A1 (fr) | Information processing device, memory area management method, and computer program | |
US20030014558A1 (en) | Batch interrupts handling device, virtual shared memory and multiple concurrent processing device | |
WO2007135532A2 (fr) | Method for managing a group of buffer memories and system using same | |
US8719499B2 (en) | Cache-line based notification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |