US20100169608A1 - Flexible counter update and retrieval - Google Patents
- Publication number
- US 2010/0169608 A1 (application Ser. No. 12/723,280)
- Authority
- US
- United States
- Prior art keywords
- count
- external memory
- counts
- logic
- packet
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
Definitions
- the present invention relates generally to network devices, and more particularly, to systems and methods for performing accounting in a network device.
- Each counter logic block in the network device may have different count memory schemes and count-update logic. Also, each count retrieval typically takes at least one PIO-read request. To change characteristics on counts belonging to separate blocks, consistency and coordination between system designers is needed.
- One aspect consistent with principles of the invention is directed to a network device that includes one or more processing units and an external memory.
- Each of the one or more processing units includes a centralized counter configured to perform accounting for the respective processing unit.
- the external memory is associated with at least one of the one or more processing units and is configured to store a group of count values for the at least one processing unit.
- a second aspect consistent with principles of the invention is directed to a method for performing accounting in a network device that includes a group of processing blocks.
- the method includes processing a data unit via one of the processing blocks; generating a request to update a count value based on the processing; transferring the request to centralized counter logic, where the centralized counter logic is configured to perform accounting for at least two of the processing blocks; retrieving, via the centralized counter logic, the count value from a memory, where the memory stores count values for the at least two processing blocks; incrementing, via the centralized counter logic, the count value; and storing the incremented count value in the memory.
- a third aspect consistent with principles of the invention is directed to a method for retrieving counter values in a network device.
- the method includes receiving a request for a block of counter values from a remote device at a centralized counter in the network device, retrieving the block of counter values from a memory, placing the block of counter values in at least one packet, and transmitting the at least one packet to the remote device.
- a fourth aspect consistent with principles of the invention is directed to a network device that includes a group of processing blocks, a memory, and a centralized counter.
- the memory is configured to store counter values for at least two of the processing blocks.
- the centralized counter is configured to update the counter values in the memory, retrieve single counter values from the memory, and retrieve blocks of counter values from the memory.
- FIG. 1 is a block diagram illustrating an exemplary routing system in which systems and methods consistent with the principles of the invention may be implemented;
- FIG. 2 is an exemplary detailed block diagram illustrating portions of the routing system of FIG. 1;
- FIG. 3 illustrates an exemplary physical interface card (PIC) configuration according to an implementation consistent with the principles of the invention;
- FIG. 4 illustrates an exemplary configuration of a counter block in an implementation consistent with the principles of the invention;
- FIG. 5 illustrates an exemplary configuration of a lookup table (LUT) in an implementation consistent with the principles of the invention;
- FIG. 6 illustrates an exemplary configuration of an external memory in an implementation consistent with the principles of the invention;
- FIG. 7 illustrates an exemplary process for updating counts in an implementation consistent with the principles of the invention;
- FIG. 8 illustrates a simplified block diagram of the processing described in relation to FIG. 7; and
- FIG. 9 illustrates an exemplary process for retrieving counts in an implementation consistent with the principles of the invention.
- Implementations consistent with the principles of the invention efficiently perform accounting in a network device by providing a centralized counter logic block that performs all accounting functions and provides the ability to retrieve single counts through one PIO-read request or blocks of counts through a packetization technique thereby saving valuable bandwidth that would otherwise be spent on multiple PIO-read requests.
- FIG. 1 is a block diagram illustrating an exemplary routing system 100 in which systems and methods consistent with the principles of the invention may be implemented.
- System 100 receives one or more packet streams from physical links, processes the packet stream(s) to determine destination information, and transmits the packet stream(s) out on links in accordance with the destination information.
- System 100 may include packet forwarding engines (PFEs) 110 , a switch fabric 120 , and a routing engine (RE) 130 .
- RE 130 performs high level management functions for system 100 .
- RE 130 communicates with other networks and systems connected to system 100 to exchange information regarding network topology.
- RE 130 may create routing tables based on network topology information, create forwarding tables based on the routing tables, and forward the forwarding tables to PFEs 110 .
- PFEs 110 use the forwarding tables to perform route lookup for incoming packets.
- RE 130 may also perform other general control and monitoring functions for system 100 .
- PFEs 110 are each connected to RE 130 and switch fabric 120 .
- PFEs 110 receive packet data on physical links connected to a network, such as a wide area network (WAN), a local area network (LAN), or another type of network.
- Each physical link could be one of many types of transport media, such as optical fiber or Ethernet cable.
- the data on the physical link is formatted according to one of several protocols, such as the synchronous optical network (SONET) standard, an asynchronous transfer mode (ATM) technology, or Ethernet.
- a PFE 110 may process incoming packet data prior to transmitting the data to another PFE or the network. PFE 110 may also perform a route lookup for the data using the forwarding table from RE 130 to determine destination information. If the destination indicates that the data should be sent out on a physical link connected to PFE 110 , then PFE 110 prepares the data for transmission by, for example, adding any necessary headers, and transmits the data from the port associated with the physical link. If the destination indicates that the data should be sent to another PFE via switch fabric 120 , then PFE 110 prepares the data for transmission to the other PFE, if necessary, and sends the data to the other PFE via switch fabric 120 .
- FIG. 2 is a detailed block diagram illustrating portions of routing system 100 .
- PFEs 110 connect to one another through switch fabric 120 .
- Each of PFEs 110 may include one or more packet processors 210 and physical interface cards (PICs) 220 .
- Although FIG. 2 shows two PICs 220 connected to each of packet processors 210 and three packet processors 210 connected to switch fabric 120, in other embodiments consistent with principles of the invention there can be more or fewer PICs 220 and packet processors 210.
- Each of packet processors 210 performs routing functions and handles packet transfers to and from PICs 220 and switch fabric 120 . For each packet it handles, packet processor 210 performs the previously-discussed route lookup function and may perform other processing-related functions.
- PIC 220 may transmit data between a physical link and packet processor 210 .
- Different PICs may be designed to handle different types of physical links.
- one of PICs 220 may be an interface for an optical link while another PIC 220 may be an interface for an Ethernet link.
- the flexible counter update and retrieval technique described below can be implemented in any part (e.g., packet processor 210 , PIC 220 , etc.) of routing system 100 in which accounting services are desired. For explanatory purposes, it will be assumed that the flexible counter update and retrieval technique is implemented in a PIC 220 .
- FIG. 3 illustrates an exemplary PIC 220 configuration according to an implementation consistent with the principles of the invention.
- PIC 220 includes receive logic 310 , send logic 320 , and counter logic 330 that, as will be described in detail below, updates and retrieves count values (referred to hereinafter as “counts”) from an external memory 340 .
- PIC 220 may include additional devices (not shown) that aid in receiving, processing, or transmitting data.
- the number of components and sources illustrated in FIG. 3 is exemplary.
- Receive logic 310 may receive a packet (or other data unit) from one of a group of sources (for illustrative purposes, labeled 1 to 4,000) and determine, based on the packet, what type of event is occurring.
- each source may be associated with 32 different events.
- Exemplary event information may include whether the packet is enqueued, dequeued, dropped, includes an error, etc.
- Other event information may include packet size (e.g., in bytes). This way, PIC 220 may track, for example, how many bytes have been enqueued, dequeued, dropped, etc. from a particular source.
- receive logic 310 may transmit event information to counter logic 330 to allow for the appropriate count(s) to be updated in external memory 340 .
- Send logic 320 may receive a packet for sending out of PIC 220 and notify counter logic 330 accordingly.
- Counter logic 330 may then update the appropriate count(s) in external memory 340 based on the notification.
- Counter logic 330 may receive event information from receive logic 310 and send logic 320 and update the appropriate count(s) in external memory 340 by performing a read/modify/write operation. Counter logic 330 may, for example, retrieve the appropriate count(s) from external memory 340 , increment the count(s), for example, by adding one to the retrieved count(s), and write the new value(s) back to the same location(s) in external memory 340 .
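The read/modify/write operation described above can be sketched as follows. This is an illustrative model only: the dictionary standing in for external memory 340 and the function name `update_count` are assumptions, not taken from the patent.

```python
# Sketch of the read/modify/write count update performed by counter
# logic 330. A dict stands in for external memory 340; in hardware this
# would be an addressed memory access.

external_memory = {}  # location -> count value (illustrative)

def update_count(location, amount=1):
    """Retrieve a count, increment it, and write it back to the same location."""
    count = external_memory.get(location, 0)  # read the current count
    count += amount                           # modify (increment)
    external_memory[location] = count         # write back to the same location
    return count
```

A packet-drop event for a given source would simply call `update_count` with the location resolved from the source/event pair.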
- Counter logic 330 may also configure and allocate count space in external memory 340 .
- Counter logic 330 allocates one count for each event associated with a source.
- counter logic 330 may allocate counts in 1 byte, 2 byte, 4 byte, or 8 byte widths. All count widths are for packet counts, except, as will be described further below, that an 8-byte count may include a 29-bit packet-count field and a 35-bit byte-count field.
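The combined 8-byte count can be modeled with simple bit operations. The field widths (29-bit packet count, 35-bit byte count) come from the description above; placing the packet-count field in the upper bits is an assumption for illustration, since the patent does not specify the ordering.

```python
# Sketch of an 8-byte count holding a 29-bit packet-count field and a
# 35-bit byte-count field in one 64-bit word. Field ordering is assumed.

PKT_BITS, BYTE_BITS = 29, 35

def pack(packet_count, byte_count):
    """Combine the two fields into a single 64-bit value."""
    assert packet_count < (1 << PKT_BITS) and byte_count < (1 << BYTE_BITS)
    return (packet_count << BYTE_BITS) | byte_count

def unpack(word):
    """Split a 64-bit count back into (packet_count, byte_count)."""
    return word >> BYTE_BITS, word & ((1 << BYTE_BITS) - 1)
```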
- External memory 340 stores counts for the different events associated with each source in the system.
- external memory 340 may include a double data rate static random access memory (DDR SRAM) that includes 128,000 memory lines, where each line includes 64 bits.
- FIG. 4 illustrates an exemplary configuration of counter logic 330 in an implementation consistent with the principles of the invention.
- counter logic 330 may include an event interface 410 , an event controller 420 , test mode logic 430 , a multiplex unit 440 , a memory interface 450 , a programmable input/output (PIO) interface 460 , and a packet generator 470 .
- counter logic 330 may include additional devices (not shown) that aid in receiving, processing, or transmitting data.
- Event interface 410 may receive a count retrieval or update request and determine the location of the appropriate count in external memory 340 .
- a count retrieval request may retrieve a single 64-bit line from external memory 340 or a block of lines.
- An update request may include, for example, a source number that identifies the source (e.g., ranging from 0 to 4K-1), an event number that identifies the particular event (e.g., ranging from 0 to 31), and an increment amount in terms of packet length (e.g., from 0 to 65535).
- Counter logic 330 may use this information for updating a count in external memory 340 .
- Event interface 410 may include a first-in, first-out (FIFO) memory 412 , a group of lookup tables (LUTs) 414 , and an output interface 416 .
- FIFO 412 may receive a request and temporarily store the request.
- FIFO 412 outputs requests in a first-in, first-out order.
- LUT 414 provides the location of a count in external memory 340 and the characteristics of the count.
- FIG. 5 illustrates an exemplary configuration of LUT 414 in an implementation consistent with the principles of the invention.
- LUT 414 may include a base pointer table 510 and an offset table 520 .
- Base pointer table 510 may receive a request, including a source number and event number, and provide, based on the request, a base pointer that points to the start of a block of counts in external memory 340 .
- Base pointer table 510 may include a first base pointer field 512 , a second base pointer field 514 , and an offset index field 516 .
- First base pointer field 512 may include one or more base pointers that point to the start of blocks in external memory 340 . In one implementation, each base pointer points to a 64-byte block of memory.
- second base pointer field 514 may include one or more base pointers that point to the start of blocks in external memory 340 .
- external memory 340 may be partitioned into a roll-over region and a saturating region.
- First base pointer field 512 includes one or more base pointers that point to the start of blocks in the roll-over region, while second base pointer field 514 includes one or more base pointers that point to the start of blocks in the saturating region.
- Offset index field 516 may include indices to entries in offset table 520 .
- Offset table 520 may receive an offset index and event number from base pointer table 510 and provide, based thereon, an offset value that points to a location of a count in the block identified by the base pointer provided by base pointer table 510 . That is, offset table 520 provides a location of a count within the block identified by base pointer table 510 .
- Offset table 520 may include a mode field 522 , a width field 524 , and an offset field 526 .
- Mode field 522 may include information indicating whether or not a particular count is enabled, and whether a count is in the roll-over or saturating region (i.e., whether first base pointer field 512 or second base pointer field 514 is to be used).
- Width field 524 may include information identifying the width of a particular count. As set forth above, a count may have a width of 1 byte, 2 bytes, 4 bytes, or 8 bytes. A width of a count may be easily changed by reprogramming the appropriate entry in width field 524 associated with the count.
- Offset field 526 may include offset values that identify the location of counts within a block identified by base pointer table 510 . Base pointer table 510 and offset table 520 may be fully programmable.
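The two-stage lookup described above can be sketched as follows. The table contents, the key shapes, and the interpretation of the base pointer as a 64-byte block index are illustrative assumptions; only the overall base-pointer/offset/width structure comes from FIG. 5's description.

```python
# Sketch of resolving a (source, event) pair to a count location via
# base pointer table 510 and offset table 520. Table contents are
# invented for illustration.

BLOCK_SIZE = 64  # each base pointer identifies a 64-byte block (per the text)

# base_table: source -> (base_pointer, offset_index)
# offset_table: (offset_index, event) -> (mode, width_bytes, offset)
base_table = {7: (3, 0)}
offset_table = {(0, 2): ("enabled", 4, 16)}  # a 4-byte count at offset 16

def resolve(source, event):
    """Return (byte_address, width) of the count, or None if disabled."""
    base_ptr, offset_idx = base_table[source]
    mode, width, offset = offset_table[(offset_idx, event)]
    if mode != "enabled":
        return None
    return base_ptr * BLOCK_SIZE + offset, width
```

Because both tables are programmable, a count's location or width can be changed by rewriting an entry rather than redesigning the logic.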
- output 416 may receive a base pointer and offset and width values from LUT 414 and transfer these values to event controller 420 .
- Event controller 420 identifies the event associated with a received request, converts a base pointer/offset/width values from base pointer table 510 and offset table 520 into an external memory 340 pointer, retrieves the appropriate count value, updates the count value (if appropriate), and stores the updated count value back into external memory 340 .
- Event controller 420 may also transfer count read requests to PIO interface 460 and packet generator 470 for transfer to the appropriate destination.
- Event controller 420 may include a set of adders 422 .
- event controller 420 may include two adders 422 .
- One adder may be dedicated to incrementing packet counts, while the other adder may be dedicated to incrementing byte counts. Packet counts may be incremented by one. Byte counts may be incremented by packet length.
- Test mode logic 430 when activated, may zero out (or set to some predetermined value) all counts in external memory 340 .
- Multiplex unit 440 transfers signals from event controller 420 or test mode logic 430 to the external memory 340 .
- In test mode, test mode logic 430 controls the reading/writing of count values from/to external memory 340.
- Otherwise, event controller 420 controls the reading/writing of count values from/to external memory 340.
- Memory interface 450 transfers data to and receives data from external memory 340 .
- Memory interface 450 also transfers the data received from external memory 340 to its appropriate destination (e.g., event controller 420 ).
- PIO interface 460 handles all PIO requests and allows for a single line of counts to be read from external memory 340 and overwritten, if desired.
- Packet generator 470 is similar to PIO interface 460 except that it allows for bigger chunks of counts (i.e., bigger than a single line) to be retrieved from external memory 340 and transferred out of PIC 220 in a packet format.
- FIG. 6 illustrates an exemplary configuration of external memory 340 in an implementation consistent with the principles of the invention.
- external memory 340 may include 128,000 memory lines, where each line includes 64 bits.
- external memory 340 may be partitioned into a roll-over region 610 and a saturating region 620 .
- a count may be assigned to either roll-over region 610 or saturating region 620 .
- the boundary between these two regions may be software programmable.
- In roll-over region 610, a count increments to some predetermined threshold, resets, increments to the threshold, and so on.
- In one implementation, the threshold is 16.
- In saturating region 620, a count increments to some predetermined threshold and then stops, even if additional events for that count occur.
- Counts may be assigned to roll-over region 610 or saturating region 620 via LUT 414 . As described above, counts may be assigned to 1-byte, 2-byte, 4-byte, or 8-byte widths.
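The two region behaviors can be sketched as follows. Treating the threshold as the natural maximum of the count's width is an assumption for illustration; the patent describes the threshold only as predetermined.

```python
# Sketch of roll-over vs. saturating increment behavior for counts in
# the two regions of external memory 340. The threshold is assumed to
# be the natural limit of the count's width.

def increment(count, amount, width_bytes, saturating):
    """Increment a count, either wrapping or saturating at its maximum."""
    limit = 1 << (8 * width_bytes)             # e.g. 256 for a 1-byte count
    if saturating:
        return min(count + amount, limit - 1)  # stop at the maximum value
    return (count + amount) % limit            # wrap back around to zero
```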
- FIG. 7 illustrates an exemplary process for updating counts in an implementation consistent with the principles of the invention.
- Processing may begin with counter logic 330 receiving an update request [act 710 ].
- Counter logic 330 may receive the update request from, for example, receive logic 310 or send logic 320 in response to the occurrence of an event (e.g., the dropping of a packet).
- the update request may include a source number that identifies the source of the packet, an event number that identifies the event, and an increment amount that identifies the amount that a count associated with the event is to be incremented.
- the update request may be 33 bits, where the source number is 12 bits, the event number is 5 bits, and the increment amount is 16 bits.
- the increment amount may, in one implementation, represent the packet size (or length) in terms of bytes.
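The 33-bit request format can be modeled with simple bit packing. Only the field widths (12, 5, and 16 bits) come from the description above; the ordering of the fields within the word is an assumption.

```python
# Sketch of the 33-bit update request: 12-bit source number, 5-bit
# event number, 16-bit increment amount. Field ordering is assumed.

def encode_request(source, event, increment):
    """Pack the three fields into one 33-bit value."""
    assert source < 4096 and event < 32 and increment < 65536
    return (source << 21) | (event << 16) | increment

def decode_request(word):
    """Unpack a 33-bit request into (source, event, increment)."""
    return word >> 21, (word >> 16) & 0x1F, word & 0xFFFF
```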
- counter logic 330 may retrieve the appropriate count from external memory 340 associated with the source number/event number identified in the request [act 720 ]. To retrieve the count, counter logic 330 may use base pointer table 510 and offset table 520 to obtain a base pointer and offset/width values that uniquely identify the location of the count in external memory 340 . Counter logic 330 may retrieve the count based on the base pointer and offset/width values.
- Counter logic 330 may then increment the count by the amount indicated by the increment amount in the update request [act 730 ].
- all counts in external memory 340 increment by one, except for those counts having an 8-byte width.
- 8-byte counts include a 29-bit packet-count field and a 35-bit byte-count field.
- Counter logic 330 may simultaneously update the packet-count field and byte-count field. The packet-count field may be incremented by one, while the byte-count field may be incremented by the packet size (in bytes).
- counter logic 330 may store the incremented count in external memory 340 [act 740 ]. Processing may then return to act 710 with counter logic 330 processing the next update request.
- offset table 520 includes a mode field 522 that stores a value indicating whether updating for a particular count is enabled or disabled.
- counter logic 330 may be remotely controlled to start/stop the count update processing described above.
- counter logic 330 may be remotely controlled to accept/drop update requests.
- FIG. 8 illustrates a simplified block diagram of the processing described in relation to FIG. 7 .
- counter logic 330 may receive update requests from, for example, receive logic 310 or send logic 320 .
- counter logic 330 may select one of the update requests to process based, for example, on which request was received first. Assume that counter logic 330 receives the update request {source #1, event #X, increment N} first.
- Counter logic 330 may retrieve the appropriate count from external memory 340 , increment the count by the increment amount N, and store the new count (i.e., count+N) back to external memory 340 . Counter logic 330 may then process the next update request.
- FIG. 9 illustrates an exemplary process for retrieving counts in an implementation consistent with the principles of the invention. Processing may begin with counter logic 330 receiving a count retrieval request [act 910 ].
- the count retrieval request may include a PIO-read request for a single line (64 bits) of counts or a PIO-packetization request for a block (64 bytes) of counts from external memory 340 .
- counter logic 330 may retrieve the appropriate line from external memory 340 [act 930 ]. Once retrieved, the line in external memory 340 can be overwritten, if desired, with a user-defined value. Counter logic 330 may transfer the retrieved line from external memory to the appropriate destination via PIO interface 460 [act 950 ].
- To retrieve larger chunks of counts, a block retrieval request can be used. If counter logic 330 receives a block retrieval request [act 920 ], counter logic 330 may retrieve the appropriate block from external memory 340 [act 940 ]. Counter logic 330 may then packetize the retrieved block using packet generator 470 and transfer the packet to the appropriate destination [act 950 ]. In one implementation consistent with the principles of the invention, the packet may be interspersed with regular traffic and transmitted via, for example, send logic 320 . Counts in saturating region 620 may be cleared (or reset) upon retrieval by counter logic 330 , while counts in roll-over region 610 may remain intact upon retrieval.
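The block retrieval path can be sketched as follows. The list standing in for external memory 340, the packet representation, and the function name are illustrative assumptions; only the 64-byte block size and the reset-on-retrieval behavior for saturating counts come from the description above.

```python
# Sketch of retrieving a 64-byte block of counts as one packet, clearing
# the counts on read if they live in the saturating region.

LINE_BYTES = 8    # one 64-bit memory line
BLOCK_LINES = 8   # a 64-byte block spans 8 lines

def retrieve_block(memory, start_line, saturating):
    """Return the block as one packet payload; reset saturating counts."""
    block = memory[start_line:start_line + BLOCK_LINES]
    packet = {"payload": list(block)}         # one packet carries the block
    if saturating:
        for i in range(start_line, start_line + BLOCK_LINES):
            memory[i] = 0                     # counts cleared upon retrieval
    return packet
```

A single packetized read like this replaces the many PIO-read requests that would otherwise be needed, one per count.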
- It may be desirable to zero out (or reset) a group of counts in external memory 340. In one implementation consistent with the principles of the invention, this may be accomplished via test mode logic 430, via a single line retrieval request, or via a block retrieval request. When counter logic 330 enters a test mode, normal traffic to the block is temporarily stopped and test mode logic 430 assumes control of interfacing with memory interface 450. Test mode logic 430 may then walk through all the counts (or some portion of them) in external memory 340 and reset the values of the counts (e.g., set the values to zero). Alternatively, test mode logic 430 may set the counts to a user-specified value.
- When retrieving a single line of counts via a PIO-read request, the requestor may be given the opportunity to overwrite the line of memory. In this way, the counts in the line in external memory 340 may be reset or set to a user-specified value. As described above, when retrieving a block of counts via a block retrieval request, any counts retrieved from saturating region 620 of external memory 340 may be automatically reset.
- Implementations consistent with the principles of the invention improve the performance of accounting in a network device.
- counter logic is aggregated and shared, making it easier to configure counts, manipulate (e.g., start/stop) the count update process, and retrieve and overwrite counts. Flexibility is enhanced through the programmability of the characteristics for each counter.
- the counter logic need only include one set of adders since all counts are updated sequentially. While a single count can be retrieved through one PIO-read request, a one-shot packetization mechanism provides the ability to retrieve a block of counts via a single read request thereby saving valuable bandwidth over conventional techniques.
- Scalability and design reusability are also enhanced through the use of centralized counter logic. By changing the memory and lookup table sizes, the event counts can be scaled without going through architectural changes.
- Implementations consistent with the principles of the invention efficiently perform accounting in a network device by providing centralized counter logic that performs all accounting functions and provides the ability to retrieve single counts through one PIO-read request or blocks of counts through a packetization technique thereby saving valuable bandwidth that would be spent on multiple PIO-read requests.
- logic that performs one or more functions.
- This logic may include hardware, such as an application specific integrated circuit, software, or a combination of hardware and software.
Description
- This application is a continuation of U.S. patent application Ser. No. 10/310,778 filed Dec. 6, 2002, which is incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates generally to network devices, and more particularly, to systems and methods for performing accounting in a network device.
- 2. Description of Related Art
- In a typical network device where enqueue, dequeue, packet drop, byte and event statistics are desired, different counters and counter logic are used throughout the device at different stages of a data pipeline. High programmable input/output (PIO) bandwidth is generally needed when it is desired to retrieve statistics at a fairly short periodic interval. The interval is commonly determined by the number and size of the counters.
- Each counter logic block in the network device may have different count memory schemes and count-update logic. Also, each count retrieval typically takes at least one PIO-read request. To change characteristics on counts belonging to separate blocks, consistency and coordination between system designers is needed.
- As the number of network sources and streams grow, it becomes expensive and at times difficult to handle counts in the distributed manner described above. Flexibility is limited when predetermined counter location, size, as well as roll-over/saturating characteristics are set for a counter. Moreover, reading blocks of counts can be very time-consuming in the above-described architecture since at least one PIO-read request is needed for each count.
- Accordingly, it is desirable to improve the ability to perform accounting in a network device.
- Systems and methods consistent with the principles of the invention address this and other needs by providing a centralized counter logic block, which can be easily tailored to meet the needs of the system.
- The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings,
- The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
- Implementations consistent with the principles of the invention efficiently perform accounting in a network device by providing a centralized counter logic block that performs all accounting functions and provides the ability to retrieve single counts through one PIO-read request or blocks of counts through a packetization technique, thereby saving valuable bandwidth that would otherwise be spent on multiple PIO-read requests.
-
FIG. 1 is a block diagram illustrating an exemplary routing system 100 in which systems and methods consistent with the principles of the invention may be implemented. System 100 receives one or more packet streams from physical links, processes the packet stream(s) to determine destination information, and transmits the packet stream(s) out on links in accordance with the destination information. System 100 may include packet forwarding engines (PFEs) 110, a switch fabric 120, and a routing engine (RE) 130. - RE 130 performs high level management functions for
system 100. For example, RE 130 communicates with other networks and systems connected to system 100 to exchange information regarding network topology. RE 130 may create routing tables based on network topology information, create forwarding tables based on the routing tables, and forward the forwarding tables to PFEs 110. PFEs 110 use the forwarding tables to perform route lookup for incoming packets. RE 130 may also perform other general control and monitoring functions for system 100. -
PFEs 110 are each connected to RE 130 and switch fabric 120. PFEs 110 receive packet data on physical links connected to a network, such as a wide area network (WAN), a local area network (LAN), or another type of network. Each physical link could be one of many types of transport media, such as optical fiber or Ethernet cable. The data on the physical link is formatted according to one of several protocols, such as the synchronous optical network (SONET) standard, an asynchronous transfer mode (ATM) technology, or Ethernet. - A
PFE 110 may process incoming packet data prior to transmitting the data to another PFE or the network. PFE 110 may also perform a route lookup for the data using the forwarding table from RE 130 to determine destination information. If the destination indicates that the data should be sent out on a physical link connected to PFE 110, then PFE 110 prepares the data for transmission by, for example, adding any necessary headers, and transmits the data from the port associated with the physical link. If the destination indicates that the data should be sent to another PFE via switch fabric 120, then PFE 110 prepares the data for transmission to the other PFE, if necessary, and sends the data to the other PFE via switch fabric 120. -
FIG. 2 is a detailed block diagram illustrating portions of routing system 100. PFEs 110 connect to one another through switch fabric 120. Each of PFEs 110 may include one or more packet processors 210 and physical interface cards (PICs) 220. Although FIG. 2 shows two PICs 220 connected to each of packet processors 210 and three packet processors 210 connected to switch fabric 120, in other embodiments consistent with principles of the invention there can be more or fewer PICs 220 and packet processors 210. - Each of
packet processors 210 performs routing functions and handles packet transfers to and from PICs 220 and switch fabric 120. For each packet it handles, packet processor 210 performs the previously-discussed route lookup function and may perform other processing-related functions. -
PIC 220 may transmit data between a physical link and packet processor 210. Different PICs may be designed to handle different types of physical links. For example, one of PICs 220 may be an interface for an optical link while another PIC 220 may be an interface for an Ethernet link. - In
routing system 100 described above, it may be desirable to perform accounting at various stages of the system. The flexible counter update and retrieval technique described below can be implemented in any part (e.g., packet processor 210, PIC 220, etc.) of routing system 100 in which accounting services are desired. For explanatory purposes, it will be assumed that the flexible counter update and retrieval technique is implemented in a PIC 220. -
FIG. 3 illustrates an exemplary PIC 220 configuration according to an implementation consistent with the principles of the invention. As illustrated, PIC 220 includes receive logic 310, send logic 320, and counter logic 330 that, as will be described in detail below, updates and retrieves count values (referred to hereinafter as “counts”) from an external memory 340. It will be appreciated that PIC 220 may include additional devices (not shown) that aid in receiving, processing, or transmitting data. Moreover, the number of components and sources illustrated in FIG. 3 is exemplary. - Receive
logic 310 may receive a packet (or other data unit) from one of a group of sources (for illustrative purposes, labeled 1 to 4,000) and determine, based on the packet, what type of event is occurring. In an exemplary implementation, each source may be associated with 32 different events. Exemplary event information may include whether the packet is enqueued, dequeued, dropped, includes an error, etc. Other event information may include packet size (e.g., in bytes). This way, PIC 220 may track, for example, how many bytes have been enqueued, dequeued, dropped, etc. from a particular source. Upon receipt of a packet, receive logic 310 may transmit event information to counter logic 330 to allow for the appropriate count(s) to be updated in external memory 340. - Send
logic 320 may receive a packet for sending out of PIC 220 and notify counter logic 330 accordingly. Counter logic 330 may then update the appropriate count(s) in external memory 340 based on the notification. -
Counter logic 330 may receive event information from receive logic 310 and send logic 320 and update the appropriate count(s) in external memory 340 by performing a read/modify/write operation. Counter logic 330 may, for example, retrieve the appropriate count(s) from external memory 340, increment the count(s), for example, by adding one to the retrieved count(s), and write the new value(s) back to the same location(s) in external memory 340. -
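The read/modify/write sequence above can be sketched as follows. External memory 340 is modeled here as a Python dictionary, and all class and method names are illustrative, not part of the described hardware.

```python
# Illustrative sketch of the read/modify/write count update performed by
# counter logic 330. External memory 340 is modeled as a dictionary keyed
# by (source, event); all names here are hypothetical.
class CounterLogic:
    def __init__(self):
        self.external_memory = {}  # (source, event) -> count

    def update(self, source, event, increment=1):
        # Read: retrieve the current count (0 if never written).
        count = self.external_memory.get((source, event), 0)
        # Modify: add the increment amount (one for packet counts,
        # packet length for byte counts).
        count += increment
        # Write: store the new value back to the same location.
        self.external_memory[(source, event)] = count
        return count

logic = CounterLogic()
logic.update(source=1, event=3)  # e.g. source 1, "packet dropped" event
logic.update(source=1, event=3)
assert logic.external_memory[(1, 3)] == 2
```

Because all updates funnel through one such block, a single set of update logic serves every source and event.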
Counter logic 330 may also configure and allocate count space in external memory 340. Counter logic 330 allocates one count for each event associated with a source. In one implementation consistent with the principles of the invention, counter logic 330 may allocate counts in 1-byte, 2-byte, 4-byte, or 8-byte widths. All count widths are for packet counts, except, as will be described further below, that an 8-byte count may include a 29-bit packet-count field and a 35-bit byte-count field. -
External memory 340 stores counts for the different events associated with each source in the system. In one implementation, external memory 340 may include a double data rate static random access memory (DDR SRAM) that includes 128,000 memory lines, where each line includes 64 bits. -
FIG. 4 illustrates an exemplary configuration of counter logic 330 in an implementation consistent with the principles of the invention. As illustrated, counter logic 330 may include an event interface 410, an event controller 420, test mode logic 430, a multiplex unit 440, a memory interface 450, a programmable input/output (PIO) interface 460, and a packet generator 470. It will be appreciated that counter logic 330 may include additional devices (not shown) that aid in receiving, processing, or transmitting data. -
Event interface 410 may receive a count retrieval or update request and determine the location of the appropriate count in external memory 340. As will be described in additional detail below, a count retrieval request may retrieve a single 64-bit line from external memory 340 or a block of lines. An update request may include, for example, a source number that identifies the source (e.g., ranging from 0 to 4K-1), an event number that identifies the particular event (e.g., ranging from 0 to 31), and an increment amount in terms of packet length (e.g., from 0 to 65535). Counter logic 330 may use this information for updating a count in external memory 340. -
Event interface 410 may include a first-in, first-out (FIFO) memory 412, a group of lookup tables (LUTs) 414, and an output interface 416. FIFO 412 may receive a request and temporarily store the request. FIFO 412 outputs requests in a first-in, first-out order. LUT 414 provides the location of a count in external memory 340 and the characteristics of the count. -
FIG. 5 illustrates an exemplary configuration of LUT 414 in an implementation consistent with the principles of the invention. As illustrated, LUT 414 may include a base pointer table 510 and an offset table 520. Base pointer table 510 may receive a request, including a source number and event number, and provide, based on the request, a base pointer that points to the start of a block of counts in external memory 340. Base pointer table 510 may include a first base pointer field 512, a second base pointer field 514, and an offset index field 516. First base pointer field 512 may include one or more base pointers that point to the start of blocks in external memory 340. In one implementation, each base pointer points to a 64-byte block of memory. - Similarly, second base pointer field 514 may include one or more base pointers that point to the start of blocks in external memory 340. As will be described in additional detail below, external memory 340 may be partitioned into a roll-over region and a saturating region. First base pointer field 512 includes one or more base pointers that point to the start of blocks in the roll-over region, while second base pointer field 514 includes one or more base pointers that point to the start of blocks in the saturating region. Offset index field 516 may include indices to entries in offset table 520. - Offset table 520 may receive an offset index and event number from base pointer table 510 and provide, based thereon, an offset value that points to a location of a count in the block identified by the base pointer provided by base pointer table 510. That is, offset table 520 provides the location of a count within the block identified by base pointer table 510. - Offset table 520 may include a mode field 522, a width field 524, and an offset field 526. Mode field 522 may include information indicating whether or not a particular count is enabled, and whether a count is in the roll-over or saturating region (i.e., whether first base pointer field 512 or second base pointer field 514 is to be used). Width field 524 may include information identifying the width of a particular count. As set forth above, a count may have a width of 1 byte, 2 bytes, 4 bytes, or 8 bytes. The width of a count may be easily changed by reprogramming the appropriate entry in width field 524 associated with the count. Offset field 526 may include offset values that identify the location of counts within a block identified by base pointer table 510. Base pointer table 510 and offset table 520 may be fully programmable. - Returning to
FIG. 4, output interface 416 may receive a base pointer and offset and width values from LUT 414 and transfer these values to event controller 420. Event controller 420 identifies the event associated with a received request, converts the base pointer/offset/width values from base pointer table 510 and offset table 520 into an external memory 340 pointer, retrieves the appropriate count value, updates the count value (if appropriate), and stores the updated count value back into external memory 340. Event controller 420 may also transfer count read requests to PIO interface 460 and packet generator 470 for transfer to the appropriate destination. -
Event controller 420 may include a set of adders 422. In one implementation, event controller 420 may include two adders 422. One adder may be dedicated to incrementing packet counts, while the other adder may be dedicated to incrementing byte counts. Packet counts may be incremented by one. Byte counts may be incremented by packet length. -
Test mode logic 430, when activated, may zero out (or set to some predetermined value) all counts in external memory 340. Multiplex unit 440 transfers signals from event controller 420 or test mode logic 430 to external memory 340. When test mode is activated, test mode logic 430 controls the reading/writing of count values from/to external memory 340. When test mode is deactivated, event controller 420 controls the reading/writing of count values from/to external memory 340. -
Memory interface 450 transfers data to and receives data from external memory 340. Memory interface 450 also transfers the data received from external memory 340 to its appropriate destination (e.g., event controller 420). PIO interface 460 handles all PIO requests and allows for a single line of counts to be read from external memory 340 and overwritten, if desired. Packet generator 470 is similar to PIO interface 460, except that it allows for larger chunks of counts (i.e., larger than a single line) to be retrieved from external memory 340 and transferred out of PIC 220 in a packet format. -
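The two-stage lookup through base pointer table 510 and offset table 520 described above can be sketched roughly as follows. The table contents, the field encodings, and the assumption that a base pointer is an index of a 64-byte block are all illustrative, not specified by the description.

```python
# Rough sketch of the two-stage lookup in LUT 414: base pointer table 510
# selects a 64-byte block, and offset table 520 locates a count within it.
# All table contents and encodings here are hypothetical.
BLOCK_BYTES = 64  # each base pointer is assumed to index a 64-byte block

base_pointer_table = {
    # source -> (first base pointer, second base pointer, offset index)
    0: (0, 1024, 0),
}
offset_table = {
    # (offset index, event) -> (enabled, use first base, width in bytes, offset)
    (0, 0): (True, True, 4, 0),   # roll-over region, 4-byte count
    (0, 1): (True, False, 8, 8),  # saturating region, 8-byte count
}

def count_address(source, event):
    """Return (byte address, width) of a count, or None if disabled."""
    first_base, second_base, offset_index = base_pointer_table[source]
    enabled, use_first, width, offset = offset_table[(offset_index, event)]
    if not enabled:
        return None
    # The mode selects the roll-over (first) or saturating (second) base pointer.
    base = first_base if use_first else second_base
    return base * BLOCK_BYTES + offset, width

assert count_address(0, 0) == (0, 4)
assert count_address(0, 1) == (1024 * BLOCK_BYTES + 8, 8)
```

Because both tables are programmable, a count's width or region can be changed by rewriting one table entry rather than redesigning per-block counter logic.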
FIG. 6 illustrates an exemplary configuration of external memory 340 in an implementation consistent with the principles of the invention. In one implementation, external memory 340 may include 128,000 memory lines, where each line includes 64 bits. - As illustrated,
external memory 340 may be partitioned into a roll-over region 610 and a saturating region 620. A count may be assigned to either roll-over region 610 or saturating region 620. The boundary between these two regions may be software programmable. In roll-over region 610, a count increments to some predetermined threshold, resets, increments to the threshold, and so on. In one implementation, the threshold is 16. In the saturating region, a count increments to some predetermined threshold and then stops, even if additional events for that count occur. Counts may be assigned to roll-over region 610 or saturating region 620 via LUT 414. As described above, counts may be assigned 1-byte, 2-byte, 4-byte, or 8-byte widths. -
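The two count-region semantics can be sketched as follows. For illustration the threshold is taken to be the maximum value representable in the count's width; the description above leaves the actual threshold implementation-defined.

```python
# Sketch of the roll-over vs. saturating count semantics. The threshold is
# assumed here to be the width-determined maximum, purely for illustration.
def increment(count, amount, width_bytes, saturating):
    limit = 1 << (8 * width_bytes)  # e.g. 256 distinct values for a 1-byte count
    new = count + amount
    if saturating:
        # Saturating region: stick at the maximum; further events are lost.
        return min(new, limit - 1)
    # Roll-over region: wrap around and keep counting.
    return new % limit

assert increment(254, 3, 1, saturating=True) == 255   # saturates at 255
assert increment(254, 3, 1, saturating=False) == 1    # wraps past 255
```

A roll-over count never loses increments but requires the reader to detect wraps; a saturating count is unambiguous once read but must be cleared (as on retrieval, described below) to keep counting.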
FIG. 7 illustrates an exemplary process for updating counts in an implementation consistent with the principles of the invention. Processing may begin with counter logic 330 receiving an update request [act 710]. Counter logic 330 may receive the update request from, for example, receive logic 310 or send logic 320 in response to the occurrence of an event (e.g., the dropping of a packet). The update request may include a source number that identifies the source of the packet, an event number that identifies the event, and an increment amount that identifies the amount that a count associated with the event is to be incremented. In one implementation consistent with the principles of the invention, the update request may be 33 bits, where the source number is 12 bits, the event number is 5 bits, and the increment amount is 16 bits. The increment amount may, in one implementation, represent the packet size (or length) in terms of bytes. - In response to the update request,
counter logic 330 may retrieve the appropriate count from external memory 340 associated with the source number/event number identified in the request [act 720]. To retrieve the count, counter logic 330 may use base pointer table 510 and offset table 520 to obtain a base pointer and offset/width values that uniquely identify the location of the count in external memory 340. Counter logic 330 may retrieve the count based on the base pointer and offset/width values. -
Counter logic 330 may then increment the count by the amount indicated by the increment amount in the update request [act 730]. In one implementation consistent with the principles of the invention, all counts in external memory 340 increment by one, except for those counts having an 8-byte width. As described above, 8-byte counts include a 29-bit packet-count field and a 35-bit byte-count field. Counter logic 330 may simultaneously update the packet-count field and byte-count field. The packet-count field may be incremented by one, while the byte-count field may be incremented by the packet size (in bytes). - Once the count has been incremented,
counter logic 330 may store the incremented count in external memory 340 [act 740]. Processing may then return to act 710 with counter logic 330 processing the next update request. - When desired, the updating of counts may be enabled or disabled. As noted above, offset table 520 includes a
mode field 522 that stores a value indicating whether updating for a particular count is enabled or disabled. In addition, counter logic 330 may be remotely controlled to start/stop the count update processing described above. Moreover, counter logic 330 may be remotely controlled to accept/drop update requests. -
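The simultaneous update of an 8-byte count's two fields can be sketched as follows. The bit layout (packet count in the upper 29 bits, byte count in the lower 35 bits) is an assumption for illustration; only the field widths are stated above.

```python
# Sketch of updating an 8-byte count, which holds a 29-bit packet-count
# field and a 35-bit byte-count field in one 64-bit word. The field
# placement within the word is assumed, not specified.
PKT_BITS, BYTE_BITS = 29, 35
PKT_MASK = (1 << PKT_BITS) - 1
BYTE_MASK = (1 << BYTE_BITS) - 1

def update_8byte(word, packet_len):
    pkts = (word >> BYTE_BITS) & PKT_MASK
    nbytes = word & BYTE_MASK
    pkts = (pkts + 1) & PKT_MASK                 # packet count: increment by one
    nbytes = (nbytes + packet_len) & BYTE_MASK   # byte count: add packet size
    return (pkts << BYTE_BITS) | nbytes

word = update_8byte(0, 1500)
assert (word >> BYTE_BITS) == 1 and (word & BYTE_MASK) == 1500
```

Packing both fields into one 64-bit line lets a single read/modify/write update packet and byte statistics together.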
FIG. 8 illustrates a simplified block diagram of the processing described in relation to FIG. 7. As illustrated, counter logic 330 may receive update requests from, for example, receive logic 310 or send logic 320. In response, counter logic 330 may select one of the update requests to process based, for example, on which request was received first. Assume that counter logic 330 receives the update request {source #1, event #X, increment N} first. Counter logic 330 may retrieve the appropriate count from external memory 340, increment the count by the increment amount N, and store the new count (i.e., count+N) back to external memory 340. Counter logic 330 may then process the next update request. -
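A hypothetical packing of the 33-bit update request described in relation to FIG. 7 (a 12-bit source number, a 5-bit event number, and a 16-bit increment amount) might look like the following; the field order within the word is assumed.

```python
# Hypothetical encoding of the 33-bit update request: 12-bit source,
# 5-bit event, 16-bit increment. Field order is an assumption.
SRC_BITS, EVT_BITS, INC_BITS = 12, 5, 16

def pack_request(source, event, increment):
    assert source < (1 << SRC_BITS) and event < (1 << EVT_BITS)
    assert increment < (1 << INC_BITS)
    return (source << (EVT_BITS + INC_BITS)) | (event << INC_BITS) | increment

def unpack_request(word):
    source = word >> (EVT_BITS + INC_BITS)
    event = (word >> INC_BITS) & ((1 << EVT_BITS) - 1)
    increment = word & ((1 << INC_BITS) - 1)
    return source, event, increment

# A request for source 1, event 17, incrementing by a 1500-byte packet.
assert unpack_request(pack_request(1, 17, 1500)) == (1, 17, 1500)
```

The 12 + 5 + 16 = 33 bits match the request format stated above, covering 4K sources, 32 events per source, and increments up to 65535.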
FIG. 9 illustrates an exemplary process for retrieving counts in an implementation consistent with the principles of the invention. Processing may begin with counter logic 330 receiving a count retrieval request [act 910]. In one implementation, the count retrieval request may include a PIO-read request for a single line (64 bits) of counts or a PIO-packetization request for a block (64 bytes) of counts from external memory 340. - If the request is for a single line of counts [act 920],
counter logic 330 may retrieve the appropriate line from external memory 340 [act 930]. Once retrieved, the line in external memory 340 can be overwritten, if desired, with a user-defined value. Counter logic 330 may transfer the retrieved line from external memory 340 to the appropriate destination via PIO interface 460 [act 950]. - When high bandwidth is desired for count retrieval, a block retrieval request can be used. If
counter logic 330 receives a block retrieval request [act 920], counter logic 330 may retrieve the appropriate block from external memory 340 [act 940]. Counter logic 330 may then packetize the retrieved block using packet generator 470 and transfer the packet to the appropriate destination [act 950]. In one implementation consistent with the principles of the invention, the packet may be interspersed with regular traffic and transmitted via, for example, send logic 320. Counts in saturating region 620 may be cleared (or reset) upon retrieval by counter logic 330, while counts in roll-over region 610 may remain intact upon retrieval. - In certain instances, it may be desirable to zero out (or reset) a group of counts in
external memory 340. In one implementation consistent with the principles of the invention, this may be accomplished via test mode logic 430, via a single line retrieval request, or via a block retrieval request. When counter logic 330 enters a test mode, normal traffic to the block is temporarily stopped and test mode logic 430 assumes control of interfacing with memory interface 450. Test mode logic 430 may then walk through all the counts (or some portion of them) in external memory 340 and reset the values of the counts (e.g., set the values to zero). Alternatively, test mode logic 430 may set the counts to a user-specified value. - When a single line retrieval request is received, the requestor may be given the opportunity to overwrite the line of memory. If desired, the counts in the line in
external memory 340 may be reset or set to a user-specified value. As described above, when retrieving a block of counts via a block retrieval request, any counts retrieved from saturating region 620 of external memory 340 may be automatically reset. - Implementations consistent with the principles of the invention improve the performance of accounting in a network device. Unlike conventional approaches, counter logic is aggregated and shared, making it easier to configure counts, manipulate (e.g., start/stop) the count update process, and retrieve and overwrite counts. Flexibility is enhanced through the programmability of the characteristics of each counter. The counter logic need only include one set of adders, since all counts are updated sequentially. While a single count can be retrieved through one PIO-read request, a one-shot packetization mechanism provides the ability to retrieve a block of counts via a single read request, thereby saving valuable bandwidth over conventional techniques.
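The two retrieval paths of FIG. 9 can be sketched as follows. Memory is modeled as a list of 64-bit line values; the function names and the packet format shown are illustrative assumptions.

```python
# Sketch of the two retrieval paths: a PIO read returns (and may
# overwrite) a single 64-bit line, while a block request packetizes a
# 64-byte block of eight lines. Saturating-region counts are cleared on
# retrieval, as described above; all names here are hypothetical.
LINES_PER_BLOCK = 8  # 64 bytes per block / 8 bytes per line

def pio_read(memory, line_no, overwrite=None):
    value = memory[line_no]
    if overwrite is not None:   # the requestor may overwrite the line
        memory[line_no] = overwrite
    return value

def block_retrieve(memory, first_line, in_saturating_region):
    block = memory[first_line:first_line + LINES_PER_BLOCK]
    if in_saturating_region:    # saturating counts reset upon retrieval
        memory[first_line:first_line + LINES_PER_BLOCK] = [0] * LINES_PER_BLOCK
    return {"header": "count-block", "payload": block}  # packetized result

mem = list(range(16))
assert pio_read(mem, 3, overwrite=0) == 3 and mem[3] == 0
packet = block_retrieve(mem, 8, in_saturating_region=True)
assert packet["payload"] == list(range(8, 16)) and mem[8:16] == [0] * 8
```

One block request thus returns eight lines of counts for the cost of a single read transaction, which is the bandwidth saving claimed over repeated PIO reads.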
- Scalability and design reusability are also enhanced through the use of centralized counter logic. By changing the memory and lookup table sizes, the event counts can be scaled without architectural changes.
- Implementations consistent with the principles of the invention efficiently perform accounting in a network device by providing centralized counter logic that performs all accounting functions and provides the ability to retrieve single counts through one PIO-read request or blocks of counts through a packetization technique, thereby saving valuable bandwidth that would otherwise be spent on multiple PIO-read requests.
- The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while series of acts have been described in
FIGS. 7 and 9, the order of the acts may vary in other implementations consistent with the principles of the invention. Also, non-dependent acts may be performed in parallel. - Further, certain portions of the invention have been described as “logic” that performs one or more functions. This logic may include hardware, such as an application specific integrated circuit, software, or a combination of hardware and software.
- No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used.
- The scope of the invention is defined by the claims and their equivalents.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/723,280 US8331359B2 (en) | 2002-12-06 | 2010-03-12 | Flexible counter update and retrieval |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/310,778 US7317718B1 (en) | 2002-12-06 | 2002-12-06 | Flexible counter update and retrieval |
US11/943,225 US7710952B1 (en) | 2002-12-06 | 2007-11-20 | Flexible counter update and retrieval |
US12/723,280 US8331359B2 (en) | 2002-12-06 | 2010-03-12 | Flexible counter update and retrieval |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/943,225 Continuation US7710952B1 (en) | 2002-12-06 | 2007-11-20 | Flexible counter update and retrieval |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100169608A1 true US20100169608A1 (en) | 2010-07-01 |
US8331359B2 US8331359B2 (en) | 2012-12-11 |
Family
ID=38893438
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/310,778 Expired - Fee Related US7317718B1 (en) | 2002-12-06 | 2002-12-06 | Flexible counter update and retrieval |
US11/943,225 Expired - Fee Related US7710952B1 (en) | 2002-12-06 | 2007-11-20 | Flexible counter update and retrieval |
US12/723,280 Expired - Fee Related US8331359B2 (en) | 2002-12-06 | 2010-03-12 | Flexible counter update and retrieval |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/310,778 Expired - Fee Related US7317718B1 (en) | 2002-12-06 | 2002-12-06 | Flexible counter update and retrieval |
US11/943,225 Expired - Fee Related US7710952B1 (en) | 2002-12-06 | 2007-11-20 | Flexible counter update and retrieval |
Country Status (1)
Country | Link |
---|---|
US (3) | US7317718B1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140118369A1 (en) * | 2012-10-26 | 2014-05-01 | Nvidia Corporation | Managing event count reports in a tile-based architecture |
US20140372691A1 (en) * | 2013-06-13 | 2014-12-18 | Hewlett-Packard Development Company, L. P. | Counter policy implementation |
US9413627B2 (en) * | 2006-01-31 | 2016-08-09 | Juniper Networks, Inc. | Data unit counter |
US9734548B2 (en) | 2012-10-26 | 2017-08-15 | Nvidia Corporation | Caching of adaptively sized cache tiles in a unified L2 cache with surface compression |
US10032243B2 (en) | 2012-10-26 | 2018-07-24 | Nvidia Corporation | Distributed tiled caching |
US10438314B2 (en) | 2012-10-26 | 2019-10-08 | Nvidia Corporation | Two-pass cache tile processing for visibility testing in a tile-based architecture |
US20220217076A1 (en) * | 2019-05-23 | 2022-07-07 | Hewlett Packard Enterprise Development Lp | Method and system for facilitating wide lag and ecmp control |
US12267229B2 (en) | 2020-03-23 | 2025-04-01 | Hewlett Packard Enterprise Development Lp | System and method for facilitating data-driven intelligent network with endpoint congestion detection and control |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7317718B1 (en) * | 2002-12-06 | 2008-01-08 | Juniper Networks, Inc. | Flexible counter update and retrieval |
US7743140B2 (en) * | 2006-12-08 | 2010-06-22 | International Business Machines Corporation | Binding processes in a non-uniform memory access system |
US9838222B2 (en) * | 2013-06-13 | 2017-12-05 | Hewlett Packard Enterprise Development Lp | Counter update remote processing |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6038592A (en) * | 1993-04-19 | 2000-03-14 | International Business Machines Corporation | Method and device of multicasting data in a communications system |
US6192326B1 (en) * | 1996-08-29 | 2001-02-20 | Nokia Telecommunications Oy | Event recording in a service database system |
US6202130B1 (en) * | 1998-04-17 | 2001-03-13 | Motorola, Inc. | Data processing system for processing vector data and method therefor |
US6226295B1 (en) * | 1995-09-28 | 2001-05-01 | Micron Technology, Inc. | High speed programmable counter |
US6330599B1 (en) * | 1997-08-05 | 2001-12-11 | Cisco Technology, Inc. | Virtual interfaces with dynamic binding |
US6460010B1 (en) * | 1999-09-22 | 2002-10-01 | Alcatel Canada Inc. | Method and apparatus for statistical compilation |
US20030131218A1 (en) * | 2002-01-07 | 2003-07-10 | International Business Machines Corporation | Method and apparatus for mapping software prefetch instructions to hardware prefetch logic |
US6625266B1 (en) * | 1997-12-16 | 2003-09-23 | Nokia Corporation | Event pre-processing for composing a report |
US20030200412A1 (en) * | 2002-04-17 | 2003-10-23 | Marcus Peinado | Using limits on address translation to control access to an addressable entity |
US20030204673A1 (en) * | 2002-04-26 | 2003-10-30 | Suresh Venkumahanti | Data prefetching apparatus in a data processing system and method therefor |
US6642762B2 (en) * | 2001-07-09 | 2003-11-04 | Broadcom Corporation | Method and apparatus to ensure DLL locking at minimum delay |
US20040228462A1 (en) * | 2001-12-13 | 2004-11-18 | Nokia Corporation | Method and system for collecting counter data in a network element |
US20070226397A1 (en) * | 2004-07-20 | 2007-09-27 | Koninklijke Philips Electronics, N.V. | Time Budgeting for Non-Data Transfer Operations in Drive Units |
US7318123B2 (en) * | 2000-11-30 | 2008-01-08 | Mosaid Technologies Incorporated | Method and apparatus for accelerating retrieval of data from a memory system with cache by reducing latency |
US7317718B1 (en) * | 2002-12-06 | 2008-01-08 | Juniper Networks, Inc. | Flexible counter update and retrieval |
-
2002
- 2002-12-06 US US10/310,778 patent/US7317718B1/en not_active Expired - Fee Related
-
2007
- 2007-11-20 US US11/943,225 patent/US7710952B1/en not_active Expired - Fee Related
-
2010
- 2010-03-12 US US12/723,280 patent/US8331359B2/en not_active Expired - Fee Related
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6038592A (en) * | 1993-04-19 | 2000-03-14 | International Business Machines Corporation | Method and device of multicasting data in a communications system |
US6226295B1 (en) * | 1995-09-28 | 2001-05-01 | Micron Technology, Inc. | High speed programmable counter |
US6192326B1 (en) * | 1996-08-29 | 2001-02-20 | Nokia Telecommunications Oy | Event recording in a service database system |
US6330599B1 (en) * | 1997-08-05 | 2001-12-11 | Cisco Technology, Inc. | Virtual interfaces with dynamic binding |
US6625266B1 (en) * | 1997-12-16 | 2003-09-23 | Nokia Corporation | Event pre-processing for composing a report |
US6202130B1 (en) * | 1998-04-17 | 2001-03-13 | Motorola, Inc. | Data processing system for processing vector data and method therefor |
US6460010B1 (en) * | 1999-09-22 | 2002-10-01 | Alcatel Canada Inc. | Method and apparatus for statistical compilation |
US7318123B2 (en) * | 2000-11-30 | 2008-01-08 | Mosaid Technologies Incorporated | Method and apparatus for accelerating retrieval of data from a memory system with cache by reducing latency |
US6642762B2 (en) * | 2001-07-09 | 2003-11-04 | Broadcom Corporation | Method and apparatus to ensure DLL locking at minimum delay |
US20040228462A1 (en) * | 2001-12-13 | 2004-11-18 | Nokia Corporation | Method and system for collecting counter data in a network element |
US20030131218A1 (en) * | 2002-01-07 | 2003-07-10 | International Business Machines Corporation | Method and apparatus for mapping software prefetch instructions to hardware prefetch logic |
US20030200412A1 (en) * | 2002-04-17 | 2003-10-23 | Marcus Peinado | Using limits on address translation to control access to an addressable entity |
US20030204673A1 (en) * | 2002-04-26 | 2003-10-30 | Suresh Venkumahanti | Data prefetching apparatus in a data processing system and method therefor |
US7317718B1 (en) * | 2002-12-06 | 2008-01-08 | Juniper Networks, Inc. | Flexible counter update and retrieval |
US7710952B1 (en) * | 2002-12-06 | 2010-05-04 | Juniper Networks, Inc. | Flexible counter update and retrieval |
US20070226397A1 (en) * | 2004-07-20 | 2007-09-27 | Koninklijke Philips Electronics, N.V. | Time Budgeting for Non-Data Transfer Operations in Drive Units |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9413627B2 (en) * | 2006-01-31 | 2016-08-09 | Juniper Networks, Inc. | Data unit counter |
US20140118369A1 (en) * | 2012-10-26 | 2014-05-01 | Nvidia Corporation | Managing event count reports in a tile-based architecture |
US9448804B2 (en) | 2012-10-26 | 2016-09-20 | Nvidia Corporation | Techniques for managing graphics processing resources in a tile-based architecture |
US9483270B2 (en) | 2012-10-26 | 2016-11-01 | Nvidia Corporation | Distributed tiled caching |
US9612839B2 (en) | 2012-10-26 | 2017-04-04 | Nvidia Corporation | Higher accuracy Z-culling in a tile-based architecture |
US9639366B2 (en) | 2012-10-26 | 2017-05-02 | Nvidia Corporation | Techniques for managing graphics processing resources in a tile-based architecture |
US9639367B2 (en) * | 2012-10-26 | 2017-05-02 | Nvidia Corporation | Managing event count reports in a tile-based architecture |
US9734548B2 (en) | 2012-10-26 | 2017-08-15 | Nvidia Corporation | Caching of adaptively sized cache tiles in a unified L2 cache with surface compression |
US9792122B2 (en) | 2012-10-26 | 2017-10-17 | Nvidia Corporation | Heuristics for improving performance in a tile based architecture |
US9952868B2 (en) | 2012-10-26 | 2018-04-24 | Nvidia Corporation | Two-pass cache tile processing for visibility testing in a tile-based architecture |
US10032243B2 (en) | 2012-10-26 | 2018-07-24 | Nvidia Corporation | Distributed tiled caching |
US10032242B2 (en) | 2012-10-26 | 2018-07-24 | Nvidia Corporation | Managing deferred contexts in a cache tiling architecture |
US10083036B2 (en) | 2012-10-26 | 2018-09-25 | Nvidia Corporation | Techniques for managing graphics processing resources in a tile-based architecture |
US10223122B2 (en) | 2012-10-26 | 2019-03-05 | Nvidia Corporation | Managing event count reports in a tile-based architecture |
US10282803B2 (en) | 2012-10-26 | 2019-05-07 | Nvidia Corporation | State handling in a tiled architecture |
US10438314B2 (en) | 2012-10-26 | 2019-10-08 | Nvidia Corporation | Two-pass cache tile processing for visibility testing in a tile-based architecture |
US10489875B2 (en) | 2012-10-26 | 2019-11-26 | Nvidia Corporation | Data structures for efficient tiled rendering |
US11107176B2 (en) | 2012-10-26 | 2021-08-31 | Nvidia Corporation | Scheduling cache traffic in a tile-based architecture |
US20140372691A1 (en) * | 2013-06-13 | 2014-12-18 | Hewlett-Packard Development Company, L. P. | Counter policy implementation |
US20220217076A1 (en) * | 2019-05-23 | 2022-07-07 | Hewlett Packard Enterprise Development Lp | Method and system for facilitating wide lag and ecmp control |
US11750504B2 (en) | 2019-05-23 | 2023-09-05 | Hewlett Packard Enterprise Development Lp | Method and system for providing network egress fairness between applications |
US11757763B2 (en) | 2019-05-23 | 2023-09-12 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient host memory access from a network interface controller (NIC) |
US11757764B2 (en) | 2019-05-23 | 2023-09-12 | Hewlett Packard Enterprise Development Lp | Optimized adaptive routing to reduce number of hops |
US11765074B2 (en) | 2019-05-23 | 2023-09-19 | Hewlett Packard Enterprise Development Lp | System and method for facilitating hybrid message matching in a network interface controller (NIC) |
US11777843B2 (en) | 2019-05-23 | 2023-10-03 | Hewlett Packard Enterprise Development Lp | System and method for facilitating data-driven intelligent network |
US11784920B2 (en) | 2019-05-23 | 2023-10-10 | Hewlett Packard Enterprise Development Lp | Algorithms for use of load information from neighboring nodes in adaptive routing |
US11799764B2 (en) | 2019-05-23 | 2023-10-24 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient packet injection into an output buffer in a network interface controller (NIC) |
US11818037B2 (en) | 2019-05-23 | 2023-11-14 | Hewlett Packard Enterprise Development Lp | Switch device for facilitating switching in data-driven intelligent network |
US11848859B2 (en) | 2019-05-23 | 2023-12-19 | Hewlett Packard Enterprise Development Lp | System and method for facilitating on-demand paging in a network interface controller (NIC) |
US11855881B2 (en) | 2019-05-23 | 2023-12-26 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient packet forwarding using a message state table in a network interface controller (NIC) |
US11863431B2 (en) | 2019-05-23 | 2024-01-02 | Hewlett Packard Enterprise Development Lp | System and method for facilitating fine-grain flow control in a network interface controller (NIC) |
US11876702B2 (en) | 2019-05-23 | 2024-01-16 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient address translation in a network interface controller (NIC) |
US11876701B2 (en) | 2019-05-23 | 2024-01-16 | Hewlett Packard Enterprise Development Lp | System and method for facilitating operation management in a network interface controller (NIC) for accelerators |
US11882025B2 (en) | 2019-05-23 | 2024-01-23 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient message matching in a network interface controller (NIC) |
US11899596B2 (en) | 2019-05-23 | 2024-02-13 | Hewlett Packard Enterprise Development Lp | System and method for facilitating dynamic command management in a network interface controller (NIC) |
US11902150B2 (en) | 2019-05-23 | 2024-02-13 | Hewlett Packard Enterprise Development Lp | Systems and methods for adaptive routing in the presence of persistent flows |
US11916782B2 (en) | 2019-05-23 | 2024-02-27 | Hewlett Packard Enterprise Development Lp | System and method for facilitating global fairness in a network |
US11916781B2 (en) | 2019-05-23 | 2024-02-27 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC) |
US11929919B2 (en) | 2019-05-23 | 2024-03-12 | Hewlett Packard Enterprise Development Lp | System and method for facilitating self-managing reduction engines |
US11962490B2 (en) | 2019-05-23 | 2024-04-16 | Hewlett Packard Enterprise Development Lp | Systems and methods for per traffic class routing |
US11968116B2 (en) | 2019-05-23 | 2024-04-23 | Hewlett Packard Enterprise Development Lp | Method and system for facilitating lossy dropping and ECN marking |
US11973685B2 (en) | 2019-05-23 | 2024-04-30 | Hewlett Packard Enterprise Development Lp | Fat tree adaptive routing |
US11985060B2 (en) | 2019-05-23 | 2024-05-14 | Hewlett Packard Enterprise Development Lp | Dragonfly routing with incomplete group connectivity |
US11991072B2 (en) | 2019-05-23 | 2024-05-21 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient event notification management for a network interface controller (NIC) |
US12003411B2 (en) | 2019-05-23 | 2024-06-04 | Hewlett Packard Enterprise Development Lp | Systems and methods for on the fly routing in the presence of errors |
US12021738B2 (en) | 2019-05-23 | 2024-06-25 | Hewlett Packard Enterprise Development Lp | Deadlock-free multicast routing on a dragonfly network |
US12034633B2 (en) | 2019-05-23 | 2024-07-09 | Hewlett Packard Enterprise Development Lp | System and method for facilitating tracer packets in a data-driven intelligent network |
US12040969B2 (en) | 2019-05-23 | 2024-07-16 | Hewlett Packard Enterprise Development Lp | System and method for facilitating data-driven intelligent network with flow control of individual applications and traffic flows |
US12058032B2 (en) | 2019-05-23 | 2024-08-06 | Hewlett Packard Enterprise Development Lp | Weighting routing |
US12058033B2 (en) | 2019-05-23 | 2024-08-06 | Hewlett Packard Enterprise Development Lp | Method and system for providing network ingress fairness between applications |
US12132648B2 (en) | 2019-05-23 | 2024-10-29 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient load balancing in a network interface controller (NIC) |
US12218829B2 (en) | 2019-05-23 | 2025-02-04 | Hewlett Packard Enterprise Development Lp | System and method for facilitating data-driven intelligent network with per-flow credit-based flow control |
US12218828B2 (en) | 2019-05-23 | 2025-02-04 | Hewlett Packard Enterprise Development Lp | System and method for facilitating efficient packet forwarding in a network interface controller (NIC) |
US12244489B2 (en) | 2019-05-23 | 2025-03-04 | Hewlett Packard Enterprise Development Lp | System and method for performing on-the-fly reduction in a network |
US12267229B2 (en) | 2020-03-23 | 2025-04-01 | Hewlett Packard Enterprise Development Lp | System and method for facilitating data-driven intelligent network with endpoint congestion detection and control |
Also Published As
Publication number | Publication date |
---|---|
US7317718B1 (en) | 2008-01-08 |
US7710952B1 (en) | 2010-05-04 |
US8331359B2 (en) | 2012-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8331359B2 (en) | Flexible counter update and retrieval | |
JP3984680B2 (en) | A digital network having a mechanism for grouping virtual message transfer paths having similar transfer service rates in order to increase the efficiency of transfer scheduling on the virtual message transfer path | |
EP0797335B1 (en) | Network adapter | |
US7843816B1 (en) | Systems and methods for limiting low priority traffic from blocking high priority traffic | |
US5822300A (en) | Congestion management scheme | |
US7814283B1 (en) | Low latency request dispatcher | |
US7613192B1 (en) | Reorder engine with error recovery | |
US5311509A (en) | Configurable gigabits switch adapter | |
US5640399A (en) | Single chip network router | |
US7421564B1 (en) | Incrementing successive write operations to a plurality of memory devices | |
US8180966B2 (en) | System and method for operating a packet buffer in an intermediate node | |
CA2470758A1 (en) | Deferred queuing in a buffered switch | |
US8015312B2 (en) | Scheduler for transmit system interfaces | |
US5557266A (en) | System for cascading data switches in a communication node | |
US8706896B2 (en) | Guaranteed bandwidth memory apparatus and method | |
US20060050639A1 (en) | Credit-based method and apparatus for controlling data communications | |
US7971008B2 (en) | Flexible queue and stream mapping systems and methods | |
US6636952B1 (en) | Systems and methods for processing packet streams in a network device | |
EP1065835B1 (en) | Packet memory management (PACMAN) scheme | |
US7711910B1 (en) | Flexible queue and stream mapping systems and methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| CC | Certificate of correction | |
| FPAY | Fee payment | Year of fee payment: 4 |
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20201211 |