
CN118964226A - Method, device, equipment and storage medium for storing pre-embedded orders - Google Patents

Method, device, equipment and storage medium for storing pre-embedded orders

Info

Publication number
CN118964226A
Authority
CN
China
Prior art keywords
address
preset
address management
management area
embedded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411437804.8A
Other languages
Chinese (zh)
Other versions
CN118964226B (en)
Inventor
李红英
陈旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shengli Anyuan Technology Hangzhou Co ltd
Original Assignee
Shengli Anyuan Technology Hangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shengli Anyuan Technology Hangzhou Co ltd
Priority to CN202411437804.8A
Publication of CN118964226A
Application granted
Publication of CN118964226B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored program computers
    • G06F 15/78 Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7807 System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F 15/781 On-chip cache; Off-chip memory
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Hardware Design (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a pre-embedded order storage method, device, equipment and storage medium, relating to the technical field of data storage. The method comprises the following steps: when a pre-embedded order is received, querying, according to the order-placement channel number corresponding to the order, whether a primary address management area address for that channel number exists in the preset FPGA-internal address management; if it exists, reading the primary address management area at that address, obtaining the maximum order number of each secondary data storage area, and determining the insertion position of the pre-embedded order by comparing those maximum order numbers with the order's own order number; querying the secondary data storage area address at the insertion position within the primary address management area, and reading that secondary data storage area to obtain each stored target order number; and determining the insertion address of the pre-embedded order from the ascending order of the target order numbers, then storing the order into the secondary data storage area at that address. The application thus realizes optimized storage of pre-embedded orders.

Description

Pre-embedded order storage method, device, equipment and storage medium
Technical Field
The present invention relates to the field of data storage technologies, and in particular to a method, an apparatus, a device and a storage medium for storing pre-embedded orders.
Background
In securities and futures trading, professional high-frequency investors pursue order processing with near-zero latency. This gave rise to the pre-embedded order function: orders are placed in advance, before the trading session opens, and are reported directly to the exchange once trading starts. The number of such orders is large, so the corresponding storage demand is also large, and because different platforms store different orders, storage management algorithms for pre-embedded orders have emerged to maximize the utilization of the storage space. Within an FPGA (Field Programmable Gate Array), multiplexing memory spaces as much as possible under the same resource storage format, and using them flexibly, is therefore one of the research problems of high-frequency trading counters.
Disclosure of Invention
In view of the above, the present invention aims to provide a method, an apparatus, a device and a storage medium for storing pre-embedded orders, which optimize the pre-embedded order storage scheme, greatly reduce the required storage space and improve space utilization. The specific scheme is as follows:
In a first aspect, the application discloses a pre-embedded order storage method, comprising the following steps:
when a pre-embedded order is received, querying, according to the order-placement channel number corresponding to the order, whether a preset primary address management area address corresponding to that channel number exists in the preset FPGA-internal address management;
if it exists, reading the preset primary address management area according to that address, obtaining the maximum order number corresponding to each preset secondary data storage area, and determining the insertion position of the pre-embedded order based on the magnitude relation between those maximum order numbers and the order number of the pre-embedded order;
querying a preset secondary data storage area address in the preset primary address management area using the insertion position, and reading the preset secondary data storage area at that address to obtain each target order number of the stored pre-embedded orders;
and determining the insertion address of the pre-embedded order based on the ascending order of the target order numbers, and storing the pre-embedded order into the preset secondary data storage area according to that insertion address.
Optionally, after querying, according to the order-placement channel number corresponding to the pre-embedded order, whether a preset primary address management area address corresponding to that channel number exists in the preset FPGA-internal address management, the method further comprises:
if no preset primary address management area address corresponding to the channel number exists in the preset FPGA-internal address management, newly creating a primary address management area and a secondary address management area to store the information of the pre-embedded order.
Optionally, determining the insertion position of the pre-embedded order based on the magnitude relation between the maximum order number and the order number of the pre-embedded order comprises:
if the order number of the pre-embedded order is smaller than a maximum order number, determining the preset secondary data storage area corresponding to that maximum order number as the insertion position of the pre-embedded order.
Optionally, the method further comprises:
if the number of preset secondary address management area addresses stored in the preset primary address management area reaches a first number threshold, applying for a new primary address management area, and storing the address of the new primary address management area into the preset primary address management area;
and if the number of pre-embedded orders stored in the preset secondary address management area reaches a second number threshold, applying for a new secondary address management area, and storing the address of the new secondary address management area into the current primary address management area.
Optionally, the method further comprises:
if the preset primary address management area has no next-linked primary address management area, recovering the preset primary address management area address into a preset address cache pool when the front end deletes the pre-embedded order, so that the address is reused when the front end next applies for a primary address management area address;
if the preset primary address management area has a next-linked primary address management area but no previously-linked one, recovering the preset primary address management area address into the preset address cache pool when the front end deletes the pre-embedded order, so that the address is reused when the front end next applies for a primary address management area address, and writing the next-linked primary address management area address into the preset FPGA-internal address management;
if the preset primary address management area has both a next-linked and a previously-linked primary address management area, and the previously-linked area's address is not one read out of the preset FPGA-internal address management, recovering the preset primary address management area address into the preset address cache pool when the front end deletes the pre-embedded order, so that the address is reused when the front end next applies for a primary address management area address, and writing the next-linked primary address management area address into the first address bit of the previously-linked primary address management area, so as to eliminate the invalid link.
Optionally, after storing the pre-embedded order into the preset secondary data storage area according to the insertion address, the method further comprises:
if the number of pre-embedded orders in the preset secondary data storage area reaches a third number threshold, applying for a new secondary data storage area, and storing the order numbers in the preset secondary data storage area that meet a preset condition into the new secondary data storage area.
Optionally, the method further comprises:
when the front end deletes a pre-embedded order, if only one pre-embedded order exists in the preset secondary data storage area, recovering the preset secondary data storage area address into a preset address cache pool, so that the address is reused when the front end next applies for a secondary address management area address;
and when the front end deletes a pre-embedded order, if multiple pre-embedded orders exist in the preset secondary data storage area, deleting the pre-embedded order at its insertion address, rearranging the remaining pre-embedded orders, and writing the rearranged orders back into the preset secondary data storage area.
In a second aspect, the present application discloses a pre-embedded order storage apparatus, comprising:
an address query module, configured to query, when a pre-embedded order is received, whether a preset primary address management area address corresponding to the order-placement channel number of the order exists in the preset FPGA-internal address management;
an insertion position determining module, configured to, if the address exists, read the preset primary address management area according to it, obtain the maximum order number corresponding to each preset secondary data storage area, and determine the insertion position of the pre-embedded order based on the magnitude relation between those maximum order numbers and the order number of the pre-embedded order;
an order number acquisition module, configured to query a preset secondary data storage area address in the preset primary address management area using the insertion position, and read the preset secondary data storage area at that address to obtain each target order number of the stored pre-embedded orders;
and a pre-embedded order storage module, configured to determine the insertion address of the pre-embedded order based on the ascending order of the target order numbers, and store the pre-embedded order into the preset secondary data storage area according to that insertion address.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
and a processor for executing the computer program to implement the pre-embedded order storage method described above.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program which, when executed by a processor, implements the pre-embedded order storage method described above.
It can be seen that when a pre-embedded order is to be stored, the application first queries, according to the order-placement channel number corresponding to the order, whether a preset primary address management area address for that channel number exists in the preset FPGA-internal address management; if it exists, the preset primary address management area is read at that address, the maximum order number corresponding to each preset secondary data storage area is obtained, and the insertion position of the pre-embedded order is determined by comparing those maximum order numbers with the order's own order number; next, the preset secondary data storage area address at the insertion position is queried in the preset primary address management area, and that secondary data storage area is read to obtain each stored target order number; finally, the insertion address of the pre-embedded order is determined from the ascending order of the target order numbers, and the order is stored into the preset secondary data storage area accordingly. The application thus performs dynamic address allocation per order channel number through the FPGA-internal address management, the primary address management areas and the secondary data storage areas, greatly increasing storage-space utilization; each index is dynamically allocated, greatly saving FPGA-internal resource consumption and reducing the required storage space, which in turn improves order-processing speed; and orders are sorted by ascending order number, so the FPGA can process pre-embedded orders in sequence.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present invention, and that other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a pre-embedded order storage method disclosed by the application;
FIG. 2 is a schematic diagram of a primary address management area data format disclosed by the application;
FIG. 3 is a schematic flow chart, disclosed by the application, of the processing after a secondary data storage area is full with 16 entries;
FIG. 4 is a schematic structural diagram of a pre-embedded order storage apparatus disclosed by the application;
FIG. 5 is a block diagram of an electronic device disclosed by the application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The current storage mode builds a dynamic storage space from the FPGA-internal and FPGA-external storage resources: each index can apply for a space to store its current order numbers, and another storage space is applied for once the current one is full. The pre-embedded orders under the current index are stored in ascending order of order number, and after a pre-embedded order is sent out, the entry for that order number is deleted, ensuring that every pre-embedded order read out is valid. However, if the current pre-embedded order cannot be reported to the exchange to search for 1v2, the query response is slow when all pre-embedded orders under the current index are queried, because all order numbers must be read out of the FPGA-external storage resource, which takes longer, although the resource multiplexing rate is higher. To solve these technical problems, the application discloses a pre-embedded order storage method which optimizes the pre-embedded order storage scheme of the existing high-frequency trading system, thereby greatly reducing the storage space and improving space utilization.
Referring to FIG. 1, an embodiment of the invention discloses a pre-embedded order storage method comprising the following steps:
Step S11: when a pre-embedded order is received, query, according to the order-placement channel number corresponding to the order, whether a preset primary address management area address corresponding to that channel number exists in the preset FPGA-internal address management.
In this embodiment, the storage of pre-embedded orders is divided into three tiers: the FPGA-internal address management, the primary address management areas and the secondary data storage areas. Each index in the FPGA-internal address management is unique and stores a primary address management area address. One primary address management area can store 16 entries in total: 15 of them each hold a secondary data storage area address together with the maximum order number currently in that secondary data storage area, and the remaining entry holds the address of the next primary address management area in the chain. One secondary data storage area can store 16 entries in total, all of them order numbers of pre-embedded orders, arranged from small to large; each order number carries the information of the other related fields that need to be stored.
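The three-tier layout described above can be modeled in a few lines. The sketch below is illustrative only (the actual design is FPGA logic, not software); the class and field names are assumptions of this sketch, while the capacities (16 order slots per secondary area, 15 secondary-area slots plus one link slot per primary area) follow the description.

```python
from dataclasses import dataclass, field

@dataclass
class SecondaryDataArea:
    """Up to 16 order numbers, kept in ascending order."""
    orders: list = field(default_factory=list)

    def max_order(self):
        # An empty area is treated as having an infinite maximum,
        # as the description does for the first order under a channel.
        return self.orders[-1] if self.orders else float("inf")

@dataclass
class PrimaryAddressArea:
    """Up to 15 (secondary area, max order number) slots plus one link
    to the next primary address management area in the chain."""
    secondary: list = field(default_factory=list)
    next_area: object = None

# FPGA-internal address management: one unique index (the order-placement
# channel number) mapping to a primary address management area.
internal_mgmt = {}
```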
When a pre-embedded order is received, whether a preset primary address management area address corresponding to the order-placement channel number exists in the preset FPGA-internal address management is queried according to the channel number corresponding to the order; the order-placement channel number corresponding to the pre-embedded order is also referred to as the current index. Given the index of an order, all order numbers under that index can be queried, as can all order numbers smaller than a given order number. Given the index, all order numbers under it can also be deleted, after which a query for orders of that index directly returns none. The corresponding primary address management area address can thus be queried simply and efficiently. If no preset primary address management area address corresponding to the channel number exists in the preset FPGA-internal address management, a primary address management area and a secondary address management area are created to store the information of the pre-embedded order; that is, no pre-embedded order is yet attached, which indicates that this is the first order under the channel, and the three storage areas must be applied for on its behalf. In addition, when the front end deletes a pre-embedded order that is the last one under its index, the data in the corresponding index of the FPGA-internal address management must be cleared; here the data means the stored data corresponding to the pre-embedded order, all of which is removed so that the order can no longer be found by a query.
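The lookup-or-create behavior of step S11 can be sketched as follows, with the FPGA-internal address management modeled as a plain dictionary; `new_primary_area` and the dict layout are hypothetical names used only for illustration.

```python
def new_primary_area():
    # One fresh (empty) secondary data storage area plus the link slot
    # to the next primary area; a real area would hold address words.
    return {"secondary": [[]], "next": None}

def lookup_or_create(internal_mgmt, channel_no):
    """Step S11: query the FPGA-internal address management for the
    channel's primary address management area; on a miss (first order
    under this channel) the storage areas are newly created."""
    if channel_no not in internal_mgmt:
        internal_mgmt[channel_no] = new_primary_area()
    return internal_mgmt[channel_no]
```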
Step S12: if it exists, read the preset primary address management area according to the preset primary address management area address, obtain the maximum order number corresponding to each preset secondary data storage area, and determine the insertion position of the pre-embedded order based on the magnitude relation between the maximum order numbers and the order number of the pre-embedded order.
In this embodiment, if a preset primary address management area address corresponding to the order-placement channel number exists in the preset FPGA-internal address management, corresponding pre-embedded orders already exist under the current channel number; in that case the new pre-embedded order must be attached under the channel, forming a tree structure with the previous orders. If no pre-embedded order is attached, an address must be applied for anew as the root of the tree. The preset primary address management area is then read according to its address, and the maximum order number corresponding to each preset secondary data storage area is obtained. The order number of the current index's pre-embedded order is then compared with the maximum order number of each secondary data storage area recorded in the primary storage area (the data format stored in the primary storage area is the secondary data storage area address together with that area's maximum order number). If the order number of the pre-embedded order is smaller than a maximum order number, the insertion position can be found quickly: the preset secondary data storage area corresponding to that maximum order number is determined as the insertion position of the pre-embedded order.
Step S13: query a preset secondary data storage area address in the preset primary address management area using the insertion position, and read the preset secondary data storage area based on that address to obtain each target order number of the stored pre-embedded orders.
In this embodiment, the insertion position is used to query the secondary data storage area address (if this is the first pre-embedded order under the channel, the maximum order number of the corresponding secondary data storage area is treated as infinite); the data storage format is shown in FIG. 2.
However, as shown in FIG. 3, once 16 pre-embedded orders have been stored in a secondary data storage area, a new secondary address management area must be applied for, and its address stored in the current primary address management area. Likewise, once 15 secondary address management areas have been stored in a primary address management area, a further primary address management area must be applied for, and the address of the newly applied area stored in the first address of the previous primary address management area, forming an address link that allows a convenient jump to the next primary address management area.
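The overflow rule (a new secondary area once 16 orders are stored, a new chained primary area once 15 secondary slots are used) can be sketched as follows; the dict fields are illustrative stand-ins for the address words mentioned in the description, with the "next" field playing the role of the first address of the previous area.

```python
PRIMARY_SLOTS = 15  # secondary-area slots per primary address management area

def apply_secondary_area(primary):
    """Open a new (empty) secondary data storage area; when the primary
    area already tracks 15 of them, chain a further primary area through
    its link slot and recurse into it."""
    if len(primary["secondary"]) >= PRIMARY_SLOTS:
        if primary["next"] is None:
            primary["next"] = {"secondary": [], "next": None}
        return apply_secondary_area(primary["next"])
    primary["secondary"].append([])
    return primary["secondary"][-1]
```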
Finally, the preset secondary data storage area is read according to the preset secondary data storage area address queried from the primary address management area, so as to obtain each target order number of the stored pre-embedded orders.
Step S14: determine the insertion address of the pre-embedded order based on the ascending order of the target order numbers, and store the pre-embedded order into the preset secondary data storage area according to that insertion address.
In this embodiment, after each target order number of the stored pre-embedded orders has been obtained, it is determined at which address of the current secondary data storage area the order number of the current index's pre-embedded order should be inserted, the order numbers being stored sequentially (arranged from small to large) in the secondary data storage area. The insertion address of the pre-embedded order is determined based on the ascending order of the target order numbers, and the order is stored into the preset secondary data storage area accordingly. However, if inserting the current order would bring the area to 16 occupied addresses, another secondary data storage area must be applied for: the first 8 order numbers are kept in the previous secondary data storage area and the last 8 are stored in the new one. Because the order numbers of pre-embedded orders are irregular, space must be reserved so that subsequent orders can be written at any position, avoiding overly frequent applications. For example, suppose the pre-embedded orders of channel 0 arrive with order numbers 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18. The first 8 order numbers (1 to 10) are kept in the previous secondary data storage area and the last 8 (11 to 18) are stored in the next one. Order number 9 can then be written directly into the first area, so no large amount of extra logic has to be introduced for a failed insertion, and no resources are wasted.
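The 8/8 split can be sketched as below, reproducing the channel-0 example from the paragraph above; this is a software model of the bookkeeping only, and `insert_order` is an illustrative name rather than anything from the patent.

```python
import bisect

AREA_CAPACITY = 16  # addresses per secondary data storage area

def insert_order(area, order_no):
    """Insert an order number at its ascending position; if the area
    would then occupy all 16 addresses, split it 8/8 and return the new
    area holding the larger half, leaving headroom in both halves for
    later out-of-order arrivals (order numbers are irregular)."""
    bisect.insort(area, order_no)
    if len(area) == AREA_CAPACITY:
        half = AREA_CAPACITY // 2
        new_area = area[half:]
        del area[half:]
        return new_area
    return None

# The example from the text: channel 0 receives order numbers 1..7 and
# 10..18; the 16th insertion triggers the split, after which order
# number 9 slots straight into the first area.
area, spill = [], None
for n in [1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18]:
    result = insert_order(area, n)
    if result is not None:
        spill = result
```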
In addition, when only one pre-embedded order remains in the current secondary data storage area and the front end needs to delete it, the current secondary address management area is directly recovered into a preset address cache pool, where it enters the queue and is used at the next application. If, when the front end deletes an order, the current secondary data storage area holds several pre-embedded orders, the position of the order is located, the order is removed, and the remaining contents of the area are rearranged and written back into the secondary data storage area. Thus, in addition to the parallel-processing characteristics of the FPGA, the characteristics of this algorithm further improve the processing speed of order information in a high-frequency trading system; the address cache pool makes the space more flexible, realizing dynamic address allocation, supporting as many applications as needed, and releasing space. The application is also applicable to other storage-processing scenarios in related fields.
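The two deletion cases (recycle the whole area's address when only one order remains, otherwise remove the order and rewrite the rest) might look like this in a software model; the `addr` and `orders` fields are assumptions of the sketch.

```python
def delete_order(area, order_no, cache_pool):
    """Front-end delete: if the area holds only this one pre-embedded
    order, its address is recovered into the address cache pool for the
    next application; otherwise the order is removed and the remaining
    entries are rewritten (they stay in ascending order)."""
    if area["orders"] == [order_no]:
        cache_pool.append(area["addr"])  # recycle the area's address
        area["orders"] = []
        return True                      # area released
    area["orders"] = [n for n in area["orders"] if n != order_no]
    return False
```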
Based on the above embodiment, after 15 secondary address management area addresses have been stored in a primary address management area, a new primary address management area must be applied for, and the address of the newly applied primary address management area is stored in the first address of the previous primary address management area to form an address link, so that the next primary address management area can be conveniently reached. Next, the scenario in which the front end deletes a pre-embedded order when only one secondary address management area remains in the primary address management area, and only one pre-embedded order remains in that secondary address management area, is described.
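The 15-entry chaining rule can be sketched as follows, with the record layout (`addr`, `next`, `slots`) as an assumed stand-in for the real address words:

```python
PRIMARY_CAPACITY = 15  # secondary-area addresses per primary area (from the text)

def add_secondary_address(chain, sec_addr, alloc_primary):
    """chain: list of primary-area records, each {'addr': own address,
    'next': linked address or None, 'slots': stored secondary addresses}.
    alloc_primary applies for a fresh primary area record."""
    tail = chain[-1]
    if len(tail['slots']) == PRIMARY_CAPACITY:
        new_area = alloc_primary()
        # store the new area's address in the first address of the previous
        # primary area, forming the address link to the next area
        tail['next'] = new_area['addr']
        chain.append(new_area)
        tail = new_area
    tail['slots'].append(sec_addr)
```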
First, the current primary address management area has no next-link primary address management area:
That is, if no next-link primary address management area exists for the preset primary address management area, then when the front end deletes the pre-embedded order, the preset primary address management area address is recovered to the preset address cache pool, so that it can be reused when the front end next applies for a primary address management area address. The address cache pool in the application supports two functions: applying for addresses and returning addresses. The pool is built from an FPGA internal storage resource and an FPGA external storage resource. At initialization, 8 addresses are applied for and placed in the pool; whenever the pool has no data, another 8 addresses are applied for and stored in it, with each application obtaining 8 addresses. If the FPGA internal storage resource has no data but the FPGA external storage resource does, addresses must be moved from the external resource to the internal resource, 8 addresses per move. When the front end applies for an address, the address is read directly from the address cache pool and sent out. When the front end returns an address, the address is placed in the address cache pool to queue, according to the following rules. First rule: when the FPGA external storage resource has no data and the FPGA internal storage resource is not full, the address is stored directly in the internal storage resource. Second rule: once the FPGA internal storage resource is full, addresses are stored in the FPGA external storage resource.
Third rule: when data exists in the FPGA external storage resource, the address must be stored in the external storage resource even if the internal storage resource is not full, so that the queuing order is preserved. Because every recovered address re-enters the address cache pool, the space can be fully utilized and none of it is wasted.
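Taken together, the behaviour of the address cache pool described above can be sketched roughly as follows. The internal capacity of 8 is an assumption (the text fixes only the batch size of 8); everything else follows the rules stated above:

```python
from collections import deque

BATCH = 8  # addresses per application or per internal/external move (from the text)

class AddressCachePool:
    """Sketch of the two-tier address cache pool; not the FPGA implementation."""
    def __init__(self, backing_alloc, internal_capacity=8):
        self.internal = deque()      # FPGA internal storage resource
        self.external = deque()      # FPGA external storage resource
        self.alloc = backing_alloc   # applies for BATCH fresh addresses
        self.cap = internal_capacity # assumed capacity of the internal resource
        self.internal.extend(self.alloc(BATCH))  # 8 addresses at initialization

    def apply(self):
        """Front end applies for an address: read it straight from the pool."""
        if not self.internal:
            if self.external:
                # move 8 addresses from external to internal storage
                for _ in range(min(BATCH, len(self.external))):
                    self.internal.append(self.external.popleft())
            else:
                self.internal.extend(self.alloc(BATCH))  # apply for 8 more
        return self.internal.popleft()

    def recover(self, addr):
        """Front end returns an address: queue it under the three rules."""
        if self.external or len(self.internal) >= self.cap:
            # second and third rules: internal full, or external already
            # holds data, so the address goes to external storage
            self.external.append(addr)
        else:
            # first rule: external empty and internal not full
            self.internal.append(addr)
```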
Second, the current primary address management area has a next link but no previous link, and its address is the read address held by the FPGA internal address management:
That is, if the preset primary address management area has a next-link primary address management area but no previous-link primary address management area, then when the front end deletes the pre-embedded order, the preset primary address management area address is recovered to the preset address cache pool, so that it can be reused when the front end applies for a primary address management area address, and the address of the next-link primary address management area is written into the preset FPGA internal address management. In other words, the current primary address management area is reclaimed, and the next link's address replaces it in the FPGA internal address management.
Third, the current primary address management area has both a next link and a previous link, and its address is not the read address held by the FPGA internal address management:
That is, if the preset primary address management area has both a next-link primary address management area and a previous-link primary address management area, and its address is not the one read from the preset FPGA internal address management, then when the front end deletes the pre-embedded order, the preset primary address management area address is recovered to the preset address cache pool, so that it can be reused when the front end applies for a primary address management area address, and the address of the next-link primary address management area is written into the first address bit of the previous-link primary address management area, so as to eliminate the invalid link. In other words, the current primary address management area is reclaimed and the previous link is pointed directly at the next link, which removes the invalid link from the chain.
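The three deletion cases can be sketched as one unlink routine over a doubly linked record structure; the record fields are hypothetical and only the pointer updates mirror the text:

```python
def unlink_primary_area(area, head_addr, cache_pool):
    """area: {'addr', 'next', 'prev'} where 'next'/'prev' are neighbouring
    area records or None; head_addr is the address held by the FPGA internal
    address management. Returns the new head address."""
    cache_pool.append(area['addr'])  # every case recovers the area's address
    if area['prev'] is None:
        # cases 1 and 2: no previous link. The internal address management
        # now points at the next link's address (None when there is none).
        return area['next']['addr'] if area['next'] else None
    # case 3: write the next link's address into the first address bit of
    # the previous area, eliminating the invalid link; head is unchanged
    area['prev']['next'] = area['next']
    return head_addr
```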
In this way, the space of the address cache pool is more flexible: addresses are allocated dynamically and on demand, space is released promptly after use, the utilization rate of the storage space is greatly increased, and the consumption of FPGA internal resources is greatly reduced.
Referring to fig. 4, an embodiment of the present invention discloses a pre-buried list storage apparatus, including:
the address inquiry module 11 is configured to inquire, when a pre-embedded order is received, whether a preset primary address management area address corresponding to the order channel number exists in the preset FPGA internal address management, according to the order channel number corresponding to the pre-embedded order;
the insertion position determining module 12 is configured to, if the address exists, read the preset primary address management area according to the preset primary address management area address, obtain the maximum order number of each corresponding preset secondary data storage area, and determine the insertion position of the pre-embedded order based on the magnitude relation between the maximum order numbers and the order number of the pre-embedded order;
the order number obtaining module 13 is configured to query a preset secondary data storage area address in the preset primary address management area using the insertion position, and read the preset secondary data storage area based on that address, so as to obtain each target order number of the stored pre-embedded orders;
the pre-embedded order storage module 14 is configured to determine the address to be inserted of the pre-embedded order according to the ascending order of the target order numbers, and store the pre-embedded order into the preset secondary data storage area at that address.
When a pre-embedded order is stored, first, upon receiving the pre-embedded order, the preset FPGA internal address management is queried, according to the order channel number corresponding to the pre-embedded order, for whether a preset primary address management area address corresponding to that channel number exists. If it exists, the preset primary address management area is read according to that address, the maximum order number of each corresponding preset secondary data storage area is obtained, and the insertion position of the pre-embedded order is determined from the magnitude relation between those maximum order numbers and the order number of the pre-embedded order. Then, the preset secondary data storage area address is looked up in the preset primary address management area using the insertion position, and the preset secondary data storage area is read from that address so as to obtain each target order number of the stored pre-embedded orders. Finally, the address at which to insert the pre-embedded order is determined from the ascending order of the target order numbers, and the pre-embedded order is stored into the preset secondary data storage area at that address. In this way, the application performs dynamic address allocation for the channel number of the pre-embedded order through the FPGA internal address management, the primary address management area, and the secondary data storage area, which greatly increases the utilization rate of the storage space; each index is dynamically allocated, which greatly reduces the consumption of FPGA internal resources and the storage space required, and further improves the processing speed of order information. Orders are sorted from small to large, which makes it convenient for the FPGA to process the pre-embedded orders in sequence.
In some specific embodiments, the device is further configured to newly build a primary address management area and a secondary address management area to store the information of the pre-embedded order if no preset primary address management area address corresponding to the order channel number exists in the preset FPGA internal address management.
In some specific embodiments, the insertion position determining module 12 may be specifically configured to determine, as the insertion position of the pre-embedded order, the preset secondary data storage area corresponding to the maximum order number if the order number of the pre-embedded order is smaller than that maximum order number.
In some specific embodiments, the device is further configured to apply for a new primary address management area if the number of preset secondary address management area addresses stored in the preset primary address management area reaches a first number threshold, and to store the new primary address management area address corresponding to the new primary address management area into the preset primary address management area; and to apply for a new secondary address management area if the number of pre-embedded orders stored in the preset secondary address management area reaches a second number threshold, and to store the new secondary address management area address corresponding to the new secondary address management area into the current primary address management area.
In some specific embodiments, the device is further configured to handle deletion as follows. If no next-link primary address management area exists for the preset primary address management area, the preset primary address management area address is recovered to a preset address cache pool when the front end deletes the pre-embedded order, so that it can be reused when the front end applies for a primary address management area address. If the preset primary address management area has a next-link primary address management area but no previous-link one, the preset primary address management area address is likewise recovered to the preset address cache pool on deletion, and the address of the next-link primary address management area is written into the preset FPGA internal address management. If the preset primary address management area has both a next-link and a previous-link primary address management area, and its address is not the one read from the preset FPGA internal address management, the preset primary address management area address is again recovered to the preset address cache pool on deletion, and the address of the next-link primary address management area is written into the first address bit corresponding to the previous-link primary address management area, so as to eliminate the invalid link.
In some specific embodiments, the device is further configured to apply for a new secondary data storage area if the number of pre-embedded orders in the preset secondary data storage area reaches a third number threshold, and to store the order numbers meeting a preset condition in the preset secondary data storage area into the new secondary data storage area.
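A rough sketch of this split operation follows, assuming (since the text does not specify the preset condition) that the larger half of the order numbers moves to the new area:

```python
def split_secondary_area(orders, threshold, alloc_area):
    """Split a full secondary data storage area.

    When the number of stored orders reaches the threshold, apply for a new
    secondary data storage area and move the orders meeting the (assumed)
    condition there. Returns (kept_orders, new_area_or_None).
    """
    if len(orders) < threshold:
        return orders, None
    mid = len(orders) // 2
    new_area = alloc_area()          # apply for a new secondary data storage area
    new_area.extend(orders[mid:])    # assumed condition: larger order numbers move
    return orders[:mid], new_area
```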
In some specific embodiments, the device is further configured to, when the front end deletes the pre-embedded order, recover the preset secondary data storage area address to a preset address cache pool if only one pre-embedded order exists in the preset secondary data storage area, so that the preset secondary address management area address can be reused when the front end applies for a secondary address management area address; and, if a plurality of pre-embedded orders exist in the preset secondary data storage area, to delete the pre-embedded order at its address, rearrange the remaining pre-embedded orders, and write the rearranged pre-embedded orders back into the preset secondary data storage area.
Further, an embodiment of the present application also discloses an electronic device. Fig. 5 is a block diagram of an electronic device 20 according to an exemplary embodiment, and the content of the figure should not be taken as any limitation on the scope of the present application.
Fig. 5 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. The memory 22 is configured to store a computer program, where the computer program is loaded and executed by the processor 21 to implement relevant steps in the pre-buried list storage method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is used for acquiring external input data or outputting data externally, and its specific interface type may be selected according to the application requirement, which is not limited herein.
The memory 22, as a carrier for storing resources, may be a read-only memory, a random access memory, a magnetic disk, or an optical disk; the resources stored on it may include an operating system 221, a computer program 222, and the like, and the storage may be temporary or permanent.
The operating system 221 is used for managing and controlling the hardware devices on the electronic device 20 and the computer program 222, and may be Windows Server, NetWare, Unix, Linux, etc. The computer program 222 may include, in addition to the computer program for performing the pre-embedded order storage method performed by the electronic device 20 as disclosed in any of the previous embodiments, computer programs for performing other specific tasks.
Further, the present application also discloses a computer-readable storage medium for storing a computer program; the computer program, when executed by a processor, implements the pre-embedded order storage method described above. For the specific steps of the method, reference may be made to the corresponding content disclosed in the foregoing embodiments, which is not repeated here.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The method, apparatus, device, and storage medium for storing a pre-embedded order provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present application, and the description of the above embodiments is only intended to help in understanding the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and scope of application in accordance with the ideas of the present application; in summary, the content of this description should not be construed as limiting the present application.

Claims (10)

1. A pre-embedded order storage method, characterized by comprising the following steps:
when a pre-embedded order is received, inquiring, according to the order channel number corresponding to the pre-embedded order, whether a preset primary address management area address corresponding to the order channel number exists in preset FPGA internal address management;
if the address exists, reading a preset primary address management area according to the preset primary address management area address, obtaining the maximum order number of each corresponding preset secondary data storage area, and determining the insertion position of the pre-embedded order based on the magnitude relation between the maximum order number and the order number of the pre-embedded order;
inquiring a preset secondary data storage area address in the preset primary address management area by using the insertion position, and reading the preset secondary data storage area based on the preset secondary data storage area address, so as to obtain each target order number of the stored pre-embedded orders;
and determining the address to be inserted of the pre-embedded order based on the magnitude order of each target order number, and storing the pre-embedded order into the preset secondary data storage area according to the address to be inserted.
2. The method for storing an embedded bill according to claim 1, wherein after the step of querying whether the preset primary address management area address corresponding to the lower channel number exists in the preset FPGA internal address management according to the lower channel number corresponding to the embedded bill, the method further comprises:
And if the preset primary address management area address corresponding to the next channel number does not exist in the preset FPGA internal address management, a primary address management area and a secondary address management area are newly built to store the information of the embedded list.
3. The method for storing an embedded bill according to claim 1, wherein determining the insertion position of the embedded bill based on the magnitude relation between the maximum order number and the order number of the embedded bill comprises:
And if the order number of the embedded list is smaller than the maximum order number, determining a preset secondary data storage area corresponding to the maximum order number as an insertion position of the embedded list.
4. The pre-buried list storage method of claim 1, further comprising:
if the number of the preset primary address management area storage preset secondary address management area addresses reaches a first number threshold, applying for a new primary address management area, and storing the new primary address management area address corresponding to the new primary address management area into the preset primary address management area;
And if the number of the preset secondary address management area storage embedded sheets reaches a second number threshold, applying for a new secondary address management area, and storing the new secondary address management area address corresponding to the new secondary address management area to the current primary address management area.
5. The pre-buried list storage method of claim 1, further comprising:
if the primary address management area of the next link does not exist in the preset primary address management area, recovering the preset primary address management area address to a preset address cache pool when the front end deletes the pre-embedded order, so that the preset primary address management area address is reused when the front end applies for a primary address management area address;
If the preset primary address management area has a primary address management area of a next link and does not have a primary address management area of an upper link, recovering the preset primary address management area address to the preset address cache pool when the front end deletes the embedded list, so that the preset primary address management area address is reused when the front end applies for the primary address management area address, and the primary address management area address of the next link is written into the preset FPGA internal address management;
If the preset primary address management area has the primary address management area of the next link and the primary address management area of the previous link, and the primary address management area of the previous link is the address read out from the non-preset FPGA internal address management, when the front end deletes the pre-embedded list, the preset primary address management area address is recovered to the preset address cache pool, so that the preset primary address management area address is reused when the front end applies for the primary address management area address, and the primary address management area address of the next link is written into the first address bit corresponding to the primary address management area of the previous link, so that the invalid link is eliminated.
6. The pre-buried list storing method according to claim 1, wherein after said storing said pre-buried list in said preset secondary data storing area according to said address to be inserted, further comprising:
and if the number of the embedded orders in the preset secondary data storage area reaches a third number threshold, reapplying a new secondary data storage area, and storing the order numbers meeting preset conditions in the preset secondary data storage area into the new secondary data storage area.
7. The pre-buried list storage method according to any one of claims 1 to 6, further comprising:
When the front end deletes the embedded list, if one embedded list exists in the preset secondary data storage area, the preset secondary data storage area address is recovered to a preset address cache pool, so that the preset secondary address management area address is reused when the front end applies for the secondary address management area address;
And when the front end deletes the embedded sheets, if a plurality of embedded sheets exist in the preset secondary data storage area, deleting the embedded sheets in the preset secondary data storage area according to the address to be inserted, rearranging the residual embedded sheets in the preset secondary data storage area, and writing the corresponding arranged embedded sheets into the preset secondary data storage area.
8. An embedded single storage device, comprising:
The address inquiry module is used for inquiring whether a preset primary address management area address corresponding to a lower single channel number exists in preset FPGA internal address management according to the lower single channel number corresponding to the pre-buried single when the pre-buried single is received;
The inserting position determining module is used for reading the preset primary address management area according to the preset primary address management area address if the preset primary address management area address exists, obtaining the corresponding maximum order number of each preset secondary data storage area, and determining the inserting position of the embedded list based on the size relation between the maximum order number and the order number of the embedded list;
the order number acquisition module is used for inquiring a preset secondary data storage area address in the preset primary address management area by utilizing the insertion position, and reading the preset secondary data storage area based on the preset secondary data storage area address so as to acquire each target order number of the stored pre-buried order;
The embedded list storage module is used for determining the address to be inserted of the embedded list based on the size sequence of each target order number, and storing the embedded list into the preset secondary data storage area according to the address to be inserted.
9. An electronic device, comprising:
a memory for storing a computer program;
A processor for executing the computer program to implement the pre-buried list storage method as claimed in any one of claims 1 to 7.
10. A computer readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the pre-buried single storage method of any of claims 1 to 7.
CN202411437804.8A 2024-10-15 2024-10-15 Pre-buried list storage method, device, equipment and storage medium Active CN118964226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411437804.8A CN118964226B (en) 2024-10-15 2024-10-15 Pre-buried list storage method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN118964226A true CN118964226A (en) 2024-11-15
CN118964226B CN118964226B (en) 2025-05-30

Family

ID=93384090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411437804.8A Active CN118964226B (en) 2024-10-15 2024-10-15 Pre-buried list storage method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118964226B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016149051A (en) * 2015-02-13 2016-08-18 富士通株式会社 Storage control device, storage control program, and storage control method
CN111241108A (en) * 2020-01-16 2020-06-05 北京百度网讯科技有限公司 Indexing method, device, electronic device and medium based on key-value pair KV system
CN111813709A (en) * 2020-07-21 2020-10-23 北京计算机技术及应用研究所 High-speed parallel storage method based on FPGA (field programmable Gate array) storage and calculation integrated framework
WO2021189977A1 (en) * 2020-08-31 2021-09-30 平安科技(深圳)有限公司 Address coding method and apparatus, and computer device and computer-readable storage medium
CN118276779A (en) * 2024-03-31 2024-07-02 山东云海国创云计算装备产业创新中心有限公司 Management control method, device, equipment and medium of nonvolatile memory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谷文彦;李俊;潘昌森;: "一种面向地震数据的两级索引", 微型机与应用, no. 18, 25 September 2015 (2015-09-25) *

Also Published As

Publication number Publication date
CN118964226B (en) 2025-05-30

Similar Documents

Publication Publication Date Title
US11042477B2 (en) Memory management using segregated free lists
KR20170123336A (en) File manipulation method and apparatus
EP3220275A1 (en) Array controller, solid state disk and data writing control method for solid state disk
CN109614377A (en) File deletion method, device, device and storage medium of distributed file system
CN107632791A (en) The distribution method and system of a kind of memory space
US20190220443A1 (en) Method, apparatus, and computer program product for indexing a file
CN107256196A (en) The caching system and method for support zero-copy based on flash array
CN101094129A (en) Method for accessing domain name, and client terminal
CN114327917A (en) Memory management method, computing device and readable storage medium
CN104933051B (en) File storage recovery method and device
CN113805816B (en) Disk space management method, device, equipment and storage medium
CN111177032A (en) Cache space application method, system, device and computer readable storage medium
CN118964226A (en) A method, device, equipment and storage medium for storing a pre-buried order
CN112817766B (en) Memory management method, electronic device and medium
CN117591445A (en) Table item updating method, device, equipment and storage medium
CN110781101A (en) One-to-many mapping relation storage method and device, electronic equipment and medium
CN115907949A (en) Bank transaction data processing method and device
US11132129B2 (en) Methods for minimizing fragmentation in SSD within a storage system and devices thereof
CN109976672B (en) Read-write conflict optimization method and device, electronic equipment and readable storage medium
KR102053406B1 (en) Data storage device and operating method thereof
CN113687962A (en) A request processing method, apparatus, device and storage medium
CN112162937A (en) Data recovery method and device for memory chip, computer equipment and storage medium
CN116594808B (en) Database rollback resource processing method, device, computer equipment and medium
CN111679909A (en) Data processing method and device and terminal equipment
JP2994138B2 (en) Catalog Variable Management Method for Interactive Processing System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant