
CN114218267A - Query request asynchronous processing method and device, computer equipment and storage medium - Google Patents

Query request asynchronous processing method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN114218267A
Authority
CN
China
Prior art keywords: query, key-value pair, task, cache
Legal status: Granted
Application number: CN202111407680.5A
Other languages: Chinese (zh)
Other versions: CN114218267B (en)
Inventor: 吴松圃
Current Assignee: CCB Finetech Co Ltd
Original Assignee: CCB Finetech Co Ltd
Application filed by CCB Finetech Co Ltd
Priority to CN202111407680.5A
Publication of CN114218267A
Application granted; publication of CN114218267B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a query request asynchronous processing method and apparatus, computer equipment, and a storage medium. The method includes: receiving a query request sent by a query end, generating a query task based on the query request, assigning a task number, and writing the query task into a query queue; using distributed execution nodes to obtain the current query task from the query queue and initiate a query to a cache library based on the current query task; querying, according to the structured query language of the current query task, whether a target key-value pair exists among the first key-value pairs in the cache library, to obtain a query result; and generating a corresponding second key-value pair for the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read. The present disclosure makes the response to a query task more efficient, reduces the waiting time of the query end, and offers wider adaptability.

Figure 202111407680

Description

Query request asynchronous processing method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of data query technologies, and in particular, to an asynchronous query request processing method, an asynchronous query request processing device, a computer device, and a storage medium.
Background
With the maturity of various software and hardware service technologies and the development of the big data era, both the volume of information data and the complexity of information queries have increased greatly. At present, query requests from users are usually answered through background logic and a database. Under high concurrency, however, the database may become blocked, query requests cannot be answered in time, users must keep waiting for results to return, and in severe cases the database cannot be connected to or even goes down.
In the conventional technology, the database capacity is often expanded, the database's own caching mechanisms are enabled, or the service framework is changed so that some slow-query requests are returned asynchronously. However, these approaches have the following problems: the database's built-in caching mechanism is limited by the single-machine memory of the server, which is a significant constraint; and traditional asynchronous query solutions are not general-purpose, since each service must be modified individually, making the changes cumbersome.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a query request asynchronous processing method, an asynchronous query request processing apparatus, a computer device, a storage medium, and a computer program product that can improve the response efficiency of query requests with little or no change to the original system service architecture.
In a first aspect, the present disclosure provides a method for asynchronously processing a query request. The method comprises the following steps:
receiving a query request sent by a query end, generating a query task based on the query request, assigning a task number, and writing the query task into a query queue;
acquiring a current query task from the query queue by using a distributed execution node, and initiating a query to a cache library based on the current query task; the cache library comprises a plurality of cache nodes and is used for storing first key-value pairs of historical query tasks, wherein a first key-value pair is a key-value pair formed from the structured query language of a query task and its query result set;
inquiring whether a first key value pair in the cache library has a target key value pair according to the structured query language of the current query task to obtain a query result; the target key-value pair is a first key-value pair which has the same structured query language as the current query task in the cache library;
correspondingly generating a second key-value pair for the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read; wherein a second key-value pair is a key-value pair formed from the task number of a query task and its query result set.
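As a minimal illustration of the two kinds of key-value pairs described above (a sketch for explanation only, not part of the patent text; the dictionary layout and names are assumptions), the first key-value pair is keyed by the SQL text and the second by the task number:

# Hypothetical sketch of the two cache entries described above.
# Keys and value layout are illustrative assumptions, not the patent's exact format.

sql_text = "select col1, col2 from table1 where condition"
result_set = [{"col1": 1, "col2": "a"}, {"col1": 2, "col2": "b"}]
task_number = "task-000123"

# First key-value pair: structured query language -> query result set
first_kv = {sql_text: result_set}

# Second key-value pair: task number -> query result set
second_kv = {task_number: result_set}

# A later task with the same SQL reuses the cached result set,
# and the query end fetches its result by task number.
assert first_kv[sql_text] is result_set
assert second_kv[task_number] is result_set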
In one embodiment, the acquiring a current query task from the query queue by using a distributed execution node and initiating a query to a cache library based on the current query task, wherein the cache library comprises a plurality of cache nodes and is used for storing first key-value pairs of historical query tasks, each first key-value pair being a key-value pair formed from the structured query language of a query task and its query result set, includes:
based on the query task, the distributed execution nodes acquire the corresponding query task according to a preset rule.
In one embodiment, the generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read includes:
and under the condition that the query result is that the target key-value pair exists in the cache library, generating a second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read.
In one embodiment, if the query result is that the target key-value pair exists in the cache library, then generating a second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read includes:
acquiring a query result set of the target key-value pair, and generating a second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
performing data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read includes:
and under the condition that the query result is that the target key value pair does not exist in the cache library, querying a database to obtain a query result set of the current query task, generating a first key value pair and a second key value pair of the current query task, and storing the first key value pair and the second key value pair to the cache library for the query end to read.
In one embodiment, if the query result is that the target key-value pair does not exist in the cache library, the querying a database to obtain a query result set of the current query task, generating a first key-value pair and a second key-value pair of the current query task, and storing the first key-value pair and the second key-value pair in the cache library for the query end to read includes:
initiating query to a database according to the structured query language of the current query task, and receiving a query result set of the current query task returned by the database;
generating a first key value pair according to the structured query language and the query result set of the current query task;
generating a second key value pair according to the task number of the current query task and the query result set;
respectively carrying out data splitting on the first key value pair and the second key value pair of the current query task based on a Hash algorithm to generate a plurality of data fragments;
and respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the storing the data fragments on a plurality of cache nodes of the cache library respectively includes:
calculating node numbers for the data fragments according to a preset algorithm, wherein the node numbers correspond one-to-one to the cache nodes;
and storing the data fragments to corresponding cache nodes according to the node numbers.
In one embodiment, the storing the data fragments on a plurality of cache nodes of the cache library respectively further includes:
and generating a fragment copy of the data fragment, and storing the fragment copy to a cache node which does not correspond to the node number of the data fragment.
In one embodiment, the providing for the query end to read includes:
and providing a service interface for the query end to query a corresponding query result set in the cache library based on the task number of the query task.
In one embodiment, the providing for the query end to read includes:
and the distributed execution node sends the acquired query result set of the current query task to the query end.
In a second aspect, the present disclosure also provides an asynchronous processing apparatus for query requests. The device comprises:
the query queue module is used for receiving a query request sent by a query end, generating a query task based on the query request, assigning a task number, and writing the query task into a query queue;
the distributed execution node module is used for acquiring a current query task from the query queue by using the distributed execution node and initiating a query to a cache library based on the current query task; the cache library comprises a plurality of cache nodes, and is used for storing a first key-value pair comprising a historical query task, wherein the first key-value pair is a structured query language of the query task and a key-value pair of a query result set;
the cache query module is used for querying whether a target key value pair exists in a first key value pair in the cache library according to the structured query language of the current query task to obtain a query result; the target key-value pair is a first key-value pair which has the same structured query language as the current query task in the cache library;
and the key value pair generating module is used for correspondingly generating a second key value pair of the current query task according to the query result and the task number of the current query task, and storing the second key value pair in the cache library for the query end to read.
In one embodiment, the distributed execution node module includes:
and the acquisition unit is used for acquiring the corresponding query task by the distributed execution node according to a preset rule based on the query task.
In one embodiment, the key-value pair generating module is configured to, when the query result is that the target key-value pair exists in the cache library, generate a second key-value pair according to the target key-value pair and the task number of the current query task, and store the second key-value pair in the cache library for the query end to read.
In one embodiment, the key-value pair generating module includes:
the second key-value pair unit is used for acquiring a query result set of the target key-value pair and generating the second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
the data fragmentation unit is used for performing data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and the cache node unit is used for respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the key-value pair generating module is configured to, if the query result is that the target key-value pair does not exist in the cache library, query a database to obtain a query result set of the current query task, generate a first key-value pair and a second key-value pair of the current query task, and store the first key-value pair and the second key-value pair in the cache library for the query end to read.
In one embodiment, the key-value pair generating module includes:
the database query unit is used for initiating query to a database according to the structured query language of the current query task and receiving a query result set of the current query task returned by the database;
the first key-value pair unit is used for generating a first key-value pair according to the structured query language and the query result set of the current query task;
the second key-value pair unit is used for generating a second key-value pair according to the task number of the current query task and the query result set;
the data fragmentation unit is used for respectively carrying out data fragmentation on the first key value pair and the second key value pair of the current query task based on a Hash algorithm to generate a plurality of data fragments;
and the cache node unit is used for respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the cache node unit includes:
the node numbering subunit is used for calculating node numbers for the data fragments according to a preset algorithm, wherein the node numbers correspond one-to-one to the cache nodes;
and the storage subunit is used for storing the data fragments to the corresponding cache nodes according to the node numbers.
In one embodiment, the cache node unit further includes:
and the copy subunit is used for generating a fragment copy of the data fragment and storing the fragment copy to a cache node which does not correspond to the node number of the data fragment.
In one embodiment, the apparatus further comprises:
and the service interface unit is used for providing a service interface for the inquiry end so that the inquiry end can inquire the corresponding inquiry result set in the cache library based on the task number of the inquiry task.
In one embodiment, the apparatus further comprises:
and the sending unit is used for indicating the distributed execution nodes to send the obtained query result set of the current query task to the query end.
In a third aspect, the present disclosure also provides a computer device. The computer equipment comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the query request asynchronous processing method when executing the computer program.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the above-described query request asynchronous processing method.
In a fifth aspect, the present disclosure also provides a computer program product. The computer program product comprises a computer program, and the computer program realizes the steps of the asynchronous processing method of the query request when being executed by a processor.
The query request asynchronous processing method, apparatus, computer equipment, storage medium, and computer program product described above have at least the following beneficial effects:
According to the present disclosure, query tasks are acquired and executed by distributed execution nodes, so that query tasks are answered more efficiently and the waiting time of the query end is reduced. Meanwhile, the cache library can comprise a plurality of cache nodes, breaking through the cache limit of a single server's memory. In addition, query requests are received through a connection with the query end, decoupled from the query end's service system, so no asynchronous query module needs to be added to or modified in the query end's original service system. The system can serve multiple query ends at the same time and only needs to receive the query requests they send, so it has wider adaptability, supports high concurrency and high availability well, and is easy to scale horizontally.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the conventional technologies of the present disclosure, the drawings used in the descriptions of the embodiments or the conventional technologies will be briefly introduced below, it is obvious that the drawings in the following descriptions are only some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of an application environment for a method for asynchronous processing of query requests in one embodiment;
FIG. 2 is a flow diagram that illustrates a method for asynchronous processing of query requests, according to one embodiment;
FIG. 3 is a flowchart illustrating the steps of obtaining a query result set in one embodiment;
FIG. 4 is another flowchart illustrating the steps of obtaining a query result set in one embodiment;
FIG. 5 is a flowchart illustrating steps for storing data slices in one embodiment;
FIG. 6 is another flow diagram illustrating the steps of storing data fragments in one embodiment;
FIG. 7 is a schematic diagram illustrating a data flow for the query end to read in one embodiment;
FIG. 8 is a block diagram of an apparatus for asynchronous processing of query requests in one embodiment;
FIG. 9 is a block diagram that illustrates the structure of distributed execution node modules in one embodiment;
FIG. 10 is a block diagram that illustrates the structure of a key-value pair generation module in one embodiment;
FIG. 11 is another block diagram of the key-value pair generation module in one embodiment;
FIG. 12 is a block diagram of a cache node unit in one embodiment;
FIG. 13 is a block diagram of another embodiment of a cache node unit;
FIG. 14 is another block diagram showing an example of the structure of an asynchronous query request processing device;
FIG. 15 is another block diagram showing an example of the structure of an asynchronous query request processing device;
FIG. 16 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein in the description of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. For example, if the terms first, second, etc. are used to denote names, they do not denote any particular order.
As used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises/comprising," "includes" or "including," etc., specify the presence of stated features, integers, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof. Also, in this specification, the term "and/or" includes any and all combinations of the associated listed items.
The query request asynchronous processing method provided by the embodiments of the present application can be applied in the application environment shown in fig. 1, in which the query end 102 communicates with the asynchronous query system 104 over a network. The asynchronous query server can be implemented as an independent server or as a server cluster composed of a plurality of servers. The data storage system may store the data that the asynchronous query server needs to process; it may be integrated on the asynchronous query server, or placed on the cloud or another network server. The cache library may store historical query data for the asynchronous query server; it may likewise be integrated on the asynchronous query server, or placed on the cloud or another network server, and it may also be a multi-node cache library.
In some embodiments of the present disclosure, as shown in fig. 2, a query request asynchronous processing method is provided, which is described by taking the method as an example applied to the asynchronous query system in fig. 1, and includes the following steps:
step S10: receiving a query request sent by a query end, generating a query task based on the query request, allocating a task number, and writing the query task into a query queue.
Specifically, in this embodiment, the query end generally refers to the device on the user side that sends the query request, for example a user terminal or another device that provides a service. After a query request initiated by the query end is received, a query task is created according to the query request, the created query task is assigned a unique task number, and finally the numbered query task is written into a query queue. The query queue generally refers to a linear table stored in contiguous storage space; in this embodiment two pointers are set to manage it. Query tasks are enqueued at the back end of the query queue and dequeued from the front end.
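A minimal sketch of step S10, assuming an in-memory FIFO queue and a simple counter-based task number (the class and field names are illustrative assumptions, not taken from the patent):

import itertools
from collections import deque
from dataclasses import dataclass

@dataclass
class QueryTask:
    task_number: str   # unique number assigned to the task
    sql: str           # structured query language carried by the request

class QueryQueueModule:
    """Receives query requests, assigns task numbers, and enqueues tasks (sketch)."""

    def __init__(self):
        self._queue = deque()                 # enqueue at the back, dequeue at the front
        self._counter = itertools.count(1)    # source of unique task numbers

    def receive_query_request(self, sql: str) -> str:
        task = QueryTask(task_number=f"task-{next(self._counter):06d}", sql=sql)
        self._queue.append(task)              # write the query task into the query queue
        return task.task_number               # returned to the query end immediately

    def pull(self) -> QueryTask:
        return self._queue.popleft()          # a distributed execution node pulls the head task

# Usage: the query end gets back only the task number and reads the result later.
queue = QueryQueueModule()
number = queue.receive_query_request("select col1, col2 from table1 where condition")
print(number)  # e.g. task-000001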
Step S20: acquiring a current query task from the query queue by using a distributed execution node, and initiating a query to a cache library based on the current query task; the cache library comprises a plurality of cache nodes, and is used for storing a first key-value pair comprising a historical query task, wherein the first key-value pair is a structured query language of the query task and a key-value pair of a query result set.
Specifically, the number of distributed execution nodes in this embodiment is greater than or equal to 2. A conventional single node may be a single physical machine that contains all the services and databases. In contrast, distributed execution nodes are generally understood to mean that several nodes collectively contain all the services and databases; each distributed execution node has its own processor and memory and can process data independently. In general the nodes are peers, with no master-slave relationship; they work autonomously and coordinate task processing by exchanging information over a shared communication line. Different distributed execution nodes can also each execute several subtasks, so as to complete one large overall task together.
In this embodiment, the query task is pulled from the query queue by a plurality of distributed execution nodes, and each distributed execution node initiates a query to the cache library based on its current query task. The cache library comprises a plurality of cache nodes, and the single cache can be expanded to a plurality of service devices. The cache library may be used to store a first key-value pair of a historical query task. A key-value pair generally refers to an organization of a data store, and a "value" corresponding to a "key" can be obtained by querying the "key". In this embodiment, the first key-value pair includes a structured query language and a query result set of the query task, that is, the structured query language of the query task is used as a "key" and the query result set of the query task is used as a "value".
The structured query language may be the full text of an SQL statement, for example:
select col1,col2 from table1 where condition.
or a JSON-format query language of a query engine, for example:
query=[{"queryType":"scan","dataSource":{"type":"table","name":"table2"},"intervals":{"type":"intervals","intervals":["2021-09-08T08:23:32.096Z/2021-09-18T15:36:27.903Z"]},"descending":false,"granularity":{"type":"all"}}]
When a subsequent query task carries the same structured query language, a distributed execution node can obtain directly from the cache library the value keyed by that structured query language, namely the query result set of the query task.
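A minimal sketch of this cache lookup, assuming the cache behaves like a dictionary keyed by the SQL text (the helper names are assumptions for illustration only):

# Hypothetical first-key-value-pair lookup: the SQL text is the key,
# the query result set is the value. Names are illustrative only.
cache_library = {}  # stand-in for the multi-node cache library

def lookup_first_kv(sql: str):
    """Return the cached result set for this SQL, or None if it is a new query."""
    return cache_library.get(sql)

def store_first_kv(sql: str, result_set) -> None:
    cache_library[sql] = result_set

# First execution: miss, so the database would be queried and the result cached.
sql = "select col1, col2 from table1 where condition"
if lookup_first_kv(sql) is None:
    result_set = [{"col1": 1, "col2": "a"}]       # pretend this came from the database
    store_first_kv(sql, result_set)

# A later task with the same SQL hits the cache directly.
assert lookup_first_kv(sql) == [{"col1": 1, "col2": "a"}]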
Step S30: inquiring whether a first key value pair in the cache library has a target key value pair according to the structured query language of the current query task to obtain a query result; the target key-value pair is a first key-value pair in the cache library having the same structured query language as the current query task.
Specifically, after a distributed execution node receives the query task, it can query the cache library, according to the structured query language of the current query task, for whether a target key-value pair exists among the first key-value pairs, where the target key-value pair is the first key-value pair in the cache library that has the same structured query language as the current query task. By searching the cache library for the target key-value pair, the query result of this embodiment may be either that the target key-value pair exists in the cache library or that it does not.
Step S40: correspondingly generating a second key value pair of the current query task according to the query result and the task number of the current query task, and storing the second key value pair in the cache library for the query end to read; and the second key-value pair is the task number of the query task and the key-value pair of the query result set.
Specifically, based on the query result of step S30, the distributed execution node performs different task operations for different query results, and finally generates the second key-value pair of the current query task according to the query result and the task number of the current query task. Here, the "key" of the second key-value pair is the task number of the query task, and its "value" is the query result set of the query task. The second key-value pair is also stored in the cache library, so that the query end can quickly obtain the query result set of the query task from its task number.
In this query request asynchronous processing method, query tasks are acquired and executed by distributed execution nodes, so that query tasks are answered more efficiently and the waiting time of the query end is reduced. Meanwhile, the cache library can comprise a plurality of cache nodes, breaking through the cache limit of a single server's memory. In addition, query requests are received through a connection with the query end, decoupled from the query end's service system, so no asynchronous query module needs to be added to or modified in the query end's original service system. The system can serve multiple query ends at the same time and only needs to receive the query requests they send, so it has wider adaptability, supports high concurrency and high availability well, and is easy to scale horizontally.
In some embodiments of the present disclosure, step S20 includes:
based on the query task, the distributed execution nodes acquire the corresponding query task according to a preset rule.
Specifically, when the distributed execution nodes pull query tasks from the query queue, each node obtains only the query tasks that correspond to it, so that the query tasks in the query queue are distributed evenly across the distributed execution nodes and no single node becomes congested.
The assignment can be computed from the task number of the query task, the number of distributed execution nodes, and a preset algorithm. For example, with 3 distributed execution nodes, the order in which each query task enters the query queue may be taken modulo 3; the remainder can only take the three values 0, 1 and 2, which are assigned to the 3 distributed execution nodes respectively. Each distributed execution node then pulls only the query tasks whose remainder matches its own number.
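A minimal sketch of this modulo-based assignment (the rule and names are illustrative assumptions; any other preset rule that spreads tasks evenly would serve the same purpose):

# Hypothetical preset rule: a node with index `node_index` (0, 1 or 2 when there
# are 3 nodes) pulls only the tasks whose enqueue order modulo the node count
# equals its own index.
def tasks_for_node(enqueue_orders, node_index, node_count=3):
    """Return the enqueue orders this distributed execution node should pull."""
    return [order for order in enqueue_orders if order % node_count == node_index]

orders = list(range(1, 10))            # tasks entered the queue in this order
print(tasks_for_node(orders, 0))       # [3, 6, 9]
print(tasks_for_node(orders, 1))       # [1, 4, 7]
print(tasks_for_node(orders, 2))       # [2, 5, 8]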
In this embodiment, the query action is executed by the distributed execution nodes, so the query pressure is shared and queries can be executed stably and quickly, especially under high concurrency; meanwhile, each distributed execution node pulls the query tasks that correspond to it according to the set algorithm, so the whole process is orderly and efficient and the risk of query congestion is reduced.
In some embodiments of the present disclosure, step S40 includes:
step S41: and under the condition that the query result is that the target key-value pair exists in the cache library, generating a second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read.
Specifically, when the distributed execution node searches for a target key value pair in the cache library, and the query result is that the target key value pair exists in the cache library, the query result set of the current query task can be directly obtained from the cache library at this time. And generating a second key value pair according to the task number of the current query task and the query result set, and storing the second key value pair in a cache library.
In the embodiment, the cache library is preferentially queried through the distributed execution nodes, and when the first key value pair of the current query task exists in the cache library, namely the current query task is a repeated query task, a query result set can be directly obtained from the cache library without querying a database; meanwhile, a second key value pair is generated according to the current query task and stored in the cache library, so that the query end can conveniently obtain a query result set of the query task according to the task number.
In some embodiments of the present disclosure, as shown in fig. 3, step S41 includes:
step S412: and acquiring a query result set of the target key value pair, and generating a second key value pair of the current query task according to the task number of the current query task and the query result set of the target key value pair.
Specifically, the target key value pair of the current query task is found in the cache library, and the query result set of the current query task can be obtained. And generating a second key value pair according to the task number of the current query task and the query result set of the target key value pair.
Step S414: And performing data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments.
Specifically, the second key-value pair is subjected to data splitting to generate a plurality of data fragments, so that the second key-value pair can be dispersedly stored on a plurality of cache nodes.
Step S416: and respectively storing the data fragments on a plurality of cache nodes of the cache library.
Specifically, the plurality of data fragments are stored on a plurality of cache nodes of the cache library respectively, so that the number of data fragments stored on each cache node is as balanced as possible.
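A minimal sketch of steps S412 to S416, assuming the key-value pair is serialized and split into as many fragments as there are cache nodes; placement here is simplified to the fragment index, while the hash-based node numbering of steps A10 and A20 is sketched later (all helper names are assumptions):

import json

def split_into_fragments(key, value, fragment_count):
    """Serialize a key-value pair and split it into `fragment_count` data fragments (sketch)."""
    payload = json.dumps({"key": key, "value": value}).encode("utf-8")
    size = -(-len(payload) // fragment_count)  # ceiling division
    return [payload[i * size:(i + 1) * size] for i in range(fragment_count)]

def store_on_cache_nodes(fragments, cache_nodes):
    """Place each fragment on one cache node; here a node is a list, chosen by fragment index."""
    for index, fragment in enumerate(fragments):
        cache_nodes[index % len(cache_nodes)].append(fragment)

# Second key-value pair: task number -> query result set, spread over 3 cache nodes.
cache_nodes = [[], [], []]
fragments = split_into_fragments("task-000123", [{"col1": 1}], fragment_count=3)
store_on_cache_nodes(fragments, cache_nodes)
assert all(len(node) == 1 for node in cache_nodes)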
In some embodiments of the present disclosure, step S40 includes:
step S42: and under the condition that the query result is that the target key value pair does not exist in the cache library, querying a database to obtain a query result set of the current query task, generating a first key value pair and a second key value pair of the current query task, and storing the first key value pair and the second key value pair to the cache library for the query end to read.
Specifically, when the target key value pair of the current query task does not exist in the cache library, the distributed execution node needs to initiate a query to the database to obtain a query result set, and generate a first key value pair and a second key value pair of the current query task.
In this embodiment, the cache library is queried first by the distributed execution node; when no first key-value pair matching the current query task exists in the cache library, that is, the current query task is a new query task, the database must be queried. Meanwhile, a first key-value pair and a second key-value pair are generated from the query result set obtained from the database and the current query task and stored in the cache library, so that repeated queries can later be served from the cache and the query end can obtain the query result set of the query task from its task number.
In some embodiments of the present disclosure, as shown in fig. 4, step S42 includes:
step S421: and initiating query to a database according to the structured query language of the current query task, and receiving a query result set of the current query task returned by the database.
Step S423: generating a first key value pair according to the structured query language and the query result set of the current query task;
step S425: generating a second key value pair according to the task number of the current query task and the query result set;
step S427: respectively carrying out data splitting on the first key value pair and the second key value pair of the current query task based on a Hash algorithm to generate a plurality of data fragments;
step S429: and respectively storing the data fragments on a plurality of cache nodes of the cache library.
Specifically, the distributed execution node initiates a query to the database according to the structured query language of the current query task to obtain a query result set of the current query task. And finally, storing the generated first key-value pair and the second key-value pair in a cache library, wherein the specific steps are the same as the step S41 when storing, and are not described herein again.
In some embodiments of the present disclosure, as shown in fig. 5, the aforementioned step S416 or step S429 includes:
step A10: and calculating the number of output nodes of the data fragment according to a preset algorithm, wherein the node number is used for corresponding to the cache nodes one by one.
Specifically, the divided data fragments are subjected to hash calculation, so that the node numbers can be calculated and obtained according to each data fragment. The node numbers are in one-to-one correspondence with the cache nodes. The node number of the data fragment generated by each first key-value pair or second key-value pair is unique. In general, the number of data fragments generated by each first key-value pair or second key-value pair may be set to be the same as the number of cache nodes. For example, the cache library includes 3 cache nodes. Therefore, the first key value pair or the second key value pair which needs to be stored can be divided into 3 groups of data fragments, the 3 groups of data fragments are calculated to obtain 3 different node numbers, and each node number corresponds to one cache node.
Step A20: and storing the data fragments to corresponding cache nodes according to the node numbers.
Specifically, the data fragments are stored to the corresponding cache nodes according to the node numbers of the data fragments.
In this embodiment, through fragmented storage and the multi-node cache library, the query result set of a query task can be stored in the memory of multiple service devices. When a complete piece of data needs to be read, the fragmented layout supports more visitors than reading a single file on a single node, so concurrency can be improved.
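A minimal sketch of steps A10 and A20, under the assumption that the preset algorithm hashes the fragment content and takes it modulo the number of cache nodes, probing to the next node so that the fragments of one key-value pair receive distinct node numbers (all names are illustrative):

import hashlib

def node_number(fragment: bytes, node_count: int) -> int:
    """Preset algorithm (assumed): hash the fragment and map it to a node number."""
    digest = hashlib.md5(fragment).hexdigest()
    return int(digest, 16) % node_count

def place_fragments(fragments, cache_nodes):
    """Store each fragment on the cache node whose number the hash produces,
    probing to the next node if that number is already taken by a sibling fragment."""
    used = set()
    for fragment in fragments:
        number = node_number(fragment, len(cache_nodes))
        while number in used:                       # keep node numbers distinct per key-value pair
            number = (number + 1) % len(cache_nodes)
        used.add(number)
        cache_nodes[number].append(fragment)

cache_nodes = [[], [], []]                          # three cache nodes, numbered 0, 1, 2
place_fragments([b"frag-a", b"frag-b", b"frag-c"], cache_nodes)
assert all(len(node) == 1 for node in cache_nodes)  # one fragment per node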
In some embodiments of the present disclosure, as shown in fig. 6, the foregoing step S416 or step S429 further includes:
step A30: and generating a fragment copy of the data fragment, and storing the fragment copy to a cache node which does not correspond to the node number of the data fragment.
Specifically, each data fragment is backed up to generate a fragment copy, and when stored, a data fragment and its fragment copy must be placed on different cache nodes. For example, suppose the cache library comprises a first cache node, a second cache node, and a third cache node, and the first key-value pair or second key-value pair to be stored is split into a first data fragment, a second data fragment, and a third data fragment. The 3 data fragments are calculated to obtain 3 different node numbers, each corresponding to one cache node, and the fragment copy of each data fragment is stored on a cache node that does not correspond to that fragment's node number. One possible arrangement is:
the first cache node stores the first data fragment and a fragment copy of the second data fragment;
the second cache node stores the second data fragment and a fragment copy of the third data fragment;
and the third cache node stores the third data fragment and a fragment copy of the first data fragment.
In this embodiment, combining data fragments with fragment copies supports more visitors and improves concurrency; moreover, even if a cache node goes down, visitors can still query the full data, which strengthens the fault-tolerance mechanism and keeps availability high. These properties can be tuned further by adjusting the number of fragment copies, the number of data fragments, the number of cache nodes, and so on.
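A minimal sketch of this fragment-plus-copy layout, assuming three nodes and the "next node holds the copy" arrangement listed above (the node indices and rotation rule are illustrative assumptions):

def place_with_copies(fragments, node_count=3):
    """Return a per-node layout: node i holds fragment i and a copy of fragment i+1 (sketch)."""
    nodes = {i: [] for i in range(node_count)}
    for i, fragment in enumerate(fragments):
        nodes[i % node_count].append(("primary", fragment))
        copy_node = (i - 1) % node_count            # the copy lives on a different node
        nodes[copy_node].append(("copy", fragment))
    return nodes

def read_full_data(nodes, fragments, down_node):
    """Even with one node down, every fragment is still reachable via primary or copy."""
    recovered = []
    for fragment in fragments:
        found = any(fragment == frag
                    for node, entries in nodes.items() if node != down_node
                    for _, frag in entries)
        recovered.append(found)
    return all(recovered)

fragments = [b"frag-1", b"frag-2", b"frag-3"]
layout = place_with_copies(fragments)
assert read_full_data(layout, fragments, down_node=0)   # node 0 down, data still complete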
In some embodiments of the present disclosure, the step S40 of reading by the querying end includes:
and providing a service interface for the query end to query a corresponding query result set in the cache library based on the task number of the query task.
Specifically, in conjunction with fig. 7, by providing an asynchronous service interface to the query end, the query end can actively read the query result set of the query task from the cache library. The query end may be configured to read the result from the cache library a set time after it initiates the query request. As described in the foregoing steps for storing data fragments in the cache library, when the data fragments are read back from the cache library and assembled into complete data, the node numbers likewise need to be computed for the query task based on the hash algorithm, so that the query end reads the data from the corresponding cache nodes; this is not repeated here.
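A minimal sketch of this pull-style read, assuming a service interface keyed by task number and omitting the fragment re-assembly across cache nodes (the interface, polling interval, and names are assumptions, not from the patent):

import time

# Stand-in for the cache library: second key-value pairs, task number -> result set.
second_kv_cache = {"task-000123": [{"col1": 1, "col2": "a"}]}

def service_interface(task_number: str):
    """Service interface exposed to the query end: look up the result set by task number."""
    return second_kv_cache.get(task_number)     # None means the task is still running

def query_end_poll(task_number: str, wait_seconds: float = 0.5, attempts: int = 10):
    """Query end: wait a set time after sending the request, then read (and retry if needed)."""
    for _ in range(attempts):
        result_set = service_interface(task_number)
        if result_set is not None:
            return result_set
        time.sleep(wait_seconds)
    return None

print(query_end_poll("task-000123"))   # [{'col1': 1, 'col2': 'a'}]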
In this embodiment, providing a service interface through which the query end reads data from the cache library reduces the changes required to the query end's existing architecture, makes the query and read processes more stable, and keeps development cost low.
In some embodiments of the present disclosure, the step S40 of reading by the querying end includes:
and the distributed execution node sends the acquired query result set of the current query task to the query end.
Specifically, once the distributed execution node obtains the query result set of the current query task from the cache or from the database, it actively pushes the query result set to the query end. Correspondingly, the query end needs to listen on the push interface.
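A minimal sketch of this push-style delivery, assuming the query end registers a callback that the distributed execution node invokes when the result set is ready (the callback mechanism and names are assumptions for illustration):

from typing import Callable, Dict, List

# Query ends register listeners keyed by task number (assumed mechanism).
listeners: Dict[str, Callable[[List[dict]], None]] = {}

def listen(task_number: str, callback: Callable[[List[dict]], None]) -> None:
    """Query end side: monitor the push interface for this task."""
    listeners[task_number] = callback

def push_result(task_number: str, result_set: List[dict]) -> None:
    """Distributed execution node side: actively push the result set to the query end."""
    callback = listeners.pop(task_number, None)
    if callback is not None:
        callback(result_set)

# Usage: the query end listens first, the node pushes once the result set is obtained.
listen("task-000123", lambda rs: print("received:", rs))
push_result("task-000123", [{"col1": 1, "col2": "a"}])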
According to the method and the device, the query result set of the current query task is actively pushed by the distributed execution nodes, so that the timeliness of the query response is improved.
It should be understood that, although the steps in the flowcharts related to the embodiments as described above are sequentially displayed as indicated by arrows, the steps are not necessarily performed sequentially as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a part of the steps in the flowcharts related to the embodiments described above may include multiple steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the execution order of the steps or stages is not necessarily sequential, but may be rotated or alternated with other steps or at least a part of the steps or stages in other steps.
Based on the same inventive concept, the embodiment of the present disclosure further provides a query request asynchronous processing device for implementing the query request asynchronous processing method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the query request asynchronous processing device provided below can refer to the limitations of the query request asynchronous processing method in the foregoing, and details are not described here.
The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in embodiments of the present specification in conjunction with any necessary apparatus to implement the hardware. Based on the same innovative concept, the embodiments of the present disclosure provide an apparatus in one or more embodiments as described in the following embodiments. Since the implementation scheme of the apparatus for solving the problem is similar to that of the method, the specific implementation of the apparatus in the embodiment of the present specification may refer to the implementation of the foregoing method, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
In some embodiments of the present disclosure, as shown in fig. 8, an asynchronous query request processing apparatus is provided, and the apparatus Z00 may be the aforementioned terminal, or may also be a server, or a module, component, device, unit, and the like integrated in the terminal. The apparatus may include:
the query queue module Z10 is configured to receive a query request sent by a query end, generate a query task based on the query request, assign a task number, and write the query task into a query queue;
the distributed execution node module Z20 is used for acquiring a current query task from the query queue by using a distributed execution node, and initiating a query to the cache library based on the current query task; the cache library comprises a plurality of cache nodes, and is used for storing a first key-value pair comprising a historical query task, wherein the first key-value pair is a structured query language of the query task and a key-value pair of a query result set;
the cache query module Z30 is configured to query, according to the structured query language of the current query task, whether a target key-value pair exists in a first key-value pair in the cache library, so as to obtain a query result; the target key-value pair is a first key-value pair which has the same structured query language as the current query task in the cache library;
and the key value pair generating module Z40 is configured to generate a second key value pair of the current query task according to the query result and the task number of the current query task, and store the second key value pair in the cache library for the query end to read.
In some embodiments of the present disclosure, as shown in fig. 9, the distributed execution node module Z20 includes:
an obtaining unit Z22, configured to, based on the query task, obtain, by the distributed execution node, a corresponding query task according to a preset rule.
In some embodiments of the present disclosure, the key-value pair generating module Z40 is configured to, if the query result is that the target key-value pair exists in the cache library, generate a second key-value pair according to the target key-value pair and the task number of the current query task, and store the second key-value pair in the cache library for the query end to read.
In some embodiments of the present disclosure, as shown in fig. 10, the key-value pair generation module Z40 includes:
a second key-value pair unit Z41, configured to obtain a query result set of the target key-value pair, and generate the second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
the data fragmentation unit Z43 is configured to perform data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and the cache node unit Z45 is configured to store the data fragments on a plurality of cache nodes of the cache library respectively.
In some embodiments of the present disclosure, the key-value pair generating module Z40 is configured to, if the query result is that the target key-value pair does not exist in the cache library, query a database to obtain a query result set of the current query task, generate a first key-value pair and a second key-value pair of the current query task, and store the first key-value pair and the second key-value pair in the cache library for the query end to read.
In some embodiments of the present disclosure, as shown in fig. 11, the key-value pair generation module Z40 includes:
a database query unit Z47, configured to initiate a query to a database according to the structured query language of the current query task, and receive a query result set of the current query task returned by the database;
a first key-value pair unit Z49, configured to generate a first key-value pair according to the structured query language and the query result set of the current query task;
a second key-value pair unit Z41, configured to generate a second key-value pair according to the task number of the current query task and the query result set;
the data fragmentation unit Z43 is configured to perform data fragmentation on the first key value pair and the second key value pair of the current query task based on a hash algorithm, respectively, to generate a plurality of data fragments;
and the cache node unit Z45 is used for storing the data fragments on a plurality of cache nodes of the cache library respectively.
In some embodiments of the present disclosure, as shown in fig. 12, the cache node unit Z45 includes:
a node numbering subunit Z451, configured to calculate node numbers for the data fragments according to a preset algorithm, wherein the node numbers correspond one-to-one to the cache nodes;
and the storage subunit Z453 is configured to store the data fragments to the corresponding cache nodes according to the node numbers.
In some embodiments of the present disclosure, as shown in fig. 13, the cache node unit Z45 further includes:
and the copy subunit Z455 is configured to generate a fragment copy of the data fragment, and store the fragment copy to a cache node that does not correspond to the node number of the data fragment.
In some embodiments of the present disclosure, as shown in fig. 14, the apparatus Z00 further comprises:
and the service interface unit Z50 is configured to provide a service interface to the querying end, so that the querying end queries a corresponding query result set in the cache library based on the task number of the query task.
In some embodiments of the present disclosure, as shown in fig. 15, the apparatus Z00 further comprises:
a sending unit Z60, configured to instruct the distributed execution node to send the obtained query result set of the current query task to the query end.
The modules in the query request asynchronous processing device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules. It should be noted that, the division of the modules in the embodiments of the present disclosure is illustrative, and is only one division of logic functions, and there may be another division in actual implementation.
Based on the foregoing description of the embodiment of the asynchronous processing method for query requests, in another embodiment provided by the present disclosure, a computer device is provided, where the computer device may be a server, and the internal structure diagram of the computer device may be as shown in fig. 16. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a query request asynchronous processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
Based on the foregoing description of the embodiments of the asynchronous processing method for query requests, in another embodiment provided by the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps in the embodiments of the method described above.
Based on the foregoing description of embodiments of the asynchronous processing method for query requests, in another embodiment provided by the present disclosure, a computer program product is provided, which comprises a computer program that, when executed by a processor, implements the steps in the embodiments of the methods described above.
It should be noted that the user information (including, but not limited to, user device information and user personal information) and data (including, but not limited to, data used for analysis, stored data, and displayed data) referred to in the present disclosure are information and data authorized by the user or fully authorized by all parties concerned.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
In the description herein, references to "some embodiments," "other embodiments," "desired embodiments," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic descriptions using these terms do not necessarily refer to the same embodiment or example.
It should be understood that the method embodiments described above are described in a progressive manner; for parts that are the same as or similar to those of other embodiments, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. For relevant details, reference may be made to the descriptions of the other method embodiments.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not every possible combination of these technical features has been described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The embodiments described above express only several implementations of the present disclosure, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims. It should be noted that those skilled in the art can make various changes and modifications without departing from the concept of the present disclosure, and all such changes and modifications fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the appended claims.

Claims (23)

1. A query request asynchronous processing method, wherein the method comprises:
receiving a query request sent by a query end, generating a query task based on the query request, assigning a task number, and writing the query task into a query queue;
using a distributed execution node to obtain a current query task from the query queue and initiate a query to a cache library based on the current query task; the cache library comprises several cache nodes and is used to store first key-value pairs including those of historical query tasks, a first key-value pair being a key-value pair of the structured query language of a query task and its query result set;
querying, according to the structured query language of the current query task, whether a target key-value pair exists among the first key-value pairs in the cache library, to obtain a query result; the target key-value pair is a first key-value pair in the cache library having the same structured query language as the current query task; and
correspondingly generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read; the second key-value pair is a key-value pair of the task number of the query task and the query result set.
2. The method according to claim 1, wherein using the distributed execution node to obtain the current query task from the query queue and initiate a query to the cache library based on the current query task comprises:
based on the query task, the distributed execution node obtaining the corresponding query task according to a preset rule.
3. The method according to claim 1, wherein correspondingly generating the second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read, comprises:
in a case where the query result is that the target key-value pair exists in the cache library, generating the second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read.
4. The method according to claim 3, wherein generating the second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read, comprises:
obtaining the query result set of the target key-value pair, and generating the second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
splitting the second key-value pair of the current query task based on a hash algorithm to generate several data fragments; and
storing the data fragments respectively on several cache nodes of the cache library.
5. The method according to claim 1, wherein correspondingly generating the second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read, comprises:
in a case where the query result is that the target key-value pair does not exist in the cache library, querying a database to obtain the query result set of the current query task, generating the first key-value pair and the second key-value pair of the current query task, and storing them in the cache library for the query end to read.
6. The method according to claim 5, wherein querying the database to obtain the query result set of the current query task, generating the first key-value pair and the second key-value pair of the current query task, and storing them in the cache library for the query end to read, comprises:
initiating a query to the database according to the structured query language of the current query task, and receiving the query result set of the current query task returned by the database;
generating the first key-value pair according to the structured query language of the current query task and the query result set;
generating the second key-value pair according to the task number of the current query task and the query result set;
splitting the first key-value pair and the second key-value pair of the current query task based on a hash algorithm to generate several data fragments; and
storing the data fragments respectively on several cache nodes of the cache library.
7. The method according to claim 4 or 6, wherein storing the data fragments respectively on several cache nodes of the cache library comprises:
computing a node number for each data fragment according to a preset algorithm, the node numbers being in one-to-one correspondence with the cache nodes; and
storing the data fragments in the corresponding cache nodes according to the node numbers.
8. The method according to claim 7, wherein storing the data fragments respectively on several cache nodes of the cache library further comprises:
generating a fragment copy of the data fragment, and storing the fragment copy in a cache node that does not correspond to the node number of the data fragment.
9. The method according to claim 1, wherein the reading by the query end comprises:
providing a service interface to the query end, so that the query end queries the corresponding query result set in the cache library based on the task number of the query task.
10. The method according to claim 1, wherein the reading by the query end comprises:
the distributed execution node sending the obtained query result set of the current query task to the query end.
11. A query request asynchronous processing apparatus, wherein the apparatus comprises:
a query queue module, configured to receive a query request sent by a query end, generate a query task based on the query request, assign a task number, and write the query task into a query queue;
a distributed execution node module, configured to use a distributed execution node to obtain a current query task from the query queue and initiate a query to a cache library based on the current query task; the cache library comprises several cache nodes and is used to store first key-value pairs including those of historical query tasks, a first key-value pair being a key-value pair of the structured query language of a query task and its query result set;
a cache query module, configured to query, according to the structured query language of the current query task, whether a target key-value pair exists among the first key-value pairs in the cache library, to obtain a query result; the target key-value pair is a first key-value pair in the cache library having the same structured query language as the current query task; and
a key-value pair generation module, configured to correspondingly generate a second key-value pair of the current query task according to the query result and the task number of the current query task, and store the second key-value pair in the cache library for the query end to read.
12. The apparatus according to claim 11, wherein the distributed execution node module comprises:
an obtaining unit, configured to have the distributed execution node obtain the corresponding query task according to a preset rule based on the query task.
13. The apparatus according to claim 11, wherein the key-value pair generation module is configured to, in a case where the query result is that the target key-value pair exists in the cache library, generate the second key-value pair according to the target key-value pair and the task number of the current query task, and store the second key-value pair in the cache library for the query end to read.
14. The apparatus according to claim 13, wherein the key-value pair generation module comprises:
a second key-value pair unit, configured to obtain the query result set of the target key-value pair, and generate the second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
a data fragmentation unit, configured to split the second key-value pair of the current query task based on a hash algorithm to generate several data fragments; and
a cache node unit, configured to store the data fragments respectively on several cache nodes of the cache library.
15. The apparatus according to claim 11, wherein the key-value pair generation module is configured to, in a case where the query result is that the target key-value pair does not exist in the cache library, query a database to obtain the query result set of the current query task, generate the first key-value pair and the second key-value pair of the current query task, and store them in the cache library for the query end to read.
16. The apparatus according to claim 15, wherein the key-value pair generation module comprises:
a database query unit, configured to initiate a query to the database according to the structured query language of the current query task, and receive the query result set of the current query task returned by the database;
a first key-value pair unit, configured to generate the first key-value pair according to the structured query language of the current query task and the query result set;
a second key-value pair unit, configured to generate the second key-value pair according to the task number of the current query task and the query result set;
a data fragmentation unit, configured to split the first key-value pair and the second key-value pair of the current query task based on a hash algorithm to generate several data fragments; and
a cache node unit, configured to store the data fragments respectively on several cache nodes of the cache library.
17. The apparatus according to claim 14 or 16, wherein the cache node unit comprises:
a node number subunit, configured to compute a node number for each data fragment according to a preset algorithm, the node numbers being in one-to-one correspondence with the cache nodes; and
a storage subunit, configured to store the data fragments in the corresponding cache nodes according to the node numbers.
18. The apparatus according to claim 17, wherein the cache node unit further comprises:
a copy subunit, configured to generate a fragment copy of the data fragment, and store the fragment copy in a cache node that does not correspond to the node number of the data fragment.
19. The apparatus according to claim 11, wherein the apparatus further comprises:
a service interface unit, configured to provide a service interface to the query end, so that the query end queries the corresponding query result set in the cache library based on the task number of the query task.
20. The apparatus according to claim 11, wherein the apparatus further comprises:
a sending unit, configured to instruct the distributed execution node to send the obtained query result set of the current query task to the query end.
21. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 10 when executing the computer program.
22. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
23. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN202111407680.5A 2021-11-24 2021-11-24 Query request asynchronous processing method, device, computer equipment, and storage medium Active CN114218267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111407680.5A CN114218267B (en) 2021-11-24 2021-11-24 Query request asynchronous processing method, device, computer equipment, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111407680.5A CN114218267B (en) 2021-11-24 2021-11-24 Query request asynchronous processing method, device, computer equipment, and storage medium

Publications (2)

Publication Number Publication Date
CN114218267A true CN114218267A (en) 2022-03-22
CN114218267B CN114218267B (en) 2024-12-20

Family

ID=80698202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111407680.5A Active CN114218267B (en) 2021-11-24 2021-11-24 Query request asynchronous processing method, device, computer equipment, and storage medium

Country Status (1)

Country Link
CN (1) CN114218267B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799622A (en) * 2012-06-19 2012-11-28 北京大学 Distributed structured query language (SQL) query method based on MapReduce expansion framework
US20150331910A1 (en) * 2014-04-28 2015-11-19 Venkatachary Srinivasan Methods and systems of query engines and secondary indexes implemented in a distributed database
CN111190928A (en) * 2019-12-24 2020-05-22 平安普惠企业管理有限公司 Cache processing method, apparatus, computer equipment, and storage medium
CN112632157A (en) * 2021-03-11 2021-04-09 全时云商务服务股份有限公司 Multi-condition paging query method under distributed system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115438087A (en) * 2022-11-10 2022-12-06 广州思迈特软件有限公司 Data query method and device based on cache library, storage medium and equipment
CN115835147A (en) * 2022-11-23 2023-03-21 中国工商银行股份有限公司 A short message related information processing method and device
CN115935090A (en) * 2023-03-10 2023-04-07 北京锐服信科技有限公司 Data query method and system based on time slicing
CN116384497A (en) * 2023-05-11 2023-07-04 深圳量旋科技有限公司 Reading and writing system, related method, device and equipment for quantum computing experimental result
CN116384497B (en) * 2023-05-11 2023-08-25 深圳量旋科技有限公司 Reading and writing system, related method, device and equipment for quantum computing experimental result
CN118410066A (en) * 2024-06-27 2024-07-30 成方金融科技有限公司 Method, device, electronic device and storage medium for querying association relationship
CN118410066B (en) * 2024-06-27 2024-11-05 成方金融科技有限公司 Association relation inquiry method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114218267B (en) 2024-12-20

Similar Documents

Publication Publication Date Title
CN114218267B (en) Query request asynchronous processing method, device, computer equipment, and storage medium
US10467245B2 (en) System and methods for mapping and searching objects in multidimensional space
US8954391B2 (en) System and method for supporting transient partition consistency in a distributed data grid
CN101727465B (en) Methods for establishing and inquiring index of distributed column storage database, device and system thereof
US9563426B1 (en) Partitioned key-value store with atomic memory operations
US9613104B2 (en) System and method for building a point-in-time snapshot of an eventually-consistent data store
US9323791B2 (en) Apparatus and method for expanding a shared-nothing system
CN111797121A (en) Strong consistency query method, device and system for read-write separation architecture service system
US20170228422A1 (en) Flexible task scheduler for multiple parallel processing of database data
CN108959538B (en) Full text retrieval system and method
CN111723161B (en) A data processing method, device and equipment
CN109684270A (en) Database filing method, apparatus, system, equipment and readable storage medium storing program for executing
CN112818021B (en) Data request processing method, device, computer equipment and storage medium
JP7440007B2 (en) Systems, methods and apparatus for querying databases
US20200242118A1 (en) Managing persistent database result sets
US20170270149A1 (en) Database systems with re-ordered replicas and methods of accessing and backing up databases
CN113111038A (en) File storage method, device, server and storage medium
US11625503B2 (en) Data integrity procedure
CN103559247A (en) Data service processing method and device
US10534765B2 (en) Assigning segments of a shared database storage to nodes
CN106716400B (en) Method and device for partition management of data table
US10019472B2 (en) System and method for querying a distributed dwarf cube
CN110427390B (en) Data query method and device, storage medium and electronic device
CN111221814B (en) Method, device and equipment for constructing secondary index
CN113127717A (en) Key retrieval method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant