Disclosure of Invention
In view of the foregoing, it is desirable to provide an asynchronous query request processing method, an asynchronous query request processing apparatus, a computer device, a storage medium, and a computer program product, which can improve the response efficiency of a query request with little or no change to the original system service architecture.
In a first aspect, the present disclosure provides a method for asynchronously processing a query request. The method comprises the following steps:
receiving a query request sent by a query end, generating a query task based on the query request, allocating a task number, and writing the query task into a query queue;
acquiring a current query task from the query queue by using a distributed execution node, and initiating a query to a cache library based on the current query task; the cache library comprises a plurality of cache nodes and is used for storing first key-value pairs of historical query tasks, wherein a first key-value pair is a key-value pair of the structured query language of a query task and its query result set;
querying, according to the structured query language of the current query task, whether a target key-value pair exists among the first key-value pairs in the cache library to obtain a query result; the target key-value pair is a first key-value pair in the cache library having the same structured query language as the current query task;
correspondingly generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read; the second key-value pair is a key-value pair of the task number of the query task and the query result set.
In one embodiment, the acquiring a current query task from the query queue by using a distributed execution node and initiating a query to a cache library based on the current query task comprises:
based on the query task, the distributed execution nodes acquire the corresponding query task according to a preset rule.
In one embodiment, the generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read includes:
and under the condition that the query result is that the target key-value pair exists in the cache library, generating a second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read.
In one embodiment, if the query result is that the target key-value pair exists in the cache library, then generating a second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read includes:
acquiring a query result set of the target key-value pair, and generating a second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
performing data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read includes:
and under the condition that the query result is that the target key-value pair does not exist in the cache library, querying a database to obtain a query result set of the current query task, generating a first key-value pair and a second key-value pair of the current query task, and storing the first key-value pair and the second key-value pair in the cache library for the query end to read.
In one embodiment, if the query result is that the target key-value pair does not exist in the cache library, then the querying a database to obtain a query result set of the current query task, generating a first key-value pair and a second key-value pair of the current query task, and storing the first key-value pair and the second key-value pair in the cache library for the query end to read includes:
initiating a query to a database according to the structured query language of the current query task, and receiving a query result set of the current query task returned by the database;
generating a first key-value pair according to the structured query language and the query result set of the current query task;
generating a second key-value pair according to the task number of the current query task and the query result set;
respectively performing data splitting on the first key-value pair and the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the storing the data fragments on a plurality of cache nodes of the cache library respectively includes:
calculating node numbers for the data fragments according to a preset algorithm, wherein the node numbers correspond to the cache nodes one to one;
and storing the data fragments to corresponding cache nodes according to the node numbers.
In one embodiment, the storing the data fragments on a plurality of cache nodes of the cache library respectively further includes:
and generating a fragment copy of the data fragment, and storing the fragment copy on a cache node other than the one corresponding to the node number of the data fragment.
In one embodiment, the providing for the query end to read includes:
and providing a service interface for the query end to query a corresponding query result set in the cache library based on the task number of the query task.
In one embodiment, the providing for the query end to read includes:
and the distributed execution node sends the acquired query result set of the current query task to the query end.
In a second aspect, the present disclosure also provides an asynchronous processing apparatus for query requests. The device comprises:
the query queue module is used for receiving a query request sent by a query end, generating a query task based on the query request, distributing a task number, and writing the query task into a query queue;
the distributed execution node module is used for acquiring a current query task from the query queue by using the distributed execution node and initiating a query to a cache library based on the current query task; the cache library comprises a plurality of cache nodes and is used for storing first key-value pairs of historical query tasks, wherein a first key-value pair is a key-value pair of the structured query language of a query task and its query result set;
the cache query module is used for querying, according to the structured query language of the current query task, whether a target key-value pair exists among the first key-value pairs in the cache library to obtain a query result; the target key-value pair is a first key-value pair in the cache library having the same structured query language as the current query task;
and the key-value pair generating module is used for correspondingly generating a second key-value pair of the current query task according to the query result and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read.
In one embodiment, the distributed execution node module includes:
and the acquisition unit is used for causing the distributed execution node to acquire, based on the query tasks and according to a preset rule, the query task corresponding to it.
In one embodiment, the key-value pair generating module is configured to, when the query result is that the target key-value pair exists in the cache library, generate a second key-value pair according to the target key-value pair and the task number of the current query task, and store the second key-value pair in the cache library for the query end to read.
In one embodiment, the key-value pair generating module includes:
the second key-value pair unit is used for acquiring a query result set of the target key-value pair and generating the second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
the data fragmentation unit is used for performing data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and the cache node unit is used for respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the key-value pair generating module is configured to, if the query result is that the target key-value pair does not exist in the cache library, query a database to obtain a query result set of the current query task, generate a first key-value pair and a second key-value pair of the current query task, and store the first key-value pair and the second key-value pair in the cache library for the query end to read.
In one embodiment, the key-value pair generating module includes:
the database query unit is used for initiating query to a database according to the structured query language of the current query task and receiving a query result set of the current query task returned by the database;
the first key-value pair unit is used for generating a first key-value pair according to the structured query language and the query result set of the current query task;
the second key-value pair unit is used for generating a second key-value pair according to the task number of the current query task and the query result set;
the data fragmentation unit is used for respectively performing data splitting on the first key-value pair and the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and the cache node unit is used for respectively storing the data fragments on a plurality of cache nodes of the cache library.
In one embodiment, the cache node unit includes:
the node numbering subunit is used for calculating node numbers for the data fragments according to a preset algorithm, wherein the node numbers correspond to the cache nodes one to one;
and the storage subunit is used for storing the data fragments to the corresponding cache nodes according to the node numbers.
In one embodiment, the cache node unit further includes:
and the copy subunit is used for generating a fragment copy of the data fragment and storing the fragment copy on a cache node other than the one corresponding to the node number of the data fragment.
In one embodiment, the apparatus further comprises:
and the service interface unit is used for providing a service interface for the query end, so that the query end can query the corresponding query result set in the cache library based on the task number of the query task.
In one embodiment, the apparatus further comprises:
and the sending unit is used for instructing the distributed execution node to send the obtained query result set of the current query task to the query end.
In a third aspect, the present disclosure also provides a computer device. The computer device comprises a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the above query request asynchronous processing method when executing the computer program.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the above-described query request asynchronous processing method.
In a fifth aspect, the present disclosure also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the above query request asynchronous processing method.
The query request asynchronous processing method, apparatus, computer device, storage medium, and computer program product described above have at least the following beneficial effects:
In the method and apparatus, query tasks are acquired and executed by distributed execution nodes, so that query tasks are responded to more efficiently and the waiting time of the query end is reduced. Meanwhile, the cache library may comprise a plurality of cache nodes, breaking through the cache limit of a single server's memory. In addition, the system receives query requests by connecting with the query end and is decoupled from the query end's service system, so that no asynchronous query module needs to be added to or modified in the original service system of the query end. The system can also serve a plurality of query ends simultaneously, performing a query simply upon receiving a query request from a query end; it therefore has wide adaptability, supports high concurrency and high availability well, and is easy to scale horizontally.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein in the description of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. For example, if the terms first, second, etc. are used to denote names, they do not denote any particular order.
As used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises/comprising," "includes" or "including," etc., specify the presence of stated features, integers, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof. Also, in this specification, the term "and/or" includes any and all combinations of the associated listed items.
The query request asynchronous processing method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1, in which the query end 102 communicates with the asynchronous query system 104 over a network. The asynchronous query server can be implemented by an independent server or a server cluster composed of a plurality of servers. The data storage system may store data that needs to be processed by the asynchronous query server; it may be integrated on the asynchronous query server, or placed on the cloud or another network server. The cache library may store historical query data for the asynchronous query server; it may likewise be integrated on the asynchronous query server, or placed on the cloud or another network server. The cache library may also be a multi-node cache library.
In some embodiments of the present disclosure, as shown in fig. 2, a query request asynchronous processing method is provided, which is described by taking the method as an example applied to the asynchronous query system in fig. 1, and includes the following steps:
step S10: receiving a query request sent by a query end, generating a query task based on the query request, allocating a task number, and writing the query task into a query queue.
Specifically, in this embodiment, the query end generally refers to the device on the side of the user who sends the query request, and may be, for example, a user terminal or another device that provides a service. After receiving a query request initiated by a query end, the system establishes a query task according to the query request. Each established query task is allocated a unique task number, and finally the numbered query task is written into a query queue. The query queue generally refers to a linear table stored in a continuous storage space; in this embodiment two pointers are set for its management. Query tasks are enqueued at the back end of the query queue and dequeued from the front end of the query queue.
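The enqueue/dequeue flow above can be sketched as follows; the names (`create_query_task`, `QueryQueue`) are illustrative assumptions, not part of the disclosed system:

```python
import uuid
from collections import deque

def create_query_task(sql: str) -> dict:
    """Wrap an incoming query request as a task with a unique task number."""
    return {"task_no": uuid.uuid4().hex, "sql": sql}

class QueryQueue:
    """FIFO query queue managed at two ends: enqueue at the back, dequeue at the front."""
    def __init__(self):
        self._q = deque()

    def enqueue(self, task: dict) -> None:
        self._q.append(task)      # back end of the queue

    def dequeue(self) -> dict:
        return self._q.popleft()  # front end of the queue

queue = QueryQueue()
task = create_query_task("select col1, col2 from table1 where condition")
queue.enqueue(task)
current = queue.dequeue()  # an execution node would pull from here
```

Tasks dequeue in the order they were enqueued, and each carries its unique task number through the rest of the pipeline.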
Step S20: acquiring a current query task from the query queue by using a distributed execution node, and initiating a query to a cache library based on the current query task; the cache library comprises a plurality of cache nodes, and is used for storing a first key-value pair comprising a historical query task, wherein the first key-value pair is a structured query language of the query task and a key-value pair of a query result set.
Specifically, the number of distributed execution nodes in this embodiment is greater than or equal to 2. A conventional single node may be a single physical machine that includes all of the services and databases. In contrast, in a distributed deployment a plurality of nodes collectively provide all services and databases, and each distributed execution node has its own processor and memory, with independent data-processing capability. Generally, the nodes are peers, with no distinction between primary and secondary; they work autonomously and coordinate task processing by exchanging information over a shared communication line. Meanwhile, different distributed execution nodes can each execute several subtasks to jointly complete a large overall task.
In this embodiment, the query tasks are pulled from the query queue by a plurality of distributed execution nodes, and each distributed execution node initiates a query to the cache library based on its current query task. The cache library comprises a plurality of cache nodes, so that a single cache can be expanded across a plurality of service devices. The cache library may be used to store the first key-value pairs of historical query tasks. A key-value pair generally refers to an organization of data storage in which the "value" corresponding to a "key" can be obtained by querying the "key". In this embodiment, the first key-value pair comprises the structured query language and the query result set of a query task, that is, the structured query language of the query task serves as the "key" and the query result set serves as the "value".
The structured query language may be the full text of an SQL statement, such as:
select col1,col2 from table1 where condition.
or a JSON-format query language of a query engine, such as:
query=[{"queryType":"scan","dataSource":{"type":"table","name":"table2"},"intervals":{"type":"intervals","intervals":["2021-09-08T08:23:32.096Z/2021-09-18T15:36:27.903Z"]},"descending":false,"granularity":{"type":"all"}}]
After receiving the same structured query language, a subsequent distributed execution node can directly obtain from the cache library the value keyed by that structured query language, namely the query result set of the query task.
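A minimal in-memory sketch of the first key-value pair, with the SQL text as the key and the result set as the value (the dictionary stand-in for the cache library and the function names are assumptions for illustration):

```python
# Illustrative stand-in for the cache library's first key-value pairs.
first_kv_store = {}

def store_first_kv(sql: str, result_set: list) -> None:
    """First key-value pair: key = structured query language, value = result set."""
    first_kv_store[sql] = result_set

def lookup_target_kv(sql: str):
    """Return the cached result set if an identical SQL was queried before, else None."""
    return first_kv_store.get(sql)

store_first_kv("select col1,col2 from table1 where condition", [("a", 1), ("b", 2)])
hit = lookup_target_kv("select col1,col2 from table1 where condition")    # cache hit
miss = lookup_target_kv("select col3 from table2 where other_condition")  # cache miss
```

A repeated query thus never reaches the database; only a query whose SQL text has not been seen before misses the cache.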
Step S30: inquiring whether a first key value pair in the cache library has a target key value pair according to the structured query language of the current query task to obtain a query result; the target key-value pair is a first key-value pair in the cache library having the same structured query language as the current query task.
Specifically, after the distributed execution node receives the query task, it can query, according to the structured query language of the current query task, whether a target key-value pair exists among the first key-value pairs in the cache library, where the target key-value pair is the first key-value pair in the cache library having the same structured query language as the current query task. By searching the cache library for the target key-value pair, the query result of this embodiment may be either that the target key-value pair exists in the cache library or that it does not.
Step S40: correspondingly generating a second key value pair of the current query task according to the query result and the task number of the current query task, and storing the second key value pair in the cache library for the query end to read; and the second key-value pair is the task number of the query task and the key-value pair of the query result set.
Specifically, according to the query result of step S30, the distributed execution node executes different task operations for different query results, and finally generates the second key-value pair of the current query task according to the query result and the task number of the current query task. Here, the "key" of the second key-value pair is the task number of the query task, and the "value" is the query result set of the query task. The second key-value pair is also stored in the cache library, so that the query end can quickly obtain the query result set of a query task according to its task number.
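The second key-value pair can be sketched the same way, keyed by task number so the query end can poll for its result without re-sending the SQL (names and storage are illustrative assumptions):

```python
# Illustrative stand-in for the cache library's second key-value pairs.
second_kv_store = {}

def store_second_kv(task_no: str, result_set: list) -> None:
    """Second key-value pair: key = task number, value = query result set."""
    second_kv_store[task_no] = result_set

def read_result(task_no: str):
    """Read path used by the query end: fetch the result set by task number."""
    return second_kv_store.get(task_no)

store_second_kv("task-0001", [("a", 1), ("b", 2)])
ready = read_result("task-0001")    # result available for this task number
pending = read_result("task-0002")  # not finished (or unknown) yet
```

The query end only ever needs the task number it was handed when the request was accepted; the SQL itself stays inside the system.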
In the above asynchronous processing method for query requests, query tasks are acquired and executed by distributed execution nodes, so that query tasks are responded to more efficiently and the waiting time of the query end is reduced. Meanwhile, the cache library may comprise a plurality of cache nodes, breaking through the cache limit of a single server's memory. In addition, the system receives query requests by connecting with the query end and is decoupled from the query end's service system, so that no asynchronous query module needs to be added to or modified in the original service system of the query end. The system can serve a plurality of query ends simultaneously, performing a query simply upon receiving a query request from a query end; it therefore has wide adaptability, supports high concurrency and high availability well, and is easy to scale horizontally.
In some embodiments of the present disclosure, step S20 includes:
based on the query task, the distributed execution nodes acquire the corresponding query task according to a preset rule.
Specifically, when pulling query tasks from the query queue, each distributed execution node obtains the query tasks corresponding to it, so that the query tasks in the query queue can be evenly distributed among the distributed execution nodes, avoiding congestion at any single node.
The node to which a query task is allocated can be calculated from its task number, the number of distributed execution nodes, and a preset algorithm. For example, if the number of distributed execution nodes is 3, the order in which each query task enters the query queue may be taken modulo 3; the remainder has only three possible values, 0, 1, and 2, which are allocated to the 3 distributed execution nodes respectively. Each distributed execution node then pulls the query tasks corresponding to its own number.
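The remainder-based distribution rule in this example can be written directly (a sketch of the preset rule described above; the function name is an assumption):

```python
def node_for_task(enqueue_order: int, node_count: int = 3) -> int:
    """Map a task's position in the queue to an execution node by remainder."""
    return enqueue_order % node_count

# Six tasks entering the queue are spread evenly over the 3 nodes.
assignments = [node_for_task(i) for i in range(6)]  # [0, 1, 2, 0, 1, 2]
```

Because consecutive queue positions cycle through the remainders, the load stays balanced without any coordination between nodes.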
In this embodiment, the query action is executed by the distributed execution nodes, so that the query pressure is shared; in particular, queries can be executed stably and quickly in high-concurrency scenarios. Meanwhile, each distributed execution node pulls the query tasks corresponding to it according to the set algorithm, keeping the whole process orderly and efficient and reducing the risk of query congestion.
In some embodiments of the present disclosure, step S40 includes:
step S41: and under the condition that the query result is that the target key-value pair exists in the cache library, generating a second key-value pair according to the target key-value pair and the task number of the current query task, and storing the second key-value pair in the cache library for the query end to read.
Specifically, when the distributed execution node searches for the target key-value pair in the cache library and the query result is that the target key-value pair exists, the query result set of the current query task can be obtained directly from the cache library. A second key-value pair is then generated according to the task number of the current query task and the query result set, and stored in the cache library.
In this embodiment, the cache library is queried preferentially by the distributed execution nodes; when the first key-value pair of the current query task already exists in the cache library, that is, the current query task is a repeated query task, the query result set can be obtained directly from the cache library without querying the database. Meanwhile, a second key-value pair is generated for the current query task and stored in the cache library, so that the query end can conveniently obtain the query result set of the query task according to its task number.
In some embodiments of the present disclosure, as shown in fig. 3, step S41 includes:
step S412: and acquiring a query result set of the target key value pair, and generating a second key value pair of the current query task according to the task number of the current query task and the query result set of the target key value pair.
Specifically, the target key value pair of the current query task is found in the cache library, and the query result set of the current query task can be obtained. And generating a second key value pair according to the task number of the current query task and the query result set of the target key value pair.
Step S414: performing data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments.
Specifically, the second key-value pair is subjected to data splitting to generate a plurality of data fragments, so that the second key-value pair can be dispersedly stored on a plurality of cache nodes.
Step S416: and respectively storing the data fragments on a plurality of cache nodes of the cache library.
Specifically, the plurality of data fragments are respectively stored on a plurality of cache nodes of the cache library, so that the numbers of data fragments stored on the cache nodes are as close to each other as possible.
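One way to sketch the splitting step: divide a result set into near-equal fragments, one per cache node. The round-robin slicing here is an illustrative choice; the disclosure only specifies a hash-based split into a plurality of fragments:

```python
def split_into_fragments(rows: list, n_nodes: int = 3) -> list:
    """Split a key-value pair's result set into n_nodes near-equal fragments."""
    return [rows[i::n_nodes] for i in range(n_nodes)]

rows = list(range(7))
fragments = split_into_fragments(rows, 3)  # [[0, 3, 6], [1, 4], [2, 5]]
```

No fragment differs from another by more than one row, so the storage load on the cache nodes stays balanced, and concatenating the fragments recovers the full result set.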
In some embodiments of the present disclosure, step S40 includes:
step S42: and under the condition that the query result is that the target key value pair does not exist in the cache library, querying a database to obtain a query result set of the current query task, generating a first key value pair and a second key value pair of the current query task, and storing the first key value pair and the second key value pair to the cache library for the query end to read.
Specifically, when the target key value pair of the current query task does not exist in the cache library, the distributed execution node needs to initiate a query to the database to obtain a query result set, and generate a first key value pair and a second key value pair of the current query task.
In this embodiment, a cache library is preferentially queried by a distributed execution node, and when a first key value pair of a current query task does not exist in the cache library, that is, the current query task is a new query task, a query needs to be performed on a database; meanwhile, a first key value pair and a second key value pair are generated according to the query result set obtained from the database and the current query task and are stored in the cache library, so that the query end can conveniently repeat query and obtain the query result set of the query task according to the task number.
In some embodiments of the present disclosure, as shown in fig. 4, step S42 includes:
step S421: and initiating query to a database according to the structured query language of the current query task, and receiving a query result set of the current query task returned by the database.
Step S423: generating a first key value pair according to the structured query language and the query result set of the current query task;
step S425: generating a second key value pair according to the task number of the current query task and the query result set;
step S427: respectively carrying out data splitting on the first key value pair and the second key value pair of the current query task based on a Hash algorithm to generate a plurality of data fragments;
step S429: and respectively storing the data fragments on a plurality of cache nodes of the cache library.
Specifically, the distributed execution node initiates a query to the database according to the structured query language of the current query task to obtain the query result set of the current query task. Finally, the generated first key-value pair and second key-value pair are stored in the cache library; the specific storing steps are the same as in step S41 and are not described herein again.
In some embodiments of the present disclosure, as shown in fig. 5, the aforementioned step S416 or step S429 includes:
Step A10: calculating and outputting a node number of the data fragment according to a preset algorithm, where the node numbers correspond one-to-one to the cache nodes.
Specifically, hash calculation is performed on the divided data fragments, so that a node number can be calculated for each data fragment. The node numbers correspond one-to-one to the cache nodes, and the node number of each data fragment generated from a first key-value pair or second key-value pair is unique. In general, the number of data fragments generated from each first key-value pair or second key-value pair may be set equal to the number of cache nodes. For example, if the cache library includes 3 cache nodes, the first key-value pair or second key-value pair to be stored can be divided into 3 groups of data fragments; 3 different node numbers are calculated from the 3 groups of data fragments, and each node number corresponds to one cache node.
Step A20: storing the data fragments to the corresponding cache nodes according to the node numbers.
Specifically, the data fragments are stored to the corresponding cache nodes according to the node numbers of the data fragments.
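Steps A10 and A20 can be sketched as below, under the assumption stated above that the number of data fragments equals the number of cache nodes (3 here). The offset scheme that guarantees three distinct node numbers per key-value pair is an illustrative choice; the disclosure only requires a preset hash-based algorithm.

```python
import hashlib

NUM_NODES = 3
cache_nodes = {n: {} for n in range(NUM_NODES)}   # node number -> node storage

def split_into_fragments(key, result_set, num_fragments=NUM_NODES):
    # Split one key-value pair's result set into num_fragments groups.
    return [(i, result_set[i::num_fragments]) for i in range(num_fragments)]

def node_number(key, fragment_index):
    # Step A10: hash the key, then offset by the fragment index so each
    # fragment of the same pair maps to a different cache node.
    base = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return (base + fragment_index) % NUM_NODES

def store_key_value_pair(key, result_set):
    for idx, data in split_into_fragments(key, result_set):
        # Step A20: store the fragment on the node matching its number.
        cache_nodes[node_number(key, idx)][(key, idx)] = data

store_key_value_pair("T-0001", [1, 2, 3, 4, 5, 6])
```

Reading the full data back simply recomputes the same node numbers from the key and collects the fragments in index order.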
In this embodiment, through data-fragment storage and a multi-node cache library, the query result set of a query task may be stored in the memories of multiple service devices. When one complete piece of data needs to be read, the data-fragment mode supports more visitors than reading a single file on a single node, so concurrency may be improved.
In some embodiments of the present disclosure, as shown in fig. 6, the foregoing step S416 or step S429 further includes:
Step A30: generating a fragment copy of the data fragment, and storing the fragment copy on a cache node that does not correspond to the node number of the data fragment.
Specifically, each data fragment is backed up to generate a fragment copy; during storage, the fragment copy and the data fragment itself are stored on different cache nodes. For example, the cache library includes a first cache node, a second cache node, and a third cache node, and the first key-value pair or second key-value pair to be stored is split into a first data fragment, a second data fragment, and a third data fragment. Three different node numbers are calculated from the 3 groups of data fragments, and each node number corresponds to one cache node. Meanwhile, the fragment copy of each data fragment is also stored on a cache node that does not correspond to the node number of that data fragment. One possible arrangement is as follows:
the first cache node stores the first data fragment and a fragment copy of the second data fragment;
the second cache node stores the second data fragment and a fragment copy of the third data fragment;
and the third cache node stores the third data fragment and a fragment copy of the first data fragment.
In this embodiment, by combining data fragments with fragment copies, more visitors can be supported and concurrency is improved; moreover, a visitor can still query the full amount of data when a certain cache node goes down, so the fault-tolerance mechanism is improved and availability is high. A better effect can be obtained by adjusting the number of fragment copies, the number of data fragments, the number of cache nodes, and the like.
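The three-node example above, including the fault-tolerance property, can be sketched as follows. The simple "next node in the ring" placement rule is an illustrative assumption; it merely guarantees that a copy never lands on the node corresponding to its fragment's own node number, which is all step A30 requires.

```python
NUM_NODES = 3
nodes = {n: {"primary": {}, "copy": {}} for n in range(NUM_NODES)}

def place_fragment(fragment_id, data, primary_node):
    nodes[primary_node]["primary"][fragment_id] = data
    # Step A30: the fragment copy is stored on a cache node that does NOT
    # correspond to the node number of the data fragment (here: next node).
    copy_node = (primary_node + 1) % NUM_NODES
    nodes[copy_node]["copy"][fragment_id] = data

for frag_id, node_no in ((0, 0), (1, 1), (2, 2)):
    place_fragment(frag_id, f"fragment-{frag_id}", node_no)

def read_full_data(down_node):
    # Even with one cache node down, the visitor can still query all data.
    recovered = {}
    for n, store in nodes.items():
        if n != down_node:
            recovered.update(store["primary"])
            recovered.update(store["copy"])
    return recovered
```

Whichever single node fails, the union of the surviving primaries and copies still contains every fragment.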
In some embodiments of the present disclosure, the reading by the query end in step S40 includes:
providing a service interface for the query end to query the corresponding query result set in the cache library based on the task number of the query task.
Specifically, with reference to fig. 7, by providing an asynchronous service interface to the query end, the query end can actively read the query result set of a query task from the cache library. For example, the query end may be set to actively read from the cache library a set time after it initiates the query request. As can be seen from the foregoing step of storing the data fragments in the cache library, when the data fragments are read from the cache library and assembled into complete data, the node numbers also need to be calculated from the query task based on the hash algorithm, so that the query end can read the data from the corresponding cache nodes; details are not described herein again.
By providing a service interface for the query end to read data from the cache library, this embodiment reduces changes to the database architecture of the query end; the query and reading processes are more stable, and the development cost is low.
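The polling-style read described above can be sketched as follows. This is a hypothetical client-side illustration: second_kv_cache stands in for the second key-value pairs (task number to query result set) in the cache library, and all function names are assumptions, not part of the disclosure.

```python
import time

second_kv_cache = {}   # task number -> query result set

def query_service_interface(task_number):
    # Service interface exposed to the query end.
    return second_kv_cache.get(task_number)

def poll_result(task_number, wait_seconds=0.01, attempts=3):
    # The query end actively reads after the set time, retrying a few times
    # in case the asynchronous task has not finished yet.
    for _ in range(attempts):
        result = query_service_interface(task_number)
        if result is not None:
            return result
        time.sleep(wait_seconds)
    return None

# The execution node has already stored the second key-value pair.
second_kv_cache["T-0001"] = [(1, "alice")]
result = poll_result("T-0001")
```

Because the query end only needs the task number it received when submitting the request, its own database architecture is untouched.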
In some embodiments of the present disclosure, the reading by the query end in step S40 includes:
sending, by the distributed execution node, the obtained query result set of the current query task to the query end.
Specifically, when the distributed execution node obtains the query result set of the current query task from the cache library or the database, it actively pushes the query result set to the query end. Correspondingly, the query end needs to listen on the push interface.
In this embodiment, the distributed execution node actively pushes the query result set of the current query task, which improves the timeliness of the query response.
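The push model can be sketched as a simple listener registration, again as an illustrative assumption only: the query end registers a callback on the push interface, and the execution node invokes it once the result set is ready.

```python
listeners = {}   # task number -> callback registered by the query end
received = {}    # what the query end has received so far

def register_listener(task_number, callback):
    # The query end listens on the push interface for its task number.
    listeners[task_number] = callback

def push_result(task_number, result_set):
    # Called by the distributed execution node when the result set is ready.
    callback = listeners.get(task_number)
    if callback is not None:
        callback(task_number, result_set)

register_listener("T-0002", lambda tn, rs: received.__setitem__(tn, rs))
push_result("T-0002", [(2, "bob")])
```

Compared with the polling interface of the previous embodiment, this trades a listening obligation on the query end for lower response latency.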
It should be understood that, although the steps in the flowcharts related to the embodiments described above are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown, and may be performed in other orders. Moreover, at least some of the steps in the flowcharts related to the embodiments described above may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present disclosure further provides a query request asynchronous processing device for implementing the query request asynchronous processing method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the query request asynchronous processing device provided below can refer to the limitations of the query request asynchronous processing method in the foregoing, and details are not described here.
The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, and the like that use the methods described in the embodiments of the present specification, in conjunction with any hardware necessary for implementation. Based on the same innovative concept, the embodiments of the present disclosure provide an apparatus in one or more embodiments as described below. Since the implementation scheme by which the apparatus solves the problem is similar to that of the method, the specific implementation of the apparatus in the embodiments of the present specification may refer to the implementation of the foregoing method, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
In some embodiments of the present disclosure, as shown in fig. 8, an asynchronous query request processing apparatus is provided. The apparatus Z00 may be the aforementioned terminal, or may be a server, or a module, component, device, unit, or the like integrated into the terminal. The apparatus may include:
the query queue module Z10 is configured to receive a query request sent by a query end, generate a query task based on the query request, assign a task number, and write the query task into a query queue;
the distributed execution node module Z20 is used for acquiring a current query task from the query queue by using a distributed execution node, and initiating a query to the cache library based on the current query task; the cache library comprises a plurality of cache nodes, and is used for storing a first key-value pair comprising a historical query task, wherein the first key-value pair is a structured query language of the query task and a key-value pair of a query result set;
the cache query module Z30 is configured to query, according to the structured query language of the current query task, whether a target key-value pair exists in a first key-value pair in the cache library, so as to obtain a query result; the target key-value pair is a first key-value pair which has the same structured query language as the current query task in the cache library;
and the key value pair generating module Z40 is configured to generate a second key value pair of the current query task according to the query result and the task number of the current query task, and store the second key value pair in the cache library for the query end to read.
In some embodiments of the present disclosure, as shown in fig. 9, the distributed execution node module Z20 includes:
an obtaining unit Z22, configured to obtain, by the distributed execution node, a corresponding query task from the query queue according to a preset rule.
In some embodiments of the present disclosure, the key-value pair generating module Z40 is configured to, if the query result is that the target key-value pair exists in the cache library, generate a second key-value pair according to the target key-value pair and the task number of the current query task, and store the second key-value pair in the cache library for the query end to read.
In some embodiments of the present disclosure, as shown in fig. 10, the key-value pair generation module Z40 includes:
a second key-value pair unit Z41, configured to obtain a query result set of the target key-value pair, and generate the second key-value pair of the current query task according to the task number of the current query task and the query result set of the target key-value pair;
the data fragmentation unit Z43 is configured to perform data splitting on the second key-value pair of the current query task based on a hash algorithm to generate a plurality of data fragments;
and the cache node unit Z45 is configured to store the data fragments on a plurality of cache nodes of the cache library respectively.
In some embodiments of the present disclosure, the key-value pair generating module Z40 is configured to, if the query result is that the target key-value pair does not exist in the cache library, query a database to obtain a query result set of the current query task, generate a first key-value pair and a second key-value pair of the current query task, and store the first key-value pair and the second key-value pair in the cache library for the query end to read.
In some embodiments of the present disclosure, as shown in fig. 11, the key-value pair generation module Z40 includes:
a database query unit Z47, configured to initiate a query to a database according to the structured query language of the current query task, and receive a query result set of the current query task returned by the database;
a first key-value pair unit Z49, configured to generate a first key-value pair according to the structured query language and the query result set of the current query task;
a second key-value pair unit Z41, configured to generate a second key-value pair according to the task number of the current query task and the query result set;
the data fragmentation unit Z43 is configured to perform data fragmentation on the first key value pair and the second key value pair of the current query task based on a hash algorithm, respectively, to generate a plurality of data fragments;
and the cache node unit Z45 is used for storing the data fragments on a plurality of cache nodes of the cache library respectively.
In some embodiments of the present disclosure, as shown in fig. 12, the cache node unit Z45 includes:
a node numbering subunit Z451, configured to calculate and output a node number of the data fragment according to a preset algorithm, where the node numbers correspond one-to-one to the cache nodes;
and the storage subunit Z453 is configured to store the data fragments to the corresponding cache nodes according to the node numbers.
In some embodiments of the present disclosure, as shown in fig. 13, the cache node unit Z45 further includes:
and the copy subunit Z455 is configured to generate a fragment copy of the data fragment, and store the fragment copy to a cache node that does not correspond to the node number of the data fragment.
In some embodiments of the present disclosure, as shown in fig. 14, the apparatus Z00 further comprises:
and the service interface unit Z50 is configured to provide a service interface to the querying end, so that the querying end queries a corresponding query result set in the cache library based on the task number of the query task.
In some embodiments of the present disclosure, as shown in fig. 15, the apparatus Z00 further comprises:
a sending unit Z60, configured to instruct the distributed execution node to send the obtained query result set of the current query task to the query end.
The modules in the asynchronous query request processing apparatus can be wholly or partially implemented by software, by hardware, or by a combination thereof. The modules can be embedded, in hardware form, in or be independent of a processor in the computer device, or can be stored, in software form, in a memory of the computer device, so that the processor can invoke and perform the operations corresponding to the modules. It should be noted that the division of the modules in the embodiments of the present disclosure is illustrative and is only one division of logic functions; there may be other divisions in actual implementation.
Based on the foregoing description of the embodiment of the asynchronous processing method for query requests, in another embodiment provided by the present disclosure, a computer device is provided, where the computer device may be a server, and the internal structure diagram of the computer device may be as shown in fig. 16. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a query request asynchronous processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
Based on the foregoing description of the embodiments of the asynchronous processing method for query requests, in another embodiment provided by the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps in the embodiments of the method described above.
Based on the foregoing description of embodiments of the asynchronous processing method for query requests, in another embodiment provided by the present disclosure, a computer program product is provided, which comprises a computer program that, when executed by a processor, implements the steps in the embodiments of the methods described above.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present disclosure are information and data that are authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
In the description herein, references to "some embodiments," "other embodiments," "desired embodiments," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example.
It is understood that the embodiments of the method described above are described in a progressive manner, and the same/similar parts of the embodiments are referred to each other, and each embodiment focuses on differences from the other embodiments. Reference may be made to the description of other method embodiments for relevant points.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features of the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present disclosure, and the description thereof is more specific and detailed, but not construed as limiting the claims. It should be noted that, for those skilled in the art, various changes and modifications can be made without departing from the concept of the present disclosure, and these changes and modifications are all within the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the appended claims.