Multi-scenario caching proxy method
Technical Field
The invention relates to the field of cache processing, and in particular to a multi-scenario caching proxy method.
Background
With the rapid development of online services, new services and new systems spring up continuously, and using a machine's memory as a high-speed cache database can greatly shorten system response time. The demand of a large number of service systems for extremely fast response drives a large demand from application systems for cache databases. A cache database is generally deployed as a distributed cluster and is characterized by a large number of machines, difficult deployment, troublesome configuration, and difficult monitoring. As the number of cache services used by systems grows, each system chooses a different type of cache middleware and a different deployment mode. These differences in technology selection and deployment make later operation and maintenance difficult: monitoring, disaster recovery, and capacity expansion are quite troublesome or not considered at all.
In the prior art, distributed cache technology deploys data in the shared memory of minicomputers, where memory capacity is small and quickly reaches its upper limit; the service's ever-growing demand for real-time data expansion runs into the system's elasticity bottleneck; and large-scale concurrent data I/O hits a performance bottleneck, with heavy database load pressure, low transaction throughput, and long system latency.
Disclosure of Invention
In view of the above, the present invention has been developed to provide a multi-scenario caching proxy method that overcomes, or at least partially solves, the above-mentioned problems.
According to an aspect of the present invention, there is provided a multi-scenario caching proxy method, including:
the application-system caller initiates a cache call and sends request data carrying the calling context;
after receiving the request data, the caching proxy uniformly processes the client connection, request data, authentication data, and control parameters of the request, identifies the unique tenant information and the bound service scenario, and stores both uniformly in an application context;
the tenant information and service-scenario information are propagated globally throughout the call, and every link is uniformly controlled by the tenant and scenario elements.
Optionally, the proxy method further includes:
the caching proxy is provided with a connection-pool processing mode;
the caching proxy performs aspect-oriented processing through a variety of configured interception filters.
Optionally, the proxy method further includes: a registration-discovery mode is used between the caching proxy and the application system for service invocation, and standard scenario caching capability is provided externally through the caching proxy.
Optionally, the proxy method further includes: the caching proxy implements the cache service protocol, serves as an intermediate layer between the tenant and the cache service, and decouples the application from the cache service through cluster and sentinel modes;
a standard partitioning principle for a variety of service scenarios is provided.
Optionally, the design of the service scenarios includes:
hot-spot data, which stores data dictionaries frequently used in a project in a Redis cache;
flash-sale activity, for the commodity information used in a flash sale and the counting functions associated with it;
a counter, which counts the commodities in the flash-sale function;
a distributed session, which stores user names, passwords, and authentication codes;
a leaderboard, which ranks user scores;
a distributed lock, which handles contention for resources in a distributed architecture;
a filter, which deduplicates crawled resources;
a distributed timed task, which handles timed tasks that compete for the distributed lock;
a message queue, which the service system uses as a message queue.
The invention provides a multi-scenario caching proxy method comprising the following steps: the application-system caller initiates a cache call and sends request data carrying the calling context; after receiving the request data, the caching proxy uniformly processes the client connection, request data, authentication data, and control parameters of the request, identifies the unique tenant information and the bound service scenario, and stores both uniformly in an application context; the tenant information and service-scenario information are propagated globally throughout the call, and every link is uniformly controlled by the tenant and scenario elements. The caching proxy finely manages the scenario, environment, and data-center dimensions through the tenant dimension, decoupling the application system from the cache service; unified and more refined call control is provided through unified tenant and service-scenario standardized capability output; and a unified operation-and-maintenance perspective is provided, realizing real-time, unified, and comprehensive monitoring views and configuration management and reducing operation-and-maintenance difficulty.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a processing flow of accessing a cache data service through a connection pool according to an embodiment of the present invention;
fig. 2 is a flowchart of a registration process according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating that a caching agent provides a standard scenario caching capability output to the outside according to an embodiment of the present invention;
fig. 4 is a schematic view of a scene design provided in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terms "comprises" and "comprising," and any variations thereof, in the present description and claims and drawings are intended to cover a non-exclusive inclusion, such as a list of steps or elements.
The technical solution of the present invention is further described in detail with reference to the accompanying drawings and embodiments.
The caching proxy controls the scenario, environment, and data-center dimensions through the tenant dimension. As shown in fig. 1, the proxy finds the corresponding scenario environment from the tenant information of the access end, finds the corresponding connection pool from that scenario environment, and accesses the cached data service through the connection pool; the processing flows from bottom to top as shown in fig. 1.
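The tenant-to-environment-to-pool lookup described above can be sketched as follows. This is a minimal illustration only; the table names (`TENANT_SCENES`, `POOLS`) and the addresses are hypothetical and not part of the patent.

```python
# Illustrative lookup tables: (tenant, scenario) -> environment -> connection pool.
TENANT_SCENES = {
    ("tenant-a", "hotspot"): "prod-dc1",
    ("tenant-a", "counter"): "prod-dc2",
}

POOLS = {
    "prod-dc1": ["10.0.0.1:6379", "10.0.0.2:6379"],
    "prod-dc2": ["10.0.1.1:6379"],
}

def resolve_pool(tenant: str, scene: str) -> list:
    """Find the scenario environment bound to a tenant, then its connection pool."""
    env = TENANT_SCENES.get((tenant, scene))
    if env is None:
        raise KeyError(f"no environment bound for tenant={tenant}, scene={scene}")
    return POOLS[env]
```

Because the application only supplies its tenant and scenario, the proxy can remap environments or pools behind this lookup without any change on the application side.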
A multi-scenario caching proxy method, the proxy method comprising:
the application-system caller initiates a cache call and sends request data carrying the calling context;
after receiving the request data, the caching proxy uniformly processes the client connection, request data, authentication data, and control parameters of the request, identifies the unique tenant information and the bound service scenario, and stores both uniformly in an application context;
the tenant information and service-scenario information are propagated globally throughout the call, and every link is uniformly controlled by the tenant and scenario elements.
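The three steps above can be sketched with Python's `contextvars`, which gives exactly the "store once, propagate globally" behavior the method describes. The request fields and the token format (`"tenant:secret"`) are illustrative assumptions, not part of the patent.

```python
# Minimal sketch: the proxy identifies tenant + bound scenario and stores them in
# an application context that downstream links read without explicit passing.
import contextvars

_call_ctx = contextvars.ContextVar("call_ctx")

def handle_request(request: dict) -> dict:
    # Uniform handling of connection / auth / control parameters (stubbed here).
    tenant = request["auth_token"].split(":")[0]  # assumed token format "tenant:secret"
    scene = request["scene"]
    _call_ctx.set({"tenant": tenant, "scene": scene})  # store in the application context
    return dispatch()

def dispatch() -> dict:
    # A downstream link: it controls the call using the globally propagated elements.
    return _call_ctx.get()
```

Every later link (filters, pool selection, metrics) can call `_call_ctx.get()` the same way `dispatch` does, so the tenant and scenario elements travel with the call.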
The proxy method further comprises: the caching proxy is provided with a connection-pool processing mode; the caching proxy performs aspect-oriented processing through a variety of configured interception filters.
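One way the configured interception filters could compose is a simple chain in which each filter wraps the next (aspect style): authentication, auditing, or rate control can each be slotted in without touching the handler. The filter names below are hypothetical examples.

```python
# Each filter receives the call context and the next link in the chain.
def auth_filter(ctx, nxt):
    if not ctx.get("tenant"):
        raise PermissionError("unknown tenant")
    return nxt(ctx)

def audit_filter(ctx, nxt):
    ctx.setdefault("audit", []).append(ctx["scene"])  # record which scenario was hit
    return nxt(ctx)

def build_chain(filters, handler):
    """Wrap the handler in the configured filters, outermost first."""
    chain = handler
    for f in reversed(filters):
        chain = (lambda flt, nx: lambda ctx: flt(ctx, nx))(f, chain)
    return chain

call = build_chain([auth_filter, audit_filter], lambda ctx: f"served {ctx['scene']}")
```

Adding or removing a filter is a configuration change to the list passed to `build_chain`, which matches the "various configured interception filters" wording.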
The caching proxy is stateless and registers with a registry, and clients can discover the caching proxy through service discovery. The registration flow is shown in fig. 2.
The proxy method further comprises: a registration-discovery mode is used between the caching proxy and the application system for service invocation, and standard scenario caching capability is provided externally through the caching proxy.
As shown in fig. 3, a registration-discovery mode is used between the caching proxy and the application system to invoke services, and standard scenario caching capability output is provided externally through the caching proxy.
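The registration-discovery mode of figs. 2 and 3 can be sketched with a toy in-memory registry: stateless proxy instances register their addresses, and application clients pick one (round-robin here) before each call. This is an assumption-laden illustration, not the patent's actual registry.

```python
import itertools

class Registry:
    """Toy service registry: register() for proxy instances, discover() for clients."""

    def __init__(self):
        self._instances = {}  # service name -> list of addresses
        self._cursors = {}    # service name -> round-robin iterator

    def register(self, service: str, addr: str) -> None:
        self._instances.setdefault(service, []).append(addr)

    def discover(self, service: str) -> str:
        addrs = self._instances[service]
        cursor = self._cursors.setdefault(service, itertools.cycle(addrs))
        return next(cursor)

reg = Registry()
reg.register("cache-proxy", "10.0.0.1:7000")
reg.register("cache-proxy", "10.0.0.2:7000")
```

Because the proxy is stateless, any discovered instance can serve any tenant, which is what makes this mode viable.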
The proxy method further comprises: the caching proxy implements the cache service protocol, serves as an intermediate layer between the tenant and the cache service, and decouples the application from the cache service through cluster and sentinel modes.
as shown in fig. 4, a canonical division principle for a variety of service scenarios is provided.
The design of the service scenarios includes the following: hot-spot data, which stores data dictionaries frequently used in a project in a Redis cache; flash-sale activity, for the commodity information used in a flash sale and the counting functions associated with it; a counter, which counts the commodities in the flash-sale function; a distributed session, which stores user names, passwords, and authentication codes; a leaderboard, which ranks user scores; a distributed lock, which handles contention for resources in a distributed architecture; a filter, which deduplicates crawled resources; a distributed timed task, which handles timed tasks that compete for the distributed lock; and a message queue, which the service system uses as a message queue.
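One plausible way the proxy could enforce this scenario partitioning is by namespacing every cache key with the tenant and scenario, so hot-spot data, flash-sale counters, sessions, leaderboards, and the rest never collide. The scenario identifiers and the `tenant:scene:key` scheme below are assumptions for illustration.

```python
# Assumed scenario identifiers, one per scenario in the partitioning principle.
SCENES = {"hotspot", "flashsale", "counter", "session",
          "leaderboard", "lock", "filter", "timer", "queue"}

def scoped_key(tenant: str, scene: str, key: str) -> str:
    """Build a cache key that is isolated per tenant and per scenario."""
    if scene not in SCENES:
        raise ValueError(f"unknown scene: {scene}")
    return f"{tenant}:{scene}:{key}"
```

With such a scheme, per-scenario policies (eviction, quotas, monitoring) can be applied by matching on the key prefix.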
The key points of this patent are:
The caching proxy finely manages the scenario, environment, and data-center dimensions through the tenant dimension. The caching proxy decouples the application system from the cache service, so each application no longer needs to concern itself with the high availability, deployment, or disaster recovery of the cache service. The method supports multi-tenancy, multi-scenario partitioning, disaster-recovery data synchronization, and disaster recovery.
The advantages are as follows:
The caching proxy finely manages the scenario, environment, and data-center dimensions through the tenant dimension, decoupling the application system from the cache service.
Unified and more refined call control is provided through unified tenant and service-scenario standardized capability output.
A unified operation-and-maintenance perspective is provided, realizing real-time, unified, and comprehensive monitoring views and configuration management and reducing operation-and-maintenance difficulty.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.