
CN113656203A - Multi-scene caching proxy method

Multi-scene caching proxy method

Info

Publication number
CN113656203A
Authority
CN
China
Prior art keywords
cache
service
proxy method
tenant
scene
Prior art date
Legal status
Pending
Application number
CN202111030424.9A
Other languages
Chinese (zh)
Inventor
周侃
Current Assignee
Digital China Financial Software Co., Ltd.
Original Assignee
Digital China Financial Software Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Digital China Financial Software Co., Ltd.
Priority to CN202111030424.9A
Publication of CN113656203A

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/448: Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4488: Object-oriented
    • G06F 9/449: Object-oriented method invocation or resolution
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/544: Buffers; Shared memory; Pipes
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution
    • G06F 16/24552: Database cache management
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656: Data buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention provides a multi-scene cache proxy method. The proxy method includes: an application-system caller initiates a cache call and sends request data carrying the call context; after receiving the request data, the cache proxy uniformly processes the requesting client connection, the request data, the authentication data and the control parameters, identifies the unique tenant information and the bound business scenario, and stores them uniformly in the application context; the tenant information and the business-scenario information are propagated globally during the call, and every link is uniformly governed by the tenant and scenario elements. The cache proxy performs fine-grained management of the scenario, environment and data-center dimensions along the tenant dimension; through standardized output of unified tenant and business-scenario capabilities it provides unified and finer-grained call governance; and it offers a unified operations view with real-time, unified and comprehensive monitoring and configuration management, reducing the difficulty of operation and maintenance.

Description

Multi-scene caching proxy method
Technical Field
The invention relates to the field of cache processing, and in particular to a multi-scene cache proxy method.
Background
With the rapid growth of online services, new services and new systems keep being launched. Using machine memory as a high-speed cache database can greatly shorten system response times, and the demand of many business systems for extremely low response times drives a large demand for cache databases from application systems. Cache databases are generally deployed as distributed clusters, which involve many machines and are hard to deploy, cumbersome to configure and difficult to monitor. As the number of cache services used by systems grows, different systems adopt different types of cache middleware and different deployment modes. These differences in technology selection and deployment make later operation and maintenance, such as monitoring, disaster recovery and capacity expansion, quite troublesome or leave them unaddressed altogether.
In the prior art, distributed cache technology deploys data in the shared memory of minicomputers, where memory capacity is small and quickly reaches its upper limit; the ever-growing demand for real-time data expansion runs into the elasticity bottleneck of the system; and large-scale concurrent data I/O becomes a performance bottleneck, placing heavy load on the database, lowering transaction throughput and lengthening system latency.
Disclosure of Invention
In view of the above, the present invention has been developed to provide a multi-scenario caching proxy method that overcomes, or at least partially solves, the above-mentioned problems.
According to an aspect of the present invention, there is provided a multi-scenario caching proxy method, including:
the application-system caller initiates a cache call and sends request data carrying the call context;
after receiving the request data, the cache proxy uniformly processes the requesting client connection, the request data, the authentication data and the control parameters, identifies the unique tenant information and the bound business scenario, and stores them uniformly in the application context;
the tenant information and the business-scenario information are propagated globally during the call, and each link is uniformly governed by the tenant and scenario elements.
Optionally, the proxy method further includes:
the cache proxy uses a connection-pool processing mode;
the cache proxy performs aspect (cross-cutting) processing through variously configured interception filters.
Optionally, the proxy method further includes: a registration-and-discovery mode is used between the cache proxy and the application system for service invocation, and standard scenario caching capabilities are exposed externally through the cache proxy.
Optionally, the proxy method further includes: the cache proxy implements the cache service protocol, serves as an intermediate layer between the tenant and the cache service, and decouples the application from the cache service using cluster and sentinel modes;
and a standard partitioning principle for various business scenarios is provided.
Optionally, the design of the business scenarios includes:
hot data: data dictionaries frequently used in a project are stored in the Redis cache;
flash-sale (seckill) activity: product information used for flash-sale activities, together with the related counting functions;
counter: counts products in the flash-sale function;
distributed session: stores user names, passwords and verification codes;
leaderboard: ranks user scores;
distributed lock: handles contention for resources in a distributed architecture;
filter: deduplicates crawled resources;
distributed timed task: handles scheduled tasks by competing for the distributed lock;
message queue: used by the business system as a message queue.
The invention provides a multi-scenario caching proxy method, which comprises the following steps: the application-system caller initiates a cache call and sends request data carrying the call context; after receiving the request data, the cache proxy uniformly processes the requesting client connection, the request data, the authentication data and the control parameters, identifies the unique tenant information and the bound business scenario, and stores them uniformly in the application context; the tenant information and the business-scenario information are propagated globally during the call, and each link is uniformly governed by the tenant and scenario elements. The cache proxy performs fine-grained management of the scenario, environment and data-center dimensions along the tenant dimension and decouples the application system from the cache service; unified and finer-grained call governance is provided through standardized output of unified tenant and business-scenario capabilities; and a unified operations view is provided, with real-time, unified and comprehensive monitoring and configuration management, reducing the difficulty of operation and maintenance.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a processing flow of accessing a cache data service through a connection pool according to an embodiment of the present invention;
fig. 2 is a flowchart of a registration process according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating that a caching agent provides a standard scenario caching capability output to the outside according to an embodiment of the present invention;
fig. 4 is a schematic view of a scene design provided in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terms "comprises" and "comprising," and any variations thereof, in the present description and claims and drawings are intended to cover a non-exclusive inclusion, such as a list of steps or elements.
The technical solution of the present invention is further described in detail with reference to the accompanying drawings and embodiments.
The cache proxy manages the scenario, environment and data-center dimensions along the tenant dimension. As shown in fig. 1, the cache proxy finds the corresponding scenario environment from the tenant information of the accessing end, finds the corresponding connection pool for that scenario environment, and accesses the cache data service through the connection pool; the processing flows bottom-up as shown in fig. 1, and a minimal routing sketch follows.
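The following sketch illustrates the tenant-to-pool routing described above with the redis-py client; the tenant names, scenario names and pool settings are illustrative assumptions, not values taken from the patent.

```python
import redis

# Hypothetical routing table: (tenant, scenario environment) -> connection pool.
SCENARIO_POOLS = {
    ("tenant-a", "hot-data"): redis.ConnectionPool(host="cache-a", port=6379, db=0),
    ("tenant-a", "flash-sale"): redis.ConnectionPool(host="cache-b", port=6379, db=0),
    ("tenant-b", "hot-data"): redis.ConnectionPool(host="cache-c", port=6379, db=1),
}

def client_for(tenant, scenario):
    """Resolve the connection pool for (tenant, scenario) and return a client bound to it."""
    pool = SCENARIO_POOLS.get((tenant, scenario))
    if pool is None:
        raise LookupError("no cache pool configured for tenant=%s, scenario=%s" % (tenant, scenario))
    return redis.Redis(connection_pool=pool)

# Bottom-up flow of fig. 1: tenant info -> scenario environment -> connection pool -> cache data service.
r = client_for("tenant-a", "hot-data")
r.set("dict:country:CN", "China")
print(r.get("dict:country:CN"))
```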
A multi-scenario caching proxy method, the proxy method comprising:
the application-system caller initiates a cache call and sends request data carrying the call context;
after receiving the request data, the cache proxy uniformly processes the requesting client connection, the request data, the authentication data and the control parameters, identifies the unique tenant information and the bound business scenario, and stores them uniformly in the application context;
the tenant information and the business-scenario information are propagated globally during the call, and each link is uniformly governed by the tenant and scenario elements, as sketched below.
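One way to picture the tenant and scenario identification and their global propagation is the sketch below; the request field names ("auth", "tenant_id", "control", "scenario") are assumptions made for illustration and are not specified in the patent.

```python
from contextvars import ContextVar
from dataclasses import dataclass

@dataclass
class CallContext:
    """Application context carried with the call: the unique tenant and its bound scenario."""
    tenant: str
    scenario: str

# contextvars keeps one context per request/task, so every downstream link sees the same values.
_call_context = ContextVar("call_context")

def accept_request(request):
    """Uniformly process connection, request, authentication and control data, then bind the context."""
    tenant = request["auth"]["tenant_id"]        # unique tenant identified from the authentication data
    scenario = request["control"]["scenario"]    # business scenario bound to this call
    ctx = CallContext(tenant=tenant, scenario=scenario)
    _call_context.set(ctx)                       # stored once in the application context
    return ctx

def current_context():
    """Any later link reads the same tenant/scenario for unified governance of the call."""
    return _call_context.get()

accept_request({"auth": {"tenant_id": "tenant-a"}, "control": {"scenario": "hot-data"}})
print(current_context())  # CallContext(tenant='tenant-a', scenario='hot-data')
```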
The proxy method further comprises: the cache proxy uses a connection-pool processing mode, and it performs aspect (cross-cutting) processing through variously configured interception filters; a filter-chain sketch follows.
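A minimal filter-chain sketch of the aspect-style interception, assuming hypothetical authentication and metrics filters (the patent does not name concrete filters):

```python
import time

def auth_filter(request, next_handler):
    # Reject calls that carry no authentication data before they reach the cache.
    if "auth" not in request:
        raise PermissionError("missing authentication data")
    return next_handler(request)

def metrics_filter(request, next_handler):
    # Measure how long the downstream cache access takes.
    start = time.perf_counter()
    try:
        return next_handler(request)
    finally:
        print("cache call took %.6fs" % (time.perf_counter() - start))

def build_chain(filters, terminal):
    """Compose the configured interception filters around the terminal cache access."""
    handler = terminal
    for f in reversed(filters):
        handler = (lambda flt, nxt: (lambda req: flt(req, nxt)))(f, handler)
    return handler

def terminal_cache_access(request):
    return "value-for-" + request["key"]   # stand-in for the real connection-pool access

chain = build_chain([auth_filter, metrics_filter], terminal_cache_access)
print(chain({"auth": {"tenant_id": "tenant-a"}, "key": "dict:country:CN"}))
```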
The cache proxy is stateless and registers itself with a registry, and clients discover it through service discovery. The registration flow is shown in fig. 2; a minimal sketch follows.
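The registration and discovery step can be pictured with an in-memory registry stand-in; a real deployment would use whichever service registry the operator chooses, which the patent does not name.

```python
import itertools

class Registry:
    """In-memory stand-in for a service registry (illustrative only)."""

    def __init__(self):
        self._instances = {}

    def register(self, service, address):
        self._instances.setdefault(service, []).append(address)

    def discover(self, service):
        return list(self._instances.get(service, []))

registry = Registry()

# Each stateless cache-proxy instance registers itself on startup (fig. 2).
registry.register("cache-proxy", "10.0.0.11:6380")
registry.register("cache-proxy", "10.0.0.12:6380")

# The application caller discovers the instances and round-robins across them.
addresses = itertools.cycle(registry.discover("cache-proxy"))
print(next(addresses))  # one cache-proxy address to call
```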
The proxy method further comprises: a registration-and-discovery mode is used between the cache proxy and the application system for service invocation, and standard scenario caching capabilities are exposed externally through the cache proxy.
As shown in fig. 3, a registration-and-discovery mode is used between the cache proxy and the application system to invoke the service, and a standard scenario caching capability output is provided externally through the cache proxy; a sketch of such a scenario-scoped facade follows.
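One way to picture the standardized, scenario-scoped capability output is a thin client facade that always carries the tenant and scenario with each call; the class and method names are assumptions for illustration, not an API defined by the patent.

```python
class ScenarioCacheClient:
    """Hypothetical facade an application obtains per (tenant, scenario) from the cache proxy."""

    def __init__(self, tenant, scenario, proxy_address):
        self.tenant = tenant
        self.scenario = scenario
        self.proxy_address = proxy_address

    def get(self, key):
        return self._call("GET", key)

    def set(self, key, value, ttl_seconds=None):
        return self._call("SET", key, value, ttl_seconds)

    def _call(self, op, *args):
        # Every call carries tenant + scenario so the proxy can route and govern it uniformly.
        envelope = {"tenant": self.tenant, "scenario": self.scenario, "op": op, "args": args}
        print("sending to %s: %s" % (self.proxy_address, envelope))  # transport omitted in this sketch

client = ScenarioCacheClient("tenant-a", "hot-data", "10.0.0.11:6380")
client.set("dict:country:CN", "China", ttl_seconds=3600)
client.get("dict:country:CN")
```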
The proxy method further comprises: the cache proxy implements the cache service protocol, serves as an intermediate layer between the tenant and the cache service, and decouples the application from the cache service using cluster and sentinel modes, as sketched below.
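Because only the proxy talks to the backend, only the proxy needs to know whether that backend is a cluster or a sentinel-managed master/replica set. A sketch using the redis-py sentinel client (the sentinel addresses and the service name "scenario-hot-data" are illustrative assumptions):

```python
from redis.sentinel import Sentinel

# The proxy, not the application, holds the sentinel topology.
sentinel = Sentinel([("sentinel-1", 26379), ("sentinel-2", 26379)], socket_timeout=0.5)

# Writes go to the elected master, reads may go to a replica; applications never see this
# detail, they only talk to the cache proxy.
master = sentinel.master_for("scenario-hot-data", socket_timeout=0.5)
replica = sentinel.slave_for("scenario-hot-data", socket_timeout=0.5)

master.set("dict:currency:CNY", "yuan")
print(replica.get("dict:currency:CNY"))
```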
as shown in fig. 4, a canonical division principle for a variety of service scenarios is provided.
The design of the business scenarios includes the following (three of them are sketched after this list):
hot data: data dictionaries frequently used in a project are stored in the Redis cache;
flash-sale (seckill) activity: product information used for flash-sale activities, together with the related counting functions;
counter: counts products in the flash-sale function;
distributed session: stores user names, passwords and verification codes;
leaderboard: ranks user scores;
distributed lock: handles contention for resources in a distributed architecture;
filter: deduplicates crawled resources;
distributed timed task: handles scheduled tasks by competing for the distributed lock;
message queue: used by the business system as a message queue.
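As an illustration, the counter, leaderboard and distributed-lock scenarios can be expressed directly on Redis with the redis-py client; the key names and expiry values are assumptions for illustration.

```python
import uuid
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Counter for a flash-sale item: atomic increment per unit sold.
sold = r.incr("flashsale:item:1001:sold")

# Leaderboard: a sorted set ranks user scores.
r.zadd("leaderboard:weekly", {"user:alice": 1520, "user:bob": 980})
top3 = r.zrevrange("leaderboard:weekly", 0, 2, withscores=True)

# Distributed lock: SET key value NX EX, released only by the holder.
token = str(uuid.uuid4())
if r.set("lock:settlement-job", token, nx=True, ex=30):
    try:
        pass  # critical section: the scheduled task that won the lock runs here
    finally:
        # Check-and-delete; a Lua script would make the release atomic.
        if r.get("lock:settlement-job") == token:
            r.delete("lock:settlement-job")
```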
The key points of this patent are:
and the cache agent performs fine management on scenes, environments and data center latitudes through tenant dimensions. The cache agent can release the coupling of the application system and the cache service, and each application does not need to consider the height of the cache service; available, deployed and disaster recovery functions. And the method supports multi-tenant, multi-scene division, disaster recovery data synchronization and disaster recovery.
The advantages are as follows:
The cache proxy performs fine-grained management of the scenario, environment and data-center dimensions along the tenant dimension, decoupling the application system from the cache service.
Unified and finer-grained call governance is provided through standardized output of unified tenant and business-scenario capabilities.
A unified operations view is provided, with real-time, unified and comprehensive monitoring and configuration management, reducing the difficulty of operation and maintenance.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. A multi-scenario caching proxy method, characterized in that the proxy method comprises:
the application-system caller initiates a cache call and sends request data carrying the call context;
after receiving the request data, the cache proxy uniformly processes the requesting client connection, the request data, the authentication data and the control parameters, identifies the unique tenant information and the bound business scenario, and stores them uniformly in the application context;
the tenant information and the business-scenario information are propagated globally during the call, and each link is uniformly governed by the tenant and scenario elements.
2. The multi-scenario caching proxy method of claim 1, wherein the proxy method further comprises:
the cache proxy uses a connection-pool processing mode;
the cache proxy performs aspect (cross-cutting) processing through variously configured interception filters.
3. The multi-scenario caching proxy method of claim 1, wherein the proxy method further comprises: a registration-and-discovery mode is used between the cache proxy and the application system for service invocation, and standard scenario caching capabilities are exposed externally through the cache proxy.
4. The multi-scenario caching proxy method of claim 1, wherein the proxy method further comprises: the cache proxy implements the cache service protocol, serves as an intermediate layer between the tenant and the cache service, and decouples the application from the cache service using cluster and sentinel modes;
and a standard partitioning principle for various business scenarios is provided.
5. The multi-scenario caching proxy method of claim 1, wherein the design of the business scenarios comprises:
hot data: data dictionaries frequently used in a project are stored in the Redis cache;
flash-sale (seckill) activity: product information used for flash-sale activities, together with the related counting functions;
counter: counts products in the flash-sale function;
distributed session: stores user names, passwords and verification codes;
leaderboard: ranks user scores;
distributed lock: handles contention for resources in a distributed architecture;
filter: deduplicates crawled resources;
distributed timed task: handles scheduled tasks by competing for the distributed lock;
message queue: used by the business system as a message queue.
Application CN202111030424.9A, filed 2021-09-03 (priority date 2021-09-03): Multi-scene caching proxy method; published as CN113656203A, status pending.

Priority Applications (1)

CN202111030424.9A (priority and filing date 2021-09-03): Multi-scene caching proxy method

Applications Claiming Priority (1)

CN202111030424.9A (priority and filing date 2021-09-03): Multi-scene caching proxy method

Publications (1)

CN113656203A (published 2021-11-16)

Family

ID=78482746

Family Applications (1)

CN202111030424.9A (pending): Multi-scene caching proxy method

Country Status (1)

CN: CN113656203A

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180041491A1 (en) * 2016-08-05 2018-02-08 Oracle International Corporation Caching framework for a multi-tenant identity and data security management cloud service
CN106503163A (en) * 2016-10-31 2017-03-15 用友网络科技股份有限公司 Based on the global configuration multi-tenant dynamic data origin system that SaaS is applied
CN112653665A (en) * 2020-11-25 2021-04-13 航天信息股份有限公司广州航天软件分公司 Data isolation interaction method and system based on cloud service
CN112860451A (en) * 2021-01-21 2021-05-28 中国建设银行股份有限公司 Multi-tenant data processing method and device based on SaaS

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-11-16)