
CN113076187A - Distributed lock management method and device - Google Patents

Distributed lock management method and device

Info

Publication number
CN113076187A
Authority
CN
China
Prior art keywords
service node
lock
client
distributed lock
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010003853.6A
Other languages
Chinese (zh)
Other versions
CN113076187B (en)
Inventor
安凯歌
朱云锋
卢毅军
唐治洋
程霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010003853.6A priority Critical patent/CN113076187B/en
Publication of CN113076187A publication Critical patent/CN113076187A/en
Application granted granted Critical
Publication of CN113076187B publication Critical patent/CN113076187B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F 9/5088: Techniques for rebalancing the load in a distributed system involving task migration (G PHYSICS > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 9/50 Allocation of resources)
    • G06F 9/526: Mutual exclusion algorithms (G PHYSICS > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 9/52 Program synchronisation)


Abstract

The invention provides a distributed lock management method and device. The method comprises the following steps: a first service node sets a distributed lock to be migrated to a first state, in which the distributed lock cannot be allocated; the distributed lock is migrated from the first service node to a second service node; the routing information of the distributed lock at the second service node is acquired; and lock requests sent by clients are processed based on the routing information. The technical scheme realizes online migration of the distributed lock without taking any node offline or stopping any process, and avoids the risk of a single point of failure during migration, thereby improving resource utilization.

Description

Distributed lock management method and device
Technical Field
The invention relates to the technical field of computers, in particular to a distributed lock management method and device.
Background
With the growth of business scale, distributed systems are widely used in various business scenarios. As the number of clients in a distributed system increases rapidly, the load carried by a single service node also grows. To ensure quality of service, a common approach is to add a new service node and migrate part of the service requests to it for processing. The distributed lock is an important mechanism for realizing mutually exclusive access to shared resources and ensuring data consistency in a distributed system, and how to migrate distributed locks online is a problem that urgently needs to be solved.
When a distributed lock migrates from one service node to another, it must be ensured that two or more clients never believe they hold the distributed lock at the same time. For this reason, one migration scheme is to prohibit lock preemption by all clients other than the one currently holding the distributed lock before performing the migration; the holding client is then triggered to acquire the routing information of the distributed lock at the new service node and to preempt the lock again there. Once that client successfully re-acquires the lock, the other clients that were prohibited from preempting are allowed to preempt the lock at the new service node according to the routing information.
However, in this migration scheme, only the client currently holding the distributed lock is allowed to preempt the lock at the new service node during migration. If the device where that client is located fails, no other client can acquire the lock and provide service, so the mutually exclusive resource corresponding to the distributed lock remains unavailable until the client recovers from the failure and successfully re-acquires the lock, which reduces system availability and resource utilization.
Disclosure of Invention
In view of this, the present invention provides a distributed lock management method and apparatus, which are used to improve system availability and resource utilization.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a distributed lock management method, including:
the method comprises the steps that a first service node sets a distributed lock to be migrated to a first state, and in the first state, the distributed lock cannot be allocated;
migrating the distributed lock from the first service node to a second service node;
acquiring the routing information of the distributed lock at the second service node;
and processing the lock request sent by the client based on the routing information.
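The four claimed steps can be sketched as a small in-memory model. All class, field, and method names below are illustrative assumptions for exposition, not structures defined by the patent:

```python
from enum import Enum

class LockState(Enum):
    ALLOCATABLE = "allocatable"        # clients may preempt the lock
    UNALLOCATABLE = "unallocatable"    # the "first state": lock cannot be allocated

class ServiceNode:
    """Toy in-memory service node holding distributed locks."""
    def __init__(self, name):
        self.name = name
        self.locks = {}  # lock_id -> LockState

class MigrationCoordinator:
    """Sketch of the four claimed steps."""
    def __init__(self):
        self.routes = {}  # lock_id -> name of the node now hosting the lock

    def migrate(self, lock_id, first, second):
        # Step 1: set the lock to the first (unallocatable) state on the source node
        first.locks[lock_id] = LockState.UNALLOCATABLE
        # Step 2: migrate the lock from the first to the second service node
        del first.locks[lock_id]
        second.locks[lock_id] = LockState.ALLOCATABLE
        # Step 3: obtain the lock's routing information at the second node
        self.routes[lock_id] = second.name
        return self.routes[lock_id]

    def handle_lock_request(self, lock_id):
        # Step 4: process client lock requests based on the routing information
        return self.routes.get(lock_id)
```

In this toy model the routing table lives in the coordinator; in the patent's embodiments it may be stored on the first service node or on a proxy service node.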
In a second aspect, an embodiment of the present invention provides a distributed lock management apparatus, including:
a setting unit, configured to set, by a first service node, a distributed lock to be migrated to a first state in which the distributed lock cannot be allocated;
a migration unit to migrate the distributed lock from the first service node to a second service node;
the acquisition unit is used for acquiring the routing information of the distributed lock at the second service node;
and the processing unit is used for processing the lock request sent by the client based on the routing information obtained by the obtaining unit.
In a third aspect, an embodiment of the present invention further provides a distributed lock management apparatus, including: a memory for storing a computer program and a processor; the processor is configured to execute the management method according to the embodiment of the first aspect when the computer program is called.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the management method described in the foregoing first aspect.
According to the distributed lock management method and device provided by the embodiments of the invention, the state of the distributed lock to be migrated on the first service node is set to a first state in which the lock cannot be allocated, thereby prohibiting clients from preempting it on the first service node. Once the migration condition is satisfied, the distributed lock is migrated to the second service node, the routing information of the distributed lock at the second service node is acquired, clients are informed that the lock has been migrated, and their lock requests are directed to the second service node. The migration involves neither taking clients or service nodes offline nor stopping any process, and it avoids the risk of a single point of failure during migration, so that wasted resources are avoided and resource utilization is improved.
Drawings
Fig. 1 is a schematic structural diagram of a distributed system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a distributed lock management method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another distributed lock management method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a migration process of a distributed lock according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a distributed lock management apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another distributed lock management apparatus according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating access between a client and a service node according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a heartbeat mechanism in a process of accessing a service node by a client according to an embodiment of the present invention.
Detailed Description
Aiming at the technical problem of the existing distributed lock migration scheme, in which only one lock-preempting client is retained during migration so that resource utilization drops when the device hosting that client fails, the embodiments of the invention provide a distributed lock management method and device. The migration involves neither taking clients or service nodes offline nor stopping any process, and it avoids the risk of a single point of failure during migration, so that wasted resources are avoided and resource utilization is improved.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
For ease of understanding, the distributed system is first described below.
Fig. 1 is a schematic structural diagram of a distributed system according to an embodiment of the present invention. As shown in fig. 1, the distributed system includes one or more clients and one or more service nodes (two clients and one service node are shown in the figure as an example). Each service node provides a plurality of services, and each service node maintains an in-memory database of the distributed system together with persistently stored transaction logs and snapshot data.
Specifically, one or more distributed locks may be provided in the service node, and multiple clients may contend for the same distributed lock, but at most only one client may contend successfully, and the client contending for the distributed lock may access the corresponding mutually exclusive resource (including a file, a database table, and the like), so as to provide service, and the client not contending for the lock is in a waiting state.
In a specific implementation, as shown in fig. 7, when the mutex resource needs to be operated, the client first obtains routing information (GetRoute) of the distributed lock, and then applies for a service in a corresponding service node according to the routing information to obtain a corresponding distributed lock (TryLock); after the acquisition is successful, the client can operate the mutually exclusive resource, and in the operation process, the client can determine the validity of the distributed lock through a Heartbeat mechanism (Heartbeat).
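A minimal sketch of this GetRoute, TryLock, Heartbeat flow follows. The classes and method names are assumptions for illustration; the patent does not specify an API or wire protocol:

```python
class SimpleNode:
    """Toy service node: tracks which client owns each distributed lock."""
    def __init__(self):
        self.owner = {}  # lock_id -> client_id

    def try_lock(self, lock_id, client_id):
        # grant the lock only if it is free (or already held by this client)
        if self.owner.get(lock_id) in (None, client_id):
            self.owner[lock_id] = client_id
            return True
        return False

    def heartbeat(self, lock_id, client_id):
        # a heartbeat succeeds only while this client still owns the lock
        return self.owner.get(lock_id) == client_id


class LockClient:
    def __init__(self, client_id, route_table, nodes):
        self.client_id = client_id
        self.route_table = route_table  # lock_id -> node name (GetRoute source)
        self.nodes = nodes              # node name -> SimpleNode

    def get_route(self, lock_id):
        # GetRoute: find which service node hosts the lock
        return self.route_table[lock_id]

    def try_lock(self, lock_id):
        # TryLock at the node the route points to
        return self.nodes[self.get_route(lock_id)].try_lock(lock_id, self.client_id)

    def heartbeat(self, lock_id):
        # periodic Heartbeat to confirm the lock is still valid
        return self.nodes[self.get_route(lock_id)].heartbeat(lock_id, self.client_id)
```

With two clients contending for the same lock, the first TryLock succeeds and the second fails, matching the at-most-one-holder property described above.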
The distributed lock's lifetime is renewed through a heartbeat mechanism between the client and the service node: after occupying the distributed lock, the client periodically sends heartbeat information to the service node, and the service node returns a heartbeat response upon receiving it. As shown in fig. 8, Lease in the figure refers to the time limit within which a client may access a mutually exclusive resource on a service node. When the service node does not receive heartbeat information from the client within its locally specified time, it may clear the distributed lock to release its ownership; when the client does not receive a heartbeat response from the service node within its own specified time, it considers the distributed lock it occupies to be lost and performs lock-loss processing to release ownership of the lock.
When the client establishes connection with the service node, any service node in the distributed system can be selected to establish connection, and a globally unique session can be registered in the service node; for a client that successfully preempts the distributed lock, the service node may record the association relationship between the session and the distributed lock. When the client sends heartbeat information, the heartbeat information is associated with the session, and the lifetime of the distributed lock, namely the lifetime of the session, is updated through the periodic heartbeat mechanism.
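The session-lease renewal described above can be sketched as follows. The field names and the explicit clock parameter are illustrative assumptions; the patent does not fix any data structures:

```python
class Session:
    """Per-client session whose lease also bounds the lifetime of any
    distributed lock associated with it."""
    def __init__(self, session_id, lease_seconds):
        self.session_id = session_id
        self.lease_seconds = lease_seconds
        self.expires_at = None

    def start(self, now):
        # session registered: the lease window opens
        self.expires_at = now + self.lease_seconds

    def on_heartbeat(self, now):
        # each heartbeat renews the session lease, and with it the lock's lifetime
        self.expires_at = now + self.lease_seconds

    def expired(self, now):
        # once the lease elapses without a heartbeat, the node may clear the lock
        return self.expires_at is not None and now > self.expires_at
```

Passing `now` explicitly (rather than reading a wall clock inside the class) keeps the sketch deterministic and testable.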
The method provided by the embodiment of the invention is explained below.
Fig. 2 is a schematic flowchart of a distributed lock management method according to an embodiment of the present invention, where an execution subject of the method is a service node. As shown in fig. 2, the method provided by this embodiment may include the following steps:
s110, the first service node sets the distributed lock to be migrated to be in a first state.
Wherein the distributed lock to be migrated is not assignable in the first state. Since the distributed lock needs to be migrated to the second service node, its working state must first be adjusted to one in which no client occupies it. "Not assignable" specifically means: when the distributed lock is not occupied by any client, no client can preempt it through the first service node; when the distributed lock is occupied by a client, no other client can preempt it through the first service node, and the first service node terminates the lock's keep-alive mechanism, i.e., it no longer processes heartbeat information sent by the holding client, so that the lock expires and returns to the unoccupied state. In practice, while in the first state, the first service node ignores lock heartbeat information sent by the client, so that the client perceives the lock as lost and later re-sends a lock request. For a distributed lock to be migrated (i.e., a target distributed lock), when the first service node receives heartbeat information from the client occupying it, the node stops responding; this triggers disconnection between the client and the first service node, causes the client to lose the lock, and thereby releases the target distributed lock.
Specifically, when receiving the heartbeat information, the first service node may determine whether the heartbeat information is sent by the client that occupies the target distributed lock according to the session associated with the heartbeat information and the session associated with the target distributed lock. And when the session associated with the heartbeat information is consistent with the session associated with the target distributed lock, the heartbeat information sent by the client occupying the target distributed lock is received.
In this step, the distributed lock to be migrated is hosted on the first service node; in other words, clients preempt the distributed lock to be migrated through the first service node.
And S120, migrating the distributed lock from the first service node to the second service node.
Before performing this step, it is confirmed that the distributed lock is in the first state, i.e., the distributed lock has been released and no client can preempt it.
In addition, the second service node may be a designated service node or a new service node obtained by detection.
S130, obtaining the routing information of the distributed lock on the second service node.
After the migration operation of the distributed lock is completed, the routing information of the distributed lock at the second service node needs to be determined, and the routing information is used for redirecting the distributed lock accessed by the client.
And S140, processing the lock request sent by the client based on the routing information.
After the routing information is obtained, the lock request of the client is directed to the second service node using the routing information, according to the access rules of the distributed system. For example, when the client's access object is the first service node, the distributed system stores the routing information in the first service node, so that the first service node feeds the routing information back and instructs the client to send its lock request to the second service node; when the client's access object is a proxy service node, the distributed system stores the routing information in the proxy service node, which either feeds the routing information back and instructs the client to send its lock request to the second service node, or directly forwards the lock request to the second service node on the client's behalf.
Further, fig. 3 is a schematic flow chart of another distributed lock management method according to an embodiment of the present invention, and this embodiment is a specific implementation manner in the embodiment shown in fig. 2. As shown in fig. 3, the method provided by this embodiment may include the following steps:
and S210, adding the distributed lock to be migrated into a blacklist.
The blacklist is a specific implementation manner of the first state, and after the distributed lock to be migrated is added to the blacklist, the connection between the client and the first service node is triggered to be disconnected, so that the client senses that the lock is lost, and the lock request is sent again.
Specifically, when the first service node receives heartbeat information sent by a client, it may determine whether a distributed lock related to the heartbeat information exists in the blacklist; if so, it has received heartbeat information from a client occupying a target distributed lock. The distributed lock related to the heartbeat information can be determined through the association between the heartbeat information and a session and the association between the distributed lock and that session: it is the distributed lock associated with the session associated with the heartbeat information. The blacklist may include one or more distributed locks, and the target distributed lock described in this embodiment may be any distributed lock in the blacklist. A distributed lock in the blacklist is in the first state, i.e., it cannot be allocated or occupied from the first service node. At system startup, the distributed system may select one service node among the plurality of service nodes as the master service node, with the remaining nodes as slave service nodes; the blacklist may be set through the master service node and then synchronized to each slave service node.
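The blacklist check over the session association might look like the following. The mapping names are assumptions for illustration:

```python
def should_ignore_heartbeat(heartbeat_session, session_to_lock, blacklist):
    """Return True when the heartbeat's session is associated with a
    blacklisted (to-be-migrated) distributed lock, so the first service
    node must stop responding to it; otherwise a normal heartbeat
    response can be returned."""
    # heartbeat -> session -> distributed lock, via the stored associations
    lock_id = session_to_lock.get(heartbeat_session)
    return lock_id is not None and lock_id in blacklist
```

A heartbeat whose session maps to no lock, or to a lock outside the blacklist, is answered normally; only heartbeats for blacklisted locks are silently dropped so the holding client perceives lock loss.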
As shown in fig. 4 (a), when a service capability of a current service node (i.e., a default service node) is a bottleneck, or when it is required to ensure quality of service of a specific distributed lock for physical level isolation, a new service node (i.e., a target service node) may be added to provide services; at this time, the distributed lock to be migrated (i.e., the target distributed lock) may be migrated into the target service node. Where the gray boxes represent clients that possess the target distributed lock and the white boxes represent clients that do not preempt the lock.
As shown in fig. 4 (b), when the distributed lock to be migrated in the default service node needs to be migrated into the target service node, the distributed lock to be migrated may be added to the blacklist, so that the distributed lock is in the first state, that is, the distributed lock cannot be allocated and occupied in the default service node.
When a service in the default service node receives heartbeat information sent by a client, it determines the distributed lock related to the heartbeat information according to the pre-stored association between the heartbeat information and a session and between the distributed lock and that session, and then checks whether that distributed lock exists in the blacklist. If the distributed lock related to the heartbeat information is in the blacklist, the lock corresponding to the heartbeat information needs to be migrated; if it is not in the blacklist, the lock does not need to be migrated, and a corresponding heartbeat response can be returned.
When the distributed lock related to the heartbeat information is in the blacklist, the distributed lock occupied by the client sending the heartbeat information is a lock to be migrated (i.e., a target distributed lock). At this time, the node stops responding to the heartbeat information, which triggers disconnection between the client and the first service node and causes the client to lose the lock (for example, the gray box shown in (c) of fig. 4 changes to a white box), so that the target distributed lock is released. Meanwhile, because the target distributed lock is in the blacklist, it cannot be preempted by other clients.
As can be seen from fig. 4, when the distributed lock to be migrated is set to the first state while occupied by a client, the lock is released through the heartbeat mechanism, i.e., the node stops responding to the client's lock heartbeat information. The distributed lock may, however, also be unoccupied when it is set to the first state; in that case the node likewise stops responding to lock requests sent by clients for that lock, so that a distributed lock not occupied by any client can also be migrated from the first service node to the second service node.
S220, migrating the distributed lock from the first service node to the second service node.
In this step, after the distributed lock is set to the first state, the distributed lock is migrated from the first service node to the second service node in a state without client occupation.
Further, before executing the migration, the occupation state of the distributed lock must be determined. Specifically, this can be implemented by judging whether the first service node receives lock heartbeat information from a client occupying the distributed lock within a preset time period. If such heartbeat information is received, a client still occupies the distributed lock; the node then stops responding to the heartbeat information and resets the timer for the preset period. Only when no lock heartbeat information is received within the period has the distributed lock expired, and the migration operation is executed.
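This quiet-period check can be sketched as a single predicate; the parameter names are illustrative assumptions:

```python
def ready_to_migrate(last_heartbeat_at, now, quiet_period):
    """The lock may only be migrated once no holder heartbeat has arrived
    within the preset quiet period. Every observed (and ignored) heartbeat
    effectively resets the window by advancing last_heartbeat_at."""
    return last_heartbeat_at is None or (now - last_heartbeat_at) >= quiet_period
```

The caller updates `last_heartbeat_at` each time a heartbeat for the blacklisted lock is received, and polls this predicate until it returns True before performing the migration.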
And S230, acquiring the routing information of the distributed lock at the second service node.
Specifically, when the routing information is obtained, it is necessary to determine whether the distributed lock has been migrated to the second service node, and if the migration is completed, the routing information of the distributed lock at the second service node is obtained. That is, this step requires real-time monitoring of the migration process to determine the completion of the migration operation, and thus determine the routing information.
In the distributed system, the route information of the distributed lock at the second service node can be updated by the master service node, and then synchronized to each slave service node, so as to realize the updating of the route information (such as updating the route information of the target distributed lock shown in (d) of fig. 4).
And S240, processing the lock request sent by the client based on the routing information.
Specifically, after heartbeat information sent by the client possessing the target distributed lock goes unanswered, that client perceives the lock loss and may preempt the lock again; other clients corresponding to the target distributed lock may also attempt preemption. During preemption, a client first acquires the routing information of the target distributed lock; at this point the routing information corresponding to the lock at the new service node (i.e., the target service node) is provided to the client, which is thereby redirected to the target service node to preempt the lock.
In this embodiment, the routing information of the distributed lock may be stored in the first service node, or may also be stored in other devices (proxy service nodes), where the other devices specifically may be routing storage devices dedicated for storing and managing the routing information of the distributed lock, or may also be proxy devices arranged between the client and the service node (data transmission between the client and the service node is routed to the opposite end through the proxy devices).
When the routing information of the distributed lock is stored in the first service node, the first service node receives a lock request sent by the client for requesting to acquire the target distributed lock, at this time, the first service node may return the routing information corresponding to the target distributed lock in the second service node to the client, and the client may preempt the target distributed lock in the second service node according to the routing information.
When the routing information of the distributed lock is stored in the other device, the first service node may instruct the other device to return, when receiving a lock request for requesting to acquire the target distributed lock, sent by the client, the routing information corresponding to the target distributed lock in the second service node to the client, so that the client preempts the target distributed lock in the second service node according to the routing information. In a specific implementation, the first service node may send a notification message to the other device, and after receiving the message, the other device returns, to the client, the routing information corresponding to the target distributed lock in the second service node if receiving the lock request corresponding to the target distributed lock. As shown in (d) of fig. 4, after the client loads the routing information, the client may preempt the target distributed lock in the second service node according to the routing information.
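The redirect behavior on the first (or proxy) service node can be sketched as follows; the dict-based reply format is an assumption for illustration:

```python
def handle_lock_request(routes, lock_id, this_node):
    """If the requested lock has been migrated away, answer with its new
    routing information so the client retries its lock request at the
    second service node; otherwise let the local preemption path handle it."""
    target = routes.get(lock_id, this_node)
    if target != this_node:
        # lock migrated: feed the routing information back to the client
        return {"redirect_to": target}
    return {"try_lock_here": True}
```

A proxy service node could alternatively forward the request to `target` itself and return the result, matching the second variant described above.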
Further, in this embodiment, since the migration operation is performed only after the distributed lock is determined to have been released, no client occupies the distributed lock before migration, and after migration clients re-preempt it according to the routing information; it is thus guaranteed that two or more clients never simultaneously believe they occupy the distributed lock.
In addition, in the existing migration scheme, if some client versions are unexpectedly rolled back, a client will preempt the lock at the original first service node (i.e., the default service node in fig. 4), so that the same distributed lock is occupied at two service nodes simultaneously, i.e., one lock is held in multiple places. In this embodiment, the target distributed lock is blacklisted at the original first service node and cannot be allocated or occupied there, so even if a client returns to the original first service node to preempt the lock, it cannot succeed, and multiple holding of one lock is avoided. That is, adding the distributed lock to be migrated to the blacklist prevents the multiple-holding phenomenon caused by version rollback.
Based on the same inventive concept, as an implementation of the foregoing method, an embodiment of the present invention provides a distributed lock management apparatus, where the apparatus embodiment corresponds to the foregoing method embodiment, and for convenience of reading, details in the foregoing method embodiment are not repeated in this apparatus embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents in the foregoing method embodiment.
Fig. 5 is a schematic structural diagram of a distributed lock management apparatus according to an embodiment of the present invention, where the apparatus provided in this embodiment may be an independent device, or may be integrated in the service node. As shown in fig. 5, the apparatus provided in this embodiment includes:
a setting unit 31, configured to set, by a first service node, a distributed lock to be migrated to a first state where the distributed lock may not be allocated;
a migration unit 32, configured to migrate the distributed lock set to the first state by the setting unit 31 from the first service node to a second service node;
an obtaining unit 33, configured to obtain the routing information of the distributed lock at the second service node;
and a processing unit 34, configured to process the lock request sent by the client based on the routing information obtained by the obtaining unit 33.
Further, as shown in fig. 5, the apparatus further includes:
a response unit 35, configured to, when the distributed lock to be migrated is in the first state, if the distributed lock is occupied by a client, stop responding, by the first service node, to lock heartbeat information sent by the client; and if the distributed lock is not occupied by the client, the first service node stops responding to the lock request sent by the client.
Further, as shown in fig. 5, the processing unit 34 includes:
a first storage module 341, configured to store the routing information in the first service node;
the first processing module 342 is configured to, when receiving a lock request sent by a client to a first service node, feed back the routing information to the client by the first service node, so that the client sends the lock request to a second service node according to the routing information.
Further, as shown in fig. 5, the processing unit 34 includes:
a second storage module 343, configured to store the routing information in a proxy service node;
the second processing module 344 is configured to send a lock request of a client to the proxy service node, and the proxy service node processes the lock request sent by the client according to the routing information.
Further, the second processing module 344 is further configured such that the proxy service node either feeds the routing information back to the client, so that the client sends the lock request to the second service node according to the routing information, or sends the lock request to the second service node according to the routing information and feeds the result of the lock request back to the client.
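The two proxy behaviours just described can be sketched as follows. All names here (`ProxyServiceNode`, `SecondNodeStub`, `redirect`, `forward`) are illustrative assumptions: variant 1 returns the route so the client contacts the second service node itself, while variant 2 forwards the lock request on the client's behalf and relays the result.

```python
class SecondNodeStub:
    """Illustrative stand-in for the second service node."""
    def __init__(self):
        self.owners = {}  # lock name -> occupying client id

    def try_acquire(self, lock_name, client_id):
        # Grant only if the lock is free or already held by this client.
        return self.owners.setdefault(lock_name, client_id) == client_id

class ProxyServiceNode:
    """Sketch of the two proxy behaviours described above (names are
    assumptions): redirect the client, or forward and relay the result."""

    def __init__(self, routes, nodes):
        self.routes = routes  # lock name -> node id
        self.nodes = nodes    # node id -> service node object

    def redirect(self, lock_name):
        # Variant 1: feed the routing information back to the client.
        return {"status": "moved", "node": self.routes[lock_name]}

    def forward(self, lock_name, client_id):
        # Variant 2: send the lock request on the client's behalf and
        # feed the result of the lock request back.
        target = self.nodes[self.routes[lock_name]]
        ok = target.try_acquire(lock_name, client_id)
        return {"status": "granted" if ok else "denied"}
```

The forwarding variant keeps routing knowledge entirely inside the proxy, at the cost of an extra hop per request; the redirect variant costs one extra round trip but lets later requests go to the second node directly.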
Further, as shown in fig. 5, the apparatus further includes:
a management unit 36, configured to, before the migration unit 32 migrates the distributed lock from the first service node to the second service node, if the first service node does not receive lock heartbeat information sent by a client occupying the distributed lock within a preset time period, clear the distributed lock occupied by the client, so as to release ownership of the distributed lock.
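The heartbeat keep-alive step performed by the management unit 36 can be sketched as below. The table and method names (`HeartbeatTable`, `heartbeat`, `sweep`) and the explicit time parameters are illustrative assumptions: if the occupying client sends no lock heartbeat within the preset time window, the node clears the lock so ownership is released and the migration can proceed.

```python
class HeartbeatTable:
    """Sketch of the heartbeat keep-alive mechanism (illustrative names):
    locks whose occupying client misses the heartbeat window are cleared."""

    def __init__(self, timeout):
        self.timeout = timeout   # preset time period
        self.last_beat = {}      # lock name -> (client id, last heartbeat time)

    def heartbeat(self, lock_name, client_id, now):
        # Client renews its occupation of the lock.
        self.last_beat[lock_name] = (client_id, now)

    def sweep(self, now):
        """Clear every lock whose owner missed the heartbeat window and
        return the names of the released locks."""
        expired = [name for name, (_, t) in self.last_beat.items()
                   if now - t > self.timeout]
        for name in expired:
            del self.last_beat[name]  # ownership of the lock is released
        return expired
```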
Further, as shown in fig. 5, the acquiring unit 33 includes:
the judging module 331 is configured to judge whether the distributed lock is migrated to the second service node;
an obtaining module 332, configured to obtain the routing information of the distributed lock at the second service node when determining to migrate to the second service node.
The apparatus provided in this embodiment may perform the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
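The four-unit flow of fig. 5 (setting unit 31, migration unit 32, obtaining unit 33, processing unit 34) can be tied together in one sketch. The classes and helper names below (`FirstNode`, `SecondNode`, `set_first_state`, `hand_over`, `adopt`, `migrate`) are assumptions for illustration only.

```python
class FirstNode:
    """Illustrative stand-in for the first service node."""
    def __init__(self):
        self.blacklist = set()
        self.locks = set()

    def set_first_state(self, name):
        # Setting unit (31): the lock can no longer be allocated here.
        self.blacklist.add(name)

    def hand_over(self, name):
        # Migration unit (32), first half: give the lock up locally.
        self.locks.discard(name)
        return name

class SecondNode:
    """Illustrative stand-in for the second service node."""
    def __init__(self, addr):
        self.addr = addr
        self.locks = set()

    def adopt(self, name):
        # Migration unit (32), second half: take the lock over.
        self.locks.add(name)

def migrate(first, second, name, route_table):
    """Sketch of the four-unit flow of fig. 5 under the assumptions above."""
    first.set_first_state(name)           # 31: first state, not allocatable
    second.adopt(first.hand_over(name))   # 32: migrate first -> second
    route_table[name] = second.addr       # 33: routing information
    return route_table                    # 34 then serves requests from it
```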
Based on the same inventive concept, an embodiment of the present invention further provides a distributed lock management device. Fig. 6 is a schematic structural diagram of a distributed lock management apparatus according to an embodiment of the present invention. As shown in fig. 6, the server according to this embodiment includes a memory 41 and a processor 42, where the memory 41 is configured to store a computer program, and the processor 42 is configured to perform the method of the above method embodiment when the computer program is invoked.
The server provided in this embodiment may execute the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
The embodiment of the present invention further provides a distributed system, which includes a client and the distributed lock management apparatus shown in fig. 6.
The server is used for managing the migration of distributed locks among the service nodes. When a distributed lock to be migrated exists on a first service node, the first service node sets the distributed lock to a first state in which it cannot be allocated. Once the distributed lock has been released under the heartbeat keep-alive mechanism it adopts, the server migrates the distributed lock from the first service node to a second service node and acquires the routing information of the distributed lock at the second service node; lock requests sent by clients are then processed based on the routing information.
The client in the distributed system is used for receiving, when it sends a lock request for the distributed lock to be migrated, the routing information of the second service node fed back by the server, and for sending the lock request to the second service node according to the routing information.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer readable media include both permanent and non-permanent, removable and non-removable storage media. Storage media may implement information storage by any method or technology, and the information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A distributed lock management method, comprising:
the method comprises the steps that a first service node sets a distributed lock to be migrated to a first state, wherein in the first state the distributed lock cannot be allocated;
migrating the distributed lock from the first service node to a second service node;
acquiring the routing information of the distributed lock at the second service node;
and processing the lock request sent by the client based on the routing information.
2. The method of claim 1, further comprising:
when the distributed lock to be migrated is in a first state, if the distributed lock is occupied by a client, the first service node stops responding to lock heartbeat information sent by the client; and if the distributed lock is not occupied by the client, the first service node stops responding to the lock request sent by the client.
3. The method according to claim 1 or 2, wherein processing the lock request sent by the client based on the routing information comprises:
storing the routing information to a first service node;
when a lock request sent by a client to a first service node is received, the first service node feeds back the routing information to the client, so that the client sends the lock request to a second service node according to the routing information.
4. The method according to claim 1 or 2, wherein processing the lock request sent by the client based on the routing information comprises:
storing the routing information in a proxy service node;
and sending the lock request of the client to the proxy service node, and processing the lock request sent by the client by the proxy service node according to the routing information.
5. The method of claim 4, wherein processing, by the proxy service node, the lock request sent by the client according to the routing information comprises:
the proxy service node feeds back the routing information to the client so that the client sends a lock request to a second service node according to the routing information; or
the proxy service node sends a lock request to a second service node according to the routing information, and feeds back the result of the lock request to the client.
6. The method of claim 1, further comprising:
before the distributed lock is migrated from the first service node to the second service node, if the first service node does not receive lock heartbeat information sent by a client occupying the distributed lock within a preset time period, the distributed lock occupied by the client is cleared, so that the distributed lock is released.
7. A distributed lock management apparatus, comprising:
a setting unit, configured to set, by a first service node, a distributed lock to be migrated to a first state in which the distributed lock cannot be allocated;
a migration unit to migrate the distributed lock from the first service node to a second service node;
the acquisition unit is used for acquiring the routing information of the distributed lock at the second service node;
and the processing unit is used for processing the lock request sent by the client based on the routing information obtained by the obtaining unit.
8. The apparatus of claim 7, further comprising:
a response unit, configured to control, when the distributed lock to be migrated is in a first state, if the distributed lock is occupied by a client, the first service node to stop responding to lock heartbeat information sent by the client; and if the distributed lock is not occupied by the client, controlling the first service node to stop responding to a lock request sent by the client.
9. A distributed lock management apparatus, comprising: a memory for storing a computer program and a processor; the processor is adapted to perform the method of any of claims 1-6 when the computer program is invoked.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010003853.6A 2020-01-03 2020-01-03 Distributed lock management method and device Active CN113076187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010003853.6A CN113076187B (en) 2020-01-03 2020-01-03 Distributed lock management method and device

Publications (2)

Publication Number Publication Date
CN113076187A true CN113076187A (en) 2021-07-06
CN113076187B CN113076187B (en) 2024-01-09

Family

ID=76608717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010003853.6A Active CN113076187B (en) 2020-01-03 2020-01-03 Distributed lock management method and device

Country Status (1)

Country Link
CN (1) CN113076187B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140365549A1 (en) * 2013-06-10 2014-12-11 Amazon Technologies, Inc. Distributed lock management in a cloud computing environment
WO2016141702A1 (en) * 2015-03-10 2016-09-15 中兴通讯股份有限公司 Distributed system metadata migration method and device
CN106534227A (en) * 2015-09-09 2017-03-22 阿里巴巴集团控股有限公司 Method and device of expanding distributed consistency service
CN106572054A (en) * 2015-10-09 2017-04-19 阿里巴巴集团控股有限公司 Distributed lock service realization method and device for distributed system
WO2017180143A1 (en) * 2016-04-15 2017-10-19 Hitachi Data Systems Corporation Distributed lock management enabling scalability
CN107466456A (en) * 2015-12-30 2017-12-12 华为技术有限公司 Lock request processing method and server
CN109766324A (en) * 2018-12-14 2019-05-17 东软集团股份有限公司 Control method, device, readable storage medium storing program for executing and the electronic equipment of distributed lock

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257591A (en) * 2021-12-16 2022-03-29 富盛科技股份有限公司 Networking method and system for weak centralized distributed system
CN117608766A (en) * 2024-01-23 2024-02-27 杭州阿里云飞天信息技术有限公司 Distributed lock processing method, device, storage medium and system
CN117608766B (en) * 2024-01-23 2024-04-30 杭州阿里云飞天信息技术有限公司 Distributed lock processing method, device, storage medium and system

Also Published As

Publication number Publication date
CN113076187B (en) 2024-01-09

Similar Documents

Publication Publication Date Title
US11288253B2 (en) Allocation method and device for a distributed lock
WO2019137320A1 (en) Resource scheduling method, apparatus, device and system
US9749445B2 (en) System and method for updating service information for across-domain messaging in a transactional middleware machine environment
US7870425B2 (en) De-centralized nodal failover handling
WO2016184175A1 (en) Database processing method and apparatus
US11675622B2 (en) Leader election with lifetime term
CN107783842B (en) Distributed lock implementation method, device and storage device
US10802896B2 (en) Rest gateway for messaging
CN114900449B (en) Resource information management method, system and device
CN113076187B (en) Distributed lock management method and device
CN107920101B (en) File access method, device and system and electronic equipment
US11397632B2 (en) Safely recovering workloads within a finite timeframe from unhealthy cluster nodes
EP3672203A1 (en) Distribution method for distributed data computing, device, server and storage medium
US10545667B1 (en) Dynamic data partitioning for stateless request routing
CN112463757B (en) A resource access method and related device of a distributed system
CN105205160A (en) Data write-in method and device
CN113064732B (en) Distributed system and management method thereof
CN118363713A (en) Task allocation method and device of distributed system and electronic equipment
CN113873052B (en) Domain name resolution method, device and equipment of Kubernetes cluster
CN112422598A (en) Resource scheduling method, intelligent front-end equipment, intelligent gateway and distributed system
CN117193974A (en) Configuration request processing method and device based on multiple processes/threads
CN116455920A (en) Data storage method, system, computer equipment and storage medium
CN114510459A (en) Distributed lock management method and system based on Redis cache system
CN114201117A (en) Cache data processing method and device, computer equipment and storage medium
CN111435320B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant