WO2017167106A1 - Storage system - Google Patents
Storage system
- Publication number
- WO2017167106A1 (PCT/CN2017/077755)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage
- node
- disk
- nodes
- network
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
Definitions
- the present invention relates to the technical field of data storage systems, and more particularly to a storage system.
- a typical highly available distributed storage system connects multiple physical server devices.
- when one storage server fails, its workload is taken over by other storage servers.
- to detect such failures, a heartbeat mechanism is commonly used: the two servers are connected by a heartbeat line, and if one server stops receiving the heartbeat signal of the other, the other server is judged to have failed. This method has a known problem: when a server itself has no fault but only the heartbeat line fails, a misjudgment occurs. It may even happen that both servers believe the other has failed and each tries to seize the other's workload.
- an arbitration disk (quorum disk) is used to solve this problem.
- the quorum disk is storage space shared by the master and slave servers; whether the corresponding server is faulty is determined by whether it writes a specific signal to the arbitration disk. In fact, however, this technique does not completely solve the problem: if only the channel leading to the arbitration disk fails while the server itself remains intact, the same misjudgment still occurs.
- embodiments of the present invention provide a storage system that improves the availability of the arbitration disk and the reliability of fault judgment in the storage system.
- the storage system includes a storage network, at least two storage nodes connected to the storage network, and at least one storage device;
- each storage device includes at least one storage medium, and all storage media included in the at least one storage device constitute a storage pool;
- the storage network is configured such that each storage node can access all storage media without resorting to other storage nodes;
- the storage pool is divided into at least two storage areas, and one storage area is selected from the at least two storage areas as a global arbitration disk.
- the storage nodes (virtual machines, containers, and the like) are themselves stored in the shared storage; they reside in the same shared storage pool as the quorum disk.
- they therefore use the same storage channel, so if a server cannot read and write the quorum disk, then regardless of whether the server itself has failed or only the related storage channel has failed, the computing node on that server cannot work normally. This makes the judgment of whether a server has failed especially accurate.
- FIG. 1 shows a block diagram of a storage system constructed in accordance with one embodiment of the present invention.
- FIG. 2 shows a block diagram of a storage system in accordance with an embodiment of the present invention.
- FIG. 1 shows a block diagram of a storage system in accordance with an embodiment of the present invention.
- the storage system includes a storage network, with storage nodes connected to it, wherein a storage node is a software module that provides a storage service, rather than, as the term is often used, a piece of hardware that contains storage media.
- servers and storage devices are also connected to the storage network.
- Each storage device includes at least one storage medium.
- the storage network is configured such that each storage node can access all storage media without resorting to other storage nodes.
- each storage node can access all storage media without using other storage nodes, so that all storage media in the system are in effect shared by all storage nodes, thereby realizing the effect of a global storage pool.
- in a conventional arrangement the storage node is located on the storage-medium side, or strictly speaking, the storage medium is a built-in disk of the physical machine where the storage node is located. In the embodiment of the present invention, by contrast, the physical machine where the storage node is located is independent of the storage device, and the storage device serves more as a channel connecting the storage media to the storage network.
- the storage node side further includes a computing node; the computing node and the storage node are disposed in the same physical server, and the physical server is connected to the storage device through the storage network.
- an aggregated storage system constructed according to the embodiment of the present invention, in which the computing node and the storage node are located in the same physical machine, can reduce the number of physical devices required, thereby reducing cost.
- the compute node can also access the storage resources it needs locally.
- the data exchange between the two can be as simple as shared memory, giving particularly good performance.
- the I/O data path between the computing node and the storage medium consists of: (1) the storage medium to the storage node; and (2) the storage node to the computing node aggregated in the same physical server (a CPU-bus path).
- in the prior-art storage system shown in Figure 1, by contrast, the I/O data path between the compute node and the storage medium includes: (1) storage medium to storage node; (2) storage node to storage-network access switch; (3) storage-network access switch to core switch; (4) core switch to compute-network access switch; and (5) compute-network access switch to compute node.
- the total data path of the storage system of the embodiment of the present invention is thus close to only item (1) of the conventional storage system. That is, by drastically compressing the I/O data path length, the storage system provided by the embodiment of the present invention can greatly improve I/O channel performance, and the actual running behavior is very close to that of a local hard-disk I/O channel.
- the storage node may be a virtual machine on a physical server, a container, or a module running directly on the server's physical operating system; the computing node may likewise be a virtual machine, a container, or a module running directly on the physical operating system of the same physical server.
- each storage node may correspond to one or more compute nodes.
- one physical server may be divided into multiple virtual machines, one of which is used as the storage node while the others are used as computing nodes; alternatively, a module on the physical OS may be used as the storage node in order to achieve better performance.
- the virtualization technology forming the virtual machine may be KVM, Xen, VMware, or Hyper-V virtualization technology
- the container technology forming the container may be Docker, Rocket, Odin, Chef, LXC, Vagrant, Ansible, Zone, Jail, or Hyper-V container technology.
- each storage node is responsible for managing only a fixed set of storage media at any given time, and no storage medium is written by multiple storage nodes simultaneously, which avoids data conflicts. Each storage node can thus access the storage media it manages without resorting to other storage nodes, and the integrity of the data stored in the storage system can be guaranteed.
- all the storage media in the system may be divided according to storage logic.
- the storage pool of the entire system may be divided into a logical storage hierarchy of storage areas, storage groups, and storage blocks, in which the storage block is the smallest unit of storage.
- the storage pool may be divided into at least two storage areas.
- each storage area may be divided into at least one storage group. In a preferred embodiment, each storage area is divided into at least two storage groups.
- the storage area and the storage group can be merged such that one level can be omitted in the storage hierarchy.
- each storage area may be composed of at least one storage block, wherein the storage block may be a complete storage medium or a part of a storage medium.
- each storage area may be composed of at least two storage blocks in which the data is stored redundantly, so that when any one of the storage blocks fails, its content can be recomputed from the remaining storage blocks in the group.
- the redundant storage mode can be a multi-copy mode, a redundant array of independent disks (RAID) mode, or an erasure-code mode.
- the redundant storage mode can be established by the ZFS file system.
- preferably, the storage blocks included in each storage area (or storage group) are not all located in the same storage medium, or even in the same storage device. In an embodiment of the invention, no two storage blocks of the same storage area (or storage group) are located in the same storage medium/storage device. In another embodiment of the present invention, the number of storage blocks of the same storage area (or storage group) located in the same storage medium/storage device is preferably less than or equal to the redundancy of the redundant storage.
- for example, when the redundancy of the redundant storage is 1, the number of storage blocks of the same storage group on the same storage device is at most 1; for RAID 6, with a redundancy of 2, the number of storage blocks of the same storage group on the same storage device is at most 2.
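- purely as an illustration of this placement rule (the patent provides no code), the following Python sketch checks whether a storage group keeps the number of blocks per storage device within the redundancy of its redundant-storage mode; the function and device names are hypothetical.

```python
# Hypothetical sketch: verify that a storage group never places more blocks on one
# storage device than the redundancy of its redundant-storage mode can tolerate.
from collections import Counter

def placement_ok(block_devices, redundancy):
    """block_devices: list of device ids, one entry per storage block in the group."""
    per_device = Counter(block_devices)
    return all(count <= redundancy for count in per_device.values())

# A RAID-6-like group (redundancy 2) tolerates two blocks on device "jbod1":
print(placement_ok(["jbod1", "jbod1", "jbod2", "jbod3"], redundancy=2))  # True
# The same layout is unsafe for a redundancy-1 mode:
print(placement_ok(["jbod1", "jbod1", "jbod2", "jbod3"], redundancy=1))  # False
```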
- in one embodiment, each storage node can only read and write the storage areas it manages. Since read operations on the same storage block by multiple storage nodes do not conflict, while simultaneous writes to one storage block by multiple storage nodes easily conflict, in another embodiment each storage node can only write the storage areas it manages but can read both its own storage areas and the storage areas managed by other storage nodes; that is, write operations are local, while read operations can be global.
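- the write-local/read-global rule can be summarized by a small access check; the following Python sketch is an illustrative assumption of how such a check might look, not part of the patent.

```python
# Hypothetical sketch of the "write locally, read globally" rule: a storage node may
# write only the storage areas it manages, but may read any storage area.
def access_allowed(node_id, area_owner, operation):
    if operation == "read":
        return True                    # reads never conflict, so they are global
    if operation == "write":
        return area_owner == node_id   # writes are restricted to the owning node
    raise ValueError(f"unknown operation: {operation}")

print(access_allowed("node-1", area_owner="node-1", operation="write"))  # True
print(access_allowed("node-2", area_owner="node-1", operation="write"))  # False
print(access_allowed("node-2", area_owner="node-1", operation="read"))   # True
```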
- the storage system may further include a storage control node coupled to the storage network for determining a storage area managed by each storage node.
- each storage node may include a storage allocation module for determining the storage areas managed by that storage node; this may be implemented through communication and coordination among the storage allocation modules of the respective storage nodes, using an algorithm that may, for example, be based on the principle of load balancing between the storage nodes.
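- as a hedged sketch of one possible outcome of such coordination (the patent does not fix the algorithm), the following Python snippet assigns storage areas so that every storage node manages roughly the same number of areas; all names are illustrative.

```python
# Hypothetical sketch of a load-balancing allocation: storage areas are handed out
# one by one, always to the storage node currently managing the fewest areas.
import heapq

def assign_areas(node_ids, area_ids):
    load = [(0, node) for node in node_ids]     # (number of areas, node id)
    heapq.heapify(load)
    assignment = {node: [] for node in node_ids}
    for area in area_ids:
        count, node = heapq.heappop(load)
        assignment[node].append(area)
        heapq.heappush(load, (count + 1, node))
    return assignment

print(assign_areas(["node-1", "node-2", "node-3"], [f"area-{i}" for i in range(7)]))
```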
- when a storage node fails, some or all of the other storage nodes may be configured to take over the storage areas previously managed by the failed storage node.
- a single other storage node may take over the storage areas managed by the failed storage node, or they may be taken over by at least two other storage nodes, each taking over a portion of those storage areas; for example, at least two other storage nodes may respectively take over different storage groups within a storage area.
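- a minimal sketch of such a split takeover, assuming a simple round-robin redistribution (the patent leaves the actual policy open):

```python
# Hypothetical sketch: when a storage node fails, its storage groups are redistributed
# round-robin among the surviving nodes, so each survivor takes over only a portion.
def redistribute(failed_groups, surviving_nodes):
    takeover = {node: [] for node in surviving_nodes}
    for i, group in enumerate(failed_groups):
        node = surviving_nodes[i % len(surviving_nodes)]
        takeover[node].append(group)
    return takeover

print(redistribute(["group-A", "group-B", "group-C"], ["node-2", "node-3"]))
# {'node-2': ['group-A', 'group-C'], 'node-3': ['group-B']}
```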
- the storage medium may include, but is not limited to, a hard disk, a flash memory, an SRAM, a DRAM, an NVMe device, or the like.
- the access interface of the storage medium may include, but is not limited to, a SAS interface, a SATA interface, a PCI/e interface, a DIMM interface, an NVMe interface, a SCSI interface, or an AHCI interface.
- the storage network may include at least one storage switching device, and the storage node accesses the storage medium through data exchange between the storage switching devices included therein. Specifically, the storage node and the storage medium are respectively connected to the storage switching device through the storage channel.
- in this way a storage system supporting multipoint control is provided, in which a single storage space can be accessed through multiple channels, for example by the compute nodes.
- the storage switching device may be a SAS switch or a PCI/e switch.
- the storage channel may be a SAS (Serial Attached SCSI) channel or a PCI/e channel.
- the solution has the advantages of high performance, large bandwidth, and a large number of disks in a single device.
- through an HBA host adapter or a SAS interface on the server motherboard, the storage provided by the SAS system can easily be accessed by multiple simultaneously connected servers.
- the SAS switch is connected to the storage device through a SAS line, and the storage device and the storage medium are also connected by a SAS interface.
- the storage device internally connects the SAS channel to each storage medium (a SAS switch chip may be provided inside the storage device). The bandwidth of a SAS network can reach 24 Gb or 48 Gb, which is tens of times that of Gigabit Ethernet and several times that of expensive 10 Gigabit Ethernet. At the link layer, SAS offers an order-of-magnitude improvement over an IP network; at the transport layer, the TCP three-way handshake on connection setup and four-way handshake on teardown incur high overhead, and the TCP delayed-acknowledgement mechanism and slow start can sometimes cause delays of 100 milliseconds.
- SAS networks offer significant advantages in terms of bandwidth and latency over Ethernet-based TCP/IP. Those skilled in the art will appreciate that the performance of the PCI/e channel can also be adapted to the needs of the system.
- the storage network may include at least two storage switching devices, and each storage node can be connected to any one of the storage devices, and thus to the storage media, through any one of the storage switching devices.
- if one storage switching device fails, the storage nodes read and write data on the storage devices through the other storage switching device(s).
- the storage devices in the storage system 30 are constructed as a plurality of JBODs 307-310, which are respectively connected to the two SAS switches 305 and 306 through SAS data lines, which constitute the switching core of the storage network included in the storage system.
- the front end is at least two servers 301 and 302, each of which is connected to the two SAS switches 305 and 306 via an HBA device (not shown) or a SAS interface on the motherboard.
- Each server has a storage node that manages some or all of the disks across all of the JBODs, using information obtained over the SAS links.
- according to the storage area, storage group, and storage block hierarchy described above, the JBOD disks are divided into different storage groups.
- Each storage node manages one or more sets of such storage groups.
- redundant storage is used inside each storage group, and the metadata of the redundant storage resides on the disks themselves, so that the redundant layout can be recognized directly from the disks by other storage nodes.
- the storage node can install a monitoring and management module that is responsible for monitoring the status of local storage and other servers.
- if a JBOD fails as a whole or a disk on a JBOD fails, data reliability is ensured by the redundant storage.
- if a server fails, the management module in the storage node on another pre-configured server will locally identify and take over the disks managed by the storage node of the failed server, according to the data on those disks.
- the storage service originally provided by the storage node of the failed server is thus extended onto the storage node of the new server. In this way, a new highly available global storage pool structure is implemented.
- the exemplary storage system 30 is constructed to provide a multi-point, controllable, globally accessible storage pool.
- the hardware uses multiple servers to provide external services and uses JBODs to house the disks.
- Multiple JBODs are connected to two SAS switches, and the two switches are respectively connected to the server's HBA cards, thereby ensuring that all disks on the JBOD can be accessed by all servers.
- the SAS redundant link also ensures high availability on the link.
- each server uses redundant storage technology and selects the disks that form a redundant group from different JBODs, so as to avoid data loss.
- when a server fails, the module that monitors the overall state schedules another server to access, through the SAS channel, the disks managed by the failed server's storage node and to quickly take over the disks that server was responsible for, achieving a highly available global storage pool.
- although a JBOD housing the storage disks is illustrated in FIG. 2 as an example, it should be understood that the embodiment of the present invention shown in FIG. 2 also supports storage devices other than JBODs.
- the above is an example in which one entire storage medium is used as one storage block; the same applies when a part of one storage medium is used as one storage block.
- each server may be monitored for failure by dividing the global storage pool into at least two storage areas, and selecting one of the at least two storage areas as a global arbitration disk.
- Each storage node is capable of reading and writing the global quorum disk, but at any given time it is only responsible for managing zero or more of the remaining storage areas (excluding the storage area where the global quorum disk is located).
- the global quorum disk is used by the upper-layer application of each server, that is, by the storage node: each storage node can directly read and write the global quorum disk. Because storage access is under multipoint control, each storage node can see the content updated by the other storage nodes.
- the storage space of the global arbitration disk is divided into a plurality of fixed partitions, each of which is assigned to one of the storage nodes.
- in this way, concurrent read/write conflicts on the quorum disk between multiple controlling nodes are avoided.
- the global arbitration disk may be configured such that, when using it, each storage node can only write to the fixed partition allocated to it, while it can read the fixed partitions assigned to the other storage nodes. This enables each storage node to update its own state while learning the state changes of the other storage nodes.
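- a minimal Python sketch of this fixed-partition discipline, assuming one 512-byte slot per storage node and using an ordinary file in place of the raw quorum-disk device (slot size, offsets, and node list are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch of the fixed-partition rule on the global arbitration disk:
# every storage node owns one fixed slot that only it writes, and it may read the
# slots of all other nodes.
NODES = ["node-0", "node-1", "node-2"]
SLOT_SIZE = 512

def write_own_state(disk_path, node_index, state: bytes):
    assert len(state) <= SLOT_SIZE
    with open(disk_path, "r+b") as disk:
        disk.seek(node_index * SLOT_SIZE)          # a node seeks only to its own slot...
        disk.write(state.ljust(SLOT_SIZE, b"\0"))  # ...and writes only within it

def read_peer_states(disk_path, node_index):
    states = {}
    with open(disk_path, "rb") as disk:
        for i, name in enumerate(NODES):
            if i == node_index:
                continue                           # peers' slots are read-only for us
            disk.seek(i * SLOT_SIZE)
            states[name] = disk.read(SLOT_SIZE).rstrip(b"\0")
    return states
```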
- an election lock may be provided on the global arbitration disk.
- when a storage node fails, the remaining storage nodes use the election lock mechanism to elect the node that will take over.
- especially when more than one storage node could act as the takeover node, the value of the election lock mechanism is even greater.
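- the patent only names an election lock on the arbitration disk and does not specify the algorithm; the following Python sketch shows one illustrative way the surviving nodes could agree on a takeover node through the shared disk, with every candidate writing a claim into its own region and all candidates then deterministically choosing the same winner. Region offsets and sizes are assumptions, not part of the patent.

```python
# Hypothetical claim-and-compare election over the shared quorum disk.
import json, time

NODES = ["node-0", "node-1", "node-2"]
CLAIM_SIZE = 256
CLAIM_REGION = 4096        # illustrative: a region of the quorum disk after the heartbeat slots

def write_claim(disk_path, node_index, failed_node):
    claim = json.dumps({"claimer": node_index, "target": failed_node}).encode()
    with open(disk_path, "r+b") as disk:
        disk.seek(CLAIM_REGION + node_index * CLAIM_SIZE)
        disk.write(claim.ljust(CLAIM_SIZE, b"\0"))

def elect_takeover_node(disk_path, failed_node, settle_seconds=2):
    time.sleep(settle_seconds)                 # give every surviving candidate time to claim
    claimers = []
    with open(disk_path, "rb") as disk:
        for i in range(len(NODES)):
            disk.seek(CLAIM_REGION + i * CLAIM_SIZE)
            raw = disk.read(CLAIM_SIZE).rstrip(b"\0")
            if raw:
                claim = json.loads(raw)
                if claim["target"] == failed_node:
                    claimers.append(claim["claimer"])
    return min(claimers) if claimers else None  # every candidate computes the same winner
```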
- the global arbitration disk as a storage area may also have the characteristics of the storage area as discussed above.
- the global arbitration disk includes one or more storage media or includes some or all of one or more storage media.
- the storage medium included in the global arbitration disk may be located in the same or different storage devices.
- the global arbitration disk may be composed of one complete storage medium, of two complete storage media, of parts of two storage media, or of a part of one storage medium together with one or more other complete storage media.
- the global arbitration disk may be configured by redundant storage of all or part of at least two storage media on at least two storage devices.
- in the embodiment in which the storage media are housed in JBODs, since each storage node server can access all the storage resources on the JBODs, some storage space can be taken from one or more disks of each JBOD and combined for use as the global arbitration disk.
- in this way the reliability of the arbitration disk can easily be improved: in the most severe case, the quorum disk still works as long as even one JBOD in the system survives.
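- as a hedged illustration of combining space from every JBOD into the arbitration disk (the slice size and naming are assumptions, and the mirroring of contents across the slices is out of scope here):

```python
# Hypothetical sketch: the quorum disk is assembled from a small slice taken from one
# disk in every JBOD, so it keeps working as long as at least one JBOD survives
# (assuming its contents are replicated across the slices).
def build_quorum_disk(jbods, slice_mb=64):
    """jbods: mapping of JBOD name -> list of disk names; returns the chosen slices."""
    slices = []
    for jbod, disks in jbods.items():
        if disks:                                  # take one slice per surviving JBOD
            slices.append({"jbod": jbod, "disk": disks[0], "size_mb": slice_mb})
    return slices

print(build_quorum_disk({"jbod-1": ["d0", "d1"], "jbod-2": ["d0"], "jbod-3": ["d2"]}))
```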
- the storage nodes (virtual machines, containers, and the like) on each physical server are also stored in the global storage pool; specifically, they are located in the same shared storage pool as the arbitration disk.
- the normal reads and writes of the global storage pool performed between the compute node and the storage node use the same storage channel as the storage node's reads and writes of the quorum disk.
- consequently, if a storage node cannot read and write the quorum disk, the compute nodes on that server are certainly not working properly, that is, they cannot access their normal storage resources. It is therefore very reliable to judge whether the corresponding compute node is working by means of such an arbitration disk structure.
- each storage node continuously writes data to the quorum disk.
- each storage node continuously monitors, by reading, whether the other storage nodes periodically write data to the quorum disk; if a storage node does not write data to the quorum disk on time, it can be determined that the compute node corresponding to that storage node is not working properly.
- concretely, the storage node writes heartbeat data to the arbitration disk periodically at a preset system interval, for example writing data into the arbitration disk every five seconds.
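- a minimal Python sketch of the heartbeat exchange, assuming the fixed-slot layout sketched earlier, the five-second interval from the example above, and an illustrative rule that a node is considered failed after three missed intervals:

```python
# Hypothetical sketch of the heartbeat exchange over the quorum disk: each storage node
# rewrites its own slot every HEARTBEAT_INTERVAL seconds, and a node is considered failed
# once its slot has not advanced for MISSED_LIMIT consecutive intervals.
import struct, time

SLOT_SIZE = 512
HEARTBEAT_INTERVAL = 5          # seconds, matching the five-second example in the text
MISSED_LIMIT = 3                # illustrative threshold, not specified by the patent

def heartbeat_once(disk_path, node_index):
    with open(disk_path, "r+b") as disk:
        disk.seek(node_index * SLOT_SIZE)
        disk.write(struct.pack("<d", time.time()).ljust(SLOT_SIZE, b"\0"))

def failed_nodes(disk_path, node_count, self_index):
    """Return the indices of nodes whose heartbeat is older than the allowed window."""
    now, failed = time.time(), []
    with open(disk_path, "rb") as disk:
        for i in range(node_count):
            if i == self_index:
                continue
            disk.seek(i * SLOT_SIZE)
            last = struct.unpack("<d", disk.read(8))[0]
            if now - last > MISSED_LIMIT * HEARTBEAT_INTERVAL:
                failed.append(i)
    return failed
```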
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to a storage system providing a highly available global quorum disk. The storage system comprises a storage network, at least two storage nodes, and at least one storage device, each storage device comprising at least one storage medium, with all the storage media comprised in the storage device(s) forming a storage pool. The storage network is configured to allow each storage node to access all the storage media without the help of the other storage nodes; and the storage pool is divided into at least two storage regions, one storage region among the at least two storage regions being selected as the global quorum disk.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/139,712 US10782898B2 (en) | 2016-02-03 | 2018-09-24 | Data storage system, load rebalancing method thereof and access control method thereof |
US16/378,076 US20190235777A1 (en) | 2011-10-11 | 2019-04-08 | Redundant storage system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610181228.4 | 2016-03-26 | ||
CN201610181228.4A CN105872031B (zh) | 2016-03-26 | 2016-03-26 | 存储系统 |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/077757 Continuation-In-Part WO2017162178A1 (fr) | 2011-10-11 | 2017-03-22 | Procédé et dispositif de contrôle d'accès pour système de stockage |
PCT/CN2017/077754 Continuation-In-Part WO2017162177A1 (fr) | 2011-10-11 | 2017-03-22 | Système, procédé et dispositif de mémoire redondante |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/077757 Continuation-In-Part WO2017162178A1 (fr) | 2011-10-11 | 2017-03-22 | Procédé et dispositif de contrôle d'accès pour système de stockage |
US16/054,536 Continuation-In-Part US20180341419A1 (en) | 2011-10-11 | 2018-08-03 | Storage System |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017167106A1 true WO2017167106A1 (fr) | 2017-10-05 |
Family
ID=56625057
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/077755 WO2017167106A1 (fr) | 2011-10-11 | 2017-03-22 | Système de stockage |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105872031B (fr) |
WO (1) | WO2017167106A1 (fr) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105472047B (zh) * | 2016-02-03 | 2019-05-14 | 天津书生云科技有限公司 | 存储系统 |
CN105872031B (zh) * | 2016-03-26 | 2019-06-14 | 天津书生云科技有限公司 | 存储系统 |
CN110244904B (zh) * | 2018-03-09 | 2020-08-28 | 杭州海康威视系统技术有限公司 | 一种数据存储系统、方法及装置 |
CN109840247B (zh) * | 2018-12-18 | 2020-12-18 | 深圳先进技术研究院 | 文件系统及数据布局方法 |
CN109951331B (zh) * | 2019-03-15 | 2021-08-20 | 北京百度网讯科技有限公司 | 用于发送信息的方法、装置和计算集群 |
CN111212141A (zh) * | 2020-01-02 | 2020-05-29 | 中国科学院计算技术研究所 | 一种共享存储系统 |
CN115359834B (zh) * | 2022-10-18 | 2023-03-24 | 苏州浪潮智能科技有限公司 | 一种盘仲裁区域检测方法、装置、设备及可读存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101582013A (zh) * | 2009-06-10 | 2009-11-18 | 成都市华为赛门铁克科技有限公司 | 一种在分布式存储中处理存储热点的方法、装置及系统 |
CN103503414A (zh) * | 2012-12-31 | 2014-01-08 | 华为技术有限公司 | 一种计算存储融合的集群系统 |
CN203982354U (zh) * | 2014-06-19 | 2014-12-03 | 天津书生投资有限公司 | 一种冗余存储系统 |
US9110591B2 (en) * | 2011-04-22 | 2015-08-18 | Hewlett-Packard Development Company, L.P. | Memory resource provisioning using SAS zoning |
CN105872031A (zh) * | 2016-03-26 | 2016-08-17 | 天津书生云科技有限公司 | 存储系统 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9690818B2 (en) * | 2009-12-01 | 2017-06-27 | Sybase, Inc. | On demand locking of retained resources in a distributed shared disk cluster environment |
US8443231B2 (en) * | 2010-04-12 | 2013-05-14 | Symantec Corporation | Updating a list of quorum disks |
CN104219318B (zh) * | 2014-09-15 | 2018-02-13 | 北京联创信安科技股份有限公司 | 一种分布式文件存储系统及方法 |
CN104657316B (zh) * | 2015-03-06 | 2018-01-19 | 北京百度网讯科技有限公司 | 服务器 |
-
2016
- 2016-03-26 CN CN201610181228.4A patent/CN105872031B/zh active Active
-
2017
- 2017-03-22 WO PCT/CN2017/077755 patent/WO2017167106A1/fr active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101582013A (zh) * | 2009-06-10 | 2009-11-18 | 成都市华为赛门铁克科技有限公司 | 一种在分布式存储中处理存储热点的方法、装置及系统 |
US9110591B2 (en) * | 2011-04-22 | 2015-08-18 | Hewlett-Packard Development Company, L.P. | Memory resource provisioning using SAS zoning |
CN103503414A (zh) * | 2012-12-31 | 2014-01-08 | 华为技术有限公司 | 一种计算存储融合的集群系统 |
CN203982354U (zh) * | 2014-06-19 | 2014-12-03 | 天津书生投资有限公司 | 一种冗余存储系统 |
CN105872031A (zh) * | 2016-03-26 | 2016-08-17 | 天津书生云科技有限公司 | 存储系统 |
Also Published As
Publication number | Publication date |
---|---|
CN105872031B (zh) | 2019-06-14 |
CN105872031A (zh) | 2016-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017133483A1 (fr) | Système de mémorisation | |
US10642704B2 (en) | Storage controller failover system | |
WO2017162179A1 (fr) | Procédé et appareil de rééquilibrage de charge destinés à être utilisés dans un système de stockage | |
WO2017167106A1 (fr) | Système de stockage | |
WO2017162177A1 (fr) | Système, procédé et dispositif de mémoire redondante | |
WO2017162176A1 (fr) | Système de stockage, procédé d'accès pour système de stockage et dispositif d'accès pour système de stockage | |
US8898385B2 (en) | Methods and structure for load balancing of background tasks between storage controllers in a clustered storage environment | |
US11137940B2 (en) | Storage system and control method thereof | |
US8010829B1 (en) | Distributed hot-spare storage in a storage cluster | |
US7996608B1 (en) | Providing redundancy in a storage system | |
US9542320B2 (en) | Multi-node cache coherency with input output virtualization | |
JP5523468B2 (ja) | 直接接続ストレージ・システムのためのアクティブ−アクティブ・フェイルオーバー | |
US20190235777A1 (en) | Redundant storage system | |
US10318393B2 (en) | Hyperconverged infrastructure supporting storage and compute capabilities | |
US20110145452A1 (en) | Methods and apparatus for distribution of raid storage management over a sas domain | |
US10901626B1 (en) | Storage device | |
US10782898B2 (en) | Data storage system, load rebalancing method thereof and access control method thereof | |
US7434107B2 (en) | Cluster network having multiple server nodes | |
US8788753B2 (en) | Systems configured for improved storage system communication for N-way interconnectivity | |
WO2017162178A1 (fr) | Procédé et dispositif de contrôle d'accès pour système de stockage | |
US10782989B2 (en) | Method and device for virtual machine to access storage device in cloud computing management platform | |
US11201788B2 (en) | Distributed computing system and resource allocation method | |
US11188425B1 (en) | Snapshot metadata deduplication | |
US11366618B2 (en) | All flash array server and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17773136 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17773136 Country of ref document: EP Kind code of ref document: A1 |