US20100121824A1 - Disaster recovery processing method and apparatus and storage unit for the same - Google Patents
- Publication number
- US20100121824A1 (US application Ser. No. 12/651,752)
- Authority
- US
- United States
- Prior art keywords
- primary
- host computer
- processing
- disk
- copy
- Prior art date
- Legal status
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2074—Asynchronous techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2076—Synchronous techniques
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
Definitions
- A first means is an information processing recovery method for recovering processing that uses a first program by executing the processing with a second program when a failure has occurred in the processing using the first program, where the first program stores processing data to be subject to the processing and log information for recovering the processing in first storage means, and the second program stores processing data and log information in second storage means.
- In the information processing recovery method, the following processing is executed.
- In response to an input write request of the log information, the processing data, and status information indicating a storage location of the log information, the log information, the processing data, and the status information are stored in the first storage means.
- When the log information has been stored in the second storage means, a response to the write request is sent to the second storage means.
- When a predetermined condition is satisfied, the processing data and the status information are stored in the second storage means.
- second means executes the following processing.
- log information is modified by synchronous writing
- database data and status information are modified by asynchronous writing.
- third means modifies log information by synchronous writing and modifies database data and status information by asynchronous writing, at the time of write request to the secondary system.
- a host computer includes a database buffer for temporarily holding contents of a database area in a storage subsystem, and a log buffer for temporarily holding contents of modification processing for the database buffer. Contents of the database buffer are modified with the advance of execution of database processing in the host computer.
- a write request of log information indicating contents of modification processing conducted on the database buffer, database data modified in the database buffer, or status information indicating a location of log information at the time of checkpoint is transmitted from the primary host computer in the primary system to the primary storage subsystem in the primary system.
- the primary storage subsystem receives the write request from the host computer. According to contents of the received write request, modification of log information, data in the database area, and status information in the primary storage subsystem is conducted.
- the primary storage subsystem is previously configured so that a log information disk may be subject to synchronous remote copy and a database data disk and a status information disk may be subject to asynchronous remote copy that guarantees a modification sequentiality over both disks.
- the primary storage subsystem writes a write request for a log information disk into a secondary storage device in the secondary system by using a synchronous method, and writes a write request for a database area data disk and status information disk into the secondary storage device in the secondary system by using an asynchronous method.
- the secondary storage subsystem receives a write request of the log information, database data, or status information from the primary storage subsystem. According to contents of the received write request, log information, database area data and status information in the secondary storage subsystem are modified (see U.S. Pat. No. 5,640,561).
- log information is read out from a location indicated by the status information, and data in the database area in the secondary storage subsystem is modified according to contents of the log information thus read out.
- the database area in the secondary storage subsystem is restored to the consistent state of the database area immediately before the failure occurrence.
- log information is modified by synchronous writing and database data and status information are modified by asynchronous writing, when writing to the secondary system is requested, as heretofore described. Therefore, the contents of modification in transactions completed in the primary system are prevented from being lost in the secondary system. It is possible to construct a disaster recovery system reduced in performance degradation in the primary system.
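As a rough illustration of the configuration just summarized, the sketch below pairs a log volume for synchronous remote copy and places the DB and status volumes in one asynchronous group whose modification order is preserved. The volume names, the CopyPair structure and the grouping API are assumptions made for this example, not the patent's interface.

```python
# Illustrative only: a hypothetical pairing API, not the patent's actual interface.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CopyPair:
    primary_volume: str
    secondary_volume: str
    mode: str                      # "synchronous" or "asynchronous"
    group: Optional[str] = None    # consistency group used to keep modification order

def configure_remote_copy() -> list:
    """Log volume copied synchronously; DB and status volumes copied asynchronously
    inside one group so that their modification sequentiality is preserved."""
    return [
        CopyPair("primary-log",    "secondary-log",    "synchronous"),
        CopyPair("primary-db",     "secondary-db",     "asynchronous", "db-status-group"),
        CopyPair("primary-status", "secondary-status", "asynchronous", "db-status-group"),
    ]

if __name__ == "__main__":
    for pair in configure_remote_copy():
        print(pair)
```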
- FIG. 1 is a diagram showing a system configuration of a disaster recovery system in the present embodiment
- FIG. 2 is a diagram showing an outline of synchronous remote copy processing in a log block 262 a in the present embodiment
- FIG. 3 is a diagram showing an outline of asynchronous remote copy processing of DB blocks and status information in the present embodiment
- FIG. 4 is a diagram showing configuration information of a DB-disk mapping table 15 ;
- FIG. 5 is a diagram showing an example of a primary/secondary remote copy management table
- FIG. 6 is a flow chart showing a processing procedure of checkpoint acquisition processing in the present embodiment
- FIG. 7 is a flow chart showing a processing procedure conducted upon receiving a write command in the present embodiment
- FIG. 8 is a flow chart showing a processing procedure conducted upon receiving a read command in the present embodiment
- FIG. 9 is a flow chart showing a processing procedure of data reception processing in a secondary disk subsystem 4 in the present embodiment.
- FIG. 10 is a flow chart showing a processing procedure of DBMS start processing in the present embodiment.
- FIG. 11 is a diagram showing an outline of synchronous remote copy processing conducted at the time of checkpoint in the present embodiment.
- FIG. 1 is a diagram showing a system configuration of the present embodiment.
- a primary host computer 1 (which may be implemented by using a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a DB access control section 111 (hardware, a program, or an object capable of conducting the processing), a checkpoint processing section 112 (hardware, a program, or an object capable of conducting the processing), a log management section 113 (hardware, a program, or an object capable of conducting the processing), and a DB delay write processing section 114 (hardware, a program, or an object capable of conducting the processing).
- the DB access control section 111 is a processing section for controlling access to a primary DB disk 24 (storage means) and a primary log disk 26 (storage means) via a DB buffer 12 (storage means) and a log buffer 14 (storage means).
- the checkpoint processing section 112 is a processing section for transmitting a write request of all DB blocks modified in the DB buffer 12 , and status information indicating a log disk of a latest log record at that time point and its location, from the primary host computer 1 to a primary disk subsystem 2 , when it has become necessary to force contents in the DB buffer 12 in the primary host computer 1 to a storage device in the primary disk subsystem 2 , which is a disk subsystem in a primary system.
- In some cases the status information indicates the locations of the oldest log records relating to uncompleted transactions, and in some cases modification of the status information on a disk is delayed. In either case, the status information may be used as information indicating the location of the log at which reference is to be started when the database management system restarts.
- the log management section 113 is a processing section for transmitting a write request of a log block 262 a, which is log information indicating contents of database processing that has been conducted on the DB buffer 12 , from the primary host computer 1 to a primary disk subsystem 2 .
- the DB delay write processing section 114 is a processing section for transmitting a write request of database data on the DB buffer 12 from the primary host computer 1 to the primary disk subsystem 2 .
- a program for making the primary host computer 1 function as the DB access control section 111 , the checkpoint processing section 112 , the log management section 113 and the DB delay write processing section 114 is recorded on a recording medium such as a CD-ROM, stored on a magnetic disk or the like, and thereafter loaded into a memory, and executed.
- the recording medium for recording the program thereon may also be another recording medium other than the CD-ROM.
- the program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program.
- the primary disk subsystem 2 (which may be implemented by using a storage unit, a disk system, a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a disk control processing section 21 (hardware, a program, or an object capable of conducting the processing), a command processing section 211 (hardware, a program, or an object capable of conducting the processing), a primary remote copy processing section 212 (hardware, a program, or an object capable of conducting the processing), and a disk access control section 23 (hardware, a program, or an object capable of conducting the processing).
- the disk control processing section 21 is a control processing section for controlling operation of the whole primary disk subsystem apparatus.
- the command processing section 211 is a processing section for receiving a write request of a DB block 242 a, the status information or a log block 262 a from the primary host computer 1 , and modifying contents of the primary DB disk 24 , a primary status disk 25 , the primary log disk 26 , or a cache memory 22 (storage means) for storing their contents, included in the primary disk subsystem, according to contents of the received write request.
- the primary remote copy processing section 212 is a processing section for referring to the primary remote copy management table and conducting synchronous or asynchronous remote copying according to configuration information in the primary remote copy management table. In the present embodiment, if the received write request is a write request of the log block 262 a, then the primary remote copy processing section 212 conducts synchronous write processing of the log block 262 a into a secondary disk subsystem 4, which is a disk subsystem of the secondary system (which may be implemented by using a computer, an information processing apparatus, or a program or an object capable of conducting the processing).
- the disk access control section 23 is a processing section for controlling access to respective magnetic disk devices placed under the primary disk subsystem 2 .
- a program for making the primary disk subsystem 2 function as the disk control processing section 21 , the command processing section 211 , the primary remote copy processing section 212 and the disk access control section 23 is recorded on a recording medium such as a floppy disk, and executed.
- the recording medium for recording the program thereon may also be another recording medium other than the floppy disk.
- the program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program.
- a secondary host computer 3 (which may be implemented by using a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a DB access control section 311 (hardware, a program, or an object capable of conducting the processing), a checkpoint processing section 312 (hardware, a program, or an object capable of conducting the processing), a log management section 313 (hardware, a program, or an object capable of conducting the processing), and a DB delay write processing section 314 (hardware, a program, or an object capable of conducting the processing).
- the DB access control section 311 is a processing section for conducting processing similar to that of the DB access control section 111 in the primary system, at the time of operation of the secondary system.
- the checkpoint processing section 312 is a processing section for conducting processing similar to that of the checkpoint processing section 112 in the primary system, at the time of operation of the secondary system.
- the log management section 313 is a processing section for conducting processing similar to that of the log management section 113 in the primary system, at the time of operation of the secondary system.
- the DB delay write processing section 314 is a processing section for conducting processing similar to that of the DB delay write processing section 114 in the primary system, at the time of operation of the secondary system.
- a program for making the secondary host computer 3 function as the DB access control section 311 , the checkpoint processing section 312 , the log management section 313 and the DB delay write processing section 314 is recorded on a recording medium such as a CD-ROM, stored on a magnetic disk or the like, and thereafter loaded into a memory, and executed.
- the recording medium for recording the program thereon may also be another recording medium other than the CD-ROM.
- the program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program.
- a secondary disk subsystem 4 (which may be implemented by using a storage unit, a disk system, a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a disk control processing section 41 (hardware, a program, or an object capable of conducting the processing), a command processing section 411 (hardware, a program, or an object capable of conducting the processing), a secondary remote copy processing section 412 (hardware, a program, or an object capable of conducting the processing), and a disk access control section 43 (hardware, a program, or an object capable of conducting the processing).
- the disk control processing section 41 is a control processing section for controlling operation of the whole secondary disk subsystem apparatus.
- the command processing section 411 reads out a log record from a location of a log block 462 a indicated by status information in a secondary status disk 45 and sends out the log record to the secondary host computer 3 , in accordance with an order issued by the secondary host computer 3 .
- By modifying data on a secondary DB disk 44 in the secondary disk subsystem 4 according to contents of the pertinent log record analyzed by the secondary host computer 3, in accordance with an order issued by the secondary host computer 3, the disk control processing section 41 conducts processing of restoring the state of the secondary DB disk 44 in the secondary disk subsystem 4 to the state of the primary DB disk 24 immediately before the switching to the secondary system. The disk control processing section 41 also conducts modification on the secondary DB disk 44, the secondary status disk 45 and a secondary log disk 46 in keeping with database processing after the switching.
- the secondary remote copy processing section 412 is a processing section for receiving a write request of the DB block 242 a, the status information or the log block 262 a from the primary disk subsystem 2 , and conducting modification on the secondary DB disk 44 , the secondary status disk 45 and the secondary log disk 46 in the secondary disk subsystem 4 , or on a cache memory 42 storing their contents.
- the disk access control section 43 is a processing section for controlling access to the respective magnetic disk devices placed under the secondary disk subsystem 4. In the case of the asynchronous remote copy, modification on the pertinent cache memory 42 or disk is conducted after confirmation of the sequentiality, as described in JP-A-11-85408 entitled "Storage control apparatus."
- a program for making the secondary disk subsystem 4 function as the disk control processing section 41 , the command processing section 411 , the secondary remote copy processing section 412 and the disk access control section 43 is recorded on a recording medium such as a floppy disk, and executed.
- the recording medium for recording the program thereon may also be another recording medium other than the floppy disk.
- the program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program.
- the primary disk subsystem 2 for the primary host computer 1 serving as the primary system and the secondary disk subsystem 4 for the secondary host computer 3 serving as the secondary system may be connected to each other via a fiber channel, a network such as Ethernet, Gigabit Ethernet or SONET, or a link.
- the connection means may be a virtual network, or any data communication means using radio, broadcast communication or satellite communication.
- In the primary host computer 1, the DB access control section 111 of the primary system operates.
- the primary host computer 1 includes the DB buffer 12 for temporarily holding contents of the primary DB disk 24 in the primary disk subsystem 2 , and the log buffer 14 for temporarily holding contents of modification processing conducted on the DB buffer 12 .
- Each of the DB buffer 12 and the log buffer 14 may also be a volatile memory, which typically loses data at the time of a power failure.
- the primary DB disk 24 on a magnetic disk device is accessed through the disk control processing section 21, the cache memory 22 and the disk access control section 23, which receive an instruction from the primary host computer 1 and operate. Disk access is always conducted via the cache memory 22.
- the cache memory 22 may also be a volatile memory, which typically loses data at the time of a power failure. In that case, the data is regarded as guaranteed at the time when it is stored in the cache memory 22.
- the DB access control section 111 in the primary host computer 1 of the present embodiment acquires the DB block 242 a from the primary disk subsystem 2 by using a read command, stores the DB block 242 a in the DB buffer 12 , conducts database processing on the DB block 242 a in the DB buffer 12 , and then stores log information indicating contents of the processing in the log block 262 a in the log buffer 14 .
- If it has become necessary to force the contents of the DB buffer 12 in the primary host computer 1 to a storage device in the primary disk subsystem 2 serving as the disk subsystem in the primary system, such as when the number of log records indicating that records in the DB buffer 12 have been modified reaches a predetermined value, then the checkpoint processing section 112 generates a write command for writing a DB block or status information, as a write request for all DB blocks modified in the DB buffer 12 and status information indicating the location of the log record that is the latest at that time point, and transmits the write command from the primary host computer 1 to the primary disk subsystem 2.
- If a transaction is committed, or a predetermined condition, such as elapse of a predetermined time since start of log information recording or disappearance of an empty place in the log buffer 14, is reached, then the log management section 113 generates a write command for writing the log block 262 a, as a write request for writing the log block 262 a stored in the log buffer 14 into the primary log disk 26, and transmits the write command from the primary host computer 1 to the primary disk subsystem 2.
- If a predetermined condition, such as elapse of a predetermined time since start of database processing or disappearance of an empty place in the DB buffer 12, is reached, then the DB delay write processing section 114 generates a write command for writing the DB block 242 a, as a write request for writing the DB block 242 a stored in the DB buffer 12 into the primary DB disk 24, and transmits the write command from the primary host computer 1 to the primary disk subsystem 2.
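The flush triggers described in the preceding paragraphs can be summarized as small predicates. The concrete thresholds (1 second, 60 seconds, 1000 log records) and the function names below are illustrative assumptions, not values taken from the patent; only the kinds of conditions (commit, elapsed time, buffer exhaustion, a predetermined number of modification log records) come from the text.

```python
def should_flush_log(committing: bool, seconds_since_last_log_write: float,
                     log_buffer_free_slots: int) -> bool:
    """The log block 262a is written out at commit time, after a predetermined time,
    or when the log buffer 14 has no empty place left."""
    return committing or seconds_since_last_log_write >= 1.0 or log_buffer_free_slots == 0

def should_flush_db_blocks(seconds_since_processing_start: float,
                           db_buffer_free_slots: int) -> bool:
    """DB blocks 242a are written lazily: after a predetermined time or when the
    DB buffer 12 has no empty place left."""
    return seconds_since_processing_start >= 60.0 or db_buffer_free_slots == 0

def should_checkpoint(modification_log_records: int, threshold: int = 1000) -> bool:
    """A checkpoint forces all modified DB blocks and the status information once the
    number of modification log records reaches a predetermined number."""
    return modification_log_records >= threshold

if __name__ == "__main__":
    # A commit always forces the log write, regardless of the other conditions.
    print(should_flush_log(committing=True, seconds_since_last_log_write=0.1,
                           log_buffer_free_slots=5))
```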
- the primary disk subsystem 2 of the present embodiment conducts synchronous remote copy processing to the secondary disk subsystem 4 in synchronism with writing performed in the primary disk subsystem 2 .
- the primary disk subsystem 2 of the present embodiment conducts asynchronous remote copy processing, which is not synchronized to writing in the primary disk subsystem 2 , to the secondary disk subsystem 4 .
- FIG. 2 is a diagram showing an outline of synchronous remote copy processing of the log block 262 a in the present embodiment. If a primary log write request for requesting to write the log block 262 a is transmitted from the primary host computer 1 as shown in FIG. 2 , then the primary disk subsystem 2 writes the log block 262 a transmitted together with the write request into the cache 22 , transmits the log block 262 a to the secondary disk subsystem 4 , requests remote copy of the log block 262 a in the secondary disk subsystem 4 , and waits for completion of the remote copy.
- If a command for requesting to write the log block 262 a is transmitted from the primary disk subsystem 2, then the secondary disk subsystem 4 writes the log block 262 a transmitted together with the write request into the cache 42, thereafter generates a remote copy completion notice indicating that the writing has been completed, and transmits the remote copy completion notice to the primary disk subsystem 2.
- Upon receiving the remote copy completion notice from the secondary disk subsystem 4, the primary disk subsystem 2 generates a primary log write completion notice indicating that writing of the log block 262 a requested by the primary host computer 1 has been completed, and transmits the primary log write completion notice to the primary host computer 1.
- FIG. 3 is a diagram showing an outline of asynchronous remote copy processing of a DB block and status information in the present embodiment. If a primary DB write request for requesting to write the DB block and the status information is transmitted from the primary host computer 1 as shown in FIG. 3 , then the primary disk subsystem 2 writes the DB block and the status information transmitted together with the write request into the cache 22 , thereafter temporarily stores the DB block and the status information in a queue in a memory or a magnetic disk in the primary disk subsystem 2 , generates a primary DB write completion notice indicating that writing the DB block 242 a requested by the primary host computer 1 has been completed, and transmits the primary DB write completion notice to the primary host computer 1 .
- the primary disk subsystem 2 transmits the stored DB block or status information to the secondary disk subsystem 4 , requests remote copy of the DB block and status information in the secondary disk subsystem 4 , and waits for completion of the remote copy.
- the secondary disk subsystem 4 receives the DB block or status information transmitted together with the remote copy request, thereafter generates a remote copy completion notice indicating that the request has been completed, and transmits the remote copy completion notice to the primary disk subsystem 2 .
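The two copy flows of FIG. 2 and FIG. 3 can be contrasted with a minimal in-memory simulation: the synchronous path answers the host only after the secondary has acknowledged the log block, while the asynchronous path answers immediately and drains a queue later. The class and method names are invented for illustration, and "completion" is simplified to mean the block has reached the other subsystem's cache.

```python
from collections import deque

class SecondarySubsystem:
    def __init__(self):
        self.cache = {}

    def remote_copy(self, key, data):
        self.cache[key] = data          # write into cache 42
        return "remote copy completed"  # completion notice back to the primary

class PrimarySubsystem:
    def __init__(self, secondary):
        self.cache = {}
        self.secondary = secondary
        self.async_queue = deque()      # DB blocks / status information awaiting transfer

    def write_log_block(self, key, data):
        """FIG. 2: synchronous -- wait for the secondary before answering the host."""
        self.cache[key] = data
        self.secondary.remote_copy(key, data)      # transfer and wait for completion
        return "primary log write completed"        # only now reported to the host

    def write_db_or_status(self, key, data):
        """FIG. 3: asynchronous -- answer the host first, transfer later."""
        self.cache[key] = data
        self.async_queue.append((key, data))        # temporarily stored in a queue
        return "primary DB write completed"          # reported without waiting

    def drain_async_queue(self):
        while self.async_queue:
            self.secondary.remote_copy(*self.async_queue.popleft())

if __name__ == "__main__":
    secondary = SecondarySubsystem()
    primary = PrimarySubsystem(secondary)
    print(primary.write_log_block("log-1", b"log record"))
    print(primary.write_db_or_status("db-7", b"db block"))
    primary.drain_async_queue()                      # background transfer to the secondary
    print(sorted(secondary.cache))                   # ['db-7', 'log-1']
```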
- FIG. 4 is a diagram showing configuration information of a DB-disk mapping table 15 in the present embodiment.
- the DB-disk mapping table 15 stores a database area ID, a file ID, and a kind.
- the database area ID is information for identifying a database area in the primary DB disk 24 .
- the file ID indicates a sequential number of a file in the case where the database area identified by the database area ID includes a plurality of files.
- the kind indicates which of database data, log information and status information is data stored in the database area.
- In addition, IDs of the primary disk subsystem 2 and the secondary disk subsystem 4 are stored in the DB-disk mapping table 15.
- a DB-disk mapping table 35 in the secondary disk subsystem 4 also has a configuration similar to that of the DB-disk mapping table 15 in the primary disk subsystem 2 .
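A possible in-memory shape for the DB-disk mapping table 15 is sketched below. Only the column names (database area ID, file ID, kind) and the area ID "LOG1" come from the text; the other IDs and the lookup helper are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class DbDiskMapping:
    database_area_id: str   # identifies a database area on the primary DB disk 24
    file_id: int            # sequential file number within that database area
    kind: str               # "database data", "log information", or "status information"

db_disk_mapping_table_15 = [
    DbDiskMapping("DB1",     1, "database data"),
    DbDiskMapping("DB1",     2, "database data"),
    DbDiskMapping("LOG1",    1, "log information"),
    DbDiskMapping("STATUS1", 1, "status information"),
]

def kind_of(area_id: str, file_id: int) -> str:
    for row in db_disk_mapping_table_15:
        if (row.database_area_id, row.file_id) == (area_id, file_id):
            return row.kind
    raise KeyError((area_id, file_id))

if __name__ == "__main__":
    print(kind_of("LOG1", 1))   # "log information"
```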
- FIG. 5 is a diagram showing an example of a primary/secondary remote copy management table in the present embodiment.
- a copy mode indicating whether the write processing is conducted synchronously or asynchronously is stored in a primary remote copy management table 213 and a secondary remote copy management table 413 .
- For each copy mode, a disk control device ID of the disk control device in which write processing is conducted with that copy mode and a physical device ID of the magnetic disk device are stored, for both the primary disk subsystem 2 and the secondary disk subsystem 4.
- the copy mode for the magnetic disk device having the primary disk control device ID “CTL#A1” and the primary physical device ID “VOL12-A” is “synchronous.” Therefore, the log block in the database area ID “LOG1” is written into the secondary disk subsystem 4 by the synchronous remote copy processing.
- the system serving as the secondary system also has a similar configuration.
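The copy-mode lookup against the remote copy management table might look like the following sketch. The entry with CTL#A1 / VOL12-A / "synchronous" reflects the example quoted above; the second row and all secondary-side IDs are placeholders added for illustration.

```python
from dataclasses import dataclass

@dataclass
class RemoteCopyEntry:
    copy_mode: str             # "synchronous" or "asynchronous"
    primary_ctl_id: str        # disk control device ID in the primary disk subsystem 2
    primary_device_id: str     # physical device ID of the magnetic disk device
    secondary_ctl_id: str
    secondary_device_id: str

primary_remote_copy_table_213 = [
    RemoteCopyEntry("synchronous",  "CTL#A1", "VOL12-A", "CTL#B1", "VOL12-B"),
    RemoteCopyEntry("asynchronous", "CTL#A2", "VOL34-A", "CTL#B2", "VOL34-B"),
]

def copy_mode_for(ctl_id: str, device_id: str) -> str:
    """Decide whether a write to the given device must be copied synchronously
    (cf. the copy-mode check in the FIG. 7 write processing)."""
    for entry in primary_remote_copy_table_213:
        if (entry.primary_ctl_id, entry.primary_device_id) == (ctl_id, device_id):
            return entry.copy_mode
    raise KeyError((ctl_id, device_id))

if __name__ == "__main__":
    print(copy_mode_for("CTL#A1", "VOL12-A"))   # "synchronous": the LOG1 log disk
```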
- the primary disk subsystem 2 and the secondary disk subsystem 4 are connected to each other via the network.
- In the standby state, the secondary host computer 3 is not in operation.
- the secondary disk subsystem 4 receives the log block, DB block and status information from the primary disk subsystem 2 via the network, and modifies disks respectively corresponding to them.
- the checkpoint processing section 112 in the primary host computer 1 of the present embodiment stores all DB blocks modified on the DB buffer 12 in the primary DB disk 24 , and stores status information indicating the location of the log record at that time in the primary status disk 25 .
- this checkpoint acquisition processing will be described.
- FIG. 6 is a flow chart showing a processing procedure of the checkpoint acquisition processing in the present embodiment.
- the checkpoint processing section 112 in the primary host computer 1 conducts processing of transmitting a write request for all DB blocks modified in the DB buffer 12 and the status information indicating the location of the log record that is the latest at that time point, from the primary host computer 1 to the primary disk subsystem 2 as shown in FIG. 6 .
- the checkpoint processing section 112 generates a checkpoint acquisition start log, which indicates that the checkpoint acquisition has been started, and stores the checkpoint acquisition start log in the log block 262 a.
- the checkpoint processing section 112 generates a write command for writing all DB blocks modified on the DB buffer 12 into the primary disk subsystem 2 , transmits the write command to the primary disk subsystem 2 to request the primary disk subsystem 2 to write the DB blocks.
- the primary disk subsystem 2 receives the write command, writes the DB blocks into the cache memory 22 , and forces contents of modification conducted in the DB buffer 12 to the cache memory 22 .
- Step 703 will be described at the end of the description of the present embodiment.
- a checkpoint acquisition end log which indicates that the checkpoint acquisition has been finished, is generated and stored in the log block 262 a.
- a write command for writing an LSN (Log Sequence Number) of the checkpoint acquisition end log into the primary disk subsystem 2 as status information is generated, and the write command is transmitted to the primary disk subsystem 2 to request the primary disk subsystem 2 to write the status information.
- the status information is written into the primary status disk 25 .
- the state of the database reflecting what had been completed until immediately before the termination can be recovered by reading out log records from the location indicated by the status information in the primary status disk 25 and modifying data in the primary DB disk 24 according to the contents of those log records.
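The FIG. 6 checkpoint sequence can be condensed into a short function: write a checkpoint-start log record, force the modified DB blocks, write a checkpoint-end log record, and record its LSN as status information. The write_db_blocks/write_status callables stand in for the write commands sent to the primary disk subsystem 2 and are assumptions of this sketch.

```python
def acquire_checkpoint(db_buffer: dict, log_blocks: list, write_db_blocks, write_status):
    log_blocks.append("checkpoint acquisition start")   # start log record stored in log block 262a
    write_db_blocks(dict(db_buffer))                     # force all DB blocks modified in DB buffer 12
    # (step 703 is described separately at the end of the embodiment)
    log_blocks.append("checkpoint acquisition end")      # end log record
    checkpoint_lsn = len(log_blocks) - 1                 # LSN of the checkpoint acquisition end log
    write_status({"checkpoint_lsn": checkpoint_lsn})     # status information written to the status disk

if __name__ == "__main__":
    written = {}
    acquire_checkpoint({"blk-3": b"modified"}, ["an earlier log record"],
                       write_db_blocks=lambda blocks: written.update(db=blocks),
                       write_status=lambda status: written.update(status=status))
    print(written)
```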
- FIG. 7 is a flow chart showing a processing procedure taken in the present embodiment when a write command has been received.
- the command processing section 211 in the primary disk subsystem analyzes the received command to find a command kind and an address to be accessed, and recognizes that the command is a write command (step 341 ). It is now supposed that a device ID requested to be accessed can be acquired from the address to be accessed by comparing the address to be accessed with information in a device configuration management table, which indicates addresses assigned to a plurality of disk subsystems and their magnetic disk devices.
- At step 342, it is determined whether data of the address to be accessed found at step 341 is held in the cache memory 22 in the primary disk subsystem 2, i.e., a cache hit/miss decision is made.
- In the case of a cache miss, a transfer destination cache area is secured.
- the cache address of the transfer destination is managed and acquired by using a typical method such as a cache vacancy list.
- If a cache hit is judged at step 342 to hold true, or securing of a cache area is finished at step 344, then modification of the data is conducted on the cache memory 22 in the primary disk subsystem 2 (step 345). In other words, contents of the DB block 242 a, the status information, or the log block 262 a received from the primary host computer 1 are written into the cache memory 22.
- the primary remote copy management table 213 is referred to, and a copy mode corresponding to the primary disk control device ID and the primary physical device ID indicated by the address to be accessed is read out to make a decision whether the copy mode is “synchronous.”
- If the copy mode is "synchronous," the processing proceeds to step 347.
- At step 347, completion of the synchronous remote copy is waited for, and thereby synchronous remote copy processing of the log block 262 a is conducted.
- Otherwise, the processing proceeds to step 348.
- At step 348, the received data is temporarily stored in a queue in a memory or a magnetic disk in the primary disk subsystem 2 in order to prepare for asynchronous remote copy processing to be conducted thereafter on the secondary disk subsystem 4.
- At step 349, completion of the write command processing is reported to the primary host computer 1.
- the primary disk subsystem 2 transmits the stored data to the secondary disk subsystem 4 , and executes asynchronous remote copy processing of the DB block or status information to the secondary disk subsystem 4 .
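A compressed rendering of the FIG. 7 write path, with the step numbers quoted above shown as comments, might look like this. Cache management and remote copy are reduced to dictionary and list operations, so the sketch only illustrates the control flow, not an actual subsystem implementation.

```python
def handle_write_command(command, cache: dict, copy_mode_of, secondary, async_queue: list):
    address, data = command["address"], command["data"]        # step 341: analyze the command
    if address not in cache:                                    # step 342: cache hit/miss decision
        cache[address] = None                                   # step 344: secure a cache area
    cache[address] = data                                       # step 345: modify data on the cache
    if copy_mode_of(address) == "synchronous":
        secondary.remote_copy(address, data)                    # step 347: wait for synchronous copy
    else:
        async_queue.append((address, data))                     # step 348: queue for asynchronous copy
    return "write command completed"                            # step 349: report to the host

if __name__ == "__main__":
    class FakeSecondary:
        def __init__(self):
            self.cache = {}

        def remote_copy(self, address, data):
            self.cache[address] = data

    def mode(address):
        return "synchronous" if address.startswith("log") else "asynchronous"

    cache, queue, secondary = {}, [], FakeSecondary()
    print(handle_write_command({"address": "log-0", "data": b"rec"}, cache, mode, secondary, queue))
    print(handle_write_command({"address": "db-9", "data": b"blk"}, cache, mode, secondary, queue))
    print(len(queue), len(secondary.cache))  # one queued DB write, one synchronously copied log write
```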
- FIG. 8 is a flow chart showing a processing procedure taken in the present embodiment when a read command has been received.
- the command processing section 211 analyzes the received command to find a command kind and an address to be accessed, and recognizes that the command is a read access request (step 361 ). It is now supposed that a device ID requested to be accessed can be acquired from the address to be accessed.
- At step 362, it is determined whether data of the address to be accessed found at step 361 is held in the cache memory 22 in the primary disk subsystem 2, i.e., a cache hit/miss decision is made.
- In the case of a cache miss, a device ID requested to be accessed is discriminated as described above, and the disk access control section 23 in the primary disk subsystem 2 is requested to transfer data from a magnetic disk device corresponding to the device ID to the cache memory 22 (step 363).
- the read processing is interrupted until the end of transfer (step 364 ), and the read processing is continued again after the end of the transfer processing.
- the cache address of the transfer destination may be managed and acquired by using a typical method such as a cache vacancy list. As for the address of the transfer destination, however, it is necessary to modify a cache management table and thereby conduct registration.
- If a cache hit is judged at step 362 to hold true, or the transfer processing is finished at step 364, then data in the cache memory in the disk subsystem is transferred to a channel (step 365).
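The FIG. 8 read path reduces to a cache lookup with staging on a miss, as in the sketch below; "disk" stands in for the magnetic disk devices reached through the disk access control section 23.

```python
def handle_read_command(address: str, cache: dict, disk: dict) -> bytes:
    # step 361: analyze the command and find the address to be accessed
    if address not in cache:                      # step 362: cache hit/miss decision
        cache[address] = disk[address]            # steps 363-364: stage the data from disk to cache
    return cache[address]                         # step 365: transfer cache data to the channel

if __name__ == "__main__":
    cache, disk = {}, {"db-1": b"stored block"}
    print(handle_read_command("db-1", cache, disk))   # cache miss: staged from disk first
    print(handle_read_command("db-1", cache, disk))   # cache hit: served from cache memory 22
```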
- FIG. 9 is a flow chart showing a processing procedure of data reception processing conducted by the secondary disk subsystem 4 in the present embodiment.
- the secondary remote copy processing section 412 in the secondary disk subsystem 4 analyzes the received command to find a command kind and an address to be accessed, and recognizes that the command is a remote copy command (step 421 ). It is now supposed that a device ID requested to be accessed can be discriminated from the address to be accessed.
- At step 422, it is determined whether data of the address to be accessed found at step 421 is held in the cache memory 42 in the secondary disk subsystem 4, i.e., a cache hit/miss decision is made.
- In the case of a cache miss, a transfer destination cache area is secured.
- the cache address of the transfer destination may be managed and acquired by using a typical method such as a cache vacancy list. As for the address of the transfer destination, however, it is necessary to modify a cache management table and thereby conduct registration.
- At step 425, contents of the DB block 242 a, the status information, or the log block 262 a received from the primary disk subsystem 2 are written into the cache memory 42.
- the case of the synchronous remote copy has heretofore been described. In the case where asynchronous remote copy is used and the sequentiality as described in JP-A-11-85408 entitled “storage control apparatus” is guaranteed, it is necessary before modification on the cache to ascertain that all data that should arrive by then are ready.
- At step 426, completion of the remote copy command processing is reported to the primary disk subsystem 2.
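On the secondary side, the FIG. 9 reception path can be sketched as follows. The sequentiality check for asynchronous data is simplified here to comparing sequence numbers, which is an assumption of the example rather than the mechanism of JP-A-11-85408.

```python
def receive_remote_copy(command, cache: dict, expected_seq: list) -> str:
    address, data = command["address"], command["data"]          # step 421: analyze the command
    if address not in cache:                                     # step 422: cache hit/miss decision
        cache[address] = None                                    # secure a transfer destination area
    if command["mode"] == "asynchronous":
        # Before modifying the cache, make sure every earlier write has arrived,
        # so the modification order guaranteed on the primary side is preserved.
        if command["seq"] != expected_seq[0]:
            return "held back until preceding data arrives"
        expected_seq[0] += 1
    cache[address] = data                                        # step 425: write into cache memory 42
    return "remote copy command completed"                       # step 426: report to the primary

if __name__ == "__main__":
    cache, expected = {}, [1]
    print(receive_remote_copy({"address": "db-2", "data": b"x", "mode": "asynchronous", "seq": 2},
                              cache, expected))                  # out of order: held back
    print(receive_remote_copy({"address": "db-1", "data": b"y", "mode": "asynchronous", "seq": 1},
                              cache, expected))                  # in order: applied
```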
- As described above, in the disaster recovery system of the present embodiment, synchronous remote copy processing into the secondary disk subsystem 4, synchronized with the writing in the primary disk subsystem 2, is conducted. Therefore, contents of transaction modifications that have been completed in the primary system can be prevented from being lost in the secondary system.
- In addition, asynchronous remote copy processing into the secondary disk subsystem 4, which is not synchronized with the writing in the primary disk subsystem 2, is conducted. Therefore, performance degradation in the primary system is kept as small as possible.
- FIG. 10 is a flow chart showing a processing procedure of the DBMS start processing in the present embodiment. If switching from the primary system to the secondary system is conducted and database processing in the secondary database processing system is started, then the DB access control section 311 in the secondary host computer 3 orders the secondary disk subsystem 4 to execute the DBMS start processing.
- the command processing section 411 in the secondary disk subsystem 4 reads out a status file on the secondary status disk 45, and acquires information indicating the state of the database. It is now supposed that, at the start of database processing, information indicating that the DBMS is in operation is stored in the status file as the database state, and that, at the end of the database processing, information indicating that the DBMS has finished normally is stored in the status file.
- At step 1202, it is determined whether the database processing of the last time finished normally, by referring to the acquired information indicating the database state. If the acquired database state indicates that the DBMS is in operation, i.e., information indicating that the DBMS finished normally is not recorded in the status file, then the database processing of the last time is regarded as not having finished normally and the processing proceeds to step 1203.
- At step 1203, status information indicating the location of the log record at the time of the immediately preceding checkpoint is referred to, and an input location of the log record is acquired.
- the secondary log disk 46 is referred to in order to read out the log record from the acquired input location, and rollforward processing is conducted on the database area in the secondary DB disk 44 .
- At step 1205, rollback processing for canceling the processing of uncompleted transactions among the transactions subjected to the rollforward processing using the log records is conducted.
- At step 1206, information indicating that the DBMS is in operation and status information indicating the location of the log record after recovery are stored in the status file in the secondary status disk 45.
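The FIG. 10 start processing amounts to: check the status file, and if the last run did not finish normally, roll forward from the checkpoint position and roll back uncompleted transactions. The log record format used below (transaction ID, block, value, committed flag) is an assumption made for the example, not the patent's log format.

```python
def dbms_start(status_file: dict, log: list, db: dict) -> None:
    if status_file.get("state") == "normally finished":       # step 1202: clean shutdown?
        status_file["state"] = "in operation"
        return
    start = status_file["checkpoint_log_position"]             # step 1203: input location of the log
    redone = {}
    for txn, block, value, committed in log[start:]:            # rollforward from the checkpoint
        db[block] = value
        redone.setdefault(txn, []).append(block)
    for txn, blocks in redone.items():                          # step 1205: rollback uncompleted txns
        if not any(t == txn and c for t, _, _, c in log[start:]):
            for block in blocks:
                db.pop(block, None)                             # discard the uncommitted modification
    status_file.update(state="in operation",                    # step 1206: record the recovery result
                       checkpoint_log_position=len(log))

if __name__ == "__main__":
    db, status = {}, {"state": "in operation", "checkpoint_log_position": 0}
    log = [("T1", "blk-1", b"v1", True), ("T2", "blk-2", b"v2", False)]
    dbms_start(status, log, db)
    print(db)   # T1's change survives, T2's uncommitted change is rolled back
```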
- DBMS data modified in a transaction is not written to the storage in synchronism with the committing of the pertinent transaction in order to ensure the execution performance of the transaction.
- a trigger called a checkpoint is provided, with a predetermined number of transaction occurrences or the elapse of a predetermined time as the trigger condition.
- DB data modified during that time is written to the storage.
- DB contents modified after the checkpoint are written to the log disk.
- DB modification after the checkpoint is restored and recovered from modification history in the log disk.
- a status file for managing a log disk input point at the time of checkpoint is provided so as to prevent mismatching from being caused in recovery in the secondary disk subsystem 4 even if a log block is subjected to synchronous remote copy processing and a DB block is subjected to asynchronous remote copy processing.
- the status file is transferred in asynchronous remote copy processing, and the modification order between the status file and the DB block transferred asynchronously in the same way is guaranteed by the secondary disk subsystem 4 .
- a write request at the time of checkpoint is also transmitted to the secondary disk subsystem 4 asynchronously as described above.
- that write request and write requests temporarily stored until that time point for asynchronous writing may also be transmitted to the secondary disk subsystem 4 .
- FIG. 11 is a diagram showing an outline of processing conducted at the time of checkpoint in the present embodiment. If a primary DB volume checkpoint request for requesting a checkpoint of the primary DB disk 24 is transmitted from the primary host computer 1 as shown in FIG. 11 , then the primary disk subsystem 2 transmits remote copy data temporarily stored in the queue in the memory or magnetic disk in the primary disk subsystem 2 at that time to the secondary disk subsystem 4 , and transmits the DB block 242 a and status information received together with the primary DB volume checkpoint request to the secondary disk subsystem 4 .
- the secondary disk subsystem 4 writes all of the DB block 242 a and status information transmitted together with the write request into the cache 42 , and then generates a remote copy completion notice, which indicates that the writing has been completed, and transmits the remote copy completion notice to the primary disk subsystem 2 .
- Upon receiving the remote copy completion notice from the secondary disk subsystem 4, the primary disk subsystem 2 generates a primary DB volume checkpoint completion notice indicating that the checkpoint processing requested by the primary host computer 1 has been completed, and transmits the primary DB volume checkpoint completion notice to the primary host computer 1.
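The FIG. 11 checkpoint-time transfer can be sketched as draining the queued asynchronous writes and then copying the checkpoint's DB blocks and status information before reporting completion. The function signature and the in-memory structures are illustrative assumptions.

```python
from collections import deque

def handle_db_volume_checkpoint(request_blocks: dict, status: dict,
                                async_queue: deque, secondary_cache: dict) -> str:
    while async_queue:                                  # first drain the temporarily stored data
        key, data = async_queue.popleft()
        secondary_cache[key] = data
    for key, data in request_blocks.items():            # then copy the checkpoint's DB blocks
        secondary_cache[key] = data
    secondary_cache["status"] = dict(status)            # and the status information
    # only after the secondary has everything is the checkpoint completion reported
    return "primary DB volume checkpoint completed"

if __name__ == "__main__":
    queue = deque([("db-5", b"older write")])
    secondary = {}
    print(handle_db_volume_checkpoint({"db-7": b"checkpointed"}, {"checkpoint_lsn": 42},
                                      queue, secondary))
    print(sorted(secondary))   # ['db-5', 'db-7', 'status']
```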
- log information is modified by synchronous writing and database data and status information are modified by asynchronous writing, when writing to the secondary system is requested, as heretofore described. Therefore, the contents of modification in transactions completed in the primary system are prevented from being lost in the secondary system. It is possible to construct a disaster recovery system reduced in performance degradation in the primary system.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Debugging And Monitoring (AREA)
Abstract
A technique capable of constructing a disaster recovery system reduced in performance degradation of a primary system is provided. The technique includes a step of conducting synchronous writing of log information into a secondary storage subsystem in a secondary system when a write request received from a host computer is a write request of log information, a step of temporarily storing a write request and conducting asynchronous writing into the secondary storage subsystem when the received write request is a write request of database data or status information, a step of modifying log information, data in a database area, and status information in the secondary storage subsystem according to contents of a write request received from a primary storage subsystem, and a step of recovering the database area according to contents of log information in a location indicated by the status information.
Description
- This is a continuation application of U.S. Ser. No. 10/650,842, filed Aug. 29, 2003, the entire disclosure of which is hereby incorporated by reference.
- The present invention relates to a technique for executing processing in another information processing apparatus, or a program or an object that conducts its processing, in response to occurrence of a failure or a predetermined condition, or a request.
- For conventional database management systems (or computer systems or information processing systems), there is a technique of placing a plurality of replications in a plurality of geographically distributed sites (computers or information processing apparatuses) by way of precaution against failures, i.e., the so-called disaster recovery technique. In this technique, data of a certain site are stored in other geographically separated sites as replications. In the case where a failure is caused by a disaster or the like in a certain site, business is recovered in another site.
- For database management systems (or computer systems or information processing systems, where database management systems are taken as an example), there are several methods for maintaining such replications. Basically, a request is sent to the system that is principal as seen from clients, i.e., the primary system; an information record called a log is generated in the primary system and is used to recover the processing and as a backup. This log record is sent from the primary system to a system called the secondary system, and a host computer of the secondary system conducts the same modification processing as the primary system by referring to the log record, thereby modifying the state of the secondary system. Such a technique of implementing the replication by sending a log record generated in the primary system to the secondary system is disclosed in U.S. Pat. No. 5,640,561.
- In a remote copy function (see U.S. Pat. No. 5,640,561) in a storage apparatus used when sending a log record or database data generated in a primary system to a secondary system, conventional data transfer methods are broadly divided into the following two kinds.
- Upon a data write request from a host computer in a certain site (herein referred to as main site), a storage apparatus in the main site transfers pertinent data to a storage apparatus in another site (herein referred to as remote site). After arrival of a receipt report of pertinent data from the storage apparatus in the remote site, the storage apparatus in the main site reports writing completion to a host computer in the main site.
- There is a merit that it is assured that data have arrived at the remote site when writing has been completed in the main site. On the other hand, there is a drawback that an increase in distance between sites or line delay increases the write response time in the main site and causes performance degradation.
- When a data write request from a host computer in a main site has arrived, a storage apparatus in the main site reports writing completion to the host computer in the main site without waiting for completion of pertinent data transfer to a remote site.
- As compared with the synchronous method, the possibility of performance degradation in the main site is reduced. In the case where a failure has occurred in the main site, there is a possibility that recent data are lost in the remote site and transactions are lost.
- There are methods in which it is assured that the sequentiality of data writing in the main site coincides with that in the remote site, as disclosed in U.S. Pat. No. 5,640,561, and methods in which it is not assured. To avoid a situation in which the state in the middle of a transaction remains and the consistency of the database cannot be assured, it is necessary to assure the sequentiality of data writing. The sequentiality assurance can be configured so as to be effective for a set of a plurality of disks. A technique for assuring the sequentiality for a set of a disk for the log (journal) and a disk for the DB is disclosed in U.S. Pat. No. 5,640,561.
- In general, a database management system (DBMS) has a DB disk for storing the data itself and a log disk for storing DB modification history information in time series form. If a server in the main site (which is a computer or an information processing apparatus in the main site) is shut down, data on the DB disk assumes an incomplete modification state in some cases. At the time of restart of the DBMS, however, a consistent state is recovered on the basis of the DB modification history information on the log disk. (Such a technique is disclosed in Jim Gray and Andreas Reuter, "TRANSACTION PROCESSING: Concepts and Techniques," Morgan Kaufmann Publishers, hereafter referred to as reference paper 1.) In other words, when the server has been shut down, modification data of completed transactions are forced to the DB disk (rollforward), and modification data of transactions that were incomplete at the time the server went down are invalidated (rollback).
- As a transfer method in the disaster recovery system, the following methods are known.
- The log and DB are transferred synchronously to the remote site. The same states of the log and DB as those in the main site are always present in the remote site. When a failure has occurred, recovery processing in the same situation as the restart in the main site can be implemented. In other words, modification contents of transactions that have been completed in the main site are not lost in the remote site. Since the log and DB are transferred synchronously, however, the performance in the main site is degraded as compared with the case where such a configuration is not adopted.
- The log and DB are transferred asynchronously to the remote site, and in addition the modification sequentiality in the remote site is guaranteed. Since modification in the remote site is delayed, the remote site holds the states that the log and DB had in the main site one delay time earlier. Because the modification sequentiality of the log and DB is assured, the consistent DB state that existed in the main site one delay time earlier can be recovered. Although the performance degradation in the main site is slight, modification contents of transactions that have been completed in the main site are sometimes lost in the remote site.
- If in the conventional database management system (or computer system or information processing system, where a database management system is taken as an example) the log and DB (where a database is taken as an example, but data stored to be used for processing may also be used) are transferred to the remote site in synchronism with the main site, the possibility that modification contents of transactions that have been completed in the main site are lost in the remote site is low. Since the log and DB are transferred synchronously, there is a problem that the performance in the main site is degraded as compared with the case where such a configuration is not adopted.
- If in the conventional database management system the log and DB are transferred to the remote site asynchronously, then the performance degradation in the main site is slight, but there is a problem that modification contents of transactions that have been completed in the main site are sometimes lost in the remote site.
- A first object of the present invention is to provide a technique in which the possibility that modification contents of transactions that have been completed in a main site are lost in a remote site is low, when executing processing in another information processing apparatus, or a program or an object that conducts its processing, in response to occurrence of a failure or a predetermined condition, or a request.
- A second object of the present invention is to provide a technique in which the possibility that modification contents of transactions that have been completed in a primary system are lost in a secondary system is low.
- A third object of the present invention is to provide a technique in which the performance degradation in the primary system can be reduced.
- A first means is an information processing recovery method used with a first program, which stores processing data to be subjected to processing using the first program and log information for recovering that processing in first storage means, and a second program, which stores processing data to be subjected to processing and log information for recovering that processing in second storage means; the method recovers the processing using the first program by executing the processing using the second program when a failure has occurred in the processing using the first program. In the information processing recovery method, the following processing is executed.
- In response to an input write request for the log information, the processing data, and status information indicating a storage location of the log information, the log information, the processing data, and the status information are stored in the first storage means. After the log information has been stored in the second storage means, a response to the write request is sent. When a predetermined condition is satisfied, the processing data and the status information are stored in the second storage means.
- In a system in which switching to a second database processing system is conducted when a failure has occurred in a first database processing system and database processing is continued, second means executes the following processing. In response to a write request to the second database processing system, log information is modified by synchronous writing, and database data and status information are modified by asynchronous writing.
- In a disaster recovery system in which switching to a secondary database processing system is conducted when a failure has occurred in a primary database processing system and database processing is continued, third means modifies log information by synchronous writing and modifies database data and status information by asynchronous writing, at the time of write request to the secondary system.
- In a disaster recovery system according to the present invention, a host computer includes a database buffer for temporarily holding contents of a database area in a storage subsystem, and a log buffer for temporarily holding contents of modification processing for the database buffer. Contents of the database buffer are modified with the advance of execution of database processing in the host computer. When it has become necessary to force the modification contents to the database area in the storage subsystem, a write request of log information indicating contents of modification processing conducted on the database buffer, database data modified in the database buffer, or status information indicating a location of log information at the time of checkpoint is transmitted from the primary host computer in the primary system to the primary storage subsystem in the primary system.
- The primary storage subsystem receives the write request from the host computer. According to contents of the received write request, modification of log information, data in the database area, and status information in the primary storage subsystem is conducted. The primary storage subsystem is previously configured so that a log information disk may be subject to synchronous remote copy and a database data disk and a status information disk may be subject to asynchronous remote copy that guarantees a modification sequentiality over both disks.
- According to this configuration, the primary storage subsystem writes a write request for a log information disk into a secondary storage device in the secondary system by using a synchronous method, and writes a write request for a database area data disk and status information disk into the secondary storage device in the secondary system by using an asynchronous method.
- The secondary storage subsystem receives a write request of the log information, database data, or status information from the primary storage subsystem. According to contents of the received write request, log information, database area data and status information in the secondary storage subsystem are modified (see U.S. Pat. No. 5,640,561).
- If thereafter a failure occurs in the primary database processing system and database processing is started in the secondary database processing system, then log information is read out from a location indicated by the status information, and data in the database area in the secondary storage subsystem is modified according to contents of the log information thus read out. As a result, the database area in the secondary storage subsystem is restored to the consistent state of the database area immediately before the failure occurrence.
- In business processing having a high DB modification ratio, the I/O load on the DB disk becomes high as compared with the log disk. On the other hand, transactions to be recovered in the secondary system depend on the information on the log disk. Therefore, it becomes possible to prevent modification contents of transactions completed in the primary system from being lost by conducting synchronous copy on the information on the log disk, and it becomes possible to construct a disaster recovery system reduced in performance degradation in the primary system by conducting asynchronous copy on the information on the DB disk.
- According to the disaster recovery system of the present embodiment, log information is modified by synchronous writing and database data and status information are modified by asynchronous writing, when writing to the secondary system is requested, as heretofore described. Therefore, the contents of modification in transactions completed in the primary system are prevented from being lost in the secondary system. It is possible to construct a disaster recovery system reduced in performance degradation in the primary system.
- Other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
-
FIG. 1 is a diagram showing a system configuration of a disaster recovery system in the present embodiment; -
FIG. 2 is a diagram showing an outline of synchronous remote copy processing of a log block 262 a in the present embodiment; -
FIG. 3 is a diagram showing an outline of asynchronous remote copy processing of DB blocks and status information in the present embodiment; -
FIG. 4 is a diagram showing configuration information of a DB-disk mapping table 15; -
FIG. 5 is a diagram showing an example of a primary/secondary remote copy management table; -
FIG. 6 is a flow chart showing a processing procedure of checkpoint acquisition processing in the present embodiment; -
FIG. 7 is a flow chart showing a processing procedure conducted upon receiving a write command in the present embodiment; -
FIG. 8 is a flow chart showing a processing procedure conducted upon receiving a read command in the present embodiment; -
FIG. 9 is a flow chart showing a processing procedure of data reception processing in a secondary disk subsystem 4 in the present embodiment; -
FIG. 10 is a flow chart showing a processing procedure of DBMS start processing in the present embodiment; and -
FIG. 11 is a diagram showing an outline of synchronous remote copy processing conducted at the time of checkpoint in the present embodiment. - Hereafter, a system of an embodiment in which, upon a write request to a secondary system, log information is updated by synchronous writing and database data and status information are modified by asynchronous writing will be described.
-
FIG. 1 is a diagram showing a system configuration of the present embodiment. As shown inFIG. 1 , a primary host computer 1 (which may be implemented by using a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a DB access control section 111 (hardware, a program, or an object capable of conducting the processing), a checkpoint processing section 112 (hardware, a program, or an object capable of conducting the processing), a log management section 113 (hardware, a program, or an object capable of conducting the processing), and a DB delay write processing section 114 (hardware, a program, or an object capable of conducting the processing). - The DB
access control section 111 is a processing section for controlling access to a primary DB disk 24 (storage means) and a primary log disk 26 (storage means) via a DB buffer 12 (storage means) and a log buffer 14 (storage means). Thecheckpoint processing section 112 is a processing section for transmitting a write request of all DB blocks modified in theDB buffer 12, and status information indicating a log disk of a latest log record at that time point and its location, from theprimary host computer 1 to aprimary disk subsystem 2, when it has become necessary to force contents in theDB buffer 12 in theprimary host computer 1 to a storage device in theprimary disk subsystem 2, which is a disk subsystem in a primary system. - As disclosed in the
reference paper 1, some transactions are not complete at the time of a checkpoint. Besides the location of the latest log record, therefore, the status information indicates in some cases the locations of the oldest log records relating to the uncompleted transactions. Modification of the status information on a disk is delayed in some cases. In either case, the status information may be used as information indicating the location of the log where reference is to be started when the database management system restarts. - The
log management section 113 is a processing section for transmitting a write request of alog block 262 a, which is log information indicating contents of database processing that has been conducted on theDB buffer 12, from theprimary host computer 1 to aprimary disk subsystem 2. The DB delaywrite processing section 114 is a processing section for transmitting a write request of database data on theDB buffer 12 from theprimary host computer 1 to theprimary disk subsystem 2. - A program for making the
primary host computer 1 function as the DBaccess control section 111, thecheckpoint processing section 112, thelog management section 113 and the DB delaywrite processing section 114 is recorded on a recording medium such as a CD-ROM, stored on a magnetic disk or the like, and thereafter loaded into a memory, and executed. The recording medium for recording the program thereon may also be another recording medium other than the CD-ROM. The program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program. - The primary disk subsystem 2 (which may be implemented by using a storage unit, a disk system, a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a disk control processing section 21 (hardware, a program, or an object capable of conducting the processing), a command processing section 211 (hardware, a program, or an object capable of conducting the processing), a primary remote copy processing section 212 (hardware, a program, or an object capable of conducting the processing), and a disk access control section 23 (hardware, a program, or an object capable of conducting the processing).
- The disk
control processing section 21 is a control processing section for controlling operation of the whole primary disk subsystem apparatus. Thecommand processing section 211 is a processing section for receiving a write request of a DB block 242 a, the status information or alog block 262 a from theprimary host computer 1, and modifying contents of theprimary DB disk 24, aprimary status disk 25, theprimary log disk 26, or a cache memory 22 (storage means) for storing their contents, included in the primary disk subsystem, according to contents of the received write request. - The primary remote
copy processing section 212 is a processing section for referring to the primary remote copy management table and conducting synchronous or asynchronous remote copying according to configuration information in the primary remote copy management table. If, in the case of the present embodiment, the received write request is a write request of the log block 262 a, then the primary remote copy processing section 212 conducts synchronous write processing of the log block 262 a into a secondary disk subsystem 4, which is a disk subsystem of a secondary system (which may be implemented by using a computer, an information processing apparatus, or a program or an object capable of conducting the processing). If the received write request is a write request of the DB block 242 a or status information, then the primary remote copy processing section 212 temporarily stores the write request and conducts asynchronous write processing into the secondary disk subsystem 4. The disk access control section 23 is a processing section for controlling access to respective magnetic disk devices placed under the primary disk subsystem 2. - A program for making the
primary disk subsystem 2 function as the diskcontrol processing section 21, thecommand processing section 211, the primary remotecopy processing section 212 and the diskaccess control section 23 is recorded on a recording medium such as a floppy disk, and executed. The recording medium for recording the program thereon may also be another recording medium other than the floppy disk. The program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program. - A secondary host computer 3 (which may be implemented by using a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a DB access control section 311 (hardware, a program, or an object capable of conducting the processing), a checkpoint processing section 312 (hardware, a program, or an object capable of conducting the processing), a log management section 313 (hardware, a program, or an object capable of conducting the processing), and a DB delay write processing section 314 (hardware, a program, or an object capable of conducting the processing).
- The DB
access control section 311 is a processing section for conducting processing similar to that of the DBaccess control section 111 in the primary system, at the time of operation of the secondary system. Thecheckpoint processing section 312 is a processing section for conducting processing similar to that of thecheckpoint processing section 112 in the primary system, at the time of operation of the secondary system. - The
log management section 313 is a processing section for conducting processing similar to that of thelog management section 113 in the primary system, at the time of operation of the secondary system. The DB delaywrite processing section 314 is a processing section for conducting processing similar to that of the DB delaywrite processing section 114 in the primary system, at the time of operation of the secondary system. - A program for making the
secondary host computer 3 function as the DBaccess control section 311, thecheckpoint processing section 312, thelog management section 313 and the DB delaywrite processing section 314 is recorded on a recording medium such as a CD-ROM, stored on a magnetic disk or the like, and thereafter loaded into a memory, and executed. The recording medium for recording the program thereon may also be another recording medium other than the CD-ROM. The program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program. - A secondary disk subsystem 4 (which may be implemented by using a storage unit, a disk system, a computer, an information processing apparatus, or a program or an object capable of conducting the processing) includes a disk control processing section 41 (hardware, a program, or an object capable of conducting the processing), a command processing section 411 (hardware, a program, or an object capable of conducting the processing), a secondary remote copy processing section 412 (hardware, a program, or an object capable of conducting the processing), and a disk access control section 43 (hardware, a program, or an object capable of conducting the processing).
- The disk
control processing section 41 is a control processing section for controlling operation of the whole secondary disk subsystem apparatus. When switching from the primary system to the secondary system is conducted and database processing in the database processing system in the secondary system is started, thecommand processing section 411 reads out a log record from a location of a log block 462 a indicated by status information in asecondary status disk 45 and sends out the log record to thesecondary host computer 3, in accordance with an order issued by thesecondary host computer 3. By modifying data on asecondary DB disk 44 in thesecondary disk subsystem 4 according to contents of the pertinent log record analyzed by the secondary host computer in accordance with an order issued by thesecondary host computer 3, the diskcontrol processing section 41 conducts processing of restoring the state of thesecondary DB disk 44 in thesecondary disk subsystem 4 to the state of theprimary DB disk 24 immediately before the switching to the secondary system. The diskcontrol processing section 41 conducts modification on thesecondary DB disk 44, thesecondary status disk 45 and asecondary log disk 46 in keeping with database processing after the switching. - The secondary remote
copy processing section 412 is a processing section for receiving a write request of the DB block 242 a, the status information or the log block 262 a from the primary disk subsystem 2, and conducting modification on the secondary DB disk 44, the secondary status disk 45 and the secondary log disk 46 in the secondary disk subsystem 4, or on a cache memory 42 storing their contents. The disk access control section 43 is a processing section for controlling access to respective magnetic disk devices placed under the secondary disk subsystem 4. In the case of the asynchronous remote copy, modification on the pertinent cache memory 42 or disk is conducted after confirmation of the sequentiality, as described in JP-A-11-85408 entitled "Storage control apparatus." - A program for making the
secondary disk subsystem 4 function as the diskcontrol processing section 41, thecommand processing section 411, the secondary remotecopy processing section 412 and the diskaccess control section 43 is recorded on a recording medium such as a floppy disk, and executed. The recording medium for recording the program thereon may also be another recording medium other than the floppy disk. The program may be installed in the information processing apparatus from the recording medium and used, or the recording medium may be accessed through a network to use the program. - In the disaster recovery system of the present embodiment, the
primary disk subsystem 2 for the primary host computer 1 serving as the primary system and the secondary disk subsystem 4 for the secondary host computer 3 serving as the secondary system may be connected to each other via a fiber channel, a network such as Ethernet, Gigabit Ethernet or SONET, or a link. The connection means may be a virtual network, or any data communication means using radio, broadcast communication or satellite communication. - In the
primary host computer 1, the DBaccess control section 111 of the primary system operates. Theprimary host computer 1 includes theDB buffer 12 for temporarily holding contents of theprimary DB disk 24 in theprimary disk subsystem 2, and thelog buffer 14 for temporarily holding contents of modification processing conducted on theDB buffer 12. Each of theDB buffer 12 and thelog buffer 14 may also be a volatile memory, which typically loses data at the time of a power failure. - In the
primary disk subsystem 2, theprimary DB disk 24 on a magnetic disk device is accessed through the diskcontrol processing section 21, thecache memory 22 and the diskaccess control section 23, which receive an instruction from the primary host computer and operate. Disk access is conducted always via thecache memory 22. Thecache memory 22 may also be a volatile memory, which typically loses data at the time of a power failure. In this case, at the time when data is stored in thecache memory 22, the data is guaranteed. - If access to the
primary DB disk 24 is requested by a transaction, then the DBaccess control section 111 in theprimary host computer 1 of the present embodiment acquires the DB block 242 a from theprimary disk subsystem 2 by using a read command, stores the DB block 242 a in theDB buffer 12, conducts database processing on the DB block 242 a in theDB buffer 12, and then stores log information indicating contents of the processing in the log block 262 a in thelog buffer 14. - If it has become necessary to force the contents of the
DB buffer 12 in the primary host computer 1 to a storage device in the primary disk subsystem 2 serving as a disk subsystem in the primary system, such as when the number of log records indicating that records in the DB buffer 12 have been modified reaches a predetermined number, then the checkpoint processing section 112 generates a write command for writing a DB block or status information, as a write request for all DB blocks modified in the DB buffer 12 and status information indicating a location of the log record that is the latest at that time point, and transmits the write command from the primary host computer 1 to the primary disk subsystem 2. - If at the time of transaction committing, a predetermined condition, such as elapse of a predetermined time since start of log information recording or disappearance of an empty place in the
log buffer 14, is arrived at, then thelog management section 113 generates a write command for writing the log block 262 a, as a write request of the log block 262 a stored in thelog buffer 14 into theprimary log disk 26, and transmits the write command from theprimary host computer 1 to theprimary disk subsystem 2. - If a predetermined condition, such as elapse of a predetermined time since start of database processing or disappearance of an empty place in the
DB buffer 12, is arrived at, then the DB delaywrite processing section 114 generates a write command for writing the DB block 242 a, as a write request of the DB block 242 a stored in theDB buffer 12 into theprimary DB disk 24, and transmits the write command from theprimary host computer 1 to theprimary disk subsystem 2. - With respect to a write request of the log block 262 a included in write requests transmitted from the
primary host computer 1 as described above, theprimary disk subsystem 2 of the present embodiment conducts synchronous remote copy processing to thesecondary disk subsystem 4 in synchronism with writing performed in theprimary disk subsystem 2. With respect to writing of a DB block and status information, theprimary disk subsystem 2 of the present embodiment conducts asynchronous remote copy processing, which is not synchronized to writing in theprimary disk subsystem 2, to thesecondary disk subsystem 4. -
FIG. 2 is a diagram showing an outline of synchronous remote copy processing of the log block 262 a in the present embodiment. If a primary log write request for requesting to write the log block 262 a is transmitted from theprimary host computer 1 as shown inFIG. 2 , then theprimary disk subsystem 2 writes the log block 262 a transmitted together with the write request into thecache 22, transmits the log block 262 a to thesecondary disk subsystem 4, requests remote copy of the log block 262 a in thesecondary disk subsystem 4, and waits for completion of the remote copy. - If a command for requesting to write the log block 262 a is transmitted from the
primary disk subsystem 2, then thesecondary disk subsystem 4 writes the log block 262 a transmitted together with the write request into thecache 22, and thereafter generates a remote copy completion notice indicating that the writing has been completed, and transmits the remote copy completion notice to theprimary disk subsystem 2. - Upon receiving the remote copy completion notice from the
secondary disk subsystem 4, theprimary disk subsystem 2 generates a primary log write completion notice indicating that the writing the log block 262 a requested by theprimary host computer 1 has been completed, and transmits the primary log write completion notice to theprimary host computer 1. -
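- The exchange of FIG. 2 can be condensed into a short sketch. The classes and return strings below are illustrative in-memory stand-ins, not the actual subsystem interfaces; the point is only that the primary log write completion notice is generated after the remote copy completion notice has been received.

```python
# Minimal sketch of the synchronous log-block path of FIG. 2 (assumed,
# simplified stand-ins for the caches and the inter-site link).

class SecondaryDiskSubsystem:
    def __init__(self):
        self.cache = {}

    def remote_copy(self, address, data):
        """Write the copied block and return a remote copy completion notice."""
        self.cache[address] = data
        return "remote-copy-complete"


class PrimaryDiskSubsystem:
    def __init__(self, secondary):
        self.cache = {}
        self.secondary = secondary

    def write_log_block(self, address, log_block):
        """Synchronous remote copy: the host is answered only after the
        secondary has acknowledged receipt of the log block."""
        self.cache[address] = log_block             # write into the primary cache
        ack = self.secondary.remote_copy(address, log_block)
        assert ack == "remote-copy-complete"        # wait for the remote copy
        return "primary-log-write-complete"         # then report to the host


if __name__ == "__main__":
    secondary = SecondaryDiskSubsystem()
    primary = PrimaryDiskSubsystem(secondary)
    print(primary.write_log_block("LOG1/blk0", b"commit T1"))
```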
FIG. 3 is a diagram showing an outline of asynchronous remote copy processing of a DB block and status information in the present embodiment. If a primary DB write request for requesting to write the DB block and the status information is transmitted from theprimary host computer 1 as shown inFIG. 3 , then theprimary disk subsystem 2 writes the DB block and the status information transmitted together with the write request into thecache 22, thereafter temporarily stores the DB block and the status information in a queue in a memory or a magnetic disk in theprimary disk subsystem 2, generates a primary DB write completion notice indicating that writing the DB block 242 a requested by theprimary host computer 1 has been completed, and transmits the primary DB write completion notice to theprimary host computer 1. - Thereafter, the
primary disk subsystem 2 transmits the stored DB block or status information to thesecondary disk subsystem 4, requests remote copy of the DB block and status information in thesecondary disk subsystem 4, and waits for completion of the remote copy. - If a remote copy request for requesting to write the DB block or status information is transmitted from the
primary disk subsystem 2, then thesecondary disk subsystem 4 receives the DB block or status information transmitted together with the remote copy request, thereafter generates a remote copy completion notice indicating that the request has been completed, and transmits the remote copy completion notice to theprimary disk subsystem 2. -
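- The asynchronous path of FIG. 3 differs only in where the completion notice is generated. In the sketch below, again with illustrative in-memory stand-ins, the primary DB write completion notice is returned as soon as the data has been cached and queued; the queued data is transferred to the secondary later, for example periodically or at checkpoint time.

```python
from collections import deque

# Minimal sketch of the asynchronous DB-block/status path of FIG. 3
# (assumed, simplified stand-ins; the queue drain would normally run in
# the background).

class SecondaryDiskSubsystem:
    def __init__(self):
        self.cache = {}

    def remote_copy(self, address, data):
        self.cache[address] = data
        return "remote-copy-complete"


class PrimaryDiskSubsystem:
    def __init__(self, secondary):
        self.cache = {}
        self.pending = deque()      # writes awaiting asynchronous remote copy
        self.secondary = secondary

    def write_db_or_status(self, address, data):
        """Asynchronous remote copy: the host is answered as soon as the data
        is in the primary cache and queued for later transfer."""
        self.cache[address] = data
        self.pending.append((address, data))
        return "primary-db-write-complete"

    def drain_pending(self):
        """Transfer queued writes to the secondary in their original order."""
        while self.pending:
            address, data = self.pending.popleft()
            ack = self.secondary.remote_copy(address, data)
            assert ack == "remote-copy-complete"


if __name__ == "__main__":
    secondary = SecondaryDiskSubsystem()
    primary = PrimaryDiskSubsystem(secondary)
    print(primary.write_db_or_status("DB1/blk7", b"row image"))
    primary.drain_pending()     # e.g. run periodically or at checkpoint time
```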
FIG. 4 is a diagram showing configuration information of a DB-disk mapping table 15 in the present embodiment. As shown inFIG. 4 , the DB-disk mapping table 15 stores a database area ID, a file ID, and a kind. The database area ID is information for identifying a database area in theprimary DB disk 24. The file ID indicates a sequential number of a file in the case where the database area identified by the database area ID includes a plurality of files. The kind indicates which of database data, log information and status information is data stored in the database area. - With respect to a disk control device number for identifying a disk control device to which the database area is mapped, and a physical device ID of a magnetic disk device included in magnetic disk devices controlled by a disk control device having the disk control device number to which the database area is mapped, IDs of the
primary disk subsystem 2 and thesecondary disk subsystem 4 are stored. - A DB-disk mapping table 35 in the
secondary disk subsystem 4 also has a configuration similar to that of the DB-disk mapping table 15 in theprimary disk subsystem 2. -
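- Read together with the remote copy management table described next with reference to FIG. 5, the mapping table determines the copy mode of every write. A minimal sketch of that two-table lookup follows; only the "LOG1", "CTL#A1", "VOL12-A" and "synchronous" entries are taken from the example given later in the text, and the remaining rows are invented placeholders.

```python
# Both tables are held as plain dictionaries here purely for illustration.

DB_DISK_MAPPING_TABLE = {
    # database area ID -> (kind, disk control device ID, physical device ID)
    "LOG1": ("log information", "CTL#A1", "VOL12-A"),
    "DB1": ("database data", "CTL#A2", "VOL34-A"),       # assumed entry
    "STATUS1": ("status information", "CTL#A2", "VOL35-A"),  # assumed entry
}

REMOTE_COPY_MANAGEMENT_TABLE = {
    # (disk control device ID, physical device ID) -> copy mode
    ("CTL#A1", "VOL12-A"): "synchronous",
    ("CTL#A2", "VOL34-A"): "asynchronous",                # assumed entry
    ("CTL#A2", "VOL35-A"): "asynchronous",                # assumed entry
}


def copy_mode_for_area(database_area_id: str) -> str:
    """Resolve whether writes to a database area are remote-copied
    synchronously or asynchronously by chaining the two tables."""
    _kind, ctl_id, dev_id = DB_DISK_MAPPING_TABLE[database_area_id]
    return REMOTE_COPY_MANAGEMENT_TABLE[(ctl_id, dev_id)]


if __name__ == "__main__":
    print(copy_mode_for_area("LOG1"))   # -> "synchronous"
    print(copy_mode_for_area("DB1"))    # -> "asynchronous"
```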
FIG. 5 is a diagram showing an example of a primary/secondary remote copy management table in the present embodiment. As shown inFIG. 5 , a copy mode indicating whether the write processing is conducted synchronously or asynchronously is stored in a primary remote copy management table 213 and a secondary remote copy management table 413. With respect to a disk control device number of a disk control device in which write processing is conducted with that copy mode, and a physical device ID of a magnetic disk device, IDs in theprimary disk subsystem 2 and thesecondary disk subsystem 4 are stored. - On the basis of information in the DB-disk mapping table 15 shown in
FIG. 4 and information in the primary remote copy management table 213 shown in FIG. 5, it can be determined whether each of the log block, DB block and status information is written into the secondary disk subsystem synchronously or asynchronously. For example, on the basis ofFIG. 4 , a log block in a database area ID “LOG1” is written into a magnetic disk device having a primary disk control device ID “CTL#A1” and a primary physical device ID “VOL12-A.” With reference toFIG. 5 , the copy mode for the magnetic disk device having the primary disk control device ID “CTL#A1” and the primary physical device ID “VOL12-A” is “synchronous.” Therefore, the log block in the database area ID “LOG1” is written into thesecondary disk subsystem 4 by the synchronous remote copy processing. - On the other hand, the system serving as the secondary system also has a similar configuration. The
primary disk subsystem 2 and thesecondary disk subsystem 4 are connected to each other via the network. In the standby state, thesecondary host computer 3 is not in operation. Thesecondary disk subsystem 4 receives the log block, DB block and status information from theprimary disk subsystem 2 via the network, and modifies disks respectively corresponding to them. - When acquiring a checkpoint, the
checkpoint processing section 112 in theprimary host computer 1 of the present embodiment stores all DB blocks modified on theDB buffer 12 in theprimary DB disk 24, and stores status information indicating the location of the log record at that time in theprimary status disk 25. Hereafter, this checkpoint acquisition processing will be described. -
FIG. 6 is a flow chart showing a processing procedure of the checkpoint acquisition processing in the present embodiment. When it has become necessary to force the contents of theDB buffer 12 in theprimary host computer 1 to a storage device in theprimary disk subsystem 2 serving as a disk subsystem in the primary system, thecheckpoint processing section 112 in theprimary host computer 1 conducts processing of transmitting a write request for all DB blocks modified in theDB buffer 12 and the status information indicating the location of the log record that is the latest at that time point, from theprimary host computer 1 to theprimary disk subsystem 2 as shown inFIG. 6 . - At
step 701, thecheckpoint processing section 112 generates a checkpoint acquisition start log, which indicates that the checkpoint acquisition has been started, and stores the checkpoint acquisition start log in the log block 262 a. - At
step 702, thecheckpoint processing section 112 generates a write command for writing all DB blocks modified on theDB buffer 12 into theprimary disk subsystem 2, transmits the write command to theprimary disk subsystem 2 to request theprimary disk subsystem 2 to write the DB blocks. Theprimary disk subsystem 2 receives the write command, writes the DB blocks into thecache memory 22, and forces contents of modification conducted in theDB buffer 12 to thecache memory 22. - Step 703 will be described at the end of the description of the present embodiment.
- At
step 704, a checkpoint acquisition end log, which indicates that the checkpoint acquisition has been finished, is generated and stored in the log block 262 a. - At
step 705, a write command for writing an LSN (Log Sequence Number) of the checkpoint acquisition end log into theprimary disk subsystem 2 as status information is generated, and the write command is transmitted to theprimary disk subsystem 2 to request theprimary disk subsystem 2 to write the status information. Upon receiving the write command in theprimary disk subsystem 2, the status information is written into theprimary status disk 25. - In the case where database processing in the primary database processing system is terminated abnormally because of a failure or the like and thereafter the processing in the primary database processing system is resumed, the state of the database that has been completed until immediately before the termination can be recovered by reading out a log record from a location indicated by status information in the
primary status disk 25 and modifying data in theprimary DB disk 24 according to contents of the log record. - Supposing that in the disaster recovery system of the present embodiment the
primary host computer 1 has requested theprimary disk subsystem 2 to write the log block, DB block or status information, processing conducted in theprimary disk subsystem 2 will now be described. -
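- Before that write processing is examined, the checkpoint acquisition of FIG. 6 just described (steps 701, 702, 704 and 705, with step 703 deferred to the end of the description) can be summarized in the following sketch. The function arguments stand in for write commands sent to the primary disk subsystem and are named only for illustration.

```python
# Minimal sketch of the checkpoint acquisition procedure of FIG. 6.

def acquire_checkpoint(db_buffer, write_log_record, write_db_blocks,
                       write_status, flush_queued_copies=None):
    # Step 701: record that checkpoint acquisition has started.
    write_log_record("checkpoint-start")

    # Step 702: force every DB block modified in the DB buffer to the
    # primary disk subsystem with a write command.
    dirty_blocks = [blk for blk in db_buffer if blk["dirty"]]
    write_db_blocks(dirty_blocks)

    # Step 703: described later in the text; represented here only by an
    # optional hook (e.g. pushing queued asynchronous copies to the secondary).
    if flush_queued_copies is not None:
        flush_queued_copies()

    # Step 704: record that checkpoint acquisition has finished.
    end_lsn = write_log_record("checkpoint-end")

    # Step 705: write the LSN of the checkpoint acquisition end log as
    # status information.
    write_status({"last_checkpoint_lsn": end_lsn})


if __name__ == "__main__":
    lsn = [0]

    def write_log_record(kind):
        lsn[0] += 1
        print("log:", lsn[0], kind)
        return lsn[0]

    acquire_checkpoint(
        db_buffer=[{"id": 7, "dirty": True}, {"id": 9, "dirty": False}],
        write_log_record=write_log_record,
        write_db_blocks=lambda blocks: print("force DB blocks:", blocks),
        write_status=lambda status: print("status:", status),
    )
```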
FIG. 7 is a flow chart showing a processing procedure taken in the present embodiment when a write command has been received. Upon receiving a command from theprimary host computer 1 as shown inFIG. 7 , thecommand processing section 211 in the primary disk subsystem analyzes the received command to find a command kind and an address to be accessed, and recognizes that the command is a write command (step 341). It is now supposed that a device ID requested to be accessed can be acquired from the address to be accessed by comparing the address to be accessed with information in a device configuration management table, which indicates addresses assigned to a plurality of disk subsystems and their magnetic disk devices. - Subsequently, it is determined whether data of the address to be accessed found at the
step 341 is held in thecache memory 22 in theprimary disk subsystem 2, and a cache hit miss decision is conducted (step 342). - In the case of a cache miss in which the data to be accessed is not held in the
cache memory 22, a transfer destination cache area is secured. The cache address of the transfer destination is managed and acquired by using a typical method such as a cache vacancy list. - If a cache hit is judged at the
step 342 to hold true, or securing of a cache area is finished at step 344, then modification of the data is conducted on the cache memory 22 in the primary disk subsystem 2 (step 345). In other words, contents of the DB block 242 a, the status information, or the log block 262 a received from the primary host computer 1 are written into the cache memory 22. - At
step 346, the primary remote copy management table 213 is referred to, and a copy mode corresponding to the primary disk control device ID and the primary physical device ID indicated by the address to be accessed is read out to make a decision whether the copy mode is “synchronous.” - If the copy mode is “synchronous” as a result of the decision, i.e., the received write request is a write request for the log block 262 a, then the processing proceeds to step 347. At the
step 347, completion of the synchronous remote copy is waited for, and thereby synchronous remote copy processing of the log block 262 a is conducted. - If the copy mode is “asynchronous,” i.e., the received write request is a write request of the DB block 242 a or the status information, then the processing proceeds to step 348. At the
step 348, the received data is temporarily stored in a queue in a memory or a magnetic disk in theprimary disk subsystem 2 in order to prepare for asynchronous remote copy processing to be conducted thereafter on thesecondary disk subsystem 4. - At
step 349, completion of the write command processing is reported to theprimary host computer 1. - The
primary disk subsystem 2 transmits the stored data to thesecondary disk subsystem 4, and executes asynchronous remote copy processing of the DB block or status information to thesecondary disk subsystem 4. -
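- A compact sketch of the write-command handling of FIG. 7 is given below. The cache, the copy-mode table and the link to the secondary are plain in-memory stand-ins, and the device identifiers are illustrative; the branching on the copy mode at step 346 is the essential point.

```python
from collections import deque

# Minimal sketch of the write-command handling of FIG. 7 (steps 341-349).

class PrimarySubsystem:
    def __init__(self, copy_mode_by_device, secondary_cache):
        self.cache = {}                         # steps 343-344: cache areas by address
        self.copy_mode_by_device = copy_mode_by_device
        self.secondary_cache = secondary_cache  # stands in for the secondary subsystem
        self.async_queue = deque()              # step 348: data kept for later transfer

    def handle_write(self, device_id, address, data):
        # Step 341: the command has been analyzed; kind = write, target known.
        # Steps 342-344: secure a cache area if the address misses the cache.
        # Step 345: modify the data on the cache memory.
        self.cache[(device_id, address)] = data

        # Step 346: read the copy mode configured for the target device.
        mode = self.copy_mode_by_device[device_id]
        if mode == "synchronous":
            # Step 347: remote-copy the block and wait for its completion.
            self.secondary_cache[(device_id, address)] = data
        else:
            # Step 348: queue the data for later asynchronous remote copy.
            self.async_queue.append((device_id, address, data))

        # Step 349: report write completion to the primary host computer.
        return "write-complete"


if __name__ == "__main__":
    secondary_cache = {}
    subsystem = PrimarySubsystem(
        copy_mode_by_device={"VOL12-A": "synchronous", "VOL34-A": "asynchronous"},
        secondary_cache=secondary_cache,
    )
    print(subsystem.handle_write("VOL12-A", 0, b"log block"))   # copied at once
    print(subsystem.handle_write("VOL34-A", 8, b"DB block"))    # queued
```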
FIG. 8 is a flow chart showing a processing procedure taken in the present embodiment when a read command has been received. Upon receiving a command from theprimary host computer 1 as shown inFIG. 8 , thecommand processing section 211 analyzes the received command to find a command kind and an address to be accessed, and recognizes that the command is a read access request (step 361). It is now supposed that a device ID requested to be accessed can be acquired from the address to be accessed. - Subsequently, it is determined whether data of the address to be accessed found at the
step 361 is held in thecache memory 22 in theprimary disk subsystem 2, and a cache hit miss decision is conducted (step 362). - In the case of a cache miss in which the data to be accessed is not held in the
cache memory 22, a device ID requested to be accessed is discriminated as described above, and the diskaccess control section 23 in theprimary disk subsystem 2 is requested to transfer from a magnetic disk device corresponding to the device ID to the cache memory 22 (step 363). In this case, the read processing is interrupted until the end of transfer (step 364), and the read processing is continued again after the end of the transfer processing. The cache address of the transfer destination may be managed and acquired by using a typical method such as a cache vacancy list. As for the address of the transfer destination, however, it is necessary to modify a cache management table and thereby conduct registration. - If a cache hit is judged at the
step 362 to hold true, or the transfer processing is finished atstep 364, then data in the cache memory in the disk subsystem is transferred to a channel (step 365). - Supposing that in the disaster recovery system of the present embodiment the
primary disk subsystem 2 has requested thesecondary disk subsystem 4 to write the log block synchronously or write the DB block or status information asynchronously, processing conducted in thesecondary disk subsystem 4 will now be described. -
FIG. 9 is a flow chart showing a processing procedure of data reception processing conducted by thesecondary disk subsystem 4 in the present embodiment. Upon receiving a command from theprimary host computer 1 as shown inFIG. 9 , the secondary remotecopy processing section 412 in thesecondary disk subsystem 4 analyzes the received command to find a command kind and an address to be accessed, and recognizes that the command is a remote copy command (step 421). It is now supposed that a device ID requested to be accessed can be discriminated from the address to be accessed. - Subsequently, it is determined whether data of the address to be accessed found at the
step 421 is held in thecache memory 42 in thesecondary disk subsystem 4, and a cache hit miss decision is conducted (step 422). - In the case of a cache miss in which the data to be accessed is not held in the
cache memory 42, a transfer destination cache area is secured. The cache address of the transfer destination may be managed and acquired by using a typical method such as a cache vacancy list. As for the address of the transfer destination, however, it is necessary to modify a cache management table and thereby conduct registration. - If a cache hit is judged at the
step 422 to hold true, or securing of a cache area is finished at step 424, then modification of the data is conducted on the cache memory 42 in the secondary disk subsystem 4 (step 425). In other words, contents of the DB block 242 a, the status information, or the log block 262 a received from the primary disk subsystem 2 are written into the cache memory 42. The case of the synchronous remote copy has heretofore been described. In the case where asynchronous remote copy is used and the sequentiality as described in JP-A-11-85408 entitled "Storage control apparatus" is guaranteed, it is necessary to ascertain, before modification on the cache, that all data that should have arrived by then are ready. - At
step 426, completion of the report copy command processing is reported to theprimary disk subsystem 2. - As for the log block write request, synchronous remote copy processing in the
secondary disk subsystem 4 synchronized with the writing in theprimary disk subsystem 2 is conducted, in the disaster recovery system of the present embodiment as described above. Therefore, it is possible to prevent that contents of transaction modification that has been completed in the primary system are lost in the secondary system. As for the DB block and status information writing, asynchronous remote copy processing in thesecondary disk subsystem 4, which is not synchronized with the writing in theprimary disk subsystem 2, is conducted. Therefore, performance degradation in the primary system can be prevented as far as possible. - If writing in the
secondary disk subsystem 4 is conducted as described above and thereafter a failure occurs in the primary database processing system and database processing is started in the secondary database processing system, then in DBMS start processing log information is read out from a location indicated by status information in thesecondary status disk 45, and the state of the database area in the primary system immediately before the occurrence of the failure is recovered on thesecondary DB disk 44 in thesecondary disk subsystem 4. -
FIG. 10 is a flow chart showing a processing procedure of the DBMS start processing in the present embodiment. If switching from the primary system to the secondary system is conducted and database processing in the secondary database processing system is started, then the DBaccess control section 311 in thesecondary host computer 3 orders thesecondary disk subsystem 4 to execute the DBMS start processing. - At
step 1201, thecommand processing section 411 in thesecondary disk subsystem 4 reads out a status file on thesecondary status disk 45, and acquires information indicating the state of the database. It is now supposed that information indicating that the DBMS is in operation is stored in the status file as information indicating the database state at the time of database processing start and information indicating that the DBMS has been normally finished is stored in the status file at the end of the database processing. - At
step 1202, it is determined whether the database processing of the last time was finished normally, by referring to the acquired information indicating the database state. If the acquired database state indicates that the DBMS is in operation, i.e., information indicating that the DBMS has been finished normally is not recorded in the status file, then the database processing of the last time is regarded as not having been finished normally and the processing proceeds to step 1203. - At the
step 1203, status information indicating a location of a log record at the time of immediately preceding checkpoint is referred to, and an input location of the log record is acquired. - At
step 1204, thesecondary log disk 46 is referred to in order to read out the log record from the acquired input location, and rollforward processing is conducted on the database area in thesecondary DB disk 44. - At
step 1205, rollback processing for canceling processing of uncompleted transactions among transactions subjected to the rollforward processing using the log record is conducted. - At
step 1206, information indicating that the DBMS is in operation and status information indicating the location of the log record after recovery are stored in the status file in thesecondary status disk 45. - In general, in the conventional DBMS, data modified in a transaction is not written to the storage in synchronism with the committing of the pertinent transaction in order to ensure the execution performance of the transaction, a trigger called checkpoint having a predetermined number of times of transaction occurrence or a predetermined time as a trigger is provided. Upon the trigger, DB data modified during that time is written to the storage. And DB contents modified after the checkpoint are written to the log disk. In restart processing at the time of server down, DB modification after the checkpoint is restored and recovered from modification history in the log disk.
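- The start processing of FIG. 10 can be sketched as follows. The status-file fields and the log-record layout are assumptions made only to keep the example runnable; they are not the formats of the embodiment.

```python
# Minimal sketch of the DBMS start processing of FIG. 10 (steps 1201-1206).

def dbms_start(status_file, log_records, db_area):
    # Step 1201: read the status file and obtain the database state.
    state = status_file["state"]

    # Step 1202: if the previous run did not finish normally, recover.
    if state != "normally finished":
        # Step 1203: obtain the log location of the immediately preceding checkpoint.
        start_lsn = status_file["last_checkpoint_lsn"]

        # Step 1204: rollforward the database area from that location.
        for rec in log_records:
            if rec["lsn"] >= start_lsn and rec["op"] == "update":
                db_area[rec["page"]] = rec["after_image"]

        # Step 1205: rollback updates of transactions that never committed.
        committed = {r["txn"] for r in log_records if r["op"] == "commit"}
        for rec in reversed(log_records):
            if (rec["lsn"] >= start_lsn and rec["op"] == "update"
                    and rec["txn"] not in committed):
                db_area[rec["page"]] = rec["before_image"]

    # Step 1206: record that the DBMS is now in operation.
    status_file["state"] = "in operation"
    return db_area


if __name__ == "__main__":
    status = {"state": "in operation", "last_checkpoint_lsn": 1}
    log = [
        {"lsn": 1, "txn": "T1", "op": "update", "page": "p1",
         "before_image": "old", "after_image": "new"},
        {"lsn": 2, "txn": "T1", "op": "commit"},
        {"lsn": 3, "txn": "T2", "op": "update", "page": "p2",
         "before_image": "x", "after_image": "y"},
    ]
    # Committed T1 is rolled forward; uncommitted T2 is rolled back.
    print(dbms_start(status, log, {"p1": "old", "p2": "x"}))
```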
- At the time of restart after the server shut down, from which location in which log disk log information should be forced after the latest checkpoint poses a problem. In general, such information is stored in a header portion or the like on the log disk. A log disk and a read location that becomes a subject of the force at the time of restart are determined on the basis of the information.
- In the case where a log disk is subjected to synchronous copy and a DB disk is subjected to asynchronous copy in such a conventional DBMS, there is a possibility that modification contents of DB subjected to checkpoint on the log disk in the main site have not been transferred. Modification contents of DB forced to the storage in the main site at the time of checkpoint are lost in the remote site, and mismatching is caused in the recovery.
- On the other hand, in the disaster recovery system of the present embodiment, a status file for managing a log disk input point at the time of checkpoint is provided so as to prevent mismatching from being caused in recovery in the
secondary disk subsystem 4 even if a log block is subjected to synchronous remote copy processing and a DB block is subjected to asynchronous remote copy processing. In addition, the status file is transferred in asynchronous remote copy processing, and the modification order between the status file and the DB block transferred asynchronously in the same way is guaranteed by thesecondary disk subsystem 4. - As a result, it is possible to refer to the status file on the
secondary status disk 45 at the time of database processing start after the switching from the primary system to the secondary system, and conduct recovery from a location indicated by the status information. - In the disaster recovery system of the present embodiment, a write request at the time of checkpoint is also transmitted to the
secondary disk subsystem 4 asynchronously as described above. In the case where a write request at the time of checkpoint has been issued, however, that write request and write requests temporarily stored until that time point for asynchronous writing may also be transmitted to thesecondary disk subsystem 4. -
FIG. 11 is a diagram showing an outline of processing conducted at the time of checkpoint in the present embodiment. If a primary DB volume checkpoint request for requesting a checkpoint of theprimary DB disk 24 is transmitted from theprimary host computer 1 as shown inFIG. 11 , then theprimary disk subsystem 2 transmits remote copy data temporarily stored in the queue in the memory or magnetic disk in theprimary disk subsystem 2 at that time to thesecondary disk subsystem 4, and transmits the DB block 242 a and status information received together with the primary DB volume checkpoint request to thesecondary disk subsystem 4. - The
secondary disk subsystem 4 writes all of the DB block 242 a and status information transmitted together with the write request into thecache 42, and then generates a remote copy completion notice, which indicates that the writing has been completed, and transmits the remote copy completion notice to theprimary disk subsystem 2. - Upon receiving the remote copy completion notice from the
secondary disk subsystem 4, theprimary disk subsystem 2 generates a primary DB volume checkpoint completion notice indicating that the checkpoint processing requested by theprimary host computer 1 has been completed, and transmits the primary DB volume checkpoint completion notice to theprimary host computer 1. - In the case where synchronization processing of the
primary disk subsystem 2 and thesecondary disk subsystem 4 is conducted at the time of the log block write request and the checkpoint request in the disaster recovery system of the present embodiment, the contents of modification in transactions completed in the primary system are prevented from being lost in the secondary system, and writing the DB block and status information is conducted collectively at the time of checkpoint. As compared with the case where all of the DB block and status information are transferred by using synchronous remote copy, therefore, performance degradation in the primary system can be prevented. Even in a configuration using a database management system that does not have a dedicated status file, DB modification data forced to the storage in the primary system at the time of checkpoint is not lost in the secondary system. - According to the disaster recovery system of the present embodiment, log information is modified by synchronous writing and database data and status information are modified by asynchronous writing, when writing to the secondary system is requested, as heretofore described. Therefore, the contents of modification in transactions completed in the primary system are prevented from being lost in the secondary system. It is possible to construct a disaster recovery system reduced in performance degradation in the primary system.
- According to the present invention, it becomes possible to reduce the possibility that modification contents of transactions completed in execution are lost in the transaction processing.
- It should be further understood by those skilled in the art that although the foregoing description has been made on embodiments of the invention, the invention is not limited thereto and various changes and modifications may be made without departing from the spirit of the invention and the scope of the appended claims.
Claims (9)
1. A computer system, which is a primary system to a secondary computer system comprising a secondary host computer and a secondary storage subsystem, the computer system comprising:
a primary host computer executing a primary database management system program corresponding to a secondary database management system program to be executed by the secondary host computer; and
a primary storage subsystem coupled to the primary host computer and the secondary storage subsystem,
wherein the primary storage subsystem, in response to a write request from the primary host computer, stores primary log information, primary database data, and primary status information, which are modified by the primary host computer based on the execution of the primary database management system program, the primary status information indicating a location of log information to be used at a time of switching transaction processing from the primary host computer to the secondary host computer,
wherein, to the secondary storage subsystem, the primary storage subsystem executes a synchronous remote copy of the primary log information and an asynchronous remote copy of the primary database data and the primary status information, the secondary database management system program causing the secondary host computer to:
(A) read a copy of the primary status information stored in the secondary storage subsystem, created by the asynchronous remote copy;
(B) based on the copy of the primary status information, decide locations on a copy of the primary log information created by the synchronous remote copy, to be used to modify a copy of the secondary database data created by the asynchronous remote copy;
(C) read a part of the copy of the primary log information indicated by the locations on the copy of the primary log information; and
(D) modify the copy of the primary database data according to the part of the secondary log information so that modification of a completed transaction processed by the primary host computer is stored in the copy of the primary database data.
2. A computer system according to claim 1 ,
wherein, as to the modification by the primary host computer, the primary database management system program causes the primary host computer to:
(i) modify the primary database data in the primary storage subsystem based on modification data temporarily buffered in the primary host computer;
(ii) store a checkpoint acquisition log to the primary log information in the primary storage subsystem; and
(iii) modify the primary status information in the primary storage subsystem after the processing of (i) and (ii).
3. A computer system according to claim 2 ,
wherein the primary storage subsystem includes a first primary disk, a second primary disk, and a third primary disk,
wherein the primary log information is stored in the first primary disk,
wherein the primary database data is stored in the second primary disk, and
wherein the primary status information is stored in the third primary disk.
4. A computer system according to claim 3 ,
wherein the step (D) comprises a roll-forward processing and a roll-back processing.
5. A computer system according to claim 4 ,
wherein the secondary database management system program causes the secondary host computer to:
(E) change an operation state from another state after completion of the roll-forward processing and the roll-back processing.
6. A disaster recovery method for a computer system being a primary system to a secondary computer system including a secondary host computer and a secondary storage subsystem, the computer system including a primary host computer and a primary storage subsystem, the method comprising:
by the primary host computer, executing a primary database management system program corresponding to a secondary database management system program to be executed by the secondary host computer;
by the primary storage subsystem, in response to a write request from a primary host computer, storing primary log information, primary database data, and primary status information, which are modified by the primary host computer based on the execution of the primary database management system program, the primary status information indicating a location of log information to be used at a time of switching transaction processing from the primary host computer to the secondary host computer; and
by the primary storage subsystem, executing a synchronous remote copy about the primary log information and an asynchronous remote copy about the primary database data and the primary status information, the secondary database management system program causing the secondary host computer to:
(A) read a copy of the primary status information stored in the secondary storage subsystem, created by the asynchronous remote copy;
(B) based on the copy of the primary status information, decide locations on a copy of the primary log information created by the synchronous remote copy, to be used to modify a copy of the primary database data created by the asynchronous remote copy;
(C) read a part of the copy of the primary log information indicated by the locations on the copy of the primary log information; and
(D) modify the copy of the primary database data according to the part of the copy of the primary log information so that a modification of a completed transaction processed by the primary host computer is stored in the copy of the primary database data.
7. A disaster recovery method according to claim 6 ,
wherein, as to the modification by the primary host computer, the primary database management system program causes the primary host computer to:
(i) modify the primary database data in the primary storage subsystem based on modification data temporarily buffered in the primary host computer;
(ii) store a checkpoint acquisition log to the primary log information in the primary storage subsystem; and
(iii) modify the primary status information in the primary storage subsystem after the processing of (i) and (ii).
8. A disaster recovery method according to claim 7 ,
wherein the step (D) comprises a roll-forward processing and a roll-back processing.
9. A disaster recovery method according to claim 8 ,
wherein the secondary database management system program causes the secondary host computer to:
(E) change an operation state from one state to another after completion of the roll-forward processing and the roll-back processing.
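On the primary side, the method claims hinge on treating log writes synchronously and database-data and status writes asynchronously. The sketch below shows one way that split write path could look; the PrimarySubsystem and SecondarySubsystem classes, queue, and method names are hypothetical, not an interface defined by the patent or any real storage product.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Dict, Tuple


@dataclass
class SecondarySubsystem:
    log: Dict[int, bytes] = field(default_factory=dict)
    data: Dict[int, bytes] = field(default_factory=dict)
    status: Dict[str, int] = field(default_factory=dict)


@dataclass
class PrimarySubsystem:
    remote: SecondarySubsystem
    log: Dict[int, bytes] = field(default_factory=dict)
    data: Dict[int, bytes] = field(default_factory=dict)
    status: Dict[str, int] = field(default_factory=dict)
    async_queue: Deque[Tuple[str, object, object]] = field(default_factory=deque)

    def write_log(self, lsn: int, record: bytes) -> None:
        self.log[lsn] = record
        # synchronous remote copy: mirrored before the write is acknowledged
        self.remote.log[lsn] = record

    def write_data(self, page_id: int, page: bytes) -> None:
        self.data[page_id] = page
        self.async_queue.append(("data", page_id, page))      # copied later

    def write_status(self, key: str, value: int) -> None:
        self.status[key] = value
        self.async_queue.append(("status", key, value))       # copied later

    def run_async_copy(self) -> None:
        # background task draining the asynchronous copy queue in order
        while self.async_queue:
            kind, key, value = self.async_queue.popleft()
            target = self.remote.data if kind == "data" else self.remote.status
            target[key] = value
```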
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/651,752 US20100121824A1 (en) | 2003-03-31 | 2010-01-04 | Disaster recovery processing method and apparatus and storage unit for the same |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-096725 | 2003-03-31 | ||
JP2003096725A JP4301849B2 (en) | 2003-03-31 | 2003-03-31 | Information processing method and its execution system, its processing program, disaster recovery method and system, storage device for executing the processing, and its control processing method |
US10/650,842 US7668874B2 (en) | 2003-03-31 | 2003-08-29 | Disaster recovery processing method and apparatus and storage unit for the same |
US12/651,752 US20100121824A1 (en) | 2003-03-31 | 2010-01-04 | Disaster recovery processing method and apparatus and storage unit for the same |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/650,842 Continuation US7668874B2 (en) | 2003-03-31 | 2003-08-29 | Disaster recovery processing method and apparatus and storage unit for the same |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100121824A1 true US20100121824A1 (en) | 2010-05-13 |
Family
ID=32985497
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/650,842 Expired - Fee Related US7668874B2 (en) | 2003-03-31 | 2003-08-29 | Disaster recovery processing method and apparatus and storage unit for the same |
US11/227,180 Expired - Lifetime US7562103B2 (en) | 2003-03-31 | 2005-09-16 | Disaster recovery processing method and apparatus and storage unit for the same |
US12/651,752 Abandoned US20100121824A1 (en) | 2003-03-31 | 2010-01-04 | Disaster recovery processing method and apparatus and storage unit for the same |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/650,842 Expired - Fee Related US7668874B2 (en) | 2003-03-31 | 2003-08-29 | Disaster recovery processing method and apparatus and storage unit for the same |
US11/227,180 Expired - Lifetime US7562103B2 (en) | 2003-03-31 | 2005-09-16 | Disaster recovery processing method and apparatus and storage unit for the same |
Country Status (2)
Country | Link |
---|---|
US (3) | US7668874B2 (en) |
JP (1) | JP4301849B2 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100161923A1 (en) * | 2008-12-19 | 2010-06-24 | Ati Technologies Ulc | Method and apparatus for reallocating memory content |
US20100332901A1 (en) * | 2009-06-30 | 2010-12-30 | Sun Microsystems, Inc. | Advice-based feedback for transactional execution |
US20120284722A1 (en) * | 2011-05-06 | 2012-11-08 | Ibm Corporation | Method for dynamically throttling transactional workloads |
US20130204843A1 (en) * | 2012-02-07 | 2013-08-08 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent CDP checkpoints |
US8689046B2 (en) | 2010-11-05 | 2014-04-01 | International Business Machines Corporation | System and method for remote recovery with checkpoints and intention logs |
US8775381B1 (en) * | 2011-05-14 | 2014-07-08 | Pivotal Software, Inc. | Parallel database mirroring |
WO2015056169A1 (en) * | 2013-10-16 | 2015-04-23 | Axxana (Israel) Ltd. | Zero-transaction-loss recovery for database systems |
US9020895B1 (en) * | 2010-12-27 | 2015-04-28 | Netapp, Inc. | Disaster recovery for virtual machines across primary and secondary sites |
US9195397B2 (en) | 2005-04-20 | 2015-11-24 | Axxana (Israel) Ltd. | Disaster-proof data recovery |
US10078558B2 (en) | 2014-01-10 | 2018-09-18 | Hitachi, Ltd. | Database system control method and database system |
US10379958B2 (en) | 2015-06-03 | 2019-08-13 | Axxana (Israel) Ltd. | Fast archiving for database systems |
US10515671B2 (en) | 2016-09-22 | 2019-12-24 | Advanced Micro Devices, Inc. | Method and apparatus for reducing memory access latency |
US10592326B2 (en) | 2017-03-08 | 2020-03-17 | Axxana (Israel) Ltd. | Method and apparatus for data loss assessment |
Families Citing this family (120)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7020599B1 (en) * | 2001-12-13 | 2006-03-28 | Oracle International Corporation (Oic) | Mean time to recover (MTTR) advisory |
JP2004178254A (en) * | 2002-11-27 | 2004-06-24 | Hitachi Ltd | Information processing system, storage system, storage device controller, and program |
JP4290975B2 (en) * | 2002-12-19 | 2009-07-08 | 株式会社日立製作所 | Database processing method and apparatus, processing program therefor, disaster recovery method and system |
US6973654B1 (en) * | 2003-05-27 | 2005-12-06 | Microsoft Corporation | Systems and methods for the repartitioning of data |
US8595185B2 (en) * | 2003-09-29 | 2013-11-26 | International Business Machines Corporation | Storage disaster recovery using a predicted superset of unhardened primary data |
US7441052B2 (en) * | 2003-09-29 | 2008-10-21 | Hitachi Data Systems Corporation | Methods and apparatuses for providing copies of stored data for disaster recovery and other uses |
US8214328B2 (en) * | 2003-11-25 | 2012-07-03 | International Business Machines Corporation | High-performance asynchronous peer-to-peer remote copy for databases |
JP4434857B2 (en) * | 2003-12-04 | 2010-03-17 | 株式会社日立製作所 | Remote copy system and system |
JP4305328B2 (en) * | 2003-12-24 | 2009-07-29 | 株式会社日立製作所 | Computer system and system switching control method using the same |
JP4551096B2 (en) * | 2004-02-03 | 2010-09-22 | 株式会社日立製作所 | Storage subsystem |
JP4578119B2 (en) * | 2004-02-23 | 2010-11-10 | 大日本印刷株式会社 | Information processing apparatus and security ensuring method in information processing apparatus |
US7457830B1 (en) * | 2004-02-27 | 2008-11-25 | Symantec Operating Corporation | Method and system of replicating data using a recovery data change log |
JP4452533B2 (en) | 2004-03-19 | 2010-04-21 | 株式会社日立製作所 | System and storage system |
JP2005276017A (en) | 2004-03-26 | 2005-10-06 | Hitachi Ltd | Storage system |
JP4631301B2 (en) * | 2004-03-31 | 2011-02-16 | 株式会社日立製作所 | Cache management method for storage device |
US7640274B2 (en) * | 2004-07-21 | 2009-12-29 | Tinker Jeffrey L | Distributed storage architecture based on block map caching and VFS stackable file system modules |
JP4662743B2 (en) * | 2004-09-13 | 2011-03-30 | Necインフロンティア株式会社 | Data duplex system |
US8032608B2 (en) | 2004-10-08 | 2011-10-04 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device notification access control |
US8115945B2 (en) * | 2004-10-08 | 2012-02-14 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device job configuration management |
US8125666B2 (en) | 2004-10-08 | 2012-02-28 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device document management |
US7870185B2 (en) | 2004-10-08 | 2011-01-11 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device event notification administration |
US8051125B2 (en) | 2004-10-08 | 2011-11-01 | Sharp Laboratories Of America, Inc. | Methods and systems for obtaining imaging device event notification subscription |
US8049677B2 (en) * | 2004-10-08 | 2011-11-01 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device display element localization |
US8384925B2 (en) * | 2004-10-08 | 2013-02-26 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device accounting data management |
US8156424B2 (en) * | 2004-10-08 | 2012-04-10 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device dynamic document creation and organization |
US8018610B2 (en) * | 2004-10-08 | 2011-09-13 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device remote application interaction |
US8051140B2 (en) | 2004-10-08 | 2011-11-01 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device control |
US8006292B2 (en) * | 2004-10-08 | 2011-08-23 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential submission and consolidation |
US8065384B2 (en) | 2004-10-08 | 2011-11-22 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device event notification subscription |
US8023130B2 (en) * | 2004-10-08 | 2011-09-20 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device accounting data maintenance |
US7966396B2 (en) | 2004-10-08 | 2011-06-21 | Sharp Laboratories Of America, Inc. | Methods and systems for administrating imaging device event notification |
US7920101B2 (en) * | 2004-10-08 | 2011-04-05 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device display standardization |
US8230328B2 (en) | 2004-10-08 | 2012-07-24 | Sharp Laboratories Of America, Inc. | Methods and systems for distributing localized display elements to an imaging device |
US7873553B2 (en) * | 2004-10-08 | 2011-01-18 | Sharp Laboratories Of America, Inc. | Methods and systems for authorizing imaging device concurrent account use |
US8024792B2 (en) | 2004-10-08 | 2011-09-20 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential submission |
US8060921B2 (en) | 2004-10-08 | 2011-11-15 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential authentication and communication |
US8035831B2 (en) * | 2004-10-08 | 2011-10-11 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device remote form management |
US8001586B2 (en) * | 2004-10-08 | 2011-08-16 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential management and authentication |
US20060077119A1 (en) * | 2004-10-08 | 2006-04-13 | Sharp Laboratories Of America, Inc. | Methods and systems for receiving content at an imaging device |
US7873718B2 (en) | 2004-10-08 | 2011-01-18 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device accounting server recovery |
US8060930B2 (en) | 2004-10-08 | 2011-11-15 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential receipt and authentication |
US8006176B2 (en) | 2004-10-08 | 2011-08-23 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging-device-based form field management |
US8115946B2 (en) | 2004-10-08 | 2012-02-14 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device job definition |
US8120797B2 (en) | 2004-10-08 | 2012-02-21 | Sharp Laboratories Of America, Inc. | Methods and systems for transmitting content to an imaging device |
US8120793B2 (en) | 2004-10-08 | 2012-02-21 | Sharp Laboratories Of America, Inc. | Methods and systems for displaying content on an imaging device |
US8120799B2 (en) | 2004-10-08 | 2012-02-21 | Sharp Laboratories Of America, Inc. | Methods and systems for accessing remote, descriptor-related data at an imaging device |
US8032579B2 (en) | 2004-10-08 | 2011-10-04 | Sharp Laboratories Of America, Inc. | Methods and systems for obtaining imaging device notification access control |
US8115947B2 (en) | 2004-10-08 | 2012-02-14 | Sharp Laboratories Of America, Inc. | Methods and systems for providing remote, descriptor-related data to an imaging device |
US7826081B2 (en) * | 2004-10-08 | 2010-11-02 | Sharp Laboratories Of America, Inc. | Methods and systems for receiving localized display elements at an imaging device |
US8001183B2 (en) * | 2004-10-08 | 2011-08-16 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device related event notification |
US8115944B2 (en) * | 2004-10-08 | 2012-02-14 | Sharp Laboratories Of America, Inc. | Methods and systems for local configuration-based imaging device accounting |
US8120798B2 (en) * | 2004-10-08 | 2012-02-21 | Sharp Laboratories Of America, Inc. | Methods and systems for providing access to remote, descriptor-related data at an imaging device |
US20060095536A1 (en) * | 2004-10-08 | 2006-05-04 | Rono Mathieson | Methods and systems for imaging device remote location functions |
US7970813B2 (en) * | 2004-10-08 | 2011-06-28 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device event notification administration and subscription |
US8001587B2 (en) | 2004-10-08 | 2011-08-16 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential management |
US8237946B2 (en) * | 2004-10-08 | 2012-08-07 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device accounting server redundancy |
US20060085430A1 (en) * | 2004-10-08 | 2006-04-20 | Sharp Laboratories Of America, Inc. | Methods and systems for accessing a remote file structure from an imaging device |
US8213034B2 (en) * | 2004-10-08 | 2012-07-03 | Sharp Laboratories Of America, Inc. | Methods and systems for providing remote file structure access on an imaging device |
US8171404B2 (en) * | 2004-10-08 | 2012-05-01 | Sharp Laboratories Of America, Inc. | Methods and systems for disassembly and reassembly of examination documents |
US7934217B2 (en) * | 2004-10-08 | 2011-04-26 | Sharp Laboratories Of America, Inc. | Methods and systems for providing remote file structure access to an imaging device |
US7969596B2 (en) * | 2004-10-08 | 2011-06-28 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device document translation |
US8006293B2 (en) * | 2004-10-08 | 2011-08-23 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device credential acceptance |
US20060077443A1 (en) * | 2004-10-08 | 2006-04-13 | Sharp Laboratories Of America, Inc. | Methods and systems for imaging device display coordination |
US7978618B2 (en) * | 2004-10-08 | 2011-07-12 | Sharp Laboratories Of America, Inc. | Methods and systems for user interface customization |
US8015234B2 (en) * | 2004-10-08 | 2011-09-06 | Sharp Laboratories Of America, Inc. | Methods and systems for administering imaging device notification access control |
JP4671399B2 (en) | 2004-12-09 | 2011-04-13 | 株式会社日立製作所 | Data processing system |
US20060155781A1 (en) * | 2005-01-10 | 2006-07-13 | Microsoft Corporation | Systems and methods for structuring distributed fault-tolerant systems |
US8428484B2 (en) * | 2005-03-04 | 2013-04-23 | Sharp Laboratories Of America, Inc. | Methods and systems for peripheral accounting |
JP4731975B2 (en) * | 2005-04-20 | 2011-07-27 | 株式会社日立製作所 | Database management method and storage system |
JP4699091B2 (en) * | 2005-05-31 | 2011-06-08 | 株式会社日立製作所 | Disaster recovery method and system |
US20070028144A1 (en) * | 2005-07-29 | 2007-02-01 | Stratus Technologies Bermuda Ltd. | Systems and methods for checkpointing |
US8615578B2 (en) * | 2005-10-07 | 2013-12-24 | Oracle International Corporation | Using a standby data storage system to detect the health of a cluster of data storage servers |
JP4668763B2 (en) * | 2005-10-20 | 2011-04-13 | 株式会社日立製作所 | Storage device restore method and storage device |
JP4762693B2 (en) * | 2005-11-22 | 2011-08-31 | 株式会社日立製作所 | File server, file server log management system, and file server log management method |
US20070118574A1 (en) * | 2005-11-22 | 2007-05-24 | Franklin William J | Reorganizing data with update activity |
JP4903461B2 (en) * | 2006-03-15 | 2012-03-28 | 株式会社日立製作所 | Storage system, data migration method, and server apparatus |
JP5165206B2 (en) * | 2006-03-17 | 2013-03-21 | 富士通株式会社 | Backup system and backup method |
JP4824458B2 (en) * | 2006-04-24 | 2011-11-30 | 株式会社日立製作所 | Computer system and method for reducing power consumption of storage system |
JP4833734B2 (en) * | 2006-05-19 | 2011-12-07 | 株式会社日立製作所 | Database system, storage device, initial copy method, and log application method |
US8345272B2 (en) * | 2006-09-28 | 2013-01-01 | Sharp Laboratories Of America, Inc. | Methods and systems for third-party control of remote imaging jobs |
US8060712B2 (en) * | 2007-04-13 | 2011-11-15 | International Business Machines Corporation | Remote mirroring between a primary site and a secondary site |
US7774646B2 (en) * | 2007-07-23 | 2010-08-10 | Netapp, Inc. | Surviving storage system takeover by replaying operations in an operations log mirror |
JP4521678B2 (en) * | 2007-11-19 | 2010-08-11 | フェリカネットワークス株式会社 | COMMUNICATION SYSTEM, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING DEVICE |
US20090164285A1 (en) * | 2007-12-20 | 2009-06-25 | International Business Machines Corporation | Auto-cascading clear to build engine for multiple enterprise order level parts management |
US8521682B2 (en) * | 2008-01-17 | 2013-08-27 | International Business Machines Corporation | Transfer of data from transactional data sources to partitioned databases in restartable environments |
US8156084B2 (en) * | 2008-01-17 | 2012-04-10 | International Business Machines Corporation | Transfer of data from positional data sources to partitioned databases in restartable environments |
US7933873B2 (en) * | 2008-01-17 | 2011-04-26 | International Business Machines Corporation | Handling transfer of bad data to database partitions in restartable environments |
US9201745B2 (en) * | 2008-01-23 | 2015-12-01 | Omx Technology Ab | Method of improving replica server performance and a replica server system |
US20090327601A1 (en) * | 2008-06-30 | 2009-12-31 | Shachar Fienblit | Asynchronous data mirroring with look-ahead synchronization record |
US8069322B2 (en) | 2008-08-15 | 2011-11-29 | International Business Machines Corporation | Active-active remote configuration of a storage system |
US8539175B2 (en) | 2010-09-21 | 2013-09-17 | International Business Machines Corporation | Transferring learning metadata between storage servers having clusters via copy services operations on a shared virtual logical unit that stores the learning metadata |
US9043283B2 (en) * | 2011-11-01 | 2015-05-26 | International Business Machines Corporation | Opportunistic database duplex operations |
KR101331452B1 (en) * | 2012-03-22 | 2013-11-21 | 주식회사 엘지씨엔에스 | Method for providing database management and the database management server there of |
US9135262B2 (en) * | 2012-10-19 | 2015-09-15 | Oracle International Corporation | Systems and methods for parallel batch processing of write transactions |
US9251002B2 (en) | 2013-01-15 | 2016-02-02 | Stratus Technologies Bermuda Ltd. | System and method for writing checkpointing data |
ES2652262T3 (en) | 2013-12-30 | 2018-02-01 | Stratus Technologies Bermuda Ltd. | Method of delaying checkpoints by inspecting network packets |
JP6518672B2 (en) | 2013-12-30 | 2019-05-22 | ストラタス・テクノロジーズ・バミューダ・リミテッド | Dynamic check pointing system and method |
EP3090336A1 (en) | 2013-12-30 | 2016-11-09 | Paul A. Leveille | Checkpointing systems and methods of using data forwarding |
US10152396B2 (en) * | 2014-05-05 | 2018-12-11 | Oracle International Corporation | Time-based checkpoint target for database media recovery |
WO2016139787A1 (en) * | 2015-03-04 | 2016-09-09 | 株式会社日立製作所 | Storage system and data writing control method |
US10083082B2 (en) * | 2015-09-07 | 2018-09-25 | International Business Machines Corporation | Efficient index checkpointing in log-structured object stores |
US10083089B2 (en) * | 2015-09-07 | 2018-09-25 | International Business Machines Corporation | Efficient index recovery in log-structured object stores |
JP2017091456A (en) * | 2015-11-17 | 2017-05-25 | 富士通株式会社 | Control device, control program, and control method |
US10303678B2 (en) * | 2016-06-29 | 2019-05-28 | International Business Machines Corporation | Application resiliency management using a database driver |
US10417198B1 (en) * | 2016-09-21 | 2019-09-17 | Wells Fargo Bank, N.A. | Collaborative data mapping system |
US10534676B2 (en) * | 2017-02-27 | 2020-01-14 | Sap Se | Measuring snapshot delay between source database system and its asynchronous replica |
CN108418859B (en) * | 2018-01-24 | 2020-11-06 | 华为技术有限公司 | Method and device for writing data |
US10564894B2 (en) * | 2018-03-20 | 2020-02-18 | Microsoft Technology Licensing, Llc | Free space pass-through |
US10592354B2 (en) | 2018-03-20 | 2020-03-17 | Microsoft Technology Licensing, Llc | Configurable recovery states |
JP7007017B2 (en) * | 2018-03-22 | 2022-01-24 | Necソリューションイノベータ株式会社 | Storage systems, control methods, and programs |
CN108776670B (en) * | 2018-05-11 | 2021-08-03 | 创新先进技术有限公司 | Remote disaster recovery method, system and electronic equipment |
US11188516B2 (en) | 2018-08-24 | 2021-11-30 | Oracle International Corporation | Providing consistent database recovery after database failure for distributed databases with non-durable storage leveraging background synchronization point |
CN110865945B (en) * | 2018-08-28 | 2022-11-11 | 上海忆芯实业有限公司 | Extended address space for memory devices |
US10963353B2 (en) * | 2018-10-23 | 2021-03-30 | Capital One Services, Llc | Systems and methods for cross-regional back up of distributed databases on a cloud service |
CN109460318B (en) * | 2018-10-26 | 2021-01-01 | 珠海市时杰信息科技有限公司 | Import method of rollback archive collected data, computer device and computer readable storage medium |
US10866869B2 (en) * | 2019-01-16 | 2020-12-15 | Vmware, Inc. | Method to perform crash and failure recovery for a virtualized checkpoint protected storage system |
CN110955647A (en) * | 2019-12-04 | 2020-04-03 | 世纪龙信息网络有限责任公司 | Database assistance method, database assistance device, computer equipment and storage medium |
CN111291008B (en) * | 2020-01-22 | 2023-04-25 | 阿里巴巴集团控股有限公司 | Data processing method, device, system, electronic equipment and computer storage medium |
CN113849846B (en) * | 2021-11-30 | 2022-03-11 | 山东捷瑞数字科技股份有限公司 | Log storage management system of multi-server website |
JP7629883B2 (en) | 2022-02-28 | 2025-02-14 | 株式会社日立製作所 | Storage control device and method |
Citations (58)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4751702A (en) * | 1986-02-10 | 1988-06-14 | International Business Machines Corporation | Improving availability of a restartable staged storage data base system that uses logging facilities |
US4821172A (en) * | 1986-09-04 | 1989-04-11 | Hitachi, Ltd. | Apparatus for controlling data transfer between storages |
US5170480A (en) * | 1989-09-25 | 1992-12-08 | International Business Machines Corporation | Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time |
US5276876A (en) * | 1990-05-16 | 1994-01-04 | International Business Machines Corporation | Registration of resources for commit procedures |
US5280611A (en) * | 1991-11-08 | 1994-01-18 | International Business Machines Corporation | Method for managing database recovery from failure of a shared store in a system including a plurality of transaction-based systems of the write-ahead logging type |
US5530855A (en) * | 1992-10-13 | 1996-06-25 | International Business Machines Corporation | Replicating a database by the sequential application of hierarchically sorted log records |
US5594900A (en) * | 1992-12-02 | 1997-01-14 | International Business Machines Corporation | System and method for providing a backup copy of a database |
US5745674A (en) * | 1995-06-07 | 1998-04-28 | International Business Machines Corp. | Management of units of work on a computer system log |
US5758355A (en) * | 1996-08-07 | 1998-05-26 | Aurum Software, Inc. | Synchronization of server database with client database using distribution tables |
US5781912A (en) * | 1996-12-19 | 1998-07-14 | Oracle Corporation | Recoverable data replication between source site and destination site without distributed transactions |
US6021408A (en) * | 1996-09-12 | 2000-02-01 | Veritas Software Corp. | Methods for operating a log device |
US6065018A (en) * | 1998-03-04 | 2000-05-16 | International Business Machines Corporation | Synchronizing recovery log having time stamp to a remote site for disaster recovery of a primary database having related hierarchial and relational databases |
US6163856A (en) * | 1998-05-29 | 2000-12-19 | Sun Microsystems, Inc. | Method and apparatus for file system disaster recovery |
US6173292B1 (en) * | 1998-03-04 | 2001-01-09 | International Business Machines Corporation | Data recovery in a transactional database using write-ahead logging and file caching |
US6173377B1 (en) * | 1993-04-23 | 2001-01-09 | Emc Corporation | Remote data mirroring |
US6178427B1 (en) * | 1998-05-07 | 2001-01-23 | Platinum Technology Ip, Inc. | Method of mirroring log datasets using both log file data and live log data including gaps between the two data logs |
US6226651B1 (en) * | 1998-03-27 | 2001-05-01 | International Business Machines Corporation | Database disaster remote site recovery |
US6289357B1 (en) * | 1998-04-24 | 2001-09-11 | Platinum Technology Ip, Inc. | Method of automatically synchronizing mirrored database objects |
US20020007468A1 (en) * | 2000-05-02 | 2002-01-17 | Sun Microsystems, Inc. | Method and system for achieving high availability in a networked computer system |
US20020049925A1 (en) * | 1995-06-09 | 2002-04-25 | Galipeau Kenneth J. | Backing up selected files of a computer system |
US6408370B2 (en) * | 1997-09-12 | 2002-06-18 | Hitachi, Ltd. | Storage system assuring data integrity and a synchronous remote data duplexing |
US20020095547A1 (en) * | 2001-01-12 | 2002-07-18 | Naoki Watanabe | Virtual volume storage |
US20020103815A1 (en) * | 2000-12-12 | 2002-08-01 | Fresher Information Corporation | High speed data updates implemented in an information storage and retrieval system |
US20020107878A1 (en) * | 2000-09-08 | 2002-08-08 | Masashi Tsuchida | Method and system for managing multiple database storage units |
US6446176B1 (en) * | 2000-03-09 | 2002-09-03 | Storage Technology Corporation | Method and system for transferring data between primary storage and secondary storage using a bridge volume and an internal snapshot copy of the data being transferred |
US20020133507A1 (en) * | 2001-03-16 | 2002-09-19 | Iti, Inc. | Collision avoidance in database replication systems |
US6467034B1 (en) * | 1999-03-26 | 2002-10-15 | Nec Corporation | Data mirroring method and information processing system for mirroring data |
US6466951B1 (en) * | 1999-02-10 | 2002-10-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Data base synchronizing system with at least two host databases and a remote database |
US20020188711A1 (en) * | 2001-02-13 | 2002-12-12 | Confluence Networks, Inc. | Failover processing in a storage system |
US20030074600A1 (en) * | 2000-04-12 | 2003-04-17 | Masaharu Tamatsu | Data backup/recovery system |
US6567928B1 (en) * | 2000-05-23 | 2003-05-20 | International Business Machines Corporation | Method and apparatus for efficiently recovering from a failure in a database that includes unlogged objects |
US20030126163A1 (en) * | 2001-12-28 | 2003-07-03 | Hong-Yeon Kim | Method for file deletion and recovery against system failures in database management system |
US20030126133A1 (en) * | 2001-12-27 | 2003-07-03 | Slamdunk Networks, Inc. | Database replication using application program event playback |
US6604118B2 (en) * | 1998-07-31 | 2003-08-05 | Network Appliance, Inc. | File system image transfer |
US6606694B2 (en) * | 2000-12-22 | 2003-08-12 | Bull Hn Information Systems Inc. | Write logging in mirrored disk subsystems |
US6615223B1 (en) * | 2000-02-29 | 2003-09-02 | Oracle International Corporation | Method and system for data replication |
US6643795B1 (en) * | 2000-03-30 | 2003-11-04 | Hewlett-Packard Development Company, L.P. | Controller-based bi-directional remote copy system with storage site failover capability |
US6658590B1 (en) * | 2000-03-30 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Controller-based transaction logging system for data recovery in a storage area network |
US6671705B1 (en) * | 1999-08-17 | 2003-12-30 | Emc Corporation | Remote mirroring system, device, and method |
US20040034670A1 (en) * | 2002-07-31 | 2004-02-19 | At&T Wireless Services, Inc. | Efficient synchronous and asynchronous database replication |
US20040044865A1 (en) * | 2000-03-31 | 2004-03-04 | Sicola Stephen J. | Method for transaction command ordering in a remote data replication system |
US20040064639A1 (en) * | 2000-03-30 | 2004-04-01 | Sicola Stephen J. | Controller-based remote copy system with logical unit grouping |
US6723123B1 (en) * | 1999-11-10 | 2004-04-20 | Impsa International Incorporated | Prosthetic heart valve |
US6732124B1 (en) * | 1999-03-30 | 2004-05-04 | Fujitsu Limited | Data processing system with mechanism for restoring file systems based on transaction logs |
US20040098371A1 (en) * | 2002-11-14 | 2004-05-20 | David Bayliss | Failure recovery in a parallel-processing database system |
US20040098425A1 (en) * | 2002-11-15 | 2004-05-20 | Sybase, Inc. | Database System Providing Improved Methods For Data Replication |
US20040107381A1 (en) * | 2002-07-12 | 2004-06-03 | American Management Systems, Incorporated | High performance transaction storage and retrieval system for commodity computing environments |
US20040133591A1 (en) * | 2001-03-16 | 2004-07-08 | Iti, Inc. | Asynchronous coordinated commit replication and dual write with replication transmission and locking of target database on updates only |
US20040139124A1 (en) * | 2002-12-19 | 2004-07-15 | Nobuo Kawamura | Disaster recovery processing method and apparatus and storage unit for the same |
US20040158588A1 (en) * | 2003-02-07 | 2004-08-12 | International Business Machines Corporation | Apparatus and method for coordinating logical data replication with highly available data replication |
US20040193625A1 (en) * | 2003-03-27 | 2004-09-30 | Atsushi Sutoh | Data control method for duplicating data between computer systems |
US6850958B2 (en) * | 2001-05-25 | 2005-02-01 | Fujitsu Limited | Backup system, backup method, database apparatus, and backup apparatus |
US6889231B1 (en) * | 2002-08-01 | 2005-05-03 | Oracle International Corporation | Asynchronous information sharing system |
US20050229021A1 (en) * | 2002-03-28 | 2005-10-13 | Clark Lubbers | Automatic site failover |
US20050262298A1 (en) * | 2002-03-26 | 2005-11-24 | Clark Lubbers | System and method for ensuring merge completion in a storage area network |
US20050262312A1 (en) * | 2002-07-30 | 2005-11-24 | Noboru Morishita | Storage system for multi-site remote copy |
US6983362B1 (en) * | 2000-05-20 | 2006-01-03 | Ciena Corporation | Configurable fault recovery policy for a computer system |
US7003694B1 (en) * | 2002-05-22 | 2006-02-21 | Oracle International Corporation | Reliable standby database failover |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001318801A (en) | 2000-05-10 | 2001-11-16 | Mitsubishi Electric Corp | Duplex computer system |
- 2003
  - 2003-03-31 JP JP2003096725A patent/JP4301849B2/en not_active Expired - Fee Related
  - 2003-08-29 US US10/650,842 patent/US7668874B2/en not_active Expired - Fee Related
- 2005
  - 2005-09-16 US US11/227,180 patent/US7562103B2/en not_active Expired - Lifetime
- 2010
  - 2010-01-04 US US12/651,752 patent/US20100121824A1/en not_active Abandoned
Patent Citations (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4751702A (en) * | 1986-02-10 | 1988-06-14 | International Business Machines Corporation | Improving availability of a restartable staged storage data base system that uses logging facilities |
US4821172A (en) * | 1986-09-04 | 1989-04-11 | Hitachi, Ltd. | Apparatus for controlling data transfer between storages |
US5170480A (en) * | 1989-09-25 | 1992-12-08 | International Business Machines Corporation | Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time |
US5276876A (en) * | 1990-05-16 | 1994-01-04 | International Business Machines Corporation | Registration of resources for commit procedures |
US5280611A (en) * | 1991-11-08 | 1994-01-18 | International Business Machines Corporation | Method for managing database recovery from failure of a shared store in a system including a plurality of transaction-based systems of the write-ahead logging type |
US5530855A (en) * | 1992-10-13 | 1996-06-25 | International Business Machines Corporation | Replicating a database by the sequential application of hierarchically sorted log records |
US5640561A (en) * | 1992-10-13 | 1997-06-17 | International Business Machines Corporation | Computerized method and system for replicating a database using log records |
US5594900A (en) * | 1992-12-02 | 1997-01-14 | International Business Machines Corporation | System and method for providing a backup copy of a database |
US6173377B1 (en) * | 1993-04-23 | 2001-01-09 | Emc Corporation | Remote data mirroring |
US20030005355A1 (en) * | 1993-04-23 | 2003-01-02 | Moshe Yanai | Remote data mirroring system using local and remote write pending indicators |
US6502205B1 (en) * | 1993-04-23 | 2002-12-31 | Emc Corporation | Asynchronous remote data mirroring system |
US20040073831A1 (en) * | 1993-04-23 | 2004-04-15 | Moshe Yanai | Remote data mirroring |
US5745674A (en) * | 1995-06-07 | 1998-04-28 | International Business Machines Corp. | Management of units of work on a computer system log |
US20020049925A1 (en) * | 1995-06-09 | 2002-04-25 | Galipeau Kenneth J. | Backing up selected files of a computer system |
US5758355A (en) * | 1996-08-07 | 1998-05-26 | Aurum Software, Inc. | Synchronization of server database with client database using distribution tables |
US6021408A (en) * | 1996-09-12 | 2000-02-01 | Veritas Software Corp. | Methods for operating a log device |
US5781912A (en) * | 1996-12-19 | 1998-07-14 | Oracle Corporation | Recoverable data replication between source site and destination site without distributed transactions |
US6408370B2 (en) * | 1997-09-12 | 2002-06-18 | Hitachi, Ltd. | Storage system assuring data integrity and a synchronous remote data duplexing |
US6065018A (en) * | 1998-03-04 | 2000-05-16 | International Business Machines Corporation | Synchronizing recovery log having time stamp to a remote site for disaster recovery of a primary database having related hierarchial and relational databases |
US6173292B1 (en) * | 1998-03-04 | 2001-01-09 | International Business Machines Corporation | Data recovery in a transactional database using write-ahead logging and file caching |
US6226651B1 (en) * | 1998-03-27 | 2001-05-01 | International Business Machines Corporation | Database disaster remote site recovery |
US6289357B1 (en) * | 1998-04-24 | 2001-09-11 | Platinum Technology Ip, Inc. | Method of automatically synchronizing mirrored database objects |
US6178427B1 (en) * | 1998-05-07 | 2001-01-23 | Platinum Technology Ip, Inc. | Method of mirroring log datasets using both log file data and live log data including gaps between the two data logs |
US6163856A (en) * | 1998-05-29 | 2000-12-19 | Sun Microsystems, Inc. | Method and apparatus for file system disaster recovery |
US6604118B2 (en) * | 1998-07-31 | 2003-08-05 | Network Appliance, Inc. | File system image transfer |
US6466951B1 (en) * | 1999-02-10 | 2002-10-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Data base synchronizing system with at least two host databases and a remote database |
US6467034B1 (en) * | 1999-03-26 | 2002-10-15 | Nec Corporation | Data mirroring method and information processing system for mirroring data |
US6732124B1 (en) * | 1999-03-30 | 2004-05-04 | Fujitsu Limited | Data processing system with mechanism for restoring file systems based on transaction logs |
US6671705B1 (en) * | 1999-08-17 | 2003-12-30 | Emc Corporation | Remote mirroring system, device, and method |
US6723123B1 (en) * | 1999-11-10 | 2004-04-20 | Impsa International Incorporated | Prosthetic heart valve |
US6615223B1 (en) * | 2000-02-29 | 2003-09-02 | Oracle International Corporation | Method and system for data replication |
US6446176B1 (en) * | 2000-03-09 | 2002-09-03 | Storage Technology Corporation | Method and system for transferring data between primary storage and secondary storage using a bridge volume and an internal snapshot copy of the data being transferred |
US20040064639A1 (en) * | 2000-03-30 | 2004-04-01 | Sicola Stephen J. | Controller-based remote copy system with logical unit grouping |
US6658590B1 (en) * | 2000-03-30 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Controller-based transaction logging system for data recovery in a storage area network |
US6643795B1 (en) * | 2000-03-30 | 2003-11-04 | Hewlett-Packard Development Company, L.P. | Controller-based bi-directional remote copy system with storage site failover capability |
US20040044865A1 (en) * | 2000-03-31 | 2004-03-04 | Sicola Stephen J. | Method for transaction command ordering in a remote data replication system |
US20030074600A1 (en) * | 2000-04-12 | 2003-04-17 | Masaharu Tamatsu | Data backup/recovery system |
US20020007468A1 (en) * | 2000-05-02 | 2002-01-17 | Sun Microsystems, Inc. | Method and system for achieving high availability in a networked computer system |
US6983362B1 (en) * | 2000-05-20 | 2006-01-03 | Ciena Corporation | Configurable fault recovery policy for a computer system |
US6567928B1 (en) * | 2000-05-23 | 2003-05-20 | International Business Machines Corporation | Method and apparatus for efficiently recovering from a failure in a database that includes unlogged objects |
US20020107878A1 (en) * | 2000-09-08 | 2002-08-08 | Masashi Tsuchida | Method and system for managing multiple database storage units |
US20020103815A1 (en) * | 2000-12-12 | 2002-08-01 | Fresher Information Corporation | High speed data updates implemented in an information storage and retrieval system |
US6606694B2 (en) * | 2000-12-22 | 2003-08-12 | Bull Hn Information Systems Inc. | Write logging in mirrored disk subsystems |
US20020095547A1 (en) * | 2001-01-12 | 2002-07-18 | Naoki Watanabe | Virtual volume storage |
US6748502B2 (en) * | 2001-01-12 | 2004-06-08 | Hitachi, Ltd. | Virtual volume storage |
US20020188711A1 (en) * | 2001-02-13 | 2002-12-12 | Confluence Networks, Inc. | Failover processing in a storage system |
US20040133591A1 (en) * | 2001-03-16 | 2004-07-08 | Iti, Inc. | Asynchronous coordinated commit replication and dual write with replication transmission and locking of target database on updates only |
US20020133507A1 (en) * | 2001-03-16 | 2002-09-19 | Iti, Inc. | Collision avoidance in database replication systems |
US6850958B2 (en) * | 2001-05-25 | 2005-02-01 | Fujitsu Limited | Backup system, backup method, database apparatus, and backup apparatus |
US20030126133A1 (en) * | 2001-12-27 | 2003-07-03 | Slamdunk Networks, Inc. | Database replication using application program event playback |
US20030126163A1 (en) * | 2001-12-28 | 2003-07-03 | Hong-Yeon Kim | Method for file deletion and recovery against system failures in database management system |
US7032131B2 (en) * | 2002-03-26 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | System and method for ensuring merge completion in a storage area network |
US20050262298A1 (en) * | 2002-03-26 | 2005-11-24 | Clark Lubbers | System and method for ensuring merge completion in a storage area network |
US20050229021A1 (en) * | 2002-03-28 | 2005-10-13 | Clark Lubbers | Automatic site failover |
US7003694B1 (en) * | 2002-05-22 | 2006-02-21 | Oracle International Corporation | Reliable standby database failover |
US20040107381A1 (en) * | 2002-07-12 | 2004-06-03 | American Management Systems, Incorporated | High performance transaction storage and retrieval system for commodity computing environments |
US20050262312A1 (en) * | 2002-07-30 | 2005-11-24 | Noboru Morishita | Storage system for multi-site remote copy |
US20040034670A1 (en) * | 2002-07-31 | 2004-02-19 | At&T Wireless Services, Inc. | Efficient synchronous and asynchronous database replication |
US6889231B1 (en) * | 2002-08-01 | 2005-05-03 | Oracle International Corporation | Asynchronous information sharing system |
US20040098371A1 (en) * | 2002-11-14 | 2004-05-20 | David Bayliss | Failure recovery in a parallel-processing database system |
US20040098425A1 (en) * | 2002-11-15 | 2004-05-20 | Sybase, Inc. | Database System Providing Improved Methods For Data Replication |
US20040139124A1 (en) * | 2002-12-19 | 2004-07-15 | Nobuo Kawamura | Disaster recovery processing method and apparatus and storage unit for the same |
US20040158588A1 (en) * | 2003-02-07 | 2004-08-12 | International Business Machines Corporation | Apparatus and method for coordinating logical data replication with highly available data replication |
US20040193625A1 (en) * | 2003-03-27 | 2004-09-30 | Atsushi Sutoh | Data control method for duplicating data between computer systems |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9195397B2 (en) | 2005-04-20 | 2015-11-24 | Axxana (Israel) Ltd. | Disaster-proof data recovery |
US20100161923A1 (en) * | 2008-12-19 | 2010-06-24 | Ati Technologies Ulc | Method and apparatus for reallocating memory content |
US9569349B2 (en) * | 2008-12-19 | 2017-02-14 | Ati Technologies Ulc | Method and apparatus for reallocating memory content |
US20100332901A1 (en) * | 2009-06-30 | 2010-12-30 | Sun Microsystems, Inc. | Advice-based feedback for transactional execution |
US8281185B2 (en) * | 2009-06-30 | 2012-10-02 | Oracle America, Inc. | Advice-based feedback for transactional execution |
US8689046B2 (en) | 2010-11-05 | 2014-04-01 | International Business Machines Corporation | System and method for remote recovery with checkpoints and intention logs |
US9020895B1 (en) * | 2010-12-27 | 2015-04-28 | Netapp, Inc. | Disaster recovery for virtual machines across primary and secondary sites |
US8689219B2 (en) * | 2011-05-06 | 2014-04-01 | International Business Machines Corporation | Systems and method for dynamically throttling transactional workloads |
US8707311B2 (en) * | 2011-05-06 | 2014-04-22 | International Business Machines Corporation | Method for dynamically throttling transactional workloads |
US20120284722A1 (en) * | 2011-05-06 | 2012-11-08 | Ibm Corporation | Method for dynamically throttling transactional workloads |
US20120284721A1 (en) * | 2011-05-06 | 2012-11-08 | International Business Machines Corporation | Systems and method for dynamically throttling transactional workloads |
US8775381B1 (en) * | 2011-05-14 | 2014-07-08 | Pivotal Software, Inc. | Parallel database mirroring |
US9792345B1 (en) | 2011-05-14 | 2017-10-17 | Pivotal Software, Inc. | Parallel database mirroring |
US8832037B2 (en) * | 2012-02-07 | 2014-09-09 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent CDP checkpoints |
US8959059B2 (en) | 2012-02-07 | 2015-02-17 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent CDP checkpoints |
US9176827B2 (en) | 2012-02-07 | 2015-11-03 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent CDP checkpoints |
US8868513B1 (en) * | 2012-02-07 | 2014-10-21 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent CDP checkpoints |
US20140298092A1 (en) * | 2012-02-07 | 2014-10-02 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent cdp checkpoints |
US20130204843A1 (en) * | 2012-02-07 | 2013-08-08 | Zerto Ltd. | Adaptive quiesce for efficient cross-host consistent CDP checkpoints |
WO2015056169A1 (en) * | 2013-10-16 | 2015-04-23 | Axxana (Israel) Ltd. | Zero-transaction-loss recovery for database systems |
US10769028B2 (en) | 2013-10-16 | 2020-09-08 | Axxana (Israel) Ltd. | Zero-transaction-loss recovery for database systems |
US10078558B2 (en) | 2014-01-10 | 2018-09-18 | Hitachi, Ltd. | Database system control method and database system |
US10379958B2 (en) | 2015-06-03 | 2019-08-13 | Axxana (Israel) Ltd. | Fast archiving for database systems |
US10515671B2 (en) | 2016-09-22 | 2019-12-24 | Advanced Micro Devices, Inc. | Method and apparatus for reducing memory access latency |
US10592326B2 (en) | 2017-03-08 | 2020-03-17 | Axxana (Israel) Ltd. | Method and apparatus for data loss assessment |
Also Published As
Publication number | Publication date |
---|---|
US20040193658A1 (en) | 2004-09-30 |
JP4301849B2 (en) | 2009-07-22 |
US20060010180A1 (en) | 2006-01-12 |
US7668874B2 (en) | 2010-02-23 |
JP2004303025A (en) | 2004-10-28 |
US7562103B2 (en) | 2009-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7562103B2 (en) | Disaster recovery processing method and apparatus and storage unit for the same | |
CN100585566C (en) | Restoration method and device for asynchronous transmission of database data using log synchronization | |
US7577788B2 (en) | Disk array apparatus and disk array apparatus control method | |
US7421614B2 (en) | Data synchronization of multiple remote storage after remote copy suspension | |
US7111139B2 (en) | Data synchronization of multiple remote storage | |
US7694177B2 (en) | Method and system for resynchronizing data between a primary and mirror data storage system | |
US6539462B1 (en) | Remote data copy using a prospective suspend command | |
US7565572B2 (en) | Method for rolling back from snapshot with log | |
US7234033B2 (en) | Data synchronization of multiple remote storage facilities | |
US7925633B2 (en) | Disaster recovery system suitable for database system | |
US6671705B1 (en) | Remote mirroring system, device, and method | |
US7539703B2 (en) | Setup method for disaster recovery system | |
US20090013012A1 (en) | Journal management method in cdp remote configuration | |
KR19980024086A (en) | Computer system and file management methods | |
JP4290975B2 (en) | Database processing method and apparatus, processing program therefor, disaster recovery method and system | |
JP4452494B2 (en) | Data synchronization method after stopping remote copy on multiple remote storages | |
JP4721057B2 (en) | Data management system, data management method, and data management program | |
JPH1185594A (en) | Information processing system for remote copy | |
JP2004272884A5 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |