Detailed Description
      For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
      In one exemplary configuration of the present application, the terminal and the devices of the service network each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
      The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
      Computer-readable media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer program instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Random Access Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device.
      Fig. 1 shows a flow chart of a data migration method according to an embodiment of the present application. The method at least comprises step S101, step S102, step S103 and step S104.
      In a practical scenario, the execution subject of the method may be a network device, or an application running on a network device, including but not limited to implementations such as a network host, a single network server, a set of multiple network servers, or a set of computers based on cloud computing. Here, the cloud is composed of a large number of hosts or network servers based on Cloud Computing, which is a kind of distributed computing: one virtual computer composed of a group of loosely coupled computers.
      In the process of migrating data from a source database to a key-value database, unlike traditional full-table-scan migration or Binlog-based synchronization, the embodiments of the present application execute the corresponding data write operation in the source database and record the write request in the key-value database under a target key name. The service does not need to be stopped during migration, the consistency and rollback capability of the data are ensured, and smooth migration and rollback are achieved without interrupting service, making the approach particularly suitable for high-concurrency scenarios.
      For example, in an actual application scenario of video metadata updating, information such as the video play address and whether the video is open needs to be updated. Since the queries per second (QPS) of system requests are typically on the order of tens of thousands, the system must constantly serve requests for video play addresses while updating information such as the video address and whether the video is open. Throughout this process the service cannot be suspended, otherwise the user experience is affected. The embodiments of the present application can extract the data to form a record without affecting the original read/write operations, so that the migrated data can be corrected later.
      The source database and the key-value database in the embodiments of the present application are heterogeneous databases. In some embodiments, the source database is a relational database.
      Referring to fig. 1, in step S101, a first data processing policy is executed to process a data write request of a client in response to a data migration instruction.
      Under the first data processing policy, the corresponding data write operation is executed in the source database, and the data write request is recorded in the key-value database under the target key name.
      The target key name is used in the key-value database to distinguish new and old data versions during migration. The embodiments of the present application can generate the target key name in various ways, such as adding a specific prefix or suffix to the original key name, inserting special characters in the middle, or hashing the original key name and using the resulting hash value as the corresponding target key name.
      According to one embodiment, the target key name is formed by adding a specific prefix before the original key name. For example, assuming the original key name is "key", the target key name "prefix_key" is formed by adding the prefix "prefix_". In this way, keys to be processed can be rapidly identified by their prefix, which facilitates subsequent data scanning.
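As an illustrative sketch of this prefix scheme (the prefix "prefix_" and the helper names are assumptions for illustration, not part of the application):

```python
TARGET_PREFIX = "prefix_"  # hypothetical prefix marking migration-era writes

def to_target_key(original_key: str) -> str:
    """Form the target key name by prepending the fixed prefix."""
    return TARGET_PREFIX + original_key

def is_target_key(key: str) -> bool:
    """Target keys can be rapidly identified by their prefix during scans."""
    return key.startswith(TARGET_PREFIX)

def to_original_key(target_key: str) -> str:
    """Recover the original key name by stripping the prefix."""
    return target_key[len(TARGET_PREFIX):]
```

For the original key name "key", to_target_key yields "prefix_key", matching the example above.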
      Wherein the data write request includes inserting new data (INSERT), updating data (UPDATE), deleting data (DELETE), and the like.
      Recording under the target key name in the key-value database creates a placeholder that identifies the data operation; the storage of specific values is not involved. The recorded content may include key information such as the type of the data operation (e.g., insert, update, or delete) and the time of the operation, without regard to the size or specific content of the actual stored value.
      According to one embodiment, the first data processing policy is implemented by a client agent. The method further comprises step S105, and step S101 comprises step S1011.
      Wherein the client agent is a middle tier component deployed on the client side for handling database related operations. For example, the source database is MySQL, and the client agent intercepts SQL statements and key operations via a Hook (Hook) mechanism of a database connection pool or ORM framework to execute the first data processing policy.
      In step S105, deployment and configuration of the agent program are performed in the client.
      Specifically, an agent is deployed between the application and the database, ensuring that it can intercept all database operation requests. Parameters of the proxy, such as database connection information, caching strategy, and load balancing configuration, are configured according to the requirements of the application. It should be noted that the client is directly connected to the database; that is, the client is a back-end service, and the database is accessed by that service after upper-layer requests reach it. The client agent of the embodiments of the present application intercepts requests directly within this back-end service.
      In step S1011, the agent is brought up in the client to execute a first data processing policy to process the data write request of the client.
      The data write flow of the first data processing policy executed by the client agent requires that, for each write request, the agent performs the following operations: when writing data to the source database, the data write operation is executed directly in the source database; when writing data to the key-value database, a specific prefix is added before the original key name of the data to be written to form the target key name, and the data write request is recorded under that target key name.
      The data read flow of the client agent executing the first data processing policy is that all data read requests acquire data from the source database, and the read result from the source database is authoritative.
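The write and read flows above might be sketched as follows, with plain dictionaries standing in for the source and key-value databases (all names here are illustrative assumptions, not the application's actual implementation):

```python
import time

source_db = {}      # stands in for the source (e.g. relational) database
kv_db = {}          # stands in for the key-value database
PREFIX = "prefix_"  # hypothetical prefix forming the target key name

def handle_write(op: str, key: str, value=None):
    """First data processing policy: execute the write in the source DB and
    record the request in the KV DB under the target key name."""
    # 1. Execute the operation directly in the source database.
    if op in ("insert", "update"):
        source_db[key] = value
    elif op == "delete":
        source_db.pop(key, None)
    # 2. Record the request under the prefixed target key name (no value stored).
    kv_db[PREFIX + key] = {"op": op, "ts": time.time()}

def handle_read(key):
    """All reads are served by the source database, whose result is authoritative."""
    return source_db.get(key)
```

Note that the KV record stores only the operation type and time, never the value itself, consistent with the placeholder recording described above.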
      Optionally, the agent is tested before formally online to ensure that it can properly intercept and process requests and that it does not negatively impact the normal operation of the application. After confirming that the agent configuration is correct and the test is correct, the agent is formally online, and the agent starts to process the database request of the application program.
      Optionally, after the online client agent, the performance and stability of the agent are continuously monitored to optimize and adjust the agent program according to the actual situation.
      Continuing with the description of fig. 1, in step S102, the target data in the source data is synchronized to the key-value database by performing asynchronous data processing.
      Wherein the target data corresponds to a user selected data segment.
      The user queries the data in the source database and selects the target data segment to be migrated. The data in the source database may be queried using the database's built-in query tools (e.g., SQL queries) or a third-party query tool. The target data segment may be selected manually, for example through a graphical interface or a command-line tool, or automatically according to preset rules (e.g., data type, data volume, time range, etc.).
      The method may adopt a segmented form to improve concurrency and greatly accelerate the data migration process. For example, in a MySQL scenario with a one-master multi-slave architecture, the data can be segmented by record id, with different slave nodes handling the data requests of different data segments; a single slave node can also execute multiple migration tasks concurrently, further improving concurrency and thereby accelerating data migration.
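A minimal sketch of segmenting data by id so that segments can be migrated concurrently (the function and its parameters are hypothetical illustrations of the segmentation idea):

```python
def split_by_id(min_id: int, max_id: int, num_segments: int):
    """Split the id range [min_id, max_id] into contiguous segments so that
    different slave nodes (or concurrent tasks on one node) can migrate
    them in parallel."""
    total = max_id - min_id + 1
    size = (total + num_segments - 1) // num_segments  # ceiling division
    segments = []
    lo = min_id
    while lo <= max_id:
        hi = min(lo + size - 1, max_id)
        segments.append((lo, hi))
        lo = hi + 1
    return segments
```

Each returned (lo, hi) pair can then be assigned to one migration task, e.g. a SELECT over `id BETWEEN lo AND hi` against one slave node.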
      According to one embodiment, the method batch-migrates the target data to the key-value database by asynchronous data processing, based on a data range and a number of migrations for the batch-migrated data set by the user.
      Specifically, the method obtains the data range and the number of migrations for the batch-migrated data set by the user. The user can flexibly define the data volume of each migration (i.e., the data range) and the number of migration batches to be performed (i.e., the number of migrations) according to actual requirements and system resource conditions. The user may determine the data range of the batch-migrated data based on a time range or data identifiers. For example, the user may specify 1000 records per migration for a total of 5 migrations, thereby migrating a total of 5000 pieces of target data into the key-value database in batches.
      And then, the method transfers the target data to the key value database in batches according to the data range and the transfer times set by the user. During each migration process, the system extracts corresponding amount of data according to the set data range and transmits the data to the key value database through an asynchronous processing mechanism. The method not only improves the flexibility of data migration, but also optimizes the migration process according to the actual demands of users, and ensures the high efficiency and reliability of data migration.
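The batch migration described above could be sketched as follows, where fetch_batch and write_kv are assumed callbacks for reading the source database and writing the key-value database (both hypothetical):

```python
def migrate_in_batches(fetch_batch, write_kv, batch_size: int, num_batches: int):
    """Migrate up to batch_size * num_batches records, one batch at a time.
    fetch_batch(offset, limit) reads rows from the source DB; write_kv(key,
    value) stores a record under its original key name in the KV database."""
    migrated = 0
    for batch_no in range(num_batches):
        rows = fetch_batch(batch_no * batch_size, batch_size)
        if not rows:
            break  # source exhausted before the configured batch count
        for key, value in rows:
            write_kv(key, value)
        migrated += len(rows)
    return migrated
```

With batch_size=1000 and num_batches=5 this reproduces the "5000 records in 5 batches" example above.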
      In step S103, in response to the instruction to complete the data migration, the executed first data processing policy is switched to a second data processing policy that causes the key-value database to operate on the data with the original key name.
      After data migration is completed, in the key value database, part of key names in the data from the source database are target key names, and the rest key names are original key names.
      Wherein the method may perform the operation of step S103 in response to an instruction to stop asynchronous data processing. Or the method may perform the operation of S103 based on an operation instruction from the user. Or the operation of step S103 may be performed when a preset time point is reached.
      It should be noted that, after the first data processing policy is switched to the second data processing policy, new data is inserted under its original key name. When an update or delete operation is performed on data, if the original key name of the data has no corresponding target-key-name form in the key-value database, the data was not created or updated before migration, and the update or delete operation can be performed directly. If a target-key-name form does exist, the data may have been created or updated before migration; performing the update or delete operation at this point may make the data temporarily inaccurate, and this problem is repaired by the subsequent scan operation.
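A sketch of the second data processing policy's write handling (a dictionary stands in for the key-value database; the prefix and the returned flag, which marks keys the subsequent scan must reconcile, are illustrative assumptions):

```python
PREFIX = "prefix_"  # hypothetical target-key-name prefix
kv_db = {}          # stands in for the key-value database after migration

def handle_write_v2(op: str, key: str, value=None):
    """Second data processing policy: operate on the KV DB with original key
    names. A key still present in target-key form may have been written
    before migration; the operation is applied anyway, and any temporary
    inaccuracy is repaired later by the cleanup scan."""
    pre_migration_write = (PREFIX + key) in kv_db
    if op in ("insert", "update"):
        kv_db[key] = value       # new/updated data uses the original key name
    elif op == "delete":
        kv_db.pop(key, None)
    return pre_migration_write   # True -> cleanup scan must reconcile this key
```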
      In step S104, the target key name in the key value database is cleaned up by scanning the target key name in the key value database.
      Specifically, scanning is performed in the key value database based on the target key name, the latest data is searched from the source database, and if no data corresponding to the target key name exists in the source database, the relevant record of the target key name in the key value database is deleted. If the data corresponding to the target key name exists in the source database, the latest value of the data in the source database is read, the latest value is inserted into the key value database by the original key name, and the related record of the target key name in the key value database is deleted after the data is confirmed to be correct.
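The cleanup scan of step S104 might look like the following sketch, again with dictionaries standing in for the two stores and a hypothetical prefix:

```python
PREFIX = "prefix_"  # hypothetical target-key-name prefix

def cleanup_target_keys(kv_db: dict, source_db: dict):
    """Scan target key names in the KV DB and reconcile with the source DB:
    if the source no longer holds the data, just drop the target-key record;
    otherwise re-read the latest source value, write it back under the
    original key name, then drop the target-key record."""
    for target_key in [k for k in kv_db if k.startswith(PREFIX)]:
        original_key = target_key[len(PREFIX):]
        if original_key in source_db:
            kv_db[original_key] = source_db[original_key]  # latest value wins
        del kv_db[target_key]
```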
      According to one embodiment, the method further comprises step S106.
      In step S106, the consistency of the migrated data in the source database and the key-value database is checked, and if there is an inconsistency, the discrepancy is repaired based on the migrated data in the source database.
      Specifically, by comparing the migrated data in the source database and the key-value database, it is determined whether the migrated data is consistent between the two. If it is inconsistent, the migrated data in the key-value database is updated based on the data in the source database, so that the migrated data becomes consistent in both databases.
      Optionally, in the whole migration process, continuously verifying whether the migrated data has an inconsistency problem in the source database and the key value database, and if so, repairing the discrepancy by taking the data in the source database as a reference, thereby maintaining the consistency of the data.
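A minimal sketch of this check-and-repair step, using the source database as the reference copy (dictionary stand-ins and names are illustrative assumptions):

```python
def verify_and_repair(source_db: dict, kv_db: dict):
    """Compare migrated data in both stores; on any mismatch, repair the
    KV DB using the source database as the reference, and report which
    keys were repaired."""
    repaired = []
    for key, src_value in source_db.items():
        if kv_db.get(key) != src_value:
            kv_db[key] = src_value
            repaired.append(key)
    return repaired
```

In practice this would run continuously (or periodically) during the migration window, as the paragraph above describes.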
      According to one embodiment, if the key-value database is problematic or consistency of the data needs to be verified, a rollback operation may be performed, the method further comprising step S107.
      In step S107, in response to the data rollback instruction, a corresponding rollback operation is performed on the data in the key-value database that needs to be rolled back.
      Wherein the rollback operation includes, but is not limited to, deleting data that has been migrated to the key-value store or overwriting data that needs to be rolled back with corresponding data in the source database.
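Both rollback variants mentioned above can be sketched as follows (the mode names and function signature are illustrative assumptions):

```python
def rollback(kv_db: dict, source_db: dict, keys, mode: str = "delete"):
    """Roll back migrated keys in the KV DB: either delete them, or
    overwrite them with the corresponding source-database values."""
    for key in keys:
        if mode == "delete":
            kv_db.pop(key, None)
        elif mode == "overwrite" and key in source_db:
            kv_db[key] = source_db[key]
    return kv_db
```

Because dual write keeps the source database fully up to date, the overwrite mode needs no separately maintained backup, as noted below.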
      Because the embodiments of the present application write data changes into both the source database and the key-value database in a dual-write manner during data migration, no additional backup of the source database is needed when rolling back data.
      According to the method of the embodiments, during migration the corresponding data write operation is executed in the source database while the write request is recorded in the key-value database under the target key name. Because only the target key record is kept, complex write operations are reduced, data migration becomes more convenient, the migration flow is simplified, and migration efficiency is improved. The data processing policies of the embodiments manage and synchronize the data across the two database systems, ensuring the consistency and rollback capability of the data and achieving smooth migration and rollback without interrupting service. Since migration is performed on data selected by the user, the user can flexibly choose which data to migrate, further improving data migration efficiency.
      The process of data migration of embodiments of the present application is described below in conjunction with an example.
      Referring to the exemplary data migration scenario shown in FIG. 2. The source database in this example is MySQL.
      The names appearing in fig. 2 are described below:
       client: the client; 
       proxy: the agent; 
       async: the asynchronous data synchronization operation; 
       KV: the key-value database (KV database); 
       MySQL: the MySQL database; 
       curd: the four basic actions of database operations, where C is Create (inserting new data into the database), U is Update (modifying data already present in the database), R is Retrieve (reading data from the database), and D is Delete (deleting data from the database). 
      The data migration process in this example includes 4 phases. The process is described below by steps P1 to P6, where step P1 corresponds to phase 1 in fig. 2, P2 corresponds to phase 2, steps P3 and P4 correspond to phase 3, and steps P5 and P6 correspond to phase 4.
      Steps P1 to P6 are described below:
       P1, executing a data double-write strategy at the client; 
       The client brings online a proxy that is responsible for implementing the data dual-write strategy. Through the dual-write strategy, all data changes are written into both MySQL and the key-value database simultaneously. 
      The data write flow of this strategy is that, for each write request (e.g., INSERT, UPDATE, DELETE), the agent performs the following operations: for writes to MySQL, the original SQL statement is executed directly to ensure real-time updating of the master database data; for writes to the KV database, the prefix "v2_" is added before the original key name "key" to form "v2_key", and the data write request is recorded under the key "v2_key".
      The data read flow of the strategy is that all read requests preferentially obtain data from MySQL, and the MySQL read result is authoritative.
      P2, synchronizing target data in MySQL to a KV database by asynchronous data processing;
       An initial migration time is selected, the MySQL data position at that time is recorded, and the corresponding data is migrated to the KV database by executing async according to the recorded MySQL data position. For example, a time T1 is selected and the corresponding data write position, say id=1000, is recorded; the data with ids 1 to 1000 is then migrated to the KV database. 
      P3, after migration is completed, switching the double write strategy to enable KV storage to operate in a normal key value mode;
       After the data migration is completed, all data from MySQL exists in the KV database under keys of the forms v2_key and key. The client switches the dual-write strategy so that the KV database starts to perform data operations in the normal key form. This marks the completion of the main phase of data migration, and the system begins to treat the KV database as the main operational object. 
      After switching to the KV database and beginning to operate with normal keys, update operations need special handling. If the updated key does not exist in v2_key form in the KV database, the data was not updated before migration, and the update or delete operation can be performed directly. However, if the updated key exists in v2_key form, the data may have been newly created or updated before migration. In that case the update or delete operation may cause temporary inaccuracy of the data, and the subsequent operation of P4 fixes this problem.
      P4, cleaning v2_key in KV storage;
       To ensure consistency of the data, v2_keys in the KV database are scanned and the latest data is looked up from MySQL. And if the data corresponding to the v2_key does not exist in the MySQL, deleting the relevant record of the v2_key in the KV database. If the data corresponding to the v2_key exists in the MySQL, reading the latest value of the data in the MySQL, inserting the latest value into the KV database by the key, and deleting the relevant record of the v2_key after confirming that the data is correct. 
      P5, checking data consistency, and repairing the difference by taking MySQL as a reference;
       In the whole migration process, continuously checking whether the migrated data has inconsistent problems in MySQL and KV, and if so, repairing the differences by taking the data in the MySQL database as a reference so as to maintain the data consistency. 
      P6, performing dual-write and dual-read of data in the MySQL and KV databases, and switching data between the MySQL and KV databases or performing a data rollback operation when needed.
      The approach of this example allows data to be migrated from MySQL to the KV database without interrupting service, while guaranteeing data consistency and the ability to roll back when necessary. For example, when the steps of this example are applied to an e-commerce order data migration scenario: the e-commerce system client brings the proxy online; for write requests on order data, the proxy writes to MySQL and the KV database simultaneously, adding a prefix before the original key name when writing to the KV database, while read requests preferentially acquire data from MySQL. A low-traffic business period is selected, the target order data in MySQL is asynchronously migrated to the KV database, and batch operations reduce the performance impact. After the data migration is completed and verified, the client switches the dual-write strategy, and the KV database starts to operate on data in the normal key form. The prefixed keys in the KV store are scanned and compared with the MySQL data, and the prefixed keys are deleted after inconsistencies are repaired. The data in both databases is continuously checked, and differences are repaired with the MySQL data as the reference. Dual write and dual read are maintained for a period of time, and the system can switch back to MySQL or roll back data if necessary. Consistent migration of e-commerce order data without downtime is thus achieved by means of the dual-write strategy, asynchronous migration, and data verification and repair.
      Fig. 3 is a schematic structural diagram of an apparatus for data migration according to an embodiment of the present application.
      The apparatus includes: means for executing a first data processing policy to process a data write request of a client in response to a data migration instruction (hereinafter referred to as the "first processing means 101"); means for synchronizing target data in the source data to a key-value database by performing asynchronous data processing (hereinafter referred to as the "asynchronous synchronizing means 102"); means for switching the executed first data processing policy to a second data processing policy in response to an instruction to complete data migration (hereinafter referred to as the "second processing means 103"); and means for cleaning up the target key names in the key-value database by scanning the target key names in the key-value database (hereinafter referred to as the "record cleaning means 104").
      Referring to fig. 3, in response to a data migration instruction, the first processing device 101 executes a first data processing policy to process a data write request of a client.
      Under the first data processing policy, the corresponding data write operation is executed in the source database, and the data write request is recorded in the key-value database under the target key name.
      The target key name is used in the key-value database to distinguish new and old data versions during migration. The embodiments of the present application can generate the target key name in various ways, such as adding a specific prefix or suffix to the original key name, inserting special characters in the middle, or hashing the original key name and using the resulting hash value as the corresponding target key name.
      According to one embodiment, the target key name is formed by adding a specific prefix before the original key name. For example, assuming the original key name is "key", the target key name "prefix_key" is formed by adding the prefix "prefix_". In this way, keys to be processed can be rapidly identified by their prefix, which facilitates subsequent data scanning.
      Wherein the data write request includes inserting new data (INSERT), updating data (UPDATE), deleting data (DELETE), and the like.
      Recording under the target key name in the key-value database creates a placeholder that identifies the data operation; the storage of specific values is not involved. The recorded content may include key information such as the type of the data operation (e.g., insert, update, or delete) and the time of the operation, without regard to the size or specific content of the actual stored value.
      According to one embodiment, the first data processing policy is implemented by a client agent. The apparatus further comprises a proxy deployment apparatus.
      Wherein the client agent is a middle tier component deployed on the client side for handling database related operations. For example, the source database is MySQL, and the client agent intercepts SQL statements and key operations via a Hook (Hook) mechanism of a database connection pool or ORM framework to execute the first data processing policy.
      The agent deployment device deploys and configures the agent program in the client.
      Specifically, an agent is deployed between the application and the database, ensuring that it can intercept all database operation requests. Parameters of the proxy, such as database connection information, caching strategy, and load balancing configuration, are configured according to the requirements of the application. It should be noted that the client is directly connected to the database; that is, the client is a back-end service, and the database is accessed by that service after upper-layer requests reach it. The client agent of the embodiments of the present application intercepts requests directly within this back-end service.
      The agent is brought up in the client, so that the first processing means 101 processes the data write request of the client by executing the first data processing policy.
      The data write flow of the first data processing policy executed by the client agent requires that, for each write request, the agent performs the following operations: when writing data to the source database, the data write operation is executed directly in the source database; when writing data to the key-value database, a specific prefix is added before the original key name of the data to be written to form the target key name, and the data write request is recorded under that target key name.
      The data read flow of the client agent executing the first data processing policy is that all data read requests acquire data from the source database, and the read result from the source database is authoritative.
      Optionally, the agent is tested before formally online to ensure that it can properly intercept and process requests and that it does not negatively impact the normal operation of the application. After confirming that the agent configuration is correct and the test is correct, the agent is formally online, and the agent starts to process the database request of the application program.
      Optionally, after the online client agent, the performance and stability of the agent are continuously monitored to optimize and adjust the agent program according to the actual situation.
      Continuing with the description of fig. 3, the asynchronous synchronization apparatus 102 synchronizes the target data in the source data to the key-value database by performing asynchronous data processing.
      Wherein the target data corresponds to a user selected data segment.
      The user queries the data in the source database and selects the target data segment to be migrated. The data in the source database may be queried using the database's built-in query tools (e.g., SQL queries) or a third-party query tool. The target data segment may be selected manually, for example through a graphical interface or a command-line tool, or automatically according to preset rules (e.g., data type, data volume, time range, etc.).
      The device may adopt a segmented form to improve concurrency and greatly accelerate the data migration process. For example, in a MySQL scenario with a one-master multi-slave architecture, the data can be segmented by record id, with different slave nodes handling the data requests of different data segments; a single slave node can also execute multiple migration tasks concurrently, further improving concurrency and thereby accelerating data migration.
      According to one embodiment, the apparatus batch-migrates the target data to the key-value database by asynchronous data processing based on the data range and the migration number of batch-migrated data set by the user.
      Specifically, the device acquires a data range and migration times of batch migration data set by a user. Specifically, the user can flexibly define the data volume (i.e. the data range) of each migration and the migration batch (i.e. the migration times) required to be performed according to the actual requirements and the system resource conditions. Wherein the user may determine a data range of the batch migration data based on the time range or the data identification. For example, the user may specify an amount of data per migration of 1000 records for a total of 5 migrates, thereby migrating a total of 5000 pieces of target data into the key-value database in batches.
      Then, the device transfers the target data to the key value database in batches according to the data range and the transfer times set by the user. During each migration process, the system extracts corresponding amount of data according to the set data range and transmits the data to the key value database through an asynchronous processing mechanism. The method not only improves the flexibility of data migration, but also optimizes the migration process according to the actual demands of users, and ensures the high efficiency and reliability of data migration.
      Continuing with the description of FIG. 3, in response to an instruction indicating that the data migration is complete, the second processing device 103 switches the first data processing policy being executed to a second data processing policy that causes the key-value database to operate on the data with the original key name.
      After data migration is completed, in the key-value database, some of the key names of the data from the source database are target key names, and the remaining key names are original key names.
      The operation of the second processing device 103 may be performed in response to an instruction to stop asynchronous data processing, based on an operation instruction from the user, or when a preset point in time is reached.
      It should be noted that, after the first data processing policy is switched to the second data processing policy, newly inserted data is inserted with its original key name. When an update or delete operation is performed on the data, if no target-key-name form of the data's original key name exists in the key-value database, the data was not created or updated before migration, and the corresponding update or delete operation can be performed directly. If a target-key-name form of the original key name does exist, the data may have been created or updated before migration, and performing the update or delete operation at this point may leave the data temporarily inaccurate; this problem is repaired by a subsequent scan operation.
      The record cleaning device 104 scans the target key names in the key-value database and cleans them up.
      Specifically, the key-value database is scanned based on the target key names, and the latest data is looked up in the source database. If no data corresponding to a target key name exists in the source database, the record associated with that target key name in the key-value database is deleted. If corresponding data does exist in the source database, the latest value of the data is read from the source database and inserted into the key-value database under the original key name, and the record associated with the target key name is deleted after the data is confirmed to be correct.
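      The cleaning scan can be sketched as below, reusing the assumed suffix convention for target key names from the earlier sketches. `source_lookup` is a hypothetical callable returning the latest source-database value for an original key, or `None` if the row no longer exists.

```python
MIG_SUFFIX = ":mig"  # hypothetical target-key-name marker

def clean_target_keys(kv, source_lookup):
    """Scan target key names in the key-value store and reconcile them.

    source_lookup(original_key) -> latest value from the source database,
    or None if the row no longer exists there.
    """
    for key in [k for k in kv if k.endswith(MIG_SUFFIX)]:  # snapshot of keys
        original = key[:-len(MIG_SUFFIX)]
        latest = source_lookup(original)
        if latest is None:
            del kv[key]            # no source row: drop the stale record
        else:
            kv[original] = latest  # re-insert latest value under original key
            del kv[key]            # then remove the target-key record

# Example: "a" still exists in the source; "b" was deleted there.
source = {"a": "fresh-a"}
kv = {"a:mig": "stale", "b:mig": "stale"}
clean_target_keys(kv, source.get)
```

      After the scan, only original key names remain, so the key-value database can serve all requests under the second data processing policy without further special handling.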
      According to one embodiment, the apparatus further comprises a consistency verification device.
      The consistency verification device verifies the consistency of the migrated data between the source database and the key-value database and, if an inconsistency exists, repairs the difference based on the migrated data in the source database.
      Specifically, the consistency verification device compares the migrated data in the source database and the key-value database to determine whether they are consistent. If they are inconsistent, the migrated data in the key-value database is updated based on the data in the source database, so that the migrated data is consistent in both databases.
      Optionally, throughout the migration process, the consistency verification device continuously verifies whether the migrated data is inconsistent between the source database and the key-value database; if an inconsistency is found, it repairs the difference using the data in the source database as the reference, thereby maintaining data consistency.
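      The verify-and-repair step can be sketched as follows, again with dictionaries standing in for the two databases; the function name and key-list parameter are illustrative, not from the embodiment. The source database is treated as authoritative, as the text specifies.

```python
def verify_and_repair(source, kv, keys):
    """Compare migrated keys between the source database and the key-value
    store; repair the key-value side from the source when they differ.
    Returns the list of keys that were repaired."""
    repaired = []
    for key in keys:
        if kv.get(key) != source.get(key):
            if key in source:
                kv[key] = source[key]  # source value is authoritative
            else:
                kv.pop(key, None)      # row gone from source: remove it
            repaired.append(key)
    return repaired

# Example: "b" diverged in the key-value store, "c" no longer exists in source.
source = {"a": 1, "b": 2}
kv = {"a": 1, "b": 99, "c": 3}
fixed = verify_and_repair(source, kv, ["a", "b", "c"])
```

      Running this comparison continuously during migration, rather than once at the end, is what lets inconsistencies be repaired before they can be observed by readers of the key-value database.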
      According to one embodiment, if a problem occurs in the key-value database or the consistency of the data needs to be verified, a rollback operation may be performed; the apparatus further comprises a rollback execution device.
      In response to a data rollback instruction, the rollback execution device performs a corresponding rollback operation on the data in the key-value database that needs to be rolled back.
      The rollback operation includes, but is not limited to, deleting data that has been migrated to the key-value database or overwriting the data to be rolled back with the corresponding data in the source database.
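      Both rollback variants can be sketched in one function. The `mode` parameter and in-memory dictionaries are illustrative assumptions; the embodiment only requires that rollback either deletes migrated data or restores it from the source database.

```python
def rollback(kv, source, keys, mode="delete"):
    """Roll back the given migrated keys in the key-value store.

    mode="delete":    remove keys that were migrated;
    mode="overwrite": overwrite them with the source database's values.
    """
    for key in keys:
        if mode == "delete":
            kv.pop(key, None)
        elif mode == "overwrite" and key in source:
            kv[key] = source[key]

# Example: delete migrated key "b", restore key "a" from the source.
source = {"a": "orig"}
kv = {"a": "migrated", "b": "migrated"}
rollback(kv, source, ["b"])
rollback(kv, source, ["a"], mode="overwrite")
```

      Note that the source database itself is never modified here, which is consistent with the double-write design: the source already holds every change, so rollback only needs to act on the key-value side.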
      Because the embodiment of the present application writes data changes to both the source database and the key-value database in a double-write mode during data migration, no additional backup of the source database is required when performing a data rollback.
      With this apparatus, during migration, the corresponding data write operation is executed in the source database while the write request is recorded in the key-value database under the target key name. Recording only the target key value reduces complex write operations, makes data migration more convenient, simplifies the migration flow, and improves migration efficiency. The data in the two database systems is managed and synchronized through the data processing policies of the embodiments of the present application, ensuring the consistency and rollback capability of the data and realizing smooth migration and rollback without interrupting service. Moreover, because data migration is performed based on data selected by the user, the user can flexibly choose which data to migrate, further improving data migration efficiency.
      Based on the same inventive concept, the embodiment of the present application further provides an electronic device, where the corresponding method of the electronic device may be the data migration method in the foregoing embodiment, and the principle of solving the problem is similar to that of the method. The electronic device provided by the embodiment of the application comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method and/or the technical scheme of the plurality of embodiments of the application.
      The electronic device may be a user device, a device formed by integrating a user device and a network device through a network, or an application running on such a device. The user device includes, but is not limited to, computers, mobile phones, tablet computers, smart watches, wristbands, and other terminal devices; the network device includes, but is not limited to, a network host, a single network server, a set of multiple network servers, or a cloud-computing-based computer set, and may be used to implement part of the processing functions. Here, the cloud is composed of a large number of hosts or web servers based on cloud computing (a kind of distributed computing): one virtual computer composed of a group of loosely coupled computers.
      Fig. 4 shows the structure of a device suitable for implementing the methods and/or technical solutions in the embodiments of the present application. The device 1200 includes a central processing unit (CPU) 1201, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1202 or a program loaded from a storage portion 1208 into a random access memory (RAM) 1203. The RAM 1203 also stores various programs and data required for system operation. The CPU 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
      Connected to the I/O interface 1205 are: an input portion 1206 including a keyboard, a mouse, a touch screen, a microphone, an infrared sensor, and the like; an output portion 1207 including a display such as a cathode ray tube (CRT), liquid crystal display (LCD), LED display, or OLED display, and a speaker; a storage portion 1208 including one or more computer-readable media such as a hard disk, optical disk, magnetic disk, or semiconductor memory; and a communication portion 1209 including a network interface card such as a LAN (local area network) card or a modem. The communication portion 1209 performs communication processing via a network such as the Internet.
      In particular, the methods and/or embodiments of the present application may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 1201.
      Another embodiment of the present application also provides a computer readable storage medium having stored thereon computer program instructions executable by a processor to implement the method and/or the technical solution of any one or more of the embodiments of the present application described above.
      In particular, the present embodiments may employ any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
      The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
      Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
      Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
      The flowchart or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
      It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
      In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
      The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
      In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
      The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform part of the steps of the methods described in the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
      It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same, and although the present application has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present application.
      Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.