Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
The invention aims to solve the above problems and provides a method and a system in which deep learning participates in SQL optimization for an HTAP database. A deep learning model is used to automatically optimize SQL queries, avoiding manual tuning of the query engine, improving query speed, and reducing memory occupation, thereby improving machine performance.
The technical solution of the invention is a method in which deep learning participates in SQL optimization for an HTAP database, comprising the following steps:
Step 1: an SQL parser receives an SQL statement and checks whether it conforms to the specification;
Step 2: an execution plan module generates an execution plan through a deep learning model based on SQL statements that have been checked by the SQL parser and conform to the specification;
Step 3: an engine executes the execution plan generated in Step 2 and sends the SQL execution result and detailed information about the SQL execution process to the training set of the deep learning model;
Step 4: the deep learning model learns and trains on the training set, then feeds the learning and training results back to the execution plan module as an optimal optimization scheme; after receiving the feedback, the execution plan module automatically optimizes the execution plan and applies it to the next SQL query.
According to one embodiment of the method of the present invention, the execution plan generated in Step 2 is divided into a row-based plan and a column-based plan, and the engine in Step 3 is divided into a row-based execution engine that executes the row-based plan and a vectorized execution engine that executes the column-based plan.
According to an embodiment of the method of the present invention, generating the execution plan through the deep learning model in Step 2 further comprises:
the execution plan module utilizes the input layer, hidden layer, and output layer of a recurrent neural network: the input layer receives SQL execution results and detailed information about the SQL execution process, the hidden layer performs cost calculation, and the output layer outputs SQL statements and an optimal execution plan.
According to one embodiment of the method of the present invention, in the cost calculation of the hidden layer, cost estimation is performed on the SQL execution results and the detailed information about the SQL execution process; the loss function parameters are continuously adjusted during loop iteration to reduce the cost, and when the reduced cost is better than the expected value of the cost estimation, the optimization result of the loop iteration is constructed into the defined execution plan data structure and fed back to the execution plan module.
According to an embodiment of the method of the present invention, Step 4 further comprises:
an interaction protocol is preset between the execution plan module and the deep learning model, and the data structures of the execution plan are defined in advance;
the deep learning model learns and trains on the training set data sent by the engine in Step 3, quantifies and designs feedback for targeted points, and finally feeds the learning and training results back to the execution plan module, so that after receiving the feedback the execution plan module directly modifies the data structure of the corresponding execution plan in the in-memory abstract syntax tree to optimize it, ensuring that the next query can apply the optimized feedback.
The invention also discloses a system in which deep learning participates in SQL optimization for an HTAP database, comprising:
an SQL parser for receiving an SQL statement and checking whether it conforms to the specification;
an execution plan module for generating an execution plan through a deep learning model based on SQL statements that have been checked by the SQL parser and conform to the specification;
an engine module for executing the generated execution plan and sending the SQL execution result and detailed information about the SQL execution process to the training set of the deep learning model;
and a deep learning model for learning and training on the training set, the learning and training results being fed back to the execution plan module as an optimal optimization scheme, so that after receiving the feedback the execution plan module automatically optimizes the execution plan and applies it to the next SQL query.
According to one embodiment of the system of the present invention, the execution plans generated by the execution plan module are divided into row-based plans and column-based plans, and the engine module is divided into a row-based execution engine that executes the row-based plans and a vectorized execution engine that executes the column-based plans.
According to an embodiment of the system of the present invention, generating the execution plan through the deep learning model in the execution plan module further comprises:
the execution plan module utilizes the input layer, hidden layer, and output layer of a recurrent neural network: the input layer receives SQL execution results and detailed information about the SQL execution process, the hidden layer performs cost calculation, and the output layer outputs SQL statements and an optimal execution plan.
According to one embodiment of the system of the present invention, in the cost calculation of the hidden layer, the execution plan module performs cost estimation on the SQL execution results and the detailed information about the SQL execution process; the loss function parameters are continuously adjusted during loop iteration to reduce the cost, and when the reduced cost is better than the expected value of the cost estimation, the optimization result of the loop iteration is constructed into the defined execution plan data structure and fed back to the execution plan module.
According to an embodiment of the system of the present invention, the deep learning model is further configured such that:
an interaction protocol is preset between the execution plan module and the deep learning model, and the data structures of the execution plan are defined in advance;
the deep learning model learns and trains on the training set data sent by the engine module, quantifies and designs feedback for targeted points, and finally feeds the learning and training results back to the execution plan module, so that after receiving the feedback the execution plan module directly modifies the data structure of the corresponding execution plan in the in-memory abstract syntax tree to optimize it, ensuring that the next query can apply the optimized feedback.
Compared with the prior art, the present scheme has the advantage that, for an HTAP database, after each SQL execution the execution result and execution process are sent to the deep learning model for training and optimization-scheme prediction; the prediction result is fed back to the execution plan module, which automatically optimizes and generates a plan according to the feedback and applies the generated plan to the next query. After the feedback takes effect, the whole SQL execution flow is more efficient and occupies less memory.
Detailed Description
The invention is described in detail below with reference to the drawings and the specific embodiments. It is noted that the aspects described below in connection with the drawings and the specific embodiments are merely exemplary and should not be construed as limiting the scope of the invention in any way.
FIG. 1 illustrates the flow of one embodiment of the method of the present invention in which deep learning participates in SQL optimization for an HTAP database. Referring to FIG. 1, the implementation steps of the method of this embodiment are described in detail below.
In Step 1, the SQL parser (SQL Parser) receives an SQL statement and checks whether it complies with the specification, which typically includes checking whether the SQL format and syntax are correct.
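As a rough, hypothetical sketch of this Step 1 check (the `check_sql` helper and its two rules are illustrative stand-ins, not the actual parser):

```python
# Hypothetical sketch of the Step 1 specification check: verify the statement
# starts with a known SQL keyword and has balanced parentheses before handing
# it on. A real parser does a full grammar check; these rules are assumptions.
KNOWN_KEYWORDS = ("SELECT", "INSERT", "UPDATE", "DELETE")

def check_sql(statement: str) -> bool:
    s = statement.strip()
    if not s:
        return False
    # Format check: the statement must begin with a supported keyword.
    if not s.upper().startswith(KNOWN_KEYWORDS):
        return False
    # Syntax smoke test: parentheses must be balanced.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(check_sql("SELECT id FROM t WHERE x IN (1, 2)"))  # True
print(check_sql("SELEC id FROM t"))                     # False
```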
In Step 2, the execution plan module generates an execution plan through the deep learning model based on the SQL statements that have been checked by the SQL parser and conform to the specification.
As shown in FIG. 1, the execution plan module generates a logical layer plan and then generates a physical layer plan based on it. Since the HTAP database is a row-column hybrid database, the physical layer plan is divided into a row-based plan and a column-based plan.
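The row/column split above can be sketched as follows; the `LogicalPlan` type and the routing rule (analysis-heavy queries go to the column-based plan) are illustrative assumptions:

```python
# Hypothetical sketch: the execution plan module first builds a logical plan,
# then lowers it to one of two physical plans for the row-column hybrid store.
from dataclasses import dataclass

@dataclass
class LogicalPlan:
    query: str
    analytical: bool  # True for analysis-heavy (OLAP-style) queries

def to_physical(plan: LogicalPlan) -> str:
    # Column-based plans go to the vectorized engine, row-based plans to the
    # row engine; this routing rule is an illustrative assumption.
    return "column-based" if plan.analytical else "row-based"

print(to_physical(LogicalPlan("SELECT avg(x) FROM t", analytical=True)))   # column-based
print(to_physical(LogicalPlan("UPDATE t SET x = 1", analytical=False)))    # row-based
```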
The specific process of the execution plan module generating an execution plan based on the deep learning model is as follows.
The execution plan module utilizes a recurrent neural network structure (comprising an input layer, a hidden layer, and an output layer): the input layer receives SQL execution results and detailed information about the SQL execution process, the hidden layer performs cost calculation, and the output layer outputs SQL statements and an optimal execution plan.
In the cost calculation of the hidden layer, cost estimation is performed on the SQL execution results and the detailed information about the SQL execution process; the loss function parameters are continuously adjusted during loop iteration (the iteration proceeds for a preset number of iterations) to reduce the cost, and when the reduced cost is better than the expected value of the cost estimation, the optimization result of the loop iteration is constructed into the defined execution plan data structure and fed back to the execution plan module.
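The hidden-layer loop just described might be sketched as follows; the cost function, the parameter-adjustment step, and the iteration cap are stand-ins, not the actual model:

```python
# Illustrative sketch of the hidden-layer cost loop: adjust a stand-in
# parameter for a preset number of iterations and emit a plan structure once
# the cost beats the expected value from cost estimation.
def optimize_plan(expected_cost: float, max_iters: int = 100):
    param = 10.0              # stand-in for the loss function parameters
    best = None
    for _ in range(max_iters):
        cost = param * param  # stand-in cost estimate for the current plan
        if cost < expected_cost:
            # Construct the defined execution-plan data structure.
            best = {"plan": "optimized", "cost": cost}
            break
        param *= 0.9          # adjust parameters to reduce the cost
    return best               # None if no plan beat the expected value

result = optimize_plan(expected_cost=25.0)
print(result["cost"] < 25.0)  # True
```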
Cost estimation is divided into two stages: a cost estimation analysis stage and a learning optimization stage.
The cost estimation analysis stage covers several execution conditions such as full-table scan estimation, ordinary index estimation, and multi-table join queries. The deep learning model performs an overall estimation of these different execution conditions to obtain the expected value of the cost estimation, which includes the I/O cost (number of I/O operations), the CPU computation cost, the memory occupation cost, and so on.
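A minimal sketch of combining these cost components into one expected value; the weights and the example inputs are illustrative assumptions, not calibrated figures:

```python
# Sketch of folding the cost components named above (I/O count, CPU work,
# memory occupation) into a single expected value; weights are assumptions.
def expected_cost(io_count: int, cpu_ops: int, mem_pages: int,
                  io_weight: float = 1.0, cpu_weight: float = 0.01,
                  mem_weight: float = 0.1) -> float:
    return io_count * io_weight + cpu_ops * cpu_weight + mem_pages * mem_weight

# Hypothetical full-table scan vs. index lookup on the same table:
full_scan = expected_cost(io_count=1000, cpu_ops=100000, mem_pages=1000)
index_get = expected_cost(io_count=3, cpu_ops=50, mem_pages=3)
print(full_scan > index_get)  # True
```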
In the subsequent learning optimization stage, the weight parameters of the deep learning model are continuously optimized by the recurrent neural network according to the training set data and compared against the expected values from the cost estimation analysis stage; if the learning optimization results are better than those expected values, they are fed back to the execution plan module after a certain number of learning iterations, and iterative optimization continues.
The learning optimization stage is decoupled from SQL execution: each SQL execution result and the detailed information about the SQL execution process are sent to the training set of the deep learning model asynchronously, and the recurrent neural network feeds back to the execution plan module after a certain number of optimization iterations.
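The asynchronous hand-off can be sketched with a queue and a background worker, so the execution path never blocks on training; the payload field names are hypothetical:

```python
# Sketch of the decoupled hand-off: SQL execution pushes its result and
# process details onto a queue, and a background worker moves them into the
# model's training set. Names and payload fields are illustrative.
import queue
import threading

training_set = []
feed = queue.Queue()

def trainer():
    while True:
        item = feed.get()
        if item is None:           # shutdown sentinel
            break
        training_set.append(item)  # stands in for model training

worker = threading.Thread(target=trainer, daemon=True)
worker.start()

# Each SQL execution sends its result and process details asynchronously.
feed.put({"sql": "SELECT ...", "result_rows": 42, "elapsed_ms": 7})
feed.put(None)
worker.join()
print(len(training_set))  # 1
```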
The generated execution plans generally fall into the following cases:
1. Trivial plan matching: sometimes there is only one way to execute a query. For example, a heap table can only be accessed through a table scan. To avoid wasting time optimizing such queries, SQL Server maintains a list of trivial plans to choose from; if the optimization stage finds a plan in this list that matches the query, a similar plan is generated without any further optimization.
2. Multistage optimization: for complex queries, the number of alternative processing strategies to analyze can be large, and evaluating each option can take a long time. The optimization stage therefore does not analyze all possible processing strategies but divides them into several configurations, each containing different index and join techniques.
Index variants consider different index characteristics: single-column indexes, composite indexes, index column order, index density, and so on. Similarly, join variants consider the different join techniques available in the engine: nested loop joins, merge joins, and hash matches.
Learning optimization considers the statistics of the columns referenced in the WHERE clause to evaluate the effectiveness of the index and join strategies; these statistics are used to evaluate the configuration overhead across multiple optimization stages, covering factors such as the CPU, memory usage, and disk I/O required to execute the query. After each optimization stage, the cost of the processing strategy is evaluated: if the cost is economical enough, further looping through the optimization stages stops and the optimization process exits; otherwise, looping continues to determine a cost-effective processing strategy.
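The multistage loop with early exit described above can be sketched as follows; the configurations and their costs are illustrative stand-ins:

```python
# Sketch of multistage optimization: evaluate configurations stage by stage,
# tracking the cheapest strategy, and exit early once a plan's estimated cost
# is economical enough. Configurations and costs are illustrative.
def multistage_optimize(configurations, good_enough: float):
    best = None
    for config in configurations:      # one optimization stage per config
        cost = config["estimate"]()    # evaluate this stage's strategy
        if best is None or cost < best[1]:
            best = (config["name"], cost)
        if best[1] <= good_enough:     # economical enough: stop looping
            break
    return best

configs = [
    {"name": "full scan + nested loop", "estimate": lambda: 120.0},
    {"name": "index seek + hash match", "estimate": lambda: 8.0},
    {"name": "index seek + merge join", "estimate": lambda: 9.5},  # never evaluated
]
print(multistage_optimize(configs, good_enough=10.0))  # ('index seek + hash match', 8.0)
```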
In Step 3, the engine executes the execution plan generated in Step 2. The engine comprises a Row-Based Execution Engine (a row-based execution engine for executing row-based plans) and a Vectorized Execution Engine (a vectorized execution engine for executing column-based plans), and sends the SQL execution result and detailed information about the SQL execution process to the training set of the deep learning model.
During engine execution, data (intermediate data and data caches generated during SQL execution, such as data pages, index pages, and redo logs) is placed into a Row Buffer Pool and a Column Index Buffer Pool in the storage engine. After each SQL execution finishes, the SQL execution result and the detailed information about the SQL execution process are sent to the training set of the deep learning model.
The row-based execution engine mainly processes transaction-related execution plans. According to the execution plan, it selects operators from different modules (such as full table scan and nested loop join operators) and determines which tables need to be operated on and which APIs (application program interfaces) need to be called during execution.
A full table scan in the row-based execution engine scans a table specified by the execution plan; the expression framework generates the corresponding instructions and calls the corresponding processing APIs until the required data is found.
A nested loop join in the row-based execution engine joins an outer dataset to an inner dataset: for each row of the outer dataset, the database matches all rows in the inner dataset that satisfy the predicate conditions. If an index is available on the inner dataset or inner table, the database uses it to locate the rowid and fetch the data.
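A minimal sketch of this nested loop join, with an optional index on the inner table; the data and schema are illustrative:

```python
# Sketch of a nested loop join: for each outer row, match inner rows on the
# join key, using an index on the inner table when one is available.
def nested_loop_join(outer, inner, key, index=None):
    joined = []
    for o in outer:
        if index is not None:
            # Index available: locate matching inner rows directly.
            matches = index.get(o[key], [])
        else:
            # No index: scan every inner row against the predicate.
            matches = [i for i in inner if i[key] == o[key]]
        for m in matches:
            joined.append({**o, **m})
    return joined

outer = [{"id": 1, "city": "NY"}, {"id": 2, "city": "SF"}]
inner = [{"id": 1, "amount": 10}, {"id": 1, "amount": 20}]
idx = {1: [inner[0], inner[1]]}  # hypothetical index on inner.id
print(len(nested_loop_join(outer, inner, "id", index=idx)))  # 2
```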
The vectorized execution engine mainly processes execution plans related to data analysis; most such requests involve querying and analysis, and because the data storage is multidimensional, data analysis is very convenient. The conventional transactional model is a two-dimensional data model that is inconvenient to operate on and analyze when multiple sets of tables must be manipulated. In addition, dedicated SIMD (single instruction, multiple data) instructions are generated through the expression framework, and the corresponding column index access APIs are called, improving overall analysis efficiency.
In a full table scan, because the underlying store is columnar, the vectorized execution engine can extract only the required column data for analysis, greatly reducing the data volume of the scan.
In a nested loop join, the vectorized execution engine, according to the execution plan, makes full use of the characteristics of the columnar store: for each row of the outer dataset, only the required columns are associated, and the corresponding column data is reassembled.
The hash join of the vectorized execution engine is mainly divided into two phases: a build phase and a probe phase. In the build phase, a table is selected (typically the smaller one, to reduce the time and space needed to build the hash table) and a hash function is applied to the join attribute of each tuple to obtain a hash value, thereby building a hash table. In the probe phase, each row of the other table is scanned, the hash value of its join attribute is computed and compared against the hash table built in the build phase; if the two values fall into the same bucket and the join predicate is satisfied, the rows are joined into a new table. When memory is large enough, the hash table is built entirely in memory and is written to disk after the join operation completes; otherwise, the process also incurs many I/O operations.
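The two-phase hash join just described can be sketched as follows (the tables and join key are illustrative; real engines also handle spilling to disk, which is omitted here):

```python
# Sketch of the two-phase hash join: build a hash table on the smaller
# table's join attribute, then probe it with each row of the larger table.
from collections import defaultdict

def hash_join(small, large, key):
    # Build phase: hash the smaller table on the join attribute.
    buckets = defaultdict(list)
    for row in small:
        buckets[row[key]].append(row)
    # Probe phase: hash each row of the larger table and compare in-bucket.
    result = []
    for row in large:
        for match in buckets.get(row[key], []):
            result.append({**match, **row})
    return result

small = [{"dept": "db", "mgr": "a"}]
large = [{"dept": "db", "emp": "x"}, {"dept": "os", "emp": "y"}]
print(hash_join(small, large, "dept"))  # [{'dept': 'db', 'mgr': 'a', 'emp': 'x'}]
```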
The aggregate grouping of the vectorized execution engine is commonly GROUP BY; GROUP BY, combined with ALL, CUBE, and ROLLUP, is a common OLAP (online analytical processing) feature and is very convenient to use. This family of OLAP functions supports many OLAP scenarios such as database reports, statistical analysis, and warehouse processing, and can play an even larger role when combined with window functions.
In Step 4, the deep learning model learns and trains on the training set, then feeds the learning and training results (the optimal optimization scheme) back to the execution plan module; after receiving the feedback, the execution plan module automatically optimizes the execution plan and applies it to the next SQL query.
An interaction protocol is preset between the execution plan module and the deep learning model, and the data structures of the execution plan are defined in advance, including the abstract syntax tree, the access type (type), the join matching conditions of tables (ref), and so on.
The deep learning model learns and trains on the training set data sent by the engine in Step 3, quantifies and designs feedback for targeted points (such as the height estimation of the abstract syntax tree and the access type), and finally feeds the learning and training results back to the execution plan module. After receiving the feedback, the execution plan module directly modifies the data structure of the corresponding execution plan in the in-memory abstract syntax tree to optimize it, ensures that the next query can apply the optimized feedback, and persists the optimization information to disk in time for developers to analyze.
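A sketch of applying such feedback to the in-memory plan structure: the execution plan module walks the tree and overwrites the matching node's fields (e.g. the access type) so the next query uses the optimized plan. The node layout and feedback format are illustrative assumptions:

```python
# Hypothetical in-memory plan node, loosely modeled on the `type`/`ref`
# fields mentioned above; the layout is an illustrative assumption.
plan_ast = {
    "table": "orders",
    "type": "ALL",   # access type: full table scan
    "ref": None,     # join matching condition
    "children": [],
}

def apply_feedback(node, feedback):
    # Walk the tree and patch the node for the table the feedback targets.
    if node["table"] == feedback["table"]:
        node.update(feedback["patch"])  # modify the plan node in place
    for child in node["children"]:
        apply_feedback(child, feedback)

apply_feedback(plan_ast, {"table": "orders",
                          "patch": {"type": "ref", "ref": "orders.customer_id"}})
print(plan_ast["type"])  # ref
```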
FIG. 2 illustrates the principle of one embodiment of the system of the present invention in which deep learning participates in SQL optimization for an HTAP database. Referring to FIG. 2, the system of this embodiment includes an SQL parser, an execution plan module, an engine module, and a deep learning model.
The SQL parser is used for receiving an SQL statement and checking whether it meets the specification, which typically includes checking whether the SQL format and syntax are correct.
The execution plan module is used for generating an execution plan through the deep learning model based on the SQL statements that have been checked by the SQL parser and conform to the specification.
The execution plan module generates a logical layer plan and then generates a physical layer plan based on it. Since the HTAP database is a row-column hybrid database, the physical layer plan is divided into a row-based plan and a column-based plan.
The specific process of the execution plan module generating an execution plan based on the deep learning model is as follows.
The execution plan module utilizes a recurrent neural network structure (comprising an input layer, a hidden layer, and an output layer): the input layer receives SQL execution results and detailed information about the SQL execution process, the hidden layer performs cost calculation, and the output layer outputs SQL statements and an optimal execution plan.
In the cost calculation of the hidden layer, cost estimation is performed on the SQL execution results and the detailed information about the SQL execution process; the loss function parameters are continuously adjusted during loop iteration (the iteration proceeds for a preset number of iterations) to reduce the cost, and when the reduced cost is better than the expected value of the cost estimation, the optimization result of the loop iteration is constructed into the defined execution plan data structure and fed back to the execution plan module.
Cost estimation is divided into two stages: a cost estimation analysis stage and a learning optimization stage.
The cost estimation analysis stage covers several execution conditions such as full-table scan estimation, ordinary index estimation, and multi-table join queries. The deep learning model performs an overall estimation of these different execution conditions to obtain the expected value of the cost estimation, which includes the I/O cost (number of I/O operations), the CPU computation cost, the memory occupation cost, and so on.
In the subsequent learning optimization stage, the weight parameters of the deep learning model are continuously optimized by the recurrent neural network according to the training set data and compared against the expected values from the cost estimation analysis stage; if the learning optimization results are better than those expected values, they are fed back to the execution plan module after a certain number of learning iterations, and iterative optimization continues.
The learning optimization stage is decoupled from SQL execution: each SQL execution result and the detailed information about the SQL execution process are sent to the training set of the deep learning model asynchronously, and the recurrent neural network feeds back to the execution plan module after a certain number of optimization iterations.
The generated execution plans generally fall into the following cases:
1. Trivial plan matching: sometimes there is only one way to execute a query. For example, a heap table can only be accessed through a table scan. To avoid wasting time optimizing such queries, SQL Server maintains a list of trivial plans to choose from; if the optimization stage finds a plan in this list that matches the query, a similar plan is generated without any further optimization.
2. Multistage optimization: for complex queries, the number of alternative processing strategies to analyze can be large, and evaluating each option can take a long time. The optimization stage therefore does not analyze all possible processing strategies but divides them into several configurations, each containing different index and join techniques.
Index variants consider different index characteristics: single-column indexes, composite indexes, index column order, index density, and so on. Similarly, join variants consider the different join techniques available in the engine: nested loop joins, merge joins, and hash matches.
Learning optimization considers the statistics of the columns referenced in the WHERE clause to evaluate the effectiveness of the index and join strategies; these statistics are used to evaluate the configuration overhead across multiple optimization stages, covering factors such as the CPU, memory usage, and disk I/O required to execute the query. After each optimization stage, the cost of the processing strategy is evaluated: if the cost is economical enough, further looping through the optimization stages stops and the optimization process exits; otherwise, looping continues to determine a cost-effective processing strategy.
The engine module is used for executing the generated execution plan and sending the SQL execution result and detailed information about the SQL execution process to the training set of the deep learning model.
The engine module comprises a Row-Based Execution Engine (a row-based execution engine for executing row-based plans) and a Vectorized Execution Engine (a vectorized execution engine for executing column-based plans), which send each SQL execution result and the detailed information about the SQL execution process to the training set of the deep learning model.
During engine execution, data (intermediate data and data caches generated during SQL execution, such as data pages, index pages, and redo logs) is placed into a Row Buffer Pool and a Column Index Buffer Pool in the storage engine. After each SQL execution finishes, the SQL execution result and the detailed information about the SQL execution process are sent to the training set of the deep learning model.
The row-based execution engine mainly processes transaction-related execution plans. According to the execution plan, it selects operators from different modules (such as full table scan and nested loop join operators) and determines which tables need to be operated on and which APIs (application program interfaces) need to be called during execution.
A full table scan in the row-based execution engine scans a table specified by the execution plan; the expression framework generates the corresponding instructions and calls the corresponding processing APIs until the required data is found.
A nested loop join in the row-based execution engine joins an outer dataset to an inner dataset: for each row of the outer dataset, the database matches all rows in the inner dataset that satisfy the predicate conditions. If an index is available on the inner dataset or inner table, the database uses it to locate the rowid and fetch the data.
The vectorized execution engine mainly processes execution plans related to data analysis; most such requests involve querying and analysis, and because the data storage is multidimensional, data analysis is very convenient. The conventional transactional model is a two-dimensional data model that is inconvenient to operate on and analyze when multiple sets of tables must be manipulated. In addition, dedicated SIMD (single instruction, multiple data) instructions are generated through the expression framework, and the corresponding column index access APIs are called, improving overall analysis efficiency.
In a full table scan, because the underlying store is columnar, the vectorized execution engine can extract only the required column data for analysis, greatly reducing the data volume of the scan.
In a nested loop join, the vectorized execution engine, according to the execution plan, makes full use of the characteristics of the columnar store: for each row of the outer dataset, only the required columns are associated, and the corresponding column data is reassembled.
The hash join of the vectorized execution engine is mainly divided into two phases: a build phase and a probe phase. In the build phase, a table is selected (typically the smaller one, to reduce the time and space needed to build the hash table) and a hash function is applied to the join attribute of each tuple to obtain a hash value, thereby building a hash table. In the probe phase, each row of the other table is scanned, the hash value of its join attribute is computed and compared against the hash table built in the build phase; if the two values fall into the same bucket and the join predicate is satisfied, the rows are joined into a new table. When memory is large enough, the hash table is built entirely in memory and is written to disk after the join operation completes; otherwise, the process also incurs many I/O operations.
The aggregate grouping of the vectorized execution engine is commonly GROUP BY; GROUP BY, combined with ALL, CUBE, and ROLLUP, is a common OLAP (online analytical processing) feature and is very convenient to use. This family of OLAP functions supports many OLAP scenarios such as database reports, statistical analysis, and warehouse processing, and can play an even larger role when combined with window functions.
The deep learning model learns and trains on the training set, then feeds the learning and training results back to the execution plan module as an optimal optimization scheme, so that after receiving the feedback the execution plan module automatically optimizes the execution plan and applies it to the next SQL query.
An interaction protocol is preset between the execution plan module and the deep learning model, and the data structures of the execution plan are defined in advance, including the abstract syntax tree, the access type (type), the join matching conditions of tables (ref), and so on.
The deep learning model learns and trains on the training set data sent by the engine module, quantifies and designs feedback for targeted points (such as the height estimation of the abstract syntax tree and the access type), and finally feeds the learning and training results back to the execution plan module. After receiving the feedback, the execution plan module directly modifies the data structure of the corresponding execution plan in the in-memory abstract syntax tree to optimize it, ensures that the next query can apply the optimized feedback, and persists the optimization information to disk in time for developers to analyze.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts shown and described herein, or may not be shown and described herein at all, as would be understood and appreciated by those skilled in the art.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.