CN119760704B - A GPGPU secure calling method, device, equipment and medium - Google Patents
- Publication number: CN119760704B (application CN202510266147.3A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
- Classification: Storage Device Security
Abstract
The application discloses a secure calling method, apparatus, device, and medium for a GPGPU, relating to the technical field of graphics processing and applied to a security control module in the GPGPU. The method comprises: acquiring the ciphertext instruction and request data sent by a target host after the host parses a GPGPU access request issued by a remote user node; decrypting the ciphertext instruction with a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verifying the legality of the GPGPU access request; if the request is legal, sending the plaintext instruction and the request data to a scheduler through a bus, so that the scheduler calls a target computing unit determined from the GPGPU by a preset scheduling algorithm to process them; and encrypting the processing result with a target encryption algorithm corresponding to the remote user node and sending the ciphertext result to the target host, which forwards it to the remote user node. The efficiency of security protection is thereby improved.
Description
Technical Field
The present invention relates to the field of graphics processing technologies, and in particular, to a method, an apparatus, a device, and a medium for secure calling of a GPGPU.
Background
A GPGPU (General-Purpose computing on Graphics Processing Units, general-purpose graphics processor) is a powerful computing tool. Unlike an ordinary GPU, a GPGPU's compute kernels are more highly parallel and better suited to non-graphics programs and to repetitive, data-intensive tasks, such as large-scale data encryption and decryption, numerical computation, and AI computation acceleration.
As the state of the art advances rapidly, demand for GPGPU applications is rising significantly. A commonly supported deployment mode lets multiple clients share one GPGPU: several users remotely call a single heterogeneous GPGPU over the network to execute computation. This reduces the number of GPGPUs required, maximizes GPGPU utilization without compromising computing requirements, and improves economy. Under this design, however, remote users access the host over the network and invoke the heterogeneous GPGPU, and the security challenges that remote access brings, such as cache side-channel attacks, memory-vulnerability attacks, and erroneous access to heterogeneous units, are difficult to avoid.
At present, software security protection is generally adopted, that is, illegal access to the heterogeneous GPGPU is prevented by installing a software security driver on the host. However, such CPU (Central Processing Unit)-based software verification schemes are inefficient and high-latency; the complicated security authentication process directly degrades the feedback efficiency of GPGPU computation, and the scheme increases the task load on the host CPU, affecting the execution of other important processes. In addition, existing hardware schemes do not achieve physical isolation between the hardware security control module and the GPGPU computing units, which tends to waste the computing units' logic resources, and problems such as metastability in data transmission may reduce the GPGPU's accuracy in recognizing illegal access, seriously affecting user experience.
In summary, how to improve the security protection efficiency of GPGPU access is a problem to be solved at present.
Disclosure of Invention
Accordingly, the present invention aims to provide a method, apparatus, device and medium for secure call of GPGPU, which can improve the security protection efficiency of GPGPU access. The specific scheme is as follows:
In a first aspect, the present application discloses a secure call method for a GPGPU, which is applied to a secure control module in the GPGPU, and the method includes:
Acquiring the ciphertext instruction and request data carried in a GPGPU access request, sent by a target host after the target host parses the GPGPU access request issued by a remote user node;
decrypting the ciphertext instruction by using a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verifying the validity of the GPGPU access request based on the plaintext instruction;
If the GPGPU access request is a legal request, the plaintext instruction and the request data are sent to a preset scheduler through a bus, so that the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and a processing result is obtained;
And obtaining the processing result sent by the target computing unit through a bus, encrypting the processing result by utilizing a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then sending the ciphertext result to the target host so that the target host can send the ciphertext result to the remote user node.
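As an illustrative sketch only (a hypothetical model, not the claimed hardware), the four steps of the first aspect can be mocked in a few lines; the XOR "cipher", the key table, and all identifiers are stand-ins for the per-user algorithms and modules named above:

```python
# Hypothetical end-to-end sketch of the security control module's flow.
# The XOR "cipher", key table, and opcode set are illustrative stand-ins
# for the per-user algorithms (AES, RSA, ...) described in the patent.

USER_KEYS = {"10.0.0.7": 0x5A}        # remote user node IP -> key material
LEGAL_OPCODES = {"MATMUL", "SORT"}    # preset legal instruction parameters

def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR every byte with the key."""
    return bytes(b ^ key for b in data)

def secure_call(user_ip: str, ciphertext: bytes, request_data: bytes) -> bytes:
    key = USER_KEYS[user_ip]                          # per-user algorithm choice
    plaintext = xor_cipher(ciphertext, key).decode()  # "target decryption algorithm"
    if plaintext not in LEGAL_OPCODES:                # legality verification
        raise PermissionError("illegal GPGPU access request")
    result = request_data[::-1]                       # stand-in for the compute unit
    return xor_cipher(result, key)                    # ciphertext result for the user

ciphertext = xor_cipher(b"SORT", 0x5A)
out = secure_call("10.0.0.7", ciphertext, b"\x01\x02\x03")
```

A real security control module would select among hardware SHA/AES/RSA/ECC engines rather than a toy XOR, but the control flow (decrypt, verify, dispatch, encrypt) is the same.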
Optionally, the decrypting the ciphertext instruction by using a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction includes:
Carrying out identity verification on the target host, if the identity verification is passed, extracting a ciphertext to be decrypted, an instruction identification number and IP information of the remote user node from the ciphertext instruction, and sending the ciphertext to be decrypted and the instruction identification number to a locally preset decryption state machine after establishing an association relationship;
Determining a corresponding target decryption algorithm based on the IP information, establishing an association relation between the target decryption algorithm and the instruction identification number, and then sending the association relation to the decryption state machine;
determining an instruction encryption format by the decryption state machine by using the access request type and the calculation type obtained from a preset instruction comparison table based on the instruction identification number;
And decrypting the ciphertext to be decrypted in the ciphertext instruction by using the target decryption algorithm and the instruction encryption format based on the instruction identification number in the decryption state machine so as to obtain a plaintext instruction.
Optionally, the preset instruction comparison table stores preset legal instruction parameters;
Correspondingly, the verifying the validity of the GPGPU access request based on the plaintext instruction includes:
Judging whether the instruction parameters of the plaintext instruction are consistent with legal instruction parameters stored in the preset instruction comparison table;
And if the access requests are consistent, judging that the GPGPU access requests are legal requests, otherwise, judging that the GPGPU access requests are illegal requests.
Optionally, after the GPGPU access request is a legal request, the method further includes:
Splicing the corresponding plaintext instruction and the request data according to a first preset format based on the instruction identification number to obtain first spliced data;
correspondingly, the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and the method comprises the following steps:
and the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the first spliced data.
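The "first preset format" is not specified in the text; one plausible reading is a fixed header that tags the payload with the instruction identification number followed by the two spliced fields. The field layout below is purely an assumption for illustration:

```python
import struct

# Hypothetical "first preset format": a big-endian header carrying the
# instruction identification number and the two payload lengths, followed by
# the plaintext instruction and the request data. Layout is an assumption.

def splice(instr_id: int, plaintext: bytes, request_data: bytes) -> bytes:
    header = struct.pack(">IHH", instr_id, len(plaintext), len(request_data))
    return header + plaintext + request_data

def unsplice(blob: bytes):
    instr_id, n_instr, n_data = struct.unpack(">IHH", blob[:8])
    return instr_id, blob[8:8 + n_instr], blob[8 + n_instr:8 + n_instr + n_data]

blob = splice(0x2A, b"MATMUL", b"\x00\x01")
```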
Optionally, the encrypting the processing result by using a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result includes:
determining a unit identification number of the target computing unit, and acquiring an encryption operation code and encryption algorithm data corresponding to the IP information of the remote user node from a computing unit identification list corresponding to the unit identification number;
The encryption algorithm data and the unit identification number are sent to a local preset encryption state machine after an association relation is established, and a target encryption algorithm corresponding to the encryption operation code is obtained from a preset kernel;
And establishing an association relation between the target encryption algorithm and the unit identification number and then sending the association relation to the encryption state machine so as to encrypt the processing result by utilizing the target encryption algorithm and the encryption algorithm data in the encryption state machine based on the unit identification number to obtain a ciphertext result.
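The lookup chain described above (unit identification number, then the requesting node's IP, then encryption opcode and algorithm data) can be sketched with ordinary dictionaries; the table contents, field names, and the HMAC-SHA256 stand-in for the "target encryption algorithm" are all assumptions:

```python
import hashlib
import hmac

# Illustrative sketch: the unit identification number indexes a computing-unit
# identification list that holds, per remote-node IP, the encryption opcode
# and the algorithm data used as key material. All entries are hypothetical.

UNIT_ID_LIST = {
    7: {"192.168.1.5": {"opcode": "SHA", "key": b"algo-data-unit7"}},
}
KERNEL_ALGOS = {  # "preset kernel": opcode -> encryption routine
    "SHA": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
}

def encrypt_result(unit_id: int, user_ip: str, result: bytes) -> bytes:
    entry = UNIT_ID_LIST[unit_id][user_ip]
    algo = KERNEL_ALGOS[entry["opcode"]]   # target encryption algorithm
    return algo(entry["key"], result)      # ciphertext result

tag = encrypt_result(7, "192.168.1.5", b"processing-result")
```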
Optionally, after encrypting the processing result by using a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, the method further includes:
splicing the ciphertext result and the processing result according to a second preset format based on the unit identification number to obtain second spliced data;
Correspondingly, the sending the ciphertext result to the target host, so that the target host sends the ciphertext result to the remote user node, includes:
And sending the second spliced data to the target host, so that the target host sends the second spliced data to the remote user node.
Optionally, after verifying the validity of the GPGPU access request based on the plaintext instruction, the method further includes:
and if the GPGPU access request is an illegal request, discarding the GPGPU access request to prohibit the execution of the step of processing the plaintext instruction and the request data by the target computing unit determined from the GPGPU based on a preset scheduling algorithm.
In a second aspect, the present application discloses a security call device for a GPGPU, which is applied to a security control module in the GPGPU, and the device includes:
The information acquisition module is used for acquiring the ciphertext instruction and request data carried in a GPGPU access request, sent by the target host after the target host parses the GPGPU access request issued by a remote user node;
The decryption verification module is used for decrypting the ciphertext instruction by utilizing a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verifying the validity of the GPGPU access request based on the plaintext instruction;
The computing module is used for sending the plaintext instruction and the request data to a preset scheduler through a bus if the GPGPU access request is a legal request, so that the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and a processing result is obtained;
And the encryption module is used for acquiring the processing result sent by the target computing unit through the bus, encrypting the processing result by utilizing a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then sending the ciphertext result to the target host so that the target host can send the ciphertext result to the remote user node.
In a third aspect, the present application discloses an electronic device, comprising:
A memory for storing a computer program;
and the processor is used for executing the computer program to realize the steps of the safety calling method of the GPGPU.
In a fourth aspect, the present application discloses a computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the aforementioned disclosed secure invocation method of a GPGPU.
The method of the application comprises: acquiring the ciphertext instruction and request data carried in a GPGPU access request, sent by a target host after the host parses the GPGPU access request issued by a remote user node; decrypting the ciphertext instruction with a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verifying the legality of the GPGPU access request based on the plaintext instruction; if the GPGPU access request is a legal request, sending the plaintext instruction and the request data to a preset scheduler through a bus, so that the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data and obtain a processing result; and acquiring the processing result sent by the target computing unit through the bus, encrypting it with a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then sending the ciphertext result to the target host so that the target host can forward it to the remote user node.
After receiving a GPGPU access request from a remote user node, the target host first parses the request into a machine-code stream the GPGPU can recognize, and then sends the ciphertext instruction and request data carried in the request to the security control module in the GPGPU. The security control module first decrypts the ciphertext instruction with the target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verifies the legality of the GPGPU access request based on it. That is, the security control module in the application holds multiple decryption algorithms, and each user has an independent decryption protection mode: the module can configure a different decryption algorithm for each user according to the legal user's information, giving high flexibility. Further, if the access is judged legal, the plaintext instruction and the request data are sent through a bus to a preset scheduler, which calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process them and obtain a processing result. Because the security control module and the preset scheduler communicate only over the bus, they are physically isolated, and their respective processes do not interfere with each other.
The security control module and the computing units likewise communicate over the bus, so the module obtains the processing result sent by the target computing unit through the bus. In the same way, the security control module can configure a different encryption algorithm for each legal user according to that user's information; that is, the processing result is encrypted with the target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, which is then sent to the target host for forwarding to the remote user node. The scheme therefore improves the efficiency of security protection for GPGPU access, achieves physical isolation between the security control module and both the computing units and the scheduler, helps users realize unified scheduling control of the GPGPU, and significantly improves system performance.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a system architecture diagram for secure invocation of a GPGPU according to the present disclosure;
FIG. 2 is a detailed architecture diagram of a GPGPU according to the present disclosure;
FIG. 3 is a flowchart of a method for secure invocation of a GPGPU according to the present disclosure;
FIG. 4 is a schematic diagram of an input instruction decryption architecture according to the present disclosure;
FIG. 5 is a flow chart of decryption of a ciphertext instruction of the present disclosure;
FIG. 6 is a schematic diagram of a processing result output encryption architecture according to the present disclosure;
FIG. 7 is a flow chart of the encryption of a processing result according to the present disclosure;
FIG. 8 is a schematic diagram of a security call device of a GPGPU according to the present application;
fig. 9 is a block diagram of an electronic device according to the present disclosure.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The remote user accesses the host over the network and invokes the heterogeneous GPGPU, but the security challenges that remote access brings, such as cache side-channel attacks, memory-vulnerability attacks, and erroneous access to heterogeneous units, are difficult to avoid. At present, software security protection is generally adopted, that is, illegal access to the heterogeneous GPGPU is prevented by installing a software security driver on the host. However, such CPU-based software verification schemes are inefficient and high-latency; the complicated security authentication process directly degrades the feedback efficiency of GPGPU computation, and the scheme increases the task load on the host CPU, affecting the execution of other important processes. In addition, existing hardware schemes do not achieve physical isolation between the hardware security control module and the GPGPU computing units, which tends to waste the computing units' logic resources, and problems such as metastability in data transmission may reduce the GPGPU's accuracy in recognizing illegal access, seriously affecting user experience.
Therefore, the embodiment of the application discloses a method, a device, equipment and a medium for safely calling a GPGPU, which can improve the safety protection efficiency of GPGPU access.
In the secure calling scheme of the GPGPU of the present application, the system architecture used may be as shown in fig. 1 and mainly comprises remote users, a host, and the GPGPU. Multiple remote user nodes connect independently to the host through the network, and the heterogeneous GPGPU communicates and exchanges information with the host in DMA (Direct Memory Access) form through a PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard) interface. A user node refers to a user terminal and may be an intelligent device such as a mobile phone or a computer.
In addition, the GPGPU mainly comprises a security control module, a calculation scheduling module (a preset scheduler is arranged in the GPGPU), a plurality of calculation units, an on-chip cache and the like. After the host receives the GPGPU access request of the remote user node, the encrypted computing instruction and related data are sent to the GPGPU through a PCIe interface, the GPGPU dispatches a computing task to a target computing unit after completing security verification on the current GPGPU access request, after the target computing unit completes processing, the security control module encrypts a processing result, and the processing result is fed back to the requesting user node through the host through the PCIe interface.
Further, fig. 2 shows a detailed architecture of the GPGPU, including an interrupt control module, a configuration module, a computation scheduling module, a DMA data transceiver module, a security control module, a GPGPU computation module, and a corresponding data storage module, and an AXI (Advanced eXtensible Interface, a high-performance, high-bandwidth on-chip bus) bus connecting the respective modules.
The top layer of the interrupt control module is a PIC (Programmable Interrupt Controller); inside it are deployed a TMR (Timer Module), an MTHD (Multiple Thread Hardware Dispatch) interrupt-handling-mode configuration module, a CTXSW (Context Switch) process/thread switching module, and the like, and it is responsible for handling interrupt requests from the host;
the configuration module is provided with a CFG BUS (Configuration Bus) configuration BUS module and is responsible for completing the configuration of relevant parameters related to a security control module, a kernel scheduling module, a GPGPU computing module and the like by a host;
The computation scheduling module deploys the thread-bundle/thread scheduling kernel together with the necessary Icache (Instruction Cache) and Dcache (Data Cache), and is responsible for distributing computation tasks that have passed security verification to the corresponding computing units of the GPGPU computing module;
the DMA data transceiving module is provided with a DMA data transceiving engine and is responsible for data exchange between the GPGPU and the host memory, and the calculation task and the processing result are received and transmitted in real time;
The security control module mainly comprises an input-instruction decryption architecture and a processing-result output encryption architecture. The module deploys symmetric and asymmetric algorithms such as SHA (Secure Hash Algorithm), AES (Advanced Encryption Standard, a symmetric encryption algorithm), RSA (Rivest-Shamir-Adleman), and ECC (Elliptic Curve Cryptography), together with the corresponding decryption algorithms. After receiving a GPGPU access request, the GPGPU must run a security check on the request to ensure that the current request is a legal access; only after the request passes the security control module's check may it be sent to the computation scheduling module;
the GPGPU computing module deploys all computing units of the GPGPU and is responsible for executing all computing tasks sent by the user nodes;
The data storage module comprises instruction TCM (Tightly-Coupled Memory) storage and data TCM storage, is responsible for storing related instruction data and calculation demand data, and completes data exchange with a host through an AXI bus;
The modules are connected through an AXI control bus and an AXI data bus, the AXI bus transmits instructions and related data necessary for the work of each module, and all the functional modules are controlled and deployed by the GPGPU in a unified way.
Referring to fig. 3, an embodiment of the application discloses a secure call method of a GPGPU, which is applied to a secure control module in the GPGPU, and the method includes:
Step S11, acquiring a ciphertext instruction and request data carried in a GPGPU access request sent by a target host after analyzing the GPGPU access request sent by a remote user node.
In this embodiment, the remote user node sends a GPGPU access request to the target host through the wired/wireless network, and after the target host receives the GPGPU access request, the target host needs to parse the GPGPU access request to convert the GPGPU access request into a relevant machine code stream identifiable by the GPGPU, so as to send a ciphertext instruction and request data carried in the GPGPU access request to the security control module in the GPGPU in a DMA form through the PCIe interface.
And step S12, decrypting the ciphertext instruction by utilizing a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verifying the validity of the GPGPU access request based on the plaintext instruction.
In this embodiment, the security control module needs to decrypt the ciphertext instruction by using a target decryption algorithm corresponding to the remote user node to obtain the plaintext instruction, so as to verify the validity of the GPGPU access request based on the plaintext instruction, for example, whether the decrypted data conforms to a predefined data format, and whether the user identity information carried in the instruction belongs to prestored legal user identity information.
That is, the security control module in the application prestores a plurality of encryption and decryption algorithms, such as SHA, AES, RSA, ECC, and different users have independent decryption protection modes, namely the security control module can configure different decryption algorithms for the users according to the information of legal users, and the flexibility is high.
It should be noted that, the security control module is provided with an input instruction decryption architecture for decrypting the ciphertext instruction to obtain the plaintext instruction and verifying the validity of the GPGPU access request, and the architecture diagram is shown in fig. 4. As can be seen from fig. 4, the input instruction decryption architecture includes a management information configuration module, an instruction queue module, an instruction ciphertext decomposition module, a user IP decomposition module, an algorithm decryption FSM state machine, a decryption algorithm information selection module, a request side instruction comparison table, a MUX data merging module, an instruction decryption result caching module, and an instruction parameter caching module.
The management information configuration module is responsible for completing the necessary parameterization and initialization configuration of the other modules and for configuring the management terminal's preset information. The instruction queue module verifies the legitimacy of the host administrator's identity, caches the user's instructions in input order, and issues them sequentially. The instruction ciphertext decomposition module extracts from each instruction the ciphertext to be decrypted that the decryption algorithm requires. The user IP decomposition module extracts the remote user node's IP information from the instruction information. The instruction parameter caching module caches the request data carried in an instruction, tagged with the instruction identification number indicated by the instruction PC (Program Counter). The decryption algorithm information selection module prestores the decryption algorithms corresponding to legal users' IP information, such as the symmetric algorithms AES and SHA and the asymmetric algorithm RSA; it determines the decryption algorithm class corresponding to the IP of the current GPGPU access request's user and selects the related decryption parameters of that algorithm from the module kernel. The algorithm decryption FSM (Finite State Machine) decrypts the input instructions in sequence and further judges the legality of the current request. The request-side instruction comparison table stores the instruction access request types, calculation types, and other information corresponding to different instruction PCs. The MUX (Multiplexer) data merging module merges and splices, in the user's preset format, the decrypted GPGPU access request instruction with the related request data held in the instruction parameter cache, keyed by the instruction PC. The instruction decryption result caching module caches each instruction's decryption result and the related data necessary for its execution.
Thus, as shown in fig. 5, in some embodiments, the step of decrypting the ciphertext instruction using a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction may include:
Step S121, carrying out identity verification on the target host, if the identity verification is passed, extracting a ciphertext to be decrypted, an instruction identification number and IP information of the remote user node from the ciphertext instruction, and sending the ciphertext to be decrypted and the instruction identification number to a locally preset decryption state machine after establishing an association relation.
In this embodiment, the security control module first needs to perform identity verification on the target host, which may specifically be to verify the identity of the current administrator of the host. For example, when an instruction from the host is received, the administrator identity information contained in the instruction is extracted from the instruction and compared with the pre-stored information, and if the two information are consistent, the administrator identity authentication is determined to pass.
If the identity verification is passed, caching related instructions according to the instruction input sequence, and sequentially sending the instructions and related data to an instruction ciphertext decomposition module, a user IP decomposition module and an instruction parameter caching module. The user IP decomposition module is responsible for decomposing the IP information of a remote user node in the instruction information and sending the decomposed user IP information to the decryption algorithm information selection module, and the decryption algorithm information selection module prestores a plurality of decryption algorithms corresponding to legal user IP, such as symmetric decryption algorithms AES, SHA, asymmetric decryption algorithm RSA and the like.
And step S122, determining a corresponding target decryption algorithm based on the IP information, and transmitting the target decryption algorithm and the instruction identification number to the decryption state machine after establishing an association relation.
In this embodiment, the decryption algorithm information selecting module selects a corresponding target decryption algorithm and related decryption parameters related to the algorithm from the module kernel according to the decryption algorithm category corresponding to the user IP information of the current GPGPU access request, and sends the selected target decryption algorithm and related decryption parameters to the algorithm decryption FSM state machine with the instruction identification number as a tag.
And step S123, determining, by the decryption state machine, the instruction encryption format by using the access request type and the calculation type obtained from a preset instruction comparison table based on the instruction identification number.
In this embodiment, the decryption state machine obtains the corresponding access request type and calculation type from a preset instruction comparison table (i.e. a request-side instruction comparison table) based on the instruction identification number, so as to determine the encryption format used inside the instruction. It should be noted that the access request category indicates whether the request is a calculation category requiring model participation or a conventional data processing category, and the calculation category refers to the specific calculation mode under the corresponding category, such as sorting, filtering, and the like.
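The preset instruction comparison table can be sketched as a lookup indexed by the instruction identification number (the instruction PC). The entries and format names below are invented for illustration; only the three looked-up fields come from the text.

```python
# Request-side instruction comparison table (sample entries are hypothetical).
INSTRUCTION_TABLE = {
    0x01: {"request_type": "model_compute", "calc_type": "matmul", "enc_fmt": "fmt_a"},
    0x02: {"request_type": "data_processing", "calc_type": "sort", "enc_fmt": "fmt_b"},
}

def lookup_instruction(instr_id: int):
    """Return (access request type, calculation type, instruction encryption
    format) for the given instruction identification number."""
    entry = INSTRUCTION_TABLE[instr_id]
    return entry["request_type"], entry["calc_type"], entry["enc_fmt"]
```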
And step S124, decrypting the ciphertext to be decrypted in the ciphertext instruction by utilizing the target decryption algorithm and the instruction encryption format based on the instruction identification number in the decryption state machine so as to obtain a plaintext instruction.
In this embodiment, the decryption state machine decrypts the ciphertext to be decrypted in the ciphertext instruction according to the instruction encryption format, the target decryption algorithm and the related parameters, thereby obtaining the plaintext instruction.
Further, preset legal instruction parameters can be stored in the preset instruction comparison table. Correspondingly, verifying the validity of the GPGPU access request based on the plaintext instruction comprises judging whether the instruction parameters of the plaintext instruction are consistent with the legal instruction parameters stored in the preset instruction comparison table; if so, the GPGPU access request is judged to be a legal request, and if not, it is judged to be an illegal request. That is, the decrypted instruction may be compared with the legal instruction parameters stored in the request-side instruction comparison table to determine whether the two are consistent; if so, the GPGPU access request is determined to be a legal request, otherwise it is determined to be an illegal request. In addition, it is also possible to check whether the access request category and the calculation category of the instruction are consistent with the information recorded in the comparison table; for example, if the instruction claims to perform a matrix multiplication operation but the entry for the corresponding instruction PC in the comparison table marks a data reading operation, the request is illegal.
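Both checks above — parameter consistency and category consistency — can be sketched as one predicate over the comparison table. The table contents and field names are illustrative assumptions.

```python
# Request-side comparison table with legal parameters per instruction PC
# (hypothetical sample data).
LEGAL_TABLE = {
    0x01: {"legal_params": ("m", "n", "k"), "calc_type": "matmul"},
}

def is_legal_request(instr_id: int, params, claimed_calc_type: str) -> bool:
    entry = LEGAL_TABLE.get(instr_id)
    if entry is None:
        return False  # unknown instruction PC
    if params != entry["legal_params"]:
        return False  # parameters disagree with the stored legal parameters
    # the claimed calculation category must match the recorded one,
    # e.g. a "matmul" claim against a recorded "data_read" entry is illegal
    return claimed_calc_type == entry["calc_type"]
```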
In addition, the validity of the GPGPU access request can also be verified through a timestamp; for example, in some systems with high security requirements, the instruction may carry timestamp information. The state machine needs to check whether the timestamp is within a reasonable range to prevent replay attacks: if the timestamp shows that the instruction was sent long ago, or is too far from the current system time, the request may have been intercepted by an attacker and resent, and should be determined to be an illegal request.
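The timestamp freshness check can be sketched as a symmetric window around the current time. The 30-second window below is an assumed value; the patent does not fix one.

```python
import time

MAX_SKEW_SECONDS = 30  # assumed acceptance window, not specified by the text

def timestamp_is_fresh(instr_ts: float, now=None) -> bool:
    """Reject instructions whose timestamp falls outside a reasonable window
    around the current time, mitigating replay of intercepted requests."""
    if now is None:
        now = time.time()
    return abs(now - instr_ts) <= MAX_SKEW_SECONDS
```

A window is used rather than a one-sided check so that small clock skew between the remote node and the GPGPU does not reject honest requests.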
And step S13, if the GPGPU access request is a legal request, the plaintext instruction and the request data are sent to a preset scheduler through a bus, so that the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and a processing result is obtained.
In this embodiment, if the current access is determined to be legal, the plaintext instruction and the request data are sent through the bus to a preset scheduler in the computing scheduling module, so that the preset scheduler determines a target computing unit based on a preset scheduling algorithm and sends the plaintext instruction and the request data to the target computing unit for computation, obtaining a processing result. That is, communication between the security control module and the preset scheduler is realized through a bus, so that physical isolation between the two is achieved and different processes do not interfere with each other. Likewise, the security control module and the computing unit communicate through the bus, so the transmission efficiency is high and the stability is good.
In addition, it should be noted that the preset scheduling algorithm may specifically be a first-come-first-served algorithm, a shortest-job-first algorithm, a load-balancing scheduling algorithm, and so on. The first-come-first-served algorithm sequentially allocates computing units according to the arrival order of tasks: the earliest task is executed first, and subsequent tasks are not processed until it completes. The shortest-job-first algorithm means that the scheduler preferentially allocates a computing unit to the task with the shortest predicted execution time. The load-balancing scheduling algorithm monitors the load of each computing unit in real time and distributes new tasks to the computing unit with the lightest load, thereby realizing load balancing among the computing units.
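The load-balancing variant can be sketched in a few lines: pick the computing unit whose queue is currently shortest. The unit names and the queue-length load metric are illustrative choices.

```python
def pick_target_unit(loads: dict) -> str:
    """Select the target computing unit under load balancing.

    `loads` maps computing-unit ID -> current load (e.g. queued task count,
    an assumed metric); the unit with the lightest load wins."""
    return min(loads, key=loads.get)
```

First-come-first-served, by contrast, would ignore `loads` entirely and dispatch in arrival order; shortest-job-first would apply the same `min` selection to predicted task durations instead of unit loads.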
Further, after the GPGPU access request is a legal request, the method further comprises the steps of splicing the corresponding plaintext instruction and the request data according to a first preset format based on the instruction identification number to obtain first spliced data, and correspondingly, the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, wherein the preset scheduler calls a target computing unit determined from the GPGPU based on the preset scheduling algorithm to process the first spliced data.
That is, if the current GPGPU access request is legal, the algorithm decryption FSM state machine sends the decrypted plaintext instruction to the MUX module; based on the instruction PC, the MUX merges and splices the plaintext instruction with the relevant request data cached by the instruction parameter caching module according to the user's first preset format, and then sends the obtained first spliced data to the instruction decryption result caching module, which sequentially outputs it to the computation scheduling module so that the preset scheduler invokes each corresponding computing unit to complete the computation task. Using the instruction identification number to splice the instruction and the request data keeps the correspondence between the instruction and its parameters accurate, ensuring consistency and accuracy during data transmission; in addition, the data can be transmitted as a whole, reducing the extra overhead caused by multiple transmissions and improving data transmission efficiency.
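The splicing step can be sketched as framing the instruction and its request data into one tagged message. The concrete frame layout below (4-byte ID, 2-byte instruction length, then the two payloads) is an invented stand-in for the "first preset format", which the patent leaves user-defined.

```python
import struct

def splice(instr_id: int, plaintext_instr: bytes, request_data: bytes) -> bytes:
    """Assumed first preset format: [id:4][len(instr):2][instr][data]."""
    header = struct.pack(">IH", instr_id, len(plaintext_instr))
    return header + plaintext_instr + request_data

def unsplice(frame: bytes):
    """Recover (instruction ID, plaintext instruction, request data)."""
    instr_id, n = struct.unpack_from(">IH", frame)
    return instr_id, frame[6:6 + n], frame[6 + n:]
```

Because the frame carries the instruction identification number, the receiving scheduler can re-associate the instruction with its parameters without a second transfer, which is exactly the overhead saving described above.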
In addition, after verifying the validity of the GPGPU access request based on the plaintext instruction, the method further comprises the step of discarding the GPGPU access request if the GPGPU access request is an illegal request so as to prohibit execution of the target computing unit determined from the GPGPU based on a preset scheduling algorithm for processing the plaintext instruction and the request data. That is, if the current GPGPU access request is an illegal request, the GPGPU access request is directly discarded without further calculation, that is, the execution of the step of calling the target calculation unit determined from the GPGPU based on the preset scheduling algorithm to process the instruction and the request data is prohibited.
Step S14, the processing result sent by the target computing unit through a bus is obtained, the processing result is encrypted by utilizing a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then the ciphertext result is sent to the target host, so that the target host can send the ciphertext result to the remote user node.
In this embodiment, the security control module obtains the processing result sent by the target computing unit through the bus, and similarly, the security control module may also configure different encryption algorithms for the user according to the information of the legal user, that is, encrypt the processing result by using the target encryption algorithm corresponding to the remote user node to obtain the ciphertext result, and then send the ciphertext result to the host through the PCIe interface in the form of DMA, so that the target host sends the ciphertext result to the remote user node. Therefore, the safety protection efficiency of GPGPU access can be improved through the scheme, physical isolation between the safety control module and the computing unit and between the safety control module and the dispatcher is realized, the user can be helped to better realize uniform dispatching control of the GPGPU, and the system performance is remarkably improved.
It should be noted that, the security control module is provided with a processing result output encryption architecture, so as to encrypt the processing result by using a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and the architecture diagram is shown in fig. 6. As can be seen from fig. 6, the processing result output encryption architecture includes a management information configuration module, a processing result receiving module, a computing unit ID comparison extraction module, a GPGPU computing unit ID list, a processing result caching module, an encryption algorithm information selection module, an algorithm encryption FSM state machine, a ciphertext caching module, a MUX data merging module, and a processing result encryption processing caching module.
The management information configuration module is responsible for completing the necessary parameterization and initialization configuration for the other modules and for configuring the management host information. The processing result receiving module receives the processing results output by the GPGPU computing module from an AXI bus and sequentially caches the original, unencrypted processing results in the processing result caching module. The computing unit ID comparison extraction module extracts, according to the computing unit ID of the incoming processing result, the encryption operation code and encryption algorithm data corresponding to the IP information of the remote user node from the GPGPU computing unit ID list. The encryption algorithm information selection module pre-stores a plurality of encryption algorithms corresponding to legal user IPs, such as the symmetric algorithm AES, the hash algorithm SHA, and the asymmetric algorithm RSA; it judges the type of encryption algorithm corresponding to the user IP of the current GPGPU access request according to the encryption operation code and selects the corresponding encryption algorithm and related encryption parameters. The algorithm encryption FSM state machine generates, for the current processing result, a ciphertext in the specific format corresponding to the selected encryption algorithm. The ciphertext caching module caches the ciphertext generated by the algorithm encryption FSM state machine. The MUX data merging module merges and splices the cached ciphertext with the corresponding cached processing result according to a second preset format, and the processing result encryption processing caching module stores the spliced data and outputs it in sequence for return to the user.
Thus, as shown in fig. 7, in some embodiments, the step of encrypting the processing result using the target encryption algorithm corresponding to the remote user node to obtain the ciphertext result may include:
Step S141, determining the unit identification number of the target computing unit, and acquiring the encryption operation code and encryption algorithm data corresponding to the IP information of the remote user node from a computing unit identification list corresponding to the unit identification number.
In this embodiment, the processing result receiving module receives the processing result from the target computing unit, determines the unit identification number of the target computing unit, and caches the original unencrypted processing result in the processing result caching module, indexed by the computing unit ID (unit identification number). Further, the computing unit ID comparison extraction module extracts, according to the unit identification number of the target computing unit for the incoming processing result, the encryption operation code and related encryption algorithm data corresponding to the IP information of the remote user node from the corresponding computing unit identification list.
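The extraction step mirrors the decryption-side table lookups: the unit identification number indexes a list entry holding the encryption operation code and algorithm data tied to the requesting user's IP. All entry values below are invented sample data.

```python
# GPGPU computing unit ID list (hypothetical contents).
UNIT_ID_LIST = {
    5: {"user_ip": "10.0.0.5", "enc_opcode": 0xA1, "enc_data": {"key_id": 17}},
}

def extract_encryption_info(unit_id: int):
    """Return (encryption opcode, encryption algorithm data) for the
    computing unit that produced the result, or None if the ID is unknown."""
    entry = UNIT_ID_LIST.get(unit_id)
    if entry is None:
        return None
    return entry["enc_opcode"], entry["enc_data"]
```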
And step S142, after establishing an association relation between the encryption algorithm data and the unit identification number, sending the association relation to a local preset encryption state machine, and acquiring a target encryption algorithm corresponding to the encryption operation code from a preset kernel.
In this embodiment, the encryption algorithm information selection module pre-stores a plurality of encryption algorithms corresponding to legal user IPs, such as the symmetric algorithm AES, the hash algorithm SHA, and the asymmetric algorithm RSA, and judges the type of encryption algorithm corresponding to the user IP of the current GPGPU access request according to the encryption operation code, so as to select the corresponding target encryption algorithm and related encryption parameters from a preset kernel.
And step S143, after establishing an association relation between the target encryption algorithm and the unit identification number, transmitting the association relation to the encryption state machine so as to encrypt the processing result by utilizing the target encryption algorithm and the encryption algorithm data in the encryption state machine based on the unit identification number to obtain a ciphertext result.
In this embodiment, the unit identification numbers of the target encryption algorithm and the corresponding computing units are marked and sent to the algorithm encryption FSM state machine, the algorithm encryption FSM state machine generates a ciphertext result of a specific format corresponding to the encryption algorithm for the current processing result according to the target encryption algorithm and related encryption algorithm data corresponding to the unit identification numbers, and sends the ciphertext result to the ciphertext buffer module with the corresponding computing unit ID of the processing result as a mark.
Further, after the processing result is encrypted by using the target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, the method further comprises the steps of splicing the ciphertext result and the processing result according to a second preset format based on the unit identification number to obtain second spliced data, and correspondingly, the step of sending the ciphertext result to the target host so that the target host sends the ciphertext result to the remote user node comprises the step of sending the second spliced data to the target host so that the target host sends the second spliced data to the remote user node.
That is, the MUX data merging module takes the ciphertext result cached by the ciphertext caching module and the original unencrypted processing result cached by the processing result caching module, marks them with the unit identification number, merges and splices them according to the user's second preset format, and sends the obtained second spliced data to the processing result encryption processing caching module for storage. Further, the processing result encryption processing caching module sequentially outputs the second spliced data, marked by the unit identification number, to the target host until the transmission is completed, and the target host subsequently transmits the second spliced data to the remote user node. Splicing the ciphertext result with the processing result by unit identification number keeps the correspondence between each computing unit and its result accurate, ensuring consistency and accuracy during data transmission. In addition, the data can be transmitted as a whole, reducing the extra overhead caused by multiple transmissions and improving data transmission efficiency.
Therefore, after receiving the GPGPU access request sent by the remote user node, the target host in the application needs to analyze the GPGPU access request first to convert the GPGPU access request into a relevant machine code stream identifiable by the GPGPU, so as to send the ciphertext instruction and the request data carried in the GPGPU access request to the security control module in the GPGPU. The security control module firstly needs to decrypt the ciphertext instruction by utilizing a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, so as to verify the validity of the GPGPU access request based on the plaintext instruction. That is, the security control module in the application is provided with a plurality of decryption algorithms, and different users have independent decryption protection modes, namely, the security control module can configure different decryption algorithms for the users according to the information of legal users, and the flexibility is high. Further, if it is determined that the access is legal, the plaintext instruction and the request data are sent to a preset scheduler through a bus, so that the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and a processing result is obtained. That is, communication is realized between the safety control module and the preset scheduler through a bus, so that physical isolation between the safety control module and the preset scheduler is realized, and different processes are not disturbed. 
And the safety control module and the computing unit are communicated through the bus, so that the safety control module obtains the processing result sent by the target computing unit through the bus, and in the same way, the safety control module can configure different encryption algorithms for the user according to the information of the legal user, namely, the processing result is encrypted by utilizing the target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then the ciphertext result is sent to the host, so that the target host can send the ciphertext result to the remote user node. Therefore, the safety protection efficiency of GPGPU access can be improved through the scheme, physical isolation between the safety control module and the computing unit and between the safety control module and the dispatcher is realized, the user can be helped to better realize uniform dispatching control of the GPGPU, and the system performance is remarkably improved.
Referring to fig. 8, an embodiment of the present application discloses a security call device of a GPGPU, which is applied to a security control module in the GPGPU, and the device includes:
The information acquisition module 11 is used for acquiring a ciphertext instruction and request data carried in a GPGPU access request sent by a remote user node after the GPGPU access request sent by the target host is analyzed;
The decryption verification module 12 is configured to decrypt the ciphertext instruction by using a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, and verify validity of the GPGPU access request based on the plaintext instruction;
the computing module 13 is configured to send the plaintext instruction and the request data to a preset scheduler through a bus if the GPGPU access request is a legal request, so that the preset scheduler invokes a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and obtain a processing result;
And the encryption module 14 is configured to obtain the processing result sent by the target computing unit through the bus, encrypt the processing result by using a target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then send the ciphertext result to the target host, so that the target host sends the ciphertext result to the remote user node.
Therefore, after receiving the GPGPU access request sent by the remote user node, the target host in the application needs to analyze the GPGPU access request first to convert the GPGPU access request into a relevant machine code stream identifiable by the GPGPU, so as to send the ciphertext instruction and the request data carried in the GPGPU access request to the security control module in the GPGPU. The security control module firstly needs to decrypt the ciphertext instruction by utilizing a target decryption algorithm corresponding to the remote user node to obtain a plaintext instruction, so as to verify the validity of the GPGPU access request based on the plaintext instruction. That is, the security control module in the application is provided with a plurality of decryption algorithms, and different users have independent decryption protection modes, namely, the security control module can configure different decryption algorithms for the users according to the information of legal users, and the flexibility is high. Further, if it is determined that the access is legal, the plaintext instruction and the request data are sent to a preset scheduler through a bus, so that the preset scheduler calls a target computing unit determined from the GPGPU based on a preset scheduling algorithm to process the plaintext instruction and the request data, and a processing result is obtained. That is, communication is realized between the safety control module and the preset scheduler through a bus, so that physical isolation between the safety control module and the preset scheduler is realized, and different processes are not disturbed. 
And the safety control module and the computing unit are communicated through the bus, so that the safety control module obtains the processing result sent by the target computing unit through the bus, and in the same way, the safety control module can configure different encryption algorithms for the user according to the information of the legal user, namely, the processing result is encrypted by utilizing the target encryption algorithm corresponding to the remote user node to obtain a ciphertext result, and then the ciphertext result is sent to the host, so that the target host can send the ciphertext result to the remote user node. Therefore, the safety protection efficiency of GPGPU access can be improved through the scheme, physical isolation between the safety control module and the computing unit and between the safety control module and the dispatcher is realized, the user can be helped to better realize uniform dispatching control of the GPGPU, and the system performance is remarkably improved.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Specifically, the electronic device comprises at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is configured to store a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the GPGPU secure calling method executed by the electronic device as disclosed in any of the foregoing embodiments.
In this embodiment, the power supply 23 is configured to provide working voltages for each hardware device on the electronic device 20, the communication interface 24 is capable of creating a data transmission channel with an external device for the electronic device 20, and the communication protocol to be followed is any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein, and the input/output interface 25 is configured to obtain external input data or output data to the external device, and the specific interface type of the input/output interface may be selected according to the specific application needs and is not specifically limited herein.
Processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc. The processor 21 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 21 may also include a main processor and a coprocessor: the main processor is the processor for processing data in the wake-up state, also called the CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 21 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 21 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 22 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, and the resources stored thereon include an operating system 221, a computer program 222, and data 223, and the storage may be temporary storage or permanent storage.
The operating system 221 is used for managing and controlling the various hardware devices on the electronic device 20 and the computer program 222, so as to implement the processor 21's operation and processing of the mass data 223 in the memory 22; it may be Windows, Unix, Linux, etc. In addition to the computer program that performs the GPGPU secure calling method executed by the electronic device 20 as disclosed in any of the previous embodiments, the computer program 222 may further include computer programs for other specific tasks. The data 223 may include, in addition to data received by the electronic device and sent by external devices, data collected through its own input/output interface 25, and so on.
Further, the embodiment of the application also discloses a computer readable storage medium, wherein the storage medium stores a computer program, and when the computer program is loaded and executed by a processor, the steps of the security calling method of the GPGPU disclosed in any embodiment are realized.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not intended to be limiting.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes the method, apparatus, device and storage medium for secure call of GPGPU provided by the present invention in detail, and specific examples are provided herein to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is only for aiding in understanding the method and core concept of the present invention, and meanwhile, to those skilled in the art, according to the concept of the present invention, there are variations in the specific embodiments and application ranges, so the disclosure should not be construed as limiting the present invention.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510266147.3A CN119760704B (en) | 2025-03-07 | 2025-03-07 | A GPGPU secure calling method, device, equipment and medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119760704A (en) | 2025-04-04 |
| CN119760704B (en) | 2025-06-17 |
Family
ID=95191303
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510266147.3A (granted as CN119760704B, active) | A GPGPU secure calling method, device, equipment and medium | 2025-03-07 | 2025-03-07 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119760704B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114385987A (en) * | 2021-12-14 | 2022-04-22 | 深圳市梦网物联科技发展有限公司 | Dynamic multi-factor identity authentication and certification method and storage medium |
| CN117081815A (en) * | 2023-08-23 | 2023-11-17 | 平安银行股份有限公司 | Method, device, computer equipment and storage medium for data security transmission |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114124364B (en) * | 2020-08-27 | 2024-05-24 | 国民技术股份有限公司 | Key security processing method, device, equipment and computer readable storage medium |
| US12353520B2 (en) * | 2020-11-02 | 2025-07-08 | Intel Corporation | Graphics security with synergistic encryption, content-based and resource management technology |
| US11625337B2 (en) * | 2020-12-26 | 2023-04-11 | Intel Corporation | Encoded pointer based data encryption |
| CN118449771A (en) * | 2024-05-31 | 2024-08-06 | 中国移动通信集团设计院有限公司 | Security authentication methods, devices, systems, equipment, media and products |
| CN119226220A (en) * | 2024-09-25 | 2024-12-31 | 山东云海国创云计算装备产业创新中心有限公司 | A data transmission method, device, equipment, medium and computer program product |
| CN118898084B (en) * | 2024-10-08 | 2025-01-24 | 杭州卡方分布信息科技有限公司 | Client security protection method, device, computer equipment and storage medium |
- 2025-03-07: application CN202510266147.3A filed in China; granted as CN119760704B (active)
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9807066B2 (en) | Secure data transmission and verification with untrusted computing devices | |
| CN111737366B (en) | Private data processing method, device, equipment and storage medium of block chain | |
| US9948616B2 (en) | Apparatus and method for providing security service based on virtualization | |
| US11025415B2 (en) | Cryptographic operation method, method for creating working key, cryptographic service platform, and cryptographic service device | |
| EP0876026A2 (en) | Programmable crypto processing system and method | |
| CN104951712B (en) | A kind of data security protection method under Xen virtualized environment | |
| US8200960B2 (en) | Tracking of resource utilization during cryptographic transformations | |
| CN110138818B (en) | Method, website application, system, device and service back-end for transmitting parameters | |
| US12189775B2 (en) | Seamless firmware update mechanism | |
| CN104951688B (en) | Suitable for the exclusive data encryption method and encrypted card under Xen virtualized environment | |
| CN110417756A (en) | Cross-network data transmission method and device | |
| CN112954050A (en) | Distributed management method and device, management equipment and computer storage medium | |
| US20240356909A1 (en) | Signing messages using public key cryptography and certificate verification | |
| CN119760704B (en) | A GPGPU secure calling method, device, equipment and medium | |
| CN116527257B (en) | Heterogeneous computing system and resource processing method based on same | |
| CN116070240B (en) | Data encryption processing method and device of multi-chip calling mechanism | |
| CN115801286A (en) | Calling method, device, equipment and storage medium of microservice | |
| CN114969851A (en) | A FPGA-based data processing method, device, equipment and medium | |
| Zhong et al. | SAED: A self-adaptive encryption and decryption architecture | |
| Bian et al. | Asyncgbp: Unleashing the potential of heterogeneous computing for SSL/TLS with GPU-based provider | |
| Xiao et al. | Hardware/software adaptive cryptographic acceleration for big data processing | |
| CN119759593B (en) | GPGPU scheduling method, device, equipment and medium | |
| Boubakri et al. | Architectural Security and Trust Foundation for RISC-V | |
| US12432048B2 (en) | Agentless single sign-on techniques | |
| CN118300832B (en) | Multi-device access platform processing method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||