US20150042665A1 - Gpgpu systems and services - Google Patents
- Publication number
- US20150042665A1 (application US14/335,105)
- Authority
- US
- United States
- Prior art keywords
- gpgpu
- cluster
- units
- compute cluster
- access
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/504—Resource capping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/508—Monitor
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Stored Programmes (AREA)
- Debugging And Monitoring (AREA)
Abstract
Graphics processing units (GPUs) deployed in general purpose GPU (GPGPU) units are combined into a GPGPU cluster. Access to the GPGPU cluster is then offered as a service to users who can use their own computers to communicate with the GPGPU cluster. The users develop applications to be run on the cluster and a profiling module tracks the applications' resource utilization and can report it to the user and to a subscription server. The user can examine the report to thereby optimize the application or the cluster's configuration. The subscription server can interpret the report to thereby invoice the user or otherwise govern the users' access to the cluster.
Description
- The present application is a continuation and claims the priority benefit of U.S. patent application Ser. No. 12/895,554 filed Sep. 30, 2010, which claims the priority benefit of U.S. provisional application 61/261,973 filed Nov. 17, 2009 and U.S. provisional application 61/247,237 filed Sep. 30, 2009, the disclosures of which are incorporated herein by reference.
- 1. Field of the Invention
- Embodiments relate to computing clusters, cloud computing, and general purpose computing based on graphic processor units. Embodiments also relate to massive computing power offered on a subscription basis. Embodiments additionally relate to profiling massively parallel programs on a variety of cluster configurations.
- 2. Description of the Related Art
- Massive computing capability has traditionally been provided by highly specialized and very expensive supercomputers. As technology advances, however, inexpensive desktop and server hardware has steadily supplanted expensive high end systems. More recently, inexpensive hardware has been gathered together to form computing clusters. The individual computers in a compute cluster are typically not as expensive or reliable as their supercomputer and mainframe forebears but overcome those limitations with sheer numbers.
- The drawback of compute clusters is that they are difficult to maintain and to program. In order to harness the power of a compute cluster, a program must be split into a great number of pieces and the multitudinous results later reconciled and reassembled. Furthermore, the program itself must be fault tolerant because there is a risk of individual failures amongst the great number of inexpensive computers.
- Desktop and gaming computers often conserve central processing unit (CPU) resources by employing a graphics subsystem dedicated to driving one or more computer displays. A graphics processing unit (GPU) is at the heart of the graphics subsystem. The CPU is a general purpose processor designed to efficiently run a great variety of algorithms. Graphics processing, however, consists of a limited and well known set of algorithms. GPUs are specialized processors that are very good at graphics processing but not necessarily good at other tasks.
- Another recent development is the identification of algorithms, other than graphics algorithms, that are well suited for GPUs. These algorithms currently require expert programming in order to put them into a form that a GPU can run. Further optimization is required for a GPU to run the algorithm well. The effort is often worthwhile because the speedup can be orders of magnitude. Unfortunately, properly configured computing systems having the software tools required for developing algorithms to run on GPUs are rare. As such, expertise in the required programming techniques is rare and difficult to develop.
- Systems and methods for providing GPU powered compute clusters and for deploying non-graphics applications to efficiently run on those GPU powered compute clusters are needed.
- The following summary is provided to facilitate an understanding of some of the innovative features unique to the embodiments and is not intended to be a full description. A full appreciation of the various aspects of the embodiments can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
- It is therefore an aspect of the embodiments to provide a service granting remote users access to a general purpose GPU (GPGPU) based compute cluster. The GPGPU cluster consists of a number of GPGPU units. Each GPGPU unit is a self-contained computer having an enclosure, CPU, cooling fan, GPU, memory for the CPU and GPU, and a communications interface.
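- By way of a non-limiting illustration, the following Python sketch models a GPGPU unit and a cluster of such units. The class and field names (GPGPUUnit, GPGPUCluster, and so on) are hypothetical and are not part of the disclosed system; they merely show one way the cluster inventory could be represented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GPGPUUnit:
    """One self-contained computer in the cluster (hypothetical model)."""
    unit_id: str
    cpu_model: str
    gpu_model: str
    gpu_count: int
    cpu_memory_gb: int
    gpu_memory_gb: int
    interface: str  # e.g. "infiniband" or "ethernet"

@dataclass
class GPGPUCluster:
    """A named collection of GPGPU units offered as one configuration."""
    name: str
    units: List[GPGPUUnit] = field(default_factory=list)

    def total_gpus(self) -> int:
        return sum(u.gpu_count for u in self.units)

# Example: a three-unit cluster wired with InfiniBand.
cluster = GPGPUCluster(
    name="configuration-1",
    units=[
        GPGPUUnit(f"unit-{i}", "x86-64", "generic-gpu", 1, 16, 4, "infiniband")
        for i in range(3)
    ],
)
print(cluster.total_gpus())  # -> 3
```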
- It is another aspect of the embodiments to provide a subscription server module. A user accesses the subscription server module through the user's own computer. The subscription server module governs the user's access to the GPGPU units, related hardware, and related software tools.
- The user provides a GPGPU application to be run on the GPGPU cluster. The GPGPU application can be developed on the user's computer or on the GPGPU cluster itself. The user can obtain the application development tools from the GPGPU cluster, from the entity providing access to the GPGPU cluster, or from another source.
- The GPGPU application can be designed to run on a specific configuration of GPGPU units or can otherwise specify a configuration. The GPGPU application has GPU instructions and application data. The GPUs in the GPGPU units can operate on the application data while executing the GPU instructions. Furthermore, the GPGPU cluster can be interconnected in accordance with the configuration and the GPGPU application then run.
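- The configuration carried by a GPGPU application could, for example, be expressed as a small machine-readable document. The sketch below assumes a JSON-style dictionary with hypothetical field names; the disclosure does not prescribe any particular format or schema.

```python
import json

# Hypothetical configuration specification carried alongside the GPU
# instructions and application data of a GPGPU application.
spec = {
    "units": 4,              # number of GPGPU units requested
    "gpus_per_unit": 2,
    "gpu_memory_gb": 4,
    "interconnect": "infiniband",
}

def validate_spec(spec: dict, available_units: int) -> bool:
    """Reject specifications the cluster cannot satisfy."""
    required = {"units", "gpus_per_unit", "gpu_memory_gb", "interconnect"}
    if not required <= spec.keys():
        return False
    return spec["units"] <= available_units

print(json.dumps(spec))
print(validate_spec(spec, available_units=8))  # -> True
```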
- It is a further aspect of the embodiments to provide a profiling module. The profiling module tracks the GPGPU cluster resources consumed by the GPGPU application. The resources can include the number of GPGPU units, the amounts of memory, the amounts of processing time, the numbers of GPU cores, and similar information that the user can interpret to optimize the GPGPU application. The GPGPU application can be optimized by altering the control flow of the instructions, the flow of the data, or the configuration of the GPGPU cluster.
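- A minimal sketch of the kind of report the profiling module might return is shown below. The ProfileReport class and its derived utilisation figure are illustrative assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass, asdict

@dataclass
class ProfileReport:
    """Resources consumed by one run of a GPGPU application (hypothetical)."""
    gpgpu_units: int
    gpu_cores: int
    memory_gb_peak: float
    gpu_seconds: float
    wall_seconds: float

    def summary(self) -> dict:
        # The user interprets these figures to retune the control flow,
        # the data flow, or the cluster configuration itself.
        d = asdict(self)
        d["gpu_utilisation"] = self.gpu_seconds / (self.wall_seconds * self.gpgpu_units)
        return d

report = ProfileReport(gpgpu_units=2, gpu_cores=480, memory_gb_peak=3.2,
                       gpu_seconds=118.0, wall_seconds=75.0)
print(report.summary())
```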
- The accompanying figures, in which like reference numerals refer to identical or functionally similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate aspects of the embodiments and, together with the background, brief summary, and detailed description serve to explain the principles of the embodiments.
- FIG. 1 illustrates a subscription based service by which a user can test an algorithm, application, or utility upon a number of different GPGPU configurations in accordance with aspects of the embodiments;
- FIG. 2 illustrates one possible GPGPU configuration in accordance with aspects of the embodiments; and
- FIG. 3 illustrates a GPGPU configuration having numerous GPGPU units in accordance with aspects of the embodiments.
- The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof. In general, the figures are not to scale.
- Graphics processing units (GPUs) deployed in general purpose GPU (GPGPU) units are combined into a GPGPU cluster. Access to the GPGPU cluster is then offered as a service to users who can use their own computers to communicate with the GPGPU cluster. The users develop applications to be run on the cluster and a profiling module tracks the applications' resource utilization and can report it to the user and to a subscription server. The user can examine the report to thereby optimize the application or the cluster's configuration. The subscription server can interpret the report to thereby invoice the user or otherwise govern the users' access to the cluster.
- FIG. 1 illustrates a subscription based service by which a user 101 can test an algorithm, application, or utility upon a number of different GPGPU configurations 105, 106, 107. The user 101 can access the user's computer 102 to develop, compile, etc., a GPGPU application. A service provider can provide the user with access to a number of different GPGPU configurations such as GPGPU configuration 1 105, GPGPU configuration 2 106, and GPGPU configuration 3 107. The user 101 can download the application to a suitably configured GPGPU cluster and run it. A data storage array 108 can store data for the user such that the data is available to the user's application. A profiling module 104 can track the number of processors, amount of processing time, amount of memory, and other resources utilized by the application and report those utilizations back to the user.
- The user's computer 102 connects to the service using a communications network. As illustrated, a second communications network can interconnect the configurations, modules, and data storage array 108. For example, the user's computer might communicate over the internet whereas the GPGPU cluster communicates internally using InfiniBand or some other very high speed interconnect. The various networks must also include network hardware as required (not shown) such as routers and switches.
- A subscription module 103 can control the user's access to the GPGPU configurations such that only certain users have access. The subscription module 103 can also limit the amount of resources consumed by the user, such as how much data can be stored in the data storage array 108 or how much total GPU time can be consumed by the user. Alternatively, the subscription module can track the user's resource consumption such that the user 101 can be invoiced after the fact or on a pay-as-you-go basis.
- The user's application can include a specification of the GPGPU cluster configuration. In this case, the user can produce multiple applications that are substantially similar with the exception that each specifies a different configuration. Testing and profiling the different applications provides the user with information leading to the selection of a preferred GPGPU cluster configuration for running the application. As such, the cluster configuration can be tuned to run an application such as a molecular dynamics simulator. Alternatively, the application can be tuned for the configuration.
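- A hedged sketch of the quota enforcement and pay-as-you-go invoicing that the subscription module 103 could perform is given below. The SubscriptionModule class, its metering unit (GPU hours), and the billing rate are all hypothetical assumptions, not the disclosed implementation.

```python
class SubscriptionModule:
    """Hypothetical quota and metering logic for the subscription module."""

    def __init__(self, gpu_hour_quota: float, rate_per_gpu_hour: float):
        self.gpu_hour_quota = gpu_hour_quota
        self.rate = rate_per_gpu_hour
        self.consumed = 0.0

    def authorize(self, requested_gpu_hours: float) -> bool:
        # Only admit jobs that fit inside the remaining quota.
        return self.consumed + requested_gpu_hours <= self.gpu_hour_quota

    def record(self, gpu_hours: float) -> None:
        # Called with figures reported by the profiling module.
        self.consumed += gpu_hours

    def invoice(self) -> float:
        # Pay-as-you-go: bill exactly what the profiling module measured.
        return self.consumed * self.rate

subs = SubscriptionModule(gpu_hour_quota=100.0, rate_per_gpu_hour=0.90)
if subs.authorize(12.5):
    subs.record(12.5)
print(subs.invoice())  # -> 11.25
```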
- A service provider can provide access to a number of different cluster configurations. A user accessing the service can submit an application that is then run and profiled on each of the available configurations or on a subset of the available configurations. This embodiment eases the user's burden of generating numerous cluster configuration specifications because those specifications are available from the service provider.
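- One way such a sweep could be driven is sketched below. The profile_on helper and its canned figures are placeholders standing in for actual submission to the service and for the reports returned by the profiling module 104; only the selection logic is of interest.

```python
# Hypothetical sweep: run the same application on every configuration the
# provider offers, collect the profiling reports, and keep the preferred
# configuration for production runs.

def profile_on(config_name: str) -> dict:
    # Stand-in for submitting the job and reading back the profiling report;
    # real figures would come from the profiling module, not a lookup table.
    fake_reports = {
        "configuration-1": {"wall_seconds": 420.0, "gpu_hours": 0.35},
        "configuration-2": {"wall_seconds": 310.0, "gpu_hours": 0.52},
        "configuration-3": {"wall_seconds": 295.0, "gpu_hours": 0.98},
    }
    return fake_reports[config_name]

configs = ["configuration-1", "configuration-2", "configuration-3"]
reports = {name: profile_on(name) for name in configs}

fastest = min(reports, key=lambda n: reports[n]["wall_seconds"])
cheapest = min(reports, key=lambda n: reports[n]["gpu_hours"])
print(f"fastest: {fastest}, cheapest: {cheapest}")
```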
- FIG. 2 illustrates one possible GPGPU configuration. GPGPU configuration A 201 has a CPU 202, memory 203, a network interface 204, and three GPUs 205. In GPGPU configuration A 201, a single computer holds all the processing capability. Note that GPGPU configuration A 201 can be deployed as a unit within a much larger configuration that contains numerous computers. However, should GPGPU configuration A encompass all of the available resources, then the subscription server module and the profiling module can run as application programs on the single computer.
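- On a single-computer configuration such as GPGPU configuration A 201, the profiling module could enumerate the local GPUs directly. The sketch below shells out to nvidia-smi, assuming that utility is installed on the unit; it is one possible approach, not the disclosed mechanism.

```python
import subprocess

def local_gpu_inventory() -> list:
    """Enumerate GPUs on a single-computer configuration.

    A sketch that shells out to nvidia-smi (assumed to be installed on the
    unit); on a machine without it, the function simply reports no GPUs.
    """
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return [line.strip() for line in out.splitlines() if line.strip()]

print(local_gpu_inventory())  # e.g. three lines for the three GPUs of FIG. 2
```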
- FIG. 3 illustrates a GPGPU configuration having numerous GPGPU units. GPGPU configuration B 301 has a control computer 302, GPGPU unit 1 303, and GPGPU unit 2 304 interconnected by a communications network 306. Note that each of the GPGPU units has a single GPU 205 and the control computer 302 has none. As such, this is a non-limiting example because a controller can contain multiple GPUs, as can each of the GPGPU units. The communications network can be a single technology such as InfiniBand or Ethernet. Alternatively, the communications network can be a combination of technologies. In any case, the communications module 305 in each computer has the hardware, firmware, and software required for operation with the communications network 306. The control computer 302 can run the subscription server module and the profiling module as application programs.
- It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also, various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.
Claims (7)
1. (canceled)
2. A method for offering access to a general purpose graphics processing unit (GPGPU) compute cluster, the method comprising:
communicating with a user computer seeking access to the GPGPU compute cluster to control access by the user computer to the GPGPU compute cluster;
determining that the user computer is presently subscribed to and has the requisite permissions to access one or more GPGPU units in the GPGPU compute cluster;
receiving a specification for submission to the GPGPU compute cluster, the specification received from the user computer seeking access to the GPGPU compute cluster;
executing the specification at the GPGPU compute cluster to produce one or more computational results as defined by the specification;
tracking resource utilization data during execution of the specification by one or more GPGPU units in the GPGPU compute cluster; and
controlling utilization of one or more GPGPU units in the GPGPU compute cluster during execution of the specification and responsive to the resource utilization data.
3. The method of claim 2 , further comprising storing resource utilization data in a data array communicatively coupled to the GPGPU compute cluster for subsequent control of one or more units in the GPGPU compute cluster during execution of a later received specification.
4. The method of claim 2 , further comprising invoicing a user of the user computer based on the resource utilization data.
5. A method for offering access to a general purpose graphics processing unit (GPGPU) compute cluster, the method comprising:
communicating with a user computer seeking access to the GPGPU compute cluster to control access by the user computer to the GPGPU compute cluster;
determining that the user computer is presently subscribed to and has the requisite permissions to access one or more GPGPU units in the GPGPU compute cluster;
receiving a specification for submission to the GPGPU compute cluster, the specification received from the user computer seeking access to the GPGPU compute cluster;
configuring one or more units in the GPGPU compute cluster in accordance with the specification;
producing one or more computational results as defined by the specification, the computational results generated by the GPGPU compute cluster following configuration as defined by the specification;
tracking resource utilization data during execution of the specification by one or more GPGPU units in the GPGPU compute cluster; and
controlling utilization of one or more GPGPU units in the GPGPU compute cluster during execution of the specification and responsive to the resource utilization data.
6. The method of claim 5 , further comprising:
alternatively configuring one or more units in the GPGPU compute cluster in a manner not set forth in the specification;
producing one or more computational results as defined by the specification, the computational results generated by the alternatively configured GPGPU units in parallel with the computational results generated by the one or more GPGPU units configured as defined by the specification; and
tracking resource utilization data during execution of the specification by the one or more GPGPU units in the alternatively configured GPGPU compute cluster.
7. The method of claim 6 , further comprising identifying the more optimal GPGPU compute cluster configuration for execution of the specification and subsequently executing the specification in accordance with the more optimal configuration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/335,105 US20150042665A1 (en) | 2009-09-30 | 2014-07-18 | Gpgpu systems and services |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US24723709P | 2009-09-30 | 2009-09-30 | |
US26197309P | 2009-11-17 | 2009-11-17 | |
US12/895,554 US8817030B2 (en) | 2009-09-30 | 2010-09-30 | GPGPU systems and services |
US14/335,105 US20150042665A1 (en) | 2009-09-30 | 2014-07-18 | Gpgpu systems and services |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/895,554 Continuation US8817030B2 (en) | 2009-09-30 | 2010-09-30 | GPGPU systems and services |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150042665A1 true US20150042665A1 (en) | 2015-02-12 |
Family
ID=43779819
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/895,554 Active 2032-01-06 US8817030B2 (en) | 2009-09-30 | 2010-09-30 | GPGPU systems and services |
US14/335,105 Abandoned US20150042665A1 (en) | 2009-09-30 | 2014-07-18 | Gpgpu systems and services |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/895,554 Active 2032-01-06 US8817030B2 (en) | 2009-09-30 | 2010-09-30 | GPGPU systems and services |
Country Status (1)
Country | Link |
---|---|
US (2) | US8817030B2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708088A (en) * | 2012-05-08 | 2012-10-03 | 北京理工大学 | CPU/GPU (Central Processing Unit/ Graphic Processing Unit) cooperative processing method oriented to mass data high-performance computation |
CN106155804A (en) * | 2015-04-12 | 2016-11-23 | 北京典赞科技有限公司 | Method and system to the unified management service of GPU cloud computing resources |
CN111913816B (en) * | 2020-07-14 | 2024-08-16 | 长沙景嘉微电子股份有限公司 | Method, device, terminal and medium for realizing clusters in GPGPU (graphics processing Unit) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050055316A1 (en) * | 2003-09-04 | 2005-03-10 | Sun Microsystems, Inc. | Method and apparatus having multiple identifiers for use in making transactions |
US20070118295A1 (en) * | 2005-03-02 | 2007-05-24 | Al-Murrani Samer Waleed Khedhe | Methods and Systems for Designing Animal Food Compositions |
US20090119677A1 (en) * | 2007-02-14 | 2009-05-07 | The Mathworks, Inc. | Bi-directional communication in a parallel processing environment |
US20090182605A1 (en) * | 2007-08-06 | 2009-07-16 | Paul Lappas | System and Method for Billing for Hosted Services |
US7849359B2 (en) * | 2007-11-20 | 2010-12-07 | The Mathworks, Inc. | Parallel programming error constructs |
Non-Patent Citations (1)
Title |
---|
Kindratenko, V. V., Enos, J. J., Shi, G., Showerman, M. T., Arnold, G. W., Stone, J. E., ... & Hwu, W. M. (2009, August). GPU clusters for high-performance computing. In Cluster Computing and Workshops, 2009. CLUSTER'09. IEEE International Conference on (pp. 1-8). IEEE. * |
Also Published As
Publication number | Publication date |
---|---|
US20110074791A1 (en) | 2011-03-31 |
US8817030B2 (en) | 2014-08-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Cheng et al. | DRL-cloud: Deep reinforcement learning-based resource provisioning and task scheduling for cloud service providers | |
Kutzner et al. | Best bang for your buck: GPU nodes for GROMACS biomolecular simulations | |
US9785472B2 (en) | Computing cluster performance simulation using a genetic algorithm solution | |
US7647590B2 (en) | Parallel computing system using coordinator and master nodes for load balancing and distributing work | |
Li et al. | Ai-enabling workloads on large-scale gpu-accelerated system: characterization, opportunities, and implications | |
Guo et al. | Automated exploration and implementation of distributed cnn inference at the edge | |
Wang et al. | Designing cloud servers for lower carbon | |
Zakarya | Energy and performance aware resource management in heterogeneous cloud datacenters | |
Harichane et al. | KubeSC‐RTP: Smart scheduler for Kubernetes platform on CPU‐GPU heterogeneous systems | |
Sievert et al. | A simple MPI process swapping architecture for iterative applications | |
US20150042665A1 (en) | Gpgpu systems and services | |
Yetim et al. | EPROF: An energy/performance/reliability optimization framework for streaming applications | |
Zhang et al. | Scheduling challenges for variable capacity resources | |
Marinescu et al. | An auction-driven self-organizing cloud delivery model | |
Guerrero et al. | A performance/cost model for a CUDA drug discovery application on physical and public cloud infrastructures | |
Hsia et al. | MAD-Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems | |
Jung et al. | A workflow scheduling technique using genetic algorithm in spot instance-based cloud | |
Artail et al. | Speedy cloud: Cloud computing with support for hardware acceleration services | |
Yang et al. | Tear up the bubble boom: Lessons learned from a deep learning research and development cluster | |
US20170153920A1 (en) | Recruiting additional resource for hpc simulation | |
Krzywda et al. | Modeling and simulation of qos-aware power budgeting in cloud data centers | |
Moore et al. | Inflation and deflation of self-adaptive applications | |
Pinel et al. | Energy-efficient scheduling on milliclusters with performance constraints | |
Saeedizade et al. | Scientific workflow scheduling algorithms in cloud environments: a comprehensive taxonomy, survey, and future directions | |
Al Shehri et al. | Evaluation of high-performance computing techniques for big data applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CREATIVEC LLC, NEW MEXICO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCANTLEN, GREG;SCANTLEN, GARY;SIGNING DATES FROM 20140624 TO 20140706;REEL/FRAME:033343/0122 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |