
WO2009087548A2 - Scalable server architecture


Info

Publication number
WO2009087548A2
Authority
WO
WIPO (PCT)
Prior art keywords
servers
application
data processing
server
blade
Application number
PCT/IB2008/055680
Other languages
French (fr)
Other versions
WO2009087548A3 (en)
Inventor
Soumik Sinharoy
Jean Baptiste Broccard
Original Assignee
France Telecom
Application filed by France Telecom
Publication of WO2009087548A2
Publication of WO2009087548A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 - Partitioning or combining of resources
    • G06F9/5072 - Grid computing


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • Hardware Redundancy (AREA)

Abstract

A server system supports a server-based operation including a related application and data processing operation. The server system includes two or more servers, with each of the servers including application processing and data processing portions. Each of the servers is configured to process different application and data processing portions of the application in parallel across the plurality of servers, such that each of the plurality of servers supports both of the related application and data processing operations. Each of the plurality of servers may be a blade server, with each organized as a blade center that may be organized as a Grid computing system. The Grid computing system may be geographically dispersed. The related application and data processing operation may be a telephony operation for creating call data records from a fixed line telephony service, a mobile telephony service and a voice over Internet Protocol telephony service.

Description

SCALABLE SERVER ARCHITECTURE
FIELD OF THE PRESENT SYSTEM:
The present system relates to a method, architecture, and system for providing a scalable processing capability.
BACKGROUND OF THE PRESENT SYSTEM:
Business Intelligence applications are based on customer data. A growing business needs to handle an exponentially increasing data volume with the same or better performance from its Business Intelligence applications. In today's enterprise, a dynamic, scalable, adaptive, real-time infrastructure is necessary to keep up with the challenges of meeting Service Level Agreements (SLAs). Large applications run on high capacity servers called symmetric multiprocessing (SMP) systems. In these systems, the possibilities to scale up (add/upgrade processors, memory and other components) or scale out (add servers) are limited. In addition, any failure of a server results in a collapse of the application until the failed server can be replaced by a backup server or repaired. Typically, these servers are clustered together to provide additional computational capability, although clustering is not failure proof. Clustered servers tend to be located in the same geographic area (e.g., within the same room) as there are constraints on how the servers may be physically connected together. Even with a cluster of servers, a sleeping CPU is retained to replace a failed server. Furthermore, if a server fails, the transfer of the whole application to the sleeping CPU takes time, during which an application previously running on the cluster of servers is halted. Grid computing, also called distributed computing, has traditionally been a key area of interest for the High Performance Computing (HPC) community in academic, bio-molecular and life sciences research, and has also been widely used in nuclear research by government sponsored R&D. In recent years there has been significant adoption of grid computing in key vertical businesses, such as Financial Services, Telecommunications and Internet growth firms.
Grid computing applies several computer, storage and network resources in a network to solve a single problem. Using Grid computing, geographically dispersed resources may work in aggregation and coherence to deliver improved performance, higher Quality of Service (QoS), better utilization of resources and easier access to data. In Grid computing, resources grouped together can act as a collaborative virtual organization sharing application and data in an open heterogeneous environment. Grids can be comprised of computer resources that are locally consolidated or distributed in an open environment. Grids have an attribute of being self-managing in that each of the resources of the Grid is aware of each of the other resources, typically as a result of a middleware application.
One draw of industry's move to grid computing is that, with Grid computing, computer assets may be treated as a utility. With Grid computing, the location of data and the site of computation are transparent to the user of the HPC infrastructure. Any enterprise-grade application can leverage a grid infrastructure to provide operating benefits. The benefits of Grid computing include, but are not limited to, a lower Total Cost of Ownership (TCO) through the use of commodity hardware, and high availability through the factoring out of most single points of failure. Policy-based dynamic provisioning capabilities become an enabler for a real-time enterprise. Grid computing provides a virtualized infrastructure to the end user.
Data and computing centers that manipulate huge amounts of data daily are physically organized as separate server portions, wherein a first server portion is dedicated to computing (computing layer, whether grid or cluster), a second server portion is dedicated to data (data layer) and a third server portion is dedicated to storage (storage layer).
A problem exists in this physical structure in that if any server portion fails (e.g., a failure that affects a back-plane of a server in the data layer), the entire process is halted, since all processing is performed through the layers in a serial manner. For example, the processing layer may operate on data that is temporarily stored in the data layer. Once the processing layer is finished processing the data, the processed data is stored in the storage layer. However, if the processing layer fails, then the data layer and storage layer are halted since no data is processed by the failed processing layer.
Rack-based servers have a further problem: while the system can be scaled out by adding other servers, rack-based systems cannot be readily or continuously scaled out since the racks are limited (e.g., due to available areas for additional servers) in the resources that may be added. Blade-based servers solve this problem by separating tasks across the blade devices, also called nodes. In this way, the system may be scaled out by adding additional blades (nodes) to the clustered application, or scaled up by adding/upgrading processors, cache, bus architecture, etc., on the blades to increase performance. Further, since the blade servers are removable, they may also be replaced and upgraded to scale up the capabilities of a given system. In one configuration, the blade servers are grouped together in two or more groups within a rack system to form two or more blade centers, each including multiple blade servers.
For example, Oracle offers a grid architecture comprising blade-based servers. A logical layer (grid middleware) is implemented in the Oracle architecture to divide an application over the data and computing blade centers. The physical implementation is an organization into basic racks comprising the different blade centers, with each one dedicated to a given processing layer. This Oracle solution nevertheless has a problem in that, similar to the rack-based system, if any one of the blade centers fails, the processing layer is halted, resulting in a halting of all layers. The Oracle solution is called Oracle 10g RAC (real application cluster). While the Oracle solution may be scalable due to the use of blade servers, it is not failure proof in that the physical structure still manages the data serially across the computing, data and storage layers.
It is an object of the present system to overcome disadvantages and/or make improvements in the prior art.
SUMMARY OF THE PRESENT SYSTEM:
The present system includes a method, architecture, and system for providing a scalable processing capability. A server system supports a server-based operation including a related application and data processing operation. The server system includes two or more servers, with each of the servers including application processing and data processing portions. Each of the servers is configured to process different application and data processing portions of the application in parallel across the plurality of servers, such that each of the plurality of servers supports both of the related application and data processing operations. Each of the plurality of servers may be a blade server, with each organized as a blade center. The blade centers may be organized as a Grid computing system. The Grid computing system may be geographically dispersed. The related application and data processing operation may be a telephony operation for creating call data records from at least one of a fixed line telephony service, a mobile telephony service and a voice over Internet Protocol telephony service.
The servers may be configured to support scaling up by receiving an increase in at least one of computing and memory capabilities of at least one of the servers without halting processing of the related application and data processing operation by another one of the servers. The servers may be configured to support scaling out by receiving an additional server without halting processing of the related application and data processing operation by the plurality of servers.
BRIEF DESCRIPTION OF THE DRAWINGS:
The invention is explained in further detail, and by way of example, with reference to the accompanying drawings wherein:
FIG. 1 shows a logical and physical configuration of a system architecture in accordance with an embodiment of the present system;
FIG. 2 shows a logical and physical configuration of a system architecture in accordance with an embodiment of the present system;
FIG. 3 shows a portion of a server system in accordance with an embodiment of the present system;
FIG. 4 shows a logical and physical configuration of a system architecture in accordance with an embodiment of the present system; and
FIG. 5 shows a physical implementation of a blade center architecture in accordance with an embodiment of the present system.
DETAILED DESCRIPTION OF THE PRESENT SYSTEM:
The following are descriptions of illustrative embodiments that when taken in conjunction with the following drawings will demonstrate the above noted features and advantages, as well as further ones. In the following description, for purposes of explanation rather than limitation, illustrative details are set forth such as architecture, interfaces, techniques, etc. However, it will be apparent to those of ordinary skill in the art that other embodiments that depart from these details would still be understood to be within the scope of the appended claims. Moreover, for the purpose of clarity, detailed descriptions of well known devices, circuits, techniques and methods are omitted so as not to obscure the description of the present system. It should be expressly understood that the drawings are included for illustrative purposes and do not represent the scope of the present system.
The method, architecture, and system described herein address problems in prior art systems. In accordance with an embodiment of the present system, method, and architecture, two or more processing layers are allocated to each server system. FIG. 1 shows a logical and physical configuration of a system architecture 100 in accordance with an embodiment of the present system. Triangles in the figure represent an application running in a computing layer L1. Storage devices depicted represent data management (e.g., a database) for the application running in a data layer L2. As shown in FIG. 1, logically, the application and data layers L1, L2 are separate, such that processing occurs through use of the data layer L2 for data manipulation by the processing layer L1. However, unlike prior systems, the physical implementation is such that servers 130, 140 each include a portion of the logical processing and data layers L1, L2. It should be noted that although only two servers are illustratively shown in FIG. 1, as may be readily appreciated, 20, 40 or more servers (N) may be similarly configured in accordance with the present system. Since each of the server systems provides each of the processing and data layers, processing operates in parallel as opposed to the serial processing of prior systems. In accordance with a two-server embodiment as shown in FIG. 1, should a single server fail, only one-half of the processing capability is lost, as opposed to the complete loss of processing capability in systems that physically separate layers between servers, since the present system utilizes parallel processing paths 150 to manage data flow through the layers. As may be readily appreciated, the splitting of application processing and data processing needs may be allocated across servers based on criteria such as other individual processing demands (e.g., from other applications), capabilities and/or resources. The parallel processing paths 150 may operate utilizing a software pipeline architecture to define a "logical execution facility for executing discrete tasks of a business process in an ordered execution manner". With application thread profiling and the application of correct software pipelining techniques, the data throughput of the system may be increased. This methodology may be well suited for high-volume stream transactions and mixed workloads in business application processing. Within a business application operating on a server in accordance with the present system, pipelines may be used to group transactions or business logic while preserving an order and priority of manipulation at the same time.
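As a minimal illustrative sketch of the software pipeline notion just described (the class and stage names here are hypothetical, not taken from the patent), the following Python fragment groups transactions while preserving order and priority, executing the discrete stages of a business process in sequence:

    import heapq

    class SoftwarePipeline:
        """Ordered execution of discrete tasks of a business process.

        Queued transactions drain by (priority, arrival order), so
        business logic is grouped while order and priority are both
        preserved, as described above.
        """
        def __init__(self, stages):
            self.stages = stages   # ordered list of processing callables
            self.queue = []        # heap of (priority, seq, payload)
            self.seq = 0

        def submit(self, payload, priority=0):
            # Lower priority number drains first; seq breaks ties by arrival.
            heapq.heappush(self.queue, (priority, self.seq, payload))
            self.seq += 1

        def drain(self):
            results = []
            while self.queue:
                _, _, data = heapq.heappop(self.queue)
                for stage in self.stages:  # discrete tasks, in order
                    data = stage(data)
                results.append(data)
            return results

    # Usage: two stages applied in order to prioritized transactions.
    pipe = SoftwarePipeline([lambda d: {**d, "validated": True},
                             lambda d: {**d, "recorded": True}])
    pipe.submit({"txn": "B"}, priority=5)
    pipe.submit({"txn": "A"}, priority=1)  # more urgent, drains first
    print(pipe.drain())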
As may be readily appreciated, the addition of more server systems further reduces the impact of any single server system failure. For example, in a system with 20 server systems, the loss of one server system results in a negligible five-percent loss of processing (data manipulation) capability. Thereafter, spare server systems may be brought on-line with a negligible reduction in processing capability, even during the process of bringing the spare server system on-line.
The resultant loss of capability due to a loss of a server system may be readily calculated as the number of faulty server systems divided by the total number of server systems (including faulty servers).
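As a worked sketch of this fractional-loss calculation (the function name is a hypothetical illustration), the figures from the example above and from FIG. 5 later in the description fall out directly:

    def capacity_loss(faulty: int, total: int) -> float:
        """Fraction of processing capability lost when `faulty` of
        `total` mixed-mode server systems fail; faulty servers are
        included in the total, per the text."""
        if total <= 0 or not 0 <= faulty <= total:
            raise ValueError("need total > 0 and 0 <= faulty <= total")
        return faulty / total

    print(capacity_loss(1, 20))  # 0.05 -> the five-percent example above
    print(capacity_loss(1, 4))   # 0.25 -> the FIG. 5 chassis example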
In accordance with an embodiment, the servers may be implemented as blade-based server devices. The system may be based on other server systems, such as a rack-based system; however, the blade-based system provides advantages over the rack-based system. For example, a blade-based system in accordance with the present system consumes less electricity than a rack-based embodiment of the present system. Further, less cooling is required for a blade-based system since the blade-based system consumes less power, and therefore generates less heat. Further, the blade-based system is more scalable in that the system may be scaled up by upgrading blade devices in each server system without a limitation as to a rack's capacity. Blade-based systems also utilize more commodity hardware than rack-based systems and therefore, the initial purchases and subsequent upgrades and repairs are less costly. Further, in a blade-based system, such as in a Grid computing configuration, the cost of interconnecting devices is much lower than in a comparable rack-based system. Interconnections in a rack-based system, such as SAN switch ports, Fibre Channel Host Bus Adapters (HBAs), Ethernet cables, Ethernet switch ports, Fibre Channel cables, etc., while speeding communications between the systems, significantly increase the capital expenditures necessary to initiate the system and/or add portions thereto. Additionally, the blade-based server system has the advantage of occupying a smaller footprint than the rack-based system, and thereby the cost associated with required datacenter space is reduced by the blade-based system. As shown in FIG. 1, while the logical configuration of the system architecture 100 is similar to prior systems, the physical configuration is quite different in that each server contains processing and data layers. A middleware layer operates to distribute processes through the servers 130, 140 to coordinate those processes and produce the logical configuration based on the physical configuration shown.
Implementing the system architecture 100 as a Grid computing system enables a distribution of processes over widely dispersed geographic areas, if desired, although even as a Grid computing system, geographic dispersion is not required, in that all elements of the system may just as readily be physically located in the same room or building. However, utilizing a layered Grid computing approach provides flexibility in adding additional resources, and since, logically, each process is aware of the multiple portions of the Grid computing system, the physical location of resources is immaterial as long as all portions have an ability to communicate with all other portions of the Grid computing system.
FIG. 2 shows a logical and physical configuration of a system architecture in accordance with an embodiment of the present system. Notation in FIG. 2 is similar as the notation utilized in FIG. 1 with an addition of a star-indication to signify a second application process running in a logical processing layer L3. As indicated, a physical configuration of the system architecture 200 includes each of logical layers Ll, L2 , L3 being physically apportioned in each of servers 230, 240, 270. As may be readily appreciated, the architecture depicted in FIG. 2 in accordance with the present system may be expanded to include any number of desired processing and data layers. The servers may also be expanded out to provide an N-Tier server system and as such, server 270 is designated as "SERVER N" . The servers 230 , 240, 270, in accordance with the present system, may be based on blade centers. In one embodiment, each of blade centers may be housed in a single-rack system, however, unlike prior systems, the logical layers Ll, L2 , L3 are distributed in each of the blade centers. In accordance with an embodiment of the present system, the servers 230, 240, 270 may operate as a grid computing network. As shown, each of the servers 230, 240, 270 are organized by suitable middleware to communicate along parallel paths 250 such that each of the servers 230, 240, 270 (e.g., blade centers) have a capability to perform each of the processing and data tasks. By operating the servers in a mixed-mode (e.g., each server performs processing and data task) in accordance with the present system, logically, the system operates as prior systems, yet the present system ensures a reduced impact from faulty servers as compared to prior systems due to the change in the physical configuration.
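To make the mixed-mode physical configuration concrete, the sketch below (server and layer names are hypothetical stand-ins for servers 230, 240, 270 and layers L1-L3) apportions a slice of every logical layer onto every server, so that losing one server removes only its slices rather than an entire layer:

    LAYERS = ["L1_app", "L2_data", "L3_app"]
    SERVERS = ["server_230", "server_240", "server_270"]

    def mixed_mode_placement(servers, layers):
        """Give every server a portion of every logical layer, so no
        layer depends on a single physical server."""
        return {s: {layer: f"{layer}@{s}" for layer in layers}
                for s in servers}

    placement = mixed_mode_placement(SERVERS, LAYERS)
    # Simulate losing server_240: every layer still survives elsewhere.
    survivors = {s: p for s, p in placement.items() if s != "server_240"}
    assert all(len(p) == len(LAYERS) for p in survivors.values())
    print(survivors)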
In accordance with an embodiment of the present system, the architecture 100, 200 of the present system may be applied to perform a data center task for large, processing-intensive data applications, such as creating call data records (CDRs) for a telephony service provider. In today's market, telephony service providers may need to create CDRs for multiple types of telephony services, such as fixed line services, mobile services, voice over Internet Protocol (VoIP) telephony services and Internet services. A problem exists in that each of the different services provides telephony data and other data in different formats, and therefore requires different processing, yet the telephony service provider desires to save CDRs in a single format. In accordance with the present system, each of the computing tasks to create CDRs from the different telephony sources may be distributed to each of the servers 230, 240, 270, such that each of fixed, mobile, VoIP and Internet is processed in each of the servers 230, 240, 270, including computing and data tasks. Naturally, the present system may be utilized to condense other types of raw data into manageable, similarly structured data records to facilitate storage and data mining of the data records. By running the applications in parallel across each of the servers 230, 240, 270 through a middleware application, the parallel data path ensures that a single server failure does not halt any of the applications. Parallel processing paths in accordance with the present system may operate utilizing a software pipeline architecture similar to that shown in FIG. 1. With application thread profiling and the application of correct software pipelining techniques, the data throughput of the system may be increased. This methodology may be well suited for high-volume stream transactions. Within a business application operating on a server in accordance with the present system, pipelines may be used to group transactions or business logic while preserving an order and priority of manipulation at the same time.
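A hedged sketch of the CDR-normalization step just described (the field names and record layouts are invented for illustration; the patent does not specify them): each service's raw record is condensed into one common CDR structure before storage and mining:

    def normalize_fixed(raw):
        # Fixed-line switch record; hypothetical field names.
        return {"source": "fixed", "caller": raw["a_number"],
                "callee": raw["b_number"], "seconds": raw["duration_s"]}

    def normalize_voip(raw):
        # VoIP session record; hypothetical field names.
        return {"source": "voip", "caller": raw["from_uri"],
                "callee": raw["to_uri"],
                "seconds": raw["session_ms"] // 1000}

    NORMALIZERS = {"fixed": normalize_fixed, "voip": normalize_voip}

    def to_cdr(source, raw):
        """Condense a raw record from any telephony service into the
        provider's single CDR format."""
        return NORMALIZERS[source](raw)

    print(to_cdr("voip", {"from_uri": "sip:alice", "to_uri": "sip:bob",
                          "session_ms": 63000}))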
FIG. 3 shows a portion of a server system 300, such as a blade-based server system, in accordance with the present system. The system includes, for example, blade centers 330, 340, 370 which, in operation, are configured similarly to the systems 100, 200 in that each of the blade centers 330, 340, 370 operates in a mixed mode wherein processing and data requirements are distributed across each of the blade centers 330, 340, 370. As shown, each of the blade centers 330, 340, 370 is formed from blade servers. For example, the blade center 330 is formed from blade servers 330A, 330B, 330C, although the illustrative number of blade servers that make up a blade center may be adjusted as desired. As may be readily appreciated, by running one or more applications across each of the blade centers 330, 340, 370, the system is capable of continuing to process the application even in the case of a blade center failure. In accordance with an embodiment of the present system, the blade centers 330, 340, 370 may further operate as a Grid (distributed) computing system. A Grid computing system in accordance with an embodiment may operate wherein each blade center is functionally interconnected through use of a core fabric 380, which performs a middleware function of distributing computing tasks across each of the blade centers 330, 340, 370. In a Grid computing configuration, the core fabric may distribute the computing tasks to blade centers 330, 340, 370 that are geographically close (e.g., within a same rack enclosure) or may distribute the computing tasks to blade centers 330, 340, 370 that are geographically dispersed (e.g., wherein one or more of the blade centers 330, 340, 370 are positioned in a different location than one or more others of the blade centers 330, 340, 370).
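As an illustrative stand-in for the core fabric 380 (the patent names no scheduling protocol; the round-robin policy here is an assumption), the sketch shows the middleware role: tasks go to any reachable blade center, whether co-located or geographically dispersed:

    import itertools

    class CoreFabric:
        """Distributes computing tasks across blade centers; location
        is immaterial as long as a center is reachable."""
        def __init__(self, blade_centers):
            self.centers = list(blade_centers)
            self._next = itertools.cycle(self.centers)

        def dispatch(self, task):
            # Simple round-robin as a placeholder scheduling policy.
            center = next(self._next)
            return center, task

    fabric = CoreFabric(["bc_330@rack1", "bc_340@rack1", "bc_370@remote"])
    for t in ["cdr_batch_1", "cdr_batch_2", "cdr_batch_3"]:
        print(fabric.dispatch(t))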
The architecture of the present system is well suited to support applications involving large volumes of data manipulation, such as may be involved in supporting a telephony service provider's creation of call data records (CDRs) from raw calling information provided from telephony services such as fixed land-line services, mobile telephony services (e.g., cellular based) and voice over Internet Protocol (VoIP) telephony services. As would be readily apparent to a person of ordinary skill in the art, the present architecture is well suited to hosting any application that may be hosted by a service-based processing center.
The present architecture, system and method provide an ability to host large applications that are required (or desired) to run on high capacity servers. In accordance with the present system, the possibilities to scale up (e.g., upgrade processors) or scale out (add servers) are unlimited. As provided by the mixing of applications and related data processing across each of the servers, any failure of a server results in only a loss of the processing capabilities of that server. Due to the parallel processing across all servers in accordance with an embodiment of the present system, processing of an application does not collapse (e.g., halt) due to the failure of one or more of the servers, as the processing will continue across the servers that are not affected by the failure. Since each of the servers is responsible for application processing and data processing, unlike prior systems, when a server fails, no transfer of an application to a sleeping CPU is required to maintain processing of the application. Accordingly, no delay in continued processing is introduced by a failure, since processing continues on the servers that are unaffected, albeit at a lower processing capability due to the loss of the faulty server. Naturally, a sleeping server may be brought online to make up for the faulty server; however, this process may be achieved while the unaffected servers (e.g., those unaffected by the faulty server) continue processing of the application.
By distributing servers over a Grid computing system, the servers may be geographically dispersed, and a core fabric in accordance with the present system will operate to distribute processing across each of the distributed servers the same as if each of the servers were positioned in a single enclosure. Even in a Grid computing system in accordance with the present system, geographic dispersion is not required, but it may add flexibility in adding servers should a need arise. In an architecture in accordance with the present system which utilizes blade centers as the server portions, by mixing application processing and data processing into a same blade center (e.g., an interlacing on the physical implementation, which is transparent to the logical layer, e.g., the grid middleware), instead of having a processing grid and a data grid operating on different blade centers, two or more identical grids or blade centers may be provided, with each corresponding to a mixed processing and data blade center. In this embodiment, if a blade center fails, there is still a second blade center (which is naturally also interlaced) that functions. In a system having only two blade centers, the present system is only reduced to 50% capacity, not a 100% loss as in the prior solutions. Naturally, if the system includes more than two blade centers and only one is faulty, the loss in capacity due to a faulty blade center is even less significant. In this way, compliance with service level agreements (SLAs), for example those that require a given quality of service (QoS), may be more easily met, since a faulty server does not halt application and data processing across the system. The present system provides an architecture that may be scaled up by readily upgrading or adding computing, storage, etc., facilities without the halting of application and data processing seen in prior systems. In accordance with an embodiment of the present system, application and data processing may continue even during an upgrading of one or more of the server systems. The present system may also be scaled out by adding server systems under coordination of the middleware process, which merely distributes the application and data processing across the further server systems added by the upgrading. By scaling out in this illustrative embodiment, the computing power may be numerically expanded by adding participating nodes, while having the added benefit of minimizing the effect of a loss of any given node.
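A minimal sketch of scaling out without halting, under the assumption (not stated in the patent) that the middleware simply reassigns work over whichever servers are currently participating; adding or losing a node only changes the divisor in the assignment:

    def redistribute(tasks, servers):
        """Spread tasks over the currently healthy servers; nothing
        halts when a server is added or lost, only the assignment
        changes."""
        if not servers:
            raise RuntimeError("no servers remain")
        return {s: [t for i, t in enumerate(tasks)
                    if i % len(servers) == j]
                for j, s in enumerate(servers)}

    tasks = [f"task_{i}" for i in range(8)]
    print(redistribute(tasks, ["s1", "s2"]))        # before scale-out
    print(redistribute(tasks, ["s1", "s2", "s3"]))  # after: no halt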
In accordance with an embodiment, by providing geographically dispersed resources in a Grid computing system that work in aggregation and coherence, improved performance can be delivered over prior systems to provide a higher quality of service, better utilization and easier access to data. In accordance with this embodiment, resources grouped together can act as a collaborative virtual organization sharing application and data processing in an open heterogeneous environment. In accordance with the present system, the Grid computing system may be comprised of computing resources (e.g., blade centers) that are locally consolidated or distributed in an open environment that is self-managing through operation of the system middleware.
The present system provides an agile infrastructure capable of adapting to increased demands over time by providing a hardware platform that may include high-density multi-core units (servers, etc.), and, in an embodiment, software that is thread-profiled and performance-tuned for operation. The present system provides an architecture that may be scaled substantially linearly with the number of cores (servers or server centers).
Further variations of the present system would readily occur to a person of ordinary skill in the art and are encompassed by the following claims. Through operation of the present system, a server architecture, method and system is provided that mixes application and data processing across two or more server systems to help ensure that, even in the face of a failure in one or more of the server systems, application and data processing may continue on the server systems that are unaffected by the faulty server. A middleware application in accordance with an embodiment of the present system provides for parallel application and data processing across each of two or more server systems (e.g., blade centers) such that any failure of less than all of the server systems does not halt the application and data processing. Through use of blade centers organized in accordance with the present system, commodity hardware may be utilized that provides a compact solution, requiring less power and cooling to provide a lower total cost of ownership (TCO), yet support is provided for a data center's needs both presently and in the future, even in the face of increased future demands on the present system.
FIG. 4 shows a logical and physical configuration of a system architecture organized as object- or application-level grids in accordance with an embodiment of the present system. The figure includes a star-indication, as for example similarly shown in FIG. 2, to signify an application process, such as a Hyperion call datacenter processing application as utilized by France Telecom, running in a logical processing layer. Triangles in the figure represent a different application running in a computing layer, as do squares in the figure.
FIG. 5 shows a physical implementation of a blade center architecture running illustrative applications and processes in accordance with an embodiment of the present system. In the illustrative embodiment shown in FIG. 5, in an event that one blade center chassis fails, only 25% of computing power is lost. The system still has high availability. By scaling out to a real production scenario with N blade centers in a datacenter, losing one chassis will cause a loss of only 100/N % of computing power. In fact, the larger N is, the better the availability of system resources, even in an event of a blade center failure. In accordance with the present system, middleware ensures that an embodiment of the present system may scale out without substantial additional overhead beyond a prior scaled embodiment. Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. In addition, the section headings included herein are intended to facilitate a review but are not intended to limit the scope of the present system. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims. In interpreting the appended claims, it should be understood that: a) the word "comprising" does not exclude the presence of other elements or acts than those listed in a given claim; b) the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements; c) any reference signs in the claims do not limit their scope; d) several "means" may be represented by the same item or hardware- or software-implemented structure or function; e) any of the disclosed elements may be comprised of hardware portions (e.g., including discrete and integrated electronic circuitry), software portions (e.g., computer programming), and any combination thereof; f) hardware portions may be comprised of one or both of analog and digital portions; g) any of the disclosed devices or portions thereof may be combined together or separated into further portions unless specifically stated otherwise; h) no specific sequence of acts or steps is intended to be required unless specifically indicated; and i) the term "plurality of" an element includes two or more of the claimed element, and does not imply any particular range of number of elements; that is, a plurality of elements may be as few as two elements, and may include an immeasurable number of elements.

Claims

What is claimed is:
1. A method of supporting a server-based operation including a related application and data processing operation, the method comprising acts of: providing a plurality of servers; and providing a middleware application that disperses the related application and data processing operation in parallel across each of the plurality of servers such that each of the plurality of servers supports both of the application and data processing operations.
2. The method of claim 1, wherein the servers are blade servers with each organized as a blade center.
3. The method of claim 2, wherein the blade centers are organized as a Grid computing system.
4. The method of claim 3, wherein the Grid computing system is geographically dispersed.
5. The method of claim 1, wherein the related application and data processing operation is a telephony application for creating call data records from at least one of a fixed line telephony service, a mobile telephony service and a voice over Internet Protocol telephony service.
6. The method of claim 1, comprising an act of scaling up the plurality of servers by increasing at least one of computing and memory capabilities of at least one of the plurality of servers without halting processing of the related application and data processing operation by another one of the plurality of servers.
7. The method of claim 1, comprising an act of scaling out the plurality of servers by adding an additional server to the plurality of servers without halting processing of the related application and data processing operation by the plurality of servers.
8. An application embodied on a computer readable medium arranged to operate as a middleware application supporting a server-based operation including a related application and data processing operation, the application comprising: a portion configured to disperse the related application and data processing operation in parallel across each of a plurality of servers such that each of the plurality of servers supports both of the application and data processing operations; and a portion configured to store results of the related application and data processing operation in a storage medium.
9. The application of claim 8, wherein the middleware application is configured to support each of the plurality of servers as a blade center.
10. The application of claim 9, wherein the middleware application is configured to organize the blade centers as a Grid computing system.
11. The application of claim 10, wherein the Grid computing system is geographically dispersed.
12. The application of claim 8, wherein the related application and data processing operation is a telephony application for creating call data records from at least one of a fixed line telephony service, a mobile telephony service, a voice over Internet Protocol telephony service and an Internet service.
13. The application of claim 8, wherein the middleware application is configured to support scaling up the plurality of servers by increasing at least one of computing and memory capabilities of at least one of the plurality of servers without halting processing of the related application and data processing operation by another one of the plurality of servers.
14. The application of claim 8, wherein the middleware application is configured to support scaling out the plurality of servers by supporting adding an additional server to the plurality of servers without halting processing of the related application and data processing operation by the plurality of servers.
15. An application running on a server system, the server system comprising a plurality of servers, with each of the plurality of servers including application processing and data processing portions, wherein each of the plurality of servers is configured to process different application and data processing portions of the application in parallel across each of the plurality of servers such that each of the plurality of servers supports both of the application and data processing operations of the application.
16. The application running on the server system of claim 15, wherein each of the plurality of servers is a blade server organized as a blade center.
17. The application running on the server system of claim 16, wherein the blade centers are organized as a Grid computing system.
18. The application running on the server system of claim 17, wherein the Grid computing system is geographically dispersed.
19. The application running on the server system of claim 15, wherein the application and data processing operations are telephony operations for creating call data records from at least one of a fixed line telephony service, a mobile telephony service, a voice over Internet Protocol telephony service and Internet services.
20. The application running on the server system of claim 15, wherein the plurality of servers are configured to support scaling up by receiving an increase in at least one of computing and memory capabilities of at least one of the plurality of servers without halting processing of the related application and data processing operation by another one of the plurality of servers.
21. The application running on the server system of claim 15, wherein the plurality of servers are configured to support scaling out by receiving an additional server to the plurality of servers without halting processing of the related application and data processing operation by the plurality of servers.
22. A server system configured to support a server-based operation including a related application and data processing operation, the server system comprising a plurality of servers, with each of the plurality of servers including application processing and data processing portions, wherein each of the plurality of servers is configured to process different application and data processing portions of the application in parallel across each of the plurality of servers such that each of the plurality of servers supports both of the related application and data processing operations.
23. The server system of claim 22, wherein each of the plurality of servers is a blade server organized as a blade center.
24. The server system of claim 23, wherein the blade centers are organized as a Grid computing system.
25. The server system of claim 24, wherein the Grid computing system is geographically dispersed.
26. The server system of claim 22, wherein the related application and data processing operation is a telephony operation for creating call data records from at least one of a fixed line telephony service, a mobile telephony service, a voice over Internet Protocol telephony service and an Internet service.
27. The server system of claim 22, wherein the plurality of servers are configured to support scaling up by receiving an increase in at least one of computing and memory capabilities of at least one of the plurality of servers without halting processing of the related application and data processing operation by another one of the plurality of servers.
28. The server system of claim 22, wherein the plurality of servers are configured to support scaling out by receiving an additional server to the plurality of servers without halting processing of the related application and data processing operation by the plurality of servers.
PCT/IB2008/055680 2007-12-31 2008-12-17 Scalable server architecture WO2009087548A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1829507P 2007-12-31 2007-12-31
US61/018,295 2007-12-31

Publications (2)

Publication Number Publication Date
WO2009087548A2 true WO2009087548A2 (en) 2009-07-16
WO2009087548A3 WO2009087548A3 (en) 2009-09-11

Family ID: 40723173

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/055680 WO2009087548A2 (en) 2007-12-31 2008-12-17 Scalable server architecture

Country Status (1)

Country Link
WO (1) WO2009087548A2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002101573A2 (en) * 2001-06-13 2002-12-19 Intel Corporation Modular server architecture

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"hp blade server data sheet" INTERNET CITATION November 2001 (2001-11), XP002280196 Retrieved from the Internet: URL:http://www.cypress-systems.com/PDFs/pdf_35.pdf> [retrieved on 2004-05-13] *
Abdelsalam A. Helal et al.: "Replication techniques in distributed systems", Springer, 1996, XP002532698, pages xiii-xvi, the whole document *
Anonymous: "Improving Research Productivity with Blade Servers and Grid Computing", Sun Microsystems website, 30 March 2004 (2004-03-30), XP002532697. Retrieved from the Internet: http://www.sun.com/servers/entry/solutions/docs/grid_computing.pdf [retrieved on 2009-06-18]; and "Sun Search", Sun Microsystems website, XP002532700. Retrieved from the Internet: http://search.sun.com/main/index.jsp?qt=%22Improving+Research+Productivity+with+Blade+Servers+and+Grid+Computing%22&charset=UTF-8&reslang=en&col=main-all [retrieved on 2009-06-18] *

Also Published As

Publication number Publication date
WO2009087548A3 (en) 2009-09-11


Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08869696; Country of ref document: EP; Kind code of ref document: A2)

122 Ep: pct application non-entry in european phase (Ref document number: 08869696; Country of ref document: EP; Kind code of ref document: A2)
