News Column

Patent Application Titled "Distributed Computing Architecture" Published Online

August 21, 2014



By a News Reporter-Staff News Editor at Computer Weekly News -- According to news reporting originating from Washington, D.C., by VerticalNews journalists, a patent application by the inventors Tamas, Alexis (Paris, FR); Bellier, Gregory (Sevres, FR), filed on January 29, 2013, was made available online on August 7, 2014.

The assignee for this patent application is STG Interactive S.A.

Reporters obtained the following quote from the background information supplied by the inventors: "The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

"Computer systems have finite limits in terms of both storage capacity and processing capacity. When either or both of these capacities are reached, performance of the computer system suffers. To prevent or mitigate loss of performance, additional computing hardware may be added to increase the processing and/or storage capacities. This process is called scaling, and different types of workloads present different scaling challenges.

"One approach to scaling is to parallelize computing processes among multiple computer systems, which then interact via a message passing interface (MPI). The MPI may allow parallel computing systems to coordinate processing to avoid conflicts between changes made by one system and changes made by another system. MPI has been implemented in a number of languages, including C, C++, and Fortran. The separate computing systems may be in separate physical enclosures and/or may be multiple processors within a single computer chassis or even multiple cores within a single processor. MPI may allow for high performance on massively parallel shared-memory machines and on clusters of heterogeneous distributed memory computers.

"Another scaling approach uses distributed storage for structured query language (SQL) databases. However, transactional operations in distributed SQL databases are generally slowed because of the need to keep separate computers synchronized. Even when fast networks, such as InfiniBand, are used, synchronization may impose limits on performance and scalability. Further, an additional limitation of the SQL database approach is that often data processing is not executed on the SQL database server but on another server. This increases latency because of the transportation of data from the SQL server to the computing server and back again to the SQL server.

"Parallel processing is beneficial for large data sets, portions of which can be spread across different nodes and processed independently. However, transactional processing, where some or all transactions may depend on one or more previous transactions, is not as easily parallelized. For example, computers may synchronize access to a portion of data by locking the portion of data before processing of the transaction begins and unlocking the data upon successful completion of the transaction. While the data is locked, other computers cannot change the data, and in some instances cannot even read the data. As a result of the locking/unlocking process, there may be significant latency as well as significant variations in latency."

In addition to obtaining background information on this patent application, VerticalNews editors also obtained the inventors' summary information for this patent application: "A system includes a plurality of servers that each include a processor and memory, and a border server that includes a processor and memory. The border server (i) receives a request for a transaction, which can be accomplished by performing a plurality of tasks, (ii) identifies a first task of the plurality of tasks, (iii) identifies an initial server of the plurality of servers to perform the first task by consulting, based on a type of the first task, routing data stored in the memory of the border server, and (iv) requests that the initial server perform the first task. Each server of the plurality of servers is configured to, in response to receiving a request for a task from the border server, (i) perform the received task using data related to the received task that is stored exclusively on the server, (ii) determine whether the received task requires an additional task, (iii) identify a next server of the plurality of servers to perform the additional task by consulting, based on a type of the additional task, routing data stored in the memory of the server, (iv) request that the next server perform the additional task, and (v) in response to receiving a completion indication from the next server, respond to the border server with a completion indication corresponding to the received task.
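
The task-routing flow just quoted can be sketched as follows. The task types, server names, and routing table are assumptions made for illustration, and in-process method calls stand in for the network requests a real deployment would use; this is not the application's implementation.

    # Sketch of the routing flow: the border server consults routing data
    # by task type, the chosen server performs the task on data stored
    # exclusively there, and any follow-up task is routed onward.

    ROUTING_DATA = {            # task type -> server that owns the data
        "debit":  "server_a",
        "credit": "server_b",
    }

    class NodeServer:
        def __init__(self, name, data):
            self.name = name
            self.data = data                          # stored exclusively here

        def perform(self, task, servers):
            self.data[task["key"]] += task["delta"]   # do the work locally
            follow_up = task.get("then")              # does it need another task?
            if follow_up:
                nxt = servers[ROUTING_DATA[follow_up["type"]]]
                nxt.perform(follow_up, servers)       # request the next server
            return "complete"                         # completion indication

    class BorderServer:
        def handle_transaction(self, task, servers):
            initial = servers[ROUTING_DATA[task["type"]]]  # consult routing data
            return initial.perform(task, servers)

    servers = {
        "server_a": NodeServer("server_a", {"alice": 100}),
        "server_b": NodeServer("server_b", {"bob": 50}),
    }
    txn = {"type": "debit", "key": "alice", "delta": -20,
           "then": {"type": "credit", "key": "bob", "delta": 20}}
    print(BorderServer().handle_transaction(txn, servers))  # -> complete
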

"In other features, the system includes a networking device configured to interconnect the plurality of servers with each other and with the border server. The border server (i) includes a first network port in communication with the networking device and a second network port not in communication with the networking device and (ii) receives the request for the transaction over the second network port. The border server is configured to receive the transaction request over the Internet through a web service.

"In further features, the border server is configured to receive the transaction request from a web server. The web server serves a web page to a user and prepares the transaction request based on input provided by the user via the web page. The networking device includes an InfiniBand switch. The initial server and the next server are configured such that data is selectively transferred from the memory of the initial server to the memory of the next server via the networking device without involvement of the processor of the initial server or the processor of the next server. The plurality of servers each implement remote direct memory access (RDMA).

"In other features, the system includes a mirror server that stores a copy of data stored by a designated server of the plurality of servers. The mirror server is configured to execute tasks in place of the designated server in response to a failure of the designated server. The plurality of servers collectively performs a plurality of types of tasks. The initial server is configured to perform a first type of task of the plurality of types of tasks, and the next server is configured to perform a second type of task of the plurality of types of tasks.

"In further features, a first server of the plurality of servers is configured to perform the second type of task, a second server of the plurality of servers is also configured to perform the second type of task, the first server stores a first set of data related to the second type of task, the second server stores a second set of data related to the second type of task, and the first set of data is mutually exclusive with the second set of data. The routing data specifies the first server as the next server in response to the additional task corresponding to the first set of data. The routing data specifies the second server as the next server in response to the additional task corresponding to the second set of data.

"In other features, the first server and the second server are configured to dynamically move data from the first set of data into the second set of data in response to over-utilization of the first server. In response to over-utilization of the initial server, (i) a first server of the plurality of servers is dynamically configured to also perform the first type of task, (ii) the data related to the received task stored by the initial server is split into a first set of data and a second set of data, (iii) the first set of data is mutually exclusive with the second set of data, and (iv) the second set of data is moved to the first server.

"A system includes a plurality of servers each including a processor and memory. A first server of the plurality of servers is configured to (i) receive a request for a transaction, wherein the transaction can be accomplished by performing a plurality of tasks, (ii) select a first task of the plurality of tasks, (iii) identify a second server of the plurality of servers to perform the first task by consulting, based on a type of the first task, routing data stored in the memory of the first server, and (iv) request that the second server perform the first task. The second server is configured to, in response to receiving the request for the first task, (i) perform the first task using data stored exclusively on the second server, and (ii) determine whether the first task requires an additional task. The second server is configured to, in response to the first task requiring an additional task, (i) identify a third server of the plurality of servers to perform the additional task by consulting, based on a type of the additional task, routing data stored in the memory of the second server, (ii) request that the third server perform the additional task, and (iii) in response to receiving a completion indication from the third server, respond to the first server with a completion indication corresponding to the first task.

"A system includes a border server and a plurality of servers, the border server includes a processor and memory, and each of the plurality of servers includes a processor and memory. A method of controlling the system includes, at the border server, receiving a request for a transaction. The transaction can be accomplished by performing a plurality of tasks. The method includes, at the border server, identifying a first task of the plurality of tasks. The method includes, at the border server, identifying an initial server of the plurality of servers to perform the first task by consulting, based on a type of the first task, routing data stored in the memory of the border server. The method includes, at the border server, requesting that the initial server perform the first task. The method includes, at a server of the plurality of servers, in response to receiving a request for a task from the border server, (i) performing the received task using data related to the received task that is stored exclusively on the server, (ii) determining whether the received task requires an additional task, (iii) identifying a next server of the plurality of servers to perform the additional task by consulting, based on a type of the additional task, routing data stored in the memory of the server, (iv) requesting that the next server perform the additional task, and (v) in response to receiving a completion indication from the next server, responding to the border server with a completion indication corresponding to the received task.

"In other features, the method further includes receiving the transaction request, at the border server, over the Internet using a web service. The method further includes receiving the transaction request, at the border server, from a web server. The method further includes, at the web server, serving a web page to a user and preparing the transaction request based on input provided by the user via the web page. The method further includes selectively transferring data from the memory of the initial server to the memory of the next server without involvement of the processor of the initial server or the processor of the next server. The method further includes implementing remote direct memory access (RDMA) at each of the plurality of servers.

"In further features, the method further includes, at a mirror server, storing a copy of data stored by a designated server of the plurality of servers, and at the mirror server, executing tasks in place of the designated server in response to a failure of the designated server. The method further includes collectively performing a plurality of types of tasks using the plurality of servers. The method further includes at, the initial server, performing a first type of task of the plurality of types of tasks, and at the next server, performing a second type of task of the plurality of types of tasks.

"In other features, the method further includes, at a first server of the plurality of servers, performing the second type of task; at the first server, storing a first set of data related to the second type of task; at a second server of the plurality of servers, performing the second type of task; and at the second server, storing a second set of data related to the second type of task. The first set of data is mutually exclusive with the second set of data.

"In further features, the routing data specifies the first server as the next server in response to the additional task corresponding to the first set of data. The routing data specifies the second server as the next server in response to the additional task corresponding to the second set of data. The method further includes dynamically moving data from the first set of data into the second set of data in response to over-utilization of the first server. The method further includes, at a first server of the plurality of servers, performing the first type of task, and, in response to over-utilization of the initial server, (i) splitting the data related to the received task into a first set of data and a second set of data, and (ii) moving the second set of data to the first server. The first set of data is mutually exclusive with the second set of data.

"Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

"FIGS. 1A and 1B are high level functional block diagrams of example implementations of the principles of the present disclosure.

"FIG. 2A is a high level functional block diagram of tasks assigned to servers.

"FIG. 2B is a flowchart describing operation of the blocks of FIG. 2A.

"FIG. 2C is a high level functional block diagram of tasks assigned to specific servers based on an example transaction.

"FIG. 2D is a flowchart depicting operation of the servers of FIG. 2C in processing the example transaction.

"FIG. 3 is a simplified block diagram of an example implementation of one of the servers.

"FIG. 4 is a functional block diagram of an example implementation of one of the servers.

"FIG. 5 is a flowchart depicting example operation of web server functionality.

"FIG. 6A is a high level flowchart of example rebalancing operation.

"FIG. 6B is a flowchart depicting an example method of determining when rebalancing is to be performed.

"FIG. 6C is a flowchart of example rebalancing operation.

"FIG. 6D is a high level flowchart depicting task assignment before and after an example rebalancing.

"FIG. 7 is a flowchart of example operation of a border server.

"FIGS. 8A and 8B together are a flowchart of example operation of one of the node servers.

"In the drawings, reference numbers may be reused to identify similar and/or identical elements."

For more information, see this patent application: Tamas, Alexis; Bellier, Gregory. Distributed Computing Architecture. Filed January 29, 2013 and posted August 7, 2014. Patent URL: http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=677&p=14&f=G&l=50&d=PG01&S1=20140731.PD.&OS=PD/20140731&RS=PD/20140731

Keywords for this news article include: Software, Web Server, STG Interactive S.A.

Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC





Source: Computer Weekly News

