The assignee for this patent application is
Reporters obtained the following quote from the background information supplied by the inventors: "The present invention relates to a database management technique which can be applied widely to a database management system (DBMS).
"A technique such as that disclosed in JP-A-2007-249468 has conventionally been employed in a database management system to dynamically allocate CPU resources when executing a database processing request.
"As the need for large-scale analysis has grown in recent years, a technique such as that disclosed in JP-A-2007-34414 is employed, in which the required high-speed processing is achieved by parallel processing that uses many resources, including processes and threads, within the database management system.
"Further, for data operations in the database management system such as index creation, data insertion, and data update, techniques that increase processing speed by processing a single request from a user or an application program in parallel have also become popular."
In addition to obtaining background information on this patent application, VerticalNews editors also obtained the inventor's summary information for this patent application: "Such processing requires the use of many resources, including processes and threads. However, there are states in which allocating even more resources cannot increase performance, being limited, for example, by the input/output performance of a storage device. In such a state, allocating additional processes or threads only increases the cost of switching between them, and in some cases increasing the resource count beyond a certain value actually degrades performance.
"A system such as an operating system generally has an upper limit on the number of resources usable by the entire system. In some cases, when a plurality of processing requests are executed and the request executed first uses resources up to that upper limit, the necessary number of resources cannot be allocated to the next processing request, resulting in low performance.
"In the database management system, on the other hand, when each access to already stored data is made through threads prepared for such accesses, for example, the access can be made with a smaller number of threads. In other words, in order to process a request with an optimum number of processes or threads, it becomes important to determine an upper limit value for the number of processes or threads.
"An object of the present invention is to increase processing performance by setting a suitable upper limit on the number of resources for each processing request, depending on the arrangement of hardware such as a storage device, or on the contents of each processing request.
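The per-request upper limit described here could be sketched as a simple function of the storage arrangement: for example, capping a request's thread count at a reference count per disk device times the number of devices backing the data, bounded by what the system has free. All names and numbers below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of an "each-request resources-count determiner":
# cap the threads allocated to one processing request based on how many
# disk devices back the tables it touches. The reference count per device
# and the formula itself are assumptions for illustration only.

def determine_resource_cap(device_count, reference_per_device, system_free):
    """Upper limit of threads for a single processing request."""
    wanted = device_count * reference_per_device
    return max(1, min(wanted, system_free))
```

For instance, a request touching data spread over four devices, with a reference count of two threads per device, would be capped at eight threads, or fewer if the system has fewer threads free.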
"In accordance with an aspect of the present invention, the above object is attained by providing a database management system which includes: a processing request acceptor which accepts a processing request as a data query; an auxiliary storage device in which storage areas for the data stored in a database are arranged; a data operation executor which analyzes the accepted processing request and operates on a plurality of pieces of data on the basis of the analysis result; a resource manager which manages the data operations allocated to generated processes or threads; and a buffer manager which, upon execution of the data operations, caches the target data from the auxiliary storage device into a memory and determines whether or not that data is present in the cache. When the data operation executor executes the data operations, the buffer manager determines whether or not the data is present in the cache. If the data is not present in the cache, the resource manager determines the usable state of the processes or threads for the data operations. When some of the processes or threads are free, the resource manager sends the auxiliary storage device an access request to cache the target data in the memory so that the data operations can be executed through those processes or threads. When all the processes or threads are already in use, the resource manager executes the data operations after some of them are released. When the data is present in the cache, the resource manager executes the data operations directly.
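The execution flow just described (cache check, then thread-availability check, then I/O or wait) can be sketched with a semaphore standing in for the resource manager's pool of processes or threads. The class and method names, and the use of plain dictionaries for the cache and storage, are illustrative assumptions, not the patent's design.

```python
import threading

# Minimal sketch of the data-operation flow: the buffer manager's cache
# is a dict, the auxiliary storage device is another dict, and the
# resource manager's thread pool is modeled by a counting semaphore.

class DataOperationExecutor:
    def __init__(self, max_threads, storage):
        self.cache = {}                                 # buffer manager's cache
        self.storage = storage                          # auxiliary storage device
        self.slots = threading.Semaphore(max_threads)   # resource manager's pool

    def execute(self, key, operation):
        if key in self.cache:                # cache hit: execute directly
            return operation(self.cache[key])
        self.slots.acquire()                 # blocks until a thread is released
        try:
            self.cache[key] = self.storage[key]   # "I/O": cache the target data
            return operation(self.cache[key])
        finally:
            self.slots.release()             # free the thread for other requests
```

A second operation on the same key then finds the data in the cache and runs without touching the semaphore or the storage, mirroring the cache-hit path in the quoted summary.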
"In the present invention, since an upper limit on the number of resources allocated to a single processing request is set in the database management system, the resources can be used efficiently.
"Other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
"FIG. 1 shows a conceptual view for explaining an each-request resources-count determiner for determining the number of resources for each processing request in the present invention;
"FIG. 2 shows a schematic configuration of a computer system in an embodiment of the present invention;
"FIG. 3 is a flow chart for explaining details of the each-request resources-count determiner;
"FIG. 4 schematically shows a table for explaining schema definition information;
"FIG. 5 schematically shows a table for explaining storage arrangement information;
"FIG. 6 schematically shows a diagram for explaining the storage arrangement information when the hierarchy of a storage device is made complex;
"FIG. 7 schematically shows a table for explaining mapping information about schema and storage;
"FIG. 8 shows a flow chart for explaining details of the each-request resources-count determiner when an I/O performance is used as the storage arrangement information;
"FIG. 9 schematically shows a table for explaining an example of the storage arrangement information when the I/O performance is treated as the storage arrangement information;
"FIG. 10 is a flow chart for explaining the operation when resources are allocated to each operation target table in linking operation;
"FIG. 11 is a flow chart for explaining the operation of the data operation executor for executing operations involved by input/output (I/O) to/from a disk;
"FIG. 12 schematically shows a table for explaining a relationship between a reference resources count and a disk device;
"FIG. 13 schematically shows a table for explaining performance statistical information;
"FIG. 14 is a diagram for explaining the effects of the resources count allocation in the linking operation;
"FIG. 15 is a flowchart for explaining the resources count allocation when a linking sequence in the linking operation is considered; and
"FIG. 16 schematically shows a linking order coefficient management table."
For more information, see this patent application:
Keywords for this news article include:
Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC