No assignee for this patent application has been made.
News editors obtained the following quote from the background information supplied by the inventors: "Virtual memory allows a processor to address a memory space that is larger than physical memory. The translation between virtual memory and physical memory is typically performed using page tables. Often several page-table levels are employed, where each level helps translate a part of the virtual address. For instructions that access memory, virtual addresses need to be translated to physical addresses using the page tables. A Translation Lookaside Buffer (TLB) cache is common in processors to speed up translations between virtual and physical memory. TLBs are populated via different mechanisms; for example, for the AMD64 processor architecture, the processor employs a page-table walker that establishes the required translations and fills the TLB.
"If page tables change and a re-walk is desired, TLBs often need to be flushed to trigger a new page-table walk operation. This operation is usually desired when the operating system determines that the TLB entries should be filled again. The 'walk' refers to the process of going through (i.e., walking) the page-table to establish a virtual to physical mapping. A page-table walker performs the page-table walk operation. The page-table walker sets ACCESSED/DIRTY bits depending on the access type (load/store) upon first access. Generally, the processor does not clear these bits. The operating system (OS) can use these bits to determine which memory pages have been accessed and how they have been accessed. Often, these bits need to be cleared in page tables to force a store to the page table entry (PTE) upon page table walks on other processors. Additionally, remote TLB shoot down can be used to remove translations from remote TLBs for which a re-walk is desired.
"Shared-memory computer systems (e.g., computer systems that include multiple processors) allow multiple concurrent threads of execution to access shared memory locations. Unfortunately, writing correct multi-threaded programs is difficult due to the complexities of coordinating concurrent memory access. One approach to concurrency control between multiple threads of execution is transactional memory. In a transactional memory programming model, a programmer may designate a section of code (e.g., an execution path or a set of program instructions) as a 'transaction,' which a transactional memory system should execute atomically with respect to other threads of execution. For example, if the transaction includes two memory store operations, then the transactional memory system ensures that all other threads may only observe either the cumulative effects of both memory operations or of neither, but not the effects of only one.
"Various transactional memory systems have been proposed, including those implemented by software, by hardware, or by a combination thereof. However, many traditional implementations are bound by various limitations. For example, hardware-based transactional memory (HTM) proposals sometimes impose limitations on the size of transactions supported (i.e., maximum number of speculative memory operations that can be executed before the transaction is committed). Often, this may be a product of limited hardware resources, such as the size of one or more speculative data buffers used to buffer speculative data during transactional execution.
"One example of a transactional memory system is the Advanced Synchronization Facility (ASF) proposed by
As a supplement to the background information on this patent application, VerticalNews correspondents also obtained the inventors' summary information for this patent application: "A system and method are disclosed for providing very large read sets (i.e., read sets larger than can be achieved by directly tracking the data itself) for hardware transactional memory with limited hardware support (i.e., transactional memory systems that cannot directly track large regions of transactional memory) by monitoring meta-data such as page table entries.
"In some embodiments, an HTM mechanism tracks meta-data such as page-table entries (PTE) rather than the data itself. The HTM mechanism protects large regions of memory by providing conflict detection so that regions of memory can be located within a local read or write set. In some embodiments, the HTM mechanism functions at a cache-line level. However, it will be appreciated that while the term cache line is used to refer to units protected by the HTM, other units may also be protected. Also, in some embodiments, an ASF mechanism follows an early-abort principle and realizes a requester-wins strategy.
"In some embodiments, using the HTM mechanism to protect large regions via the meta-data operates on transactional a read set (i.e., all the memory addresses and items (other than writes) being accessed inside a transaction) because no backing store is available for actually modified cache lines. Accordingly, the HTM mechanism uses a traditional HTM approach for protecting speculatively written cache lines. Since storage for old values is not required for cache lines that belong in a transactions (TXs) read set, it is possible to apply the large region protection at a meta-level (e.g., at the page table mechanism).
"In some embodiments, an apparatus includes a processor coupleable to a shared memory that is shared by one or more other processors. The processor is configured to execute a section of code that includes a plurality of memory access operations to the shared memory. The processor includes a large region protection module which is configured to allow protection of a large region of memory, the large region of memory being larger than a memory line, by monitoring meta data relating to the large region of memory.
BRIEF DESCRIPTION OF THE DRAWINGS
"The disclosed embodiments may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several Figures designates a like or similar element.
"FIG. 1 is a generalized block diagram showing components of a multi-processor computer system configured to implement an advanced synchronization facility, in accordance with some embodiments.
"FIG. 2 is a block diagram showing a more detailed view of components comprising a processor, in accordance with some embodiments.
"FIG. 3 is a general flow diagram showing a method for executing a transaction using an ASF, in accordance with some embodiments.
"FIG. 4 is a generalized flow diagram showing a method for performing a large region protection operation, in accordance with some embodiments.
"FIG. 5 is a generalized block diagram showing a computer system configured to implement various embodiments of an ASF, in accordance with some embodiments."
For additional information on this patent application, see: Pohlack, Martin T.; Diestelhorst, Stephan. Protecting Large Regions without Operating-System Support. Filed
Keywords for this news article include: Patents.
Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC