The assignee for this patent application is
Reporters obtained the following quote from the background information supplied by the inventors: "Network processors are generally used for analyzing and processing packet data for routing and switching packets in a variety of applications, such as network surveillance, video transmission, protocol conversion, voice processing, and internet traffic routing. Early types of network processors were based on software-based approaches with general-purpose processors, either singly or in a multi-core implementation, but such software-based approaches are slow. Further, increasing the number of general-purpose processors had diminishing performance improvements, or might actually slow down overall network processor throughput. Newer designs add hardware accelerators in a system on chip (SoC) architecture to offload certain tasks from the general-purpose processors, such as encryption/decryption, packet data inspections, and the like.
"Network processors implemented as an SoC having multiple processing modules might typically employ one or more general-purpose processors and one or more hardware accelerators, the hardware accelerators implementing well-defined procedures to improve the efficiency and performance of the SoC. However, the general-purpose processors might be required for certain packet processing functions, such as deep-packet inspection, that might not be efficiently implemented using the hardware accelerators alone. Further, overall throughput of the SoC might be limited where the processors 'stall' waiting for packet data to become available for processing when using memory, particularly memories external to the SoC, to communicate between the accelerators and the processors. For example, if a processor core tries to access memory addresses which are not in its cache and the memory system has to go to other memory (e.g., dynamic random access memory or 'DRAM') to get them, it can cause the processor core to stall for hundreds of processor clock cycles per address to wait for the memory system to deliver the requested data to the processor core. In another example, an external memory might include two or more substructures (e.g., multiple banks of DRAM). In such a system, a latency penalty might be incurred for multiple access requests to the same memory substructure. Additionally, a given set of operations for a data flow might be required to be completed in a given order, further adding to latency. Thus, a technique for reducing latency when accessing memory is desirable."
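The bank-conflict penalty the background describes can be illustrated with a toy latency model. All cycle counts and the bank-mapping rule below are invented for illustration; the patent application itself publishes no code or timing figures.

```c
#include <assert.h>

/* Toy DRAM timing model: every cache-missing access pays a base cost,
 * and an access to the same bank as the immediately preceding access
 * pays an extra precharge/activate penalty. Numbers are illustrative. */
#define BASE_CYCLES     100  /* cost of any access that misses cache   */
#define CONFLICT_CYCLES  50  /* extra cost of reusing the busy bank    */
#define NUM_BANKS         4  /* assume addresses interleave across 4 banks */

static int total_latency(const unsigned addrs[], int n)
{
    int cycles = 0;
    int last_bank = -1;
    for (int i = 0; i < n; i++) {
        int bank = (int)(addrs[i] % NUM_BANKS); /* hypothetical bank map */
        cycles += BASE_CYCLES;
        if (bank == last_bank)
            cycles += CONFLICT_CYCLES;          /* same-substructure penalty */
        last_bank = bank;
    }
    return cycles;
}
```

Under this model, three back-to-back accesses that all land in one bank cost 400 cycles, while the same three accesses spread across different banks cost 300, which is the kind of gap the quoted background attributes to repeated requests to the same memory substructure.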
In addition to obtaining background information on this patent application, VerticalNews editors also obtained the inventor's summary information for this patent application: "This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
"Described embodiments provide a network processor comprising a shared memory, an input/output module configured to receive a packet and store the received packet in the shared memory, a packet processing module, and a processor core module having a local memory. The packet processing module is configured to classify the received packet stored in the shared memory, identify to which one of one or more known flows the received packet pertains, retrieve structural metadata corresponding to the identified flow of the received packet, and pass the structural metadata to the processor core module. The processor core module is configured to pre-fetch content at address locations in the shared memory specified by the structural metadata, store the pre-fetched content into the local memory, and process the pre-fetched content stored in the local memory.
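The data flow the summary describes can be sketched roughly as follows. Every structure and function name here is hypothetical (the application discloses no code), and plain `memcpy` stands in for the hardware pre-fetch into core-local memory:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical structural metadata produced by the packet classifier:
 * for a recognized flow, it lists the shared-memory regions the
 * processor core should pre-fetch before it begins processing. */
#define MAX_REGIONS 4

struct flow_metadata {
    uint32_t flow_id;                    /* which known flow matched      */
    size_t   region_offset[MAX_REGIONS]; /* offsets into shared memory    */
    size_t   region_len[MAX_REGIONS];    /* length of each region         */
    int      num_regions;
};

/* Shared memory where the I/O module stored the received packet. */
static uint8_t shared_mem[1024];

/* Fast local memory private to the processor core. */
static uint8_t local_mem[256];

/* Pre-fetch the regions named by the metadata into local memory, so the
 * core works from local storage instead of stalling on shared memory. */
static size_t prefetch(const struct flow_metadata *md)
{
    size_t copied = 0;
    for (int i = 0; i < md->num_regions; i++) {
        memcpy(local_mem + copied,
               shared_mem + md->region_offset[i],
               md->region_len[i]);
        copied += md->region_len[i];
    }
    return copied;
}

/* Process the pre-fetched content entirely out of local memory; a
 * trivial byte sum stands in for real packet processing here. */
static uint32_t process(size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += local_mem[i];
    return sum;
}
```

The point of the split is ordering: classification and metadata lookup happen in the packet processing module, so by the time the processor core runs `process()`, everything it touches is already resident in its local memory.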
BRIEF DESCRIPTION OF THE DRAWING FIGURES
"Other aspects, features, and advantages of described embodiments will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements.
"FIG. 1 shows a block diagram of a network processor operating in accordance with exemplary embodiments;
"FIG. 2 shows a block diagram of a system cache of the network processor of FIG. 1;
"FIG. 3 shows a block diagram of a modular packet processor sub-module of the network processor of FIG. 1 in accordance with exemplary embodiments;
"FIG. 4 illustrates an exemplary configuration table used in the modular packet processor of FIG. 3; and
"FIG. 5 illustrates an exemplary process for processing a received packet in accordance with one embodiment of the invention."
For more information, see this patent application: Munoz, Robert J. Packet Data Processor in a Communications Processor Architecture. Filed
Keywords for this news article include: Software,
Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC