News Column

"Large Receive Offload Functionality for a System on Chip" in Patent Application Approval Process

September 11, 2014



By a News Reporter-Staff News Editor at Politics & Government Week -- A patent application by the inventors Chudgar, Keyur (San Jose, CA); Sankaran, Kumar (San Jose, CA), filed on February 21, 2013, was made available online on August 28, 2014, according to news reporting originating from Washington, D.C., by VerticalNews correspondents.

This patent application is assigned to Applied Micro Circuits Corporation.

The following quote was obtained by the news editors from the background information supplied by the inventors: "The amount of web traffic over the internet is ever-increasing. Much of the increase in web traffic is due to increased social media usage, cloud based storage, online media streaming services, etc. Therefore, the amount of data to be processed by network devices and/or throughput requirements for network devices are ever-increasing. The majority of internet web traffic is Transmission Control Protocol (TCP) based web traffic. However, there is a significant overhead for network devices to process TCP based web traffic. As such, processing TCP based web traffic reduces throughput for network devices and/or reduces network data rates. Additionally, TCP based web traffic increases processing requirements of network devices (e.g., increases central processing unit (CPU) usage for network devices). Therefore, resources for other network applications performed by network devices are reduced (e.g., CPU usage for other network applications is reduced).

"One software-based solution is to gather data (e.g., TCP segments) when receiving packets of data. For example, an operating system (e.g., a kernel network stack) of a network device can gather data when receiving packets of data. As such, the number of data packets to be processed by a network device can be reduced. However, this solution still increases processing requirements of the network device (e.g., increases CPU usage for the network device). For example, the CPU of the network device is required to perform the gathering of the data and TCP protocol level functions. Therefore, TCP level protocol processing on the network device (e.g., the CPU) cannot be performed in parallel with the gathering of the data.

"The above-described description is merely intended to provide a contextual overview of current techniques for processing data in a network and is not intended to be exhaustive."

In addition to the background information obtained for this patent application, VerticalNews journalists also obtained the inventors' summary information for this patent application: "The following presents a simplified summary in order to provide a basic understanding of some aspects described herein. This summary is not an extensive overview of the disclosed subject matter. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.

"In an example embodiment, a system comprises a classifier engine, a first memory and at least one processor. The classifier engine is configured to classify one or more network packets received from a data stream as one or more network segments. The first memory is configured to store one or more packet headers associated with the one or more network segments. The at least one processor is configured to receive the one or more packet headers and generate a single packet header for the one or more network segments in response to a determination that a gather buffer that stores packet data for the one or more network segments has reached a predetermined size.

"In another example embodiment, a method comprises classifying one or more network packets received from a data stream as one or more network segments. The method also includes storing one or more packet headers associated with the one or more network segments in a first memory. The method also includes storing packet data for the one or more network segments in a gather buffer. The method can also include generating a single packet header for the one or more network segments in response to a determination that the gather buffer that stores the packet data for the one or more network segments has reached a predetermined memory size.

"In yet another example embodiment, a system includes a means for means for classifying one or more network packets received from a data stream as one or more network segments. The system also includes a means for storing one or more packet headers associated with the one or more network segments. The system also includes a means for storing packet data for the one or more network segments in a gather buffer. The system can also include a means for generating a single packet header for the one or more network segments in response to a determination that the gather buffer that stores packet data for the one or more network segments has reached a predetermined size.

"The following description and the annexed drawings set forth in detail certain illustrative aspects of the subject disclosure. These aspects are indicative, however, of but a few of the various ways in which the principles of various disclosed aspects can be employed and the disclosure is intended to include all such aspects and their equivalents. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

"FIG. 1 is a block diagram illustrating an example, non-limiting embodiment of a large receive offload (LRO) system in accordance with various aspects described herein.

"FIG. 2 is a block diagram illustrating an example, non-limiting embodiment of a LRO system implementing a queue manager in accordance with various aspects described herein.

"FIG. 3 is a block diagram illustrating an example, non-limiting embodiment of a LRO system implementing at least one system-level central processing unit in accordance with various aspects described herein.

"FIG. 4 is a block diagram illustrating an example, non-limiting embodiment of a LRO system implementing multiple memories in accordance with various aspects described herein.

"FIG. 5 is a block diagram illustrating an example, non-limiting embodiment of a LRO system with one or more network interfaces in accordance with various aspects described herein.

"FIG. 6 is a block diagram illustrating an example, non-limiting embodiment of a LRO system implementing a direct memory access engine in accordance with various aspects described herein.

"FIG. 7 is a block diagram illustrating an example, non-limiting embodiment of a LRO system for generating a network packet segment in accordance with various aspects described herein.

"FIG. 8 illustrates a flow diagram of an example, non-limiting embodiment of a method for implementing LRO functionality on a system on chip (SoC).

"FIG. 9 illustrates a flow diagram of another example, non-limiting embodiment of a method for implementing LRO functionality on a SoC.

"FIG. 10 illustrates a flow diagram of an example, non-limiting embodiment of a method for implementing LRO functionality on a SoC.

"FIG. 11 illustrates a flow diagram of another example, non-limiting embodiment of a method for implementing LRO functionality via one or more co-processors.

"FIG. 12 illustrates a block diagram of an example electronic computing environment that can be implemented in conjunction with one or more aspects described herein.

"FIG. 13 illustrates a block diagram of an example data communication network that can be operable in conjunction with various aspects described herein."

For the URL and more information on this patent application, see: Chudgar, Keyur; Sankaran, Kumar. Large Receive Offload Functionality for a System on Chip. Filed February 21, 2013 and posted August 28, 2014. Patent URL: http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=4107&p=83&f=G&l=50&d=PG01&S1=20140821.PD.&OS=PD/20140821&RS=PD/20140821

Keywords for this news article include: Applied Micro Circuits Corporation, Internet, Web Traffic, World Wide Web.

Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC





Source: Politics & Government Week

