TCP Congestion Control
Abstract
This paper is an exploratory survey of TCP congestion control principles and techniques.
In addition to the standard algorithms used in common software implementations of TCP, this paper also describes some of the more common proposals developed by researchers over the years. By studying congestion control techniques used in TCP implementation software and network hardware, we can better comprehend the performance issues of packet-switched networks and, in particular, the public Internet.
1 Introduction

There has been some serious discussion given to the potential of a large-scale Internet collapse due to network overload or congestion [6], [17]. So far the Internet has survived, but there have been a number of incidents throughout the years where serious problems have disabled large parts of the network. Some of these incidents have been a result of algorithms used, or not used, in the Transmission Control Protocol (TCP) [19].
Others are a result of problems in areas such as security, or perhaps more accurately, the lack thereof [24].
The popularity of the Internet has heightened the need for more bandwidth throughout all tiers of the network. Home users need more bandwidth than the traditional 64 Kb/s channel a telephone provider typically allows. Video, music, games, file sharing and browsing the web require more and more bandwidth to avoid the “World Wide Wait,” as it has come to be known by those with slower and often heavily congested connections.
Internet Service Providers (ISPs) who provide the access to the average home customer have had to keep up as more and more users get connected to the information superhighway. Core backbone providers have had to ramp up their infrastructure to support the increasing demand from their customers below.
Today it would be unusual to find someone in the U.S. that has not heard of the Internet, let alone experienced it in one form or another. The Internet has become the fastest growing technology of all time [8]. So far, the Internet is still chugging along, but a good question to ask is “Will it continue to do so?” Although this paper does not attempt to answer that question, it can help us to understand why it will or why it might not.
Good and bad network performance is largely dependent on the effective implementation of network protocols. TCP, easily the most widely used transport-layer protocol on the Internet (carrying application protocols such as HTTP, TELNET, and SMTP), plays an integral role in determining overall network performance.
Amazingly, TCP has changed very little since its initial design in the early 1980s. A few “tweaks” and “knobs” have been added, but for the most part the protocol has withstood the test of time. However, there are still a number of performance problems on the Internet, and fine-tuning TCP software continues to be an area of work for a number of people [21].

Over the past few years, researchers have spent a great deal of effort exploring alternative and additional mechanisms for TCP and related technologies in response to potential network overload problems. Some techniques have been implemented; others have been left behind; still others remain on the drawing board. We’ll begin our examination of TCP by trying to understand the underlying design concepts that have made it so successful.
This paper does not cover the basics of the TCP protocol itself, but rather the underlying designs and techniques as they apply to problems of network overload and congestion.
For a brief description on the basics of TCP, a companion paper is provided in [14].
2 The End-to-End Argument

The design of TCP was heavily influenced by what has come to be known as the end-to-end argument [18]. The key component of the end-to-end argument for our purposes is its method of handling congestion and network overload. The premise of the argument, and fundamental to TCP’s design, is that the end stations are responsible for controlling the rate of data flow. In this model, there are no explicit signaling mechanisms in the network which tell the end stations how fast to transmit, when to transmit, when to speed up or when to slow down. The TCP software in each of the end stations is responsible for answering these questions from implicit knowledge it obtains from the network or the explicit knowledge it receives from the other TCP host.
2.1 An Overview of TCP Flow Control
One of TCP’s primary functions is to properly match the transmission rate of the sender to that of the receiver and the network. It is important for the transmission to be at a high enough rate to ensure good performance, but also to protect against overwhelming the network or receiving host.
TCP’s 16-bit window field is used by the receiver to tell the sender how many bytes of data the receiver is willing to accept. Since the window field is limited to 16 bits, this provides for a maximum window size of 65,535 bytes.

The window size advertised by the receiver tells the sender how much data, starting from the current position in the TCP data byte stream, can be sent without waiting for further acknowledgements. As data is sent by the sender and then acknowledged by the receiver, the window slides forward to cover more data in the byte stream. This concept is known as a “sliding window” and is depicted in figure 1 below.

Figure 1 Sliding Window
As shown above, data within the window boundary is eligible to be sent by the sender.
Those bytes in the stream prior to the window have already been sent and acknowledged.
Bytes ahead of the window have not been sent and must wait for the window to “slide” forward before they can be transmitted by the sender. A receiver can adjust the window size each time it sends acknowledgements to the sender. The maximum transmission rate is ultimately bound by the receiver’s ability to accept and process data. However, this technique implies an implicit trust arrangement between the TCP sender and receiver. It has been shown that aggressive or unfriendly TCP software implementations can take advantage of this trust relationship to unfairly increase the transmission rate or even to intentionally cause network overload situations [20].
As we will see shortly, the sender and also the network can play a part in determining the transmission rate of data flow as well.
It is important to consider the limitation of the 65,535-byte maximum window size. Consider a typical internetwork that may have link speeds of 1 Gb/s or more. On a 1 Gb/s link, 125,000,000 bytes can be transmitted in one second, yet a sender can have at most one window of unacknowledged data in flight per round trip. If only two TCP stations were communicating on this link and each round trip took on the order of a second, at best 65,535/125,000,000, or only about 0.05% of the bandwidth, would be used in each direction!
Recognizing the need for larger windows on high-speed networks, the Internet Engineering Task Force released a standard for a “window scale option”, defined in RFC 1323 [12]. This standard effectively allows the window to grow from a 16-bit quantity to as much as 30 bits, or roughly one gigabyte of data in the window.1
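To make the arithmetic above concrete, the following sketch (mine, not from the paper) computes the throughput ceiling imposed by a window for an assumed round-trip time, and the effective window after applying an RFC 1323 scale factor. The link speed, round-trip time and shift count are illustrative assumptions.

def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """At most one window of data can be unacknowledged per round trip."""
    return (window_bytes * 8) / rtt_seconds

def scaled_window(window_field: int, shift_count: int) -> int:
    """RFC 1323: the effective window is the 16-bit field left-shifted
    by the negotiated shift count (at most 14)."""
    assert 0 <= window_field <= 0xFFFF and 0 <= shift_count <= 14
    return window_field << shift_count

link_bps = 1_000_000_000              # 1 Gb/s link (example)
rtt = 0.1                             # assume a 100 ms round-trip time

plain = max_throughput_bps(65_535, rtt)
scaled = max_throughput_bps(scaled_window(65_535, 14), rtt)
print(f"65,535-byte window:    {plain / 1e6:9.2f} Mb/s "
      f"({plain / link_bps:.2%} of the link)")
print(f"window scaled by 2^14: {scaled / 1e6:9.2f} Mb/s")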
2.2 Retransmissions, Timeouts and Duplicate Acknowledgements
TCP must rely mostly upon implicit signals it learns from the network and the remote host. It must make an educated guess as to the state of the network and trust the information from the remote host in order to control the rate of data flow. This may seem like an awfully tricky problem, but in most cases TCP handles it in a seemingly simple and straightforward way.
A sender’s implicit knowledge of network conditions may be achieved through the use of a timer. For each TCP segment sent, the sender expects to receive an acknowledgement within some period of time; otherwise, the expiration of the timer signals that something is wrong.
Somewhere along the end-to-end path of a TCP connection, a segment can be lost. Often this is due to congestion in network routers, where excess packets must be dropped. TCP not only must correct for this situation, but it can also learn something about network conditions from it.
Whenever TCP transmits a segment, the sender starts a timer which keeps track of how long it takes for an acknowledgment for that segment to return. This timer is known as the retransmission timer and is often initialized to 1.5 seconds by default. If an acknowledgement is returned before the timer expires, the timer is reset with no consequence. If, however, an acknowledgement for the segment does not return within the timeout period, the sender retransmits the segment and doubles the retransmission timer value for each consecutive timeout, up to a maximum of about 64 seconds [22]. If there are serious network problems, segments may take a few minutes to be successfully transmitted before the sender eventually times out and generates an error to the sending application.

Fundamental to the timeout and retransmission strategy of TCP is the measurement of the round-trip time between two communicating TCP hosts. The round-trip time may vary during the TCP connection as network traffic patterns fluctuate and as routes become available or unavailable.

1 A TCP option negotiated in the TCP connection establishment phase sets the number of bits by which the 16-bit window value is left-shifted in order to increase the effective value of the window.

TCP keeps track of when data is sent and at what time acknowledgements covering those sent bytes are returned. TCP uses this information to calculate an estimate of round trip time. As packets are sent and acknowledged, TCP adjusts its round-trip time estimate and uses this information to come up with a reasonable timeout value for packets sent. If acknowledgements return quickly, the round-trip time is short and the retransmission timer is thus set to a lower value. This allows TCP to quickly retransmit data when network response time is good, alleviating the need for a long delay between the occasional lost segment. The converse is also true. TCP does not retransmit data too quickly during times when network response time is long.
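As a rough illustration of the adaptive timeout idea just described, the sketch below estimates a retransmission timeout from a smoothed round-trip time plus a variance term, in the style of Jacobson’s algorithm as later codified in RFC 6298. The gains, the 1.5-second initial value and the 64-second ceiling are the commonly cited defaults and should be read as assumptions for illustration, not as this paper’s own parameters.

class RtoEstimator:
    ALPHA = 1 / 8      # gain for the smoothed RTT
    BETA = 1 / 4       # gain for the RTT variation
    MIN_RTO = 1.0      # seconds
    MAX_RTO = 64.0     # seconds (ceiling reached via exponential backoff)

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = 1.5   # initial value before any measurement

    def on_rtt_sample(self, rtt: float) -> float:
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        self.rto = min(self.MAX_RTO, max(self.MIN_RTO, self.srtt + 4 * self.rttvar))
        return self.rto

    def on_timeout(self) -> float:
        """Back off exponentially after each consecutive timeout."""
        self.rto = min(self.rto * 2, self.MAX_RTO)
        return self.rto

est = RtoEstimator()
for sample in (0.20, 0.25, 0.22, 0.80):      # example RTT samples in seconds
    print(f"RTT sample {sample:.2f}s -> RTO {est.on_rtt_sample(sample):.2f}s")
print(f"after a timeout -> RTO {est.on_timeout():.2f}s")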
If a TCP data segment is lost in the network, the receiver will never even know it was sent, but the sender is waiting for an acknowledgement for that segment to return. In one case, if an acknowledgement doesn’t return, the sender’s retransmission timer expires, which causes a retransmission of the segment. If, however, the sender had sent at least one additional segment after the one that was lost and that later segment is received correctly, the receiver does not send an acknowledgement for the later, out-of-order segment. The receiver cannot acknowledge out-of-order data; it must acknowledge the last contiguous byte it has received in the byte stream prior to the lost segment. In this case, the receiver will send an acknowledgement indicating the last contiguous byte it has received. If that last contiguous byte was already acknowledged, we call this a duplicate ACK. The reception of duplicate ACKs can implicitly tell the sender that a segment may have been lost or delayed. The sender knows this because the receiver only generates a duplicate ACK when it receives other, out-of-order segments. In fact, the Fast Retransmit algorithm described later uses duplicate ACKs as a way of speeding up the retransmission process.
3.0 Standard TCP Congestion Control Algorithms
The standard fare in TCP implementations today can be found in RFC 2581 [2]. This reference document specifies four standard congestion control algorithms that are now in common use. Each of the algorithms noted within that document was actually designed long before the standard was published [9], [11]. Their usefulness has passed the test of time. The four algorithms, Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery, are described below.
3.1 Slow Start
Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control. This is accomplished through the return rate of acknowledgements from the receiver. In other words, the rate of acknowledgements returned by the receiver determines the rate at which the sender can transmit data.
When a TCP connection first begins, the Slow Start algorithm initializes the congestion window to one segment, where the segment size is the maximum segment size (MSS) announced by the receiver during the connection establishment phase. When acknowledgements are returned by the receiver, the congestion window increases by one segment for each acknowledgement returned. The sender can thus transmit up to the minimum of the congestion window and the advertised window of the receiver, which is simply called the transmission window.
Slow Start is actually not very slow when the network is not congested and network response time is good.
For example, the first successful transmission and acknowledgement of a TCP segment increases the window to two segments. After successful transmission of these two segments and acknowledgements completes, the window is increased to four segments. Then eight segments, then sixteen segments and so on, doubling from there on out up to the maximum window size advertised by the receiver or until congestion finally does occur.
At some point the congestion window may become too large for the network, or network conditions may change such that packets are dropped. Lost packets will trigger a timeout at the sender. When this happens, the sender goes into congestion avoidance mode as described in the next section.
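A minimal sketch of the Slow Start growth pattern described above, assuming an example MSS and advertised window; it simply shows the congestion window doubling each round trip as acknowledgements return.

MSS = 1460                      # bytes, example maximum segment size
advertised_window = 64 * MSS    # receiver's advertised window (example)

cwnd = 1 * MSS                  # congestion window starts at one segment
rtt_round = 0
while cwnd < advertised_window:
    rtt_round += 1
    segments_in_flight = cwnd // MSS
    # One ACK returns per segment; each ACK grows cwnd by one segment.
    cwnd += segments_in_flight * MSS
    cwnd = min(cwnd, advertised_window)
    print(f"round {rtt_round}: cwnd = {cwnd // MSS} segments")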
3.2 Congestion Avoidance
During the initial data transfer phase of a TCP connection the Slow Start algorithm is used. However, there may be a point during Slow Start at which the network is forced to drop one or more packets due to overload or congestion. If this happens, Congestion Avoidance is used to slow the transmission rate. However, Slow Start is used in conjunction with Congestion Avoidance as the means to get the data transfer going again so it doesn’t slow down and stay slow.

In the Congestion Avoidance algorithm, a retransmission timer expiring or the reception of duplicate ACKs can implicitly signal the sender that a network congestion situation is occurring. The sender immediately sets its transmission window to one half of the current window size (the minimum of the congestion window and the receiver’s advertised window size), but to at least two segments. If congestion was indicated by a timeout, the congestion window is reset to one segment, which automatically puts the sender into Slow Start mode. If congestion was indicated by duplicate ACKs, the Fast Retransmit and Fast Recovery algorithms are invoked (see below).
As acknowledgements are received during Congestion Avoidance, the congestion window is increased. However, Slow Start is only used up to the halfway point where congestion originally occurred. This halfway point was recorded earlier as the new transmission window. After this halfway point, the congestion window is increased by one segment for each full window of segments that is acknowledged. This mechanism forces the sender to grow its transmission rate more slowly as it approaches the point where congestion had previously been detected.
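The following simplified model, written for illustration only, captures the behaviour described in this section: a threshold is set to half the window on loss, a timeout drops the sender back to Slow Start, and above the threshold the window grows by roughly one segment per round trip. It is a sketch of the idea, not a faithful TCP implementation; the window sizes are example values.

MSS = 1460

class Sender:
    def __init__(self, advertised_window=64 * MSS):
        self.cwnd = 1 * MSS
        self.ssthresh = advertised_window
        self.advertised = advertised_window

    def window(self) -> int:
        return min(self.cwnd, self.advertised)

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS                      # Slow Start: exponential growth
        else:
            self.cwnd += MSS * MSS // self.cwnd   # Congestion Avoidance: ~1 MSS per RTT

    def on_timeout(self):
        self.ssthresh = max(self.window() // 2, 2 * MSS)
        self.cwnd = 1 * MSS                       # fall all the way back to Slow Start

s = Sender()
for _ in range(200):
    s.on_ack()
print("cwnd before loss:", s.window() // MSS, "segments")
s.on_timeout()
print("cwnd after timeout:", s.window() // MSS, "segment; ssthresh:",
      s.ssthresh // MSS, "segments")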
3.3 Fast Retransmit
When a duplicate ACK is received, the sender does not know if it is because a TCP segment was lost or simply because a segment was delayed and received out of order at the receiver. If the receiver can re-order segments, it should not be long before the receiver sends the latest expected acknowledgement. Typically no more than one or two duplicate ACKs should be received when simple out-of-order conditions exist. If, however, more than two duplicate ACKs are received by the sender, it is a strong indication that at least one segment has been lost. The TCP sender assumes enough time has elapsed for all segments to be properly re-ordered by the fact that the receiver had enough time to send three duplicate ACKs.
When three or more duplicate ACKs are received, the sender does not even wait for the retransmission timer to expire before retransmitting the segment (as indicated by the position of the duplicate ACK in the byte stream). This process is called the Fast Retransmit algorithm and was first defined in [11]. Immediately following Fast Retransmit is the Fast Recovery algorithm.
3.4 Fast Recovery
Since the Fast Retransmit algorithm is used when duplicate ACKs are being received, the TCP sender has implicit knowledge that data is still flowing to the receiver. Why? Because duplicate ACKs can only be generated when a segment is received. This is a strong indication that serious network congestion may not exist and that the lost segment was a rare event. So instead of reducing the flow of data abruptly by going all the way into Slow Start, the sender only enters Congestion Avoidance mode.
Rather than start at a window of one segment as in Slow Start mode, the sender resumes transmission with a larger window, incrementing as if in Congestion Avoidance mode.
This allows for higher throughput under the condition of only moderate congestion [23].
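A small sketch of the Fast Retransmit and Fast Recovery reaction described above, under the assumption of a simple duplicate-ACK counter: three duplicate ACKs trigger an immediate retransmission, and the sender resumes at half its previous window rather than falling back to one segment. Window sizes and ACK numbers are invented for the example.

DUP_ACK_THRESHOLD = 3

class FastRecoverySender:
    def __init__(self, cwnd_segments=32):
        self.cwnd = cwnd_segments       # in segments, for simplicity
        self.ssthresh = cwnd_segments
        self.dup_acks = 0
        self.last_ack = 0

    def on_ack(self, ack_number: int):
        if ack_number == self.last_ack:
            self.dup_acks += 1
            if self.dup_acks == DUP_ACK_THRESHOLD:
                print(f"3 duplicate ACKs for byte {ack_number}: fast retransmit")
                self.ssthresh = max(self.cwnd // 2, 2)
                self.cwnd = self.ssthresh       # resume in Congestion Avoidance
        else:
            self.last_ack = ack_number          # new data acknowledged
            self.dup_acks = 0

s = FastRecoverySender()
for ack in (1000, 2000, 2000, 2000, 2000):      # one new ACK, then duplicates
    s.on_ack(ack)
print("cwnd:", s.cwnd, "segments; ssthresh:", s.ssthresh, "segments")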
To summarize this section of the paper, figure 2 below depicts what a typical TCP data transfer phase using TCP congestion control might look like. Notice the periods of exponential window size increase, linear increase and drop-off. Each of these scenarios depicts the sender’s response to implicit or explicit signals it receives about network conditions.

Figure 2 Congestion Control Overview

4.0 Latest Techniques
Although RFC 2581 and its associated algorithms have been doing an excellent job of ensuring good performance in the face of congestion on TCP/IP networks, there is still a lot of work going into enhancing TCP performance and responsiveness to congestion. During the 1990s, researchers such as Sally Floyd, Van Jacobson, Mark Allman, W. Richard Stevens, Jamshid Mahdavi and a host of others started producing a massive amount of research and experiments with TCP and related congestion control ideas. The wealth of information in this area is phenomenal, and it is hard to pick out the best ideas to present in this paper. Nevertheless, this section is an attempt to provide an overview of some of the popular ideas of the last decade. TCP and congestion control on the Internet is an area that is still actively being researched. For more information, consult the references noted in this paper.
4.1 Selective Acknowledgements
Whenever a TCP segment has been sent and the sender’s retransmission timer expires, the sender is forced to retransmit the segment, which the sender assumes has been lost. However, it is possible that between the time when the segment was initially sent and the time when the retransmission timer expired, other segments in the window were sent after the lost segment. It is also possible that these later segments arrived at the receiver and are simply queued awaiting the missing segment so they can be properly reordered. The receiver has no way of informing the sender that it has received other segments because of the requirement to acknowledge only the contiguous bytes it has received. This case demonstrates a potential inefficiency in the way TCP handles the occasional loss of segments.

Ideally, the sender should only retransmit the lost segment(s) while the receiver continues to queue the later segments. This behavior was identified as a potential improvement in TCP’s congestion control algorithms as early as 1988 [10]. Only recently was a mechanism to retransmit just the lost segments in these situations put into standard TCP implementations [15], [16].
Selective Acknowledgement (or SACK) is this technique implemented as a TCP option that can help reduce unnecessary retransmissions on the part of the sender. If the TCP connection has negotiated the use of SACK (through the use of the TCP header option fields), the receiver can offer feedback to the sender in the form of the selective acknowledgement option. The receiver reports to the sender which blocks of data have arrived, using the format shown in figure 3 below.

Figure 3 SACK Option
This list of blocks in the SACK option tells the sender which contiguous byte stream blocks the receiver has received. At most, four SACK blocks can be sent in one TCP segment because the maximum size of the options field in a TCP header is 40 bytes and each block report consists of 8 bytes plus the option header field of 4 bytes (for a total of 36 bytes). Note that the SACK information is advisory information only. The sender cannot rely upon the receiver to maintain the out-of-order data. Obviously the performance gain is had when the receiver does queue and re-order data that has been reported with the SACK option so that the sender limits its retransmissions.
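For illustration, the sketch below encodes and decodes SACK blocks in the RFC 2018 wire format (option kind 5, a length byte, then up to four pairs of 32-bit sequence numbers). The two-byte kind/length header is commonly padded with two NOP options for alignment, which is one way to arrive at the 36-byte figure quoted above; the sequence numbers used here are made up.

import struct

SACK_KIND = 5
MAX_BLOCKS = 4

def encode_sack_option(blocks):
    """blocks: list of (left_edge, right_edge) sequence-number pairs."""
    blocks = blocks[:MAX_BLOCKS]
    length = 2 + 8 * len(blocks)
    option = bytes([SACK_KIND, length])
    for left, right in blocks:
        option += struct.pack("!II", left, right)
    return option

def decode_sack_option(option):
    kind, length = option[0], option[1]
    assert kind == SACK_KIND and length == len(option)
    return [struct.unpack("!II", option[i:i + 8]) for i in range(2, length, 8)]

# Receiver has bytes 5000-5999 and 8000-8999 queued out of order (example).
opt = encode_sack_option([(5000, 6000), (8000, 9000)])
print(len(opt), "option bytes:", decode_sack_option(opt))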
4.2 NewReno
The Tahoe implementation of BSD included the ability to do Slow Start, Congestion Avoidance and Fast Retransmit. Reno was the implementation of TCP that included the Tahoe algorithms plus the ability to do Fast Recovery. NewReno is a slight modification to the Reno implementation of TCP [7] that can improve performance during Fast Recovery and Fast Retransmit mode.2
2 The names Reno and Tahoe were derived from the names of BSD TCP/IP implementations.

The NewReno modification only applies if SACK has not been negotiated in a connection. From [7], an overview of NewReno is as follows:
In the absence of SACK, there is little information available to the TCP sender in making retransmission decisions during Fast Recovery. From the three duplicate acknowledgements, the sender infers a packet loss, and retransmits the indicated packet. After this, the data sender could receive additional duplicate acknowledgements, as the data receiver acknowledges additional data packets that were already in flight when the sender entered Fast Retransmit.
In the case of multiple packets dropped from a single window of data, the first new information available to the sender comes when the sender receives an acknowledgement for the retransmitted packet (that is the packet retransmitted when Fast Retransmit was first entered). If there had been a single packet drop, then the acknowledgement for this packet will acknowledge all of the packets transmitted before Fast Retransmit was entered (in the absence of reordering).
However, when there were multiple packet drops, then the acknowledgement for the retransmitted packet will acknowledge some but not all of the packets transmitted before the Fast Retransmit. We call this packet a partial acknowledgment. According to RFC 2582, the TCP sender should infer from the partial acknowledgement that the indicated segment has been lost and immediately retransmit the segment.3

3 We use the term segment in place of packet as used in RFC 2582, but essentially the meanings of the two are the same for our purposes here.
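A minimal sketch of the partial-acknowledgement rule quoted above, assuming the sender remembers the highest sequence number outstanding when Fast Retransmit was entered; an ACK below that point indicates another lost segment and triggers an immediate retransmission. Sequence numbers are invented for the example.

def handle_ack_in_fast_recovery(ack, recovery_point):
    """recovery_point: highest sequence number sent before Fast Retransmit."""
    if ack >= recovery_point:
        return "full ACK: leave Fast Recovery"
    return f"partial ACK: retransmit the segment starting at byte {ack}"

recovery_point = 40_000          # highest byte outstanding at Fast Retransmit
print(handle_ack_in_fast_recovery(20_000, recovery_point))  # another hole remains
print(handle_ack_in_fast_recovery(40_000, recovery_point))  # everything covered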
4.3 Other TCP Congestion Control Techniques
There are a number of other proposals and experiments that have been performed on TCP to improve performance. In this section, we will just briefly cover two of the most recent from some of the leading researchers in the field.
4.3.1 Increasing TCP’s Initial Window Size
The experimental RFC 2414 [Allman98] suggests increasing TCP’s initial window size from one segment to roughly 4 kilobytes. By doing so, in certain situations it is believed to offer better performance by being able to fill the “pipe” quicker.
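As a rough illustration, the sketch below computes the larger initial window for a few example MSS values using the bound given in RFC 2414, which as I recall is min(4 * MSS, max(2 * MSS, 4380 bytes)).

def initial_window(mss: int) -> int:
    """Upper bound on the initial congestion window per RFC 2414 (as recalled)."""
    return min(4 * mss, max(2 * mss, 4380))

for mss in (536, 1460, 4096):
    iw = initial_window(mss)
    print(f"MSS {mss:5d} bytes -> initial window {iw} bytes ({iw // mss} segments)")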
4.3.2 TCP Pacing
If a TCP sender, a router, or another intermediate device spaces TCP packets apart, the bursty nature of data transmission may be reduced. The intended effect is that by reducing the bursts in network traffic, periods of congestion and eventual packet loss are also reduced. The advantages and disadvantages of pacing are only beginning to be understood [1]. However, a popular commercial product already implements a technique similar to TCP pacing and is being installed in many large organizations [Packeteer00].
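A simple sketch of the pacing idea, assuming the sender knows its congestion window and an estimate of the round-trip time: rather than sending the window as a single burst, it spaces segments evenly across one RTT. The window, MSS and RTT below are example values.

def pacing_interval(cwnd_bytes: int, mss: int, rtt_seconds: float) -> float:
    """Delay to insert between segments so one window is spread over one RTT."""
    segments_per_rtt = max(cwnd_bytes // mss, 1)
    return rtt_seconds / segments_per_rtt

cwnd = 32 * 1460                 # 32 segments in flight (example)
print(f"send one segment every {pacing_interval(cwnd, 1460, 0.1) * 1000:.1f} ms")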
4.4 Non-TCP Congestion Control Techniques
There are a number of techniques which are worth our time to examine even though they are not directly implemented within TCP software on end systems. These techniques can indirectly affect TCP congestion control.
For example, whenever a router drops a packet, it in effect provides a signal to the TCP sender to slow down by causing a retransmission timer to expire. If routers could use more advanced packet drop techniques, they might be able to better control network congestion through the implicit signals TCP senders detect.
Also, there are non-technical designs that can affect network performance. By implementing a system where a cost is associated with network transmission, end stations may adjust their transmission rate up or down based on the value of performance they may require. Both of these types of techniques are briefly explored below.
4.4.1 Random Early Detection
Perhaps one of the most notable enhancements to congestion control techniques has been the development of Random Early Detection (RED) for internetwork routers. This algorithm manages router queues and drops packets based on a queue threshold. This in effect causes congestion control to be activated just prior to any actual network congestion event.
For example, to signal traditionally implemented TCP senders to slow down, a router using RED will monitor its average queue depth. As network traffic increases and crosses a threshold, RED will begin to drop packets with a certain probability. TCP senders will go into Slow Start and Congestion Avoidance mode once they have detected a lost packet. This helps the network slow down before actual congestion occurs.
The beauty of this technique is that it is fairly simple to implement and it helps prevent high-bandwidth TCP connections from starving low-bandwidth TCP connections. It also does not allow unfriendly TCP implementations to gain an unfair advantage, because it removes the sole reliance on the TCP sender/receiver trust relationship.4 Since the packet drop function is based on a certain probability, connections using a larger share of the bandwidth will have more of their traffic dropped than low-bandwidth users.

4 Such as in the case where TCP senders must rely on explicit window and ACK information from the TCP receivers.

RED was first described in [4] and is recommended in [3].
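The sketch below illustrates the RED drop decision in simplified form, using an exponentially weighted average of the queue depth and a drop probability that ramps up between a minimum and a maximum threshold. The thresholds, weight and maximum probability are invented example parameters, not values recommended by [4].

import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
        self.avg = 0.0

    def on_arrival(self, current_queue_len: int) -> bool:
        """Return True if the arriving packet should be dropped."""
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return False                      # queue is short: never drop
        if self.avg >= self.max_th:
            return True                       # queue is long: always drop
        # In between: drop with probability that grows with the average depth.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

red = RedQueue()
for qlen in (2, 4, 8, 12, 16, 20):
    print(f"queue {qlen:2d} packets -> drop arriving packet: {red.on_arrival(qlen)}")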
4.4.2 Explicit Congestion Notification
Also briefly described in [4] and further expanded upon in [5], Explicit Congestion Notification (ECN) is a technique that marks packets instead of dropping them as RED usually does. The idea behind implementing ECN instead of RED is to avoid packet drops, particularly where the delay caused by retransmission needs to be avoided. Good examples of cases where this delay should be avoided are real-time applications such as two-way voice communications, or terminal programs such as TELNET.
Routers can mark two bits in the IP Type of Service (ToS) header field to signal whether or not congestion is occurring. TCP senders can then adjust their rate of transmission appropriately if they see that these bits are set to indicate a network congestion condition is occurring.
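As an illustration of the marking idea, the sketch below sets and tests two congestion bits in the old ToS octet. The particular codepoint layout used here (the two least-significant bits, with both bits set meaning "Congestion Experienced") follows the later RFC 3168 formalization and is an assumption relative to the description in this paper.

ECN_MASK = 0b11          # low two bits of the IP ToS / traffic-class octet
ECN_CE = 0b11            # "Congestion Experienced" codepoint

def mark_congestion(tos_byte: int) -> int:
    """Router: set the CE codepoint instead of dropping the packet."""
    return (tos_byte & ~ECN_MASK) | ECN_CE

def congestion_experienced(tos_byte: int) -> bool:
    """End host: check whether a router marked the packet."""
    return (tos_byte & ECN_MASK) == ECN_CE

tos = 0b0000_1010        # example ToS byte with an ECN-capable codepoint set
marked = mark_congestion(tos)
print(f"before: {tos:08b}  after: {marked:08b}  CE? {congestion_experienced(marked)}")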
4.4.3 Network Pricing
An entirely different category of congestion control is the use of a network-pricing model. In this case, the cost of transmission, whether in time, usage or capacity, can be on a fee basis. By making the transmission of TCP traffic non-free, there may be a monetary incentive to avoid congestion [13]. Applying a cost to transmission may help force senders to minimize the amount of traffic they generate. This in effect attempts to make it expensive for users to cause congestion and high load conditions. It is analogous to getting a speeding ticket on the highway.
5.0 Conclusion
Over the past decade a large amount of research and experimentation has gone into TCP performance and congestion control. A great deal of that work has paid off in the form of an Internet that continues to function considerably well even in light of the increasing traffic demands.
Now of great concern to a number of network practitioners is the concept of “network fairness”. Here the goal is to provide a level playing field for all participants and to keep greedy or “eager” TCP senders from crowding out low-bandwidth connections. The use of RED is one mechanism that is becoming popular among Internet Service Providers and large organizations.
It remains to be seen how far current congestion control techniques can carry the Internet as its growth continues. So far they have performed admirably.

Abbreviations

ACK     Acknowledgement
bit     binary digit
BSD     Berkeley Software Distribution
ECN     Explicit Congestion Notification
Gb/s    Gigabits per second
HTTP    HyperText Transfer Protocol
IETF    Internet Engineering Task Force
IP      Internet Protocol
ISP     Internet Service Provider
Kb/s    Kilobits per second
MSS     Maximum Segment Size
RED     Random Early Detection
RFC     Request For Comments
RSVP    Resource ReSerVation Protocol
SACK    Selective ACKnowledgement
SMTP    Simple Mail Transfer Protocol
TCP     Transmission Control Protocol
TCP/IP  Transmission Control Protocol/Internet Protocol
ToS     Type of Service
UDP     User Datagram Protocol

References

[1] Amit Aggarwal, Stefan Savage, and Thomas Anderson. Understanding the Performance of TCP Pacing. IEEE InfoCom 2000, March 30, 2000.

[2] M. Allman, V. Paxson, and W. Stevens. TCP Congestion Control, April 1999, RFC 2581.

[3] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and Lixia Zhang. Recommendations on Queue Management and Congestion Avoidance in the Internet, April 1998, RFC 2309.

[4] Sally Floyd and Van Jacobson. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, August 1993.

[5] Sally Floyd. TCP and Explicit Congestion Notification. ACM Computer Communications Review, October 1994, pp. 10-23.

[6] Sally Floyd and Kevin Fall. Promoting the Use of End-to-End Congestion Control in the Internet. IEEE/ACM Transactions on Networking, August 1999.

[7] S. Floyd and T. Henderson. The NewReno Modification to TCP’s Fast Recovery Mechanism, April 1999, RFC 2582.

[8] Harris Interactive. PC and Internet Use Continue to Grow at Record Pace. Press release, February 7, 2000.

[9] Van Jacobson. Congestion Avoidance and Control. Computer Communications Review, Volume 18, Number 4, pp. 314-329, August 1988.

[10] V. Jacobson and R. Braden. TCP Extensions for Long-Delay Paths, October 1988, RFC 1072.

[11] Van Jacobson. Modified TCP Congestion Avoidance Algorithm. end2end-interest mailing list, April 30, 1990.

[12] V. Jacobson, R. Braden, and D. Borman. TCP Extensions for High Performance, May 1992, RFC 1323.

[13] Scott Jordan. Pricing and Differentiated Services in Internet and ATM, http://www.eng.uci.edu/~sjordan/pubs/Pricing/index.htm, March 11, 1999.

[14] John Kristoff. The Transmission Control Protocol, March 2000.

[15] Jamshid Mahdavi. Private e-mail to John Kristoff, December 12, 1999.

[16] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Acknowledgement Options, October 1996, RFC 2018.

[17] Bob Metcalfe. From the Ether. InfoWorld, December 4, 1995.

[18] David D. Clark. The Design Philosophy of the DARPA Internet Protocols. In Proceedings of SIGCOMM '88, Computer Communications Review, Vol. 18, No. 4, August 1988, pp. 106-114.

[19] Jon Postel. Transmission Control Protocol, September 1981, RFC 793.

[20] Stefan Savage, Neal Cardwell, David Wetherall, and Tom Anderson. TCP Congestion Control with a Misbehaving Receiver. ACM Computer Communications Review, October 1999.

[21] Jeffrey Semke, Jamshid Mahdavi, and Matthew Mathis. Automatic TCP Buffer Tuning. Computer Communications Review, ACM SIGCOMM, Volume 28, Number 4, October 1998.

[22] W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, ISBN 0-201-63346-9, January 1994.

[23] W. Stevens. TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms, January 1997, RFC 2001.

[24] Bob Sullivan. Remembering the Net Crash of ’88. MSNBC, November 2, 1998.
