Hiromasa IKEDA Masafumi KATOH Naohisa KOMATSU Toshikane ODA Hiroshi SAITO Hiroshi SUZUKI Miki YAMAMOTO
Quality of service requirements are satisfied conjointly by the service model, which determines how resources are shared, and by network engineering, which determines how much capacity is provided. In this paper we consider the impact of the adopted charging scheme on the feasibility of fulfilling QoS requirements. We identify three categories of charging scheme, based respectively on flat-rate pricing, congestion pricing, and transaction pricing.
Brian L. MARK Gopalakrishnan RAMAMURTHY
One of the important challenges in the design of ATM networks is how to provide quality-of-service (QoS) while maintaining high network resource utilization. In this paper, we discuss the role of real-time traffic characterization in QoS control for ATM networks and review several approaches to the problem of resource allocation. We then describe a particular framework for QoS control in which real-time measurements of a connection stream are used to determine appropriate parameters for usage parameter control (UPC). Connection admission control (CAC) is based on the characterization of the aggregate stream in terms of the individual stream UPC descriptors, together with real-time measurements.
Originally intended for application to B-ISDN, which is carrier oriented, ATM technology has been actively studied for application to LAN-based environments since the beginning of the 1990s. One of the most notable developments in the LAN area is a rich set of application services. A number of technical specifications for major application services have been developed, including LAN Emulation, IP over ATM, Multi-protocol over ATM, and Voice and Telephony over ATM, as well as native ATM services such as MPEG2 over ATM. The development of these new services raises new challenges related to traffic management. Keeping pace with this development, a number of traffic control mechanisms have also been developed to maximize the performance of these services. Traffic control and management techniques, however, are still in the early stage of their learning curve, and network engineers face challenging problems related to traffic management. This paper reviews the major service-related technologies and discusses the traffic management issues associated with these services. In particular, it describes real-world traffic management as practiced by average network engineers with state-of-the-art products. Although the technology has advanced through many research works, there seems to be a considerable gap between practice and principles. This paper discusses the traffic issues of ATM LANs from this perspective and points out some challenges for the future. Most of the difficulties in handling traffic issues stem from differences in implementation details. To alleviate this difficulty, the introduction of a unified node model that describes the traffic handling capability of ATM nodes in sufficient detail is suggested.
Pier Luigi CONTI Hiroshi SAITO Livia DE GIOVANNI
In this paper an algorithm for Connection Admission Control in ATM is considered. It is shown to work under many different kinds of dependence among arrivals, including long-range dependence. This point is relevant, since recent papers show that ATM traffic is characterised by self-similarity, and hence by long-range dependence. An upper bound for the CLR is given without assuming any specific cell arrival process. Applications to simulated and real data (obtained by segmenting and shaping Ethernet packets) are considered; they show the accuracy and tightness of the considered upper bound.
Arnold L. NEIDHARDT Frank HUEBNER Ashok ERRAMILLI
We examine the effectiveness of shaping and policing mechanisms in reducing the inherent variability of fractal traffic, with the objective of increasing network operating points. Whether a shaper simply spaces a flow or allows small bursts according to a leaky bucket, we show using analytical arguments that, i) the Hurst parameter, which describes the asymptotic variability of the traffic, is unaffected; and ii) while the traffic can be made smoother over time scales smaller than one corresponding to the shaper
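The leaky-bucket shaping and policing discussed above can be illustrated with a minimal token-bucket-style policer sketch; the class name, parameters, and units below are illustrative assumptions, not taken from the paper:

```python
class LeakyBucketPolicer:
    """Sketch of a leaky-bucket policer in its bucket-level formulation.

    A cell arriving at time t conforms if the bucket, which drains at
    `rate` units per second and holds at most `depth` units, would not
    overflow after adding one unit of work for the new cell.
    """

    def __init__(self, rate, depth):
        self.rate = rate      # drain rate (cells per second)
        self.depth = depth    # bucket depth (burst tolerance, in cells)
        self.level = 0.0      # current bucket level
        self.last = 0.0       # time of the previous arrival

    def conforms(self, t):
        # Drain the bucket for the elapsed time, then test the new cell.
        self.level = max(0.0, self.level - (t - self.last) * self.rate)
        self.last = t
        if self.level + 1.0 <= self.depth:
            self.level += 1.0
            return True
        return False  # non-conforming: drop (policer) or delay (shaper)
```

With rate 1 cell/s and depth 3, a burst of five simultaneous cells yields three conforming cells; a later arrival conforms again once the bucket has drained. As the abstract argues, such a device bounds short-time-scale bursts but cannot alter long-time-scale (Hurst-parameter) variability.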
Sven-Olof LARSSON Åke ARVIDSSON
By reserving transmission capacity on a series of links from one node to another, making a virtual path connection (VPC) between these nodes, several benefits are obtained. VPCs simplify routing at transit nodes, connection admission control, and QoS management by traffic segregation. As telecommunications traffic experiences variations in the number of calls per time unit due to office hours, inaccurate forecasting, quick changes in traffic loads, and changes in the types of traffic (as with the introduction of new services), there is a need to cope with this by adaptive capacity reallocation between different VPCs. We have developed a type of local VPC capacity management policy that uses an allocation function to determine the capacity needed for the coming updating interval, based on the current number of active connections. We suggest an allocation function that is independent of the actual traffic, and determine its optimal parameters and the optimal updating interval for different overhead costs. The local approach is shown to combine benefits from both VP and VC routing through fast capacity reallocation. The signaling method is easy to implement, and evaluations indicate that the method is robust.
Bursts from a number of variable bit rate sources allocated to a virtual path with a given capacity can inundate the channel. Buffers used to take care of such bursts can fill up rapidly. The buffer size limits its burst handling capability. With large bursts or a number of consecutive bursts, the buffers fill up and this leads to high cell losses. Channel reconfiguration with dynamic allocation of spare capacities is one of the methods used to alleviate such cell losses. In reconfigurable networks, spare capacity allocation can increase the channel rates for short durations, to cope with the excess loads from the bursts. The dynamic capacity allocations are adaptable to the loads and have fast response times. We propose heuristic rules for spare capacity assignments in ATM networks. By monitoring buffer occupancy, triggers which anticipate excess traffic can be used to assign spare capacities to reduce the cell loss probabilities in the network.
In the current ATM AAL5 implementation, even a single cell loss can corrupt a whole packet. Hence, it has been observed that the throughput of upper-layer protocols may easily collapse on a congested ATM network. In this paper, we propose a buffer management method called the Age Priority Packet Discarding (APPD) scheme, to be used along with two other schemes: Early Packet Discarding (EPD) and Partial Packet Discarding (PPD). After describing the operation and pseudo code of the proposed APPD scheme and how it works with the EPD/PPD schemes, the packet-level QoS of APPD and its extended versions is derived analytically under a homogeneous ON-OFF source model. Numerical results obtained via this analytical approach suggest that the proposed APPD scheme reduces packet loss probability more effectively and fairly than the other schemes.
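The EPD and PPD mechanisms that APPD builds on can be sketched with a small cell-level simulation; the function name, parameters, and the simplification that no cells drain during a burst are assumptions for illustration, and APPD's age-based priority is not modeled:

```python
def simulate_epd_ppd(packets, buffer_size, epd_threshold):
    """Toy cell-level simulation of Early/Partial Packet Discard.

    `packets` is a list of packet lengths in cells. EPD: if the queue
    is above `epd_threshold` when a packet's first cell arrives, the
    whole packet is discarded up front. PPD: if a cell of an admitted
    packet is lost to buffer overflow, the rest of the packet is
    discarded too. Returns the number of packets fully enqueued.
    (For clarity, no cells are drained during the burst.)
    """
    queue = 0
    delivered = 0
    for length in packets:
        if queue > epd_threshold:        # EPD: refuse the whole packet
            continue
        admitted = 0
        for _ in range(length):
            if queue < buffer_size:
                queue += 1
                admitted += 1
            else:                        # PPD: drop the packet's tail
                break
        if admitted == length:
            delivered += 1
    return delivered
```

The point both schemes share is that buffer space is never wasted on cells belonging to a packet that is already doomed, which is why they protect upper-layer (e.g. TCP) goodput on a congested link.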
This paper deals with overload control during congestion in a shared-buffer ATM switch via selective cell discard and buffer management. Specifically, we consider the question of efficiency in buffer control schemes, aiming to reduce the number of cells that must be discarded during congestion while providing "fair" access to the shared buffer for all users. To prevent performance degradation of the shared-buffer switch under imbalanced traffic conditions, a "gated" buffer control scheme is proposed. The idea of the "gated" control policy is to add a control gate in front of the logical queue of each overloaded output port. Some incoming cells destined for the overloaded ports can be blocked before entering the shared buffer, which makes room in the shared buffer for incoming cells destined for the non-overloaded ports. This gated buffer control scheme can be modeled as a variation of the SMXQ (sharing with a maximum queue length) scheme with a set of dynamically adjusted queue length thresholds. The gated buffer control is applied in a simulation study to a shared-buffer ATM switch under various cell discard mechanisms. In most cases the proposed scheme not only reduces the overall cell loss but also satisfies the "fair" access requirement under network congestion, provided the dynamic queue length thresholds are adjusted properly.
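The SMXQ-style admission test underlying the gated scheme can be sketched as follows; the function and parameter names are illustrative, and the per-port cap is shown as a fixed value rather than the dynamically adjusted thresholds of the paper:

```python
def smxq_admit(queues, port, total_buffer, max_queue):
    """SMXQ-style admission test for a shared-buffer switch (sketch).

    A cell destined for `port` is admitted only if the shared buffer
    has room AND the port's logical queue is below its length cap --
    the "gate" that keeps one overloaded port from monopolizing the
    shared buffer. `queues` maps port -> current logical queue length.
    """
    if sum(queues.values()) >= total_buffer:
        return False                     # shared buffer is full
    if queues.get(port, 0) >= max_queue:
        return False                     # gate closed for this port
    queues[port] = queues.get(port, 0) + 1
    return True
```

In the paper's scheme the `max_queue` cap would be raised or lowered per port as load shifts, closing the gate only for ports that are actually overloaded.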
Hideyuki SHIMONISHI Hiroshi SUZUKI
Weighted Round Robin (WRR) scheduling is an extension of round robin scheduling. Because of its simplicity and bandwidth guarantee, WRR cell scheduling is commonly used in ATM switches. However, since cells in individual queues are sent cyclically, the delay bounds in WRR scheduling grow as the number of queues increases. Thus, static priority scheduling is often used with WRR to improve the delay bounds of real-time queues. In this paper, we show that the burstiness generated in the network is an even greater factor in the degradation of delay bounds. In ATM switches with per-class queueing, a number of connections are multiplexed into one class queue. The multiplexed traffic will have burstiness even if the individual connections have none, and when the multiplexed traffic is separated at the downstream switches, the separated traffic will have burstiness even if the multiplexed traffic was shaped in the upstream switches. In this paper, we propose a new WRR scheme, namely WRR with Save and Borrow (WRR/SB), that helps improve the delay bound performance of WRR by taking into account the burstiness generated in the network. We analyze these cell scheduling methods to discuss their delay characteristics. Through numerical examples, we show that delay bounds in WRR are dominated mainly by the burstiness of the input traffic, and thus that WRR/SP, a combination of WRR and static priority scheduling, is less effective in improving delay bounds. We show that WRR/SB can provide better delay bounds than WRR and that it can achieve the same target delay bound with a smaller extra bandwidth, whereas a large extra bandwidth must be allocated for WRR.
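Plain WRR, the baseline the paper improves on, can be sketched in a few lines; the function name and the one-cell-per-credit service model are illustrative assumptions (the Save-and-Borrow extension itself is not shown):

```python
from collections import deque

def weighted_round_robin(queues, weights, rounds):
    """One-cell-per-credit Weighted Round Robin over per-class queues.

    In each round the scheduler visits the classes cyclically and sends
    up to weights[i] cells from queue i, so a backlogged class i is
    guaranteed weights[i] / sum(weights) of the link. The cyclic visit
    is also why delay bounds grow with the number of queues: a cell can
    wait a full round of everyone else's credits.
    Returns the transmission order.
    """
    order = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            for _ in range(weights[i]):
                if q:
                    order.append(q.popleft())
    return order
```

With two backlogged classes and weights 2:1, the output interleaves two class-0 cells with one class-1 cell per round, i.e. a 2/3 vs 1/3 bandwidth split.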
Cheng-Shong WU Jin-Chyang JIAU Kim-Joan CHEN
Cell delay variation (CDV) is considered an important performance measure due to the stringent timing requirements of video and multimedia services. In this paper we address the problem of guaranteeing CDV performance in virtual path (VP)-based ATM multiplexing. We propose a rate-based, non-work-conserving scheduling algorithm, called interleaved round robin (IRR), for serving the traffic streams of the VPs on the outgoing link. Our performance analysis shows that the proposed scheme provides upper and lower bounds on the inter-visit time (IVT) for each VP, where the difference between the upper and lower bounds depends only on the number of multiplexed VPs. The distribution of the VP IVT under an IRR server can also be well approximated using a random incidence technique. In addition to the VP-level CDV performance, we further examine the virtual connection (VC)-level CDV incurred within a multi-stage network through a simulation study. The simulation results show that the IRR server provides traffic regulation and smoothing at each network node. Moreover, the CDV distribution of a tagged VC is insensitive to the source traffic characteristics, the node location, and the hop count traversed in the network.
Woo-Yong CHOI Chi-Hyuck JUN Jae Joon SUH
We propose a new approach to the exact performance analysis of a shared buffer ATM multiplexer, which is loaded with mixed correlated and uncorrelated traffic sources. We obtain the joint steady-state probabilities of both states of the input process and the buffer using a one-dimensional Markov chain. From these probabilities we calculate the loss probabilities and the average delays of the correlated and the uncorrelated traffic sources.
Udo R. KRIEGER Valeri NAOUMOV Dietmar WAGNER
We analyze the behavior of a finite FIFO buffer in an advanced packet-switched network. It is modeled by a multi-class single-server delay-loss system Σχi MAP i/ PH /1/m. The stochastic process of the system is a finite Markov chain with QBD structure and two boundary sets. Our main result is a new representation of its steady-state vector as a linear combination of exactly two matrix-geometric components. Furthermore, we present an efficient algorithm for solving the corresponding matrix-quadratic equations. As a second key result, we state a new efficient recursive procedure for calculating the congestion characteristics of this delay-loss system.
Zhisheng NIU Yoshitaka TAKAHASHI Noboru ENDO
We propose a finite-capacity single-vacation model, with close-down/setup times and a Markovian arrival process (MAP), for SVC-based IP-over-ATM networks. This model considers the SVC processing overhead and the bursty nature of IP packet arrivals. Specifically, the setup time corresponds to the SVC setup time and the vacation time corresponds to the SVC release time, while the close-down time corresponds to the SVC timeout. The MAP is a versatile point process by which typical bursty arrival processes such as the IPP (interrupted Poisson process) and the MMPP (Markov modulated Poisson process) are treated as special cases. The approach we take here is the supplementary variable technique. Compared with the embedded Markov chain approach, it yields more directly the steady-state probabilities at an arbitrary instant and practical performance measures such as the packet loss probability, the packet delay, and the SVC setup rate. For the optimal design of SVC-based IP-over-ATM networks, we also propose and derive a new performance measure called the SVC utilization ratio. Numerical results show the sensitivity of these performance measures to the SVC timeout period as well as to the burstiness of the input process.
Yiwei Thomas HOU Henry H. -Y. TZENG Shivendra S. PANWAR Vijay P. KUMAR
The classical max-min policy has been suggested by the ATM Forum to support the available bit rate (ABR) service class. However, there are several drawbacks in adopting the max-min rate allocation policy. In particular, the max-min policy is not able to support the minimum cell rate (MCR) requirement and the peak cell rate (PCR) constraint for each ABR connection. Furthermore, the max-min policy does not offer flexible options for network providers wishing to establish a usage-based pricing criterion. In this paper, we present a generic weight-based rate allocation policy, which generalizes the classical max-min policy by supporting the MCR/PCR for each connection. Our rate allocation policy offers a flexible usage-based pricing strategy to network providers. A centralized algorithm is presented to compute network-wide bandwidth allocation to achieve this policy. Furthermore, a simple switch algorithm using ABR flow control protocol is developed with the aim of achieving our rate allocation policy in a distributed networking environment. The effectiveness of our distributed algorithm in a local area environment is substantiated by simulation results based on the benchmark network configurations suggested by the ATM Forum.
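A single-link version of max-min allocation generalized with MCR/PCR bounds can be sketched as a water-filling iteration; this is an illustration of the general idea, not the paper's weight-based, network-wide policy, and all names are assumptions:

```python
def mcr_pcr_fair_share(capacity, mcrs, pcrs):
    """Single-link max-min allocation with MCR/PCR bounds (sketch).

    Every connection first receives its MCR; the leftover capacity is
    then split equally among the still-active connections, and any
    connection whose share would exceed its PCR is pinned at the PCR,
    releasing the surplus to the rest (classic water-filling).
    """
    n = len(mcrs)
    alloc = list(mcrs)                  # every connection gets its MCR
    active = set(range(n))
    remaining = capacity - sum(mcrs)
    assert remaining >= 0, "link cannot honor the MCRs"
    while active and remaining > 1e-12:
        share = remaining / len(active)
        pinned = set()
        for i in active:
            headroom = pcrs[i] - alloc[i]
            if headroom <= share:       # hits PCR: pin, release surplus
                alloc[i] = pcrs[i]
                remaining -= headroom
                pinned.add(i)
        if not pinned:                  # nobody pinned: split equally
            for i in active:
                alloc[i] += share
            remaining = 0.0
        active -= pinned
    return alloc
```

For a link of 10 with MCRs (1, 1, 1) and PCRs (2, 10, 10), the first connection is pinned at its PCR of 2 and the other two split the rest equally, receiving 4 each. The paper's weighted generalization would replace the equal split by weight-proportional shares.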
In live multimedia applications with multiple videos, it is necessary to develop an efficient mechanism for multiplexing several MPEG video streams into a single stream and transmitting it over a network without wasting bandwidth. In this paper, we present an efficient multiplexing and traffic smoothing scheme for multiple variable bit rate (VBR) MPEG video streams in live video applications with finite buffer sizes. First, we describe the constraints imposed by the allowable delay bound for each elementary stream and by the multiplexer/receiver buffer sizes. Based on these constraints, a new multiplexing and traffic smoothing scheme is designed so as to maximally smooth the multiplexed transmission rate by exploiting temporal and spatial averaging effects while avoiding buffer overflow and underflow. Through computer experiments based on an MPEG-coded video trace of Star Wars, it is shown that the proposed scheme significantly reduces the peak rate, coefficient of variation, and effective bandwidth of the multiplexed transmission rate.
Gabor FODOR Andras RACZ Søren BLAABJERG
In this paper an ATM call-level model in which service classes with QoS guarantees (CBR/VBR) coexist with elastic (best-effort) services (ABR/UBR) is proposed, and a number of simulations have been carried out on three different network topologies. At the network level, elastic traffic gives rise to challenging new problems, since for a given elastic connection the bottleneck link determines the available bandwidth and thereby puts constraints on the bandwidth at other links. Bandwidth allocation at call arrival, as well as bandwidth reallocation at call departure, thus becomes, together with routing, an important issue for investigation. Two series of simulations have been carried out in which three different routing schemes were evaluated together with two bandwidth allocation algorithms. The results indicate that the choice of routing algorithm is load dependent and that over a large range the shortest path algorithm
Toshiaki TSUCHIYA Hiroshi SAITO
We introduce the concepts of conservative cell loss ratio (CLR) estimation and worst-case cell arrival patterns, and apply them to cell arrival patterns that conform to the generic cell rate algorithm (GCRA). We define new sets of cell arrival patterns which contain the worst-case patterns among conforming cell arrival patterns. Based on these sets, we propose an upper bound formula using the burst tolerance as well as the peak cell rate and sustainable cell rate, and develop a connection admission control method that guarantees that the cell loss ratio performance satisfies its objective.
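The GCRA conformance test that defines the arrival patterns considered above can be sketched in its standard virtual-scheduling form; the function and variable names are illustrative:

```python
def gcra_conforming(arrivals, increment, limit):
    """Virtual-scheduling form of the Generic Cell Rate Algorithm.

    GCRA(I, L): a cell arriving at time t conforms if it is not more
    than L early with respect to its theoretical arrival time (TAT).
    I is the increment (1/PCR or 1/SCR) and L is the tolerance (CDV
    tolerance or burst tolerance). Returns a conformance flag per cell;
    non-conforming cells do not advance the TAT.
    """
    tat = 0.0
    flags = []
    for t in arrivals:
        if t < tat - limit:
            flags.append(False)              # too early: non-conforming
        else:
            tat = max(tat, t) + increment    # schedule the next slot
            flags.append(True)
    return flags
```

With increment 10 and limit 5, arrivals at times 0, 10, 14, 16 give conformance flags True, True, False, True: the cell at t=14 is more than 5 time units ahead of its theoretical arrival time of 20.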
Kohei SHIOMOTO Qiyong BIAN Jonathan S. TURNER
In recent years, there has been a rapid growth in applications such as World Wide Web browsing, which are characterized by fairly short sessions that transfer substantial amounts of data. Conventional connection-oriented and datagram services are not ideally engineered to handle this kind of traffic. We present a new ATM service, called the Dynaflow service, in which virtual circuits are created on a burst-by-burst basis, and we evaluate key aspects of its performance. We compare Dynaflow to the Fast Reservation Protocol (FRP) and show that Dynaflow can achieve higher overall throughput due to the elimination of reservation delays and through the use of shared "burst-stores." We study the queueing performance of the Dynaflow switch and quantify the relationship between the loss ratio and the buffer size.
Doo Seop EOM Masashi SUGANO Masayuki MURATA Hideo MIYAHARA
In wireless ATM networks, the key issue is to guarantee various QoS (Quality of Service) requirements under the conditions of limited radio link bandwidth and error-prone channel characteristics. In this paper, we show a method of combining error correction schemes that is suitable for multimedia wireless ATM networks while keeping an efficient use of the limited bandwidth. We consider two levels of FEC, bit-level and cell-level, to guarantee the cell loss probabilities of real-time applications. By combining the two levels of FEC, various requirements on cell loss can be met. We then apply bit-level FEC and an ARQ protocol to data communication, which is tolerant of delay. Through analytical methods, the FEC overheads required to satisfy the various QoS requirements of CBR connections are examined. A mean delay analysis for the UBR service class is also presented. In numerical examples, we show how the combination scheme for guaranteeing various cell loss requirements affects the call blocking probability of the CBR service class and the delay of the UBR service class.
We present techniques to implement fair sharing of both link bandwidth and buffer space in a switch or router. Together they possess the following merits: 1) solving the counter-overflow problem; 2) avoiding the "credit" accumulation issue; and 3) integrating bandwidth allocation with buffer management. The simplicity of this method makes it a viable candidate for implementation in switches and routers.
Yibo ZHANG Weiping ZHAO Shunji ABE Shoichiro ASANO
This paper addresses the optimum routing problem for multipoint connections in large-scale networks. A number of algorithms for multipoint connection routing have been studied so far; most of them, however, assume the availability of complete network information. Here, we study the problem under the condition that only partial information is available to the routing nodes and that routing decisions are made in a distributed, cooperative manner. We consider the network as partitioned into clusters and propose a cluster-based routing approach for multipoint connections. Some basic principles for network clustering are discussed first. Next, the original multipoint routing problem is defined and divided into two types of subproblems. The globally optimum multicast tree can then be obtained asymptotically by solving the subproblems one after another iteratively. We propose an algorithm and evaluate it with computer simulations. By measuring the running time of the algorithm and the optimality of the resulting multicast tree, we analyze its convergence properties for varying network cluster sizes, multicast group sizes, and network sizes. The presented approach has two main characteristics: 1) it can yield asymptotically optimal solutions for the routing of multipoint connections, and 2) the routing decisions can be made in an environment where only partial information is available to the routing nodes.
Hisaya HADAMA Takashi SHIMIZU Masayoshi NABESHIMA Toshinori TSUBOI
This paper shows new techniques to construct a service network that realizes responsive large-size data transmission for widely distributed mass users. We set our service target as transferring megabyte-scale data from a server to a client within one second. ATM is recognized as a powerful technology with which to construct a wide area network infrastructure that supports multiple bandwidth services. Our fundamental principles in developing such a service network are as follows: a) The bandwidth sharing mechanism should be of the best-effort rather than the resource-reservation type, because only best-effort schemes remove bandwidth reservation/release overheads. b) A data transmission rate of more than 100 Mb/s should be supported throughout the data transfer. c) Data transfer should be completed within the round trip through the network (or a small multiple thereof); this is necessary to minimize the effect of transmission time in large-scale networks. d) The user-network interface should be simply defined to allow independent evolution of both network and terminal technologies. e) Congestion control must block the spread of congestion within the network. Based on these principles, we propose the "ATM superpacket network (ATM-SN)" as the service network to realize our target service. The key techniques are as follows. (1) Best-effort and cut-through transmission of superpackets whose length reaches ten megabytes. (2) Network nodes with large-capacity buffer memories that prevent superpacket collisions. (3) Superpacket admission control at network nodes to prevent cell overflow. (4) Superpacket-based congestion control. Our proposal assumes the existence of a high-quality ATM infrastructure that can provide a large bandwidth with high-quality DBR cell transmission (a cell loss ratio of less than 10^-7) and small bit error ratios (less than 10^-10). First, we detail our proposal of the ATM-SN.
Next, we propose a superpacket-based congestion control technique coupled with a simple Usage Parameter Control function. We then show the evaluation results of those key techniques to confirm the effectiveness of the superpacket network.
Sirirat TREETASANATAVORN Toshiyuki YOSHIDA Yoshinori SAKAI
In this paper, we propose an approach to intramedia synchronization control that uses end-to-end delay monitoring to estimate future delay in a delay compensation protocol. The value estimated by Kalman filtering at the presentation site is used for feedback control to adjust the retrieval schedule at the source according to the network conditions. The proposed approach is applicable to real-time retrieval applications in which 'tightness' of temporal synchronization is required. The retrieval schedule adjustment is achieved by two resynchronization mechanisms: retrieval offset adjustment and data unit skipping. The retrieval offset adjustment is performed along with a buffer level check in order to compensate for changes in delay jitter, while the data unit skipping control is performed to accelerate recovery from out-of-sync periods under severe conditions. Simulations are performed to verify the effectiveness of the proposed scheme. It is found that, with a limited buffer size and a tolerable latency in the initial presentation, using a more efficient delay estimator in the proposed resynchronization scheme improves the synchronization performance, particularly under critically congested network conditions. In the study, Kalman filtering is shown to perform better than the existing estimation methods, which use the previously measured jitter or its average value as the estimate.
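A one-dimensional Kalman filter of the kind used for delay estimation can be sketched as follows; the random-walk state model, the noise parameters, and all names are illustrative assumptions, not the paper's exact formulation:

```python
def kalman_delay_estimates(measurements, q, r):
    """Scalar Kalman filter for end-to-end delay estimation (sketch).

    State model: delay_k = delay_{k-1} + w, with process noise
    variance q (a random walk). Observation: z_k = delay_k + v, with
    measurement noise variance r. Returns the filtered estimate after
    each measurement; the last value serves as the one-step-ahead
    prediction used to adjust the retrieval schedule.
    """
    x, p = measurements[0], 1.0      # initial state and error variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: uncertainty grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # correct toward the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

A small ratio q/r makes the filter smooth out jitter (slow tracking), while a large ratio makes it follow delay changes quickly; this trade-off is exactly what distinguishes it from the simpler previous-jitter or running-average estimators mentioned above.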
This paper presents a finite-buffer M/G/1 queue with two classes of customers who are served by a combination of head-of-the-line priority and buffer reservation schemes. This combination gives each class of customers a high or low priority in terms of both delay and loss. The scheme is analyzed for the model in which one class of customers has priority over the other class with respect to both delay and loss. First, the steady-state joint probability distribution of the number of customers of each class in the buffer and the remaining service time is derived by a supplementary variable method. Second, the loss probability and mean waiting time for each class of customers are obtained from this probability distribution. Finally, the combination of head-of-the-line priority and buffer reservation schemes is numerically compared with other buffer management schemes in terms of admissible offered load to show its effectiveness under differing QoS requirements.
In the future, more and more network providers will be established through the introduction of an open telecommunications market. It will then be necessary to guarantee fair competition between these network providers. In this paper, a negotiation protocol for connection establishment is proposed. This negotiation protocol is based on the concept of open, competitive bidding and can guarantee fair competition between the network providers. In this negotiation protocol, each network provider
Seiichi MUROYAMA Mikio YAMASAKI Kazuhiko TAKENO Naoki KATO Ichiro YAMADA
This paper describes the design concept and characteristics of a power supply for optical network units (ONUs) in Fiber To The Home (FTTH) systems. The powering architectures of local powering, network powering, and power hub powering are compared in terms of cost and maintainability. A local powering architecture is selected for the ONU power supply because it is the most cost-effective overall. The local power supply is mainly composed of a rectifier, DC-DC converters, a ringer, and batteries. A battery deterioration test function is important for the local power supply because battery lifetime varies depending on ambient temperature, discharge history, and charging conditions, and it is shorter than the lifetime of the other electrical components used in the ONU. Supplying power from alternative batteries is also necessary because the capacity of the batteries installed in the power supply is limited. These functions and the electrical characteristics are verified using an experimental power supply with Ni-Cd batteries.
This paper introduces an error controlled decision feedback (ECDF) multiuser receiver, which integrates a successive canceller with linear block channel coding to mitigate decision error propagation. In particular, it uses a switching successive cancellation feedback loop, which can be open if excessive bit errors occur to prevent decision error propagation. The results of computer simulation show that the ECDF receiver possesses advantages in terms of near-far resistance and BER over many reported schemes.
In this paper, the effects of the speed on the user
Hidetoshi KAYAMA Takeo ICHIKAWA Hitoshi TAKANASHI Masahiro MORIKURA Toshiaki TANAKA
This paper proposes a new MAC protocol and physical channel control schemes for a TDMA-TDD multi-slot packet channel. The goal of this study is to support both circuit-switched and packet-switched communications on the same resources and to enable high-speed packet transmission using a multi-slot packet channel. The proposed channel control schemes take three points into account: 1) effective sharing of time slots and frequencies with minimum impact on circuit communications, 2) compatibility with the existing access protocol and equipment, and 3) dynamic allocation of uplink and downlink slots. As for the MAC protocol, we adopt a Block Reservation Scheme (BRS) and an adaptive access control scheme. In addition, to overcome the inherent disadvantage of TDD channels, packet scheduling and access randomizing controls are newly proposed in this paper. The results of throughput and delay evaluations confirm that downlink capacity can be drastically enhanced by the dynamic allocation of uplink and downlink slots, while corruption under heavy traffic loads is prevented by applying the adaptive traffic load control scheme.
SooKun KWON HyoungGoo JEON KyungRok CHO
A novel channel assignment scheme for DS-CDMA cellular systems is proposed that avoids handoff interruptions for delay-sensitive services by increasing the probability that their handoffs are soft handoffs. For that purpose, delay-sensitive services are given priority over delay-insensitive ones in the use of the frequency channels served by all cells.