Yuli ZHA Pengshuai CUI Yuxiang HU Julong LAN Yu WANG
Named Data Networking (NDN) divides and identifies content by name and uses content names for routing and addressing. However, traditional network devices built around the TCP/IP protocol stack and location-centric communication cannot effectively support NDN functions such as in-network storage and multicast distribution. NDN routers designed for specific hardware platforms offer limited performance and are difficult to deploy on a large scale, so the NDN network can only be implemented in software. With the development of data plane languages such as Programmable Protocol-Independent Packet Processors (P4), practical deployment of NDN becomes achievable. To ensure efficient data distribution in the network, this paper proposes a protocol-independent bitwise multicast method. The P4 language is used to define a bit vector in the packet's intrinsic metadata field that marks the ports from which requests arrived. When the requested content is returned, the routing node checks which ports requested the content according to the bit vector recorded in a register and multicasts the Data packet accordingly. The experimental results show that the bitwise multicast technique eliminates the flow-table distribution required by dynamic multicast group technology and reduces content response delay by 57% compared to unicast transmission.
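As a rough illustration of the bitwise multicast idea above (a hypothetical Python sketch, not the paper's P4 program; the class and method names are assumptions), a node records each requesting port as one bit of a per-name vector and, when the Data packet returns, replicates it to exactly the ports whose bits are set:

```python
# Hypothetical sketch of bitwise multicast state kept per content name.
class BitwiseMulticastNode:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.pending = {}  # content name -> bit vector of requesting ports

    def on_interest(self, name: str, in_port: int) -> bool:
        """Record the requesting port; return True if the request must still be forwarded upstream."""
        first_request = name not in self.pending
        self.pending[name] = self.pending.get(name, 0) | (1 << in_port)
        return first_request  # later requests are aggregated, not re-forwarded

    def on_data(self, name: str) -> list[int]:
        """Return the ports to which the Data packet is replicated, then clear the state."""
        vector = self.pending.pop(name, 0)
        return [p for p in range(self.num_ports) if vector & (1 << p)]

node = BitwiseMulticastNode(num_ports=4)
for port in (0, 2, 3):
    node.on_interest("/video/segment1", port)
print(node.on_data("/video/segment1"))  # -> [0, 2, 3]
```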
Tatsuyuki MATSUSHITA Shinji YAMANAKA Fangming ZHAO
Peer-to-peer (P2P) networks have attracted increasing attention in the distribution of large-volume and frequently accessed content. In this paper, we mainly consider the problem of key leakage in secure P2P content distribution. In secure content distribution, content is encrypted so that only legitimate users can access it. Usually, users (peers) cannot be fully trusted in a P2P network because malicious ones might leak their decryption keys. If decryption keys are redistributed, copyright holders may incur great losses caused by free riders who access content without purchasing it. To decrease the damage caused by key leakage, individualization of encrypted content is necessary, meaning that a different (set of) decryption key(s) is required for each user to access the content. In this paper, we propose a P2P content distribution scheme resilient to key leakage that achieves this individualization of encrypted content. We show the feasibility of our scheme by conducting a large-scale P2P experiment in a real network.
Shigeyuki YAMASHITA Tomohiko YAGYU Miki YAMAMOTO
Because of the popularity of rich content such as video files, the amount of traffic on the Internet continues to grow every year. Not only is the overall traffic increasing, but its temporal fluctuations are also growing, and the differences between peak and off-peak traffic volumes are becoming very large. Consequently, efficient use of link bandwidth is becoming more challenging. In this paper, we propose a new system for content distribution: storage-aware routing (SAR). With SAR, routers with large storage capacities exploit underutilized links. Our performance evaluations show that SAR can smooth fluctuations in link utilization.
Akihiro FUJIMOTO Yusuke HIROTA Hideki TODE Koso MURAKAMI
To establish seamless and highly robust content distribution, we have proposed the concept of Inter-Stream Forward Error Correction (FEC), an efficient data recovery method that leverages several video streams. Our previous research showed that Inter-Stream FEC has significantly greater recovery capability than the conventional FEC method under ideal modeling conditions and assumptions. In this paper, we design the Inter-Stream FEC architecture in detail with a view to practical application. The functional requirements for practical feasibility, such as simplicity and flexibility, are investigated. The investigation also clarifies a challenging problem: the increase in processing delay created by the asynchronous arrival of packets. To solve this problem, we propose a pragmatic parity stream construction method. We implement and experimentally evaluate a prototype system with Inter-Stream FEC. The results demonstrate that the proposed system achieves high recovery performance in our experimental environment.
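The basic idea of a parity stream spanning several video streams can be sketched as follows (an assumed toy model with equal-length, time-aligned packets, not the proposed construction method): the parity packet is the XOR of one packet from each stream, so a single loss in any one stream can be recovered from the parity plus the surviving packets.

```python
# Toy model: one parity packet protects one time-aligned packet per stream.
def xor_parity(packets):
    """XOR a list of equal-length packets into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

streams = [b"\x10\x20", b"\x01\x02", b"\xff\x0f"]  # illustrative packets from three streams
parity = xor_parity(streams)
# Recover a lost packet (say, stream 1) from the parity and the surviving streams.
recovered = xor_parity([parity, streams[0], streams[2]])
assert recovered == streams[1]
```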
We study the use of network coding to speed up content distribution in peer-to-peer (P2P) networks. Our goal is to identify the underlying reason for network coding's improved performance in P2P content distribution and to optimize the resource consumption of network coding. We observe analytically and experimentally that in pure P2P networks, a considerable amount of data is sent multiple times from one peer to another when there are multiple paths connecting those two peers. Network coding, when applied at upstream peers, eliminates information duplication on paths to downstream peers, which results in more efficient content distribution. Based on that insight, we propose a network coder placement algorithm that achieves distribution time comparable to full network coding while substantially reducing the number of encoders compared to a pure network coding solution in which all peers must encode. Our placement method puts encoders at the network positions where they eliminate the most information duplication, thus effectively shortening distribution time with only a fraction of the encoders.
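The duplication argument can be illustrated with a toy example (not the proposed placement algorithm): with two disjoint paths from an upstream peer to the same downstream peer, sending one original block on one path and an XOR-coded block on the other guarantees that both received packets carry new information, so the downstream peer recovers both blocks.

```python
# Toy illustration of coding at an upstream peer removing path duplication.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

b1, b2 = b"AAAA", b"BBBB"           # two source blocks at the upstream peer
path1_packet = b1                    # sent unmodified on the first path
path2_packet = xor_bytes(b1, b2)     # coded packet sent on the second path

# The downstream peer decodes b2 from the two packets it received.
recovered_b2 = xor_bytes(path1_packet, path2_packet)
assert recovered_b2 == b2
```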
Hiroshi YAMAMOTO Katsuyuki YAMAZAKI
With the widespread availability of high-speed network connections and high-performance mobile and sensor terminals, new interactive services based on real-time content have become available over the Internet. In these services, end nodes (e.g., smartphones, sensors) dispersed over the Internet generate real-time content (e.g., live video, sensor data about human activity), and that content is used to support many kinds of human activity in the real world. For such services, we propose a new decentralized content distribution system that can accommodate a large number of content distributions and minimize the end-to-end streaming delay between the content publisher and the subscribers. To satisfy these requirements, the proposed system employs two distributed resource selection methods. The first, distributed hash table (DHT)-based content management, allows the system to decide and locate the server managing each content distribution efficiently and in a completely decentralized manner. The second, location-aware server selection, quickly selects appropriate servers that distribute the streamed content to all subscribers in real time. This paper evaluates the proposed resource selection methods through realistic computer simulation and shows that the system scales to a large distributed deployment attracting a very large number of users and locates content in real time without degrading the end-to-end streaming delay.
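A rough sketch of the two selection methods, under assumed interfaces (the server names, latency values, and the use of a single consistent-hash ring stand in for the actual DHT), might look like this: a content name is hashed onto a ring to find the managing server, and the distribution server with the smallest measured delay is chosen for streaming.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """DHT-style mapping from a content name to the server managing its distribution."""
    def __init__(self, servers):
        self._ring = sorted((_hash(s), s) for s in servers)
        self._keys = [h for h, _ in self._ring]

    def manager_of(self, content_name: str) -> str:
        idx = bisect.bisect(self._keys, _hash(content_name)) % len(self._ring)
        return self._ring[idx][1]

def location_aware_select(candidates, rtt_ms):
    """Pick the candidate distribution server with the smallest measured delay."""
    return min(candidates, key=lambda s: rtt_ms[s])

ring = ConsistentHashRing(["server-a", "server-b", "server-c"])
manager = ring.manager_of("/live/camera-17")
rtt_ms = {"server-a": 42.0, "server-b": 8.5, "server-c": 23.1}
print(manager, location_aware_select(["server-a", "server-b", "server-c"], rtt_ms))
```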
Noriaki KAMIYAMA Ryoichi KAWAHARA Tatsuya MORI Haruhisa HASEGAWA
In Video on Demand (VoD) services, the demand for content items changes greatly over the course of the day. Because service providers are required to maintain a stable service during peak hours, they need to design system resources on the basis of peak demand, so reducing the server load at peak times is important. To reduce the peak load of a content server, we propose multicasting popular content items to all users independently of actual requests, in addition to providing on-demand unicast delivery. With this solution alone, however, the hit ratio of pre-distributed content items is small, and large-capacity storage is required at each set-top box (STB). We can cope with this problem by limiting the number of pre-distributed content items or by clustering users based on their viewing histories. We evaluated the effect of these techniques using actual VoD access log data, and we also evaluated the total cost of the multicast pre-distribution VoD system with the two proposed techniques.
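The effect of limiting the number of pre-distributed items can be sketched with a small, hypothetical calculation (the request counts and the helper name are illustrative, not the authors' evaluation code): pre-distribute only the K most requested items and measure the fraction of requests the STB can then serve locally instead of from the server.

```python
def predistribution_hit_ratio(request_counts, k):
    """request_counts: dict mapping content id -> number of requests.
    Returns the fraction of requests served by the K pre-distributed items."""
    total = sum(request_counts.values())
    top_k = sorted(request_counts, key=request_counts.get, reverse=True)[:k]
    return sum(request_counts[c] for c in top_k) / total

counts = {"movie-a": 500, "movie-b": 300, "movie-c": 120, "movie-d": 80}
print(predistribution_hit_ratio(counts, k=2))  # -> 0.8
```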
As one line of innovative research that heavily depends on network virtualization for its realization and Internet-scale deployment, we propose an approach to utilizing user resources in information-centric networking (ICN). We aim to fully benefit from in-network caching, one attractive feature of ICN, by indirectly expanding the in-network cache with user resources. To achieve this, we focus in this paper on how to encourage users to contribute their resources in ICN. Through simulations, we examine the feasibility of our approach and the effect of user participation on content distribution performance in ICN. We also briefly discuss how network virtualization techniques can be used for this research in terms of its evaluation and deployment.
In this letter, we argue that user resources will still be useful in the information-centric network (ICN). From this point of view, we first examine what P2P utilizing user resources would look like in ICN. Then, we identify challenging research issues in utilizing user resources in ICN.
Yi WAN Takuya ASAKA Tatsuro TAKAHASHI
Searching mechanisms employed in unstructured overlay networks typically hit multiple peers for the desired content. We propose a simple method that raises the hit rate of unpopular content and balances load by choosing, when multiple candidates exist, the peer holding the fewest content items as the provider.
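A minimal sketch of this selection rule, under an assumed candidate format (not the authors' implementation):

```python
def select_provider(candidates):
    """candidates: list of (peer_id, number_of_contents_held) tuples.
    Return the peer holding the fewest content items."""
    return min(candidates, key=lambda peer: peer[1])[0]

hits = [("peer-3", 120), ("peer-7", 15), ("peer-9", 64)]
print(select_provider(hits))  # -> "peer-7", the least-loaded holder
```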
Go OHTAKE Kazuto OGAWA Goichiro HANAOKA Hideki IMAI
There has been a wide-ranging discussion on the issue of content copyright protection in digital content distribution systems. Fiat and Tassa proposed the framework of dynamic traitor tracing. Their framework requires computations to be carried out dynamically according to the real-time responses of the pirate, and it presumes real-time observation of content redistribution. Therefore, it cannot simply be used in applications where such an assumption does not hold. In this paper, we propose a new scheme that provides the advantages of dynamic traitor tracing schemes while overcoming their problems.
Kazuto OGAWA Goichiro HANAOKA Hideki IMAI
In current broadcasting and Internet content distribution systems, content providers distribute decoders (set-top boxes, STBs) that contain secret keys for content decryption prior to content distribution. A content provider sends encrypted content to each user, who then decodes it with his or her STB. While users can receive services at home if they have an STB, it is hard for them to receive services away from home. A system that allowed users to carry their secret keys around would improve usability, but it would require countermeasures against secret key exposure. In this paper, we propose such an extended broadcasting system using tokens and group signatures. Content providers can control the number of keys that users may use outside their homes. The system enables broadcasters to minimize the damage caused by group signature key exposure and enables users to receive services away from home.
In a broadband-ubiquitous environment, digital content creation and distribution will be a key factor in activating new industries. This paper first describes the impact of a broadband-ubiquitous environment on digital content creation and distribution, and then proposes new models for digital content creation and distribution businesses. In such an environment, the key is the creation of moving picture content; the paper therefore describes a system that allows non-CG experts to make CG movies easily.
In this paper, we propose a multicast technique that reduces the required network bandwidth by a factor of n, where n is the number of Head-End Nodes (HENs) requesting the same video, by merging adjacent multicasts. Allowing new clients to join an existing multicast immediately through patching improves the efficiency of the multicast and offers service without any initial latency. A client might have to download data through two channels simultaneously, one for the multicast and the other for patching. Each video stream is divided into blocks whose size equals the multicast grouping interval Im. The blocks are then evenly distributed across different HENs according to their popularity and the order of requests. Only when the playback time exceeds the amount of cached video data does the server open a new multicast channel. Since the multicast interval can be dynamically expanded according to the popularity of a video, the server's workload and the network bandwidth are reduced. For cache replacement we adopt LFU (Least Frequently Used) for popular videos and LRU (Least Recently Used) for unpopular videos, and we evict the first block of a video last to reduce end-to-end latency. We perform simulations to compare the technique's performance with that of conventional multicast, and the results confirm that the proposed multicast technique offers substantially better performance.
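One possible way to express the combined replacement policy is sketched below (the popularity threshold, the block metadata fields, and the choice to evict unpopular-video blocks before popular ones are assumptions for illustration): popular videos are ranked by access frequency (LFU), unpopular ones by recency (LRU), and the first block of any video is evicted last.

```python
def choose_victim(blocks, popularity_threshold=0.5):
    """blocks: list of dicts with keys
    'video', 'index', 'popularity', 'access_count', 'last_access'."""
    def eviction_key(b):
        keep_first = 1 if b["index"] == 0 else 0        # first blocks are evicted last
        popular = b["popularity"] >= popularity_threshold
        # Evict unpopular-video blocks first; rank them by recency (LRU)
        # and popular-video blocks by access frequency (LFU).
        metric = b["access_count"] if popular else b["last_access"]
        return (keep_first, popular, metric)
    return min(blocks, key=eviction_key)

cache = [
    {"video": "A", "index": 0, "popularity": 0.9, "access_count": 50, "last_access": 100},
    {"video": "A", "index": 3, "popularity": 0.9, "access_count": 12, "last_access": 90},
    {"video": "B", "index": 2, "popularity": 0.1, "access_count": 3, "last_access": 20},
]
print(choose_victim(cache))  # evicts video B's block 2 (unpopular, least recently used)
```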
Hiroyuki EBARA Yasutomo ABE Daisuke IKEDA Tomoya TSUTSUI Kazuya SAKAI Akiko NAKANIWA Hiromi OKADA
Content Distribution Networks (CDNs) are highly advanced network architectures on the Internet, providing low latency, scalability, fault tolerance, and load balancing. One of the most important issues in realizing these advantages is dynamic content allocation to deal with temporal load fluctuation, which mirrors content files in order to spread user accesses. Since user accesses to content files change over time, the content files need to be reallocated appropriately. In this paper, we propose a cost-effective content migration method for CDNs called the Step-by-Step (SxS) Migration Algorithm, which can dynamically relocate content files while reducing transmission cost. We show that our method maintains sufficient performance while reducing cost in comparison with the conventional shortest-path migration method. Furthermore, we present six content life cycle models to capture realistic traffic patterns in our simulation experiments. Finally, we evaluate the effectiveness of the SxS Migration Algorithm for dynamic content reconfiguration over time.
SeongOun HWANG KiSong YOON KwangHyung LEE
The widespread use of the Internet raises issues regarding intellectual property: after content is downloaded, no further protection is provided for it. DRM (Digital Rights Management) technologies were developed to ensure secure management of digital processes and information. In this paper, we present a multilevel content distribution model together with its formal description.
In content distribution networks (CDNs), content routing, which directs user requests to an appropriate server so as to reduce the latency of obtaining content, is one of the most important technical issues. Several kinds of information, e.g., server load or network delay, can be used for content routing. Network support, e.g., an active network, enables a router to select an appropriate server using such information. In this paper, we investigate a server selection policy for a network-support approach from the viewpoint of which information should be used for effective server selection. We propose a server selection policy using RTT information measured at a router. Simulation results show that our proposed server selection policy selects a good server both when server latency and when network delay is the dominant element of user response time. Furthermore, we investigate the placement of network-support routers that yields good performance for our proposed scheme.
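A minimal sketch of such a router-side policy, assuming a smoothed per-server RTT estimate (the class name and EWMA weight are illustrative, not the paper's implementation): each request is directed to the candidate server with the smallest current estimate, and because the RTT is measured end to end at the router, it reflects both network delay and server response time.

```python
class RttServerSelector:
    def __init__(self, servers, alpha=0.125):
        self.alpha = alpha                       # EWMA weight, as in TCP's SRTT
        self.srtt = {s: None for s in servers}   # smoothed RTT per server (ms)

    def record_rtt(self, server, sample_ms):
        """Update the smoothed RTT with a new measurement taken at the router."""
        old = self.srtt[server]
        self.srtt[server] = sample_ms if old is None else (1 - self.alpha) * old + self.alpha * sample_ms

    def select(self):
        """Direct the next request to the server with the smallest smoothed RTT."""
        measured = {s: r for s, r in self.srtt.items() if r is not None}
        return min(measured, key=measured.get)

sel = RttServerSelector(["s1", "s2"])
sel.record_rtt("s1", 40.0)
sel.record_rtt("s2", 15.0)
sel.record_rtt("s2", 25.0)
print(sel.select())  # -> "s2"
```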
Natsume MATSUZAKI Toshihisa NAKANO Tsutomu MATSUMOTO
This paper proposes a flexible tree-based key management framework for a terminal that connects to multiple content distribution systems (called CDSs in this paper). In an existing tree-based key management scheme, a terminal keeps previously distributed node keys that are used for decrypting content from a CDS. In our proposal, the terminal can calculate its node keys for a selected CDS as the need arises, using the "public bulletin board" of that CDS. The public bulletin board is generated by the management center of each individual CDS, based on a tree structure that the center determines at its convenience. After the terminal calculates its node keys, it can obtain content from the CDS using them.