Keitaro NAKASAI Shin KOMEDA Masateru TSUNODA Masayuki KASHIMA
To automatically measure the mental workload of developers, existing studies have used biometric measures such as brain waves and heart rate. However, developers are often required to wear certain devices for these measurements and can therefore be physically burdened. In this study, we evaluated the feasibility of non-contact biometric measures based on the nasal skin temperature (NST). In the experiment, the proposed biometric measures were more accurate than non-biometric measures.
With network function virtualization technology, a middlebox can be deployed as software on commercial servers rather than on dedicated physical servers. A backup server is necessary to ensure the normal operation of the middlebox. The workload can affect the failure rate of the backup server, but the impact of a workload-dependent failure rate on backup server allocation in terms of unavailability has not been extensively studied. This paper proposes a shared backup allocation model for middleboxes that considers the workload-dependent failure rate of the backup server. Backup resources on a backup server can be assigned to multiple functions. We observe that a function has four possible states and analyze the state transitions within the system. Using a queueing approach, we compute the probability of each function being available or unavailable for a given assignment, and obtain the unavailability of each function. The proposed model finds an assignment that minimizes the maximum unavailability among functions. We develop a simulated annealing algorithm to solve this problem. We evaluate and compare the performance of the proposed and baseline models under different experimental conditions. The results show that, compared to the baseline model, the proposed model reduces the maximum unavailability by an average of 29% in the examined cases.
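As an illustration of the search procedure mentioned above, the following is a minimal simulated-annealing sketch, assuming a hypothetical unavailability() evaluator (a toy load-based stand-in for the paper's queueing-based computation over the four function states).

```python
import math
import random

def unavailability(assignment, num_servers):
    """Hypothetical placeholder: return per-function unavailability for an
    assignment (function index -> backup server index).  In the paper this
    value comes from a queueing analysis of the four function states."""
    loads = [assignment.count(s) for s in range(num_servers)]
    # Toy model: unavailability grows with the load on the assigned server.
    return [0.01 * (1 + loads[s]) for s in assignment]

def max_unavailability(assignment, num_servers):
    return max(unavailability(assignment, num_servers))

def anneal(num_functions, num_servers, t0=1.0, cooling=0.995, steps=20000):
    """Search for an assignment minimizing the maximum unavailability."""
    current = [random.randrange(num_servers) for _ in range(num_functions)]
    best = list(current)
    t = t0
    for _ in range(steps):
        cand = list(current)
        cand[random.randrange(num_functions)] = random.randrange(num_servers)
        delta = (max_unavailability(cand, num_servers)
                 - max_unavailability(current, num_servers))
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if (max_unavailability(current, num_servers)
                    < max_unavailability(best, num_servers)):
                best = list(current)
        t *= cooling
    return best

print(anneal(num_functions=10, num_servers=3))
```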
Kazuichi OE Mitsuru SATO Takeshi NANRI
The response times of solid state drives (SSDs) have decreased dramatically due to the growing use of non-volatile memory express (NVMe) devices, which respond in less than 100 microseconds on average. The response times of all-flash-array systems have also decreased dramatically through the use of NVMe SSDs. However, some applications, particularly virtual desktop infrastructure and in-memory database systems, require storage systems with even shorter response times. Their workloads tend to contain many input-output (IO) concentrations, which are aggregations of IO accesses. These concentrations target narrow regions of the storage volume and can continue for up to an hour. The narrow regions occupy a few percent of the logical unit number capacity, receive most of the IO accesses, and appear at unpredictable logical block addresses. To drastically reduce response times for such workloads, we developed an automated tiered storage system called “automated tiered storage with fast memory and slow flash storage” (ATSMF), in which the data in targeted regions are migrated between storage devices depending on the predicted remaining duration of the concentration. The assumed environment is a server with non-volatile memory and directly attached SSDs, with the user applications executed on the server, as this arrangement reduces the average response time. Our system predicts the effect of migration by using previously monitored values of the increase in response time during migration and the change in response time after migration. These values are consistent for each type of workload if the system is built using both non-volatile memory and SSDs. In particular, the system predicts the remaining duration of an IO concentration, calculates the expected response-time increase during migration and the expected response-time decrease after migration, and migrates the data in the targeted regions if the total expected response-time decrease after migration exceeds the total expected response-time increase during migration. Experimental results indicate that ATSMF is at least 20% faster than flash storage alone and that its memory access ratio is more than 50%.
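The migrate-only-if-worthwhile rule described above can be sketched as follows; all parameter names and the example values are illustrative rather than taken from ATSMF's implementation.

```python
def should_migrate(predicted_remaining_sec,
                   rt_increase_during_migration_us,
                   migration_duration_sec,
                   rt_decrease_after_migration_us,
                   io_rate_per_sec):
    """Sketch of the migration decision: migrate an IO-concentration region
    only if the total expected response-time saving after migration outweighs
    the total expected response-time penalty paid while migrating."""
    penalty = (rt_increase_during_migration_us
               * io_rate_per_sec * migration_duration_sec)
    benefit_window = max(predicted_remaining_sec - migration_duration_sec, 0.0)
    saving = rt_decrease_after_migration_us * io_rate_per_sec * benefit_window
    return saving > penalty

# Example: a concentration predicted to last another 600 s.
print(should_migrate(600, 30.0, 20.0, 50.0, 5000))
```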
Dai SUZUKI Satoshi IMAI Toru KATAGIRI
Network Functions Virtualization (NFV) is expected to provide network systems that offer significantly lower cost and greater flexibility to network service providers and their users. Unfortunately, it is extremely difficult to implement Virtualized Network Functions (VNFs) that can equal the performance of Physical Network Functions. To realize NFV systems with adequate performance, it is critical to accurately grasp the VNF workload. In this paper, we focus on the virtual firewall as a representative VNF. The workload of the virtual firewall is mostly determined by firewall rule processing and the Access Control List (ACL) configuration. Through preliminary experiments, we first reveal the major factors influencing the workload of the virtual firewall and the issues with monitoring CPU load, the traditional way of understanding the workload of virtual firewalls. We then propose a new workload metric for the virtual firewall, derived from mathematical models of the firewall workload that consider the packet processing in each rule and the ACL configuration. Furthermore, we show the effectiveness of the proposed workload metric through various experiments.
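As a rough illustration of a rule-processing-based workload metric (not the paper's actual model), the following sketch counts how many ACL rules each packet traverses under first-match semantics and weights that count by a per-rule processing cost.

```python
def rules_evaluated(packet, acl):
    """Count how many ACL rules are evaluated for one packet
    (first-match semantics); this per-packet rule-traversal depth is a
    major driver of a firewall's processing work."""
    for depth, rule in enumerate(acl, start=1):
        if rule["match"](packet):
            return depth
    return len(acl)          # no rule matched: the whole list was scanned

def workload_metric(packets, acl, cost_per_rule=1.0):
    """Illustrative workload metric: total rule evaluations weighted by a
    per-rule processing cost.  The paper's metric is derived from
    mathematical models of per-rule packet processing and the ACL
    configuration; this toy version only captures the same intuition."""
    return cost_per_rule * sum(rules_evaluated(p, acl) for p in packets)

acl = [{"match": lambda p, port=port: p["dport"] == port} for port in (22, 80, 443)]
packets = [{"dport": 443}, {"dport": 80}, {"dport": 8080}]
print(workload_metric(packets, acl))   # 3 + 2 + 3 = 8
```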
Researchers have already attributed a certain amount of variability and “drift” in an individual's handwriting pattern to mental workload, but this phenomenon has not been explored adequately. In particular, there is still no automated method for accurately predicting mental workload from handwriting features. To address this problem, we first conducted an experiment to collect handwriting data under different mental workload conditions. Then, a predictive model (called SVM-GA) based on two-level handwriting features (i.e., sentence- and stroke-level) was created by combining support vector machines and genetic algorithms. The results show that (1) the SVM-GA model can differentiate three mental workload conditions with accuracies of 87.36% and 82.34% for the child and adult data sets, respectively, and (2) children exhibit different changes in handwriting features from adults when experiencing mental workload.
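A minimal sketch of the SVM-plus-GA idea, using scikit-learn and a simple genetic algorithm for feature selection; the data, feature count, and GA settings below are placeholders rather than the paper's handwriting features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data: rows = handwriting samples, columns = sentence- and
# stroke-level features; y = mental-workload condition (0, 1, 2).
X = rng.normal(size=(120, 20))
y = rng.integers(0, 3, size=120)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_feature_selection(n_features, pop_size=20, generations=30, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]             # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut          # mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

mask, acc = ga_feature_selection(X.shape[1])
print("selected features:", mask.nonzero()[0], "cv accuracy:", round(acc, 3))
```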
Min SHAO Min S. KIM Victor C. VALGENTI Jungkeun PARK
Network Intrusion Detection Systems (NIDS) are deployed to protect computer networks from malicious attacks. Proper evaluation of NIDS requires more scrutiny than the evaluation of general network appliances. This evaluation is commonly performed by sending pre-generated traffic through the NIDS. Unfortunately, such traffic is often limited in diversity, resulting in evaluations incapable of exercising the complex data structures employed by NIDS. More sophisticated methods that generate workload directly from NIDS rules consume excessive resources and are incapable of running in real time. This work proposes a novel approach to real-time workload generation for NIDS evaluation that improves evaluation diversity while maintaining much higher throughput. Specifically, it proposes a generative grammar, an optimized version of a context-free grammar derived from the set of strings matching the given NIDS rule database. The grammar is memory-efficient and computationally light when generating workload. Experiments demonstrate that grammar-generated workloads exert an order of magnitude more effort on the target NIDS. Moreover, this improved diversity comes at a much smaller memory cost, and generation is four times faster than current approaches.
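A toy illustration of grammar-based workload generation: strings are produced by randomly expanding a context-free grammar. The grammar below is invented for the example; the paper derives its optimized grammar from the strings matched by the NIDS rule database.

```python
import random

# Toy context-free grammar: nonterminals map to lists of alternative
# right-hand sides; plain strings are terminals.
GRAMMAR = {
    "REQUEST": [["METHOD", " ", "PATH", " HTTP/1.1\r\n"]],
    "METHOD": [["GET"], ["POST"], ["HEAD"]],
    "PATH": [["/"], ["/", "SEGMENT"], ["/", "SEGMENT", "/", "SEGMENT"]],
    "SEGMENT": [["index.html"], ["cgi-bin"], ["admin"], ["..%2f..%2fetc"]],
}

def generate(symbol="REQUEST"):
    """Expand a nonterminal by repeatedly picking a random production."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return "".join(generate(s) for s in production)

for _ in range(3):
    print(repr(generate()))
```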
Takaaki DEGUCHI Yoshiaki TANIGUCHI Go HASEGAWA Yutaka NAKAMURA Norimichi UKITA Kazuhiro MATSUDA Morito MATSUOKA
In this paper, we propose a workload assignment policy for reducing the power consumption of air conditioners in data centers. In the proposed policy, to reduce air conditioner power consumption by raising the temperature set points of the air conditioners, the temperatures of all server back-planes are equalized by moving workload from the servers with the highest temperatures to the servers with the lowest temperatures. To evaluate the proposed policy, we use a computational fluid dynamics simulator to obtain airflow and air temperature in data centers, and an air conditioner model based on experimental results from an actual data center. Through evaluation, we show that the air conditioners' power consumption is reduced by 10.4% in a conventional data center. In addition, in a tandem data center proposed by our research group, the air conditioners' power consumption is reduced by 53%, and the total power consumption of the whole data center is shown to be reduced by 23% by reusing the exhaust heat from the servers.
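A minimal sketch of the assignment policy described above, moving workload from the hottest server back-plane to the coolest one; the temperature and load values, and the linear thermal response, are hypothetical (a real controller would rely on measured temperatures and the CFD and air-conditioner models).

```python
def equalize_backplane_temperatures(servers, step=1, rounds=100):
    """Repeatedly move a unit of workload from the server with the hottest
    back-plane to the server with the coolest one.  `servers` maps server
    id -> dict with back-plane temperature and current workload (toy units)."""
    for _ in range(rounds):
        hottest = max(servers, key=lambda s: servers[s]["temp"])
        coolest = min(servers, key=lambda s: servers[s]["temp"])
        if servers[hottest]["temp"] - servers[coolest]["temp"] < 1.0:
            break                                   # temperatures equalized
        if servers[hottest]["load"] < step:
            break                                   # nothing left to move
        servers[hottest]["load"] -= step
        servers[coolest]["load"] += step
        # Toy thermal response: temperature follows workload linearly.
        servers[hottest]["temp"] -= 0.5 * step
        servers[coolest]["temp"] += 0.5 * step
    return servers

servers = {"s1": {"temp": 42.0, "load": 30},
           "s2": {"temp": 35.0, "load": 10},
           "s3": {"temp": 30.0, "load": 5}}
print(equalize_backplane_temperatures(servers))
```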
Dafei HUANG Changqing XUN Nan WU Mei WEN Chunyuan ZHANG Xing CAI Qianming YANG
Aiming to ease parallel programming for heterogeneous architectures, we propose and implement a high-level OpenCL runtime that conceptually merges multiple heterogeneous hardware devices into one virtual heterogeneous compute device (VHCD). Automated workload distribution among the devices is based on offline profiling, together with new programming directives that define the device-independent data access range per work-group. Therefore, an OpenCL program originally written for a single compute device can, after inserting a small number of programming directives, run efficiently on a platform consisting of heterogeneous compute devices. Performance is ensured by introducing the technique of virtual cache management, which minimizes the amount of host-device data transfer. Our new OpenCL runtime is evaluated with a diverse set of OpenCL benchmarks, demonstrating good performance on various configurations of a heterogeneous system.
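A simple sketch of profiling-driven workload distribution: the work-group range is split among devices in proportion to throughput measured offline. Device names and throughput figures are hypothetical, and the real runtime additionally relies on the data-access-range directives and virtual cache management.

```python
def split_ndrange(global_size, profiled_throughput):
    """Divide a one-dimensional NDRange of `global_size` work-groups among
    devices in proportion to the throughput measured by offline profiling."""
    total = sum(profiled_throughput.values())
    ranges, start = {}, 0
    items = list(profiled_throughput.items())
    for i, (device, tput) in enumerate(items):
        share = global_size - start if i == len(items) - 1 \
            else round(global_size * tput / total)
        ranges[device] = (start, start + share)   # [start, end) work-groups
        start += share
    return ranges

print(split_ndrange(1024, {"cpu": 120.0, "gpu": 480.0, "fpga": 200.0}))
```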
Xiangxu MENG Xiaodong WANG Xinye LIN
GPS trajectory databases serve as the basis for many intelligent applications that need to extract trajectories for further processing or mining. Such tasks commonly use spatio-temporal range queries, which find all sub-trajectories within a given spatial extent and time interval. However, the history trajectory indexes used by such methods suffer from two problems. First, temporal and spatial factors are not considered simultaneously, resulting in low performance when processing spatio-temporal queries. Second, the efficiency of the indexes is sensitive to query size: query performance changes dramatically as the query size changes. This paper proposes the workload-aware Adaptive OcTree based Trajectory clustering Index (ATTI), which aims to optimize trajectory storage and index performance. The contributions are threefold. First, the distribution and time delay of trajectory storage are introduced into the cost model of the spatio-temporal range query. Second, the distribution of spatial division is dynamically adjusted based on the GPS update workload. Third, a query workload adaptive mechanism is proposed based on a virtual OcTree forest. A wide range of experiments are carried out on the Microsoft GeoLife dataset, and the results show that the query delay of ATTI can be about 50% shorter than that of the nested index.
Paulo GONÇALVES Shubhabrata ROY Thomas BEGIN Patrick LOISEAU
Dynamic resource management has become an active area of research in the Cloud Computing paradigm. The cost of resources varies significantly depending on their configuration, so efficient resource management is of prime interest to both Cloud Providers and Cloud Users. In this work we suggest a probabilistic resource provisioning approach that can be exploited as the input of a dynamic resource management scheme. Using a Video on Demand use case to justify our claims, we propose an analytical model, inspired by standard epidemic spreading models, to represent sudden and intense workload variations. We show that the resulting model satisfies a Large Deviation Principle that statistically characterizes extremely rare events, such as those produced by “buzz/flash crowd effects” that may cause workload overflow in the VoD context. This analysis provides valuable insight into expected abnormal behaviors of systems. We exploit the information obtained from the Large Deviation Principle for the proposed Video on Demand use case to define policies (Service Level Agreements). We believe these policies for elastic resource provisioning and usage may be of interest to all stakeholders in the emerging context of cloud networking.
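A toy discrete-time epidemic-style simulation in the spirit of the model described above (not the paper's exact formulation): viewers recruit further viewers, and the viewer count serves as the workload to be provisioned for.

```python
import random

def simulate_buzz(population=10000, beta=3e-5, spontaneous=0.5,
                  leave_rate=0.05, steps=500, seed=1):
    """Toy epidemic-style VoD buzz model: susceptible users start watching
    either spontaneously or by "contagion" from current viewers, and viewers
    stop watching at a constant rate.  All parameters are illustrative."""
    random.seed(seed)
    susceptible, viewers = population, 1
    trace = []
    for _ in range(steps):
        new = min(susceptible,
                  int(beta * susceptible * viewers + spontaneous + random.random()))
        stop = sum(1 for _ in range(viewers) if random.random() < leave_rate)
        susceptible -= new
        viewers += new - stop
        trace.append(viewers)
    return trace

trace = simulate_buzz()
print("peak workload:", max(trace), "at step", trace.index(max(trace)))
```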
Hiroshi YAMAMOTO Masato TSURU Katsuyuki YAMAZAKI Yuji OIE
In parallel computing systems using the master/worker model for distributed grid computing, the increase in data transmission time as the size of the handled data grows degrades performance. For divisible workload applications, therefore, multiple-round scheduling algorithms have been developed to mitigate the adverse effect of longer data transmission times by dividing the data into chunks sent out in multiple rounds, thus overlapping the times required for computation and transmission. However, a standard multiple-round scheduling algorithm, Uniform Multi-Round (UMR), adopts a sequential transmission model in which the master communicates with one worker at a time, so the transmission capacity of the link attached to the master cannot be fully utilized due to the limits of worker-side capacity. In the present study, a Parallel Transferable Uniform Multi-Round algorithm (PTUMR) is proposed. It efficiently utilizes the data transmission capacity of network links by allowing chunks to be transmitted to workers in parallel. The algorithm divides workers into groups in a way that fully uses the link bandwidth of the master under some constraints and treats each group of workers as one virtual worker. In particular, introducing a Grouping Threshold effectively deals with workers that are highly heterogeneous in both data transmission and computation capacities. The master then schedules sequential data transmissions to the virtual workers in an optimal way, as in UMR. The performance evaluations show that the proposed algorithm achieves significantly shorter turnaround times (i.e., makespan) than UMR regardless of worker heterogeneity, close to the theoretical lower limits.
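The grouping step can be sketched as follows, packing workers into groups whose aggregate bandwidth roughly fills the master's link and ordering very slow workers last as a crude stand-in for the Grouping Threshold; worker names and bandwidth values are illustrative.

```python
def group_workers(workers, master_bandwidth, grouping_threshold=0.5):
    """Pack (name, bandwidth) pairs into groups whose aggregate receive
    bandwidth roughly fills the master's link; each group is then treated as
    one virtual worker.  Workers slower than grouping_threshold times the
    fastest worker are placed last so they do not dilute a fast group
    (a simplification of the paper's Grouping Threshold)."""
    fastest = max(bw for _, bw in workers)
    ordered = sorted(workers,
                     key=lambda w: (w[1] < grouping_threshold * fastest, -w[1]))
    groups, current, used = [], [], 0.0
    for name, bw in ordered:
        if current and used + bw > master_bandwidth:
            groups.append(current)
            current, used = [], 0.0
        current.append((name, bw))
        used += bw
    if current:
        groups.append(current)
    return groups

workers = [("w1", 400), ("w2", 300), ("w3", 250), ("w4", 80), ("w5", 60)]
print(group_workers(workers, master_bandwidth=1000))
```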
Yoshinobu MAEDA Kentaro TANI Nao ITO Michio MIYAKAWA
In this paper we show that the performance workload of button-input interfaces does not monotonically increase with the number of buttons; rather, there is an optimal number of buttons at which the performance workload is minimized. As the number of buttons increases, it becomes more difficult to search for the target button, so the user's cognitive workload increases. As the number of buttons decreases, the user's cognitive workload decreases but the operational workload increases, i.e., more operations are needed because one button has to serve plural functions. The optimal number of buttons emerges from combining the cognitive and operational workloads. The experiments used to measure performance allowed us to describe a multiple regression equation using two observable variables related to the cognitive and operational workloads. As a result, our equation explained the data well, and the optimal number of buttons was found to be about 8, similar to the number adopted by commercial cell phone manufacturers. It was also clarified that an interface with a number of buttons close to the number of letters in the alphabet is not necessarily easy to use.
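The following toy model (not the paper's fitted regression equation) illustrates how a cognitive term that grows with the number of buttons and an operational term that shrinks with it combine to produce an interior optimum; the functional forms and weights are chosen only for illustration.

```python
import math

def performance_workload(n_buttons, a=1.0, b=0.45, alphabet=26):
    """Illustrative combination of the two effects described above: a
    cognitive term that grows with the number of buttons (visual search for
    the target button) and an operational term that grows as buttons get
    fewer (each button must cover more letters, so more operations per
    letter are needed)."""
    cognitive = a * math.log2(n_buttons)
    operational = b * alphabet / n_buttons   # crude proxy for extra presses
    return cognitive + operational

best = min(range(2, 27), key=performance_workload)
print("optimal number of buttons:", best)    # about 8 with these toy weights
```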
In recent years, heterogeneous devices have been employed frequently in mobile storage systems because a combination of such devices can provide a synergistically useful storage solution by taking advantage of each device. One important design constraint in heterogeneous storage systems is to mitigate the I/O performance degradation stemming from the difference in access times between devices, yet little work has been done on buffer cache management algorithms designed for this purpose. This paper presents a novel buffer cache management algorithm that considers both the I/O cost per device and workload patterns in mobile computing systems with a heterogeneous storage pair of a hard disk and a NAND flash memory. In order to minimize the total I/O cost under varying workload patterns, the proposed algorithm employs dynamic cache partitioning across the devices and manages each partition according to request patterns and I/O types along with temporal locality. Trace-based simulations show that the proposed algorithm reduces the total I/O cost and flash write count significantly compared with existing buffer cache algorithms on typical mobile traces.
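A greedy sketch of cost-aware cache partitioning in the spirit of the algorithm described above: each cache block is given to whichever device partition currently saves the most I/O cost. The miss-rate curves and per-device miss penalties are hypothetical.

```python
def partition_cache(total_blocks, miss_curves, miss_costs):
    """Greedy cost-aware partitioning: allocate each cache block to the
    device whose expected I/O cost (misses x per-device miss penalty) drops
    the most.  `miss_curves` maps device -> function from partition size to
    expected miss count."""
    sizes = {dev: 0 for dev in miss_curves}
    for _ in range(total_blocks):
        def gain(dev):
            cur = miss_curves[dev](sizes[dev]) * miss_costs[dev]
            new = miss_curves[dev](sizes[dev] + 1) * miss_costs[dev]
            return cur - new
        best = max(sizes, key=gain)
        sizes[best] += 1
    return sizes

# Toy miss-rate curves (misses shrink as the partition grows) and costs in ms.
curves = {"hdd":   lambda s: 1000 / (1 + s),
          "flash": lambda s: 1000 / (1 + 0.5 * s)}
costs = {"hdd": 10.0, "flash": 0.2}      # HDD misses are far more expensive
print(partition_cache(total_blocks=100, miss_curves=curves, miss_costs=costs))
```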
DongWoo LEE Rudrapatna Subramanyam RAMAKRISHNA
Resource performance prediction is known to be useful for resource scheduling in the Grid. The disk I/O workload is another important factor: it influences the performance of the CPU and the network, which are commonly used in resource scheduling. For disk I/O workload time-series, the adaptation of a prediction algorithm to a new time-series should be rapid, and the prediction should keep the prediction error minimal in a heterogeneous environment. The storage workload (i.e., the disk I/O load) is a dynamic variable, so a prediction parameter based on the characteristics of the current workload must be prepared for prediction purposes. In this paper, we propose and implement the OPHB (On-Line Parameter History Bank), a method that stabilizes prediction for an incoming disk I/O workload time-series fairly quickly with the help of accurately determined ESM (Exponential Smoothing Method) parameters drawn from a history database. With ESM forecasting, a smoothing parameter must be specified in advance. If the parameter is statically estimated from data observed in previous executions, the forecasts can be inaccurate because they do not capture the actual I/O behavior; the smoothing parameter has to be adjusted to the shape of the new disk I/O workload. The ESM algorithms utilize the parameter histories accumulated by OPHB's Deposit operation. When a new time-series starts, an appropriate parameter value is looked up in the Bank by OPHB's Lookup operation and used for that time-series. This process is fully adaptive. We evaluate the proposed method with the SES (Single Exponential Smoothing) and ARRSES (Auto-Responsive SES) methods.
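A toy version of the Deposit/Lookup idea, assuming a crude workload signature (a rounded coefficient of variation) as the bank key; the paper's actual characterization of workload shape is not reproduced here.

```python
import numpy as np

def ses_forecast_errors(series, alpha):
    """One-step-ahead single exponential smoothing; return mean abs. error."""
    level, errors = series[0], []
    for x in series[1:]:
        errors.append(abs(x - level))
        level = alpha * x + (1 - alpha) * level
    return float(np.mean(errors))

class ParameterHistoryBank:
    """Toy parameter bank: remember which smoothing parameter worked best for
    previously seen workload shapes and reuse it for a new time-series."""
    def __init__(self):
        self.bank = {}

    @staticmethod
    def signature(series):
        s = np.asarray(series, dtype=float)
        return round(float(s.std() / (abs(s.mean()) + 1e-9)), 1)

    def deposit(self, series):
        alphas = np.linspace(0.05, 0.95, 19)
        best = min(alphas, key=lambda a: ses_forecast_errors(series, a))
        self.bank[self.signature(series)] = float(best)

    def lookup(self, series, default=0.5):
        return self.bank.get(self.signature(series), default)

bank = ParameterHistoryBank()
old = np.abs(np.random.default_rng(0).normal(100, 20, 200))   # past I/O load
bank.deposit(old)
new = np.abs(np.random.default_rng(1).normal(100, 20, 50))    # new time-series
print("alpha for new series:", bank.lookup(new))
```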
Ji Hwan CHA Hisashi YAMAMOTO Won Young YUN
In this paper, the problem of determining the optimal workload for a load sharing system is considered. The system is composed of n components in total and functions until (n-k+1) components have failed. Works to be performed by the system arrive according to a homogeneous Poisson process, and it is assumed that the system can perform a sufficiently large number of works simultaneously. The system is subject to a workload, expressed in terms of the arrival rate of the works, that is shared equally by the surviving components. We assume that an increased workload induces a higher failure rate in each remaining component. The time required to complete each work is assumed to be either a constant or a random quantity following an exponential distribution. Under this model, as a measure of system performance, we derive the long-run average number of works performed per unit time and determine the optimal workload that maximizes this performance.
Koji MURAI Yuji HAYASHI Seiji INOKUCHI
Ship handling for leaving and entering port is always carried out by a captain, deck officers, and quartermasters, and sometimes involves a pilot. Navigational watchkeeping at sea, except in narrow channels and under restricted visibility, is performed by a deck officer and a quartermaster. They achieve safe and efficient navigational watchkeeping through their teamwork on the ship's bridge. The importance of teamwork has been recognized in the shipping world, and training and education methods for it have also been considered. However, the evaluation of teamwork is not well defined, because it depends on the experience of the trainers. Therefore, an evaluation method of teamwork is needed for education and training in ship handling. In this paper, we define ship's bridge teamwork as shown by 1) a change in mental workload level and 2) a change in mental workload over time. We attempt to evaluate the teammates' mental workload on the ship's bridge using the R-R interval of the subjects' heart rate variability, in the following three steps: 1) confirm that the mental workload of a ship's navigator can be evaluated with the R-R interval; 2) evaluate teamwork with the R-R interval in the case of an oral presentation at meetings, as a pre-experiment; 3) evaluate the teammates' mental workload within a ship's bridge team in the case of leaving port. The results showed that the method using the R-R interval was sufficient for evaluating teamwork effects.
Hiroyuki OKAMURA Satoshi MIYAHARA Tadashi DOHI Shunji OSAKI
Software rejuvenation is one of the most effective preventive maintenance techniques for operational software systems with high assurance requirements. In this paper, we propose a workload-based software rejuvenation scheme for a server-type software system and develop stochastic models to determine the optimal software rejuvenation schedules for several dependability measures. In numerical examples, we quantitatively evaluate the performance of the workload-based software rejuvenation scheme and compare it with the time-based rejuvenation scheme.
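A minimal sketch contrasting the two triggers compared in the paper: the workload-based scheme rejuvenates once the cumulative work handled exceeds a threshold, while the time-based scheme rejuvenates after a fixed uptime; both threshold values are purely illustrative.

```python
def rejuvenation_policy(requests_handled, hours_up,
                        workload_threshold=1_000_000, time_threshold_h=168):
    """Return whether each scheme would trigger rejuvenation now:
    workload-based fires on cumulative work since the last rejuvenation,
    time-based fires on elapsed uptime."""
    return {"workload_based": requests_handled >= workload_threshold,
            "time_based": hours_up >= time_threshold_h}

print(rejuvenation_policy(requests_handled=1_200_000, hours_up=96))
```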
In this paper, an attempt was made to evaluate mental workload using chaotic analysis of EEG. EEG signals recorded from Fz and Cz during a mental task (mental addition) were analyzed using attractor plots, fractal dimensions, and Lyapunov exponents in order to clarify the chaotic dynamics and to investigate whether mental workload can be assessed using these chaotic measures. The largest Lyapunov exponent took positive values for all experimental conditions, which indicated chaotic dynamics in the EEG signals. However, we could not evaluate mental workload using the largest Lyapunov exponent or the attractor plots. The fractal dimension, on the other hand, tended to increase with the work level. We conclude that the fractal dimension might be used to evaluate a mental state, especially the mental workload induced by mental task loading.
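For reference, the fractal dimension of a signal can be estimated, for example, with Higuchi's method; the abstract does not specify which estimator was used, so this is only one common choice, demonstrated on synthetic signals.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the fractal dimension of a 1-D signal with Higuchi's method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / (len(idx) - 1) / k   # curve-length normalization
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

# White noise should give a dimension near 2, a smooth sine a value near 1.
rng = np.random.default_rng(0)
print(round(higuchi_fd(rng.normal(size=2000)), 2))
print(round(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000))), 2))
```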
It is generally known that the autonomic nervous system regulates the pupil. In this study, we attempted to assess mental workload on the basis of the fluctuation rhythm of the pupil area. Controlling the respiration interval, we measured the pupil area during a mental task lasting one minute, while simultaneously recording the respiration curve to monitor the respiration interval. The subjects performed two mental tasks. One was a mathematical division task whose difficulty was set to two, three, four, and five dividends. The other was a Sternberg memory search task with four work levels defined by the size of the memory set, which ranged from five to eight items. In this way, we varied the mental workload induced by mental loading. By calculating an autoregressive (AR) power spectrum, we observed two peaks, corresponding to blood pressure variation and respiratory sinus arrhythmia, under a low workload. With an increased workload, the spectral peak related to the respiratory sinus arrhythmia disappeared. The ratio of the power in the low-frequency band (0.05-0.15 Hz) to the power in the respiration-frequency band (0.35-0.4 Hz) increased with the work level. In conclusion, the fluctuation of the pupil area is a promising means of evaluating mental workload and autonomic nervous function.
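The band-power ratio described above can be computed along the following lines; Welch's periodogram is used here as a simpler stand-in for the AR spectrum used in the study, and the synthetic signal is only for demonstration.

```python
import numpy as np
from scipy.signal import welch

def band_power_ratio(pupil_area, fs, low=(0.05, 0.15), resp=(0.35, 0.40)):
    """Ratio of power in the low-frequency band (blood-pressure related) to
    power in the respiration band, estimated from a pupil-area recording."""
    f, pxx = welch(pupil_area, fs=fs, nperseg=min(len(pupil_area), 256))
    df = f[1] - f[0]
    def band(lo, hi):
        m = (f >= lo) & (f <= hi)
        return pxx[m].sum() * df
    return band(*low) / band(*resp)

# Synthetic 60-second recording at 4 Hz with 0.1 Hz and 0.37 Hz components.
fs = 4.0
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 0.37 * t)
print(round(band_power_ratio(signal, fs), 2))
```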
Unlike those working on a production line in manufacturing, workers involved in software projects are usually engaged in plural work concurrently (that is, not only the main development work but also various other work). Such other work might put pressure on the schedule of the whole project. Therefore, to manage the whole project, not only the main development work but also the various other work should be treated as management objects, and workers' workload should be taken into consideration (that is, who is doing what work, at what workload, at what time). This paper proposes a framework of workload management facilities for managing software projects. The framework relates not only the main development work but also various other work, and each work step within cooperative work, to the workers. This paper also illustrates the behavior of the facilities with an example and shows their usefulness through the application of a prototype system. Using this system, users can assign work to workers by simulating workers' workload. These facilities help managers grasp workers' workload and help workers grasp their assigned work.