Reinforcement learning has been applied to adaptive service composition. However, traditional algorithms are not suitable for large-scale service composition. Based on the Q-learning algorithm, a multi-task-oriented algorithm named multi-Q learning is proposed to realize a subtask-assistance strategy for large-scale, adaptive service composition. Unlike previous studies that focus on a single task, we take the relationships among multiple service composition tasks into account. We decompose a complex service composition task into multiple subtasks based on graph theory. Different tasks that share subtasks can assist each other and thereby improve their learning speed. Experimental results show that our algorithm learns significantly faster than the traditional Q-learning algorithm, and it also converges faster than multi-agent Q-learning. Moreover, for all involved service composition tasks that share subtasks, our algorithm improves the speed of learning the optimal policy simultaneously and in real time.
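To make the subtask-assistance idea concrete, the sketch below shows one plausible reading of it in Python: each composition task is modeled as a sequence of subtask identifiers, every subtask owns a single Q-table, and tasks that contain the same subtask share that table, so experience gathered while learning one task immediately benefits the others. This is a minimal illustration under our own assumptions (the names SubtaskQ, shared_tables, and q_table_for are hypothetical), not the authors' implementation.

import random
from collections import defaultdict

class SubtaskQ:
    """Q-table for one subtask; shared by every task that contains the subtask."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, candidate-service) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state, actions):
        # Epsilon-greedy selection among the candidate services for this step.
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_actions):
        # Standard one-step Q-learning update.
        best_next = max((self.q[(next_state, a)] for a in next_actions), default=0.0)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

# One shared table per subtask id: two tasks that both contain, say,
# "book_flight" read and update the same SubtaskQ instance, which is the
# assistance effect the abstract describes.
shared_tables = defaultdict(SubtaskQ)

def q_table_for(subtask_id):
    return shared_tables[subtask_id]

Because the tables are shared by reference, a reward observed while training one task updates the policy of every other task containing that subtask at the same time, which matches the abstract's claim of simultaneous, real-time improvement across tasks.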
Li QUAN
University of Science and Technology Beijing
Zhi-liang WANG
University of Science and Technology Beijing
Xin LIU
University of Science and Technology Beijing
Li QUAN, Zhi-liang WANG, Xin LIU, "A Real-Time Subtask-Assistance Strategy for Adaptive Services Composition" in IEICE TRANSACTIONS on Information and Systems,
vol. E101-D, no. 5, pp. 1361-1369, May 2018, doi: 10.1587/transinf.2017EDP7131.
URL: https://globals.ieice.org/en_transactions/information/10.1587/transinf.2017EDP7131/_p
@ARTICLE{e101-d_5_1361,
author={Li QUAN and Zhi-liang WANG and Xin LIU},
journal={IEICE TRANSACTIONS on Information and Systems},
title={A Real-Time Subtask-Assistance Strategy for Adaptive Services Composition},
year={2018},
volume={E101-D},
number={5},
pages={1361-1369},
keywords={},
doi={10.1587/transinf.2017EDP7131},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - A Real-Time Subtask-Assistance Strategy for Adaptive Services Composition
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1361
EP - 1369
AU - Li QUAN
AU - Zhi-liang WANG
AU - Xin LIU
PY - 2018
DO - 10.1587/transinf.2017EDP7131
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E101-D
IS - 5
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - May 2018
ER -