Please use this identifier to cite or link to this item: https://ruomo.lib.uom.gr/handle/7000/1591
Full metadata record
DC Field | Value | Language
dc.contributor.author | Pentelas, Angelos | -
dc.contributor.author | De Vleeschauwer, Danny | -
dc.contributor.author | Chang, Chia-Yu | -
dc.contributor.author | De Schepper, Koen | -
dc.contributor.author | Papadimitriou, Panagiotis | -
dc.date.accessioned | 2023-09-06T11:16:37Z | -
dc.date.available | 2023-09-06T11:16:37Z | -
dc.date.issued | 2023 | -
dc.identifier | 10.1109/ACCESS.2023.3269576 | en_US
dc.identifier.issn | 2169-3536 | en_US
dc.identifier.uri | https://doi.org/10.1109/ACCESS.2023.3269576 | en_US
dc.identifier.uri | https://ruomo.lib.uom.gr/handle/7000/1591 | -
dc.description.abstract | Network Function Virtualization (NFV) decouples network functions from the underlying specialized devices, enabling network processing with higher flexibility and resource efficiency. This promotes the use of virtual network functions (VNFs), which can be grouped to form a service function chain (SFC). A critical challenge in NFV is SFC partitioning (SFCP), which is mathematically expressed as a graph-to-graph mapping problem. Given its NP-hardness, SFCP is commonly solved by approximation methods. Yet, the relevant literature exhibits a gradual shift towards data-driven SFCP frameworks, such as (deep) reinforcement learning (RL). In this article, we initially identify crucial limitations of existing RL-based SFCP approaches. In particular, we argue that most of them stem from the centralized implementation of RL schemes. Therefore, we devise a cooperative deep multi-agent reinforcement learning (DMARL) scheme for decentralized SFCP, which fosters the efficient communication of neighboring agents. Our simulation results (i) demonstrate that DMARL outperforms a state-of-the-art centralized double deep Q-learning algorithm, (ii) unfold the fundamental behaviors learned by the team of agents, (iii) highlight the importance of information exchange between agents, and (iv) showcase the implications stemming from various network topologies on the DMARL efficiency. | en_US
dc.language.iso | en | en_US
dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.source | IEEE Access | en_US
dc.subject | FRASCATI::Natural sciences::Computer and information sciences | en_US
dc.subject.other | Multi-agent reinforcement learning | en_US
dc.subject.other | network function virtualization | en_US
dc.subject.other | self-learning orchestration | en_US
dc.title | Deep Multi-Agent Reinforcement Learning With Minimal Cross-Agent Communication for SFC Partitioning | en_US
dc.type | Article | en_US
dc.contributor.department | Department of Applied Informatics | en_US
local.identifier.volume | 11 | en_US
local.identifier.firstpage | 40384 | en_US
local.identifier.lastpage | 40398 | en_US
Appears in Collections: Department of Applied Informatics

Files in This Item:
File | Description | Size | Format
Deep_Multi-Agent_Reinforcement_Learning_With_Minimal_Cross-Agent_Communication_for_SFC_Partitioning.pdf | | 1.63 MB | Adobe PDF


This item is protected by a Creative Commons License.