UHK Digital Library (Digitální knihovna UHK)

A Comparative Study of Reinforcement Learning Algorithms for Distribution Network Reconfiguration With Deep Q-Learning-Based Action Sampling

dc.rights.license CC BY eng
dc.contributor.author Gholizadeh, Nastaran cze
dc.contributor.author Kazemi, Nazli cze
dc.contributor.author Musílek, Petr cze
dc.date.accessioned 2025-12-05T11:58:34Z
dc.date.available 2025-12-05T11:58:34Z
dc.date.issued 2023 eng
dc.identifier.issn 2169-3536 eng
dc.identifier.uri http://hdl.handle.net/20.500.12603/1746
dc.description.abstract Distribution network reconfiguration (DNR) is one of the most important methods to cope with the increasing electricity demand due to the massive integration of electric vehicles. Most existing DNR methods rely on accurate network parameters and lack scalability and optimality. This study uses model-free reinforcement learning algorithms to train agents to take the best DNR actions in a given distribution system. Five reinforcement learning algorithms are applied to the DNR problem in 33- and 136-node test systems and their performances are compared: deep Q-learning, dueling deep Q-learning, deep Q-learning with prioritized experience replay, soft actor-critic, and proximal policy optimization. In addition, a new deep Q-learning-based action sampling method is developed to reduce the size of the action space and optimize the loss reduction in the system. Finally, the developed algorithms are compared against existing methods in the literature. eng
dc.format p. 13714-13723 eng
dc.language.iso eng eng
dc.publisher IEEE eng
dc.relation.ispartof IEEE Access, volume 11, issue: February eng
dc.subject Comparative study eng
dc.subject Reinforcement learning algorithms eng
dc.subject Distribution network reconfiguration eng
dc.subject Deep Q-learning eng
dc.subject Action sampling eng
dc.title A Comparative Study of Reinforcement Learning Algorithms for Distribution Network Reconfiguration With Deep Q-Learning-Based Action Sampling eng
dc.type article eng
dc.identifier.obd 43879897 eng
dc.identifier.wos 000933724700001 eng
dc.identifier.doi 10.1109/ACCESS.2023.3243549 eng
dc.publicationstatus postprint eng
dc.peerreviewed yes eng
dc.source.url https://ieeexplore.ieee.org/document/10040655 cze
dc.relation.publisherversion https://ieeexplore.ieee.org/document/10040655 eng
dc.rights.access Open Access eng
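
The abstract above describes deep Q-learning agents that choose reconfiguration actions from a discrete set of switching configurations. The Python sketch below is not taken from the paper; it is a minimal illustration, under assumed dimensions (a hypothetical 33-feature state and 50 candidate configurations), of how a DQN value network, epsilon-greedy action selection, and a temporal-difference update could be wired together for such a discrete action space. The state encoding, reward, and network sizes are placeholder assumptions.

import random

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a state vector (e.g., node loads/voltages) to one Q-value per
    candidate switching action (radial configuration)."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN step: minimise the TD error against the bootstrapped target
    r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Hypothetical sizes: 33 state features, 50 candidate radial configurations.
    state_dim, n_actions, batch_size = 33, 50, 16
    q_net = QNetwork(state_dim, n_actions)
    target_net = QNetwork(state_dim, n_actions)
    target_net.load_state_dict(q_net.state_dict())
    optimizer = optim.Adam(q_net.parameters(), lr=1e-3)

    # Epsilon-greedy choice over the discrete configuration set.
    state = torch.randn(1, state_dim)
    if random.random() < 0.1:
        action = random.randrange(n_actions)
    else:
        action = int(q_net(state).argmax(dim=1))

    # One update on a random placeholder batch; real transitions would come
    # from interacting with a power-flow environment, with the reward
    # reflecting the achieved loss reduction.
    batch = (
        torch.randn(batch_size, state_dim),          # states
        torch.randint(0, n_actions, (batch_size,)),  # actions
        torch.randn(batch_size),                     # rewards
        torch.randn(batch_size, state_dim),          # next states
        torch.zeros(batch_size),                     # done flags
    )
    td_loss = dqn_update(q_net, target_net, optimizer, batch)
    print("chosen configuration:", action, "td loss:", td_loss)

In an actual experiment, the placeholder batch would be replaced by transitions collected from a distribution-system environment; the dueling, prioritized-replay, soft actor-critic, and proximal policy optimization variants compared in the paper modify this basic loop rather than the overall structure.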

