DSpace Repository

Decentralized coordination of distributed energy resources through local energy markets and deep reinforcement learning


dc.rights.license CC BY eng
dc.contributor.author May, Daniel C. cze
dc.contributor.author Taylor, Matthew cze
dc.contributor.author Musílek, Petr cze
dc.date.accessioned 2025-12-05T16:10:57Z
dc.date.available 2025-12-05T16:10:57Z
dc.date.issued 2024 eng
dc.identifier.issn 2666-5468 eng
dc.identifier.uri http://hdl.handle.net/20.500.12603/2467
dc.description.abstract As the energy landscape evolves towards sustainability, the accelerating integration of distributed energy resources poses challenges to the operability and reliability of the electricity grid. One significant aspect of this issue is the notable increase in net load variability at the grid edge. Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to address the grid challenges in the form of decentralized, indirect demand response on a community level. Model-free control approaches, such as deep reinforcement learning (DRL), show promise for the decentralized automation of participation within this context. Existing studies at the intersection of transactive energy and model-free control primarily focus on socioeconomic and self-consumption metrics, overlooking the crucial goal of reducing community-level net load variability. This study addresses this gap by training a set of deep reinforcement learning agents to automate end-user participation in an economy-driven, autonomous local energy market (ALEX). In this setting, agents do not share information and only prioritize individual bill optimization. The study unveils a clear correlation between bill reduction and reduced net load variability. The impact on net load variability is assessed over various time horizons using metrics such as ramping rate, daily and monthly load factor, as well as daily average and total peak export and import on an open-source dataset. To examine the performance of the proposed DRL method, its agents are benchmarked against a near-optimal dynamic programming method, using a no-control scenario as the baseline. The dynamic programming benchmark reduces average daily import, export, and peak demand by 22.05%, 83.92%, and 24.09%, respectively. The RL agents demonstrate comparable or superior performance, with improvements of 21.93%, 84.46%, and 27.02% on these metrics.
This demonstrates that DRL can be effectively employed for such tasks, as DRL agents are inherently scalable and achieve near-optimal performance in decentralized grid management. eng
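The abstract evaluates net-load variability with ramping rate, load factor, and peak import/export. The following sketch illustrates how such metrics can be computed from an hourly community net-load profile; the function name, the toy profile, and the exact formulas are illustrative assumptions based on common definitions, not taken from the paper.

```python
import numpy as np

def net_load_metrics(net_load):
    """Variability metrics for a community net-load profile.

    net_load: 1-D array of hourly net load in kW; positive values are
    imports from the grid, negative values are exports. Definitions are
    textbook conventions, not the paper's exact formulations.
    """
    ramping = np.abs(np.diff(net_load))       # hour-to-hour ramping rate
    imports = np.clip(net_load, 0.0, None)    # imported power only
    exports = np.clip(-net_load, 0.0, None)   # exported power only
    # Load factor: average demand over peak demand (closer to 1 = flatter).
    peak_import = imports.max()
    load_factor = imports.mean() / peak_import if peak_import > 0 else 0.0
    return {
        "avg_ramp_kw": float(ramping.mean()),
        "load_factor": float(load_factor),
        "peak_import_kw": float(peak_import),
        "peak_export_kw": float(exports.max()),
    }

# 24-hour toy profile: overnight export (e.g. from storage), evening peak.
profile = np.array([-2.0, -1.5, -1.0, 0.0, 0.5, 1.0, 2.0, 3.0,
                    2.5, 2.0, 1.5, 1.0, 0.5, 0.5, 1.0, 2.0,
                    3.5, 5.0, 6.0, 5.5, 4.0, 2.5, 1.0, -0.5])
metrics = net_load_metrics(profile)
```

Comparing such metrics between a no-control baseline and a controlled scenario is one way to quantify the peak-demand and import/export reductions reported in the abstract.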
dc.format p. "Article Number: 100446" eng
dc.language.iso eng eng
dc.publisher ELSEVIER eng
dc.relation.ispartof Energy and AI, volume 18, issue: December eng
dc.subject Reinforcement learning eng
dc.subject Deep reinforcement learning eng
dc.subject Distributed energy resources eng
dc.subject Local energy markets eng
dc.subject Demand response eng
dc.subject Distributed energy resource management eng
dc.subject Transactive energy eng
dc.title Decentralized coordination of distributed energy resources through local energy markets and deep reinforcement learning eng
dc.type article eng
dc.identifier.obd 43882267 eng
dc.identifier.wos 001396166300001 eng
dc.identifier.doi 10.1016/j.egyai.2024.100446 eng
dc.publicationstatus postprint eng
dc.peerreviewed yes eng
dc.source.url https://www.sciencedirect.com/science/article/pii/S2666546824001125?via%3Dihub cze
dc.relation.publisherversion https://www.sciencedirect.com/science/article/pii/S2666546824001125?via%3Dihub eng
dc.rights.access Open Access eng

