Semi-Active Control of a Shear Building based on Reinforcement Learning: Robustness to measurement noise and model error
Aleksandra Jedlińska, Dominik Pisarski, Grzegorz Mikułowski, Bartłomiej Błachowski, Łukasz Jankowski
DOI: http://dx.doi.org/10.15439/2023F8946
Citation: Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 35, pages 1007–1010 (2023)
Abstract. This paper considers structural control by reinforcement learning. The aim is to mitigate vibrations of a shear building subjected to an earthquake-like excitation and fitted with a semi-active tuned mass damper (TMD). The control force is coupled with the structural response, which makes the problem intrinsically nonlinear and challenging to solve using classical methods. Structural control by reinforcement learning has not yet been extensively explored. Here, Deep Q-Learning is used, which approximates the Q-function with a neural network and improves initially random control sequences through interaction with the controlled system. For safety reasons, training must be performed on an inevitably inexact numerical model rather than on the real structure; it is thus crucial to assess the robustness of the control with respect to measurement noise and model errors. The learned control is verified to significantly outperform an optimally tuned conventional TMD, and the key outcome is its high robustness to measurement noise and model error.

Index Terms: structural control, semi-active control, reinforcement learning, tuned mass damper (TMD).
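For readers unfamiliar with the method, the sketch below illustrates the Deep Q-Learning loop described in the abstract: a neural network approximating the Q-function is trained from initially random, epsilon-greedy control sequences through interaction with a simulated structure. This is a minimal sketch, not the authors' implementation: the environment interface (`env.reset`/`env.step`), the discrete set of damper commands, the state dimension, the reward (assumed here to penalize the vibration response), and all hyperparameters are illustrative assumptions.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Assumed discrete action set: admissible damping levels of the semi-active
# TMD (the paper's actual actuation model may differ).
N_ACTIONS = 3   # e.g., low / medium / high damping (illustrative)
STATE_DIM = 8   # e.g., floor displacements and velocities (illustrative)


class QNetwork(nn.Module):
    """Approximates Q(s, a) for all discrete actions at once."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)


def train_dqn(env, episodes=200, gamma=0.99, eps=1.0, eps_min=0.05,
              eps_decay=0.995, batch_size=64, lr=1e-3):
    """Hypothetical env: reset() -> state; step(a) -> (state, reward, done)."""
    q_net = QNetwork(STATE_DIM, N_ACTIONS)
    target_net = QNetwork(STATE_DIM, N_ACTIONS)
    target_net.load_state_dict(q_net.state_dict())
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)
    replay = deque(maxlen=50_000)  # experience replay buffer

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy: initially random control sequences,
            # gradually replaced by the learned greedy policy.
            if random.random() < eps:
                a = random.randrange(N_ACTIONS)
            else:
                with torch.no_grad():
                    a = int(q_net(torch.as_tensor(s, dtype=torch.float32)).argmax())
            s2, r, done = env.step(a)  # r: e.g., negative vibration energy
            replay.append((s, a, r, s2, done))
            s = s2

            if len(replay) >= batch_size:
                batch = random.sample(replay, batch_size)
                S, A, R, S2, D = map(np.array, zip(*batch))
                S = torch.as_tensor(S, dtype=torch.float32)
                S2 = torch.as_tensor(S2, dtype=torch.float32)
                q = q_net(S).gather(1, torch.as_tensor(A).view(-1, 1)).squeeze(1)
                with torch.no_grad():
                    # Bellman target: y = r + gamma * max_a' Q_target(s', a')
                    y = (torch.as_tensor(R, dtype=torch.float32)
                         + gamma * target_net(S2).max(1).values
                         * (1.0 - torch.as_tensor(D, dtype=torch.float32)))
                loss = nn.functional.mse_loss(q, y)
                opt.zero_grad()
                loss.backward()
                opt.step()

        eps = max(eps_min, eps * eps_decay)
        target_net.load_state_dict(q_net.state_dict())  # periodic target sync
    return q_net
```

The replay buffer and the periodically synchronized target network are standard DQN stabilizers; whether the paper uses them, and how its reward is defined, is not stated in the abstract.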
References
- B. F. Spencer Jr., S. Nagarajaiah. 2003. “State of the art of structural control.” J. Struct. Eng. 129:845–856, https://doi.org/10.1061/(ASCE)0733-9445(2003)129:7(845)
- B. Basu, O. S. Bursi, F. Casciati, S. Casciati, A. E. Del Grosso, M. Domaneschi, L. Faravelli, J. Holnicki-Szulc, H. Irschik, M. Krommer, M. Lepidi, A. Martelli, B. Ozturk, F. Pozo, G. Pujol, Z. Rakicevic, and J. Rodellar. 2014. “A European Association for the Control of Structures joint perspective. Recent studies in civil structural control across Europe.” Struct. Control Health Monit. 21:1414–1436, https://doi.org/10.1002/stc.1652
- F. Casciati, J. Rodellar, and U. Yildirim. 2012. “Active and semi-active control of structures – theory and applications: A review of recent advances.” J. Intell. Mater. Syst. Struct. 23:1181–1195, https://doi.org/10.1177/1045389X12445029
- N. R. Fisco, H. Adeli. 2011. “Smart structures: Part I—Active and semi-active control.” Sci. Iran. 18(3):275–284, https://doi.org/10.1016/j.scient.2011.05.034
- M. Gutierrez Soto, H. Adeli. 2013. “Tuned mass dampers.” Arch. Comput. Methods Eng. 20:419–431, https://doi.org/10.1007/s11831-013-9091-7
- S. Elias, V. Matsagar. 2017. “Research developments in vibration control of structures using passive tuned mass dampers.” Annu. Rev. Control 44:129–156, https://doi.org/10.1016/j.arcontrol.2017.09.015
- S. Pourzeynali, H. H. Lavasani, and A. H. Modarayi. 2007. “Active control of high rise building structures using fuzzy logic and genetic algorithms.” Eng. Struct. 29:346–357, https://doi.org/10.1016/j.engstruct.2006.04.015
- D. E. Kirk. 2004. Optimal Control Theory: An Introduction. Courier Corporation.
- G. Rypeść, Ł. Lepak, P. Wawrzyński. 2022. “Reinforcement Learning for on-line Sequence Transformation.” in Proc. 17th Conf. on Computer Science and Intelligence Systems, Sofia, ACSIS 30, pp. 133–139, https://doi.org/10.15439/2022F70
- D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis. 2018. “A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.” Science 362(6419):1140–1144, https://doi.org/10.1126/science.aar6404
- A. El Sallab, M. Abdou, E. Perot, and S. Yogamani. 2017. “Deep reinforcement learning framework for autonomous driving.” in Proc. IS&T International Symposium on Electronic Imaging Science and Technology, Burlingame, pp. 70–76, https://doi.org/10.2352/ISSN.2470-1173.2017.19.AVM-023
- G. Reddy, A. Celani, T. Sejnowski, and M. Vergassola. 2016. “Learning to soar in turbulent environments.” PNAS 113(33):E4877–E4884, https://doi.org/10.1073/pnas.1606075113
- H. Shi, Y. Zhou, X. Wang, S. Fu, S. Gong, and B. Ran. 2022. “A deep reinforcement learning-based distributed connected automated vehicle control under communication failure.” Comput.-Aided Civ. Infrastruct. Eng. 37(15):2033–2051, https://doi.org/10.1111/mice.12825
- B. Adam, I. F. C. Smith. 2008. “Reinforcement learning for structural control.” J. Comput. Civil Eng. 22(2):133–139, https://doi.org/10.1061/(ASCE)0887-3801(2008)22:2(133)
- A. Khalatbarisoltani, M. Soleymani, and M. Khodadadi. 2019. “Online control of an active seismic system via reinforcement learning.” Struct. Control Health Monit. 26(3):e2298, https://doi.org/10.1002/stc.2298
- Z.-C. Qiu, G.-H. Chen, and X.-M. Zhang. 2021. “Reinforcement learning vibration control for a flexible hinged plate.” Aerosp. Sci. Technol. 118:107056, https://doi.org/10.1016/j.ast.2021.107056
- F. L. Lewis, D. Vrabie, and K. G. Vamvoudakis. 2012. “Reinforcement learning and feedback control: Using natural decision methods to design optimal adaptive controllers.” IEEE Contr. Syst. Mag. 32(6):76–105, https://doi.org/10.1109/MCS.2012.2214134