An RL Agent to Find Minimum Energy in a Tensegrity Representing a Cell
Mustafa Shah, Arsenio Cutolo, Muddasar Naeem, Muhammad Waris, Musarat Abbas
DOI: http://dx.doi.org/10.15439/2025F7678
Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 765–770 (2025)
Abstract. Understanding the mechanical behavior of cells is a complex challenge at the crossroads of physics, biology, and engineering. The cytoskeleton, a dynamic network of filaments, helps cells maintain their shape, move, and respond to their environment. Tensegrity structures, made of interconnected tensile and compressive elements, offer a compelling way to model these internal forces. In this work, we use Reinforcement Learning (RL) to simulate and optimize cellular mechanics. We propose an RL framework in which an agent learns to minimize the total mechanical energy of tensegrity-based cell models by adjusting node positions. We consider diverse geometries, from simple shapes such as lines and triangles to more complex cell-like structures. Our approach shows that RL can effectively model mechanical adaptation in cells and opens the door to intelligent, bio-inspired simulations. This work bridges biophysics, AI, and structural mechanics, offering new ways to predict and understand how cells respond to mechanical stress.
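The core idea described above, i.e. an agent iteratively moving nodes of a tensegrity structure to lower its total elastic energy, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it models a triangle of three elastic members, scores each configuration by the sum of spring energies, and uses a simple accept-if-energy-drops perturbation loop as a stand-in for the trained RL policy. All names (`tensegrity_energy`, the member tuples, the stiffness and rest-length values) are illustrative assumptions.

```python
import numpy as np

def tensegrity_energy(nodes, members):
    """Total elastic energy of the structure.

    Each member is (i, j, k, rest): it connects nodes i and j, has
    stiffness k, and rest length `rest`. Energy per member is the
    usual spring term 1/2 * k * (length - rest)^2.
    """
    energy = 0.0
    for i, j, k, rest in members:
        length = np.linalg.norm(nodes[i] - nodes[j])
        energy += 0.5 * k * (length - rest) ** 2
    return energy

rng = np.random.default_rng(0)

# A triangle "cell": three nodes in 2D, three members with rest length 1.
nodes = rng.normal(size=(3, 2))
members = [(0, 1, 10.0, 1.0), (1, 2, 10.0, 1.0), (0, 2, 10.0, 1.0)]

# Stand-in for the RL agent: perturb one node at a time and keep the
# move only if the total energy decreases (greedy hill climbing).
energy = tensegrity_energy(nodes, members)
for _ in range(2000):
    trial = nodes.copy()
    trial[rng.integers(len(nodes))] += rng.normal(scale=0.05, size=2)
    trial_energy = tensegrity_energy(trial, members)
    if trial_energy < energy:
        nodes, energy = trial, trial_energy

print(f"final energy: {energy:.4f}")  # approaches 0 as the triangle relaxes
```

In the full framework a learned policy (e.g. PPO) would replace the random-perturbation loop, taking node coordinates as the state and displacement actions rewarded by the energy decrease; the greedy loop here only demonstrates the energy landscape being descended.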