
Proceedings of the 2021 Sixth International Conference on Research in Intelligent and Computing

Annals of Computer Science and Information Systems, Volume 27

Mobile robots interacting with obstacles control based on artificial intelligence


DOI: http://dx.doi.org/10.15439/2021R21

Citation: Proceedings of the 2021 Sixth International Conference on Research in Intelligent and Computing, Vijender Kumar Solanki, Nguyen Ho Quang (eds). ACSIS, Vol. 27, pages 1316 (2021)


Abstract. This paper studies the application of artificial intelligence by implementing the Deep Deterministic Policy Gradient (DDPG) algorithm both in a Gazebo simulation model and on a real mobile robot. The goal of the experimental studies is for the mobile robot to learn the best possible actions for moving in real-world environments containing both fixed and moving obstacles. When the robot moves in such an environment, it automatically steers to avoid the obstacles; the longer it stays within a specified safety limit, the more reward it accumulates, and therefore the better the resulting policy. The authors performed various tests with many parameter settings and showed that the DDPG algorithm is more efficient than related methods such as Q-learning and the deep Q-network. SLAM is then executed to estimate the robot's position, and virtual maps are built precisely and displayed in Rviz. The results provide a basis for designing and building control algorithms for mobile robots and industrial robots applied in programming techniques and industrial factory automation control.
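The core DDPG machinery the abstract refers to can be sketched minimally. The state/action dimensions, reward value, and hyperparameters below are illustrative assumptions (not the authors' configuration), and simple linear maps stand in for the actor and critic networks; the sketch shows the two DDPG-specific pieces the method relies on: a TD target computed with separate target networks, and Polyak (soft) updates of those targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's setup):
STATE_DIM, ACTION_DIM = 4, 2   # e.g. range summary -> (linear, angular) velocity
GAMMA, TAU = 0.99, 0.005       # discount factor and soft-update rate

# Linear maps stand in for the actor/critic MLPs used in DDPG.
actor = rng.normal(size=(STATE_DIM, ACTION_DIM))
critic = rng.normal(size=(STATE_DIM + ACTION_DIM, 1))
actor_target = actor.copy()
critic_target = critic.copy()

def act(w, s):
    """Deterministic policy: action = tanh(s @ w), bounded in [-1, 1]."""
    return np.tanh(s @ w)

def q_value(w, s, a):
    """State-action value from the (linear) critic."""
    return np.concatenate([s, a], axis=-1) @ w

def bellman_target(r, s_next, done):
    """TD target y = r + gamma * Q'(s', mu'(s')), using the target nets."""
    a_next = act(actor_target, s_next)
    return r + GAMMA * (1.0 - done) * q_value(critic_target, s_next, a_next)

def soft_update(target, source):
    """Polyak averaging: target <- tau * source + (1 - tau) * target."""
    target += TAU * (source - target)

# One illustrative transition: obstacle kept at a safe distance -> reward +1.
s = rng.normal(size=STATE_DIM)
a = act(actor, s)
s_next = rng.normal(size=STATE_DIM)
y = bellman_target(r=1.0, s_next=s_next, done=0.0)

# After a gradient step on actor/critic (omitted), the targets drift slowly:
soft_update(actor_target, actor)
soft_update(critic_target, critic)
```

In a full agent, `y` would serve as the regression target for the critic loss, and the actor would be updated to maximize `q_value(critic, s, act(actor, s))`; the slow-moving target networks are what keep that bootstrapped target stable.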
