Towards the actual deployment of robust, adaptable, and maintainable AI models for sustainable agriculture
Giacomo Ignesti, Davide Moroni, Massimo Martinelli
DOI: http://dx.doi.org/10.15439/2024F2991
Citation: Position Papers of the 19th Conference on Computer Science and Intelligence Systems, M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 40, pages 33–39 (2024)
Abstract. In the past two decades, computer vision and artificial intelligence (AI) have made significant strides in delivering practical solutions that aid farmers directly in the field, thereby contributing to the integration of advanced technology in precision agriculture. However, extending these methods to diverse crops and broader applications, including low-resource settings, raises several concerns. Indeed, the adaptability of AI methods to new cases and domains is not always straightforward. Moreover, the rapidly evolving global landscape requires continuous adaptation and refinement of artificial intelligence models. In this position paper, we examine the current opportunities and challenges and propose an approach to address these issues, which is currently in the implementation phase at CNR-ISTI.
References
- T. Saranya, C. Deisy, S. Sridevi, and K. S. M. Anbananthen, “A comparative study of deep learning and internet of things for precision agriculture,” Engineering Applications of Artificial Intelligence, 2023. http://dx.doi.org/10.1016/j.engappai.2023.106034
- I. Zualkernan, D. A. Abuhani, M. H. Hussain, J. Khan, and M. El-Mohandes, “Machine learning for precision agriculture using imagery from unmanned aerial vehicles (UAVs): A survey,” Drones, 2023.
- S. Koul, “Machine learning and deep learning in agriculture,” Smart Agriculture: Emerging Pedagogies of Deep Learning, Machine Learning and Internet of Things, 2021. http://dx.doi.org/10.1201/b22627-1
- R. Priya and D. Ramesh, “ML based sustainable precision agriculture: A future generation perspective,” Sustainable Computing: Informatics and Systems, 2020. http://dx.doi.org/10.1016/j.suscom.2020.100439
- B. Antonio, D. Moroni, and M. Martinelli, “Efficient adaptive ensembling for image classification,” Expert Systems, 2022. http://dx.doi.org/10.1111/exsy.13424
- Ł. Błaszczyk, M. Mizura, A. Płocharski, and J. Porter-Sobieraj, “Simulating large-scale topographic terrain features with reservoirs and flowing water,” in 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS), 2023. http://dx.doi.org/10.15439/2023F2137
- A. Bruno, D. Moroni, and M. Martinelli, “Efficient deep learning approach for olive disease classification,” in 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS), 2023. http://dx.doi.org/10.15439/2023F4794
- G. Castellano, P. De Marinis, and G. Vessio, “Applying knowledge distillation to improve weed mapping with drones,” in 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS), 2023. http://dx.doi.org/10.15439/2023F960
- N. Iqbal, C. Manss, C. Scholz, D. König, M. Igelbrink, and A. Ruckelshausen, “AI-based maize and weeds detection on the edge with CornWeed dataset,” in 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS), 2023. http://dx.doi.org/10.15439/2023F2125
- S. Kolhar and J. Jagtap, “Plant trait estimation and classification studies in plant phenotyping using machine vision–a review,” Information Processing in Agriculture, vol. 10, no. 1, pp. 114–135, 2023.
- European Commission, “Work Programme 2023-2025 – 9. Food, Bioeconomy, Natural Resources, Agriculture and Environment,” 2024. [Online]. Available: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/wp-call/2023-2024/wp-9-food-bioeconomy-natural-resources-agriculture-and-environment-horizon-2023-2024_en.pdf
- United Nations, “The 17 Goals,” 2015. [Online]. Available: https://sdgs.un.org/goals [Accessed 11-June-2024].
- Y. Lu and S. Young, “A survey of public datasets for computer vision tasks in precision agriculture,” Computers and Electronics in Agriculture, 2020. http://dx.doi.org/10.1016/j.compag.2020.105760
- D. Hughes, M. Salathé et al., “An open access repository of images on plant health to enable the development of mobile disease diagnostics,” arXiv preprint https://arxiv.org/abs/1511.08060, 2015. http://dx.doi.org/10.48550/arXiv.1511.08060
- A. Bruno, D. Moroni, R. Dainelli, L. Rocchi, S. Morelli, E. Ferrari, P. Toscano, and M. Martinelli, “Improving plant disease classification by adaptive minimal ensembling,” Frontiers in Artificial Intelligence, 2022. http://dx.doi.org/10.3389/frai.2022.868926
- R. Dainelli, M. Martinelli, A. Bruno, D. Moroni, S. Morelli, M. Silvestri, E. Ferrari, L. Rocchi, and P. Toscano, “A phenotyping weeds image dataset for open scientific research,” 2023. [Online]. Available: https://doi.org/10.5281/zenodo.7598372
- ——, “Recognition of weeds in cereals using AI architecture,” ch. 49. Wageningen Academic, 2023.
- R. Dainelli, A. Bruno, M. Martinelli, D. Moroni, L. Rocchi, S. Morelli, E. Ferrari, M. Silvestri, S. Agostinelli, P. La Cava et al., “GranoScan: an AI-powered mobile app for in-field identification of biotic threats of wheat,” Frontiers in Plant Science, 2024. http://dx.doi.org/10.3389/fpls.2024.1298791
- T. Kattenborn, “PlantTraits2023,” 2023. [Online]. Available: https://kaggle.com/competitions/planttraits2023
- N. Drenkow, N. Sani, I. Shpitser, and M. Unberath, “A systematic review of robustness in deep learning for computer vision: Mind the gap?” arXiv preprint https://arxiv.org/abs/2112.00639, 2021. http://dx.doi.org/10.48550/arXiv.2112.00639
- K. Kirkpatrick, “The carbon footprint of artificial intelligence,” Communications of the ACM, 2023. http://dx.doi.org/10.1145/3603746
- Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, “Generalizing from a few examples: A survey on few-shot learning,” ACM Computing Surveys, 2020. http://dx.doi.org/10.48550/arXiv.1904.05046
- M. Tan and Q. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in International Conference on Machine Learning, 2019. http://dx.doi.org/10.48550/arXiv.1905.11946
- A. Bruno, C. Caudai, G. R. Leone, M. Martinelli, D. Moroni, and F. Crotti, “Medical waste sorting: a computer vision approach for assisted primary sorting,” in 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023. http://dx.doi.org/10.48550/arXiv.2303.04720
- J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko et al., “Highly accurate protein structure prediction with AlphaFold,” Nature, 2021. http://dx.doi.org/10.1038/s41586-021-03819-2
- G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang, “Physics-informed machine learning,” Nature Reviews Physics, 2021. http://dx.doi.org/10.1038/s42254-021-00314-5
- W. Hou, X. Gao, D. Tao, and X. Li, “Blind image quality assessment via deep learning,” IEEE Transactions on Neural Networks and Learning Systems, 2014. http://dx.doi.org/10.1109/TNNLS.2014.2336852
- S. Bianco, L. Celona, P. Napoletano, and R. Schettini, “On the use of deep learning for blind image quality assessment,” Signal, Image and Video Processing, 2018. http://dx.doi.org/10.1007/s11760-017-1166-8