As a data-driven approach to optimal control, reinforcement learning (RL) has tremendous potential to optimise a wide variety of real-world systems that were previously not amenable to mathematical optimisation because no explicit models of their dynamics were available. Among the key challenges of real-world RL, I am interested in sample-efficient learning and offline learning.
For applied work, I am interested in precision agriculture, a form of agriculture that exploits advanced farming technologies to increase productivity. Modern sensor devices and applicators make it possible to monitor and manage a field at high spatiotemporal granularity. To fully exploit these technologies and achieve the right management at the right place at the right time, it is necessary to discover good policies that process high-dimensional sensor feedback and prescribe the right management for each parcel of a field every time farmers make decisions. I tackle this challenging spatiotemporal control problem using RL and Bayesian optimisation.
Papers
- Mixtures of Gaussian process experts based on kernel stick-breaking processes
- with Khue-Dung Dang
- Preprint, Code
- Deep reinforcement learning for irrigation scheduling using high-dimensional sensor feedback
- with Allan Peake and Karine Chenu
- Preprint, Code
- Published in PLOS Water
- The case for fully Bayesian optimisation in small-sample trials
- An agent-based model of insect resistance management and mitigation for Bt maize: A social science perspective
- with Paul Mitchell and Terrance Hurley
- Published in Pest Management Science
- Machine learning for optimizing complex site-specific management
- with Vivak Patel and Paul Mitchell
- Published in Computers and Electronics in Agriculture
- Adaptive experimental design using Bayesian optimization to improve the cost efficiency of field trials
- with Vivak Patel, Shawn Conley, and Paul Mitchell
- Presented at 2019 ASA-CSSA-SSSA International Annual Meeting
- A bandit algorithm for efficient on-farm research
- with Paul Mitchell
- Presented at 2018 AAEA Annual Meeting