A new generation of computer vision, namely event-based or neuromorphic vision, provides a new paradigm for capturing and processing visual data. Event-based vision is a state-of-the-art technology in robot vision, and it is particularly promising for visual navigation tasks in both mobile robots and drones. Because event-based vision relies on a highly novel type of visual sensor, only a few datasets aimed at visual navigation tasks are publicly available. Such datasets make it possible to evaluate visual odometry and visual SLAM methods by imitating data readout from real sensors. This dataset is intended to cover visual navigation tasks for mobile robots operating in different types of agricultural environments. The dataset may open new opportunities for evaluating existing, and creating new, event-based visual navigation methods for agricultural scenes that contain abundant vegetation, animals, and patterned objects. The dataset was created using our own custom-designed Sensor Bundle, which was installed on a mobile robot platform. During data acquisition sessions, the platform was manually driven through environments such as forests, plantations, and farms. The Sensor Bundle consists of a dynamic vision sensor, a LiDAR, an RGB-D camera, and environmental sensors (temperature, humidity, and air pressure). The provided data sequences are accompanied by calibration data. The dynamic vision sensor, the LiDAR, and the environmental sensors were time-synchronized with a precision of 1 µs and time-aligned with an accuracy of ±1 ms. Ground truth was generated using LiDAR-SLAM methods. In total, there are 11 data sequences covering 6 different scenarios for the winter season and 31 data sequences covering 14 different scenarios for the spring/summer season. Each data sequence is accompanied by a video demonstrating its content and by a detailed description, including known issues. The most common issues are relatively small missing fragments of data and frame-number sequence issues in the RGB-D sensor's output. The dataset is primarily designed for visual odometry tasks; however, it also includes loop closures so that event-based visual SLAM methods can be applied.

A. Zujevs is supported by the European Regional Development Fund within the Activity 1.1.1.2 “Post-doctoral Research Aid” of the Specific Aid Objective 1.1.1 (No. 1.1.1.2/VIAA/2/18/334), while the other authors are supported by the Latvian Council of Science (lzp-2018/1-0482).
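For readers unfamiliar with event-based sensors: unlike a conventional camera, a dynamic vision sensor produces no frames but emits an asynchronous stream of events, each typically encoded as a timestamp, pixel coordinates, and a polarity bit indicating a brightness increase or decrease. The sketch below is a minimal illustration of how such a stream might be consumed and accumulated into simple event-count images for downstream processing; the plain-text file layout (one `t x y p` record per line), the file name, and the assumed sensor resolution are hypothetical and do not describe this dataset's actual format.

    # Minimal sketch of consuming a DVS event stream and accumulating it into
    # signed event-count images. The text-file layout ("t x y p" per line) and
    # all names are hypothetical -- they do NOT describe this dataset's format.
    import numpy as np

    WIDTH, HEIGHT = 346, 260  # assumed sensor resolution, for illustration only

    def read_events(path):
        """Yield (timestamp_us, x, y, polarity) tuples from a plain-text file."""
        with open(path) as f:
            for line in f:
                t, x, y, p = line.split()
                yield int(t), int(x), int(y), int(p)

    def accumulate(events, window_us=10_000):
        """Group events into fixed 10 ms windows and yield signed count images."""
        img = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
        window_end = None
        for t, x, y, p in events:
            if window_end is None:
                window_end = t + window_us
            if t >= window_end:            # window is full: hand it to the caller
                yield img
                img = np.zeros_like(img)
                window_end = t + window_us
            img[y, x] += 1 if p else -1    # ON events add, OFF events subtract

    if __name__ == "__main__":
        for frame in accumulate(read_events("events.txt")):
            print("window event balance:", int(frame.sum()))

Fixed-window accumulation is only one of several common event representations (others include time surfaces and per-event processing); it is used here purely because it is the simplest way to visualize what an event stream contains.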