greenhouse
This repository contains code for detecting heat pipes in the greenhouse as well as estimating the pose of the pipes.
Platform: ROS 2 Humble, Ubuntu 22.04
How to install dependencies?
- Install git
$ sudo apt install git
- Install ROS 2 Humble on Ubuntu 22.04 by following https://docs.ros.org/en/foxy/Installation/Ubuntu-Install-Debians.html
- Install colcon build by following https://docs.ros.org/en/foxy/Tutorials/Beginner-CLI-Tools/Configuring-ROS2-Environment.html
$ sudo apt install python3-colcon-common-extensions
- Install pip for Python packages
$ sudo apt install python3-pip
- Now clone the repository
$ git clone https://tea.der-space.de/apoorva/greenhouse.git
- Install ultralytics for the yolov3 package
$ pip install ultralytics
- For yolov3_ros, there are several additional requirements. Go to the yolov3 folder and install them using the following commands:
$ cd ~/greenhouse/yolov3
$ pip install -r requirements.txt
- Go to ros2_ws inside greenhouse. Make sure it contains only the src folder.
$ cd ~/greenhouse/ros2_ws
$ ls
src
- Inside the ros2_ws folder, build the individual packages in the sequence below to avoid errors.
$ colcon build --packages-select pipe_msgs
$ colcon build --packages-select pcl_ros
$ colcon build --allow-overriding pcl_ros
$ colcon build --packages-select pcl_conversions
$ colcon build --allow-overriding pcl_conversions
$ colcon build --packages-select find-pose
$ colcon build --packages-select yolov3_ros
$ . install/setup.bash
- The code should now be ready to launch as explained in How to run live detection?
The following sections explain what each module is responsible for.
perception_pcl
This module provides the pcl_conversions and pcl_ros packages for ROS 2.
To build, run the following commands:
$ cd ros2_ws/
$ colcon build --packages-select perception_pcl
$ . install/setup.bash
pipe_msgs
This module contains ROS messages for storing information about a detected object's bounding box.
$ cd ros2_ws/
$ colcon build --packages-select pipe_msgs
$ . install/setup.bash
To check that the messages are built properly, run the following command:
$ ros2 interface show pipe_msgs/msg/BoundingBox
The output will be:
float64 probability
int64 xmin
int64 ymin
int64 xmax
int64 ymax
int16 id
string class_id
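For reference, here is a minimal sketch of how the message can be filled and published from Python. This is an illustrative example, not part of the repository; it assumes the workspace is built and sourced, and it publishes a single BoundingBox on /bboxes even though the detector may group several boxes per frame.
# Illustrative example only: publish one pipe_msgs BoundingBox on /bboxes.
import rclpy
from rclpy.node import Node
from pipe_msgs.msg import BoundingBox

class BBoxPublisherExample(Node):
    def __init__(self):
        super().__init__('bbox_publisher_example')
        self.pub = self.create_publisher(BoundingBox, '/bboxes', 10)
        self.create_timer(1.0, self.publish_once)

    def publish_once(self):
        msg = BoundingBox()
        msg.probability = 0.9            # detection confidence
        msg.xmin, msg.ymin = 100, 50     # top-left corner in pixels
        msg.xmax, msg.ymax = 300, 200    # bottom-right corner in pixels
        msg.id = 1                       # detection id
        msg.class_id = 'pipe'            # class label
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(BBoxPublisherExample())

if __name__ == '__main__':
    main()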
find-pose
This ROS module is responsible for determining the position of the detected objects. The following input ROS topics are needed:
- /rgb_img: RGB Image topic
- /camera_info: Camera calibration parameters topic
- /depth_img: Aligned depth image topic (aligned with rgb image)
- /bboxes: Bounding box of each detected object (comes from the yolov3 detection module).
Output:
- TF: Transform between the camera_link and detected_object frames.
How to build and run?
This package depends on the custom pcl_conversions and pcl_ros packages. Make sure you have built those before building this package.
$ colcon build --packages-select find-pose
$ . install/setup.bash
$ ros2 launch find-pose find-pose-node.launch.py
All the topics can be remapped in the launch file.
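Conceptually, the pose estimate comes from standard pinhole back-projection: take a pixel inside the detected bounding box, read the aligned depth at that pixel, and use the intrinsics from /camera_info to recover a 3D point in the camera frame, which is then broadcast as a TF. The snippet below is only a sketch of that math, not the package's actual code; it assumes the depth value is encoded in millimetres, as is common for aligned depth images.
# Illustrative pinhole back-projection; not the actual find-pose implementation.
def pixel_to_camera_point(u, v, depth_mm, camera_info):
    """Convert pixel (u, v) plus aligned depth into a 3D point in the camera frame."""
    fx = camera_info.k[0]   # focal length x
    fy = camera_info.k[4]   # focal length y
    cx = camera_info.k[2]   # principal point x
    cy = camera_info.k[5]   # principal point y
    z = depth_mm / 1000.0   # depth in metres (assumes millimetre encoding)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z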
yolov3_ros
This ROS module is responsible for detecting pipes from the RGB image topic and also time-syncing the depth, RGB and camera_info topics. The following input ROS topics are needed:
- /camera/color/image_raw: RGB Image topic
- /camera/aligned_depth_to_color/image_raw: Aligned depth image topic (aligned with rgb image)
- /camera/color/camera_info: Camera calibration parameters topic
ROS Parameter Input:
- best_weights: String that is the path to the best weights file of yolov3 detection
Default: 'src/pipe_weights.pt' inside the ros2_ws folder
The following are the output topics:
- /detection_image: The RGB image topic with bounding boxes drawn on it for visualization and debugging purposes
- /bboxes: Bounding box of each detected object.
- /rgb_img: Time Synced RGB Image topic
- /camera_info: Time Synced Camera calibration parameters topic
- /depth_img: Time Synced Aligned depth image topic (aligned with rgb image)
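The time syncing is typically done with message_filters. The sketch below shows the general pattern with the topic names from this README; it is illustrative and not the actual yolov3_ros node.
# Illustrative time-sync of RGB, depth and camera_info using message_filters.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from message_filters import Subscriber, ApproximateTimeSynchronizer

class SyncExample(Node):
    def __init__(self):
        super().__init__('sync_example')
        rgb = Subscriber(self, Image, '/camera/color/image_raw')
        depth = Subscriber(self, Image, '/camera/aligned_depth_to_color/image_raw')
        info = Subscriber(self, CameraInfo, '/camera/color/camera_info')
        # Accept triples whose time stamps differ by at most 0.1 s.
        self.sync = ApproximateTimeSynchronizer([rgb, depth, info], queue_size=10, slop=0.1)
        self.sync.registerCallback(self.on_synced)

    def on_synced(self, rgb_msg, depth_msg, info_msg):
        self.get_logger().info('Received a time-synced RGB/depth/camera_info triple')

def main():
    rclpy.init()
    rclpy.spin(SyncExample())

if __name__ == '__main__':
    main()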
How to build and run?
$ cd greenhouse/ros2_ws/
$ colcon build --packages-select yolov3_ros
$ . install/setup.bash
$ ros2 launch yolov3_ros pipe_detection.launch.py
All the topics can be remapped in the launch file. The path to best_weights can also be changed inside the launch file.
The launch file is stored in yolov3_ros/launch/.
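As a reference, remapping a topic and setting best_weights in a ROS 2 launch file generally looks like the sketch below. The executable name and the remap target here are assumptions; check the actual pipe_detection.launch.py for the real values.
# Illustrative launch file; executable name and remap target are assumptions.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='yolov3_ros',
            executable='pipe_detection',   # assumed executable name
            parameters=[{'best_weights': 'src/pipe_weights.pt'}],
            remappings=[
                # Example remap if your camera publishes under a different name.
                ('/camera/color/image_raw', '/my_camera/color/image_raw'),
            ],
        ),
    ])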
yolov3
This module contains code for yolov3. This method is being used to train the model to detect pipes in the greenhouse.
- The ROS bag was converted from ROS 1 to ROS 2 using the rosbags module.
- The ROS 2 bag was played, convert_2_img was used to subscribe to the images, and all images were saved as .jpeg files in a folder.
- Images were labeled using labelImg.
- Create a custom-yolov3.yaml file with information about the number of classes and the locations of the labels and images (see the example data file after this list).
- Google Colab was used to run the yolov3 training.
- Upload the labels and images to Google Drive and mount the drive in the Python notebook:
from google.colab import drive
drive.mount("/content/gdrive")
- Upload the yolov3 code and cd into the location of the code:
%cd /content/gdrive/MyDrive/yolov3
- Run the training script
!python train.py --img 1280 --batch 16 --epochs 300 --data data/custom-yolov3.yaml --weights '' --cfg yolov3.yaml
- Important to note that we are not using any pre-trained weights.
- Once training is finished, a runs folder is created with an exp number that stores the best weights (a .pt file).
- This file is then used by the detection node to run detection on new data. Update the image path and weights path to run detection:
!python detect.py --img 1280 --source ../pipe-dataset/validate/images/stereo_image103.jpeg --weights runs/train/exp14/weights/best.pt
- Once detect.py is finished, it creates a new folder called detect inside the runs folder that stores the image with the bounding box of the detected object.
More details: https://github.com/ultralytics/yolov3
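As a reference for the custom-yolov3.yaml data file mentioned in the steps above, such a file for the ultralytics yolov3 trainer generally has the shape below. The paths and class name here are placeholders, not the exact values used in this project.
# Illustrative custom-yolov3.yaml; paths and class names are placeholders.
train: ../pipe-dataset/train/images      # folder with training images
val: ../pipe-dataset/validate/images     # folder with validation images
nc: 1                                    # number of classes
names: ['pipe']                          # class names, in label-index order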
convert_2_img
This ROS module converts rosbag data to images, e.g. for YOLO training.
- Make sure you have this module inside a ros workspace.
- Create a folder called stereo inside the convert_2_img module.
- Run the following commands to launch the node. Currently, this node listens to the /camera/color/image_raw topic.
$ cd ros2_ws/src/convert_2_img
$ python3 convert_2_img/convert_to_img.py
- Play the rosbag in another terminal
$ ros2 bag play bag/bag.db
- Once the bag has finished playing, the images will be stored inside the stereo folder.
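Internally this is just an image subscriber that writes each frame to disk. A minimal sketch of that pattern is shown below; it is illustrative, not the exact convert_to_img.py script, and assumes cv_bridge and OpenCV are installed and the stereo folder exists.
# Illustrative image-saving subscriber; not the exact convert_to_img.py script.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class ImageSaver(Node):
    def __init__(self):
        super().__init__('convert_to_img_example')
        self.bridge = CvBridge()
        self.count = 0
        self.create_subscription(Image, '/camera/color/image_raw', self.save, 10)

    def save(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imwrite(f'stereo/stereo_image{self.count}.jpeg', frame)  # save as .jpeg
        self.count += 1

def main():
    rclpy.init()
    rclpy.spin(ImageSaver())

if __name__ == '__main__':
    main()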
labelImg
This module is used to label images for YOLO. The pre-defined custom classes file was changed to use the new labels. This file is stored at labelImg/data/predefined_classes.txt.
To launch the GUI, run:
$ cd labelImg
$ python3 labelImg.py
More details: https://github.com/heartexlabs/labelImg
rosbags
This module converts rosbags from ROS 1 to ROS 2.
$ git clone https://gitlab.com/ternaris/rosbags.git
$ cd rosbags
$ python -m venv venv
$ . venv/bin/activate
$ pip install -r requirements-dev.txt
$ pip install -e .
$ rosbags-convert ../single_depth_color_640x480.bag
flann_based
Module that uses a 3D model of the pipe to estimate its pose. This method was not successful.
yolov7
This module contains code for yolov7. This method was too computationally heavy and did not produce good detection results.
darknet_ros2
This module provides an interface to run neural networks as ROS nodes. The purpose was to use the YOLO model as a ROS node, but this method was not successful.
darknet_vendor
This module was needed to build darknet_ros2.
How to run live detection?
Once you have successfully built the ROS modules, specifically yolov3_ros and find-pose along with their dependencies pipe_msgs and perception_pcl, you can do the following:
Assumption:
You have the following required data in the form of ROS topics:
- /camera/color/image_raw: RGB Image topic
- /camera/aligned_depth_to_color/image_raw: Aligned depth image topic (aligned with rgb image)
- /camera/color/camera_info: Camera calibration parameters topic
How do you get this data?
- This data can come from a pre-recorded rosbag. If this is the case, do the following:
$ ros2 bag play bag-folder/bag-name.db3
- This data can come directly from a camera's (D455/ZED2i) ROS node: launch your camera node in a terminal.
- If your camera topics have different names than the default topic names mentioned above, update/remap ONLY the pipe_detection.launch.py script stored in the ros2_ws/src/yolov3_ros/launch folder.
- Once you have updated the launch file, build the code again using colcon build --packages-select yolov3_ros and proceed as mentioned below.
Steps to run:
- Open a terminal.
- Run $ cd greenhouse/ros2_ws/ and $ . install/setup.bash.
- Launch the node for object detection:
$ ros2 launch yolov3_ros pipe_detection.launch.py
- This node will output two topics: /bboxes and /detection_image.
- Make sure that you run this launch file from the greenhouse/ros2_ws/ folder since the path of the weights is relative (src/pipe_weights.pt) inside the launch file.
- Launch the node for pose estimation:
$ ros2 launch find-pose find-pose-node.launch.py
- This node will output TF topics between /camera_link and /${detected_object_name}.
- Open RVIZ by running $ rviz2. Change the fixed frame from map to camera_link.
- Go to the Add button. Under By Display type, select tf. Once tf is added, select the required frames like camera_link and l_trail to see the TFs.
- To see the current image with the detected object, go to the Add button. Under By Topic, select the topic called detection_image.
- You can add other topics as per your needs and topic names.
- You can open the launch files to update/remap topic names if a different camera is being used.
- You can also update ROS parameters from the launch file. Currently, the pipe_weights.pt file is the one used. This file can be changed, and you can update the best_weights parameter inside the pipe_detection.launch.py file.