Articles in this series

  1. Modeling the track [updated]
  2. TurtleBot 4 environment setup [updated]
  3. Line following [updated]
  4. Modeling the traffic light and stop sign [updated]
  5. Traffic light and stop sign recognition [updated]
  6. SLAM navigation [updated]

Step 5: Traffic Light and Stop Sign Recognition

The "recognition" here is nothing more than MSE.

MSE is the mean squared error.

We first take a screenshot of the red light, then compare each new camera frame against that screenshot by computing their MSE.

Concretely: take two images of the same size, subtract them pixel by pixel, square every difference, and average the results. This is a very crude and very inaccurate way of measuring how similar two images are. But since the goal is just to get the assignment done, and MSE works well enough in the simulation, it will do.
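For reference, here is a minimal sketch of that computation between two equally sized images (the function name is illustrative; the node below inlines the same steps):

import numpy as np

def mse(img_a, img_b):
    # images must have the same shape; cast to float so the subtraction does not wrap around
    diff = img_a.astype("float") - img_b.astype("float")
    return np.mean(np.square(diff))

The smaller the value, the more alike the two images are; identical images give 0.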

Reading the camera image

With the simulation running, use ros2 topic list to list all topics. By inspection, /oakd/rgb/preview/image_raw is the camera topic.
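To double-check, the standard ros2 CLI helps (the type reported for this topic should be sensor_msgs/msg/Image, assuming the simulation publishes it as in the previous articles):

ros2 topic list | grep image
ros2 topic info /oakd/rgb/preview/image_raw    # should report type sensor_msgs/msg/Image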

Write the code (auto_drive_node.py, see the earlier articles in this series)

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist, PoseStamped
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2
import numpy as np
import time
import os
from threading import Thread
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


class ADNode(Node):
    def __init__(self):
        super().__init__('auto_drive_node')
        self.vel_pub = self.create_publisher(Twist, '/cmd_vel_unstamped', 1)

        self.timer = self.create_timer(0.02, self.timer_callback)

        self.bridge = CvBridge()
        self.img_sub = self.create_subscription(Image,
                                                '/oakd/rgb/preview/image_raw',
                                                self.img_callback,
                                                1)

        self.linear_speed = 1.2
        self.yaw_speed = 0.0

        self.red_pattern = cv2.imread('/turtlebot4_ws/src/auto_drive/auto_drive/red_pattern.png')
        self.green_pattern = cv2.imread('/turtlebot4_ws/src/auto_drive/auto_drive/green_pattern.png')
        # cv2.imread does not expand '~', so expand it explicitly
        self.stop_pattern = cv2.imread(os.path.expanduser('~/turtlebot4_ws/src/auto_drive/auto_drive/stop_pattern.png'))

        self.cnt = 0
        # read in img_callback; set to True when navigation takes over (next article)
        self.start_navigation = False
        '''
        DO NOT USE BACK_SLASH OR ENTER

        create green light:
        gz service --service /world/drive_track/create --reqtype gz.msgs.EntityFactory --reptype gz.msgs.Boolean --timeout 2000 --req 'name: "green_light"; sdf_filename: "/home/zwz/turtlebot4_ws/src/auto_drive/world/green_light.sdf"; pose: {position: {x: -2, y: 4, z: 0}, orientation: {x: 0, y: 0, z: 1, w: 0}}'

        create red light:
        gz service --service /world/drive_track/create --reqtype gz.msgs.EntityFactory --reptype gz.msgs.Boolean --timeout 2000 --req 'name: "red_light"; sdf_filename: "/home/zwz/turtlebot4_ws/src/auto_drive/world/red_light.sdf"; pose: {position: {x: -2, y: 4, z: 0}, orientation: {x: 0, y: 0, z: 1, w: 0}}'

        remove red light:
        gz service --service /world/drive_track/remove --reqtype gz.msgs.Entity --reptype gz.msgs.Boolean --timeout 2000 --req 'type: MODEL; name: "red_light"'
        '''
        self.create_green_light_cmd = """gz service --service /world/drive_track/create --reqtype gz.msgs.EntityFactory --reptype gz.msgs.Boolean --timeout 2000 --req 'name: "green_light"; sdf_filename: "/home/zwz/turtlebot4_ws/src/auto_drive/world/green_light.sdf"; pose: {position: {x: -2, y: 4, z: 0}, orientation: {x: 0, y: 0, z: 1, w: 0}}'"""
        self.create_red_light_cmd = """gz service --service /world/drive_track/create --reqtype gz.msgs.EntityFactory --reptype gz.msgs.Boolean --timeout 2000 --req 'name: "red_light"; sdf_filename: "/home/zwz/turtlebot4_ws/src/auto_drive/world/red_light.sdf"; pose: {position: {x: -2, y: 4, z: 0}, orientation: {x: 0, y: 0, z: 1, w: 0}}'"""
        self.remove_red_light_cmd = """gz service --service /world/drive_track/remove --reqtype gz.msgs.Entity --reptype gz.msgs.Boolean --timeout 2000 --req 'type: MODEL; name: "red_light"'"""
        # for create and remove traffic light
        os.system(self.create_red_light_cmd)
        # os.system(self.create_green_light_cmd)

        self.change_traffic_light_thread = Thread(target=self.change_traffic_light)


    def change_traffic_light(self):
        time.sleep(7)
        os.system(self.create_green_light_cmd)
        os.system(self.remove_red_light_cmd)


    def img_callback(self, msg):
        try:
            cv_img = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        except Exception as e:
            self.get_logger().error(f'bridge.imgmsg_to_cv2 failed: {e}')
            return

        # turn bgr8 into hsv
        hsv_img = cv2.cvtColor(cv_img, cv2.COLOR_BGR2HSV)
        lower_yellow = np.array([15, 50, 50])
        upper_yellow = np.array([45, 255, 255])

        # filter out yellow line
        mask = cv2.inRange(hsv_img, lower_yellow, upper_yellow)

        # cv2.imwrite(f'/home/zwz/turtlebot4_ws/src/auto_drive/auto_drive/imgs/{self.cnt}.png', cv_img)
        # self.cnt += 1
        cv2.imshow('img', cv_img)
        cv2.waitKey(1)

        # print(mask.shape) (240, 320)
        mask = mask[220:, :]

        moments = cv2.moments(mask)

        if moments['m00'] == 0 or self.start_navigation == True:
            # self.start_navigation = True
            self.timer.destroy()
            # self.nav()
            return

        # check traffic light
        now_traffic_light_pattern = cv_img[5:133, 270:315, :]
        now_stop_sign_pattern = cv_img[20:128, 2:113, :]

        red_diff = now_traffic_light_pattern.astype("float") - self.red_pattern.astype("float")
        squared_red_diff = np.square(red_diff)
        red_mse = np.mean(squared_red_diff)

        green_diff = now_traffic_light_pattern.astype("float") - self.green_pattern.astype("float")
        squared_green_diff = np.square(green_diff)
        green_mse = np.mean(squared_green_diff)

        stop_diff = now_stop_sign_pattern.astype("float") - self.stop_pattern.astype("float")
        squared_stop_diff = np.square(stop_diff)
        stop_mse = np.mean(squared_stop_diff)
        print(f'red_mse: {red_mse}; green_mse: {green_mse}; stop_mse: {stop_mse}')
        if red_mse < 1000 and red_mse < green_mse:
            self.linear_speed = 0.0
            if not self.change_traffic_light_thread.is_alive():
                self.change_traffic_light_thread.start()
        if green_mse < 4000 and green_mse < red_mse:
            self.linear_speed = 1.2
        if stop_mse < 3000:
            self.linear_speed = 0.0  # must be a float, Twist fields reject ints
            time.sleep(7)
            self.linear_speed = 1.2
            time.sleep(2)

        try:
            x_center = moments['m10'] / moments['m00']
            self.yaw_speed = (160 - x_center) * 0.08
        except:
            self.get_logger().error("ERROR:moments['m00'] == 0")
            pass


    def timer_callback(self):
        msg = Twist()
        msg.angular.z = self.yaw_speed
        msg.linear.x = self.linear_speed
        self.vel_pub.publish(msg)


def main(args=None):
    rclpy.init(args=args)

    ad_node = ADNode()

    rclpy.spin(ad_node)

    ad_node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

Code notes:

  • Uncomment the cv2.imwrite(...) and self.cnt += 1 lines in img_callback to save every frame the camera captures along the way; the pattern images are cropped from those frames.
  • The MSE computation (the astype / np.square / np.mean lines) uses numpy; cv_bridge returns the image as a numpy array, which is also the format cv2 works with.
  • The MSE thresholds are fairly arbitrary; anything below roughly 10000 works here.
  • To switch the red light to green I used a separate thread. Threads can be fiddly; a minimal sketch follows this list, and any Python threading tutorial covers the rest.
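The sketch: the thread is created once, started only when the red light is first detected, and is_alive() guards against starting it twice (names here are illustrative; the real work is the pair of gz commands shown in the node):

import time
from threading import Thread

def switch_light():
    # wait a while, then swap the red model for the green one in the simulator
    time.sleep(7)
    print('red light removed, green light created')

t = Thread(target=switch_light)

# later, when the red light is detected:
if not t.is_alive():
    t.start()  # note: a Thread object can only be started once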

Pattern images

[Pattern images: red_pattern.png, green_pattern.png, stop_pattern.png]

Put the pattern images at the paths the code loads them from, i.e. the cv2.imread(...) calls in __init__.
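One pitfall: cv2.imread returns None instead of raising an error when a path is wrong, which only shows up later as a confusing failure in the MSE code. A small check right after loading (not in the original node, just a suggestion) catches it early:

# inside __init__, right after loading the three patterns
for name, pattern in [('red', self.red_pattern), ('green', self.green_pattern), ('stop', self.stop_pattern)]:
    if pattern is None:
        raise FileNotFoundError(f'{name}_pattern.png could not be loaded, check the path')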

Pattern image coordinates

now_traffic_light_pattern = cv_img[5:133, 270:315, :]
now_stop_sign_pattern = cv_img[20:128, 2:113, :]

I got these by counting pixels against the cv2.imshow window... crude, but it works well enough.
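If counting pixels by hand gets tedious, OpenCV's selectROI lets you drag a rectangle over a saved frame and read off the crop directly (a small helper sketch; the file name is just an example):

import cv2

img = cv2.imread('0.png')  # one of the frames saved by the imwrite line in img_callback
x, y, w, h = cv2.selectROI('select pattern', img)  # drag a box, then press ENTER
cv2.destroyAllWindows()
# the slice to use in the node is img[y:y+h, x:x+w, :]
print(f'img[{y}:{y + h}, {x}:{x + w}, :]')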

Running it

ros2 launch auto_drive auto_drive_env.launch.py yaw:=1.5708
ros2 run auto_drive auto_drive_node.py

[Screenshot of the running simulation, including the SLAM view]

The SLAM part of the screenshot is covered in the next article.

Traffic light + STOP sign recognition works!!!

