Step 3: Line Following
The so-called autonomous driving (enough to pass the assignment) just means making the robot follow the yellow line. If you have built a vision-based smart car before, this is (supposedly) easy.
Importing the Map
Import the map we built into the TurtleBot 4 simulation code.
Creating a Custom Package
First, create our own package in the turtlebot4_ws workspace.
Reference: Creating a package — ROS 2 Documentation: Jazzy documentation
cd ~/turtlebot4_ws/src
ros2 pkg create --build-type ament_python auto_drive --dependencies rclpy
cd ~/turtlebot4_ws
colcon build --symlink-install
Preparing the Map File
Create a world folder inside ~/turtlebot4_ws/src/auto_drive.
cd ~/turtlebot4_ws/src/auto_drive
mkdir world
Place our modeled field.dae file in the world folder.
Creating the SDF World File
Because TurtleBot 4 only supports launching worlds from its own path (see Turtlebot in custom world file in humble. · Issue #47 · turtlebot/turtlebot4_simulator), we create a new file drive_track.sdf in the ~/turtlebot4_ws/src/turtlebot4_simulator/turtlebot4_gz_bringup/worlds folder. Write it following maze.sdf and import the dae file. Reference code:
<?xml version="1.0" ?>
<sdf version="1.11">
  <world name="drive_track">
    <plugin name="gz::sim::systems::Physics" filename="gz-sim-physics-system" />
    <plugin name="gz::sim::systems::UserCommands" filename="gz-sim-user-commands-system" />
    <plugin name="gz::sim::systems::SceneBroadcaster" filename="gz-sim-scene-broadcaster-system" />
    <plugin name="gz::sim::systems::Contact" filename="gz-sim-contact-system" />
    <physics name="1ms" type="ignored">
      <max_step_size>0.003</max_step_size>
      <real_time_factor>1.0</real_time_factor>
    </physics>
    <light name='sun' type='directional'>
      <cast_shadows>1</cast_shadows>
      <pose>0 0 10 0 -0 0</pose>
      <diffuse>0.8 0.8 0.8 1</diffuse>
      <specular>0.2 0.2 0.2 1</specular>
      <attenuation>
        <range>1000</range>
        <constant>0.9</constant>
        <linear>0.01</linear>
        <quadratic>0.001</quadratic>
      </attenuation>
      <direction>-0.5 0.1 -0.9</direction>
      <spot>
        <inner_angle>0</inner_angle>
        <outer_angle>0</outer_angle>
        <falloff>0</falloff>
      </spot>
    </light>
    <gravity>0 0 -9.8</gravity>
    <magnetic_field>6e-06 2.3e-05 -4.2e-05</magnetic_field>
    <atmosphere type='adiabatic' />
    <scene>
      <ambient>0.4 0.4 0.4 1</ambient>
      <background>0.7 0.7 0.7 1</background>
      <shadows>1</shadows>
    </scene>
    <model name="field">
      <pose>0 0 0 0 0 0</pose>
      <static>true</static>
      <link name="field_link">
        <collision name="collision">
          <geometry>
            <!-- Collision mesh -->
            <mesh>
              <uri>(your path)/turtlebot4_ws/src/auto_drive/world/field.dae</uri>
            </mesh>
          </geometry>
        </collision>
        <visual name="visual">
          <geometry>
            <!-- Visual mesh -->
            <mesh>
              <uri>(your path)/turtlebot4_ws/src/auto_drive/world/field.dae</uri>
            </mesh>
          </geometry>
        </visual>
      </link>
    </model>
    <!-- <include>
      <uri>/home/zwz/turtlebot4_ws/src/auto_drive/world/stop.sdf</uri>
      <pose>-3.17 2.0 0.3 0.0 1.5708 0.0</pose>
    </include> -->
  </world>
</sdf>
Creating a Custom Launch File
Next, we modify the original launch file so it can start from our map. In the auto_drive package, create a launch folder and add a new file auto_drive_env.launch.py. Write the launch file following src/turtlebot4_simulator/turtlebot4_gz_bringup/launch/turtlebot4_gz.launch.py, but change the default world to drive_track. Reference code:
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution

ARGUMENTS = [
    DeclareLaunchArgument('namespace', default_value='',
                          description='Robot namespace'),
    DeclareLaunchArgument('rviz', default_value='false',
                          choices=['true', 'false'], description='Start rviz.'),
    DeclareLaunchArgument('world', default_value='drive_track',
                          description='Simulation World'),
    DeclareLaunchArgument('model', default_value='standard',
                          choices=['standard', 'lite'],
                          description='Turtlebot4 Model'),
]

for pose_element in ['x', 'y', 'z', 'yaw']:
    ARGUMENTS.append(DeclareLaunchArgument(pose_element, default_value='0.0',
                     description=f'{pose_element} component of the robot pose.'))


def generate_launch_description():
    # Directories
    pkg_turtlebot4_gz_bringup = get_package_share_directory(
        'turtlebot4_gz_bringup')

    # Paths
    gazebo_launch = PathJoinSubstitution(
        [pkg_turtlebot4_gz_bringup, 'launch', 'sim.launch.py'])
    robot_spawn_launch = PathJoinSubstitution(
        [pkg_turtlebot4_gz_bringup, 'launch', 'turtlebot4_spawn.launch.py'])

    gazebo = IncludeLaunchDescription(
        PythonLaunchDescriptionSource([gazebo_launch]),
        launch_arguments=[
            ('world', LaunchConfiguration('world'))
        ]
    )

    robot_spawn = IncludeLaunchDescription(
        PythonLaunchDescriptionSource([robot_spawn_launch]),
        launch_arguments=[
            ('namespace', LaunchConfiguration('namespace')),
            ('rviz', LaunchConfiguration('rviz')),
            ('x', LaunchConfiguration('x')),
            ('y', LaunchConfiguration('y')),
            ('z', LaunchConfiguration('z')),
            ('yaw', LaunchConfiguration('yaw'))]
    )

    # Create launch description and add actions
    ld = LaunchDescription(ARGUMENTS)
    ld.add_action(gazebo)
    ld.add_action(robot_spawn)
    return ld
Updating setup.py
To let ROS 2 launch from auto_drive_env.launch.py, we also need to modify the setup.py file in the auto_drive package.
Reference: Integrating launch files into ROS 2 packages — ROS 2 Documentation: Jazzy documentation
Add (os.path.join('share', package_name, 'launch'), glob(os.path.join('launch', '*launch.[pxy][yma]*'))) to the data_files list, then run colcon build --symlink-install in the workspace directory.
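For reference, the data_files list might then look like the sketch below. The first two entries are the standard boilerplate generated by ros2 pkg create; your generated setup.py may differ slightly.

```python
import os
from glob import glob

package_name = 'auto_drive'

# data_files after adding the launch entry; the first two entries are the
# usual ament_python boilerplate from `ros2 pkg create`.
data_files = [
    ('share/ament_index/resource_index/packages',
        ['resource/' + package_name]),
    (os.path.join('share', package_name), ['package.xml']),
    # Install every launch/*.launch.py (or .xml/.yaml) into share/auto_drive/launch
    (os.path.join('share', package_name, 'launch'),
        glob(os.path.join('launch', '*launch.[pxy][yma]*'))),
]

print(data_files[2][0])  # share/auto_drive/launch (on Linux)
```

The `[pxy][yma]*` glob is just a compact pattern that matches the .py, .xml, and .yaml launch-file extensions.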
Modifying the TurtleBot 4 Spawn Configuration
To keep the robot from getting stuck on the charging dock, we prevent the simulation from spawning the dock.
Comment out lines 158-169 of ~/turtlebot4_ws/src/turtlebot4_simulator/turtlebot4_gz_bringup/launch/turtlebot4_spawn.launch.py to stop the dock from spawning, and change the -0.0025 on line 124 to 0 so the robot lands on the ground.
Launching the Custom Environment
Finally, launch the file we just wrote.
ros2 launch auto_drive auto_drive_env.launch.py yaw:=1.5708
Note that setting yaw:=1.5708 here makes the robot face forward. At this point, you can use the keyboard to drive the robot around the 3D map we built.
The Line-Following Algorithm
The line-following algorithm relies on the camera. To figure out how to read camera messages, we can use the ros2 topic list command. From its output, the camera topic appears to be /oakd/rgb/preview/image_raw, and the topic that controls the robot's velocity is /cmd_vel_unstamped.
Creating the ROS 2 Node
Following the tutorial Writing a simple publisher and subscriber (Python) — ROS 2 Documentation: Jazzy documentation, write a ROS node that adjusts the robot's angular velocity based on the x coordinate of the yellow region in the image. Reference code (auto_drive_node.py):
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2
import numpy as np


class ADNode(Node):
    def __init__(self):
        super().__init__('auto_drive_node')
        self.vel_pub = self.create_publisher(Twist, '/cmd_vel_unstamped', 1)
        self.timer = self.create_timer(0.02, self.timer_callback)
        self.bridge = CvBridge()
        self.img_sub = self.create_subscription(Image,
                                                '/oakd/rgb/preview/image_raw',
                                                self.img_callback,
                                                1)
        self.linear_speed = 1.2
        self.yaw_speed = 0.0

    def img_callback(self, msg):
        try:
            cv_img = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        except Exception as e:
            self.get_logger().error(f'bridge.imgmsg_to_cv2 failed: {e}')
            return
        # Convert bgr8 to HSV
        hsv_img = cv2.cvtColor(cv_img, cv2.COLOR_BGR2HSV)
        lower_yellow = np.array([15, 50, 50])
        upper_yellow = np.array([45, 255, 255])
        # Filter out the yellow line
        mask = cv2.inRange(hsv_img, lower_yellow, upper_yellow)
        cv2.imshow('img', cv_img)
        cv2.waitKey(1)
        # Crop the image to focus on the lower part for line following
        mask = mask[220:, :]
        moments = cv2.moments(mask)
        try:
            # Calculate the x coordinate of the centroid of the yellow pixels
            x_center = moments['m10'] / moments['m00']
            # Calculate yaw speed based on the deviation from the image center
            # (assuming image width 320). Adjust the gain (0.08) as needed.
            self.yaw_speed = (160 - x_center) * 0.08
        except ZeroDivisionError:
            # Handle the case where no yellow pixels are detected (m00 is zero)
            self.get_logger().error("ERROR: moments['m00'] == 0 (No yellow line detected?)")
            # Current behavior: keep the last calculated yaw_speed
            # self.yaw_speed = 0.0  # Example: stop turning if line is lost
            pass

    def timer_callback(self):
        msg = Twist()
        msg.angular.z = self.yaw_speed
        msg.linear.x = self.linear_speed
        self.vel_pub.publish(msg)


def main(args=None):
    rclpy.init(args=args)
    ad_node = ADNode()
    rclpy.spin(ad_node)
    # Destroy the node explicitly
    # (optional - otherwise it will be done automatically
    # when the garbage collector destroys the node object)
    ad_node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
How the Algorithm Works
- Color filtering: filter out all yellow pixels in HSV color space, producing a binary mask.
- Region cropping: keep only the lower part of the mask as the basis for line following, reducing interference from irrelevant objects higher in the image (sky, future traffic lights, etc.).
- Centroid calculation: use OpenCV's cv2.moments to compute the image moments of the yellow region, then use the first-order moment (m10) and zeroth-order moment (m00) to get the x coordinate of the yellow region's centroid (x_center = m10 / m00).
- Error calculation: compute the error between the centroid's x coordinate and the image centerline (assuming an image width of 320, the center is at 160).
- Velocity control: multiply the error by a proportional gain (a P controller) to get the robot's angular velocity (yaw_speed), while keeping the linear velocity (linear_speed) constant.
- Command publishing: publish a Twist message with the linear and angular velocities to the /cmd_vel_unstamped topic at a fixed rate (set by the timer driving timer_callback).
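The centroid and P-controller math can be checked with a quick self-contained sketch in plain NumPy (no ROS or OpenCV needed). The synthetic mask, image width of 320, and gain of 0.08 mirror the node above; m00 and m10 are computed exactly as cv2.moments defines them for a single-channel image.

```python
import numpy as np

# Synthetic 100x320 binary mask: a 5-pixel-wide "yellow line" around x = 200
mask = np.zeros((100, 320), dtype=np.uint8)
mask[:, 198:203] = 255

# Image moments of a single-channel image, as cv2.moments defines them:
# m00 = sum of pixel values, m10 = sum of x * pixel value
ys, xs = np.nonzero(mask)
vals = mask[ys, xs].astype(np.int64)
m00 = vals.sum()
m10 = (xs * vals).sum()

x_center = m10 / m00                  # centroid x coordinate -> 200.0
yaw_speed = (160 - x_center) * 0.08   # P controller: center at 160, gain 0.08

print(x_center, yaw_speed)  # 200.0 -3.2
```

The sign works out as expected: a centroid right of center (x_center > 160) yields a negative yaw_speed, i.e. a clockwise turn back toward the line.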
Updating setup.py (Again)
Don't forget to add 'auto_drive_node = auto_drive.auto_drive_node:main' to the console_scripts list under entry_points in setup.py, then run colcon build --symlink-install.
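The relevant fragment of setup.py would then look something like this (only the entry_points argument is shown):

```python
# entry_points fragment of setup.py: registers the node as an executable
# that `ros2 run` can find under the name on the left of the '='.
entry_points = {
    'console_scripts': [
        # '<executable name> = <package>.<module>:<function>'
        'auto_drive_node = auto_drive.auto_drive_node:main',
    ],
}

print(entry_points['console_scripts'][0].split('=')[0].strip())  # auto_drive_node
```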
Testing the Line Follower
Time to test the line-following algorithm.
# Terminal 1
ros2 launch auto_drive auto_drive_env.launch.py yaw:=1.5708
# Terminal 2
ros2 run auto_drive auto_drive_node
The line follower is done!!!