MDK Overview

A quick demonstration

Now that you've gone through the Getting Started page, you should know how to open a WSL-ROS terminal window and switch to the MiRo Developer Kit environment.

Let's start with a quick demonstration: launching the Gazebo simulator with the MiRo robot and opening an interactive GUI to control it. You'll need three terminal windows (or tabs) open; execute the following commands:

    [Terminal 1]: $ roscore
    [Terminal 2]: $ miro_sim
    [Terminal 3]: $ miro_gui

Success

Now you should be able to see and control MiRo in its default environment!

MiRo GUI

So what happened when you ran those commands?

  1. As you may remember, roscore is used to start (or connect to) a ROS server and initialise networking.
  2. The miro_sim command in Terminal 2 is a WSL-ROS shorthand command (alias) that changes to the MDK simulator folder (cd ~/mdk/sim) and then launches the simulator (./launch_sim.sh).
  3. Finally, the miro_gui command in Terminal 3 is another WSL-ROS alias that switches to the scripts folder (cd ~/mdk/bin/shared) and runs ./client_gui.py.

Warning

When developing programs for MiRo be careful to use the MDK catkin workspace at ~/mdk/catkin_ws, and not the TB3 catkin workspace at ~/catkin_ws!

MDK structure

Let's take a closer look at what exactly MDK is and where it lives inside the bigger ROS ecosystem.

Success

The information here is accurate for the most recent MDK version at the time of writing: R230105

The MiRo Developer Kit (MDK) is an assortment of code, plugins and libraries required to simulate, control and interface with the MiRo robot. As you might have noticed, all the MDK files live in a dedicated folder in the user's home directory: ~/mdk

To be more precise, ~/mdk is actually a soft link (i.e. a pointer) to the MDK package's location on disk. In the WSL-ROS environment this is ~/pkgs/mdk-230105/

Danger

Be careful not to accidentally delete the ~/pkgs/ folder and the mdk sub-folder(s) within, as this will break your environment!

Let's have a look at the most important files and folders within the MDK folder structure:

mdk
├── bin                                     # Compiled libraries and scripts
│   ├── arm32
│   ├── arm64
│   ├── deb64                               # x64 architecture, the one we use
│   │   ├── client_remote
│   │   ├── install_mdk.sh
│   │   ├── libmiro_gazebo.so               # MiRo compiled plugin for Gazebo
│   │   ├── libmiro_kc.so                   # Kinematic chain compiled plugin
│   │   ├── libmiro_net.a
│   │   ├── libmiro_support.a
│   │   ├── node_body                       # Kinematic chain API
│   │   ├── node_voice                      # Vocalisation API
│   │   └── README
│   ├── script
│   │   ├── common.sh
│   │   └── install_mdk.sh                  # Install script
│   └── shared
│       ├── client_audio.py                 # Audio test client
│       ├── client_demo_config.py
│       ├── client_demo_interact.py
│       ├── client_drive_voice.py
│       ├── client_follower.py
│       ├── client_gui.glade
│       ├── client_gui.py                   # Main GUI interface
│       ├── client_map.py
│       ├── client_minimal.py
│       ├── client_read_battery.py
│       ├── client_show.py
│       ├── client_stream.py
│       ├── client_test.py
│       ├── client_video.py
│       ├── example_kc.py                   # Kinematic chain example
│       ├── README
│       └── robot
│           ├── client_manual.py
│           ├── client_minimal.py
│           └── client_video.py
├── catkin_ws                               # MDK catkin workspace
│   ├── build                               # `catkin build` output
│   ├── devel                               # `catkin build` intermediary
│   ├── install                             # install space
│   ├── logs
│   ├── makefile
│   └── src                                 # Source folder
│       ├── CMakeLists.txt
│       ├── COM3528                         # COM3528 teaching materials
│       └── miro2_msg                       # MiRo custom messages (do not edit)
│           ├── msg
│           │   ├── adjust.msg
│           │   ├── affect_adjust.msg
│           │   ├── affect.msg
│           │   ├── animal_adjust.msg
│           │   ├── animal_state.msg
│           │   ├── BatteryState.msg
│           │   ├── funnel_web.msg
│           │   ├── img_annotation.msg
│           │   ├── object_ball.msg
│           │   ├── object_face.msg
│           │   ├── objects.msg
│           │   ├── object_tag.msg
│           │   ├── priority_peak.msg
│           │   ├── push.msg
│           │   ├── sensors_package.msg
│           │   ├── sleep_adjust.msg
│           │   ├── sleep.msg
│           │   └── voice_state.msg
│           ├── package.xml
│           └── setup.py
├── CHANGELOG
├── LICENSE
├── README
├── RELEASE
├── share
│   ├── config                              # MiRo config files:
│   │   ├── demo_parameters.py              # A copy of this folder
│   │   ├── platform_parameters             # is placed in ~/.miro2
│   │   ├── simulation_parameters           # where these settings can be edited
│   │   └── user_setup.bash
│   ├── media                               # Built-in audio files
│   ├── miroapp
│   └── python                              # Python libraries and scripts
│       └── miro2
│           ├── constants.py                # Platform constants
│           ├── core                        # The demo mode controller
│           │   ├── action                  # Actions used in the biomimetic model
│           │   │   ├── action_approach.py
│           │   │   ├── action_avert.py
│           │   │   ├── action_flee.py
│           │   │   ├── action_halt.py
│           │   │   ├── action_mull.py
│           │   │   ├── action_orient.py
│           │   │   ├── action_retreat.py
│           │   │   ├── action_special.py
│           │   │   ├── action_types.py
│           │   │   └── basal_ganglia.py    # Action selection
│           │   ├── client_demo.py          # Demo mode launch script
│           │   ├── node_action.py          # Nodes with specific functionality
│           │   ├── node_affect.py
│           │   ├── node_decode.py
│           │   ├── node_detect_april.py
│           │   ├── node_detect_audio_engine.py
│           │   ├── node_detect_audio_perftest.py
│           │   ├── node_detect_audio.py
│           │   ├── node_detect_ball.py
│           │   ├── node_detect_face.py
│           │   ├── node_detect_motion.py
│           │   ├── node_express.py
│           │   ├── node_loop.py
│           │   ├── node_lower.py
│           │   ├── node.py
│           │   ├── nodes
│           │   ├── node_spatial.py
│           │   ├── pars.py
│           │   ├── patterns
│           │   └── systems
│           └── lib                         # Various libraries
│               ├── camera_model.py         # Camera model
│               ├── general_utils.py
│               ├── kc_interf.py
│               ├── kc.py
│               ├── miro_utils.py
│               ├── platform_pars.py
│               ├── PoseController.py
│               ├── RobotInterface.py
│               └── types.py
└── sim                                     # Assets for the simulator
    ├── launch_full.sh                      # Default script + Gazebo-ROS bridge
    ├── launch_sim.sh                       # Default simulator launch script
    ├── models                              # Gazebo models
    │   └── miro_arena                      # Example of a model structure
    │       ├── meshes
    │       │   ├── cqr_logo.png            # Mesh files
    │       │   ├── cqr_panel_coarse.dae    # Textures
    │       │   └── cqr_panel.dae
    │       ├── miro_arena.sdf              # Model description
    │       └── model.config                # Model metadata
    └── worlds                              # Gazebo worlds
        ├── cube.world
        ├── football.world
        ├── miro.world
        ├── petshop.world
        └── tables.world

To summarise:

  • Code for MiRo’s autonomous mode (~/mdk/share/python/miro2/core)
  • Various tests and utils (~/mdk/bin/shared) and (~/mdk/share/python/miro2/lib)
  • Gazebo: MiRo and other models, world files, and simulator launch scripts (~/mdk/sim)
  • MiRo catkin workspace (~/mdk/catkin_ws/src/) - place your packages here!
  • MiRo-specific ROS message definitions (~/mdk/catkin_ws/src/miro2_msg)

ROS refresher

Tip

To delve deeper, have a look at Tom's COM2009 tutorials

MiRo uses the Robot Operating System (ROS) as middleware to manage the communication of data between program code and the robot's sensors and effectors. Every ROS network requires a single, continually running ROS master (or ROS core) that manages all connected systems. The environment variable $ROS_MASTER_URI specifies where to start (or look for) the ROS core on the network. When using a physical MiRo, a ROS core is started when the robot boots, so you don't need to start one. When working with the simulator, $ROS_MASTER_URI points at your own machine via localhost, 127.0.0.1, or the machine's network IP; to start the core, simply type roscore in the terminal. You don't have to change $ROS_MASTER_URI on WSL machines, but in the next session you will learn how to do this quickly on the laptops, since those can be used with all three MiRo profiles.
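
If you ever want to check for a reachable master from your own code, the rosgraph package (part of any standard ROS 1 installation) provides a simple test. This snippet is purely illustrative and not part of the MDK:

import rosgraph  # Ships with ros_comm in any ROS 1 install

if rosgraph.is_master_online():
    print("ROS master is reachable")
else:
    print("No ROS master found -- did you run roscore?")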

A ROS node is “a process that performs computation”. A MiRo robot in demo mode will have several co-existing ROS nodes managing the cameras and general sensory state, and your own script will need to initialise a ROS node in order to issue and receive ROS commands. The rosnode command can perform various operations on ROS nodes; for example, if you have the simulator running or are connected to the real robot, you can run $ rosnode list in the terminal to view the currently running nodes.

ROS topics are named buses that carry messages containing the actual packaged data sent between nodes on the ROS network. Run $ rostopic list in the terminal to view the topics currently active on your MiRo. For example, the topic /miro/sensors/caml contains message data from MiRo’s left camera, and /miro/control/kinematic_joints contains message data that controls the movement of MiRo’s head and neck.

Messages are encoded as a specific type; these may be a standard type such as String or a MiRo-specific type such as JointState. The rostopic and rosmsg commands can perform various operations on ROS topics and messages respectively.

All ROS topics are available to all nodes on the network; any node may publish (send data to) a topic of any name and may similarly subscribe to (request data from) any topic. For more information, see the ROS Computation Graph section of the ROS Wiki.

Python

When it comes to programming languages, ROS (and hence the MDK) properly supports only two: C++ and Python. However, most of the MDK code is written in Python, which is why this guide covers Python only.

Bug

Before the 2021 release, the MDK only supported ROS Kinetic and Melodic, which by default use Python 2. We aim to write our code in a version-agnostic way, but occasionally a Python 2/3 inconsistency may crop up. Please let us know if you spot any errors of this nature.

The essential package that facilitates ROS-Python communication is rospy. To get started with it, you will need to, at a minimum:

  1. Import message type definitions
  2. Initialise a ROS node
  3. Set up a publisher and/or subscriber to communicate with other nodes on the ROS network

These steps might look like this:

Message types

from std_msgs.msg import Float32MultiArray, UInt32MultiArray, UInt16MultiArray, UInt8MultiArray, UInt16, UInt32, Int16MultiArray, String
from geometry_msgs.msg import Pose2D, TwistStamped
from sensor_msgs.msg import BatteryState, CompressedImage, Image, Imu, JointState, Range

You may of course omit message types you do not intend to use.

ROS Node

rospy.init_node('<NODE NAME>', anonymous=True)

The anonymous=True flag is not strictly required but will help avoid conflicting node names.

ROS Publisher

pub = rospy.Publisher('<TOPIC NAME>', <MESSAGE TYPE>, queue_size=<N>)
pub.publish(<DATA>)

ROS Subscriber

rospy.Subscriber('<TOPIC NAME>', <MESSAGE TYPE>, <FUNCTION>)

Here, <FUNCTION> is a callback function that will be called every time the ROS topic receives a new message.
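
Putting all three steps together, here is a minimal sketch of a complete node. The node and topic names here are illustrative, not part of the MDK:

#!/usr/bin/env python3
import rospy
from std_msgs.msg import String

def callback(msg):
    # Called whenever a new message arrives on the subscribed topic
    rospy.loginfo("Heard: %s", msg.data)

rospy.init_node("pubsub_demo", anonymous=True)  # Step 2: initialise the node
pub = rospy.Publisher("/chatter", String, queue_size=10)  # Step 3a: publisher
rospy.Subscriber("/chatter", String, callback)  # Step 3b: subscriber

rate = rospy.Rate(1)  # Loop at 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data="hello"))
    rate.sleep()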

Going further, it may be useful to look at the client GUI (~/mdk/bin/shared/client_gui.py), the demo code (~/mdk/share/python/miro2/core/client_demo.py), or one of the provided examples to see how this works in practice.

Download the COM3528 repo

The teaching materials for this course are available to download from GitHub, and are organised as a ROS metapackage for your convenience (meaning multiple packages are bundled up under one name).

You can use the following commands to clone and compile the COM3528 code repo in your WSL-ROS environment; the shorthand 'source' command src is included at the end so that ROS becomes aware of the new packages:

[Terminal 3] $ cd ~/mdk/catkin_ws/src
[Terminal 3] $ git clone https://github.com/AlexandrLucas/COM3528
[Terminal 3] $ catkin build
[Terminal 3] $ src

MiRo playing with the toy ball

You should have three terminal windows open from before, running the ROS core, the Gazebo simulator, and the client GUI, respectively.
Close the GUI in the third tab and run the kick_blue_ball.py script, either using rosrun:

[Terminal 3] $ rosrun com3528_examples kick_blue_ball.py

or directly:

[Terminal 3] $ roscd com3528_examples/src
[Terminal 3] $ python3 kick_blue_ball.py

In this program, image processing is used to detect a blue ball in both camera images. Based on the relative position of the ball in the images, MiRo turns so that it faces the ball, then moves forward to try and hit it.

Now let's have a look at the code in more detail.

1. Shebang line

#!/usr/bin/env python3

The presence of the shebang line informs the terminal shell that this file can be run directly and, if so, which program should be used to do so. Now, assuming you're still in the same folder, try calling the script directly:

[Terminal 3] $ ./kick_blue_ball.py

If you receive errors when running the script this way, check that it has the correct permissions. To make a file executable, run the chmod command: $ chmod +x kick_blue_ball.py

However, with the ROS packaging system, one can go even further. Having a compiled package in the catkin_ws means that ROS nodes can be called through rosrun. There are two main advantages to this approach:

  • Autocompletion
  • Path independence

This means you can be in any directory and need only remember the first few letters of the package and script names to run it.

[Terminal 3] $ rosrun co[TAB]m3528_e[TAB]xamples ki[TAB]ck_blue_ball.py

Pressing TAB twice lists all of the available options.

This rosrun functionality should be enabled by default in the WSL-ROS environment. If using a custom ROS installation, add source ~/mdk/catkin_ws/devel/setup.bash on the next line after source ~/mdk/setup.bash in your .bashrc profile.

2. Imports

# Imports
##########################
import os
from math import radians  # This is used to reset the head pose
import numpy as np  # Numerical Analysis library
import cv2  # Computer Vision library

import rospy  # ROS Python interface
from sensor_msgs.msg import CompressedImage  # ROS CompressedImage message
from sensor_msgs.msg import JointState  # ROS joints state message
from cv_bridge import CvBridge, CvBridgeError  # ROS -> OpenCV converter
from geometry_msgs.msg import TwistStamped  # ROS cmd_vel (velocity control) message


import miro2 as miro  # Import MiRo Developer Kit library

try:  # For convenience, import this util separately
    from miro2.lib import wheel_speed2cmd_vel  # Python 3
except ImportError:
    from miro2.utils import wheel_speed2cmd_vel  # Python 2
##########################

For convenience, the imports are divided into sections, going from standard and widely-used packages, to ROS and ROS-specific messages, and then to the MDK, built on top of ROS specifically to interface with the MiRo robot. The final import is done separately, since it is sensitive to the Python version: the MDK structure changed in the move from Python 2 to 3.

3. Class attributes

The main bulk of the program is contained in the attributes and functions of the MiRoClient class. Class constants (attributes in capital letters) come first and are used to set various parameters of the script. For example, changing DEBUG to True will display intermediate steps in the image processing. Try changing a few values!

TICK = 0.01  # This is the update interval for the main control loop in secs
CAM_FREQ = 1  # Number of ticks before camera gets a new frame, increase in case of network lag
NODE_EXISTS = False  # Disables (True) / Enables (False) rospy.init_node
SLOW = 0.1  # Radial speed when turning on the spot (rad/s)
FAST = 0.4  # Linear speed when kicking the ball (m/s)
DEBUG = False  # Set to True to enable debug views of the cameras

4. Initialisation

This is the very first thing that happens when the script is run. Here, we set up the publishers, subscribers and callbacks that are required to communicate with other nodes on the ROS network, as well as some variables that help carry information between topics.

def __init__(self):
    # Initialise a new ROS node to communicate with MiRo
    if not self.NODE_EXISTS:
        rospy.init_node("kick_blue_ball", anonymous=True)
    # Give it some time to make sure everything is initialised
    rospy.sleep(2.0)
    # Initialise CV Bridge
    self.image_converter = CvBridge()
    # Individual robot name acts as ROS topic prefix
    topic_base_name = "/" + os.getenv("MIRO_ROBOT_NAME")
    # Create two new subscribers to receive camera images with attached callbacks
    self.sub_caml = rospy.Subscriber(
        topic_base_name + "/sensors/caml/compressed",
        CompressedImage,
        self.callback_caml,
        queue_size=1,
        tcp_nodelay=True,
    )
    self.sub_camr = rospy.Subscriber(
        topic_base_name + "/sensors/camr/compressed",
        CompressedImage,
        self.callback_camr,
        queue_size=1,
        tcp_nodelay=True,
    )
    # Create a new publisher to send velocity commands to the robot
    self.vel_pub = rospy.Publisher(
        topic_base_name + "/control/cmd_vel", TwistStamped, queue_size=0
    )
    # Create a new publisher to move the robot head
    self.pub_kin = rospy.Publisher(
        topic_base_name + "/control/kinematic_joints", JointState, queue_size=0
    )
    # Create handle to store images
    self.input_camera = [None, None]
    # New frame notification
    self.new_frame = [False, False]
    # Create variable to store a list of ball's x, y, and r values for each camera
    self.ball = [None, None]
    # Set the default frame width (gets updated on receiving an image)
    self.frame_width = 640
    # Action selector to reduce duplicate printing to the terminal
    self.just_switched = True
    # Bookmark
    self.bookmark = 0
    # Move the head to default pose
    self.reset_head_pose()
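
The two camera callbacks registered above (callback_caml and callback_camr) are not reproduced in this walkthrough. In essence, each one decodes the compressed ROS image into an OpenCV array, stores it, and raises the corresponding new-frame flag. A minimal sketch of what they might look like; the actual script's version also updates the frame width and image centre:

def callback_caml(self, ros_image):  # Left camera
    self.callback_cam(ros_image, 0)

def callback_camr(self, ros_image):  # Right camera
    self.callback_cam(ros_image, 1)

def callback_cam(self, ros_image, index):
    try:
        # Decode the compressed ROS image into an OpenCV array
        image = self.image_converter.compressed_imgmsg_to_cv2(ros_image, "rgb8")
        # Store the frame and flag it as new
        self.input_camera[index] = image
        self.new_frame[index] = True
    except CvBridgeError as e:
        # Ignore corrupted frames
        print("Conversion of ROS image to OpenCV failed:", e)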

The reset_head_pose function makes sure that MiRo's head is not tilted to one side. If it were, calculating the correct position of the ball in the image would also have to account for the roll angle of the cameras.
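
The function itself is not shown in this walkthrough; here is a sketch of the idea, using the pub_kin publisher and the radians import from earlier (the joint values are illustrative):

def reset_head_pose(self):
    """
    Reset MiRo's head to a level, forward-facing pose
    """
    kin_joints = JointState()
    kin_joints.name = ["tilt", "lift", "yaw", "pitch"]
    kin_joints.position = [0.0, radians(34.0), 0.0, 0.0]
    t = 0
    while not rospy.core.is_shutdown():
        # Publish the target pose repeatedly for ~1 second
        self.pub_kin.publish(kin_joints)
        rospy.sleep(self.TICK)
        t += self.TICK
        if t > 1:
            break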

5. Main loop

In the main loop, MiRo cycles between three behaviours via the status_code variable:

  1. Turning on the spot if it doesn't see a blue ball.
  2. Once MiRo has caught sight of the ball with at least one of its cameras, turning so that the ball is dead centre.
  3. Moving forward to try and hit the ball with its body.

Upon successful completion of stage 3, or if any of the stages fail, MiRo goes back to stage 1. On its own, the main loop can run as fast as the CPU allows (which is very fast), but in practice its usefulness is limited by the frequency of the robot's sensor updates. This is why rospy.sleep(self.TICK) is added at the end.

print("MiRo plays ball, press CTRL+C to halt...")
# Main control loop iteration counter
self.counter = 0
# This switch loops through MiRo behaviours:
# Find ball, lock on to the ball and kick ball
self.status_code = 0
while not rospy.core.is_shutdown():

    # Step 1. Find ball
    if self.status_code == 1:
        # Every once in a while, look for ball
        if self.counter % self.CAM_FREQ == 0:
            self.look_for_ball()

    # Step 2. Orient towards it
    elif self.status_code == 2:
        self.lock_onto_ball()

    # Step 3. Kick!
    elif self.status_code == 3:
        self.kick()

    # Fall back
    else:
        self.status_code = 1

    # Yield
    self.counter += 1
    rospy.sleep(self.TICK)

6. Action 1: Looking for ball

The first thing that happens when the look_for_ball action is selected is the lowering of the just_switched flag, to prevent the same message being printed over and over again. Then MiRo checks whether a new frame has arrived from either camera (remember, these are unrelated, asynchronous events!), and if so calls the detect_ball function (explained later). If there is indeed a blue ball in the image, MiRo switches to the next action; otherwise it simply rotates clockwise in the hope of finding one.

def look_for_ball(self):
    """
    [1 of 3] Rotate MiRo if it doesn't see a ball in its current
    position, until it sees one.
    """
    if self.just_switched:  # Print once
        print("MiRo is looking for the ball...")
        self.just_switched = False
    for index in range(2):  # For each camera (0 = left, 1 = right)
        # Skip if there's no new image, in case the network is choking
        if not self.new_frame[index]:
            continue
        image = self.input_camera[index]
        # Run the detect ball procedure
        self.ball[index] = self.detect_ball(image, index)
    # If no ball has been detected
    if not self.ball[0] and not self.ball[1]:
        self.drive(self.SLOW, -self.SLOW)
    else:
        self.status_code = 2  # Switch to the second action
        self.just_switched = True
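
The self.drive calls seen here wrap the velocity publisher set up in __init__. The helper is not reproduced in this walkthrough; here is a sketch of what it does, assuming wheel_speed2cmd_vel (imported earlier) converts a [left, right] wheel speed pair into linear and angular body velocities:

def drive(self, speed_l=0.1, speed_r=0.1):  # Wheel speeds in m/s
    # Convert the wheel speeds into body velocity (m/s, rad/s)
    (dr, dtheta) = wheel_speed2cmd_vel([speed_l, speed_r])
    # Pack them into a TwistStamped message...
    msg_cmd_vel = TwistStamped()
    msg_cmd_vel.twist.linear.x = dr
    msg_cmd_vel.twist.angular.z = dtheta
    # ...and publish it to the control/cmd_vel topic
    self.vel_pub.publish(msg_cmd_vel)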

7. Action 2: Locking onto the ball

The second action, lock_onto_ball, is very similar to the first, but with added conditions that check which cameras can see the ball. Most importantly, if the ball is seen by both cameras, MiRo tweaks its orientation so that the ball is directly in front of it. This procedure uses the horizontal coordinate of the centre of the ball in both frames; once these values are within an acceptable error range, MiRo switches to the third and final action.

def lock_onto_ball(self, error=25):
    """
    [2 of 3] Once a ball has been detected, turn MiRo to face it
    """
    if self.just_switched:  # Print once
        print("MiRo is locking on to the ball")
        self.just_switched = False
    for index in range(2):  # For each camera (0 = left, 1 = right)
        # Skip if there's no new image, in case the network is choking
        if not self.new_frame[index]:
            continue
        image = self.input_camera[index]
        # Run the detect ball procedure
        self.ball[index] = self.detect_ball(image, index)
    # If only the right camera sees the ball, rotate clockwise
    if not self.ball[0] and self.ball[1]:
        self.drive(self.SLOW, -self.SLOW)
    # Conversely, rotate counter-clockwise
    elif self.ball[0] and not self.ball[1]:
        self.drive(-self.SLOW, self.SLOW)
    # Make the MiRo face the ball if it's visible with both cameras
    elif self.ball[0] and self.ball[1]:
        error = 0.05  # 5% of image width
        # Use the normalised values
        left_x = self.ball[0][0]  # should be in range [0.0, 0.5]
        right_x = self.ball[1][0]  # should be in range [-0.5, 0.0]
        rotation_speed = 0.03  # Turn even slower now
        if abs(left_x) - abs(right_x) > error:
            self.drive(rotation_speed, -rotation_speed)  # turn clockwise
        elif abs(left_x) - abs(right_x) < -error:
            self.drive(-rotation_speed, rotation_speed)  # turn counterclockwise
        else:
            # Successfully turned to face the ball
            self.status_code = 3  # Switch to the third action
            self.just_switched = True
            self.bookmark = self.counter
    # Otherwise, the ball is lost :-(
    else:
        self.status_code = 0  # Go back to square 1...
        print("MiRo has lost the ball...")
        self.just_switched = True

8. Action 3: Kicking the ball

The last action is where MiRo (metaphorically) kicks the ball by simply driving into it. The "drive forward" movement lasts for 2 seconds, irrespective of the self.TICK value. If the ball isn't reached on the first try, the assumption is that it didn't roll very far away (and is hopefully still centred), so MiRo can simply repeat the movement.

def kick(self):
    """
    [3 of 3] Once MiRO is in position, this function should drive the MiRo
    forward until it kicks the ball!
    """
    if self.just_switched:
        print("MiRo is kicking the ball!")
        self.just_switched = False
    if self.counter <= self.bookmark + 2 / self.TICK:
        self.drive(self.FAST, self.FAST)
    else:
        self.status_code = 0  # Back to the default state after the kick
        self.just_switched = True

9. Appendix: Detecting the ball

Finally, we'll look at the details of the image processing using the OpenCV library that helps detect the ball.

  1. It's important to remember that OpenCV's default colour order is BGR, so colour-space conversions need to take this into account.
  2. As was found empirically, the blue toy ball (and blue in general) is more distinct in the HSV (hue, saturation, value) colour space.
  3. The variance in the desired colour is expressed as a corresponding hue range. Based on this range, a mask is generated and applied to the image, leaving only the desired hue range.
  4. The image is smoothed to reduce noise and other artefacts.
  5. Hough Circle Transform is applied to get a list of circles on the image, together with their positions and radii.
  6. The biggest circle present is selected, and its normalised position and radius are returned as the function output.

def detect_ball(self, frame, index):
      """
      Image processing operations, fine-tuned to detect a small,
      toy blue ball in a given frame.
      """
      if frame is None:  # Sanity check
          return

      # Debug window to show the frame
      if self.DEBUG:
          cv2.imshow("camera" + str(index), frame)
          cv2.waitKey(1)

      # Flag this frame as processed, so that it's not reused in case of lag
      self.new_frame[index] = False
      # Get image in HSV (hue, saturation, value) colour format
      im_hsv = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)

      # Specify target ball colour
      rgb_colour = np.uint8([[[255, 0, 0]]])  # e.g. Blue (Note: BGR)
      # Convert this colour to HSV colour model
      hsv_colour = cv2.cvtColor(rgb_colour, cv2.COLOR_RGB2HSV)

      # Extract colour boundaries for masking image
      # Get the hue value from the numpy array containing target colour
      target_hue = hsv_colour[0, 0][0]
      hsv_lo_end = np.array([target_hue - 20, 70, 70])
      hsv_hi_end = np.array([target_hue + 20, 255, 255])

      # Generate the mask based on the desired hue range
      mask = cv2.inRange(im_hsv, hsv_lo_end, hsv_hi_end)
      mask_on_image = cv2.bitwise_and(im_hsv, im_hsv, mask=mask)

      # Debug window to show the mask
      if self.DEBUG:
          cv2.imshow("mask" + str(index), mask_on_image)
          cv2.waitKey(1)

      # Clean up the image
      seg = mask
      seg = cv2.GaussianBlur(seg, (5, 5), 0)
      seg = cv2.erode(seg, None, iterations=2)
      seg = cv2.dilate(seg, None, iterations=2)

      # Fine-tune parameters
      ball_detect_min_dist_between_cens = 40  # Empirical
      canny_high_thresh = 10  # Empirical
      ball_detect_sensitivity = 10  # Lower detects more circles, so it's a trade-off
      ball_detect_min_radius = 5  # Arbitrary, empirical
      ball_detect_max_radius = 50  # Arbitrary, empirical

      # Find circles using OpenCV routine
      # This function returns a list of circles, with their x, y and r values
      circles = cv2.HoughCircles(
          seg,
          cv2.HOUGH_GRADIENT,
          1,
          ball_detect_min_dist_between_cens,
          param1=canny_high_thresh,
          param2=ball_detect_sensitivity,
          minRadius=ball_detect_min_radius,
          maxRadius=ball_detect_max_radius,
      )

      if circles is None:
          # If no circles were found, there's nothing to return
          return

      # Get the largest circle
      max_circle = None
      self.max_rad = 0
      circles = np.uint16(np.around(circles))
      for c in circles[0, :]:
          if c[2] > self.max_rad:
              self.max_rad = c[2]
              max_circle = c
      # This shouldn't happen, but you never know...
      if max_circle is None:
          return

      # Append detected circle and its centre to the frame
      cv2.circle(frame, (max_circle[0], max_circle[1]), max_circle[2], (0, 255, 0), 2)
      cv2.circle(frame, (max_circle[0], max_circle[1]), 2, (0, 0, 255), 3)
      if self.DEBUG:
          cv2.imshow("circles" + str(index), frame)
          cv2.waitKey(1)

      # Normalise values to: x,y = [-0.5, 0.5], r = [0, 1]
      max_circle = np.array(max_circle).astype("float32")
      max_circle[0] -= self.x_centre
      max_circle[0] /= self.frame_width
      max_circle[1] -= self.y_centre
      max_circle[1] /= self.frame_width
      max_circle[1] *= -1.0
      max_circle[2] /= self.frame_width

      # Return a list values [x, y, r] for the largest circle
      return [max_circle[0], max_circle[1], max_circle[2]]

Exercises

Hopefully, this worked example has helped you get to grips with the structure and flow of a typical MiRo program. Now would be a good time to reinforce what you've learned so far with some practice. Using the code in the kick_blue_ball.py script as a template, try making the following changes/improvements to MiRo's behaviour in that scenario:

  1. As it stands, MiRo only ever turns right when it doesn't see a ball. Help MiRo memorise which side the ball was last seen on and turn in that direction.
  2. Sometimes just turning on the spot is not good enough. Add some short movements in a random direction to MiRo's search behaviour, in the hope of getting a better vantage point. Don't forget to add cliff sensor checks in case MiRo is on an elevated surface!

  3. Try merging the turning and the moving forward into one single movement. This might be especially useful in the enclosed MiRo Arena, as the ball can keep moving for quite a long time after a good, clean kick.

You can edit the provided script or create new Python scripts in the same folder. These can then also be run using the same rosrun method.

=====

This completes the tutorial for the Week 1 lab session.

Remember to save your files and settings with wsl_ros backup once you're done!