Morloth's AIs

Creation of my 3D engine and horror game. Part 1 - introduction.

Permalink 12/11/13 06:20, by morloth, Categories: 3D Engine

Hello everybody,

I've been busy the past few weeks with the creation of my own 3D engine, with the aim of creating a horror game. I will give regular updates on this blog and on my YouTube channel. The aim is to have a fairly simple 3D engine that contains the bare minimum to create my game and to slowly extend it as I create subsequent games. The videos I have created to date can be watched here.

I am currently trying to get animation working within my engine using the ASSIMP library (http://assimp.sourceforge.net/). I will report on my progress as I go along. Do note that this is a hobby project that I work on in my free time, so progress is going to be slow and erratic.

Enjoy the upcoming posts and once the game is done it will be free for everyone :).

Cheers,

Bram

Integrating ROS with Gazebo Part 2

Permalink 02/15/13 08:24, by morloth, Categories: robots

Last time we showed how to get ROS and Gazebo up and running, and we finished by spawning and driving around a simplified Roomba robot. Now we are going to control the robot using ROS nodes. This way we detach the control of the robot from the simulator, which will allow us to swap the simulator for a physical robot when the time comes.

Plugins

In order to connect the model of the robot to the actual control of the robot we are going to make use of plugins. Plugins allow us to connect parts of the model of a robot to ROS nodes which can control them. Let's have a look at how a plugin is defined in the model file. Open {roomba and quadrotor directory}/trunk/models/my_robot/model.sdf and scroll all the way to the bottom, where you will see these lines:

    <plugin name="ros_based_plugin_for_motor" filename="libros_motor_controller.so">
      <left_wheel_hinge>left_wheel_hinge</left_wheel_hinge>
      <right_wheel_hinge>right_wheel_hinge</right_wheel_hinge>
    </plugin>
    <plugin name="ros_based_plugin_for_laser" filename="libros_laser_sensor.so">
      <gain>5.0</gain>
      <ray_sensor>laser</ray_sensor>
    </plugin>

These lines define two plugins. The first one controls the motors and is aptly named "ros_based_plugin_for_motor". Note that the value of 'filename' refers to a library. The second plugin controls the laser, is named "ros_based_plugin_for_laser", and refers to a different library. All the children of the plugin elements (e.g. left_wheel_hinge) are parameters which can be read by the libraries.

In our case we have a very simple control scheme: one of the libraries takes care of controlling the motors, while the other is responsible for broadcasting the readings made by the laser.

Making the plugins

Now that we have seen how the plugins are declared, we will show how they are implemented. First of all, go to the directory {roomba and quadrotor directory}/trunk/src and open CMakeLists.txt in your favourite editor. You will see that we build two libraries:

# Library to control the wheels of the roomba.
add_library(ros_motor_controller SHARED roomba/motor_controller_plugin.cpp)
set_target_properties(ros_motor_controller PROPERTIES COMPILE_FLAGS "${roscpp_CFLAGS_OTHER}")
set_target_properties(ros_motor_controller PROPERTIES LINK_FLAGS "${roscpp_LDFLAGS_OTHER}")
target_link_libraries(ros_motor_controller ${roscpp_LIBRARIES})
install (TARGETS ros_motor_controller DESTINATION ${CMAKE_INSTALL_PREFIX}/lib/gazebo_plugins/)

# Library to extract laser information from the sensor.
add_library(ros_laser_sensor SHARED roomba/laser_sensor_plugin.cpp)
set_target_properties(ros_laser_sensor PROPERTIES COMPILE_FLAGS "${roscpp_CFLAGS_OTHER}")
set_target_properties(ros_laser_sensor PROPERTIES LINK_FLAGS "${roscpp_LDFLAGS_OTHER}")
target_link_libraries(ros_laser_sensor ${roscpp_LIBRARIES})
install (TARGETS ros_laser_sensor DESTINATION ${CMAKE_INSTALL_PREFIX}/lib/gazebo_plugins/)

Both libraries consist of only a single source file -- roomba/motor_controller_plugin.cpp and roomba/laser_sensor_plugin.cpp respectively -- and are installed into the ${CMAKE_INSTALL_PREFIX}/lib/gazebo_plugins/ directory. You can choose the value of ${CMAKE_INSTALL_PREFIX} during configuration (see cmake), but it is important that you update the environment variable GAZEBO_PLUGIN_PATH to include this directory, otherwise your plugin libraries will not be found!

Also note that we have added the following lines:

rosbuild_init()
rosbuild_genmsg()

The first line initialises the build environment for ROS and the second generates any messages which we wish to send or receive. In our case we want to send messages to the motors of the Roomba and receive messages from its laser. For the latter we use the default sensor_msgs/LaserScan message, but for the former we have created our own message type (exciting!).

Custom motor messages

In order to create a custom message we create the directory {roomba and quadrotor directory}/trunk/src/msg and create a file called motor_ctr.msg which looks as follows:

Header header
float64 left_wheel
float64 right_wheel

The first line is a header which includes data like a time stamp and the sequence number of the message. It allows us to determine when a message was sent and whether we missed any messages. The next two items are floats which specify the amount of force (N) we want to apply to both motors. When you execute 'make', ROS will create a number of source and header files in {roomba and quadrotor directory}/trunk/src/msg_gen for all found messages. These files allow you to integrate the messages in your code.
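
As an illustration (not part of the project files), a hypothetical test node could include the generated header and publish one of these messages. The package name src and the topic /motor_controller/motor are assumptions that match the includes and topics used later in this post:

#include <ros/ros.h>
#include <src/motor_ctr.h>  // header generated from msg/motor_ctr.msg

int main(int argc, char** argv)
{
  ros::init(argc, argv, "motor_ctr_example");
  ros::NodeHandle n;
  ros::Publisher pub = n.advertise<src::motor_ctr>("/motor_controller/motor", 1);

  src::motor_ctr msg;
  msg.header.stamp = ros::Time::now();  // time stamp carried in the header
  msg.left_wheel = 0.1;                 // force (N) to apply to the left motor
  msg.right_wheel = 0.1;                // force (N) to apply to the right motor

  ros::Duration(1.0).sleep();           // give subscribers a moment to connect
  pub.publish(msg);
  return 0;
}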

Code for the motors

Now that we have covered the preliminaries, let's dive into the code of both controllers, starting with the motor controller. We will step through roomba/motor_controller_plugin.cpp and explain what the lines do.

#ifndef ROOMBA_MOTOR_CONTROLLER_PLUGIN_H
#define ROOMBA_MOTOR_CONTROLLER_PLUGIN_H

This is the standard header guard practice; it will save you a lot of trouble with multiple inclusions!

#include <gazebo.hh>
#include "ros/ros.h"
//#include "std_msgs/Float64.h"
#include <src/motor_ctr.h>

The Gazebo header is added in order to make this class a plugin. Similarly we need the ROS headers. The last include is the message header we generated ourselves and contains the functionality to send messages to both motors of the Roomba (see the previous section).

namespace gazebo
{

class ROSMotorControllerPlugin : public ModelPlugin

We create our plugin inside the gazebo namespace and we must inherit from the base class gazebo::ModelPlugin, which defines the method

void Load(physics::ModelPtr _parent, sdf::ElementPtr _sdf);

This function is called when the plugin is loaded. The ModelPtr contains the model the plugin is created for. Recall from our definition of the model that we embedded the plugin elements in the model "my_robot". Through the physics::ModelPtr we can access all the links and joints which are children of "my_robot".

{
public:
        ROSMotorControllerPlugin();
        ~ROSMotorControllerPlugin();
        void Load(physics::ModelPtr _parent, sdf::ElementPtr _sdf);
private:
        bool LoadParams(sdf::ElementPtr _sdf);
        bool FindJointByParam(sdf::ElementPtr _sdf, physics::JointPtr &_joint, std::string _param);
        void ROSCallback(const src::motor_ctrPtr& msg);
        void OnUpdate();

These are a bunch of methods which we will explain in more detail when we step through the source file.

        // Pointer to the model
        physics::ModelPtr model;

        // Pointer to the update event connection
        event::ConnectionPtr updateConnection;

        physics::JointPtr leftWheelJoint;
        physics::JointPtr rightWheelJoint;

        // ROS Nodehandle
        ros::NodeHandle* node;

        // ROS Subscriber
        ros::Subscriber sub;
};

};
#endif

Here we store the data which we will need in order to operate the motors. These include the actual joints of the wheels and some ROS specific handles like a subscriber and node.

Let's have a look at the actual source code:

#include "motor_controller_plugin.h"

#include <boost/bind.hpp>
#include <physics/physics.hh>
#include <sensors/sensors.hh>
#include <common/common.hh>
#include <stdio.h>

namespace gazebo
{
  ROSMotorControllerPlugin::ROSMotorControllerPlugin()
  {
    // Start up ROS
    std::string name = "ros_model_plugin_node";
    int argc = 0;
    ros::init(argc, NULL, name);
  }

  ROSMotorControllerPlugin::~ROSMotorControllerPlugin()
  {
    delete this->node;
  }

The constructor initialises ROS and registers itself under the name "ros_model_plugin_node". The destructor deletes the ROS node.

  void ROSMotorControllerPlugin::Load(physics::ModelPtr _parent, 
            sdf::ElementPtr _sdf)
  {
    // Store the pointer to the model
    this->model = _parent;

    // Load parameters for this plugin
    if (this->LoadParams(_sdf))
    {
      // testing to see if race condition exists
      gzerr << this ->leftWheelJoint->GetAngle(0) << "\n";
      gzerr << this ->rightWheelJoint->GetAngle(0) << "\n";

      // Listen to the update event. This event is broadcast every
      // simulation iteration.
      this->updateConnection = event::Events::ConnectWorldUpdateStart(
           boost::bind(&ROSMotorControllerPlugin::OnUpdate, this));

      // ROS Nodehandle
      this->node = new ros::NodeHandle("motor_controller");

      // ROS Listener
      this->sub = this->node->subscribe("/motor_controller/motor", 1000, 
             &ROSMotorControllerPlugin::ROSCallback, this );
      if (!sub)
      {
        ROS_ERROR("Could not instantiate subscriber for /motor_controller/motor!");
      }
    }
    else
    {
      ROS_WARN("Could not load the model!");
    }
  }

This method is called when the plugin is loaded. Note that we load the parameters which are defined in the model (left_wheel_hinge and right_wheel_hinge) in order to discover the names of the joints which control the wheels. Next we set up a subscriber which listens on the topic /motor_controller/motor for the custom message type we defined above. When such a message is received the method ROSCallback is called.
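
The source of LoadParams and FindJointByParam is not reproduced in this post. Conceptually they read the left_wheel_hinge and right_wheel_hinge parameters from the plugin's SDF element and look up the corresponding joints on the model. A rough sketch of what they need to do (the exact SDF accessor names differ between Gazebo/SDF versions, so treat this as an outline rather than the project's actual code):

  bool ROSMotorControllerPlugin::LoadParams(sdf::ElementPtr _sdf)
  {
    // Resolve both wheel joints from the parameters given in model.sdf.
    return FindJointByParam(_sdf, this->leftWheelJoint, "left_wheel_hinge") &&
           FindJointByParam(_sdf, this->rightWheelJoint, "right_wheel_hinge");
  }

  bool ROSMotorControllerPlugin::FindJointByParam(sdf::ElementPtr _sdf,
            physics::JointPtr &_joint, std::string _param)
  {
    if (!_sdf->HasElement(_param))
    {
      gzerr << "Parameter [" << _param << "] is missing from the plugin element!\n";
      return false;
    }

    // Read the joint name from the SDF element and look it up on the model.
    std::string joint_name = _sdf->GetElement(_param)->GetValueString();
    _joint = this->model->GetJoint(joint_name);
    if (!_joint)
    {
      gzerr << "Joint [" << joint_name << "] was not found in the model!\n";
      return false;
    }
    return true;
  }

The callback ROSCallback itself looks as follows: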

  void ROSMotorControllerPlugin::ROSCallback(const src::motor_ctrPtr& 
            msg)
  {
    ROS_INFO("Got a command for the motors: [%f,%f]", 
            msg->left_wheel, msg->right_wheel);
    // Make sure the jerk is not too bad.
    this->leftWheelJoint->SetForce(0, msg->left_wheel);
    this->rightWheelJoint->SetForce(0, msg->right_wheel);
  }

This method sends commands to the joints of the wheels and applies the force specified in the received message to each of them. These calls are part of Gazebo's API, but when we work with a real robot we can write another application which listens to the same messages and sends the commands to the actual robot instead of the simulator.
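
The body of OnUpdate is not shown in the post. Since all the work happens in ROSCallback, in a setup like this it typically only needs to pump the ROS callback queue once per simulation step; a minimal sketch:

  // Called by the world update start event, once per simulation iteration.
  void ROSMotorControllerPlugin::OnUpdate()
  {
    // Process any pending ROS callbacks (such as ROSCallback above).
    ros::spinOnce();
  }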

Finally we add:

GZ_REGISTER_MODEL_PLUGIN(ROSMotorControllerPlugin);

This macro registers the class as a model plugin with Gazebo.

Now this class listens for messages on /motor_controller/motor and sends the appropriate control commands to the motors (simulated or otherwise). Next we will look at a class that actually sends messages instead of receiving them -- the laser.

The Laser system

Luckily we do not have to create our own messages in order to communicate laser data. Instead there is a default message in ROS called sensor_msgs/LaserScan which is used in a few ROS packages. This message contains a header, the angles covered by the laser, the ranges found, the intensities found, etc. The header file of our laser system is very similar to that of the motor controller, so we will not repeat it here; instead we will focus on the relevant bits in the source file.
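
For reference, the standard sensor_msgs/LaserScan definition contains the following fields (comments omitted; check your ROS installation for the authoritative version):

Header header
float32 angle_min
float32 angle_max
float32 angle_increment
float32 time_increment
float32 scan_time
float32 range_min
float32 range_max
float32[] ranges
float32[] intensities

Now for the Load method of the sensor plugin: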

  void ROSSensorPlugin::Load(physics::ModelPtr _parent, 
             sdf::ElementPtr _sdf)
  {
    // Store the pointer to the model
    this->model = _parent;

    // Load parameters for this plugin
    if (this->LoadParams(_sdf))
    {
      // Listen to the update event. This event is broadcast every
      // simulation iteration.
      this->updateConnection = event::Events::ConnectWorldUpdateStart(
             boost::bind(&ROSSensorPlugin::OnUpdate, this));

      // ROS Nodehandle
      this->node = new ros::NodeHandle("sensor");

      // Setup the publisher of laser info.
      scan_pub = node->advertise<sensor_msgs::LaserScan>("scan", 50);
    }
    else
    {
      ROS_WARN("Could not load the model!");
    }
  }

This method stores the pointer to the model and loads the parameters connected to the laser. Then it registers the method "OnUpdate" to be called every time the simulator is updated and advertises the topic "/sensor/scan", over which sensor_msgs::LaserScan messages will be sent.

  // Called by the world update start event
  void ROSSensorPlugin::OnUpdate()
  {
    double laser_frequency = 10;

    // Gather laser data and send it over!
    double angle_scanned = std::abs(laser->GetAngleMin().Radian()) + 
              std::abs(laser->GetAngleMax().Radian());
    ros::Time scan_time = ros::Time::now();
    sensor_msgs::LaserScan scan;
    scan.header.stamp = scan_time;
    scan.header.frame_id = "laser_frame";
    scan.angle_min = laser->GetAngleMin().Radian();
    scan.angle_max = laser->GetAngleMax().Radian();
    scan.angle_increment = angle_scanned / laser->GetRangeCount();
    scan.time_increment = (1.0 / laser_frequency) / 
              laser->GetRangeCount();
    scan.range_min = laser->GetRangeMin();
    scan.range_max = laser->GetRangeMax();
    scan.ranges.resize(laser->GetRangeCount());
    scan.intensities.resize(laser->GetRangeCount());

    // Do not allow the values to be overwritten by the new scan.
    laser->SetActive(false);
    for (unsigned int i = 0; i  < laser->GetRangeCount(); ++i)
    {
      scan.intensities[i] = laser->GetRetro(i);
      scan.ranges[i] = laser->GetRange(i);
    }
    laser->SetActive(true);

    scan_pub.publish(scan);
  }

Whenever the simulator is updated we generate a laser message which includes the current time, the min / max angle of the laser, its frequency, the ranges found, etc. This laser message is then published on the topic "/sensor/scan" for anyone to read.
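
To illustrate how any other node can consume this data, here is a small, hypothetical listener (not part of the project) that subscribes to /sensor/scan and reports the closest obstacle:

#include <algorithm>

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

// Report the closest range seen in each incoming scan.
void PrintClosestRange(const sensor_msgs::LaserScan& scan)
{
  if (scan.ranges.empty())
    return;
  float closest = *std::min_element(scan.ranges.begin(), scan.ranges.end());
  ROS_INFO("Closest obstacle at %.2f m.", closest);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_echo_example");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("/sensor/scan", 10, PrintClosestRange);
  ros::spin();
  return 0;
}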

Bringing it all together

In order to showcase how these nodes can be utilised we made a very small application that uses the laser, searches for the direction in which the laser range is longest, and turns the robot to face that direction. We have called this little application "avoid_walls". We have added the following line to CMakeLists.txt to make it compile:

rosbuild_add_executable(avoid_walls roomba/avoid_walls_controller.cpp)

Let's have a look at the code:

int main(int argc, char** argv)
{
        ROS_INFO("AVOID WALLS v1.0 started!");
        ros::init(argc, argv, "avoid_walls_controller");
        ros::NodeHandle n;
        ros::Subscriber s = n.subscribe("/sensor/scan", 
             100, LaserReader);
        if (!s)
        {
                ROS_ERROR("Could not subscribe to /sensor/scan!");
                exit(1);
        }
        motor_pub = n.advertise<src::motor_ctr>("/motor_controller/motor", 5);
        listener = new tf::TransformListener(ros::Duration(10));
        ros::Rate loop_rate(10);

        // We have a reactive system!
        while (ros::ok())
        {
                ros::spinOnce();
                loop_rate.sleep();
        }
        ROS_INFO("AVOID WALLS finished!");
        delete listener;

        return 0;
}

This bit of code initialises ROS and then subscribes to the topic "/sensor/scan"; upon receiving a message the method LaserReader is called. It also declares that it will advertise on the topic "/motor_controller/motor" in order to control the motors. Next it goes into an infinite loop and checks for messages ten times per second. However, the real action happens in the LaserReader method. We will break it down below.

void LaserReader(const sensor_msgs::LaserScan& scan)
{
        // Find the transformation from the centre of the robot to 
        // the centre of the laser.
        geometry_msgs::PointStamped laser_point;
        laser_point.header.frame_id = "hokuyo_joint";
        laser_point.header.stamp = ros::Time();
        laser_point.point.x = 0.0;
        laser_point.point.y = 0.0;
        laser_point.point.z = 0.0;

        geometry_msgs::PointStamped base_point;
        try
        {
                listener->transformPoint("chassis", 
                    laser_point, base_point);
        }
        catch (tf::TransformException& ex)
        {
                ROS_ERROR("Cannot find the transformation from hokuyo_joint -> chassis: %s", ex.what());
                return;
        }

This bit uses tf (a topic for yet another tutorial!) to find the transformation from the location of the laser to the centre of the body of the robot. We want to interpret the distances measured by the laser as distances from the centre of the body of the robot.

        // Find the longest ray.
        double best_angle = 0;
        double longest_distance = 0;
        unsigned nr = 0;
        for (double angle = scan.angle_min; angle < 
             scan.angle_max; angle += scan.angle_increment, 
             ++nr)
        {
                // Get the point in space relative to the laser. We assume 
                // that the laser beam is parallel to the ground floor 
                // (no ramps, etc).
                double distance = scan.ranges[nr];
                // Convert the polar reading (angle, distance) into Cartesian
                // coordinates relative to the laser.
                double x = cos(angle) * distance;
                double y = sin(angle) * distance;
                // TODO: figure out at what vertical angle the laser is
                // pointing.
                double z = 0;

                // Calculate the distance from the centre of the robot 
                //to the laser.
                double distance_from_robot_centre = pow(pow(
                       base_point.point.x - x, 2) + 
                       pow(base_point.point.y - y, 2) + 
                       pow(base_point.point.z - z, 2), 0.5);

                if (distance_from_robot_centre > longest_distance)
                {
                        best_angle = angle;
                        longest_distance = distance_from_robot_centre;
                }
        }

        ROS_INFO("Best angle %f, longest distance %f.", 
                        best_angle, longest_distance);

The next chunk takes the angle and range of each laser beam and transforms them into the range from the centre of the body of the robot. We store the angle that has the longest range. This is the angle we want the robot to turn to.

        // Turn the wheels to achieve the desired angle.
        src::motor_ctr msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "/world";
        if (best_angle < 0)
        {
                msg.left_wheel = -0.1;
                msg.right_wheel = 0.1;
        }
        else if (best_angle > 0)
        {
                msg.left_wheel = 0.1;
                msg.right_wheel = -0.1;
        }
        motor_pub.publish(msg);
}

The final bit of code actually controls the robot. We check the angle we want to turn to and determine if we want to turn left or right and actuate the motors accordingly. This will have the effect of the robot turning on the spot towards the specified angle.

Running the example

In order to make running the example easier, a launch file is provided in {roomba and quadrotor directory}/trunk/src/launch. To use it, execute the following commands:

export ROS_PACKAGE_PATH={roomba and quadrotor director}/trunk/src:\
$ROS_PACKAGE_PATH
roslaunch src roomba.launch

This launches both the motor controller and laser reader. Now we need a robot so we start Gazebo as follows:

export GAZEBO_MODEL_PATH=\
{roomba and quadrotor director}/trunk/models:$GAZEBO_MODEL_PATH
export GAZEBO_PLUGIN_PATH=\
${CMAKE_INSTALL_PREFIX}:$GAZEBO_PLUGIN_PATH
gazebo

This loads Gazebo and gives it a path towards the model of our robot. Make sure to replace ${CMAKE_INSTALL_PREFIX} (mine is /home/bram/local/lib/gazebo_plugins) with the actual directory where your plugins are installed (see the beginning of the tutorial). Now we add a robot to Gazebo (Insert->Simplified Roomba Robot) and we should see the same as during the last tutorial. Now, in order to make it move, we execute the following command:

{roomba and quadrotor director}/bin/avoid_walls

And lo and behold, our robot is spinning like there is no tomorrow! Try adding a few objects to the scenery and see how the robot responds.

End of tutorial

This is the end of a small overview of how to integrate ROS and Gazebo. There are still many outstanding challenges in controlling the robot. For example, you might have noticed that the robot never stabilises when trying to turn towards a certain angle. It always overshoots, needs to change the rotation of the wheels, overshoots again, and so on. So we need a cleverer way of dealing with this. Forward kinematics and inverse kinematics are the topics of the next few tutorials - enjoy!

Integrating ROS with Gazebo Part 1

Permalink 02/13/13 10:22, by morloth, Categories: robots

Overview on ROS and Gazebo

This internal page serves as a collection of notes on how to get ROS working with the simulator Gazebo.

  • Overview
  • Installation
    • Troubleshooting
  • Obtaining the project files
  • Integrating ROS with Gazebo
  • Contact

Overview

ROS is an operating system for robots and is implemented as a distributed system that communicates via message passing. A master node allows the nodes in the system to find each other and provides shared storage that nodes can use to store and read parameters. Nodes in ROS can 'speak' to each other via topics and services. Nodes can publish data on topics and offer services. Topics and services must be registered with the master node, and each topic and service must have a unique name. In order to receive the messages passed on these topics and services, nodes can subscribe to them via the master node.

Topics are like a message bus: any publisher can write a message on it and the subscribers receive it. The contents of the messages are defined in message definition (.msg) files. Services are similar, except that whenever a message (or request) is written the calling node expects a response. An example could be a service that adds two numbers: you send a request containing two numbers and you get a response with their sum.
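
To make this concrete, here is a hypothetical add-two-numbers service in roscpp. The package name my_pkg and the service AddTwoInts are illustrative only and not part of this project; the service definition would live in a srv/AddTwoInts.srv file:

#include <ros/ros.h>
// Generated from a hypothetical srv/AddTwoInts.srv containing:
//   int64 a
//   int64 b
//   ---
//   int64 sum
#include <my_pkg/AddTwoInts.h>

// Service callback: fill in the response and return true on success.
bool Add(my_pkg::AddTwoInts::Request& req, my_pkg::AddTwoInts::Response& res)
{
  res.sum = req.a + req.b;
  return true;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "add_two_ints_server");
  ros::NodeHandle n;
  ros::ServiceServer service = n.advertiseService("add_two_ints", Add);
  ros::spin();  // wait for incoming requests
  return 0;
}

A client sends a request with a and b filled in and receives sum in the response.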

In order to simulate a robot we have chosen to use Gazebo. Using SDF files we can specify the characteristics of a robot, like the mesh that describes its shape and properties like mass, friction, etc. A robot can consist of multiple parts which are connected by joints. The SDF file allows us to specify a robot as a tree of components, where the position of each component is relative to the position of its parent.

Installation

We are using ROS Groovy and Gazebo 1.4. The installation of both of these packages is straightforward under Ubuntu (we are using 12.04) and is described here:

There is a plugin which embeds Gazebo in ROS, but I have found this plugin hard to work with. The reason is that as of the K version of ROS (i.e. the next version) Gazebo will be a standalone application, so development on the plugin seems to have stopped and the tutorials / files are severely out of date. Therefore I advocate using these pieces of software separately.

Troubleshooting

One issue I came across when testing whether Gazebo works after the installation is that it would not be able to connect to the connection manager. This is due to an issue with Gazebo and IPv6. To overcome it, check your /etc/hosts file, which should look something like this:

bram@bram-progbak:~/gazebo_workspace/models/roomba$ cat /etc/hosts
127.0.0.1   localhost
127.0.1.1   bram-progbak

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

To fix this problem, remove localhost from the line starting with ::1; this should solve that issue.

Obtaining the project files

The current project files can be obtained by executing the following command:

svn co https://subversion.assembla.com/svn/roombasandquadrotor/

We have created a simplistic version of the Roomba robot (see trunk/models/my_robot/model.sdf) which has the same mass, dimensions and propulsion, and I have tried to maintain most of the physical properties (inertia, etc.). This model will be used until we have a physical robot to play around with. We will use the files in this project for the remainder of this document.

Integrating ROS with Gazebo

Gazebo is a simulator. In order for this simulator to work we must provide it with a model of our robot. This model describes the physical properties of the robot which are important to simulate it correctly. In our case we have made a simple model of the Roomba robot. The file which describes this robot can be found at trunk/models/my_robot/model.sdf. We will step through this file in order to familiarise you with the syntax. For reference, the SDF syntax is described here; also check out this tutorial.

Model

Let's have a look at the model.config file:

<?xml version="1.0"?>
<model>
  <name>Simplified Roomba Robot</name>
  <version>1.0</version>
  <sdf version='1.3'>model.sdf</sdf>

  <author>
   <name>Bram Ridder</name>
   <email>bram.ridder@gmail.com</email>
  </author>

  <description>
    Simplified roomba 5XX robot.
  </description>
</model>

The syntax is simple; this file is there to add metadata to the actual model. As you can see, it has been created by me (you're welcome ;)), uses SDF version 1.3 and adds a description of what this model represents.

The actual model is more complicated and describes a lot more. The SDF file describes one or more models, which consist of links that are connected and controlled by joints. In our model we describe three links:

The first is the chassis of the robot, which includes a small ball at the bottom back that simulates the small ball attached to the actual Roomba (middle top of the picture below).

Bottom of a Roomba 530

We have encoded this as follows in model.sdf:

<?xml version='1.0'?>
<sdf version='1.3'>

First of all we specify which xml and sdf versions we use. Next we specify the name of the model we are making:

  <model name="my_robot">

In this case the author was hit by a sudden spark of inspiration and named the model of the robot "my_robot". Next we create the chassis, which is represented as a cylinder:

    <link name='chassis'>
      <pose>0 0 0.020 0 0 0</pose>
      <inertial>
        <mass>3.7</mass>
        <pose>0 0 0.00761 0 0 0</pose>
        <inertia>
          <ixx>1.0</ixx>
          <ixy>1.0</ixy>
          <ixz>1.0</ixz>
          <iyy>1.0</iyy>
          <iyz>1.0</iyz>
          <izz>1.0</izz>
        </inertia>
      </inertial>

      <collision name='collision'>
        <geometry>
          <cylinder>
            <radius>.144</radius>
            <length>.030</length>
          </cylinder>
        </geometry>
      </collision>

      <visual name='visual'>
        <geometry>
          <cylinder>
            <radius>.144</radius>
            <length>.030</length>
          </cylinder>
        </geometry>
      </visual>

First of all, note that we position the chassis a little above the ground; this leaves room for the wheels. The pose element consists of 6 parameters: { x y z roll pitch yaw }. Next we provide the physical properties of this chassis, like mass and inertia. Then we must specify how the model interacts with the world by providing a geometry to use for collision detection. In order to keep it simple we use a cylinder about the size of a Roomba. Finally we specify how this model will appear in the simulator; in most cases the collision and visual properties will be the same.

Great! So now we have a cylinder positioned slightly above the ground. In order to simulate the ball at the back of the Roomba we add the following lines:

      <collision name='caster_collision'>
        <pose>0.12 0 -0.012 1.5707 0 0</pose>
        <geometry>
          <sphere>
            <radius>.008</radius>
          </sphere>
        </geometry>

        <surface>
          <friction>
            <ode>
              <mu>0</mu>
              <mu2>0</mu2>
              <slip1>1.0</slip1>
              <slip2>1.0</slip2>
            </ode>
          </friction>
        </surface>
      </collision>

      <visual name='caster_visual'>
        <pose>0.12 0 -0.012 1.5707 0 0</pose>
        <geometry>
          <sphere>
            <radius>.008</radius>
          </sphere>
        </geometry>
      </visual>
    </link>

Note that the location of the ball (<pose>) is relative to the location of the model. So the centre of the ball is (0.02 - 0.012 =) 0.008m above the ground. Given that the radius is 0.008m, it means that the ball is nearly touching the ground. As a hack we specify the physical properties of this ball such that it has no friction at all. This is unrealistic, but for now it will suffice.

Next we add the wheels to our model:

    <link name="left_wheel">
      <pose>-0.08 0.1 0.01 0 1.5707 1.5707</pose>
      <collision name="collision">
        <geometry>
          <cylinder>
            <radius>.01</radius>
            <length>.005</length>
          </cylinder>
        </geometry>
      </collision>
      <visual name="visual">
        <geometry>
          <cylinder>
            <radius>.01</radius>
            <length>.005</length>
          </cylinder>
        </geometry>
      </visual>
    </link>

The code above is the same for the right wheel (although it is positioned on the other side of the robot). The wheels are cylinders that are rotated and translated in order to be added at the right location. Note that the wheels are not positioned at the same location as they are on the real Roomba --- this will be fixed later on and is not important for this tutorial.

So far so good, we have a complete model of our Roomba, right? Wrong! We need to add joints in order to make our Roomba move. Joints are elements of our robot which we can control. In this case we want joints to rotate our wheels:

    <joint type="revolute" name="left_wheel_hinge">
      <pose>0 0 0 0 0 0</pose>
      <child>left_wheel</child>
      <parent>chassis</parent>
      <axis>
        <xyz>0 -1 0</xyz>
      </axis>
    </joint>

Here we create a joint that turns backwards around the y-axis. We make it go backwards because, the way we designed our robot, the rear wheel is actually positioned at the front. It's of little consequence; it just makes it more intuitive later on when we start to spin these joints, such that a positive value will actually make the robot go forward. Note that a joint must connect two links. In this case left_wheel_hinge connects the left_wheel to the chassis. The model thereby forms a tree of links, which allows us to create a chain of connected links which all affect each other (e.g. a rear wheel axle).

The last component we add to the robot is a laser that can scan a set of points in an arc. We will use this in future tutorials to map our surroundings or let our robot react to its environment. We position this laser at the front of our robot and let it face forward. Note that although we add a joint to connect the laser to the chassis, we restrict its movement such that it is effectively a static connection.

    <include>
      <uri>model://hokuyo</uri>
      <pose>-0.1 0 0.05 0 0 3.14159</pose>
    </include>
    <joint name="hokuyo_joint" type="revolute">
      <child>hokuyo::link</child>
      <parent>chassis</parent>
      <axis>
        <xyz>0 0 1</xyz>
        <limit>
          <upper>0</upper>
          <lower>0</lower>
        </limit>
      </axis>
    </joint>

Plugins

Now that we have our laser, model and joints, we want a way to control our robot. It is possible to do this directly in Gazebo. To do so, execute the following sequence of commands:

export GAZEBO_MODEL_PATH={location where you checked out svn}/trunk/models:$GAZEBO_MODEL_PATH
gazebo

The first line exports an environment variable which allows Gazebo to find the location where the models are stored locally. After starting Gazebo, go to the 'Insert' tab and select "Simplified Roomba Robot"; now you can deploy this robot anywhere on the plane and have a good look at it. Next, left click and hold the three horizontal lines and drag them to the left. This exposes a Joints tab; click on the robot and you will see the three joints of our model. Play around with the left and right wheel hinges to move the robot around.

Roomba in action

The next step is to integrate ROS, which is the subject of Part 2.

Contact

For any questions, please email: bram DOT ridder AT gmail DOT com

Mario AI - Results!

Permalink 09/27/12 05:57, by morloth, Categories: MarioAI

Hey everybody,

The results for the Mario competition are in! In the end I came 5th out of the 11 competitors, which is not a bad score. The organisers have stated that they will at some point write a paper about the competition, for which I'll ask to describe how my generator works. So I'm quite happy it might actually lead to a publication :D. The results are displayed in the table below (shamelessly copied from http://noorshaker.com/marioComp2012/results.php):

Level Generation Track - Results

Total number of participants: 147

Participant                             Score     Selected / Presented
Ben Weber                                6.71 %   10 / 35
Tomoyuki Shimizu                        11.23 %   11 / 23
Peter Mawhorter                         10.96 %   14 / 30
G. Takahashi and G. Smith                7.83 %   11 / 33
Robin Baumgarten                         6.71 %    6 / 21
Martin Müller-Holtz                      5.34 %    5 / 22
Jorge Diz Pico                          13.05 %   15 / 27
Bram Ridder                             10.21 %   10 / 23
L. Ilici, O. Missura and T. Gaertner     7.83 %    8 / 24
T. Chiang, Y. Chen and C. Cheng         14.09 %   15 / 25
Noor Shaker                              6.06 %    8 / 31

 

Currently I am working hard on my thesis, but I hope to post an update on the Resistance AI project soonish. I have a new idea on how to calculate the probability of someone being a spy; stay tuned! :)

Bram


Resistance AI: First version.

Permalink 09/21/12 20:32, by morloth, Categories: The Resistance - card game

Hi everyone,

After a few days of coding I have finally got my first version of the Resistance AI bot up and running. It is far from complete, but it beats all the bots provided for the competition (including those created by aigamedev). At the moment we only calculate the probabilities of players being spies based on the missions which have been executed; we ignore voting and other behaviours the bots exhibit.

I will now provide a brief description of how our bot works:

The good / The bad:

One idea I've taken from the aigamedev implementation is to keep a list of players which have been on a mission where at least one of the players sabotaged. We will vote against any team which contains a set of players which have failed in the past, and we will avoid picking a team which contains such a set. On the other hand, we do keep track of the largest team to date which did manage to succeed at a mission and use it as our template for future missions.

Statistics:

While a game is played we keep track of the behaviour of players when they are a spy. We check how likely it is for them to sabotage given that they are either 1) the only spy on a mission; or 2) one of two spies on a mission. Other information, like voting behaviour, is recorded but we do not use it yet.

Bayes' Theorem:

The meaty bit of the implementation is Bayes' Theorem. Based on the team selected we want to calculate the value:

P(player X is a spy | sabotaged = Y)

We do this by applying Bayes' Theorem which gives us

P(player X is a spy | sabotaged = Y) = P(player X is a spy) * P(sabotaged = Y | player X is a spy) / P(sabotaged = Y)

The first term is known because we retain all information. The second term is calculated by using the product rule, which depends on the number of sabotages. For example, if the number of sabotages is 1 we need to calculate the probability that:

  1. Every possible pair of spies decides not to sabotage, knowing that another spy is in the team.
  2. Only one of the members on the team is a spy and decides not to sabotage.
  3. None of the members in the team is a spy.

For example, given a team T = (p0, p1, p2), the chance that none of the players is a spy is:

(1 - P(p0 is a spy)) * (1 - P(p1 is a spy | p0 is not a spy)) * (1 - P(p2 is a spy| p0 and p1 are not spies))
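
As a concrete (simplified) example: suppose the current estimates are P(p0 is a spy) = P(p1 is a spy) = P(p2 is a spy) = 0.4 and, purely for illustration, we treat them as independent so that the conditional factors reduce to the plain ones. Then the chance that the team contains no spy is

(1 - 0.4) * (1 - 0.4) * (1 - 0.4) = 0.216

In the real calculation the second and third factors are conditioned on the earlier players not being spies, which matters because there are exactly two spies among the five players.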

A special case is handled when sabotaged is 2. In that case we can deduce quite a lot of information. If we are on the team when this happens we have complete information about who the spies are. If the team is of size 2 then everybody has complete information. If the team is of size 3 and we are not part of the team, we can deduce that all the players who are not on the team cannot be spies.

Selecting a team:

Now that we have a good way to figure out who the spies are we can select a team. First of all we try to pick a team which has worked in the past. If that is not possible we use a roulette wheel selection based on the probabilities of each player being a spy.

Results:

The above method leads to the following result:

Not a bad start but we still need to integrate voting behaviour and selecting behaviour to get even better results!

Keep tuned :).

Bram

Resistance AI: Learn how to read!

Permalink 09/15/12 18:47, by morloth, Categories: The Resistance - card game

Hey everybody,

For some reason I thought the deadline of the competition was the 24th of September. It turns out it was the 14th so I missed the boat :(. I will still finish my --- would be --- submission and hopefully I will get a chance to test it against the submitted bots.

More information later!

Bram

Mario AI - The evaluation!

Permalink 09/14/12 10:04, by morloth, Categories: MarioAI

Hi everybody,

The MarioAI competition is finally under way! To help out please do the following (copied from http://www.marioai.org/LevelGeneration/evaluation):

  1. Download the jar file
  2. Double click on it
  3. You will play a test level followed by personalized levels
  4. You will play the personalized levels in pairs and you will be asked a few questions about your gameplay experience

Good luck to all the other competitors and may the best win!

Live results are available here: http://noorshaker.com/marioComp2012/results.php

Bram

Resistance AI: AI GameDev Bots Analysed

Permalink 09/01/12 08:17, by morloth, Categories: The Resistance - card game

Hello everybody,

After announcing that I will be competing in the Resistance AI competition, I spent some time getting to grips with Python. I had never programmed in that language and was pleasantly surprised by its syntax and expressiveness, which means you can condense quite a bit of logic into a single line with generators. Loving it!

Bot analysis:

Before starting to write my own bot I had some thoughts as to how to go about doing that. I had some vague notions in my head concerning belief states and using logic to weed out the spies from the non-spies, including some sensible policies (more on that later), but decided to take a look at the existing bots first to see what I could learn. As it happens the organisers have packed quite a few bots with the framework code, including two bots which must be beaten in order to get a nice prize (Quote: "If you submit a bot that beats the intermediate AiGameDev.com bots, you win a bonus PLUS membership."), so understanding those bots will go a long way towards finding a way to beat them!

Statistician

First off is a bot called Statistician, or Bayes' bot if you like. It employs Bayes' Theorem quite a lot to learn more about the bots it is playing against. The probability we are interested in, of course, is the chance of a player being a spy. Statistician pays close attention to the voting process after a mission has been proposed and a team has been selected by the team leader. We are given the following pieces of information:

  1. The probability for a player to vote on suspects.
  2. The probability of a player voting true.

To refresh your memory, Bayes' theorem is defined as: P(A|B) = P(A)P(B|A)/P(B). In our case:

  • A is the chance of a player being a spy (see below).
  • B is one of the members of the list above.

The probability of a player being a spy is calculated as follows. If the player himself is a spy he knows all the other spies, so we have perfect information. However, if the player is a member of the resistance, each other player has a 50% chance of being a spy (there are 5 players and 2 spies in each game). After every mission we know who the team members were and how many sabotages were performed. The chance of each player being a spy is updated as follows; let T be the size of the team excluding yourself and S the number of sabotages:

  • For all members on the team we add: S / T to a total V.
  • For all nonmembers on the team we add (2 - S) / (4 - T) to a total V.

The probability of each player being a spy is V / total missions. The question which still stands is: how do we know the values of (1) and (2)? Well, at the end of each game all the information about that game is revealed, so we can look back at all the votes, see who actually was a spy and who was not, and record this information for the next game(s). The smart bit is that the bot saves the information based on a bot's name, which is revealed when the game starts. So in a way it builds a profile of a bot and gets more and more accurate the more games are played. This means that the first few games are quite rubbish; in the first game it assumes all probabilities (except for P(A)) are 0.5.

Logical

The second bot is on the other end of the spectrum and does not use any probabilities at all. In fact it uses static policies for decision making, but remembers (as a resistance member) teams which were successful on missions and tries to infer who the spies are.

As a spy it will always select a team including himself and a random set of other players which are not spies, always sabotage, and vote for any mission that includes at least one spy.

As a resistance member it will select a team which has been successful in the past, if such a team exists, but it will always make sure that he himself is in it. If no such team exists, one is selected randomly which will include himself and no known spies. It will vote against any team that:

  • Includes a known spy.
  • Has failed before.
  • Does not include himself and has a size of 3.

In this game we can never be 100% certain who the spies are, unless both of them sabotage the same mission! Whenever the number of sabotages performed is greater than or equal to the number of players on a team which are not known spies, we know that all of them are spies (excluding yourself, of course!). For example, whenever you are the team leader, you select a team of three and both other players sabotage the mission, you know who the other spies are and can record this information.

Summary:

Both bots seem to perform reasonably well, but there are a few shortfalls. Statistician relies solely on statistical information and does not perform any of the logical deductions performed by Logical. Voting for both bots is poor; there is a chance that both will vote for missions with two spies on them, which could lead to both spies sabotaging and blowing their cover. Selecting which team to go for is more robust with Logical because it uses static policies and actively tries to stick to teams which proved to be successful in the past, whereas Statistician uses a roulette wheel selection based on the probability of a bot being a spy, which (I think) gives worse performance when playing as a resistance member. As a spy this method is preferred because other bots will have a hard time deducing information from your team selection process (although you still want to avoid choosing two spies for the same mission).

Combining those two approaches seems like an obvious first step, but whereas these two bots are concerned with what other bots are, I want to know what they think. In other words, I want to build belief states of the other bots and, based on analysis done throughout games (much like Statistician), figure out how to trick them into making decisions which work out well for me. Moreover I want to discover tells of how other bots operate whilst being a resistance member / spy. Do they always try to sabotage? Do they try to avoid voting for other spies if they are a spy? Etc. This approach also makes me think that I need to include some random elements in my bot, which might break what would seem to be optimal play, in order to avoid my behaviour being detected by other bots.

I will get into this in more detail next week and hopefully present my first bot.

Bram

The resistance AI competition

Permalink 08/26/12 11:47, by morloth, Categories: Announcements, MarioAI, The Resistance - card game

Dear all,

With MarioAI behind me I am currently focusing on a new project. This one promises to be a little more interesting because it is more related to my research. The challenge is to write a bot which can play the card game "The Resistance". I have never played the game myself, but I do intend to buy it and give it a spin with a few friends over the weekend.

The card game models the struggle of a resistance group against the government. The group which forms the resistance must successfully execute three missions to topple the government. However, a number of spies have infiltrated the network and can sabotage missions; if they sabotage three missions the resistance has failed. Only the spies in the game know who is who; it is up to the resistance to figure out who the spies are and to select the teams which go on a mission carefully. The spies, on the other hand, need to plan when it is best to sabotage a mission to minimise the chance of being detected.

How the game works and details about the competition can be read here: http://aigamedev.com/open/upcoming/resistance-competition/.

I will keep you posted on my progress!

Bram

Mario AI: Need your help!

Permalink 08/21/12 07:17, by morloth, Categories: Announcements, MarioAI

Hello everybody,

My entry for Mario is almost finished; all I need now is to do some final testing, and that is where you come in. In order to check whether I design the proper levels I am going to do a public test and ask all of you to participate. All you have to do is the following:

  1. Play a test level. This level is purely random; based on your actions and performance a level will be (robot) hand-crafted for you.
  2. Play the customised level.
  3. Give feedback about your experience.

That is all; feel free to play as many times as you like! If you have any questions or additional feedback, do not be afraid to leave a comment in the section below.

Enough, let me PLAY!

Bram

