
Sunday, March 12, 2017

ARC ROS: Behaviour Module Design and Implementation

Behind the Scenes

This post will outline the design and implementation process for the Behaviour module of ARC ROS. Recall the previous high-level design:

The behaviour component must be separate from both the visualization and core modules. There must exist one common interface that provides various behaviours to the rest of the framework, in a way that is easily customizable.

Behaviour Module Design

I extended the design used in Geoff's thesis work. In a schema-based design there are two types of "schemas":
 Perceptual Schema: A node that receives perceptual information (laser/odometry/etc)
 Motor Schema: A node that receives information from perceptual schemas, and triggers action

For example, a perceptual schema may take in images from a camera and output a list of people found in the image. Motor schemas can then receive this list of people and decide how to act next.

Figure showing two motor schemas working together to find victim. From Geoff Nagy's thesis Active Recruitment in Dynamic Teams of Heterogeneous Robots

This is an example of two motor schemas, "MoveToCasualty" and "AvoidObstacles", sending out action vectors (velocity commands) based on incoming victim location information from a victim-detector perceptual schema.
Both schemas are combined to produce more elaborate behaviour (in this case, navigating around an obstacle to reach a victim).
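The way these outputs combine can be sketched as a weighted vector sum over velocity commands, a common pattern in schema-based control. The specific vectors and weights below are illustrative only, not values from the thesis:

```python
def combine_schemas(weighted_vectors):
    """Sum weighted (vx, vy) action vectors from the active motor schemas."""
    vx = sum(w * v[0] for v, w in weighted_vectors)
    vy = sum(w * v[1] for v, w in weighted_vectors)
    return (vx, vy)

# Hypothetical outputs: MoveToCasualty pulls toward the victim,
# AvoidObstacles pushes sideways, away from the obstacle.
move_to_casualty = (1.0, 0.0)
avoid_obstacles = (0.0, 0.5)
cmd = combine_schemas([(move_to_casualty, 1.0), (avoid_obstacles, 1.0)])
```

The resulting command steers the robot toward the victim while deflecting around the obstacle, which is exactly the "elaborate behaviour from simple parts" idea the figure shows.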

In ROS, I will implement each schema as a Node.
Schema-based architecture for the behaviour module of ARC ROS.

Perceptual schema nodes publish the data they find, and motor schema nodes subscribe to it. A main node, "arc_base", acts as the controller, where specific motor schemas may be turned on/off on request.
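The on/off control idea can be sketched in plain Python. The class and method names here are my own illustration; a real arc_base node would back set_enabled with a ROS service:

```python
class ArcBase:
    """Sketch of the arc_base controller: motor schemas register with it,
    and only enabled schemas contribute actions on each tick."""

    def __init__(self):
        self.schemas = {}  # name -> [schema_fn, enabled]

    def register(self, name, schema_fn):
        self.schemas[name] = [schema_fn, False]  # schemas start disabled

    def set_enabled(self, name, on):
        # In the real node this would be backed by a ROS service call.
        self.schemas[name][1] = on

    def tick(self, percept):
        """Run every enabled motor schema on the latest perceptual data."""
        return {name: fn(percept)
                for name, (fn, enabled) in self.schemas.items() if enabled}
```

Turning a behaviour off is then just flipping its flag; disabled schemas simply stop contributing commands.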

Despite using the same schema architecture, my work alters the specific motor/perceptual nodes used. This is because ROS provides a Navigation Stack, which offers support for planning and exploring an environment. I see it as viable to use this single component to replace much of the existing motor schema work.

Before making this decision, I had to spend a week doing performance analysis on the Navigation Stack.
The main questions I asked were:
    Will the navigation stack provide more accurate obstacle avoidance and planning capabilities?
    How much more processing power will the navigation stack require?
        Will it scale with multiple robots? 
    How do I incorporate this into the existing schema design?

After much deliberation, I have decided to move forward with using the Navigation Stack. The implementation is outlined below.

Implementation of Behaviour

Implementing Perceptual Schemas
Developers from tuw Robotics provided a patch for Stage that allows fiducial marker publication over ROS.

DebrisBot (left) in ARC ROS detecting a fiducial marker (right).

In Stage, objects in the environment can have a fiducial key. This means a robot can detect any other object in the environment with the same fiducial key. In the figure above, both the robot and the yellow marker have a fiducial key of 3.

As you can see, the robot detected markers with id: [3].

The main limitation is that an object in Stage is either a marker or it is not. In the ARC environments there may be debris, markers, other robots, and victims, which I also want to detect in a similar fashion.
As such, I made a further modification so that each type of fiducial marker is published on a separate topic.
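The routing can be sketched as a simple demultiplexer over fiducial keys. The key-to-type mapping below is hypothetical (the actual keys are assigned in the Stage world file):

```python
# Hypothetical fiducial key -> object type mapping (set in the world file).
FIDUCIAL_TYPES = {3: "marker", 4: "debris", 5: "victim", 6: "robot"}

def route_detections(robot_ns, detections):
    """Group raw fiducial detections (id, range) into per-type topic names
    under the robot's namespace, e.g. /arc/<robot>/debris_detector."""
    routed = {}
    for fid, rng in detections:
        kind = FIDUCIAL_TYPES.get(fid)
        if kind is None:
            continue  # unknown fiducial key: ignore
        topic = "/arc/%s/%s_detector" % (robot_ns, kind)
        routed.setdefault(topic, []).append((fid, rng))
    return routed
```

Each bucket then becomes one publication on the corresponding per-type detector topic.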
Consider this environment:
DebrisBot (bottom left) and midbot (top left) detecting a marker (top right) and debris (bottom right).

When loading this environment, each robot is initialized with specific detectors based on the sensors it has.

Notice the original DebrisBot (bottom left) can detect debris and the yellow marker, but it can't detect the tiny midbot just above it. The midbot, on the other hand, can detect the marker, the debris, AND the DebrisBot!

Each robot publishes detected objects in its own namespace:
    /arc/test_debrisbot/marker_detector
    /arc/test_debrisbot/victim_detector
    /arc/test_debrisbot/debris_detector

The midbot has these topics being published:
    /arc/test_debrisbot/victim_detector
    /arc/test_debrisbot/debris_detector

Each of these topics carries marker information describing what was detected. In keeping with the schema-based design, I created a perceptual schema node for each detector. For example, the DetectVictimPS node has the following API:
Subscribed Topics
    marker_detector (marker_msgs/MarkerDetection)
Published Topics
    ~victim_list (arc_msgs/DetectedRobots)
        List of victims found that are within max_range.
    detect_victim_ps/located_markers (arc_msgs/VictimList)
        List of victims detected within range.
Parameters
    ~max_range: The furthest distance (in meters) at which a marker can be detected.

The same API format applies to the other perceptual schemas. This means each robot can have easily customizable, unique settings for how it detects debris, victims, and other robots. The important part of this design is that it would be easy to add a new type of object to detect as well. Furthermore, when porting from a Stage simulation to the real world, these nodes could be extended to use more elaborate detection methods.
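The core of a detector schema like DetectVictimPS reduces to range filtering against ~max_range. A minimal sketch, where plain dicts stand in for the actual marker_msgs/arc_msgs types:

```python
def filter_victims(detections, max_range):
    """Keep only detections within max_range metres (the ~max_range
    parameter). Each dict stands in for one entry of a
    marker_msgs/MarkerDetection message."""
    return [d for d in detections if d["range"] <= max_range]

# A real DetectVictimPS node would subscribe to marker_detector, apply
# this filter in its callback, and publish the survivors as a victim list.
```

Swapping this filter for, say, a camera-based classifier is exactly the kind of extension the node API is meant to allow.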

Victim detection, for example, may involve subscribing to camera data and using machine learning to find faces in an image.

Implementing Motor Schemas

I mentioned above that I'll be using the ROS Navigation Stack to provide more robust planning and obstacle avoidance.
I set up planning and mapping functionality for each of the robot types in the ARC framework.
In order to use the services provided by the Navigation Stack, I had to consider the request latency and fault tolerance of the system.

Footage of maxbot navigating through test environment.

I'd also like the integration to be seamless, so that anyone using the ARC framework wouldn't need to concern themselves with the navigation stack in order to extend the behaviour.

The NavigationAdapter node handles requests for navigation. A request can be made to move to some goal location, and this node handles all further interaction with the Navigation Stack. In the event that the robot gets stuck, the adapter will take note and emit a signal indicating this. In the future, other robots will pick up distress signals like this and send assistance to the stuck robot.
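The stuck-detection bookkeeping can be sketched as follows. The thresholds are hypothetical, and the real adapter would additionally drive the Navigation Stack itself (e.g. by sending move_base goals):

```python
class NavigationAdapter:
    """Sketch of the adapter's stuck detection only (thresholds are
    illustrative, not the values used in ARC)."""

    def __init__(self, stuck_after=3, min_progress=0.05):
        self.stuck_after = stuck_after    # checks in a row with no progress
        self.min_progress = min_progress  # metres of movement per check
        self.last_pos = None
        self.no_progress = 0

    def update(self, x, y):
        """Feed the latest pose; returns True once a distress signal is due."""
        if self.last_pos is not None:
            dx, dy = x - self.last_pos[0], y - self.last_pos[1]
            if (dx * dx + dy * dy) ** 0.5 < self.min_progress:
                self.no_progress += 1
            else:
                self.no_progress = 0  # robot moved: reset the counter
        self.last_pos = (x, y)
        return self.no_progress >= self.stuck_after
```

Emitting the distress signal then becomes a single publish whenever update flips to True.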

For now, the wandering behaviour will simply generate random goal locations and send them to the NavigationAdapter. One issue occurs when an impossible goal is generated.

Search algorithms such as A* will terminate when no path to the goal can be found; however, for continuous wandering behaviour, the robot must be moving whenever possible.
As such, I wish to implement a custom global planner that acts as a hybrid of carrot planning and A*.
The carrot planning means that if an invalid goal is requested, some nearest VALID goal will be chosen instead, encouraging the robot to get as close as possible to its desired location. After finding a valid goal, a variant of A* is used to generate a reasonable path to it.
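The carrot step can be sketched as a breadth-first search outward from the invalid goal on an occupancy grid. The grid encoding below is illustrative, and the A* half is left to the usual global planner:

```python
from collections import deque

def nearest_valid_goal(grid, goal):
    """Carrot step: BFS outward from an invalid goal cell to the closest
    free cell. grid[r][c] == 0 means free, 1 means obstacle/unknown."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {goal}, deque([goal])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == 0:
            return (r, c)  # closest free cell; hand this to A* next
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # no free cell anywhere: give up
```

Because BFS expands in rings, the first free cell it reaches is the closest valid substitute goal, which is the carrot behaviour described above.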

Moving objects around in Stage was a lot more challenging than it seems. Stage doesn't provide a way to manipulate objects over ROS, so I implemented a simple service that accepts a new position and an object id in order to forcefully move the object.

The problem with this is that it only works on Stage models with positional information. Such models also publish their odometry and a number of other topics.
The result? Every single marker, debris object, and victim was publishing the kind of odometry info that only a robot should publish.

These topics are published for EACH object in the simulation, making it completely unrealistic to scale in size. As such, I created a new object type, which I call an "AlterableObject".

It is based on agent theory, where an agent has control of its actions, and an object is something that can be acted on. With this patch, a Stage world is created with the prefix "altenv_" before any object that you wish to be alterable. In this example file, notice the markers with name "altenv_...".

They will be treated as alterable objects, meaning they don't publish any elaborate information but can still be manipulated in the environment.

I added a service to Stage that will forcefully move an object with a given name/id to a designated location. If we extend this functionality to allow for setting velocities, robots could also move and manipulate stationary objects in the environment.
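The handler behind such a service reduces to a lookup-and-teleport. The world interface below is a plain-dict stand-in for Stage's internal model table, not its actual API:

```python
def handle_move_request(world, req):
    """Teleport the named alterable object to the requested pose.
    `world` maps object names to pose dicts (a stand-in for Stage)."""
    obj = world.get(req["name"])
    if obj is None:
        return {"success": False}  # unknown object name/id
    obj["x"], obj["y"] = req["x"], req["y"]
    return {"success": True}
```

In the real patch this logic sits inside the Stage plugin, exposed as a ROS service.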

This footage shows the cleaning functionality. The navigation is turned off, which lets me test this specific component.

With the AlterableObject functionality added for the debris cleaning schema, picking up and dropping markers is easy.
I use the marker detection node to get all markers within some desired range. Then I provide services "pickup_marker" and "drop_marker", which select some marker within range and move it out of sight of the robot.
When the robot picks up a marker, it stores the id in an inventory hashtable. Whenever we want to drop a marker, we check the inventory, take some marker id from it, and call /arc/move_alterable_object to move that marker to a location near the robot.
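This bookkeeping can be sketched like so; the world dict stands in for Stage, and the drop step is where the real node would call /arc/move_alterable_object:

```python
class DebrisCleaner:
    """Sketch of the pickup/drop inventory bookkeeping described above."""

    def __init__(self, world):
        self.world = world   # marker id -> (x, y); stands in for Stage
        self.inventory = {}  # marker id -> last known position

    def pickup_marker(self, in_range_ids):
        """Take some marker within range and stash it out of sight."""
        if not in_range_ids:
            return None
        marker_id = in_range_ids[0]
        self.inventory[marker_id] = self.world.pop(marker_id)
        return marker_id

    def drop_marker(self, near_x, near_y):
        """Place some carried marker near the robot's current position."""
        if not self.inventory:
            return None
        marker_id, _ = self.inventory.popitem()
        self.world[marker_id] = (near_x, near_y)  # the move-service call
        return marker_id
```

The hashtable means pickup and drop are both constant-time, regardless of how many markers a robot carries.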

Integration of components
This is where the fun begins: now we integrate all of the components, several schemas and the navigation stack, all on one robot. Here is a link to a typical launch file I use, which outlines the setup for one robot.

I then created a higher level launch file, which takes in an argument "robot_type" and lets you launch any robot you'd like, in the environment of your choice.

The video below outlines the core functionality in a disaster room I created. The Sofa/Chair models were created by Marco Becerra and can be found here.

Scaling to Multi-agent Environment
One final adjustment required to support multiple robots in Stage is to allow each robot to share the same common /map coordinate space.
Multi-agent transform tree with 4 agents sharing /map coordinate space.

As you can see, each robot is in /map, but has its own local coordinate space for odometry.

The ARC framework supports either a global /map frame or a global /base_link frame, meaning that even without a shared map, each robot can still navigate around.

This support allows for exploring SLAM (Simultaneous Localization and Mapping) using ARC, although that is not my immediate concern.

The Final Result
Now it's time. We've explored the process behind designing and implementing behaviour in ARC; here is the final demo footage I put together.

Now I'm back to the drawing board, focusing on part 2 of the ARC project: coordination and teamwork. My next post(s) will outline the design of these components. 
Next up: Designing coordination and task allocation (CORE ARC).

Please feel free to give feedback/criticism, or other suggestions! (be gentle)
