Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics, page 62
Fig. 1. RoboCup Logistics game-field in the simulator. Two attending
teams with 3 robots and 6 machines each.
This is achieved by a controlled environment which contains
a modular production system and a fleet of robots which need
to be controlled. To allow fair conditions, a standardized
robot platform (Robotino by Festo [4], three per team) is
used for the mobile robots as well as standardized modular
production systems (MPS by Festo, six per side) for the
fabrication steps.
To emphasize the idea of a smart factory, the rules require
that the fleet performs its task completely autonomously.
Thus no intervention from humans is allowed. The idea is to
put a robot in a workshop and let it explore the environment
on its own, find machines to work with and produce products
according to arriving orders. For this, the whole scenario is
split into two phases: the exploration phase and the production
phase.
A. Exploration Phase
In this first phase, the robots have no knowledge about
their environment. They have to explore the game-field (see
Figure 1, a screenshot of the RoboCup Logistics League
simulation [5]) and find the machines located there. To award
points for the detection of such a machine, the robots have
to report their observations to a central referee box. Each
report contains the type of the machine, the shown status
light as well as its position in the field. If all the machines
have been found, or after some deadline has passed, the next
phase is invoked.
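The contents of such an exploration report can be sketched as a plain data class; the field names and values below are illustrative assumptions, not the actual referee-box protocol.

```java
// Hypothetical sketch of the data an exploration report might carry;
// field names and values are illustrative, not the referee-box protocol.
public class MachineReport {
    public enum LightState { GREEN, YELLOW, RED, OFF }

    public final String machineType;  // e.g. "BS", "RS", "CS", "DS" (assumed codes)
    public final LightState light;    // status light shown during exploration
    public final String zone;         // field position reported to the referee box

    public MachineReport(String machineType, LightState light, String zone) {
        this.machineType = machineType;
        this.light = light;
        this.zone = zone;
    }
}
```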
B. Production Phase
In this phase the actual production takes place. Random
orders are placed by the central referee box, and both teams
try to produce these as fast as possible.
1) Products: The products are mocked up as cups (bases)
with a defined number of rings pressed onto them and a cap. The
color of each part of the product is defined in the order.
2) Order: An order consists of the demanded product
(e.g. a red base cup with two rings, the first ring blue, the
second one yellow and a black cap) and its earliest delivery
time as well as the deadline for the delivery of this product.
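Such an order can be modeled as a simple value class. The following is a minimal sketch in plain Java; the enum values and field names are assumptions for illustration, not the league's data format.

```java
// Hypothetical plain-Java model of an order as described above;
// enum values and field names are illustrative assumptions.
import java.util.List;

public class Order {
    public enum BaseColor { RED, BLACK, SILVER }
    public enum RingColor { BLUE, GREEN, ORANGE, YELLOW }
    public enum CapColor { BLACK, GREY }

    public final BaseColor base;
    public final List<RingColor> rings;   // mounted bottom-up, order matters
    public final CapColor cap;
    public final int earliestDelivery;    // game time, e.g. in seconds (assumed unit)
    public final int deadline;

    public Order(BaseColor base, List<RingColor> rings, CapColor cap,
                 int earliestDelivery, int deadline) {
        this.base = base;
        this.rings = rings;
        this.cap = cap;
        this.earliestDelivery = earliestDelivery;
        this.deadline = deadline;
    }

    /** True if the given game time lies within the delivery window. */
    public boolean deliverableAt(int gameTime) {
        return gameTime >= earliestDelivery && gameTime <= deadline;
    }
}
```

The example order from the text (a red base cup with a blue and a yellow ring and a black cap) would then be one instance of this class.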
3) Modular Production System: To produce the ordered
product, the mobile robots can use the six production systems
of their team. There are four types of these workstations:
• 1x Base Station: Providing bases in the demanded color.
• 2x Ring Station: Mounting a ring in the requested color on
the provided base.
• 2x Cap Station: Mounting a cap in the required color on
the provided base.
• 1x Delivery Station: Point to deliver a product in the
given time window.
As the mounting of a ring represents the addition of some
feature to a product, some ring colors require additional bases
as "raw" material. Thus, the need for deliveries of supply
material is also modeled in this scenario.
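The resulting station sequence for one product follows directly from its order: one Base Station visit, one Ring Station visit per ordered ring, a Cap Station visit, and finally the Delivery Station. A minimal sketch (ignoring the extra-base requirement of some ring colors, and with names chosen for illustration):

```java
// Hypothetical sketch of deriving the station sequence for one product.
// Extra bases required by some ring colors are ignored for brevity.
import java.util.ArrayList;
import java.util.List;

public class ProductionPlan {
    public enum Station { BASE, RING, CAP, DELIVERY }

    public static List<Station> stationsFor(int ringCount) {
        List<Station> plan = new ArrayList<>();
        plan.add(Station.BASE);          // fetch the base cup in the demanded color
        for (int i = 0; i < ringCount; i++) {
            plan.add(Station.RING);      // mount each ring in the ordered sequence
        }
        plan.add(Station.CAP);           // mount the cap
        plan.add(Station.DELIVERY);      // deliver within the given time window
        return plan;
    }
}
```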
III. SOFTWARE ARCHITECTURE
To solve the tasks of the Logistics League, we propose
the following software architecture. The software is split into
three distinct layers, namely high-level, mid-level and
low-level. Each layer is independent of the other layers within this
concept. The lower layers provide functionality to the upper
ones [6]. Furthermore, higher layers command the actions of
the lower layers.
The highest level of our software architecture is respon-
sible for the connection of the different parts. It connects
to the central referee box as well as an arbitrary number of
connected robots, as can be seen in Figure 2.
To allow independent development and testing of each
layer, defined interfaces are necessary. Additionally, to feature
different programming languages for each layer, Google's
protocol buffers are used for these interfaces. This inde-
pendence is used as the high-level is written in Java, the
mid-level uses a belief-desire-intention [7] engine (openPRS
[8], C) and the low-level is written in C++ using the ROS
(Robot Operating System [9]) framework. The communica-
tion scheme for one robot can be seen in Figure 3.
For each interface, dedicated protocol buffer (protobuf)
messages are defined. With this structure, an increasing
abstraction of the physical world can be achieved from the
bottom up to the top. The message used between the high-
level and the mid-level can be seen as an example in Listing 1.
Listing 1. Protobuf message to communicate between the layers.
message PrsTask {
  required Team teamColor = 1;
  required uint32 taskId = 2;
  required uint32 robotId = 3;

  optional ExecutionResult result = 4;

  optional ReportMachinesTask reportTask = 5;
  optional ExploreMachineTask explTask = 6;
  optional GetWorkPieceTask getWPTask = 7;
  optional PrepareCapTask prepCapTask = 8;
  optional DisposeProdTask dispProdTask = 9;
  optional DeliverProdTask deliProdTask = 10;
}
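In such a message, exactly one of the optional task fields is expected to be set, and the receiver dispatches on which one it is. The following plain-Java analogue sketches this dispatch pattern; the task classes and skill names are illustrative assumptions, not the actual generated protobuf API.

```java
// Plain-Java analogue (hypothetical) of dispatching on the single
// optional task field set in a PrsTask message. Task classes and
// skill names are illustrative, not the generated protobuf API.
public class TaskDispatch {
    public interface Task {}
    public static class ExploreMachineTask implements Task {}
    public static class DeliverProdTask implements Task {}

    /** Returns the (assumed) skill name the lower layers would execute. */
    public static String skillFor(Task task) {
        if (task instanceof ExploreMachineTask) return "explore_machine";
        if (task instanceof DeliverProdTask) return "deliver_product";
        return "unknown";
    }
}
```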
The lowest layer is responsible for small tasks close to
the hardware, e.g. to move to a waypoint, grab an object,
detect an AR-tag or analyze the status light of a machine (see
Section IV-A.2 and Section IV-A.1 for further details). We
call the execution of these small tasks skills in the remainder
Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics. Peter M. Roth, Markus Vincze, Wilfried Kubinger, Andreas Müller, Bernhard Blaschitz, Svorad Stolc. Verlag der Technischen Universität Graz, Wien, 2017. ISBN 978-3-85125-524-9. CC BY 4.0.