Thursday, August 15, 2019

How to start a robotics project?

Normal software engineering projects are grouped around certain technologies. For example, somebody can use a Linux server together with the PHP language to build a website, or create a new computer game with the C++ language. If the programming environment is fixed, it's possible to figure out the details, and the number of options for how to create a game within C++ is limited.
The situation in case of a robotics project is a bit more difficult, because there is no ready-made framework available. Sure, some libraries for creating robots and even some programming languages are mentioned in the literature. Sometimes the ROS project is called a quasi standard, and embedded control is often handled with C. But these technologies are not used for creating the AI itself; they only help if it's already known how to realize the robot.
The better idea for starting a robotics project is based on the steps in human-computer interaction. A new robotics project usually starts as a manual control system. That means the human operator gets a joystick and moves the robot arm remotely, the same as what a crane operator is doing. The second step is about reducing the workload for the human operator; the goal is to increase the automation level. In case of a robot arm which grasps objects, this is done by automating the grasping step itself. That means the human operator controls the arm, but the robot decides when the right moment has come to close the gripper.
In the literature the concept is called shared autonomy. It means that some tasks are done by the human and others by the Artificial Intelligence. The human operator controls the movement of the arm, and the vision system detects if an object is in the hand and activates the grasping action. The advantage is that only subparts of the system get automated: only the software which executes the grasping action works autonomously, while the position of the gripper isn't controlled by the software. The overall pipeline can later be improved into a fully autonomous system. The next step would be that the AI controls both: the grasping and the position of the robot hand.
Somebody may argue that the difference between a teleoperated robot arm and a robot arm which can grasp by itself is small. And indeed, in both cases the human operator is in the loop: he has to move the joystick to do the task. The advantage is that the human will recognize the reduced workload. If he doesn't need to press the “grasp” button, it's a clear improvement.
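The shared-autonomy loop described above can be sketched in a few lines. This is a minimal illustration, not the real robot code: read_joystick and the positions are hypothetical stand-ins for the actual input device and vision system.

```python
# Minimal sketch of a shared-autonomy control loop:
# the human steers, the software decides when to close the gripper.

def read_joystick():
    # hypothetical placeholder: would return (dx, dy) from the input device
    return (1, 0)

def object_in_gripper(arm_pos, object_pos, threshold=5):
    # the vision system's decision: is the grasp moment there?
    dx = arm_pos[0] - object_pos[0]
    dy = arm_pos[1] - object_pos[1]
    return dx * dx + dy * dy <= threshold * threshold

def control_step(arm_pos, object_pos):
    # the human controls the position of the arm ...
    dx, dy = read_joystick()
    arm_pos = (arm_pos[0] + dx, arm_pos[1] + dy)
    # ... while the software activates the grasping action autonomously
    grasp = object_in_gripper(arm_pos, object_pos)
    return arm_pos, grasp
```

The point is the division of labor: arm_pos is produced by the human, while the boolean grasp decision is the only autonomous subpart.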
Combining GOAP with a vision model

GOAP (Goal Oriented Action Planning) is a well known technique from Game AI for building realistic AI characters. The idea is that the agent is in a worldstate and has a behavior library in the background. A solver tests out different behaviors to bring the agent to a goal. GOAP is comparable to an automatic text adventure which takes an input worldstate and generates the next behaviors.
To use the concept for real robotics, a vision model is needed which provides the input worldstate. In the easiest case, a vision model is a vision cone in front of the agent. This is sometimes described as spatial grounding in the literature, because it connects pixel coordinates like “object=(100,100)” to language, e.g. “object isat front”.
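A vision cone of this kind can be sketched as a short function. The cone angle, the range and the returned fact strings are made-up example values, not part of any standard:

```python
import math

def spatial_grounding(agent_pos, agent_angle, object_pos,
                      cone_angle=45, cone_range=150):
    # Connects pixel coordinates to a symbolic fact like "object isat front".
    dx = object_pos[0] - agent_pos[0]
    dy = object_pos[1] - agent_pos[1]
    distance = math.hypot(dx, dy)
    if distance > cone_range:
        return "object isat unknown"   # outside the vision cone's range
    direction = math.degrees(math.atan2(dy, dx))
    # angle difference, normalized to -180..180
    diff = (direction - agent_angle + 180) % 360 - 180
    if abs(diff) <= cone_angle:
        return "object isat front"
    return "object isat side"
```

The returned string is exactly the kind of worldstate fact a GOAP solver can consume as input.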

Tuesday, August 13, 2019

Event processing in scripting AI

A common pattern in modern Game AI development is to store events in variables. An example is to create a variable “robotatdoor=True”, or “distance=100”. The first example is a boolean event which is called a trigger; in the second case an integer variable was introduced to store detailed information about a situation.

In a behavior based architecture, the AI script takes the world state as input and calculates the actions in response to the event. A typical script would look like:

if robotatdoor: opendoor()
if distance<50: stop()
Unfortunately, there is a problem with event processing which can be called a categorization problem. The good news is that, in contrast to the robotics domain, all the events are certain. In a computer game it is sure that the robot is really at the door, and that the distance is precisely 100 pixels. The categorization problem has to do with the program flow over a time period. Let us go into the details. A computer game consists of frame steps. In frame 0 the variable “robotatdoor” is False, in frame 10 the variable is False as well, and at frame 20 the trigger gets activated and switches the state to True. Over a longer timespan the event can become true or false, which amounts to a dual categorization. In case of the distance variable, there are also two categories: in the first case the distance is smaller than 50 and in the second one it's greater. The robot behavior stop() is activated or not. The usage of categories is equal to formalizing a situation: the problem is converted into a machine readable description which includes a decision making process. The algorithm for controlling the robot works deterministically, which means that the robot knows how the world looks and what to do in each situation. Let us observe what will happen if the situation is unclear. Suppose the distance variable has no value:
if distance<50: stop()
The if statement can't be executed because the value of the variable isn't available; the program will stop with an error, which is equal to a programming error. To overcome the issue, the programmer has to make sure that each variable has a value. The “try except” statement in Python is a great help for doing so:
try:
  if distance==None: raise ValueError
  if distance<50: stop()
except ValueError:
  pass # subroutine for error management
The try except statement allows the program to continue even if the variable has an unknown value. It prevents the Python interpreter from exiting to the command line. The unclear situation can be caught and handled with a subroutine for error management. It's important to build an exception routine into an event processing system.

Measuring the worldstate

Suppose a game consists of 3 sensor values which all have the boolean type:

The total amount of possibilities for the worldstate is 2^3=8. It is possible to react to each world state separately, for example:
if worldstate=(0,0,1) then action1
if worldstate=(1,0,1) then action2
What will happen if the input variables have a different type? A 16-bit integer value can store values from 0 to 65535; the statespace is 2^16. If three input variables are given:
... the needed amount of storage space in RAM is 3x16 bit = 48 bit, which can hold 2^48 worldstates = 2.81*10^14. It's not possible to decide for each worldstate which action is needed:
if worldstate=(0,0,65535) then action1
if worldstate=(65535,0,65535) then action2

Especially in the domain of Q-learning, the size of the input space becomes a problem, because the number of rows and columns explodes quickly. The answer to the problem is to store the q-table in a neural network. The neural network is able to transform the complex input space of 2.81*10^14 worldstates into a smaller one.

From an abstract point of view, it's important how many bits are needed to store the worldstate. In the first case, the entire worldstate can be stored in only 3 bits. In the second example with the integer values, 48 bits are needed.
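The two state-space sizes from the examples above can be verified with a few lines; the small boolean case also shows why a plain lookup table only works for the first example:

```python
# State-space sizes from the two examples above.
bool_states = 2 ** 3         # three boolean sensors: 3 bits
int_states = (2 ** 16) ** 3  # three 16-bit sensors: 48 bits
print(bool_states)           # 8
print(int_states)            # 281474976710656, roughly 2.81*10^14

# With only 8 worldstates a plain lookup table per state is feasible:
policy = {(0, 0, 1): "action1", (1, 0, 1): "action2"}
print(policy[(0, 0, 1)])     # action1
```

For the 48-bit case such a table would need about 2.81*10^14 entries, which is exactly why the q-table has to be compressed into a neural network.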

Sunday, August 11, 2019

Improved Heatmap in Python

In addition to a previous posting, an improved version of the heatmap sourcecode is given. The sourcecode was formatted in the HTML mode with the "pre" tag.

import pygame

class Game:
  def __init__(self):
    self.pygamewindow = pygame.display.set_mode((700, 350), pygame.HWSURFACE | pygame.DOUBLEBUF)
    self.fps=5 # 20
    for i in range(1000000):
      self.paintmap()
      pygame.display.update()
      pygame.time.wait(int(1000/self.fps))
  def heatmapcolor(self,value):
    # value 0..1, returns colorcode (r,g,b)
    # init gradient
    gradient=[]  # (value,r,g,b)
    gradient.append((0.0,  0,0,1)) # blue
    gradient.append((0.25, 0,1,1)) # cyan
    gradient.append((0.5,  0,1,0)) # green
    gradient.append((0.75, 1,1,0)) # yellow
    gradient.append((1.0,  1,0,0)) # red
    gradient.append((1.0,  1,0,0)) # red extra
    # search base color
    for baseid in range(len(gradient)-1):
      diff=value-gradient[baseid][0]
      if diff>=0 and diff<0.25:
        break
    relvalue=diff/0.25 # position between base color and next color, 0..1
    # relative color
    color=[] # (r,g,b)
    for i in range(1,4):
      temp=(gradient[baseid+1][i]-gradient[baseid][i])*relvalue # get difference
      temp=(temp+gradient[baseid][i])*255 # convert to 255 scale
      temp=int(round(temp)) # round
      color.append(temp)
    return color
  def paintmap(self):
    maxstep=70
    grid_width=10
    grid_height=350
    for i in range(maxstep):
      value=i/maxstep # 0..1
      col=self.heatmapcolor(value)
      x=i*grid_width
      pygame.draw.rect(self.pygamewindow, col, (x,0,grid_width,grid_height))

The advantage is that the resolution can be adjusted easily to reduce the grid width.

The symbol grounding problem is overestimated

A normal expert system works great if the facts are defined precisely. An example for a fact is that the robot is near to the box; another fact is that the box has an angle of 0 degrees. The expert system takes these facts as input and executes operators on the facts. Not all rules can be applied, but only a subset. The concept is known in game AI as a GOAP planner, because the solver is able to bring the system into a goal state.
According to some computer scientists, something is missing in that loop. They ask how the expert system gets all its facts. In the literature this question is called the symbol grounding problem, because it's about the connection between the environment and the facts in the expert system. But is this problem really so important? In most cases the transition from perception to the fact database is not very complicated. The sensor measures a piece of information and the data is converted into a fact. Whether the robot is near to the box or not can be determined by a single line of code. Calling this transition a bottleneck which prevents expert systems from becoming a useful tool is an exaggeration. The real problem is not to convert a variable back and forth; the difficulty is to make inferences from the given facts. Instead of focusing on the environment-to-sensor workflow, the more important part of the overall architecture is the expert system itself.
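The "single line of code" claim above can be illustrated directly. The positions and the distance threshold are made-up example values:

```python
# Grounding a sensor reading into a fact for the expert system.
# Positions and threshold are hypothetical example values.
robot = (100, 100)
box = (104, 102)

# the single line which converts perception into a fact:
near_to_box = abs(robot[0] - box[0]) + abs(robot[1] - box[1]) < 10

facts = {"robot_near_box": near_to_box}
print(facts)  # {'robot_near_box': True}
```

The hard part is not producing the `facts` dictionary, but writing the rules that make useful inferences from it.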

Are all employees internal customers?

Quote: “It is recognized in the marketing literature that all employees of an organisation are internal customers. [...] Internal customers generate goods and services for the end customer” [1] page 2
This description is remarkably advanced, because in the common understanding of leadership the employee tries to satisfy his boss. The employee sees his boss as a customer who gives him an order. But it seems that the marketing literature, especially the newer one, has a different understanding of how management is working. The idea is to flip the social roles: the boss is trying to help the employees, and the employees are helping the external customers.
This is called total customer orientation, and it seems that, at least in the management literature, it's the quasi standard of how to organize a modern business.
[1] Conduit, Jodie, and Felix T. Mavondo. "How critical is internal customer orientation to market orientation?." Journal of business research 51.1 (2001): 11-24.

Saturday, August 10, 2019

Heatmap in Python

According to the website, a heatmap is created by a color gradient which goes from blue to cyan, then to green, over yellow to red. For realizing a function which takes as input a value between 0..1 and returns as output the colorcode, the in-between values of two categories need to be calculated. The Python function for doing so is a bit longer and takes 25 lines of code. In the referenced URL only the C++ version was given; I have reprogrammed the code.
During the creation of the sourcecode, a slider widget from TKinter was a great help. This allows the user to set a value between 0 and 1 interactively and observe what the return value of the function is.

Update: Something is wrong with the embedded sourcecode. It seems that the if statement was formatted by the blog engine a bit randomly.

import pygame

class Game:
  def __init__(self):
    self.pygamewindow = pygame.display.set_mode((500, 350), pygame.HWSURFACE | pygame.DOUBLEBUF)
    self.fps=20 # 20
    for i in range(1000000):
      self.paintheatmap()
      pygame.display.update()
      pygame.time.wait(int(1000/self.fps))
  def heatmapcolor(self,value):
    # value 0..1, returns colorcode (r,g,b)
    # init gradient
    gradient=[]  # (value,r,g,b)
    gradient.append((0.0,  0,0,1)) # blue
    gradient.append((0.25, 0,1,1)) # cyan
    gradient.append((0.5,  0,1,0)) # green
    gradient.append((0.75, 1,1,0)) # yellow
    gradient.append((1.0,  1,0,0)) # red
    gradient.append((1.0,  1,0,0)) # red extra
    # search base color
    for baseid in range(len(gradient)-1):
      diff=value-gradient[baseid][0]
      if diff>=0 and diff<0.25:
        break
    # relative color
    relvalue=diff/0.25
    result=[]
    for i in range(1,4):
      temp=(gradient[baseid+1][i]-gradient[baseid][i])*relvalue
      temp=(temp+gradient[baseid][i])*255
      result.append(int(round(temp)))
    # result
    return result
  def paintheatmap(self):
    maxstep=50
    grid_width=10
    grid_height=340
    for i in range(maxstep):
      value=i/maxstep # 0..1
      col=self.heatmapcolor(value)
      x=i*grid_width
      pygame.draw.rect(self.pygamewindow, col, (x,3,grid_width,grid_height))


Wednesday, August 7, 2019

Creating a Task and motion planner

A so called task and motion planner is very complicated to realize. From the description itself, it's a mixture of a high level text adventure plus an underlying physics engine. The idea is that a solver determines in the text adventure which actions fulfill a goal, and then the motion planner converts the high level tasks into concrete motions which are executed by the robot. The problem is to implement such an architecture in sourcecode.
My project so far relies on the programming language Python. The easier part was to create the simulation itself; thanks to the libraries pygame, tkinter and box2d it was easy to do so. The resulting robot can be controlled with the keyboard by a human operator. The more complicated parts are the text adventure and the motion planner. The first idea was to utilize the STRIPS or the Prolog syntax, which is equal to storing facts and rules. In the literature the concept is explained in detail, but in reality the resulting text adventure was hard to maintain. The problem was that the rules have access to all the facts and no modules are available.
The better idea is to realize the text adventure with object oriented programming techniques. That means every item in the game, like the robot, the box and the map, gets a separate class, and the methods in the class can only operate on the internal datastructures. This time the sourcecode was easier to read, because it's compatible with the normal programming paradigm. That means, if somebody creates a standalone text adventure, he will for sure use an object oriented language, but not the STRIPS notation.
What is open right now is to combine all the modules into a runnable application. This makes it hard to predict if the idea makes sense or not. Even though the example problem was a minimal one, the amount of needed sourcecode is higher than usual. Especially the concept of running two simulations in parallel makes the code complicated. The problem is that the normal physics engine represents the game, but in the text adventure the same game is calculated in a different way.
Is there a need to create the text adventure at all? The answer is yes, because without a text adventure the solver can't determine the next step. The precondition for searching a tree for a node is that a forward model is available which can produce the game tree. Let us go a step back and describe what a GOAP solver is doing. The idea is to test out randomly some actions in the model. A random generator executes an action and then the result is stored in a graph. And exactly here is the problem: the action can only be executed inside a text adventure.
What will happen if no text adventure is available? Then the solver has to send random actions to the normal physics engine. The problem with Box2d, ODE and Bullet is that their performance is low. They provide the future state of a system, but doing so needs lots of cpu resources. It is not possible to plan longer sequences of around 1 minute with these engines. 1 minute is equal to 60 seconds = 1200 frames. If 100 actions are calculated, the amount of cpu computation is enormous.
Perhaps the term “task and motion planning” provides the description itself. A task is a high level action, for example “bring the box to the goal”, while a motion is a low level action, e.g. “move 20 pixels forward”. The normal physics engine works on the motion level; it has to do with a near time horizon of 1-2 seconds and detail movements. In contrast, a task planner has to provide the long term strategy, which includes selecting waypoints and defining subgoals. On the task level a pick&place operation can be described with natural language:
1. moveto object
2. grasp object
3. moveto goal
4. ungrasp object
This short plan isn't providing any details. It's not possible to execute the plan directly on a physics engine. A physics engine needs a concrete command, for example “left(-20)”. And that is the reason why task and motion planning are handled as different layers. There is a need to plan the actions at different hierarchy levels.
Practical example
For controlling a puck collecting robot, the first thing to do is to create the motion planner. It works on a low level and affects the underlying physics engine. The motion planner consists of two subfunctions, “reach angle” and “forward”. The first one controls the direction of the robot, while the second one affects the forward motion. The details of implementing the motion primitives are up to the programmer; in most cases a simple difference calculation is sufficient. After the sourcecode is written, it's possible to send the following plan to the motion planner:
1. reachangle(45)
2. forward((100,200))
The interaction with the robot works with these motion primitives. They provide an interface to control the robot movements. It's not possible to control complicated tasks with these primitives, only short horizon issues. For longer plans a task planner is required. The task planner is equal to a text adventure and also provides some primitives. The task primitives are:
1. moveto(goal)
2. graspbox
3. ungraspbox
The task planner is not allowed to send commands directly to the robot; instead, the task planner sends commands to the motion planner. That means a high level task like moveto() is decomposed into motion primitives like reachangle and forward.
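The decomposition of a task primitive into motion primitives can be sketched as follows. The reachangle and forward stubs only log their calls here; in the real system they would drive the physics engine:

```python
import math

# Sketch: a task primitive decomposed into the two motion primitives.
# reachangle/forward are stubs that only record what they were asked to do.
motion_log = []

def reachangle(angle):
    motion_log.append(("reachangle", angle))

def forward(target):
    motion_log.append(("forward", target))

def moveto(robot_pos, goal):
    # task primitive: turn towards the goal, then drive to it
    dx = goal[0] - robot_pos[0]
    dy = goal[1] - robot_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    reachangle(round(angle))
    forward(goal)

moveto((0, 0), (100, 100))
print(motion_log)  # [('reachangle', 45), ('forward', (100, 100))]
```

The task layer never touches the robot directly; it only emits motion primitives, which is exactly the layering described above.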
Avoiding the task planner?
If motion primitives are able to control the robot and it's possible to write a longer program which contains a sequence of motion primitives, why is there a need for a high level task planner? Suppose the plan for the robot is to drive to the box, grasp the object, move to the goal and place the object at the position. All the motion primitives are executed in a linear fashion, and now an interruption takes place: the robot loses the box during the transit. The motion planner itself doesn't recognize the problem; only the higher instance will detect the issue.
A motion sequence should be tolerant against interruptions, and the task planner has to figure out the new motion sequence.
Semi autonomous control
Unfortunately, the amount of frameworks and algorithms to implement a task and motion planner is low. Creating such software is mostly an art, not an engineering discipline. A good starting point is to set a focus on manual control. If the robot is controlled manually, it's 100% sure that a task is fulfilled. A planner should be understood as optional. The idea is to start with a teleoperated robot and improve the system slowly into an autonomous system. From the programmer's perspective the question is how to improve the control of the robot in a way that the workload for the human gets lower.
A typical example for this transition is to replace a keyboard control with a mouse control. A normal robot arm, for example in an excavator, is controlled by different sliders: with slider1 the operator controls motor1, with slider2 motor2, and so on. The first step is to write software which takes the mouse as input and calculates the servo signals as the result. In the literature the concept is colloquially described as inverse kinematics, and it helps a lot to reduce the workload. Inverse kinematics doesn't mean that the robot works autonomously; it means that the human operator points with the mouse to a target and the robot arm reaches the point.
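For a planar two-link arm, the mouse-to-servo calculation is the standard two-link inverse kinematics formula. This is a minimal sketch; the link lengths are example values:

```python
import math

def inverse_kinematics(x, y, l1=100, l2=100):
    # Two-link arm: returns (shoulder, elbow) joint angles in radians
    # so that the endpoint reaches the mouse target (x, y).
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))  # clamp for unreachable targets
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

In a mouse-controlled arm, the cursor position is fed into this function each frame and the resulting angles are sent to the servos; the human still decides where the arm goes, only the joint-angle bookkeeping is automated.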