This post is devoted to a rather modern class of motion-planning algorithms, originally developed to solve a sophisticated problem that has bothered humanity for years - The Piano Problem.

Actually it's named "the piano mover's problem", but the previous variant sounds more scientific :) The interpretation is quite simple: suppose you are moving in and trying to bring along grandma's piano. It doesn't fit in the elevator, so you have to take the staircase, rotate the piano a lot to squeeze it through, and perform plenty of other annoying actions. Can we find the necessary movement automatically?

Optimal Search With A*

Most readers should be familiar with graph search algorithms like Dijkstra's or A*. They are easily adapted to find a path for a point in n-dimensional Euclidean space with obstacles, and many may know the simple extension that finds a path for a sphere: inflate the obstacles by the sphere radius and run the algorithm for a point. If you can approximate your object with a sphere, go for it, but that is not always possible, as in the piano mover's problem.

The problem statement may not be obvious, so let's formulate it more explicitly. For simplicity, consider the two-dimensional case. We have a field with some obstacles, a piano polygon, and a start and goal position; we need to find a collision-free path through the field. We can parametrize the piano by the triplet (x, y, phi): the coordinates of the piano center and the rotation about the center in [0, 2*pi). Next we need to somehow define a search graph. For such problems it is usually defined implicitly: given a grid resolution, we may move between nearest grid points whenever the piano doesn't overlap any obstacle. If the states are close enough, we can avoid continuous collision detection.
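The implicit neighbour generation described above can be sketched as follows. Note that `collides` here is a placeholder for whatever collision checker you use (the original code does not show one), and the step sizes are made-up tuning constants:

```python
import itertools
import math

def makeOpenNode(step, angleStep, collides):
    # Build an openNode function for A*: every grid neighbour of a
    # state (x, y, phi) that keeps the piano collision-free.
    # `collides` is a hypothetical predicate: collides((x, y, phi)) -> bool.
    def openNode(q):
        x, y, phi = q
        neighbours = []
        for dx, dy, dphi in itertools.product((-step, 0, step),
                                              (-step, 0, step),
                                              (-angleStep, 0, angleStep)):
            if (dx, dy, dphi) == (0, 0, 0):
                continue  # skip the state itself
            qn = (x + dx, y + dy, (phi + dphi) % (2 * math.pi))
            if not collides(qn):
                neighbours.append(qn)
        return neighbours
    return openNode
```

With a 3x3x3 stencil minus the center, each state has up to 26 neighbours; in practice the collision check filters many of them out.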

The rest of the algorithm is standard: pick a node from the queue of unvisited nodes, find its neighbours, add them to the queue with priority equal to the sum of the distance traversed so far and the estimated distance to the goal, and repeat until the goal is reached. Below is my Python implementation.

import heapq
import itertools

class Node:
    def __init__(self, q, cost, parent):
        self.q = q
        self.cost = cost
        self.parent = parent

    def retracePath(self):
        # walk parent links back to the root, root first
        path = []
        node = self
        while node is not None:
            path.insert(0, node.q)
            node = node.parent
        return path

def AStar(qinit, qend, threshold, distance, openNode, numSamples):
    open = {}
    openHeap = []
    closed = set()
    counter = itertools.count()  # tie-breaker: heapq cannot compare Nodes
    ninit = Node(qinit, 0, None)
    open[qinit] = ninit
    heapq.heappush(openHeap, (distance(qinit, qend), next(counter), ninit))
    for i in range(numSamples):
        if not open:
            return None
        nclosed = heapq.heappop(openHeap)[2]
        if nclosed.q in closed:
            continue  # stale heap entry, node already expanded
        del open[nclosed.q]
        closed.add(nclosed.q)
        for qopened in openNode(nclosed.q):
            if qopened in closed:
                continue
            copened = nclosed.cost + distance(nclosed.q, qopened)
            nopened = open.get(qopened)
            if nopened is None:
                nopened = Node(qopened, copened, nclosed)
                open[qopened] = nopened
                heapq.heappush(openHeap, (copened + distance(qopened, qend),
                                          next(counter), nopened))
                if distance(qopened, qend) < threshold:
                    return nopened.retracePath()
            elif copened < nopened.cost:
                # found a cheaper route; re-push with the better priority
                nopened.parent = nclosed
                nopened.cost = copened
                heapq.heappush(openHeap, (copened + distance(qopened, qend),
                                          next(counter), nopened))
    return None

One subtlety is the distance function. Euclidean distance between states makes no sense here. I've found it very convenient to give the distance a physical meaning: the time or work required to move the piano from one state to another. For example, suppose the piano can be moved with some linear velocity vvel and rotated with some angular velocity wvel, and that we can move and rotate it simultaneously; then the distance function looks as follows:

import math

def angleDiff(a, b):
    # shortest signed angular difference, in (-pi, pi]
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def distance(self, q0, q1):
    # method of the planner class; self.vvel / self.wvel are the
    # linear and angular velocities. Translation and rotation happen
    # simultaneously, so the slower of the two dominates the time.
    return max(math.hypot(q0[0] - q1[0], q0[1] - q1[1]) / self.vvel,
               abs(angleDiff(q0[2], q1[2])) / self.wvel)

With this distance function A* will find the minimum-time path. The result may be unexpected, though: because we can move and rotate simultaneously, the path may contain unnecessary rotations, as they don't affect the travel time. Choosing work as the distance function leads to a more adequate path; performance may also change. Below is the comparison.
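A work-like distance can be sketched as follows: it charges separately for translation and rotation, so rotating is no longer free while moving. The coefficients here are made-up tuning constants, not values from the original post:

```python
import math

def workDistance(q0, q1, kLin=1.0, kAng=1.0):
    # hypothetical "work" cost: translation and rotation both charge,
    # so the planner no longer rotates for free while translating
    lin = math.hypot(q0[0] - q1[0], q0[1] - q1[1])
    raw = abs(q0[2] - q1[2]) % (2 * math.pi)
    ang = min(raw, 2 * math.pi - raw)  # shortest rotation either way
    return kLin * lin + kAng * ang
```

Because the two terms are added rather than combined with max, every unnecessary rotation now increases the path cost.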

My Python A* implementation finds a path in several seconds. A rather poor result, even considering that this is Python and the most expensive part is the interactive graph visualization. And by the way, removing rotation from the problem drastically improves performance. I hadn't expected such a large performance gap between the 2D and 3D versions of the problem with A*. Although I had already faced "The Curse of Dimensionality", I believed that A* would overcome it thanks to its greedy heuristic nature. Well, it doesn't. Below is a video comparing A* on the 2D and 3D problems.

Actually you may notice that A* grows like a cone when it encounters an obstacle (see the previous video or the wiki article), so with many obstacles an extra dimension reduces performance by a factor of K, where K is the number of points used to discretize the new dimension. Considering the discretization of the piano angle (about 20 points), performance should degrade about 20 times, and that seems to be quite a good estimate (~1000 opened states vs ~19000 in the hill demo above). As a last resort I tried greedy A*. This modification forces the algorithm to explore states near the target first, even though considering them at this point may lead to a suboptimal solution. It gives adequate performance in most of my tests, but mazy environments still ruin the heuristic.
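The greedy modification only changes how the priority is computed: instead of f = g + h it uses f = g + w*h with w > 1 (or drops g entirely for pure greedy best-first search). A one-line sketch of the priority function:

```python
def priority(cost, heuristic, w=1.0):
    # w = 1: classic A* (optimal with an admissible heuristic);
    # w > 1: weighted/greedy A*, faster, but the found path may be
    # up to w times longer than the optimal one
    return cost + w * heuristic
```

In the A* implementation above this would replace the expression pushed onto the heap.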

Randomized Search

Now we can move on to modern, super-fancy algorithms for high-dimensional path planning. I noticed that two classes of algorithms are extremely widespread: Rapidly-exploring Random Trees (RRTs) and Probabilistic Roadmaps (PRMs) (if you know more, please give me a note). I've limited myself to RRTs.

Actually the algorithm is extremely simple, much simpler than A*. The core idea is to grow a tree in configuration space until one of its branches reaches the target state. It sounds very naive - the basic implementation doesn't even grow toward the target the way A* does - yet it works much better. Let's postpone the discussion of its magic properties and state the algorithm: sample the configuration space uniformly, find the nearest vertex in the tree grown so far, and add an edge extending the tree toward the sampled point. Below is the basic implementation:

def RRT(qinit, randConf, distance, newConf, numSamples):
    # adjacency list; the first neighbour stored for a node is its parent
    G = {qinit: []}
    for k in range(numSamples):
        qrand = randConf()
        qnear = min(G, key=lambda q: distance(q, qrand))  # nearest tree vertex
        qnew = newConf(qnear, qrand)
        if qnew is not None:
            G[qnew] = [qnear]
            G[qnear].append(qnew)
    return G
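The RRT above returns the whole tree rather than a path. Since the code stores each new node's parent as its first neighbour, a path can be recovered by walking parent links back from the reached node (a small helper I've added; it relies on that storage convention):

```python
def retraceTreePath(G, qinit, qreached):
    # Walk parent links back to the root. Relies on the convention of
    # the RRT above: G[q][0] is the parent of q (for any q != qinit).
    path = [qreached]
    while path[-1] != qinit:
        path.append(G[path[-1]][0])
    path.reverse()
    return path
```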

To make this work we need to provide several functions: randConf, newConf, and distance. The third was already discussed. randConf simply samples the state space uniformly; newConf tries to move toward the random state and ensures no collision occurs. Below is a demonstration of the algorithm (for the 2D and 3D problems).
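A minimal sketch of these two helpers for the (x, y, phi) piano state. The `collides` predicate, the field bounds, and the fixed extension step are all assumptions for illustration, not code from the original post:

```python
import math
import random

def makeRandConf(width, height):
    # uniform sample of the configuration space (x, y, phi)
    def randConf():
        return (random.uniform(0, width),
                random.uniform(0, height),
                random.uniform(0, 2 * math.pi))
    return randConf

def makeNewConf(step, collides):
    # move from qnear a fixed step toward qrand; return None when the
    # new state collides (`collides` is a placeholder collision checker)
    def newConf(qnear, qrand):
        dx, dy = qrand[0] - qnear[0], qrand[1] - qnear[1]
        d = math.hypot(dx, dy)
        if d < 1e-9:
            return None
        t = min(1.0, step / d)  # do not overshoot the sampled point
        qnew = (qnear[0] + t * dx,
                qnear[1] + t * dy,
                qnear[2] + t * (qrand[2] - qnear[2]))
        return None if collides(qnew) else qnew
    return newConf
```

A proper implementation would also check the edge between qnear and qnew for collisions, not just the endpoint.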

You may notice that performance and the number of explored states are almost the same for the 2D and 3D problems (~1000). This is due to extremely useful RRT properties, including probabilistic completeness and exponential decay of the failure probability with the number of samples. Intuitively, an RRT can be seen as a Monte Carlo way of biasing the search toward the largest Voronoi regions. Of course you may also bias the search toward the goal: in open environments this gives a substantial speedup, and although in mazy environments you may try a lot of invalid configurations, in most cases it works better.
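Goal biasing is a tiny wrapper around randConf: with some small probability, return the goal itself instead of a uniform sample. The 5% default here is my guess at a common choice, not a value from the post:

```python
import random

def makeGoalBiased(randConf, qgoal, bias=0.05):
    # with probability `bias`, steer straight at the goal;
    # otherwise sample uniformly as before
    def biasedRandConf():
        return qgoal if random.random() < bias else randConf()
    return biasedRandConf
```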

Of course RRT has problems too. One of them is the inability to find a path through narrow passages. This is where the curse of dimensionality comes into play again, because the probability of sampling a state inside some sphere (say, the entrance to the passage) decays exponentially with dimensionality. Below is a comparison of finding a way through a passage for the 2D and 3D problems (by the way, A* finds the solution).

There is a vast number of other RRT improvements, some of them addressing this issue. But in my opinion most of them are domain-specific heuristics that require a lot of tuning and are useless outside their area of responsibility. And anyway, the narrow-passage problem seems to be rare, so I turned to other problems.

First of all, on each RRT iteration we search for the nearest neighbour. With the naive approach we get quadratic complexity, which is clearly unsatisfactory for large-scale problems. But what's the problem? There are lots of nearest-neighbour libraries. So without much thought I picked one - FLANN... and screwed up.

All (or almost all) nearest-neighbour libraries use some spatial-subdivision technique (k-d tree, BVH, ...) to group nearby points, and all of these are intended for Euclidean spaces (though some other metrics are supported). But the metric used in this problem is not a Euclidean distance (see the distance function), so I was forced to look for a nearest-neighbour algorithm supporting a general metric. I found two algorithms - the vantage-point tree (VP-tree) and the cover tree - and stuck with this library (if you know more, please give me a hint).
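To illustrate why a general metric is enough, here is a minimal VP-tree sketch (my own simplified version, not the library mentioned above). It only requires that the metric satisfy the triangle inequality, which is what makes the pruning valid:

```python
import random

class VPTree:
    # Minimal vantage-point tree for an arbitrary metric that
    # satisfies the triangle inequality.
    def __init__(self, points, metric):
        self.metric = metric
        self.root = self._build(list(points))

    def _build(self, points):
        if not points:
            return None
        vp = points.pop(random.randrange(len(points)))  # pick a vantage point
        if not points:
            return (vp, 0.0, None, None)
        dists = [self.metric(vp, p) for p in points]
        mu = sorted(dists)[len(dists) // 2]  # median distance splits the set
        inner = [p for p, d in zip(points, dists) if d < mu]
        outer = [p for p, d in zip(points, dists) if d >= mu]
        return (vp, mu, self._build(inner), self._build(outer))

    def nearest(self, q):
        best = [None, float('inf')]  # [best point, best distance]
        self._search(self.root, q, best)
        return best[0]

    def _search(self, node, q, best):
        if node is None:
            return
        vp, mu, inner, outer = node
        d = self.metric(vp, q)
        if d < best[1]:
            best[0], best[1] = vp, d
        # descend into the side containing q first; the triangle
        # inequality lets us prune the other side
        if d < mu:
            self._search(inner, q, best)
            if d + best[1] >= mu:
                self._search(outer, q, best)
        else:
            self._search(outer, q, best)
            if d - best[1] <= mu:
                self._search(inner, q, best)
```

With a well-balanced tree, queries take expected O(log n) metric evaluations instead of the O(n) of the naive scan inside the RRT loop.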

To speed things up even more I implemented the RRT-Connect algorithm. According to some researchers it is not probabilistically complete, but it is extremely fast in practice, and that seems to be true. The idea is very simple: grow trees from both the start and the goal and try to connect them. This works better than a purely greedy approach, because the trees bypass obstacles like regular RRT trees, but as soon as they are ready to be connected, they do so (and there are no parameters to tune!). Below is the implementation and a comparison with regular RRT.

def extend(G, qgoal, newConf, distance):
    qnear = min(G, key=lambda q: distance(q, qgoal))
    qnew = newConf(qnear, qgoal)
    if qnew is not None:
        G[qnew] = [qnear]
        G[qnear].append(qnew)
    return qnew

def RRTConnect(qinit, qgoal, randConf, distance, newConf, numSamples):
    Ga = {qinit: []}
    Gb = {qgoal: []}
    samples = 0  # shared budget for both growth and connection attempts
    while samples < numSamples:
        samples += 1
        qrand = randConf()
        qnew = extend(Ga, qrand, newConf, distance)  # grow Ga toward a sample
        if qnew is not None:
            # greedily extend Gb toward qnew until connected or blocked
            while samples < numSamples:
                samples += 1
                qext = extend(Gb, qnew, newConf, distance)
                if qext == qnew:  # trees met: merge their adjacency at qnew
                    Ga[qnew] += Gb[qnew]
                    Gb[qnew] = Ga[qnew]
                    return Ga, Gb, qnew
                elif qext is None:  # hit an obstacle: swap the trees' roles
                    Ga, Gb = Gb, Ga
                    break
    return Ga, Gb, None

Conclusion

As for me, I'm very excited about randomized planning algorithms - not only because of their performance, but also because of their versatility. I've only considered regular path planning, but you can plan in any phase space. Here is an example: suppose you are driving a car, and in each state your possible actions are to throttle, brake, or turn left or right. These change your velocity, position, and orientation. If you plan in the space that contains all of them, you can find a control sequence to, say, park your car. Another widespread example is control planning for a spaceship. So maybe in the future the Republic won't need Luke to steer a ship inside the Death Star and destroy it. Or he will just sit in his chair drinking cola while the algorithm does the rest. At least while it can; in either case it can warn the pilot, "Path not found... You will die in 5 minutes 35 seconds", and put on some suitable music :)
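The car example above can be made concrete with a simple kinematic step function (a toy model I've made up for illustration; a kinodynamic RRT would apply random actions from the nearest node instead of steering geometrically):

```python
import math

def carStep(state, action, dt=0.1):
    # Hypothetical phase-space state: (x, y, heading, speed).
    # Action: (acceleration, steering rate). Integrate one time step.
    x, y, heading, v = state
    acc, steer = action
    v += acc * dt
    heading += steer * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return (x, y, heading, v)
```

Planning in this four-dimensional space produces not just a path but a sequence of controls that respects how the car actually moves.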

So that is all for the moment; now we are slowly moving on to humanoid path planning, and I hope it goes as well as it did with the piano. Below is a demo of the described RRT algorithms.

P.S. While watching the last demo (RRT), I imagined a pair of drunken movers dragging the fucking piano through a thick forest at night. Cue the Benny Hill music for the complete effect :)
