##########################################

#### single source shortest path ####

##########################################

given a graph G(V,E), let's say you pick one of the vertices as the source s, and want to find the shortest path from s to all other vertices.

(1) dijkstra

(2) bellman ford

##################################

#### Dijkstra's algorithm ####

##################################

##

## input:

##

a directed weighted graph G(V,E), and the source vertex s (s belongs to V)

each edge (u,v) has a positive weight.

##

## output:

##

two arrays;

dist[v] = the shortest distance s-v

pred[v] = the predecessor of the vertex v. you can trace it back until you reach s.

NOTE: dist[s] = 0

NOTE: there can be multiple routes s-v that achieve the same dist[v]
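to make the pred[] tracing concrete, here is a minimal sketch. the pred[] values below are made up for a hypothetical 4-vertex graph with source 0; they are not from any example in this article:

```python
def trace_path(pred, s, v):
    """Walk pred[] backwards from v until we reach the source s."""
    path = [v]
    while path[-1] != s:
        path.append(pred[path[-1]])
    return path[::-1]  # reverse so the path reads s -> ... -> v

# hypothetical pred[] for a 4-vertex graph with source s = 0
pred = [-1, 0, 1, 1]
print(trace_path(pred, 0, 3))  # prints [0, 1, 3]
```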

##

## logic flow

##

- initialize: mark all dist[]=inf, pred[]=-1, dist[s]=0

- we start from the source vertex s

- look at all its immediate neighbors, and update dist[] info for them

- then mark s "visited" and take the vertex with the min dist[] among the rest, let's call it u

- examine all of u's immediate neighbors, and update their dist[] info

- then mark u "visited" and again take the vertex with the min dist[] among the rest

- repeat until all vertices are visited

because we assume a positive weight for every edge in G, and always pop the vertex with the shortest known distance s-v, when Vi is popped from Q, dist[Vi] = delta(Vi)

delta(Vi) = the shortest distance s-Vi

##

## implementation

##

There are two well known implementations;

(1) priority queue(PQ) based Dijkstra's algo

(2) generic array based Dijkstra's algo

they are similar but differ at certain core operations; the PQ-based version is suitable for sparse graphs, while the array-based version is suitable for dense graphs.

usually we use a heap for the PQ

essentially, the concept is to treat all input vertices as a set V, and add each vertex whose shortest path is finalized to a set S.

when every vertex in V is added to S, the sssp is done.

(ref) : see below ref for visual examples

below stuff written based on

- algorithms in a nutshell by heineman

- algorithms by sedgewick 3rd edition

- http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm

####

#### (1) priority queue(PQ) based Dijkstra's algo

####

--------------------------------------------

sssp_dijkstra_pq(G,s)

{

PQ pq;

foreach v (V) // initialization. O(V)

{

dist[v] = inf; // need to cleverly mark this somehow

pred[v] = -1;

}

dist[s] = 0;

foreach v (V) // O(V) for the whole loop

pq.enqueue(v,dist[v]) // each iteration has an enqueue op = O(log(V))

// lets say the shortest dist at the front

while( !pq.empty() )

{

u = pq.dequeue(); // gets the vertex with the min known s-v distance. O(log(V)) for a heap

foreach v (immediate neighbors of u)

{

len = dist[u] + w(u,v)

if(len < dist[v])

{

dist[v] = len;

pred[v] = u;

pq.decrease(v,len); // decrease-key, a special PQ operation. O(log(V))

}

}

}

}

---------------------------------------------
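the pseudocode above can be sketched with Python's heapq. one caveat: heapq has no decrease() operation, so a common workaround (an assumption of this sketch, not part of the pseudocode) is to push a fresh (dist, v) pair and skip stale entries on dequeue. the dict-of-adjacency-lists graph format is also my choice:

```python
import heapq

def sssp_dijkstra_pq(graph, s):
    """graph: {u: [(v, weight), ...]}, all weights assumed positive."""
    dist = {v: float('inf') for v in graph}
    pred = {v: -1 for v in graph}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:          # stale entry: a shorter path was already found
            continue
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                heapq.heappush(pq, (dist[v], v))  # lazy "decrease-key"
    return dist, pred

g = {'s': [('a', 4), ('b', 1)], 'a': [], 'b': [('a', 2)]}
dist, pred = sssp_dijkstra_pq(g, 's')
print(dist['a'])  # prints 3 (via s -> b -> a)
```

the lazy-insertion trick keeps the asymptotic bound at O((V+E)*log(V)) because each edge pushes at most one entry.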

##

## complexity of PQ-based Dijkstra algo

##

as above,

- the initialization iterates all vertices in G once. hence O(V)

- enqueueing all vertices takes O(V * log(V))

- the while loop dequeues each vertex once = O(V*log(V)), and relaxes each of the E edges at most once at O(log(V)) per decrease = O(E*log(V))

thus in total, O( V + V*log(V) + E*log(V)) = O( (V+E)*log(V) )

####

#### (2) generic array based Dijkstra's algo

####

notice it's almost identical.

-------------------------------------------- // pseudo code

sssp_dijkstra_4dense(G,s)

{

foreach v (V) // initialization, O(V)

{

dist[v] = inf;

pred[v] = -1;

visited[v] = false; // instead of PQ, we use this additional array to track visited vertices

}

dist[s] = 0;

while(true)

{

u = -1;

min_weight = inf;

foreach i (V) // find the min dist[i] among unvisited vertices, O(V)

if( !visited[i] && dist[i] < min_weight ) // this linear scan replaces the O(log(V)) dequeue of the PQ-based version

{

min_weight = dist[i];

u = i;

}

if( min_weight == inf ) break; // all unvisited vertices have infinity weight, i.e. unreachable from s

visited[u] = true;

foreach v (immediate neighbors of u)

{

len = dist[u] + w(u,v);

if(len < dist[v]) // notice no pq.decrease() equivalent operation here

{ // because we pay the price by O(V) to find the min as above

dist[v] = len;

pred[v] = u;

}

}

}

}

---------------------------------------------
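a runnable sketch of the same logic in Python (the dict-of-adjacency-lists representation is my assumption; any representation that allows neighbor iteration works):

```python
def sssp_dijkstra_dense(graph, s):
    """graph: {u: [(v, w), ...]}; a linear scan replaces the priority queue."""
    INF = float('inf')
    dist = {v: INF for v in graph}
    pred = {v: -1 for v in graph}
    visited = {v: False for v in graph}
    dist[s] = 0
    while True:
        # O(V) scan for the unvisited vertex with the smallest dist[]
        u, best = None, INF
        for v in graph:
            if not visited[v] and dist[v] < best:
                u, best = v, dist[v]
        if u is None:            # everything left is unreachable from s
            break
        visited[u] = True
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:   # no decrease-key needed here
                dist[v] = dist[u] + w
                pred[v] = u
    return dist, pred

g = {'s': [('a', 4), ('b', 1)], 'a': [], 'b': [('a', 2)]}
dist, pred = sssp_dijkstra_dense(g, 's')
print(dist['a'])  # prints 3 (via s -> b -> a)
```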

##

## complexity of array-based Dijkstra algo

##

initialization: O(V)

now look at the while loop. at each iteration, we visit one vertex.

if we determine all unvisited vertices have infinity weight, i.e. not reachable from s, then we terminate.

but in the worst case, where all vertices are reachable from s, this while loop iterates V times.

in each iteration, finding the min dist[u] from unvisited vertices: O(V) just a linear traversal.

and for each u iteration, we look at its immediate neighbors, which amounts to O(E) in the whole while loop.

thus the while loop is O(V^2 + E)

##

## PQ-based VS array-based

##

PQ-based : O((V+E)*log(V))

arr-based: O(V^2 + E)

we cannot say either one is always better, but rather it depends on the input graph density.

for a sparse graph, let's say all vertices are barely connected in a line, e.g. { v0 - v1 - v2 - ... - vk }, then E = V-1 = O(V)

for a dense graph, where every two vertices are connected, E = C(V,2) = V(V-1)/2 = O(V^2)

           sparse          | dense

-----------------------------------------------

PQ-based : O( V * log(V) ) | O( (V^2) * log(V) )

arr-based: O( V^2 )        | O( V^2 )

======> as above, for sparse graphs, PQ-based wins. whereas for dense graphs, arr-based wins.

(also, as discussed in prev article, dense graph favors adjacency matrix approach)

##################################################

## proof of correctness of Dijkstra's algo ##

##################################################

formal proof is done using induction and proof by contradiction.

##

## induction hypothesis:

##

lets define a set S : vertices dequeued from PQ

any vertex u in S has property dist[u] = shortest path from s to u

now when the next vertex v is added to S, the property "dist[v] = shortest path from s to v" holds true.

##

## proof

##

looking at the logic of Dijkstra's algo;

- the base case: dist[s] = 0 and every other vertex has infinite weight. we can add s to S and the property is maintained, since all edges have positive weight.

- also, for any vertex u inside S, when the next vertex v is about to be added, dist[u] <= dist[v], because the PQ pops vertices in nondecreasing order of distance; we could never have added a longer-path vertex before a shorter-path one.

- let a vertex v belong to V-S. if v is now being added to S, then v must be reachable from some vertex u inside S with the smallest path cost among all vertices in V-S reachable from S. now suppose L = dist[v] = dist[u] + w(u,v) > (the true shortest path s-v). then by definition, there must be a shorter path s-v that involves a vertex (or vertices) outside S. specifically two cases;

(1) there is a shorter path for dist[u] that contains some vertex outside S

but vertex u being inside S means dist[u] is already guaranteed to be the shortest path, and cannot get shorter. this contradicts the induction hypothesis. thus dist[u] must be the shortest possible, and since (u,v) gives the smallest total weight reachable from S at this iteration, dist[v] must be the shortest.

(2) there is a path that goes around some vertex outside S, connecting to v, that is actually the shortest.

(this is best explained in the ref slide from Kyoto univ.)

if any other path gives a shorter weight than L by way of vertices x,y, let's say

vertex x belongs to S

vertex y belongs to V-S

so, instead of

s -> ... -> u --(goes out of S)-> v

it is

s -> ... -> x --(goes out of S)-> y -> ... -> v

dist[y] + w(y,v) < L = dist[v]

then

dist[y] < L = dist[v] (since all edge weights are positive)

===> this contradicts the algorithm's choice of v, because dist[y] = dist[x] + w(x,y) < dist[v]

===> means y, not v, would have been chosen as the next vertex to add.

===> thus dist[v] = dist[u] + w(u,v) must be the shortest path.

(ref)

# general

http://en.wikipedia.org/wiki/Dijkstra's_algorithm

http://www.cs.princeton.edu/courses/archive/spr04/cos226/lectures/shortest-path.4up.pdf

# proof

https://www.cs.auckland.ac.nz/~jmor159/PLDS210/dij-proof.html

http://www.math.ucsd.edu/~fan/teach/202/notes/04greed.pdf

http://www.csee.wvu.edu/~ksmani/courses/fa05/gaoa/qen/dijk.pdf

###############################

#### bellman ford algo ####

###############################

yet another single-source-shortest-path algo.

input/output constraints are essentially the same. HOWEVER, unlike dijkstra's algo, bellman ford can deal with graphs that have negative weight edges.

##

## input:

##

a directed weighted graph G(V,E), and the source vertex s (s belongs to V)

each edge(u,v) has a weight that can be positive/zero/negative.

NOTE: one critical constraint is that G cannot contain a negative cycle, because, as you can imagine, we could loop that cycle infinitely to drive a path's weight to negative infinity.

##

## output:

##

two arrays;

dist[v] = the shortest distance s-v

pred[v] = the predecessor of the vertex v. you can trace it back until you reach s.

NOTE: dist[s] = 0

NOTE: there can be multiple routes s-v that achieve the same dist[v]

##

## logic flow (see ref book/wikipedia for good visual examples)

##

- initialization: dist[]=inf, pred[]=-1, dist[s]=0

- relax each edge in E. repeat V-1 times.

- repeat once more for the Vth time; if we can still relax, a negative cycle exists.

(ref)

algorithms in a nutshell by heineman

http://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm

------------------------------------------ // pseudo code

sssp_bellman_ford(G,s)

{

foreach v (V) // initialization, O(V)

{

dist[v] = inf;

pred[v] = -1;

}

dist[s] = 0;

for loop ( V-1 times) // O(V-1) = O(V)

foreach u (V) // check all edges = O(E)

foreach v (immediate neighbors of u)

{

len = dist[u] + w(u,v);

if(len < dist[v])

{

dist[v] = len;

pred[v] = u;

}

}

foreach u (V) // check each edge once more

foreach v (immediate neighbors of u)

{

len = dist[u] + w(u,v);

if(len < dist[v]) // shouldn't be able to relax any further after the above V-1 passes

throw("negative cycle exists"); // this suggests a negative cycle.

}

}

---------------------------------------
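a runnable sketch of the pseudocode in Python (again assuming a dict-of-adjacency-lists graph; note the negative edge b -> a, which Dijkstra's algo could not handle):

```python
def sssp_bellman_ford(graph, s):
    """graph: {u: [(v, w), ...]}; weights may be negative, but no negative cycle."""
    dist = {v: float('inf') for v in graph}
    pred = {v: -1 for v in graph}
    dist[s] = 0
    for _ in range(len(graph) - 1):      # V-1 passes over all edges
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    pred[v] = u
    for u in graph:                      # one extra pass: any relaxation => negative cycle
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                raise ValueError("negative cycle exists")
    return dist, pred

g = {'s': [('a', 4), ('b', 1)], 'a': [], 'b': [('a', -2)]}
dist, pred = sssp_bellman_ford(g, 's')
print(dist['a'])  # prints -1 (via s -> b -> a)
```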

======> (why V-1 times ?)

this is simple, and well explained with a visual example on wikipedia.

##

## best case:

##

consider a V=5 graph G where vertices are connected like a single line.

G = a --e1--> b --e2--> c --e3--> d --e4--> e // label each edge e1,e2,e3,e4

let's say "foreach u (V)" checks vertices from a to e in alphabetical order.

let's say we choose s = a

initialization;

dist[a]=0

dist[b-e]=inf

first iteration:

- starts dist[a] which is done

- relaxing dist[b] done, because of direct conn a-b

- relaxing dist[c] done, because of direct conn b-c

- relaxing dist[d] done, because of direct conn c-d

- relaxing dist[e] done, because of direct conn d-e

====> first iteration took care of it all.

##

## worst case:

##

again, consider a V=5 graph G where vertices are connected like a single line.

but this time,

G = e --e1--> d --e2--> c --e3--> b --e4--> a // label each edge e1,e2,e3,e4

let's say "foreach u (V)" checks vertices from a to e in alphabetical order.

let's say we choose s = e // notice the diff here

initialization;

dist[e]=0

dist[a-d]=inf

first iteration:

- starts dist[a] which cannot be relaxed cos dist[a] = inf

- relaxing dist[b] fails also cos dist[b] = inf

- relaxing dist[c] fails also cos dist[c] = inf

- relaxing dist[d] fails also cos dist[d] = inf

- relaxing dist[e] is done as dist[e]=0, and updates dist[d], i.e. relax(e1)

second iteration:

- starts dist[a] which cannot be relaxed cos dist[a] = inf

- relaxing dist[b] fails also cos dist[b] = inf

- relaxing dist[c] fails also cos dist[c] = inf

- relaxing dist[d] works to update dist[c], i.e. relax(e2)

- relaxing dist[e] done already

third iteration:

- starts dist[a] which cannot be relaxed cos dist[a] = inf

- relaxing dist[b] fails also cos dist[b] = inf

- relaxing dist[c] works to update dist[d], i.e. relax(e3)

- relaxing dist[d] done already

- relaxing dist[e] done already

fourth iteration:

- starts dist[a] which cannot be relaxed cos dist[a] = inf

- relaxing dist[b] works to update dist[a], i.e. relax(e4)

- relaxing dist[c] done already

- relaxing dist[d] done already

- relaxing dist[e] done already

===> all edges are relaxed at V-1 = 4 iterations.

===> now, because we never allow negative cycles, the longest possible shortest path (in terms of edge count) for a V=5 graph goes through all vertices, which means V-1 = 4 edges.

===> if the fifth iteration yields any relaxing, then it proves there is a negative cycle.
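the V-1 bound above can be checked empirically; this sketch counts how many passes actually relax something when edges are processed in the same alphabetical tail order as the walkthrough:

```python
def passes_until_stable(edges, vertices, s):
    """Count Bellman-Ford passes that perform at least one relaxation."""
    dist = {v: float('inf') for v in vertices}
    dist[s] = 0
    passes = 0
    while True:
        relaxed = False
        for u, v, w in edges:            # fixed edge order, like "foreach u (V)"
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                relaxed = True
        if not relaxed:
            break
        passes += 1
    return passes

# G = e -> d -> c -> b -> a, edges listed in alphabetical order of their tail vertex
edges = [('b', 'a', 1), ('c', 'b', 1), ('d', 'c', 1), ('e', 'd', 1)]
print(passes_until_stable(edges, 'abcde', 'e'))  # prints 4 = V-1
```

reversing the edge list to start from ('e', 'd', 1) reproduces the best case, where a single pass settles everything.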

##

## complexity of bellman ford

##

outer loop: O(V)

inner loop: O(E)

in total: O(V*E)

###################################################

## proof of correctness of bellman ford algo ##

###################################################

like dijkstra's algo, the proof of bellman ford employs mathematical induction.

# lets say a shortest path from s to t: v_0 -> v_1 -> v_2 -> ... -> v_k

v_0 = s

v_k = t

k is at most V-1

##

## induction hypothesis:

## after the i'th pass of relaxing all edges, dist[v_i] = shortest path from s to v_i.

##

base case ( i=0 ) is trivial: v_0 = s, and dist[s] = 0 is set at initialization, which is already the shortest.

inductive step: assume after the i'th pass, dist[v_i] = the shortest path s to v_i. the (i+1)'th pass relaxes every edge, including (v_i, v_(i+1)), so afterwards dist[v_(i+1)] = the shortest path s to v_(i+1).

now, because we assume there is no negative cycle, the shortest path from v_0 to v_k visits each vertex at most once, so it uses at most V-1 edges, i.e. k is at most V-1.

===> we need V-1 passes for a graph of V vertices.

now, to build on the induction hypothesis;

for dist[v_(i+1)], relaxation leads to either case;

dist[v_(i+1)] = min ( dist[v_i] + w(v_i, v_(i+1)), dist[v_(i+1)] )

Now if it's the former, we just relaxed the edge v_i to v_(i+1) and dist[v_(i+1)] must be its shortest because in order for it to be not, dist[v_i] has to be a non-shortest which breaks the induction hypothesis.

If the latter, good to know the shortest path didn't have to wait till the i'th iteration to be found (although it had to wait for the i'th iteration to be confirmed. ) it was one of those paths comprised of edges that we already all relaxed and know to be the shortest.

Thus, bellman ford algo computes the shortest path for all vertices from s. As for the number of passes, V-1 suffices. But we run one extra pass, because if any relaxation still happens, it suggests a negative cycle, which is important to detect.

(ref)

p. 671 Sedgewick 4th edition

http://www.slideee.com/slide/introduction-to-algorithms-24-shortest-paths-problem

http://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm

- 2014-08-14 02:58:20 |
- Category : algo