

+1 vote

hey,

in the PRAM algorithms chapter, slide 70 mentions a parallel Boolean matrix multiplication that uses $O(N^3)$ processors and has depth $O(1)$, so its work is $O(N^3)$.
Why is this algorithm work efficient, when there are matrix multiplication algorithms like Strassen's algorithm with $O(N^{2.807})$ and the Coppersmith–Winograd algorithm with $O(N^{2.374})$?
I read that Strassen's algorithm is considered practical even with large constants.
And copying the Boolean arrays into integer arrays can be done with work $O(N^2)$.


 

in * TF "Emb. Sys. and Rob." by (810 points)
edited by
What is your definition of *work efficient*? If it is something along the lines of "an algorithm is work efficient if the work is polynomial in the sequential running time", then $O(n^3)$ looks efficient.

The other algorithms you list suggest that there are other (maybe _more efficient_) algorithms. But this does not mean the $n^3$ algorithm has to be inefficient.
The definition is: a PRAM algorithm with $\text{work}(N) \in \Theta(T_{\text{sequential}}(N))$ (where $T_{\text{sequential}}$ is the running time of the best sequential algorithm for the given problem) is called work efficient.
So normally I would say that the definition is not fulfilled.

1 Answer

+1 vote
 
Best answer

I agree that the Boolean matrix multiplication given on that slide is not work efficient. It has a parallel runtime of $O(1)$ and work $O(N^3)$, which would be work efficient if the best sequential algorithm had runtime $O(N^3)$. It is true that there are better sequential algorithms, as mentioned above, and compared to these the work is larger. It is therefore not work efficient.
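To make the comparison explicit, here is a small worked sketch (using the exponent $2.374$ for Coppersmith–Winograd quoted in the question). The work of the slide's algorithm is its processor count times its depth:

$$W(N) = p(N)\cdot T(N) = N^3 \cdot O(1) = \Theta(N^3), \qquad T_{\text{sequential}}(N) = O(N^{2.374}),$$

so $W(N)/T_{\text{sequential}}(N) = \Omega(N^{0.626})$ grows without bound, i.e., $W(N) \notin \Theta(T_{\text{sequential}}(N))$.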

Note, however, that Strassen's algorithm assumes the operations $+$ and $*$ of a ring, which is not the case for $\vee$ and $\wedge$. Still, we can use Strassen's algorithm if we simply treat the Boolean matrices as integer matrices (and, if you want, compute modulo $n+1$ for $n \times n$ matrices). Note that the elements of the product matrix are computed as $c[i][j] = \sum_{k=0}^{n-1} a[i][k]\cdot b[k][j]$, and that $a[i][k]\cdot b[k][j]$ is the same as $a[i][k] \wedge b[k][j]$, i.e., $1$ iff both $a[i][k]$ and $b[k][j]$ are $1$. Hence, $\bigvee_{k=0}^{n-1} a[i][k] \wedge b[k][j] = 1$ iff $\sum_{k=0}^{n-1} a[i][k]\cdot b[k][j] > 0$. For this reason, we can use any matrix multiplication algorithm, such as Strassen's algorithm or Coppersmith–Winograd, also for Boolean matrix multiplication.
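As a minimal sketch of this reduction (not the slide's PRAM algorithm; the function name and the use of numpy's `@` operator are my own choices here, and numpy does not use Strassen internally, but any fast matrix multiplication routine could be plugged in):

```python
import numpy as np

def boolean_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Boolean matrix product via ordinary integer matrix multiplication.

    A and B are n x n Boolean (0/1) matrices. We reinterpret them as integer
    matrices, multiply them with any fast matrix multiplication routine
    (here numpy's @), and map every positive entry back to True: the (i,j)
    entry of the integer product counts the k with a[i][k] = b[k][j] = 1,
    so it is positive iff OR over k of (a[i][k] AND b[k][j]) is 1.
    """
    A_int = A.astype(np.int64)   # copying into integer arrays: O(n^2) work
    B_int = B.astype(np.int64)
    return (A_int @ B_int) > 0   # thresholding recovers the Boolean product

# Small usage example
A = np.array([[1, 0], [1, 1]], dtype=bool)
B = np.array([[0, 1], [1, 0]], dtype=bool)
print(boolean_matmul(A, B))      # [[False  True] [ True  True]]
```

The two casts are exactly the $O(N^2)$-work copying step mentioned in the question; the asymptotic cost is dominated by whichever integer matrix multiplication is used.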

For this reason, there are sequential algorithms for Boolean matrix multiplication with a runtime below $O(N^3)$, and therefore the considered Boolean matrix algorithm with a parallel runtime of $O(1)$ and work $O(N^3)$ is not work efficient (in the sense of PRAMs, i.e., Brent's scheduling principle).

It can be shown that (Boolean) matrix multiplication can be implemented by a PRAM algorithm that runs in parallel time $O(\log(n))$ with work $O(M(n))$, where $M(n)$ is the runtime of the best sequential algorithm for matrix multiplication (see JáJá92, pages 248–249). It would be interesting to combine both algorithms!

by (170k points)
selected by

