I agree that the Boolean matrix multiplication given on that slide is not work efficient. It has a parallel runtime of O(1) and work O(N^{3}), which would be work efficient only if the best sequential algorithm had runtime O(N^{3}). As mentioned above, there are better sequential algorithms, and compared to these it does more work; it is therefore not work efficient.
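To make the work bound concrete, here is a minimal sequential sketch of that naive algorithm (the names are mine, not from the slide): each of the N^{2} output entries is an OR over N AND terms, so there are N^{3} elementary operations in total; on a CRCW PRAM, one processor per AND term yields O(1) parallel time at O(N^{3}) work.

```python
def bool_matmul_naive(a, b):
    """Naive Boolean matrix product: c[i][j] = OR_k (a[i][k] AND b[k][j]).
    Sequentially this is N^3 work; on a CRCW PRAM each of the N^3 AND
    terms gets its own processor and the ORs resolve in O(1) time."""
    n = len(a)
    return [[int(any(a[i][k] and b[k][j] for k in range(n)))
             for j in range(n)]
            for i in range(n)]
```
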

Note, however, that Strassen's algorithm assumes operations + and * of a ring, which is not the case for ⋁ and ⋀. Still, we can use Strassen's algorithm if we simply treat the Boolean matrices as integer matrices (and, if you want, compute modulo n+1 for n×n matrices, since each entry of the integer product is at most n). Note that the elements of the product matrix are computed as c[i][j] = sum_{k=0..n-1} a[i][k]*b[k][j], and that a[i][k]*b[k][j] is the same as a[i][k]⋀b[k][j], i.e., 1 iff both a[i][k] and b[k][j] are 1. Hence, ⋁_{k=0..n-1} a[i][k]⋀b[k][j] = 1 iff sum_{k=0..n-1} a[i][k]*b[k][j] > 0. For this reason, any matrix multiplication algorithm, such as Strassen's or Coppersmith-Winograd, can also be used for Boolean matrix multiplication.
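The reduction can be sketched as follows. For brevity the inner integer product here is the plain cubic one; it stands in for Strassen's or any other fast ring-based algorithm, since only the final thresholding step is specific to the Boolean case.

```python
def bool_matmul_via_int(a, b):
    """Boolean matrix product via integer matrix multiplication:
    treat the 0/1 matrices as integer matrices, multiply them over the
    integers (any ring-based algorithm such as Strassen's would do;
    a plain cubic product is used here for brevity), then threshold:
    c[i][j] = 1 iff the integer entry is > 0."""
    n = len(a)
    c_int = [[sum(a[i][k] * b[k][j] for k in range(n))
              for j in range(n)]
             for i in range(n)]
    return [[int(x > 0) for x in row] for row in c_int]
```

The thresholding works because the integer entry sum_{k} a[i][k]*b[k][j] counts exactly the indices k for which both a[i][k] and b[k][j] are 1, so it is positive iff the Boolean OR is 1.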

Consequently, there are sequential algorithms for Boolean matrix multiplication with a runtime below O(N^{3}), and therefore the considered Boolean matrix algorithm with parallel runtime O(1) and work O(N^{3}) is not work efficient (in the sense of PRAMs, i.e., Brent's scheduling principle).

It can be shown that (Boolean) matrix multiplication can be implemented by a PRAM algorithm that runs in parallel time O(log(n)) with work O(M(n)), where M(n) is the runtime of the best sequential algorithm for matrix multiplication (see JáJá92, pages 248-249). It would be interesting to combine both algorithms!
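The O(log(n)) parallel time comes from combining values in a balanced tree rather than in a linear scan. As an illustration (a sequential simulation of the parallel rounds, not the algorithm from JáJá92), here is the standard tree reduction for the OR of n bits: each round halves the list, so after ceil(log2(n)) rounds one value remains, and within a round all pairs could be combined by independent processors.

```python
def tree_or(bits):
    """OR-reduce a list of bits by pairwise combination.
    Each while-iteration corresponds to one parallel round: all pairs
    (bits[0],bits[1]), (bits[2],bits[3]), ... are combined, halving the
    list, so the number of rounds (= PRAM depth) is ceil(log2 n)."""
    while len(bits) > 1:
        bits = [bits[i] | bits[i + 1] if i + 1 < len(bits) else bits[i]
                for i in range(0, len(bits), 2)]
    return bits[0] if bits else 0
```

Applying this reduction to the N AND terms of each output entry gives the O(log(n)) parallel time on a CREW PRAM, and Brent's scheduling principle then trades processors for time without increasing the total work.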