This content originally appeared on DEV Community and was authored by Kyle Pena
A reader of my last blog post pointed out to me that `np.einsum` is considerably slower than `np.matmul` unless you turn on the `optimize` flag in the parameter list: `np.einsum(..., optimize=True)`.
Being somewhat skeptical, I fired up a Jupyter notebook and did some preliminary tests. And I'll be damned, it's completely true - even for two-operand cases where `optimize` shouldn't make any difference at all!
Test 1 is pretty simple - matrix multiplication of two C-order (a.k.a. row-major) matrices of varying dimensions. `np.matmul` is consistently about twenty times faster.
M1 | M2 | np.einsum | np.matmul | np.einsum / np.matmul |
---|---|---|---|---|
(100, 500) | (500, 100) | 0.765 | 0.045 | 17.055 |
(100, 1000) | (1000, 100) | 1.495 | 0.073 | 20.554 |
(100, 10000) | (10000, 100) | 15.148 | 0.896 | 16.899 |
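For reference, here is a minimal sketch of the kind of timing harness behind numbers like these. The repetition count, random data, and exact call pattern are my assumptions, not the original notebook:

```python
import numpy as np
import timeit

# Minimal sketch of the benchmark: two C-order (row-major) operands,
# timed with the default optimize=False for np.einsum.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 1000))   # C-order by default
B = rng.standard_normal((1000, 100))

t_einsum = timeit.timeit(lambda: np.einsum('ij,jk->ik', A, B), number=100)
t_matmul = timeit.timeit(lambda: np.matmul(A, B), number=100)

print(f"einsum: {t_einsum:.3f}s  matmul: {t_matmul:.3f}s  ratio: {t_einsum / t_matmul:.1f}x")
```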
For Test 2, with `optimize=True`, the results are drastically different. `np.einsum` is still slower, but only about 1.5 times slower at worst!
M1 | M2 | np.einsum | np.matmul | np.einsum / np.matmul |
---|---|---|---|---|
(100, 500) | (500, 100) | 0.063 | 0.043 | 1.474 |
(100, 1000) | (1000, 100) | 0.086 | 0.067 | 1.284 |
(100, 10000) | (10000, 100) | 1.000 | 0.936 | 1.068 |
Why?
Maybe this has something to do with row-major vs. column-major layout? In the grade-school method of matrix multiplication, to calculate a single entry you iterate over a row of `op1` while iterating over a column of `op2`, so putting the second argument in column-major order might result in a speedup for `np.einsum` (provided that `np.einsum` works roughly like a generalized version of the grade-school method of matrix multiplication, which I suspect it does).
So, for Test 3, I passed in a column-major matrix as the second operand to see whether this sped up unoptimized `np.einsum`.
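A sketch of how the column-major operand can be set up - I'm assuming an `np.asfortranarray` copy here, but any equivalent Fortran-order construction works:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 1000))                       # C-order (row-major)
B = np.asfortranarray(rng.standard_normal((1000, 100)))    # column-major copy

print(B.flags['F_CONTIGUOUS'])   # True: the second operand is Fortran-ordered

# Same contraction as before, just with a column-major second operand
C = np.einsum('ij,jk->ik', A, B)
```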
Here are the results: `np.einsum` is considerably worse. Clearly, something is going on that I don't understand. Time to start digging.
M1 | M2 | np.einsum | np.matmul | np.einsum / np.matmul |
---|---|---|---|---|
(100, 500) | (500, 100) | 1.486 | 0.056 | 26.541 |
(100, 1000) | (1000, 100) | 3.885 | 0.125 | 31.070 |
(100, 10000) | (10000, 100) | 49.669 | 1.047 | 47.444 |
Going Deeper
The release notes of NumPy 1.12.0 mention the introduction of the `optimize` flag. However, the purpose of `optimize` seems to be to determine the order in which a chain of operands is combined (i.e., the associativity) - so `optimize` shouldn't make a difference for just two operands, right? Here are the release notes:
> np.einsum now supports the optimize argument which will optimize the order of contraction. For example, np.einsum would complete the chain dot example np.einsum('ij,jk,kl->il', a, b, c) in a single pass which would scale like N^4; however, when optimize=True np.einsum will create an intermediate array to reduce this scaling to N^3 or effectively np.dot(a, b).dot(c). Usage of intermediate tensors to reduce scaling has been applied to the general einsum summation notation. See np.einsum_path for more details.
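To see the contraction-order side of `optimize` in action, `np.einsum_path` reports the pairwise order that would be chosen for a chain of operands. The shapes below are just illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((10, 100))
b = rng.standard_normal((100, 1000))
c = rng.standard_normal((1000, 5))

# einsum_path reports the contraction order that optimize would pick:
# a list like ['einsum_path', (1, 2), (0, 1)] describing pairwise contractions,
# plus a human-readable breakdown of the cost.
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
print(path)
print(info)
```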
To compound the mystery, some later release notes indicate that `np.einsum` was upgraded to use `tensordot` (which itself uses BLAS where appropriate). Calling `tensordot` where appropriate would speed up `einsum` for a `matmul`-like operation. But we only see the speedup when `optimize` is True.
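For a two-operand matrix product, the `tensordot` spelling is just another way of writing the same contraction, so a BLAS-backed route exists in principle. A quick sanity check (not NumPy's internals, just the equivalence):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 500))
B = rng.standard_normal((500, 100))

# Three spellings of the same contraction (equal up to floating-point noise)
via_einsum    = np.einsum('ij,jk->ik', A, B)
via_tensordot = np.tensordot(A, B, axes=([1], [0]))
via_matmul    = A @ B

print(np.allclose(via_einsum, via_tensordot), np.allclose(via_einsum, via_matmul))
```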
So what's going on?
If we read `def einsum(*operands, out=None, optimize=False, **kwargs)` in numpy/numpy/_core/einsumfunc.py, we'll come to this early-out logic almost immediately:
```python
# If no optimization, run pure einsum
if optimize is False:
    if specified_out:
        kwargs['out'] = out
    return c_einsum(*operands, **kwargs)
```
Does `c_einsum` utilize `tensordot`? I doubt it. Later on in the code, we see the `tensordot` call that the 1.14 notes seem to be referencing:
```python
# Start contraction loop
for num, contraction in enumerate(contraction_list):
    ...
    # Call tensordot if still possible
    if blas:
        ...
        # Contract!
        new_view = tensordot(
            *tmp_operands, axes=(tuple(left_pos), tuple(right_pos))
        )
```
So, here's what's happening:
- The `contraction_list` codepath is only taken if `optimize` is True.
- `tensordot` is only called in the `contraction_list` loop.
- Therefore, we only invoke `tensordot` (and therefore BLAS) when `optimize` is True.
To me, this seems like a bug. IMHO, the "early-out" at the beginning of `np.einsum` should still detect whether the operands are `tensordot`-compatible and call out to `tensordot` if possible. Then we would get the obvious BLAS speedups even when `optimize` is False. After all, the semantics of `optimize` pertain to contraction order, not to the usage of BLAS, which I think should be a given.

The boon here is that people invoking `np.einsum` for operations that are equivalent to a `tensordot` invocation would get the appropriate speedups, which makes `np.einsum` a bit less dangerous from a performance point of view.
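To make the idea concrete, here is a hypothetical, heavily simplified sketch of that dispatch logic - this is NOT NumPy's code, just an illustration of "detect a plain pairwise contraction and hand it to `tensordot`":

```python
import numpy as np

def einsum2_blas(subscripts, a, b):
    """Hypothetical sketch (not NumPy source): if a two-operand einsum is a
    plain pairwise contraction, dispatch it to tensordot so BLAS can be used.
    Assumes an explicit '->' output in the subscripts."""
    inputs, out = subscripts.replace(' ', '').split('->')
    lhs, rhs = inputs.split(',')
    # Only handle the simple case: no repeated labels within an operand
    if len(set(lhs)) != len(lhs) or len(set(rhs)) != len(rhs):
        return np.einsum(subscripts, a, b)                      # fall back
    shared = [c for c in lhs if c in rhs and c not in out]      # contracted labels
    kept = [c for c in lhs if c not in shared] + [c for c in rhs if c not in shared]
    if sorted(kept) != sorted(out):
        return np.einsum(subscripts, a, b)                      # fall back
    axes = ([lhs.index(c) for c in shared], [rhs.index(c) for c in shared])
    result = np.tensordot(a, b, axes=axes)                      # BLAS-backed where possible
    # tensordot orders output axes as kept-lhs then kept-rhs; permute to match `out`
    return np.einsum(''.join(kept) + '->' + out, result)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((100, 500)), rng.standard_normal((500, 100))
assert np.allclose(einsum2_blas('ij,jk->ik', A, B), A @ B)
```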
How does c_einsum actually work?
I spelunked some C code to check it out. The heart of the implementation is here.
After a great deal of argument parsing and parameter preparation, the axis iteration order is determined and a special-purpose iterator is prepared. Each yield from the iterator represents a different way to stride over all operands simultaneously.
```c
/* Allocate the iterator */
iter = NpyIter_AdvancedNew(nop+1, op, iter_flags, order, casting, op_flags,
                           op_dtypes, ndim_iter, op_axes, NULL, 0);
```
Assuming certain special-case optimizations don't apply, an appropriate sum-of-products (`sop`) function is determined based on the datatypes involved:
```c
sop = get_sum_of_products_function(nop,
                        NpyIter_GetDescrArray(iter)[0]->type_num,
                        NpyIter_GetDescrArray(iter)[0]->elsize,
                        fixed_strides);
```
And then this sum-of-products (`sop`) operation is invoked on each stride returned from the iterator, as seen here:
```c
iternext = NpyIter_GetIterNext(iter, NULL);
if (iternext == NULL) {
    goto fail;
}
dataptr = NpyIter_GetDataPtrArray(iter);
stride = NpyIter_GetInnerStrideArray(iter);
countptr = NpyIter_GetInnerLoopSizePtr(iter);
needs_api = NpyIter_IterationNeedsAPI(iter);

NPY_BEGIN_THREADS_NDITER(iter);
NPY_EINSUM_DBG_PRINT("Einsum loop\n");
do {
    sop(nop, dataptr, stride, *countptr);
} while (!(needs_api && PyErr_Occurred()) && iternext(iter));
```
That's my understanding of how `einsum` works, which is admittedly still a little thin - it really deserves more than the hour I've given it. It does generally confirm my suspicion, however, that it acts like a generalized version of the grade-school method of matrix multiplication. Ultimately, it delegates out to a series of "sum of product" operations which rely on "striders" moving through the operands - not too different from what you do with your fingers when you learn matrix multiplication.
Summary
So why is `np.einsum` faster when you call it with `optimize=True`? There are two reasons.
The first (and original) reason is that it tries to find an optimal contraction path, which results in speedups when more than two operands are involved.
The second (and newer) reason is that when `optimize=True`, it activates a codepath that calls `tensordot` where possible, which in turn tries to use BLAS. And BLAS is about as optimized as it gets.
Therefore, even in the two-operand case (where there is only one trivial contraction path possible), `optimize` makes a big difference, resulting in the substantial speedups we find in our performance tests.