
How Do I Speed Up Profiled Numpy Code - Vectorizing, Numba?

I am running a large Python program to optimize portfolio weights for (Markowitz) portfolio optimization in finance. When I profile the code, 90% of the run time is spent calculating …

Solution 1:

In my environment, matmul (`@`) has a modest time advantage over einsum and dot:

In [27]: np.allclose(np.einsum('ijk,k', asset_returns, weights), asset_returns @ weights)
Out[27]: True

In [28]: %timeit asset_returns @ weights
100 loops, best of 3: 3.91 ms per loop

In [29]: %timeit np.einsum('ijk,k', asset_returns, weights)
100 loops, best of 3: 4.73 ms per loop

In [30]: %timeit np.dot(asset_returns, weights)
100 loops, best of 3: 6.8 ms per loop

I think the times are limited by the total number of calculations more than by coding details. All of these pass the calculation to compiled numpy code. The fact that your original looped version is relatively fast probably has to do with the small number of loop iterations (only 60), and memory-management overhead in the full-sized dot.
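The original looped version isn't shown in the question excerpt; as a minimal sketch of the equivalence being discussed (array shapes `(horizon, n_scenarios, n_assets)` are an assumption inferred from the `'ijk,k'` einsum signature), both the per-period loop and the single vectorized call produce the same result:

```python
import numpy as np

# Hypothetical sample data matching the 'ijk,k' einsum pattern.
rng = np.random.default_rng(0)
horizon, n_scenarios, n_assets = 60, 1000, 10
asset_returns = rng.normal(0.01, 0.05, (horizon, n_scenarios, n_assets))
weights = rng.random(n_assets)
weights /= weights.sum()

# Looped version: one small matrix-vector product per period (60 iterations),
# each of which already runs in compiled numpy code.
z_loop = np.empty((horizon, n_scenarios))
for t in range(horizon):
    z_loop[t] = asset_returns[t] @ weights

# Vectorized version: a single call into compiled numpy code.
z_vec = asset_returns @ weights

assert np.allclose(z_loop, z_vec)
```

With only 60 iterations, the per-call Python overhead of the loop is small, which is why the looped and vectorized versions end up close in time.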

And numba is probably not replacing the dot code.

So a tweak here or there might speed up your code by a factor of 2, but don't expect an order of magnitude improvement.
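For numba to help, it generally needs explicit loops it can compile in nopython mode rather than a call it just forwards to numpy's dot. A hedged sketch of what such a rewrite might look like (the function name and shapes are assumptions, not the asker's code; the decorator falls back to a no-op if numba isn't installed):

```python
import numpy as np

try:
    from numba import njit
except ImportError:  # numba is optional here; fall back to plain Python
    def njit(f):
        return f

@njit
def pf_returns_loop(weights, asset_returns, horizon=60):
    # Explicit loops let numba compile the arithmetic itself instead of
    # dispatching back to numpy's compiled dot, where it adds nothing.
    n_scenarios = asset_returns.shape[1]
    n_assets = asset_returns.shape[2]
    pf = np.ones(n_scenarios)
    for t in range(horizon):
        for s in range(n_scenarios):
            acc = 0.0
            for a in range(n_assets):
                acc += asset_returns[t, s, a] * weights[a]
            pf[s] *= 1.0 + acc
    return pf ** (12.0 / horizon) - 1.0
```

Even compiled, this does the same floating-point work as the numpy version, which is consistent with the answer's point: don't expect an order-of-magnitude win.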

Solution 2:

Here's a version that uses np.einsum to get a little bit of a speed-up:

def get_pf_returns3(weights, asset_returns, horizon=60):
    # Collapse the per-period loop into a single einsum over the first
    # `horizon` periods: z[i, j] = sum_k asset_returns[i, j, k] * weights[k]
    z = np.einsum("ijk,k -> ij", asset_returns[:horizon, :, :], weights)
    # Compound the period returns, then annualize (12 periods per year).
    pf = np.multiply.reduce(1 + z)
    return pf ** (12.0 / horizon) - 1

And then timings:

%timeit get_pf_returns(weights, asset_returns)
%timeit get_pf_returns3(weights, asset_returns)
print(np.allclose(get_pf_returns(weights, asset_returns), get_pf_returns3(weights, asset_returns)))

# 1000 loops, best of 3: 727 µs per loop
# 1000 loops, best of 3: 638 µs per loop
# True

The timings on your machine could be different depending on hardware and the libraries numpy is compiled against.
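Since matmul/dot performance depends heavily on which BLAS/LAPACK libraries your numpy build links against, you can inspect that directly:

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration (e.g. OpenBLAS, MKL);
# these libraries largely determine matmul/dot/einsum performance.
np.show_config()
```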
