Compute divergence of a vector field using Python

Question:

Is there a function that can be used to calculate the divergence of a vector field? (One exists in Matlab.) I would expect it to exist in numpy/scipy, but I cannot find it using Google.

I need to calculate div[A * grad(F)], where

F = np.array([[1,2,3,4],[5,6,7,8]]) # (2D numpy ndarray)

A = np.array([[1,2,3,4],[1,2,3,4]]) # (2D numpy ndarray)

so grad(F) is a list of 2D ndarrays

I know I can calculate the divergence manually (a sketch follows below), but I do not want to reinvent the wheel, and I would also expect something more optimized. Does anyone have suggestions?
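A minimal sketch of that manual computation, assuming unit grid spacing and using np.gradient for the partial derivatives:

import numpy as np

def div_of_scaled_gradient(A, F):
    """Sketch of div(A * grad(F)) for a 2-D scalar field F: take the
    gradient of F, scale each component by A, then sum the matching
    partial derivatives (unit grid spacing assumed)."""
    dFdx, dFdy = np.gradient(F)                # grad(F): axis 0, then axis 1
    return (np.gradient(A * dFdx, axis=0)      # d(A * dF/dx)/dx
            + np.gradient(A * dFdy, axis=1))   # d(A * dF/dy)/dy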

Asked By: nyvltak


Answer #1:

Just a hint for everybody reading that:

The functions above do not compute the divergence of a vector field; they sum the partial derivatives of a scalar field A:

result = dA/dx + dA/dy

in contrast to the divergence of a vector field (three-dimensional example):

result = sum_i dAi/dxi = dAx/dx + dAy/dy + dAz/dz
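For example, a minimal sketch of the difference (passing the grid coordinates to np.gradient so the spacing is right):

import numpy as np

x, y = np.meshgrid(np.linspace(0., 1., 50), np.linspace(0., 1., 50), indexing='ij')

# Scalar field: summing its partials gives dA/dx + dA/dy -- NOT a divergence
A = x * y
scalar_sum = np.add.reduce(np.gradient(A, x[:, 0], y[0, :]))

# Vector field F = (x**2, y**2): its divergence is dFx/dx + dFy/dy = 2x + 2y
div_F = (np.gradient(x**2, x[:, 0], axis=0)
         + np.gradient(y**2, y[0, :], axis=1))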

Vote them all down! It is mathematically simply wrong.

Cheers!

Answered By: Polly

Answer #2:

Based on Juh_'s answer, but modified for the correct vector-field divergence formula:

import numpy as np

def divergence(f):
    """
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

Matlab’s documentation uses this exact formula (scroll down to Divergence of a Vector Field)
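A quick sanity check, using a field whose divergence is known analytically: F = (x, y) has divergence 1 + 1 = 2 everywhere. The grid below has unit spacing, which np.gradient assumes by default:

import numpy as np

x, y = np.meshgrid(np.arange(10.), np.arange(10.), indexing='ij')
print(divergence([x, y]))   # prints 2.0 everywhere: d(x)/dx + d(y)/dy = 1 + 1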

Answered By: Daniel

Answer #3:

import numpy as np

def divergence(field):
    """Return the divergence of an n-D field."""
    return np.sum(np.gradient(field), axis=0)

Answered By: user2818943

Answer #4:

The answer of @user2818943 is good, but it can be optimized a little:

from functools import reduce  # needed on Python 3, where reduce is no longer a builtin

def divergence(F):
    """Compute the divergence of the n-D scalar field `F`."""
    return reduce(np.add, np.gradient(F))

Timeit:

F = np.random.rand(100,100)
timeit reduce(np.add,np.gradient(F))
# 1000 loops, best of 3: 318 us per loop

timeit np.sum(np.gradient(F),axis=0)
# 100 loops, best of 3: 2.27 ms per loop

About 7 times faster: np.sum implicitly constructs a 3-D array from the list of gradient fields returned by np.gradient. This is avoided by using reduce.


Now, about your question: what do you mean by div[A * grad(F)]?

  1. About A * grad(F): A is a 2-D array and grad(F) is a list of 2-D arrays, so I take it to mean multiplying each gradient field by A.
  2. Applying the divergence to that (A-scaled) gradient field is unclear: by definition, div(F) = dF/dx + dF/dy + ..., so I guess this is just an error of formulation.

For 1, multiplying the summed elements Bi by the same factor A can be factored out:

Sum(A*Bi) = A*Sum(Bi)

Thus, you can get this weighted gradient simply with: A*divergence(F)
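That identity is easy to check numerically (a quick sketch using random fields):

import numpy as np
from functools import reduce

F = np.random.rand(5, 5)
A = np.random.rand(5, 5)

# A * Sum(Bi) == Sum(A * Bi), where Bi are the gradient components of F
lhs = A * reduce(np.add, np.gradient(F))
rhs = reduce(np.add, [A * g for g in np.gradient(F)])
assert np.allclose(lhs, rhs)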

If A is instead a list of factors, one for each dimension, then the solution would be:

import numpy as np
from functools import reduce

def weighted_divergence(W, F):
    """
    Return the divergence of the n-D array `F` with its gradient weighted by `W`.

    `W` is a list of factors, one for each dimension of `F`: the gradient of
    `F` over the `i`th dimension is multiplied by `W[i]`. Each `W[i]` can be
    a scalar or an array with the same (or a broadcastable) shape as `F`.
    """
    wGrad = map(np.multiply, W, np.gradient(F))   # weight each partial derivative
    return reduce(np.add, wGrad)

result = weighted_divergence([A, A], F)   # e.g. the same weight A for every dimension
Answered By: Juh_

Answer #5:

Daniel's modification is the right answer; let me explain the self-defined divergence function in more detail.

np.gradient() computes the gradient of a scalar field: np.gradient(f) returns (df/dx, df/dy, df/dz, ...).

The divergence of a vector field instead differentiates each component along its own axis:

divergence(f) = dfx/dx + dfy/dy + dfz/dz + ... = np.gradient(fx, axis=0) + np.gradient(fy, axis=1) + np.gradient(fz, axis=2) + ...

Let's test it against the divergence example in Matlab's documentation:

import numpy as np
import matplotlib.pyplot as plt

# Grid: 50 x 50 points on [-2, 2] x [-2, 2]
NY = 50
ymin = -2.
ymax = 2.
dy = (ymax - ymin) / (NY - 1.)

NX = NY
xmin = -2.
xmax = 2.
dx = (xmax - xmin) / (NX - 1.)

def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

y = np.array([ymin + float(i) * dy for i in range(NY)])
x = np.array([xmin + float(i) * dx for i in range(NX)])

# indexing='ij' so that x varies along axis 0 and y along axis 1,
# matching the axis=i convention used in divergence()
x, y = np.meshgrid(x, y, indexing='ij', sparse=False)

# Same test field as Matlab's divergence example
Fx = np.cos(x + 2 * y)
Fy = np.sin(x - 2 * y)

F = [Fx, Fy]
g = divergence(F)

plt.pcolormesh(x, y, g)
plt.colorbar()
plt.savefig('Div' + str(NY) + '.png', format='png')
plt.show()
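As a quick extra check, the result can be compared against the analytic divergence, -sin(x + 2y) - 2cos(x - 2y). Note that np.gradient assumes unit spacing when none is passed, so g has to be divided by the (uniform) grid step:

g_exact = -np.sin(x + 2*y) - 2*np.cos(x - 2*y)
print(np.abs(g / dx - g_exact).max())   # small; dominated by finite-difference error at the boundary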

[Figure: pcolormesh plot of the computed divergence g over the (x, y) grid]

Answered By: Paul Chen

Answer #6:

http://www.scipy.org/Topical_Software#head-85e01502b533f2477ab8c643b38ee92706a377bb

Even if it doesn't have divergence hand-packaged for you, divergence is pretty simple, and the derivative tools scipy gives you (linked above) provide about 90% of the code, prepackaged in a nice, efficient way.
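For instance, a sketch along those lines using scipy.ndimage (derivative-of-Gaussian filters, which differentiate and smooth at the same time; unit grid spacing assumed):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_divergence(f, sigma=1.0):
    """Divergence of the vector field f (a list of ndarrays, one per
    component): differentiate each component along its own axis with a
    derivative-of-Gaussian filter, then sum the results."""
    return sum(gaussian_filter1d(fi, sigma=sigma, order=1, axis=i)
               for i, fi in enumerate(f))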

Answered By: Slater Victoroff

Answer #7:

Matlab includes divergence as a built-in function, but NumPy does not. This is the sort of thing that may be worth contributing to pylab, an effort to create a viable open-source alternative to Matlab.

http://wiki.scipy.org/PyLab

Edit: Now called http://www.scipy.org/stackspec.html

Answered By: Joshua Cook

Answer #8:

As far as I can tell, the answer is that there is no native divergence function in NumPy. The best approach is therefore to compute it yourself: sum the partial derivative of each component of the field along its own axis, as the answers above do.

Answered By: Joshua Cook
