Introduction

My Christmas Gift was about creating nice images of the Mandelbrot set. A comment on reddit made me write this sequel: it suggested that I should use a vectorized version of the code rather than the sequential one I was using. I take this excellent suggestion as an excuse to review several ways of computing the Mandelbrot set in Python using vectorized code and gpu computing. I will specifically have a look at Numpy, NumExpr, Numba, Cython, TensorFlow, PyOpenCl, and PyCUDA, and I will compare these to C code. All timings, except for TensorFlow, are measured using Python 3.5 provided by Anaconda.

Let me first define the benchmark I will be using throughout this post. You can skip this explanation and go directly to the code if you wish. Each point (x,y) in the plane can be interpreted as a complex number with real part x and imaginary part y. Let c be such a point. We define a series of complex numbers for it:

z_0 = 0

z_1 = c

z_2 = z_1^2 + c

...

z_{n+1} = z_n^2 + c

If the magnitude |z_n| of these complex numbers stays bounded as n tends to infinity, then c belongs to the Mandelbrot set.

In order to compute images, people usually compute the series until either |z_n| exceeds a given horizon value, say 2, or n reaches a maximum number of iterations. In both cases we store n. In the latter case, we assume that the point belongs to the Mandelbrot set.
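The escape-time rule above can be sketched for a single point. This is an illustrative snippet of mine (the function name is not from the post's code):

```python
def escape_time(c, maxiter, horizon=2.0):
    """Iteration at which |z_n| first exceeds the horizon,
    or maxiter if the series stays bounded within the budget."""
    z = 0
    for n in range(maxiter):
        if abs(z) > horizon:
            return n
        z = z*z + c
    return maxiter

print(escape_time(0, 80))   # 80: z stays at 0, so c = 0 is in the set
print(escape_time(1, 80))   # 3: the series 0, 1, 2, 5, ... escapes quickly
```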

In order to benchmark the various versions of the code I will use the time needed to compute the pixel values for two 1M pixel images. The first image is obtained with 80 iterations, while the second one requires a much larger number of iterations, namely 2048. We only review how to compute pixel values here. My Christmas Gift contains the code to turn these pixel values into nice images.

The full Mandelbrot set

(click on the image to see the 1M pixel image)

Computed with:

-2.0 <= x <= 0.5

-1.25 <= y <= 1.25

80 iterations max

A detail magnified 50,000 times.

(click on the image to see the 1M pixel image)

Computed with:

-0.74877 <= x <= -0.74872

0.065053 <= y <= 0.065103

2048 iterations max

All the Python code I am using, except for TensorFlow, is available in a notebook on github and on nbviewer.

Naive Python

Let's first establish a baseline with a naive code. I took the code from Julia benchmarks against Python.

def mandelbrot(z, maxiter):
    c = z
    for n in range(maxiter):
        if abs(z) > 2:
            return n
        z = z*z + c
    return maxiter

def mandelbrot_set(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width)
    r2 = np.linspace(ymin, ymax, height)
    return (r1, r2, [mandelbrot(complex(r, i), maxiter) for r in r1 for i in r2])

Timing it on my laptop for our two images with

%timeit mandelbrot_set(-2.0, 0.5, -1.25, 1.25, 1000, 1000, 80)
%timeit mandelbrot_set(-0.74877, -0.74872, 0.06505, 0.06510, 1000, 1000, 2048)

yields:

1 loops, best of 3: 9.21 s per loop

1 loops, best of 3: 3min 11s per loop

Quite slow, isn't it?

Numpy With Loops

The above code uses list comprehensions. Can we speed up the code by using Numpy arrays? Here is a code that loops over arrays instead of using lists.

import numpy as np

def mandelbrot(c, maxiter):
    z = c
    for n in range(maxiter):
        if abs(z) > 2:
            return n
        z = z*z + c
    return 0

def mandelbrot_set(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width)
    r2 = np.linspace(ymin, ymax, height)
    n3 = np.empty((width, height))
    for i in range(width):
        for j in range(height):
            n3[i, j] = mandelbrot(r1[i] + 1j*r2[j], maxiter)
    return (r1, r2, n3)

Timing it yields

1 loops, best of 3: 19.9 s per loop

1 loops, best of 3: 4min 57s per loop

This is slower than using lists! It reinforces what I already explained in Python is Not C: looping over arrays is perhaps the slowest way of using Python. One way to speed up such code is to use a compiler for Python. We'll have a look at two of them, Numba and Cython.
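One way to see why such loops are slow: every element access on a Numpy array allocates a fresh Numpy scalar object, whereas a list hands back the stored Python object directly. A small illustrative check (mine, not from the post):

```python
import numpy as np

lst = [1, 2, 3]
arr = np.array(lst, dtype=np.int64)

# Indexing a list returns the stored Python int as-is...
print(type(lst[0]))   # <class 'int'>
# ...while indexing the array boxes the element into a NumPy scalar,
# a new object created on every single access inside a loop.
print(type(arr[0]))   # <class 'numpy.int64'>
```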

Numba

Numba, a recent just-in-time (jit) compiler for Python, can do marvels on C-like code with Numpy arrays. In order to use it we simply need to import Numba and add a decorator to the functions we want to compile. My code becomes:

from numba import jit

@jit
def mandelbrot(c, maxiter):
    z = c
    for n in range(maxiter):
        if abs(z) > 2:
            return n
        z = z*z + c
    return 0

@jit
def mandelbrot_set(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width)
    r2 = np.linspace(ymin, ymax, height)
    n3 = np.empty((width, height))
    for i in range(width):
        for j in range(height):
            n3[i, j] = mandelbrot(r1[i] + 1j*r2[j], maxiter)
    return (r1, r2, n3)

Timing it yields

1 loops, best of 3: 166 ms per loop

1 loops, best of 3: 3.96 s per loop

This is more than 100 times faster than the above code! It is also faster than the corresponding Julia code from the same benchmark set, see How To Make Python Run As Fast As Julia.

Further Optimization

After writing the above I read the Numba documentation (shame on me, I should have done it before, but it is so easy to use...) and I saw a Mandelbrot set computation example. It was faster than my code. I quickly found that the difference comes from one single instruction. One can make the code run faster by avoiding the square root computation hidden in np.abs(z) > 2. We get an equivalent condition by squaring both sides, which yields:

z.real * z.real + z.imag * z.imag > 4
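A quick sanity check, pure Python and purely illustrative, that the squared condition agrees with the original one:

```python
# Both tests must agree on points inside, outside, and on the circle |z| = 2.
points = [complex(0.3, 0.5), complex(1.5, 1.5), complex(-2.1, 0.0), complex(0.0, 2.0)]
for z in points:
    no_sqrt = z.real*z.real + z.imag*z.imag > 4.0   # no square root needed
    with_sqrt = abs(z) > 2                          # abs() takes a square root
    assert no_sqrt == with_sqrt
```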

We can do even better, by breaking the complex number into its constituents, i.e. into two floating point numbers. The code becomes:

@jit
def mandelbrot(creal, cimag, maxiter):
    real = creal
    imag = cimag
    for n in range(maxiter):
        real2 = real*real
        imag2 = imag*imag
        if real2 + imag2 > 4.0:
            return n
        imag = 2*real*imag + cimag
        real = real2 - imag2 + creal
    return 0

@jit
def mandelbrot_set4(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width)
    r2 = np.linspace(ymin, ymax, height)
    n3 = np.empty((width, height))
    for i in range(width):
        for j in range(height):
            n3[i, j] = mandelbrot(r1[i], r2[j], maxiter)
    return (r1, r2, n3)

Timing it yields:

1 loops, best of 3: 101 ms per loop

1 loops, best of 3: 2.43 s per loop

This is a nice improvement. We are more than 100 times faster than the non-compiled code. This is even a bit faster than C code (see the appendix)! The difference may be due to the machine, which is not a proper benchmarking platform when it comes to a few milliseconds of difference. We can safely say that Numba produces code that is as fast as C code.

My version is also slightly faster than the one in the Numba documentation. When I time their version I get

1 loops, best of 3: 113 ms per loop

1 loops, best of 3: 2.72 s per loop

We'll see below that we can do even better with vectorized operations.

Cython

Another way to speed up Python code is to use Cython with static typing. The syntax to declare types is similar to C syntax, but the resulting code is still a Python function we can call from within the Python interpreter. I am using a notebook, hence I first load Cython into it with:

%load_ext cython

Then I add types to the code, and compilation directives that speed up array operations. The code becomes:

%%cython
import cython
import numpy as np

cdef int mandelbrot(double creal, double cimag, int maxiter):
    cdef:
        double real2, imag2
        double real = creal, imag = cimag
        int n
    for n in range(maxiter):
        real2 = real*real
        imag2 = imag*imag
        if real2 + imag2 > 4.0:
            return n
        imag = 2*real*imag + cimag
        real = real2 - imag2 + creal
    return 0

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef mandelbrot_set(double xmin, double xmax, double ymin, double ymax,
                     int width, int height, int maxiter):
    cdef:
        double[:] r1 = np.linspace(xmin, xmax, width)
        double[:] r2 = np.linspace(ymin, ymax, height)
        int[:,:] n3 = np.empty((width, height), np.int)
        int i, j
    for i in range(width):
        for j in range(height):
            n3[i, j] = mandelbrot(r1[i], r2[j], maxiter)
    return (r1, r2, n3)

Timing it yields

1 loops, best of 3: 109 ms per loop

1 loops, best of 3: 2.61 s per loop

This is a tad slower than Numba, and similar to C code. Note that the Numba code is closer to Python code.

Numpy Array Operations

Vectorized operations are a way to write code without explicit loops over arrays. The Numpy package provides vectorized operations that extend traditional arithmetic operations to whole arrays. For instance, if z is an array, then z*z returns the array whose elements are the square of the elements of z. Numpy provides many other types of array operations. For instance if done is an array containing truth values, then z[done] = 0 assigns 0 to all the elements of z where the corresponding element of done is True.
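These two building blocks, elementwise arithmetic and boolean mask assignment, can be illustrated on a tiny array (an example of mine, not the post's code):

```python
import numpy as np

z = np.array([1+1j, 2+0j, 0+3j], dtype=np.complex64)
print(z*z)   # elementwise square: [0+2j, 4+0j, -9+0j]

done = np.array([False, True, True])
z[done] = 0  # assigns 0 only where done is True
print(z)     # [1+1j, 0+0j, 0+0j]
```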

There are various ways to vectorize our computation using Numpy. Here is the best I could find. It is about 3 times faster than the one available on the PyOpenCl github.

def mandelbrot_numpy(c, maxiter):
    output = np.zeros(c.shape)
    z = np.zeros(c.shape, np.complex64)
    for it in range(maxiter):
        notdone = np.less(z.real*z.real + z.imag*z.imag, 4.0)
        output[notdone] = it
        z[notdone] = z[notdone]**2 + c[notdone]
    output[output == maxiter-1] = 0
    return output

Since this function takes an array as its first argument, we also need to change the calling code. It is now:

def mandelbrot_set2(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width, dtype=np.float32)
    r2 = np.linspace(ymin, ymax, height, dtype=np.float32)
    c = r1 + r2[:, None]*1j
    n3 = mandelbrot_numpy(c, maxiter)
    return (r1, r2, n3.T)

Timing it for our two images yields

1 loops, best of 3: 1.07 s per loop

1 loops, best of 3: 29.7 s per loop

This is way faster than the non compiled sequential code, but it is slower than compiled code. Let's see how we might improve it further.

Numexpr

One reason the above code is not as efficient as it could be is the creation of temporary arrays to hold intermediate computation results. For instance the expression

z[notdone] = z[notdone]**2 + c[notdone]

will create a temporary array to hold

z[notdone]**2

One way to avoid these temporary arrays is to use the NumExpr package. Our code becomes:

import numexpr as ne

def mandelbrot_numpy(c, maxiter):
    output = np.zeros(c.shape)
    z = np.zeros(c.shape, np.complex64)
    for it in range(maxiter):
        notdone = ne.evaluate('z.real*z.real + z.imag*z.imag < 4.0')
        output[notdone] = it
        z = ne.evaluate('where(notdone, z**2+c, z)')
    output[output == maxiter-1] = 0
    return output

Timing it for our two images yields

1 loops, best of 3: 686 ms per loop

1 loops, best of 3: 20.4 s per loop

This is a nice 30% speedup.

Numba Vectorize

Another way to avoid intermediate arrays is to use Numba vectorize. We will use it where we used NumExpr. Here is the code.

from numba import vectorize, complex64, boolean, jit

@vectorize([boolean(complex64)])
def f(z):
    return (z.real*z.real + z.imag*z.imag) < 4.0

@vectorize([complex64(complex64, complex64)])
def g(z, c):
    return z*z + c

@jit
def mandelbrot_numpy(c, maxiter):
    output = np.zeros(c.shape, np.int)
    z = np.empty(c.shape, np.complex64)
    for it in range(maxiter):
        notdone = f(z)
        output[notdone] = it
        z[notdone] = g(z[notdone], c[notdone])
    output[output == maxiter-1] = 0
    return output

Timing it yields:

1 loops, best of 3: 555 ms per loop

1 loops, best of 3: 17.5 s per loop

This is even faster, but it is still slower than the sequential code compiled with Numba. Does it mean that we should forget about vectorized operations? Maybe not.

TensorFlow

Numpy is not the only tool we can use for vectorizing our code; here is another one. TensorFlow is a recent open source library from Google for machine learning, and especially neural networks (deep learning). One of its tutorials shows how it can be used to compute the Mandelbrot set. It was really tempting to include it in our benchmark.

TensorFlow comes in two flavors: one that uses cpus, and one that uses gpus. The latter uses the graphics processing chip that you may have on your computer (some don't have such a chip, but most have at least one). My laptop has an NVIDIA Quadro 1000M. It only supports CUDA Compute 2.1, while TensorFlow requires CUDA Compute 3.5 or higher. I therefore can't test the gpu version of TensorFlow.

Here is my test with the cpu version. I ran it in a Docker container. The bulk of the code for computing the Mandelbrot set looks like:

import tensorflow as tf
sess = tf.InteractiveSession()

start = time.time()
Y, X = np.mgrid[ymin:ymax:(ymax-ymin)/1000, xmin:xmax:(xmax-xmin)/1000]
Z = X + 1j*Y
xs = tf.constant(Z.astype("complex64"))
zs = tf.Variable(xs)
ns = tf.Variable(tf.zeros_like(xs, "float32"))
tf.initialize_all_variables().run()
zs_ = zs*zs + xs
not_diverged = tf.complex_abs(zs_) < 4
step = tf.group(
    zs.assign(zs_),
    ns.assign_add(tf.cast(not_diverged, "float32"))
)
for i in range(maxiter):
    step.run()

This code is quite different from the Numpy one. Here, the code constructs a computation graph. First, there are a few constant and variable definitions. Then, a few computations are defined (zs_, not_diverged). Last, we define an iteration to be the execution of these computations, followed by some assignments.

Timing it with 2 cores yields:

5.22 seconds

31.54 seconds

Clearly, the setup time is quite high, but the time per iteration is comparable to that of Numpy. The TensorFlow code can probably be optimized a bit, for instance by avoiding the square root in the complex_abs call.

Note that we used the cpu version of TensorFlow, whereas its value clearly comes from its gpu version.

The good news is that there are a couple of Python libraries that can help use gpus: PyOpenCl and PyCUDA. [Side comment:] there is always a Python library that can help, just look for it. [end of side comment]

GPU and PyOpenCl

Let's use the PyOpenCl library first. Installing it on Windows can be tricky; you can look at my Installing PyOpenCl On Anaconda For Windows post for help.

I am in no way a PyOpenCl expert, and I will start from the code provided on the PyOpenCl github to compute the Mandelbrot set. I modified it in two ways:

I moved the context creation out of the image computing code, in order to share it between several image computations. I also made the context creation interactive, so that we can easily select between devices (Intel or NVIDIA in my case).

I added a break statement, as there is no need to perform more computations than necessary.

This makes the code at least twice as fast.

The core of the resulting code is:

import pyopencl as cl

ctx = cl.create_some_context(interactive=True)

def mandelbrot_gpu(q, maxiter):
    global ctx
    queue = cl.CommandQueue(ctx)
    output = np.empty(q.shape, dtype=np.uint16)
    prg = cl.Program(ctx, """
    #pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable
    __kernel void mandelbrot(__global float2 *q,
                             __global ushort *output, ushort const maxiter)
    {
        int gid = get_global_id(0);
        float real = q[gid].x;
        float imag = q[gid].y;
        output[gid] = 0;
        for(int curiter = 0; curiter < maxiter; curiter++) {
            float real2 = real*real, imag2 = imag*imag;
            if (real2 + imag2 > 4.0f){
                output[gid] = curiter;
                return;
            }
            imag = 2*real*imag + q[gid].y;
            real = real2 - imag2 + q[gid].x;
        }
    }
    """).build()
    mf = cl.mem_flags
    q_opencl = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=q)
    output_opencl = cl.Buffer(ctx, mf.WRITE_ONLY, output.nbytes)
    prg.mandelbrot(queue, output.shape, None, q_opencl,
                   output_opencl, np.uint16(maxiter))
    cl.enqueue_copy(queue, output, output_opencl).wait()
    return output

The code is yet another completely different beast. It is basically split in two. The first piece is the C string embedded in the Python code; it is compiled and run on the selected processing unit. The rest of the code sets up the input and output for this C code, and runs it. The calling code is similar to that of the vectorized code, but we first turn the input array into a one dimensional array:

def mandelbrot_set3(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width, dtype=np.float32)
    r2 = np.linspace(ymin, ymax, height, dtype=np.float32)
    c = r1 + r2[:, None]*1j
    c = np.ravel(c)
    n3 = mandelbrot_gpu(c, maxiter)
    n3 = n3.reshape((width, height))
    return (r1, r2, n3.T)
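The flatten/compute/reshape round trip can be sketched with a dummy per-pixel computation standing in for the kernel (illustrative only; width and height are kept equal, as in the benchmark):

```python
import numpy as np

width = height = 4
r1 = np.linspace(-2.0, 0.5, width, dtype=np.float32)
r2 = np.linspace(-1.25, 1.25, height, dtype=np.float32)
c = np.ravel(r1 + r2[:, None]*1j)    # one-dimensional buffer of width*height pixels
out = np.abs(c) < 2                  # stand-in for the per-pixel kernel result
n3 = out.reshape((width, height)).T  # back to a two-dimensional image
print(n3.shape)   # (4, 4)
```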

PyOpenCl can run on cpus and gpus. I ran both on my laptop and here are the timings I got.

With my Intel cpu:

10 loops, best of 3: 22 ms per loop

1 loops, best of 3: 181 ms per loop

With my NVIDIA gpu:

10 loops, best of 3: 25 ms per loop

1 loops, best of 3: 212 ms per loop

The results are impressive. The timings are way faster than anything else we tried. The gpu isn't faster, probably because my NVIDIA chip is rather old. My cpu has 4 cores: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz. With hyper-threading it provides up to 8 full speed threads.

It is also interesting that I can run the PyOpenCl code in the same notebook as the rest of my code. I actually used the imaging code from My Christmas Gift to check that all the codes I tried compute the right image...

Interestingly, we can do slightly better on gpu by avoiding redundant computation. We simply need to replace the C code with this.

prg = cl.Program(ctx, """
#pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable
__kernel void mandelbrot(__global float2 *q,
                         __global ushort *output, ushort const maxiter)
{
    int gid = get_global_id(0);
    float nreal, real = 0;
    float imag = 0;
    output[gid] = 0;
    for(int curiter = 0; curiter < maxiter; curiter++) {
        nreal = real*real - imag*imag + q[gid].x;
        imag = 2*real*imag + q[gid].y;
        real = nreal;
        if (real*real + imag*imag > 4.0f){
            output[gid] = curiter;
            break;
        }
    }
}
""").build()

Timing it with my Intel cpu:

10 loops, best of 3: 25.6 ms per loop

1 loops, best of 3: 344 ms per loop

And my NVIDIA device:

10 loops, best of 3: 23.9 ms per loop

10 loops, best of 3: 187 ms per loop

Interestingly, the new code is way slower on the cpu, but it gets faster on the gpu.

PyCUDA

PyOpenCl can be used to run code on a variety of platforms, including Intel, AMD, NVIDIA, and ATI chips. If we target NVIDIA chips only, then we can use PyCUDA. I provide instructions to install it on Windows and Anaconda here.

The code is similar to that of PyOpenCl: there is C++ code that will be executed on the NVIDIA chip, and Python code to compile it, execute it, and get the results back.

Here is the code I am using. It is 6 times faster than the one found in the PyCuda documentation.

import pycuda.driver as drv
import pycuda.tools
import pycuda.autoinit
from pycuda.compiler import SourceModule
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

complex_gpu = ElementwiseKernel(
    "pycuda::complex<float> *q, int *output, int maxiter",
    """
    {
        float nreal, real = 0;
        float imag = 0;
        output[i] = 0;
        for(int curiter = 0; curiter < maxiter; curiter++) {
            float real2 = real*real;
            float imag2 = imag*imag;
            nreal = real2 - imag2 + q[i].real();
            imag = 2*real*imag + q[i].imag();
            real = nreal;
            if (real2 + imag2 > 4.0f){
                output[i] = curiter;
                break;
            };
        };
    }
    """,
    "complex5",
    preamble="#include <pycuda-complex.hpp>",)

def mandelbrot_gpu(c, maxiter):
    q_gpu = gpuarray.to_gpu(c.astype(np.complex64))
    iterations_gpu = gpuarray.to_gpu(np.empty(c.shape, dtype=np.int))
    complex_gpu(q_gpu, iterations_gpu, maxiter)
    return iterations_gpu.get()

Timing it yields:

10 loops, best of 3: 21.9 ms per loop

1 loops, best of 3: 184 ms per loop

This is similar to PyOpenCl on gpu. No surprise, as the same driver and the same device are used in this case.

Numba Guvectorize

Numba 0.23.0 provides a new argument that can be used to parallelize code: we simply have to add target='parallel' where we want to use it. Adding it to each of the vectorize calls above does not help. I think this is because parallelism incurs some overhead that is only offset when we parallelize more significant chunks of code. Let's parallelize the top loop, the one over the image itself. For this we will use the function guvectorize(). This function accepts arrays as arguments, and these arrays can be of different shapes. For instance, we can pass a scalar as a one element array. Starting from the sequential code, we get this code:

from numba import jit, vectorize, guvectorize, float64, complex64, int32, float32

@jit(int32(complex64, int32))
def mandelbrot(c, maxiter):
    nreal = 0
    real = 0
    imag = 0
    for n in range(maxiter):
        nreal = real*real - imag*imag + c.real
        imag = 2*real*imag + c.imag
        real = nreal
        if real*real + imag*imag > 4.0:
            return n
    return 0

@guvectorize([(complex64[:], int32[:], int32[:])], '(n),()->(n)', target='parallel')
def mandelbrot_numpy(c, maxit, output):
    maxiter = maxit[0]
    for i in range(c.shape[0]):
        output[i] = mandelbrot(c[i], maxiter)

def mandelbrot_set2(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width, dtype=np.float32)
    r2 = np.linspace(ymin, ymax, height, dtype=np.float32)
    c = r1 + r2[:, None]*1j
    n3 = mandelbrot_numpy(c, maxiter)
    return (r1, r2, n3.T)

Timing it yields:

10 loops, best of 3: 44.7 ms per loop

1 loops, best of 3: 798 ms per loop

Win! This is way faster than anything we've tried so far.

Guvectorize can also target CUDA. The code is similar; we change the target to 'cuda'. We also had to explicitly create an array for passing maxiter, as there seems to be a bug with passing scalars at this point. You may have to install the cudatoolkit package to get it running. I installed it with conda install cudatoolkit.

@guvectorize([(complex64[:], int32[:], int32[:])], '(n),(n)->(n)', target='cuda')
def mandelbrot_numpy(c, maxit, output):
    maxiter = maxit[0]
    for i in range(c.shape[0]):
        creal = c[i].real
        cimag = c[i].imag
        real = creal
        imag = cimag
        output[i] = 0
        for n in range(maxiter):
            real2 = real*real
            imag2 = imag*imag
            if real2 + imag2 > 4.0:
                output[i] = n
                break
            imag = 2*real*imag + cimag
            real = real2 - imag2 + creal

def mandelbrot_set2(xmin, xmax, ymin, ymax, width, height, maxiter):
    r1 = np.linspace(xmin, xmax, width, dtype=np.float32)
    r2 = np.linspace(ymin, ymax, height, dtype=np.float32)
    c = r1 + r2[:, None]*1j
    n3 = np.empty(c.shape, int)
    maxit = np.ones(c.shape, int) * maxiter
    n3 = mandelbrot_numpy(c, maxit)
    return (r1, r2, n3.T)

Timing it yields

10 loops, best of 3: 53.8 ms per loop

1 loops, best of 3: 906 ms per loop

This is similar to what we get with OpenCl on my machine: using my Intel cpu is slightly better than using my NVIDIA gpu. Results could be different on a different machine.

Takeaway

The table below recaps all the running times we got. The last line is a sequential C code described in the appendix.

| Method | Type | Image 1, 80 iterations (seconds) | Image 2, 2048 iterations (seconds) | Time per iteration (milliseconds) |
|---|---|---|---|---|
| Naive | Sequential | 8.87 | 191 | 326 |
| Numpy | Sequential | 19.9 | 297 | 496 |
| Numba | Sequential | 0.101 | 2.43 | 4.2 |
| Cython | Sequential | 0.109 | 2.61 | 4.5 |
| Numpy Array | Vectorized | 1.070 | 29.7 | 14.5 |
| Numpy Numexpr | Vectorized | 0.686 | 20.4 | 10.0 |
| Numpy Numba Vectorize | Vectorized | 0.555 | 17.5 | 8.6 |
| TensorFlow cpu | Vectorized | 5.220 | 31.5 | 13.4 |
| PyOpenCl cpu | Parallel | 0.022 | 0.181 | 0.28 |
| PyOpenCl gpu | Parallel | 0.024 | 0.187 | 0.29 |
| PyCUDA | Parallel | 0.022 | 0.184 | 0.29 |
| Numba guvectorize parallel | Parallel | 0.044 | 0.798 | 1.3 |
| Numba guvectorize CUDA | Parallel | 0.054 | 0.906 | 1.5 |
| C | Sequential | 0.104 | 2.60 | 4.5 |

When looking at overall running times, PyOpenCl and PyCUDA are the fastest. They are about 5 times faster per iteration than Numba guvectorize(). This is really significant, but it requires writing C or C++ code in addition to Python. If we want to stick to Python code, then the sequential version parallelized with Numba guvectorize() is the clear winner. What is also quite impressive is that Numba sequential code is as fast as C code, if not faster. The time difference may come from the back end compiler, LLVM for Numba versus Microsoft Visual C++ for Cython and C, but I may be wrong.



We added a column for the time it takes to perform one iteration. We are not comparing apples to apples here. Indeed, sequential codes and gpu codes stop computation as soon as the complex number's magnitude exceeds the horizon, while the vectorized codes systematically iterate until maxiter is reached. The average number of iterations for the sequential code is 23.5 for the first image instead of 80, and 582 for the second image instead of 2048. Therefore, sequential codes save a lot of computation compared to vectorized codes, and the overall time isn't a good indication of the average time spent per iteration. We compute the time per iteration via

(time2 - time1)/(2048 - 80) for vectorized codes

(time2 - time1)/(582 - 23.5) for sequential and parallel codes

where time1 is the time for the first image, and time2 the time for the second image, assuming the time per iteration is constant.
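As a check, the arithmetic behind the last column can be reproduced directly for two rows of the table (Numpy vectorized and Numba sequential; times in seconds):

```python
# Vectorized codes always run the full iteration budget (80 and 2048).
numpy_per_iter = (29.7 - 1.07) / (2048 - 80)
# Sequential codes bail out early, so the average counts (23.5 and 582) apply.
numba_per_iter = (2.43 - 0.101) / (582 - 23.5)
print(round(numpy_per_iter * 1000, 1))   # 14.5 milliseconds
print(round(numba_per_iter * 1000, 1))   # 4.2 milliseconds
```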

Let me close with some warnings. Running the above code on another machine may yield different results. In particular, I am using a rather old NVIDIA chip on a laptop. Its cpu is relatively good in comparison. A more recent machine with a powerful NVIDIA chip may run gpu code way faster.

There are things I haven't tried. The main missing one is Pypy, which is a faster Python. I haven't benchmarked it because I cannot use Pypy for my day to day job. Indeed, I rely on Python scientific stack for that (including scikit-learn, scipy and pandas). That stack does not run on Pypy. I know, this has nothing to do with this benchmark, but that is my excuse for not having pypy up and running on my machine.



Updated on January 4, 2016. Added improved code for sequential code with Numba and for Numpy array operations.

Updated on January 9, 2016. Improved code for Numpy with and without Numba vectorize. Added sections on NumExpr, Cython, and PyCUDA.

Updated on January 11, 2016. Added the timing code where relevant.

Updated on January 12, 2016. Added a section on C code. I also improved the gpu code by avoiding some recomputation. Lots of good comments on reddit here and here, leading to better code, and new ideas still to be tested. I will have a major update to this post, or a new one, after I have tried all of these. I particularly want to thank the following readers:

neuralyzer suggested to use the double float code in Cython as well, which leads to a 30% improvement. He/she also suggested to have a parallel version of the Cython code. Last, but not least, he/she pointed me to the --annotate option of the cython magic command.

pantsforbirds pointed to a similar exercise done with Matlab.

jellef suggested to have parallel code for Numba as well.

kasbah and efilon asked for a plain C implementation to compare to.

obfuscate asked for a public repo of my code.

DRNbw suggested to look at the python bindings for ArrayFire.

wildcarde815 suggested the intel python variant and numpymkl libraries.

kirbyfan64sos suggested Pythran.

Hanpari suggested NumbaPro.

dsijl suggested the nogil option for Numba. He/she also pointed out that NumbaPro is deprecated.

Update on January 13, 2016.

Loyalsol suggested some C code improvement that I propagated to the Python versions when it helped.

Update on January 18, 2016. Added a section on guvectorize().

Appendix: C code

Let's compare with C. Here is the code I am using. This code is faster than the C code used in the Julia benchmark. It is a sequential code, and one could speed it up via multi-threading.

I compile it using the same compiler used for Cython, i.e. Visual C++ 2013. It is compiled in release mode for the x64 target, and with all optimizations that were effective.

It uses all the tricks that were effective in Python, for instance breaking a complex number into two floats. Inlining the first function does not improve the running time.

int mandelbrot(double creal, double cimag, int maxiter) {
    double real = creal, imag = cimag;
    int n;
    for(n = 0; n < maxiter; ++n) {
        double real2 = real*real;
        double imag2 = imag*imag;
        if (real2 + imag2 > 4.0)
            return n;
        imag = 2*real*imag + cimag;
        real = real2 - imag2 + creal;
    }
    return 0;
}

int *mandelbrot_set(double xmin, double xmax,
                    double ymin, double ymax,
                    int width, int height,
                    int maxiter, int *output) {
    int i, j;
    double *xlin = (double *) malloc(width*sizeof(double));
    double *ylin = (double *) malloc(height*sizeof(double));
    double dx = (xmax - xmin)/width;
    double dy = (ymax - ymin)/height;
    for (i = 0; i < width; i++)
        xlin[i] = xmin + i * dx;
    for (j = 0; j < height; j++)
        ylin[j] = ymin + j * dy;
    for (i = 0; i < width; i++) {
        for (j = 0; j < height; j++) {
            output[i*height + j] = mandelbrot(xlin[i], ylin[j], maxiter);
        }
    }
    free(xlin);
    free(ylin);
    return output;
}

Code for timing it is the following:

time_t timer;
time(&timer);
int i;
for (i = 0; i < loops; i++) {
    int *output = (int *) malloc((width*height)*sizeof(int));
    output = mandelbrot_set(xmin, xmax, ymin, ymax,
                            width, height, maxiter, output);
    free(output);
}
total_time = difftime(time(0), timer) / loops;

The main() function reads command line arguments, then executes the above loop and prints the overall time. I needed to loop in order to get reliable timings. The timings for the two images are:

1000 loops, time spent 0.109 seconds

50 loops, time spent 2.6 seconds