numpy array dtype is coming as int32 by default in a windows 10 64 bit machine

Question:

I have installed Anaconda 3 64 bit on my laptop and written the following code in Spyder:

import numpy.distutils.system_info as sysinfo
import numpy as np
import platform

sysinfo.platform_bits
platform.architecture()

my_array = np.array([0,1,2,3])
my_array.dtype

Output of these commands shows the following:

sysinfo.platform_bits
Out[31]: 64

platform.architecture()
Out[32]: ('64bit', 'WindowsPE')

my_array.dtype
Out[33]: dtype('int32')

My question is: even though my system is 64-bit, why is the default array dtype int32 instead of int64?

Any help is appreciated.

Asked By: Prana


Answer #1:

The default integer type np.int_ is the C long.

But the C long is 32 bits on 64-bit Windows.

This is a quirk of the win64 platform, which uses the LLP64 data model: long stays 32-bit even though pointers are 64-bit.
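A quick way to check what the default integer maps to on your own machine (the result is platform- and NumPy-version-dependent, so no single output applies everywhere):

import numpy as np

# np.int_ is NumPy's default integer type; on older NumPy releases it
# tracks the C compiler's long, which is 4 bytes on 64-bit Windows and
# 8 bytes on 64-bit Linux/macOS.
default_int = np.dtype(np.int_)
print(default_int)           # e.g. int32 on win64, int64 on linux64
print(default_int.itemsize)  # 4 or 8, depending on platform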

Answer #2:

In Microsoft C, even on a 64-bit system, the size of the long int data type is 32 bits. NumPy inherits its default integer size from the C compiler's long int.
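This compiler-level difference can be observed from Python itself via the standard ctypes module (the printed size reflects the platform's C ABI: 4 on 64-bit Windows, 8 on 64-bit Linux/macOS):

import ctypes

# sizeof(long) in the platform C ABI: 4 on 64-bit Windows (LLP64),
# 8 on 64-bit Linux/macOS (LP64).
long_size = ctypes.sizeof(ctypes.c_long)
print(long_size)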

Answered By: Warren Weckesser

Answer #3:

The original poster, Prana, asked a very good question: "Why is the integer default set to 32-bit, on a 64-bit machine?"

As near as I can tell, the short answer is: "Because it was designed wrong."
It seems obvious that a 64-bit machine should define the default integer in any associated interpreter as 64-bit. But of course, the two answers above explain why this is not the case. Things are now different, and so I offer this update.

What I notice is that on both CentOS 7.4 Linux and MacOS 10.10.5 (the new and the old), running Python 2.7.14 with NumPy 1.14.0 (as at January 2018), the default integer is now defined as 64-bit. (The "my_array.dtype" in the initial example would now report "dtype('int64')" on both platforms.)

Using 32-bit integers as the default integer in any interpreter can result in very squirrelly results if you are doing integer math, as this question pointed out:

Using numpy to square value gives negative number

It appears now that Python and Numpy have been updated and revised (corrected, one might argue), so that in order to replicate the problem encountered as described in the above question, you have to explicitly define the Numpy array as int32.

In Python, on both platforms now, default integer looks to be int64. This code runs the same on both platforms (CentOS-7.4 and MacOSX 10.10.5):

>>> import numpy as np
>>> tlist = [1, 2, 47852]
>>> t_array = np.asarray(tlist)
>>> t_array.dtype
dtype('int64')
>>> print t_array ** 2

[ 1 4 2289813904]

But if we make the t_array a 32-bit integer, one gets the following, because of the integer calculation rolling over the sign bit in the 32-bit word.

>>> t_array32 = np.asarray(tlist, dtype=np.int32)
>>> t_array32.dtype
dtype('int32')
>>> print t_array32 ** 2

[ 1 4 -2005153392]

The reason for using int32 is, of course, efficiency. There are some situations (such as using TensorFlow or other neural-network machine-learning tools) where you want to use 32-bit representations (mostly float, of course), because the speed gains over 64-bit floats can be quite significant.
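To make the efficiency point concrete, here is a small sketch comparing the memory footprint of the two integer dtypes (the byte counts follow directly from 4 vs. 8 bytes per element):

import numpy as np

# One million elements in each dtype.
a32 = np.arange(1_000_000, dtype=np.int32)
a64 = np.arange(1_000_000, dtype=np.int64)
print(a32.nbytes)  # 4000000 — 4 bytes per element
print(a64.nbytes)  # 8000000 — twice the memory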

Answered By: gemesyscanada

Answer #4:

You can explicitly cast the array to the needed data type, like so:

int64_array = int32_array.astype(np.int64)

Answered By: Misa
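Alternatively (a small sketch, not from the original answers), you can request the wider dtype at construction time and skip the cast entirely, which also sidesteps the 32-bit overflow demonstrated in Answer #3:

import numpy as np

# Request 64-bit integers up front so squaring large values
# does not wrap around the 32-bit sign bit.
t_array64 = np.array([1, 2, 47852], dtype=np.int64)
print(t_array64.dtype)  # int64
print(t_array64 ** 2)   # squares computed in 64-bit, no overflow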
