NumPy 2.0.0 is the first major release since 2006. It is the result of 11 months of development since the last feature release and is the work of 212 contributors spread over 1078 pull requests. It contains a large number of exciting new features as well as changes to both the Python and C APIs.
This major release includes breaking changes that could not happen in a regular minor (feature) release - including an ABI break, changes to type promotion rules, and API changes which may not have been emitting deprecation warnings in 1.26.x. Key documents related to how to adapt to changes in NumPy 2.0, in addition to these release notes, include:
C API changes
The PyArray_CGT, PyArray_CLT, PyArray_CGE, PyArray_CLE, PyArray_CEQ, and PyArray_CNE macros have been removed. PyArray_MIN and PyArray_MAX have been moved from ndarraytypes.h to npy_math.h.
(gh-24258)
A C API for working with numpy.dtypes.StringDType arrays has been exposed. This includes functions for acquiring and releasing the mutexes which lock access to the string data, as well as for packing and unpacking UTF-8 bytestreams from array entries.
NPY_NTYPES has been renamed to NPY_NTYPES_LEGACY as it does not include the new NumPy built-in DTypes. In particular, the new string DType will likely not work correctly with code that handles legacy DTypes.
(gh-25347)
The C-API now only exports the static inline function versions of the array accessors (previously this depended on using “deprecated API”). While we discourage it, the struct fields can still be used directly.
(gh-25789)
NumPy now defines PyArray_Pack to set an individual memory address. Unlike PyArray_SETITEM, this function is equivalent to setting an individual array item and does not require a NumPy array input.
(gh-25954)
The ->f slot has been removed from PyArray_Descr. If you use this slot, replace accesses to it with PyDataType_GetArrFuncs (see its documentation and the NumPy 2.0 migration guide). In some cases, using other functions such as PyArray_GETITEM may be an alternative.
PyArray_GETITEM and PyArray_SETITEM now require the import of the NumPy API table to be used and are no longer defined in ndarraytypes.h.
(gh-25812)
Due to runtime dependencies, the definition of the functionality for accessing the dtype flags was moved from numpy/ndarraytypes.h and is only available after including numpy/ndarrayobject.h, as it requires import_array(). This includes PyDataType_FLAGCHK, PyDataType_REFCHK, and NPY_BEGIN_THREADS_DESCR.
The dtype flags on PyArray_Descr must now be accessed through the PyDataType_FLAGS inline function to be compatible with both 1.x and 2.x. This function is defined in npy_2_compat.h to allow backporting. Most or all users should use PyDataType_FLAGCHK, which is available on 1.x and does not require backporting. Cython users should use Cython 3; otherwise, access will go through Python unless they use PyDataType_FLAGCHK instead.
(gh-25816)
Datetime functionality exposed in the C API and Cython bindings
The functions NpyDatetime_ConvertDatetime64ToDatetimeStruct, NpyDatetime_ConvertDatetimeStructToDatetime64, NpyDatetime_ConvertPyDateTimeToDatetimeStruct, NpyDatetime_GetDatetimeISO8601StrLen, NpyDatetime_MakeISO8601Datetime, and NpyDatetime_ParseISO8601Datetime have been added to the C API to facilitate converting between strings, Python datetimes, and NumPy datetimes in external libraries.
(gh-21199)
Const correctness for the generalized ufunc C API
The NumPy C API's functions for constructing generalized ufuncs (PyUFunc_FromFuncAndData, PyUFunc_FromFuncAndDataAndSignature, PyUFunc_FromFuncAndDataAndSignatureAndIdentity) take types and data arguments that are not modified by NumPy's internals. Like the name and doc arguments, third-party Python extension modules are likely to supply these arguments from static constants. The types and data arguments are now const-correct: they are declared as const char *types and void *const *data, respectively. C code should not be affected, but C++ code may be.
(gh-23847)
Larger NPY_MAXDIMS and NPY_MAXARGS, NPY_RAVEL_AXIS introduced
NPY_MAXDIMS is now 64; you may want to review its use. It is usually used in a stack allocation, where the increase should be safe. However, we generally encourage removing any use of NPY_MAXDIMS and NPY_MAXARGS, to eventually allow removing the constraint completely.
For the conversion helpers and C-API functions mirroring Python ones such as take, NPY_MAXDIMS was used to mean axis=None. Such usage must be replaced with NPY_RAVEL_AXIS. See also Increased maximum number of dimensions.
(gh-25149)
NPY_MAXARGS not constant and PyArrayMultiIterObject size change
Since NPY_MAXARGS was increased, it is now a runtime constant and no longer a compile-time constant.
We expect almost no users to notice this. But if it was used for stack allocations, it must now be replaced with a custom constant, using NPY_MAXARGS as an additional runtime check.
sizeof(PyArrayMultiIterObject) no longer includes the full size of the object. We expect nobody to notice this change; it was necessary to avoid issues with Cython.
(gh-25271)
Required changes for custom legacy user dtypes
In order to improve our DTypes it is unfortunately necessary
to break the ABI, which requires some changes for dtypes registered
with PyArray_RegisterDataType
.
Please see the documentation of PyArray_RegisterDataType
for how
to adapt your code and achieve compatibility with both 1.x and 2.x.
(gh-25792)
New Public DType API
The C implementation of the NEP 42 DType API is now public. While the DType API
has shipped in NumPy for a few versions, it was only usable in sessions with a
special environment variable set. It is now possible to write custom DTypes
outside of NumPy using the new DType API and the normal import_array()
mechanism for importing the numpy C API.
See Custom Data Types for more details about the API. As always with a new feature, please report any bugs you run into implementing or using a new DType. It is likely that downstream C code that works with dtypes will need to be updated to work correctly with new DTypes.
(gh-25754)
New C-API import functions
We have now added PyArray_ImportNumPyAPI
and PyUFunc_ImportUFuncAPI
as static inline functions to import the NumPy C-API tables.
The new functions have two advantages over import_array and import_ufunc:
They check whether the import was already performed and are light-weight if so, allowing them to be added judiciously (although this is not preferable in most cases).
The old mechanisms were macros rather than functions and included a return statement.
The PyArray_ImportNumPyAPI()
function is included in npy_2_compat.h
for simpler backporting.
(gh-25866)
Structured dtype information access through functions
The dtype struct's fields c_metadata, names, fields, and subarray must now be accessed through new functions following the same names, such as PyDataType_NAMES. Direct access of the fields is not valid as they do not exist for all PyArray_Descr instances. The metadata field is kept, but the macro version should also be preferred.
(gh-25802)
Descriptor elsize and alignment access
Unless compiling only with NumPy 2 support, the elsize and alignment fields must now be accessed via PyDataType_ELSIZE, PyDataType_SET_ELSIZE, and PyDataType_ALIGNMENT. In cases where the descriptor is attached to an array, we advise using PyArray_ITEMSIZE, as it exists on all NumPy versions.
Please see The PyArray_Descr struct has been changed for more information.
(gh-25943)
New Features
np.add was extended to work with unicode and bytes dtypes.
A new bitwise_count function
This new function counts the number of 1-bits in a number. bitwise_count works on all the NumPy integer types and integer-like objects.
>>> a = np.array([2**i - 1 for i in range(16)])
>>> np.bitwise_count(a)
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
dtype=uint8)
(gh-19355)
macOS Accelerate support, including ILP64
Support for the updated Accelerate BLAS/LAPACK library, including ILP64 (64-bit integer) support, in macOS 13.3 has been added. This brings arm64 support, and significant performance improvements of up to 10x for commonly used linear algebra operations. When Accelerate is selected at build time, or if no explicit BLAS library selection is done, the 13.3+ version will automatically be used if available.
(gh-24053)
Binary wheels are also available. On macOS >=14.0, users who install NumPy from PyPI will get wheels built against Accelerate rather than OpenBLAS.
(gh-25255)
Option to use weights for quantile and percentile functions
A weights keyword is now available for quantile, percentile, nanquantile and nanpercentile. Only method="inverted_cdf" supports weights.
(gh-24254)
Improved CPU optimization tracking
A new tracer mechanism is available which enables tracking of the enabled targets for each optimized function (i.e., that uses hardware-specific SIMD instructions) in the NumPy library. With this enhancement, it becomes possible to precisely monitor the enabled CPU dispatch targets for the dispatched functions.
A new function named opt_func_info
has been added to the new namespace
numpy.lib.introspect
, offering this tracing capability. This function allows
you to retrieve information about the enabled targets based on function names
and data type signatures.
(gh-24420)
A new Meson backend for f2py
f2py
in compile mode (i.e. f2py -c
) now accepts the --backend meson
option. This is the default option for Python >=3.12. For older Python versions,
f2py
will still default to --backend distutils
.
To support this in realistic use-cases, in compile mode f2py
takes a
--dep
flag one or many times which maps to dependency()
calls in the
meson
backend, and does nothing in the distutils
backend.
There are no changes for users of f2py
only as a code generator, i.e. without -c
.
(gh-24532)
bind(c) support for f2py
Both functions and subroutines can be annotated with bind(c)
. f2py
will
handle both the correct type mapping, and preserve the unique label for other
C interfaces.
Note: bind(c, name = 'routine_name_other_than_fortran_routine')
is not
honored by the f2py
bindings by design, since bind(c)
with the name
is meant to guarantee only the same name in C and Fortran, not in Python and
Fortran.
(gh-24555)
A new strict option for several testing functions
The strict keyword is now available for assert_allclose, assert_equal, and assert_array_less. Setting strict=True will disable the broadcasting behaviour for scalars and ensure that the input arrays have the same data type.
Add np.core.umath.find and np.core.umath.rfind UFuncs
Two find and rfind UFuncs were added; they operate on unicode or byte strings and are used in np.char. They operate similarly to str.find and str.rfind.
(gh-24868)
diagonal and trace for numpy.linalg
numpy.linalg.diagonal and numpy.linalg.trace have been added, which are array API standard-compatible variants of numpy.diagonal and numpy.trace. They differ in the default axis selection which defines the 2-D sub-arrays.
(gh-24887)
New long and ulong dtypes
numpy.long and numpy.ulong have been added as NumPy integers mapping to C's long and unsigned long. Prior to NumPy 1.24, numpy.long was an alias to Python's int.
(gh-24922)
svdvals for numpy.linalg
numpy.linalg.svdvals has been added. It computes the singular values of (a stack of) matrices. Executing np.linalg.svdvals(x) is the same as calling np.linalg.svd(x, compute_uv=False, hermitian=False). This function is compatible with the array API standard.
(gh-24940)
A new isdtype function
numpy.isdtype was added to provide a canonical way to classify NumPy's dtypes in compliance with the array API standard.
(gh-25054)
A new astype function
numpy.astype was added to provide an array API standard-compatible alternative to the numpy.ndarray.astype method.
(gh-25079)
Array API compatible functions’ aliases
13 aliases for existing functions were added to improve compatibility with the array API standard:
Trigonometry: acos, acosh, asin, asinh, atan, atanh, atan2.
Bitwise: bitwise_left_shift, bitwise_invert, bitwise_right_shift.
Misc: concat, permute_dims, pow.
In numpy.linalg: tensordot, matmul.
(gh-25086)
New unique_* functions
The unique_all, unique_counts, unique_inverse, and unique_values functions have been added. They provide the functionality of unique with different sets of flags. They are array API standard-compatible, and because the number of arrays they return does not depend on the values of the input arguments, they are easier to target for JIT compilation.
(gh-25088)
Matrix transpose support for ndarrays
NumPy now offers support for calculating the matrix transpose of an array (or
stack of arrays). The matrix transpose is equivalent to swapping the last two
axes of an array. Both np.ndarray
and np.ma.MaskedArray
now expose a
.mT
attribute, and there is a matching new numpy.matrix_transpose
function.
(gh-23762)
Array API compatible functions for numpy.linalg
Six new functions and two aliases were added to improve compatibility with the array API standard for numpy.linalg.
A correction argument for var and std
A correction argument was added to var and std, which is an array API standard-compatible alternative to ddof. As both arguments serve a similar purpose, only one of them can be provided at the same time.
(gh-25169)
ndarray.device and ndarray.to_device
An ndarray.device attribute and ndarray.to_device method were added to numpy.ndarray for array API standard compatibility.
Additionally, device keyword-only arguments were added to: asarray, arange, empty, empty_like, eye, full, full_like, linspace, ones, ones_like, zeros, and zeros_like.
For all these new arguments, only device="cpu" is supported.
(gh-25233)
StringDType has been added to NumPy
We have added a new variable-width UTF-8 encoded string data type, implementing a “NumPy array of Python strings”, including support for a user-provided missing data sentinel. It is intended as a drop-in replacement for arrays of Python strings and missing data sentinels using the object dtype. See NEP 55 and the documentation for more details.
(gh-25347)
New keywords for cholesky and pinv
The upper and rtol keywords were added to numpy.linalg.cholesky and numpy.linalg.pinv, respectively, to improve array API standard compatibility. For pinv, if neither rcond nor rtol is specified, rcond's default is used. We plan to deprecate and remove rcond in the future.
(gh-25388)
New keywords for sort, argsort and linalg.matrix_rank
New keyword parameters were added to improve array API standard compatibility.
(gh-25437)
New numpy.strings namespace for string ufuncs
NumPy now implements some string operations as ufuncs. The old np.char
namespace is still available, and where possible the string manipulation
functions in that namespace have been updated to use the new ufuncs,
substantially improving their performance.
Where possible, we suggest updating code to use functions in np.strings
instead of np.char
. In the future we may deprecate np.char
in favor of
np.strings
.
(gh-25463)
numpy.fft support for different precisions and in-place calculations
The various FFT routines in numpy.fft
now do their calculations natively in
float, double, or long double precision, depending on the input precision,
instead of always calculating in double precision. Hence, the calculation will
now be less precise for single and more precise for long double precision.
The data type of the output array will now be adjusted accordingly.
Furthermore, all FFT routines have gained an out
argument that can be used
for in-place calculations.
(gh-25536)
configtool and pkg-config support
A new numpy-config
CLI script is available that can be queried for the
NumPy version and for compile flags needed to use the NumPy C API. This will
allow build systems to better support the use of NumPy as a dependency.
Also, a numpy.pc pkg-config file is now included with NumPy. In order to find its location for use with PKG_CONFIG_PATH, use numpy-config --pkgconfigdir.
(gh-25730)
Array API standard support in the main namespace
The main numpy
namespace now supports the array API standard. See
Array API standard compatibility for details.
(gh-25911)
Improvements
Strings are now supported by any, all, and the logical ufuncs.
Integer sequences as the shape argument for memmap
numpy.memmap
can now be created with any integer sequence as the shape
argument, such as a list or numpy array of integers. Previously, only the
types of tuple and int could be used without raising an error.
(gh-23729)
errstate is now faster and context safe
The numpy.errstate
context manager/decorator is now faster and
safer. Previously, it was not context safe and had (rare)
issues with thread-safety.
(gh-23936)
AArch64 quicksort speed improved by using Highway’s VQSort
This release introduces the Google Highway library, using VQSort on AArch64. Execution time is improved by up to 16x in some cases; see the PR for benchmark results. Extensions to other platforms will be done in the future.
(gh-24018)
Complex types - underlying C type changes
The underlying C types for all of NumPy's complex types have been changed to use C99 complex types.
While this change does not affect the memory layout of complex types, it changes the API to be used to directly retrieve or write the real or imaginary part of a complex number, since direct field access (as in c.real or c.imag) is no longer an option. You can now use the utilities provided in numpy/npy_math.h to do these operations, like this:

npy_cdouble c;
npy_csetreal(&c, 1.0);
npy_csetimag(&c, 0.0);
printf("%g + %gi\n", npy_creal(c), npy_cimag(c));

To ease cross-version compatibility, equivalent macros and a compatibility layer have been added which can be used by downstream packages to continue to support both NumPy 1.x and 2.x. See Support for complex numbers for more info.
numpy/npy_common.h now includes complex.h, which means that complex is now a reserved keyword.
(gh-24085)
iso_c_binding support and improved common blocks for f2py
Previously, users would have to define their own custom f2cmap file to use type mappings defined by the Fortran2003 iso_c_binding intrinsic module. These type maps are now natively supported by f2py.
(gh-24555)
f2py now handles common blocks which have kind specifications from modules. This further expands the usability of intrinsics like iso_fortran_env and iso_c_binding.
(gh-25186)
Call str automatically on the third argument to functions like assert_equal
The third argument to functions like assert_equal now has str called on it automatically. This way it mimics the built-in assert statement, where assert_equal(a, b, obj) works like assert a == b, obj.
(gh-24877)
Support for array-like atol/rtol in isclose, allclose
The keywords atol and rtol in isclose and allclose now accept both scalars and arrays. An array, if given, must broadcast to the shapes of the first two array arguments.
(gh-24878)
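A sketch of per-element tolerances, assuming NumPy >= 2.0:

```python
import numpy as np

a = np.array([1.0, 100.0])
b = np.array([1.05, 105.0])

# Per-element absolute tolerances: tight for the first entry, loose for the second.
close = np.isclose(a, b, atol=np.array([0.01, 10.0]), rtol=0.0)
```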
Consistent failure messages in test functions
Previously, some numpy.testing assertions printed messages that referred to the actual and desired results as x and y. Now, these values are consistently referred to as ACTUAL and DESIRED.
(gh-24931)
n-D FFT transforms allow s[i] == -1
The fftn, ifftn, rfftn, irfftn, fft2, ifft2, rfft2 and irfft2 functions now use the whole input array along axis i if s[i] == -1, in line with the array API standard.
(gh-25495)
Guard PyArrayScalar_VAL and PyUnicodeScalarObject for the limited API
PyUnicodeScalarObject holds a PyUnicodeObject, which is not available when using Py_LIMITED_API. Guards were added to hide it and consequently also hide the PyArrayScalar_VAL macro.
(gh-25531)
Changes
np.gradient() now returns a tuple rather than a list, making the return value immutable.
(gh-23861)
Being fully context and thread-safe, np.errstate can only be entered once now. np.setbufsize is now tied to np.errstate(): leaving an np.errstate context will also reset the bufsize.
(gh-23936)
A new public np.lib.array_utils submodule has been introduced. It currently contains three functions: byte_bounds (moved from np.lib.utils), normalize_axis_tuple and normalize_axis_index.
(gh-24540)
Introduce numpy.bool as the new canonical name for NumPy's boolean dtype, and make numpy.bool_ an alias to it. Note that until NumPy 1.24, np.bool was an alias to Python's builtin bool. The new name helps with array API standard compatibility and is a more intuitive name.
(gh-25080)
The dtype.flags value was previously stored as a signed integer. This meant that the aligned dtype struct flag led to negative flags being set (-128 rather than 128). This flag is now stored unsigned (positive). Code which checks flags manually may need to adapt. This may include code compiled with Cython 0.29.x.
(gh-25816)
Representation of NumPy scalars changed
As per NEP 51, the scalar representation has been updated to include the type information to avoid confusion with Python scalars.
Scalars are now printed as np.float64(3.0)
rather than just 3.0
.
This may disrupt workflows that store representations of numbers
(e.g., to files) making it harder to read them. They should be stored as
explicit strings, for example by using str()
or f"{scalar!s}"
.
For the time being, affected users can use np.set_printoptions(legacy="1.25")
to get the old behavior (with possibly a few exceptions).
Documentation of downstream projects may require larger updates,
if code snippets are tested. We are working on tooling for
doctest-plus
to facilitate updates.
(gh-22449)
Truthiness of NumPy strings changed
NumPy strings previously were inconsistent about how they defined whether a string is True or False, and the definition did not match the one used by Python. Strings are now considered True when they are non-empty and False when they are empty.
This changes the following distinct cases:
Casts from string to boolean were previously roughly equivalent to string_array.astype(np.int64).astype(bool), meaning that only valid integers could be cast. Now a string of "0" will be considered True since it is not empty. If you need the old behavior, you may use the above step (casting to integer first) or string_array == "0" (if the input is only ever 0 or 1). To get the new result on old NumPy versions, use string_array != "".
np.nonzero(string_array) previously ignored whitespace, so that a string containing only whitespace was considered False. Whitespace is now considered True.
This change does not affect np.loadtxt
, np.fromstring
, or np.genfromtxt
.
The first two still use the integer definition, while genfromtxt
continues to
match for "true"
(ignoring case).
However, if np.bool_
is used as a converter the result will change.
The change does affect np.fromregex
as it uses direct assignments.
(gh-23871)
A mean keyword was added to the var and std functions
Often when the standard deviation is needed, the mean is also needed; the same holds for the variance and the mean. Until now, the mean was then calculated twice. The change introduced here allows passing a precalculated mean to the var and std functions as a keyword argument. See the docstrings for details and an example illustrating the speed-up.
(gh-24126)
Remove datetime64 deprecation warning when constructing with timezone
The numpy.datetime64
method now issues a UserWarning rather than a
DeprecationWarning whenever a timezone is included in the datetime
string that is provided.
(gh-24193)
Default integer dtype is now 64-bit on 64-bit Windows
The default NumPy integer is now 64-bit on all 64-bit systems as the historic 32-bit default on Windows was a common source of issues. Most users should not notice this. The main issues may occur with code interfacing with libraries written in a compiled language like C. For more information see Windows default integer.
(gh-24224)
Renamed numpy.core
to numpy._core
Accessing numpy.core
now emits a DeprecationWarning. In practice
we have found that most downstream usage of numpy.core
was to access
functionality that is available in the main numpy
namespace.
If for some reason you are using functionality in numpy.core
that
is not available in the main numpy
namespace, this means you are likely
using private NumPy internals. You can still access these internals via
numpy._core
without a deprecation warning but we do not provide any
backward compatibility guarantees for NumPy internals. Please open an issue
if you think a mistake was made and something needs to be made public.
(gh-24634)
The “relaxed strides” debug build option, which was previously enabled through the NPY_RELAXED_STRIDES_DEBUG environment variable or the -Drelaxed-strides-debug config-settings flag, has been removed.
(gh-24717)
Redefinition of np.intp/np.uintp (almost never a change)
Since the actual use of these types almost always matches the use of size_t/Py_ssize_t, this is now the definition in C. Previously, they matched intptr_t and uintptr_t, which would often have been subtly incorrect. This has no effect on the vast majority of machines, since the sizes of these types only differ on extremely niche platforms. However, it means that:
Pointers may not necessarily fit into an intp typed array anymore. The p and P character codes can still be used, however.
Creating intptr_t or uintptr_t typed arrays in C remains possible in a cross-platform way via PyArray_DescrFromType('p').
The new character codes nN were introduced.
It is now correct to use the Python C-API functions when parsing to npy_intp typed arguments.
(gh-24888)
numpy.fft.helper made private
numpy.fft.helper
was renamed to numpy.fft._helper
to indicate
that it is a private submodule. All public functions exported by it
should be accessed from numpy.fft
.
(gh-24945)
numpy.linalg.linalg made private
numpy.linalg.linalg
was renamed to numpy.linalg._linalg
to indicate that it is a private submodule. All public functions
exported by it should be accessed from numpy.linalg
.
(gh-24946)
Out-of-bound axis not the same as axis=None
In some cases, axis=32 (or, for concatenate, any large value) was the same as axis=None. Except for concatenate, this was deprecated. Any out-of-bound axis value will now error; make sure to use axis=None.
(gh-25149)
New copy keyword meaning for array and asarray constructors
Now numpy.array and numpy.asarray support three values for the copy parameter:
None - A copy will only be made if it is necessary.
True - Always make a copy.
False - Never make a copy. If a copy is required, a ValueError is raised.
The meaning of False changed, as it now raises an exception if a copy is needed.
(gh-25168)
The __array__ special method now takes a copy keyword argument.
NumPy will pass copy to the __array__ special method in situations where it would be set to a non-default value (e.g. in a call to np.asarray(some_object, copy=False)). Currently, if an unexpected keyword argument error is raised after this, NumPy will print a warning and retry without the copy keyword argument. Objects implementing the __array__ protocol should accept a copy keyword argument with the same meaning as when passed to numpy.array or numpy.asarray.
(gh-25168)
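A minimal sketch of a conforming implementation; the Wrapper class is hypothetical:

```python
import numpy as np

class Wrapper:
    """Hypothetical container implementing the __array__ protocol."""
    def __init__(self, data):
        self._data = list(data)

    def __array__(self, dtype=None, copy=None):
        # Accepting the copy keyword (copy=None lets NumPy decide) avoids
        # the fallback warning described above.
        return np.array(self._data, dtype=dtype)

arr = np.asarray(Wrapper([1, 2, 3]))
```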
Cleanup of initialization of numpy.dtype with strings with commas
The interpretation of strings with commas is changed slightly, in that a
trailing comma will now always create a structured dtype. E.g., where
previously np.dtype("i")
and np.dtype("i,")
were treated as identical,
now np.dtype("i,")
will create a structured dtype, with a single
field. This is analogous to np.dtype("i,i")
creating a structured dtype
with two fields, and makes the behaviour consistent with that expected of
tuples.
At the same time, the use of a single number surrounded by parentheses to indicate a sub-array shape, like in np.dtype("(2)i,"), is deprecated. Instead, one should use np.dtype("(2,)i") or np.dtype("2i"). Eventually, using a number in parentheses will raise an exception, as is the case for initializations without a comma, like np.dtype("(2)i").
(gh-25434)
(gh-25434)
Change in how complex sign is calculated
Following the array API standard, the complex sign is now calculated as
z / |z|
(instead of the rather less logical case where the sign of
the real part was taken, unless the real part was zero, in which case
the sign of the imaginary part was returned). Like for real numbers,
zero is returned if z==0
.
(gh-25441)
Return types of functions that returned a list of arrays
Functions that returned a list of ndarrays have been changed to return a tuple
of ndarrays instead. Returning tuples consistently whenever a sequence of
arrays is returned makes it easier for JIT compilers like Numba, as well as for
static type checkers in some cases, to support these functions. Changed
functions are: atleast_1d, atleast_2d, atleast_3d, broadcast_arrays, meshgrid, ogrid, and histogramdd.
np.unique return_inverse shape for multi-dimensional inputs
When multi-dimensional inputs are passed to np.unique
with return_inverse=True
,
the unique_inverse
output is now shaped such that the input can be reconstructed
directly using np.take(unique, unique_inverse)
when axis=None
, and
np.take_along_axis(unique, unique_inverse, axis=axis)
otherwise.
any and all return booleans for object arrays
The any and all functions and methods now return booleans also for object arrays. Previously, they did a reduction which behaved like the Python or and and operators, which evaluate to one of the arguments. You can use np.logical_or.reduce and np.logical_and.reduce to achieve the previous behavior.
(gh-25712)
np.can_cast cannot be called on Python int, float, or complex
np.can_cast
cannot be called with Python int, float, or complex instances
anymore. This is because NEP 50 means that the result of can_cast
must
not depend on the value passed in.
Unfortunately, for Python scalars whether a cast should be considered
"same_kind"
or "safe"
may depend on the context and value so that
this is currently not implemented.
In some cases, this means you may have to add a specific path for:
if type(obj) in (int, float, complex): ...
.
(gh-26393)