> any evidence of their still strong position will be greatly appreciated

Fortran still dominates certain areas of scientific computing / HPC, primarily computational chemistry and CFD: https://fortran-lang.org/packages/scientific

You don't hear about most of them because they're generally run on HPC centers by scientists in niche fields. But you do get their benefit if you buy things that have the chemical sector in their supply chain.

The common thread is generally historical codes with a lot of matrix math. Fortran has some pretty great syntax for arrays and their operations, and the for-loop parallelization syntax in parallel compilers (like OpenMP) is also easy to use. The language can even enforce function purity for you, which removes some of the footguns from parallel code that you get in other languages. The kinds of problems those packages solve tend to bottleneck at matrix math, so it's not surprising a language that is very ergonomic for vector math found use there.

Same for Matlab: it's mostly used by niche fields and engineers who work on physical objects (chemical, mechanical, etc.). Their marketing strategy is to give discounts to universities to encourage classes that use them. Like Fortran, it has good syntax for matrix operations, plus a legitimately strong standard library. Great for students who aren't programmers and who don't want to be programmers. They then only know this language and ask their future employer for a license. If you don't interact with a lot of engineers at many companies, you aren't going to see Matlab.
If everyone is just using the Fortran libraries instead of reimplementing them in a modern language, then that's evidence that Fortran is still being used for that purpose.
Interesting... maybe the respective comparison was Matlab to Python and Fortran to C/C++? This sentence actually had three parallel clauses. But that was a great nit to find.
Not really, you can always change the indexing to account for it. For example, the GEMM matrix multiplication subroutines from BLAS can transpose their arguments [1]. So if you have A (m x n) and B (n x p) stored row-major, but you want to use a column-major BLAS to compute A*B, you can instead tell BLAS that A is n x m, B is p x n, and you want to compute A' * B'. As the article mentions, NumPy can handle both and do all the bookkeeping. So can Eigen in C++.

[1] https://www.math.utah.edu/software/lapack/lapack-blas/dgemm....
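A small NumPy sketch of that bookkeeping (not from the article, just an illustration with arbitrary shapes): handing row-major buffers to a column-major routine only requires transposing and swapping the operands, never copying data.

```python
import numpy as np

m, n, p = 3, 4, 5
A = np.arange(m * n, dtype=float).reshape(m, n)  # row-major (C order), m x n
B = np.arange(n * p, dtype=float).reshape(n, p)  # row-major, n x p

# A column-major routine reading A's buffer sees an n x m matrix: exactly A.T.
# NumPy expresses that reinterpretation as a zero-copy view:
assert A.T.flags['F_CONTIGUOUS'] and np.shares_memory(A, A.T)

# So computing A @ B through column-major GEMM is just the identity
# (A @ B) == (B.T @ A.T).T -- nothing gets copied or rearranged.
assert np.allclose(A @ B, (B.T @ A.T).T)
```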
> What can we do about this? We can't change the layout of pygame Surface data. And we seriously don't want to copy the C++ code of cv2.resize, with its various platform-specific optimizations,

Or... you could have sent a ~25 line pull request to opencv to fix this performance problem not just for you, but for thousands of other developers and millions of users. I think your fix would go here: https://github.com/opencv/opencv/blob/ba65d2eb0d83e6c9567a25...

And you could have tracked down that slow code easily by running your slow code in gdb, hitting Ctrl+C while it's doing the slow thing, and then "bt" to get a stack trace of what it's doing, and you'd see it constructing this new image copy because the format isn't correct.
You're right. Numpy stores arrays in row-major order by default.
One can always just have a look at the flags (ndarray.flags returns some information about the order and underlying buffer).
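A quick sketch of what that check looks like (plain NumPy, arbitrary shapes):

```python
import numpy as np

a = np.zeros((3, 4))                 # default: C order (row-major)
print(a.flags['C_CONTIGUOUS'])       # True
print(a.flags['F_CONTIGUOUS'])       # False

b = np.asfortranarray(a)             # column-major copy of the same data
print(b.flags['F_CONTIGUOUS'])       # True
print(a.strides, b.strides)          # (32, 8) vs (8, 24): strides spell out the layout
```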
I dunno if it's "more than that", since they're not directly comparable? What you describe sounds more like Numba or Cython than what TFA describes, which is a different use case?..
oh THIS is why image byte order and dimensions are so confusing every time i fuck with opencv and pygame. well, half of why. for some reason i keep doing everything directly on /dev/fb0
Yep, welcome to the world of RGBA and ARGB storage and memory representation formats, with little- and big-endianness thrown into the mix. It's all very bloody annoying for very little gain.
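A tiny sketch of how the two interact, using NumPy just to inspect the bytes of an arbitrary 0xRRGGBBAA pixel value:

```python
import numpy as np

pixel = 0x11223344                       # nominally R=0x11, G=0x22, B=0x33, A=0x44

be = np.array([pixel], dtype='>u4')      # the same value stored big-endian
le = np.array([pixel], dtype='<u4')      # ...and little-endian

print([hex(x) for x in be.view(np.uint8)])   # ['0x11', '0x22', '0x33', '0x44'] -- memory reads as RGBA
print([hex(x) for x in le.view(np.uint8)])   # ['0x44', '0x33', '0x22', '0x11'] -- same pixel, reads as ABGR
```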
Seems to me if you could do most of the work in Python and then just make the critical loop unsafe and 100x faster, then that would certainly have some appeal.
Plenty of people would gladly not have to learn another language (especially C). You could also benefit from testing blocks of code with safety enabled to have more confidence when safety is removed.
Or if you’re going to learn another language, you might as well learn Nim. Keep most of the Python syntax, ditch the performance problems and the packaging insanity.
"Why would anyone use" -> "when" usage generally means lots of use cases are being ignored / swept under the rug. >1 billion people exist. Each has a unique opinion / viewpoint. |
You can absolutely get direct unbounded access to memory with ctypes, with all the bugs that come from this. I just think/hope the code I show in TFA happens to have no such bugs.
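A minimal sketch of that kind of unbounded access (the out-of-range index is arbitrary; what actually happens depends on what is mapped at that address):

```python
import ctypes

buf = (ctypes.c_ubyte * 4)(1, 2, 3, 4)                    # a 4-byte buffer
raw = ctypes.cast(buf, ctypes.POINTER(ctypes.c_ubyte))    # an unchecked pointer to it

print(raw[0], raw[3])       # fine: within the buffer
print(raw[10_000_000])      # nothing checks this: garbage at best, SIGSEGV at worst
raw[10_000_000] = 0         # ...and you can write out of bounds just as easily
```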
Oh c'mon, I can still write some Z80 assembly from memory and remember the ZX Spectrum memory layout somewhat, but I check out new programming languages now and then :)
It might not be interesting to you. Having a lot of Rust features with a much smarter compiler and Python syntax is pretty interesting to me.
You don't even need a ctypes import for it; a few lines of pure Python around eval will get you a segfault. CPython is not memory safe.
The example given by the parent does not need eval to trigger it, though. Just create a function and replace its code object, then call it; it will easily segfault.
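A sketch of what that looks like (CPython-specific, and the details vary by version; on typical builds this takes the process down with a segmentation fault rather than raising a Python exception):

```python
def donor():
    return "some constant"        # its bytecode loads an entry of co_consts

def victim():
    pass

# Swap in a code object whose constants tuple is now too short for its own
# bytecode. CPython trusts its compiler and does not re-validate, so calling
# it indexes past the empty tuple and dereferences whatever happens to be there.
victim.__code__ = donor.__code__.replace(co_consts=())
victim()                          # typically SIGSEGV, not a Python error
```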
Is this supposed to be a joke? Have a look at the linked article to see what meaning of 'unsafe' the author has in mind.
Safety is catching (more) errors ahead of time... for which Python is grossly unsuitable imo. Fun lang though, just not one that ever comes to mind when I hear 'safe'.
I would agree, Rust is not safe. We need to encourage more formal rigor in our craft and avoid misconceptions like 'safe Rust' or 'safe Python'. Thus my original comment :P
This is a classic bikeshedding issue. When Go and Rust were first being designed, I brought up support for multidimensional arrays. For both cases, that became lost in discussions over what features arrays should have. Subarrays, like slices but multidimensional? That's the main use case for striding. Flattening in more than one dimension? And some people want sparse arrays. Stuff like that. So the problem gets pushed off onto collection classes, there's no standard, and everybody using such arrays spends time doing format conversion. This is why FORTRAN and Matlab still have strong positions in number-crunching.
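For what striding buys you, a small NumPy sketch (shapes are arbitrary): a multidimensional subarray is just the same buffer seen through an offset and strides, which is exactly the feature that keeps getting pushed out to collection classes elsewhere.

```python
import numpy as np

a = np.arange(36).reshape(6, 6)      # one contiguous buffer, viewed as 6 x 6
sub = a[1:5, 2:6]                    # a 4 x 4 subarray of it

# No data was copied: `sub` reuses the buffer with the parent's strides.
print(np.shares_memory(a, sub))      # True
print(a.strides, sub.strides)        # (48, 8) (48, 8) with a 64-bit int dtype
```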