(comments)

Original link: https://news.ycombinator.com/item?id=39958260

The Fourier transform is a mathematical technique that represents a time signal, or any piecewise continuous, differentiable, Dirichlet-integrable function, in an orthogonal basis of complex exponentials. It has many applications, including image processing, solving differential equations, and fast multiplication. From a mathematical point of view, the transform preserves the original information and is invertible. In practice, however, engineers often discard unwanted information, such as certain frequency components. Because of sampling constraints, the transform cannot faithfully represent high-frequency content, which gives rise to the notion of the Nyquist limit. Engineers mostly work with discrete signals, hence the prominence of the discrete Fourier transform (DFT) in engineering education and practice. These transforms also matter in physics, notably in curved spacetime, where the Fourier transform cannot cleanly separate positive from negative frequencies. Despite these subtleties, the Fourier transform has unique strengths, such as its complex exponentials being eigenvectors of linear time-invariant (LTI) systems, which enables interference-free signal transmission in circuits and communication channels. Its connection to quantum physics, through the wave functions of position and momentum forming a Fourier pair, is also noteworthy. Overall, while the Fourier transform has deep historical and theoretical significance, its practical applications center on engineering and physics, offering valuable insight into the behavior and manipulation of a wide range of phenomena.

Related articles

Original article


Mathematically, the Fourier transform is "simply" a way of representing time signals in a certain orthogonal vectorial basis. Vectors in an ordinary sense, e.g. a displacement vector on Earth's surface can also be represented in several orthogonal bases: one basis could, for example, be two vectors pointing North and East; another could be a vector pointing along a certain road and one perpendicular to it. There is nothing inherently special about any of these bases, one could draw maps according to any of these two or many other conventions. (Orthogonal basis vectors are not even necessary, only convenient.)

The interesting thing about time-dependent signals (or any "pretty" function, really) is that they live in an infinite-dimensional vector space, which is hard to imagine; but (besides some important technicalities) the math works mostly the same way: signals as infinite-dimensional vectors can be represented in a lot of bases. One representation is the Fourier transform, where the basis vectors are harmonic functions. The "map" showing the shape of a signal as a combination of infinitely many harmonic functions -- i.e. the frequency domain -- is just as real as any other map with different basis vectors, e.g. the Walsh–Hadamard transform mentioned in the article. And, crucially, the original time-domain representation is also just one map showing us the signal, though it is often the most natural to us.
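
A minimal numpy sketch of this "signals are vectors" view (the size and the signal are arbitrary): the matrix below is an orthonormal basis of harmonics, the DFT is just the coordinate change into that basis, and changing back loses nothing.

    import numpy as np

    N = 8
    x = np.random.default_rng(0).standard_normal(N)  # a "signal" as an N-dimensional vector

    # Orthonormal DFT basis: row k is the harmonic e^{-2*pi*i*k*n/N} / sqrt(N)
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    assert np.allclose(F @ F.conj().T, np.eye(N))  # orthonormality

    X = F @ x                              # coordinates in the "frequency" basis
    assert np.allclose(F.conj().T @ X, x)  # changing back recovers the signal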



Excellent answer and I am sure you are aware of this, but like to point out:

"Mathematically, the Fourier transform is "simply" a way of representing time signals in a certain orthogonal vectorial basis."

Not just time signals but any piecewise continuous and differentiable as well as Dirichlet integrable function. This has many applications, just a few examples from the top of my head: image processing, solving differential equations, fast multiplication.
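
A minimal sketch of the fast-multiplication example (poly_mul is an illustrative name, not a library function): multiplying two polynomials is a convolution of their coefficient lists, which the FFT turns into a pointwise product.

    import numpy as np

    def poly_mul(a, b):
        """Multiply two coefficient lists via the FFT: convolution in the
        coefficient domain is pointwise multiplication in the frequency domain."""
        n = len(a) + len(b) - 1           # length of the full product
        size = 1 << (n - 1).bit_length()  # pad to a power of two
        fa = np.fft.rfft(a, size)
        fb = np.fft.rfft(b, size)
        return np.fft.irfft(fa * fb, size)[:n].round().astype(int)

    # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
    print(poly_mul([1, 2], [3, 4]))  # [ 3 10  8]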

I'd also like to add that from a mathematical point of view these transforms are "lossless" in the sense that the transformed function has the exact same information as the original and you can get back the exact original even if all you have is the transform.

I feel this often gets lost when people approach the Fourier transform from a more engineering perspective, not least because we often do the transform to throw away unwanted information, like certain frequency components.

In the end it really is just one of many perspectives to look at a function.



> I feel this often gets lost when people approach the Fourier transform from a more engineering perspective, not least because we often do the transform to throw away unwanted information, like certain frequency components.

That was my problem as well. My first introduction to Fourier transforms was through more of an engineering lens. I remember having trouble with the _inverse_ Fourier transform. I was OK with a Fourier inverse of an already transformed function but I wasn't quite sure what that would mean when applied to a non-transformed, "regular" function.



The inverse Fourier transform of a non-transformed signal gives you basically the Fourier transform with the frequency axis reversed (for a real signal, the complex conjugate of it). Applying it a second time gives the same result as if you'd done the forward transform twice.

If you apply the Fourier transform 4 times you get your original function back. You can think of it as a 90-degree rotation; the inverse transform just rotates in the opposite direction.

The rotation analogy is not even too far-fetched, as the fractional Fourier transform allows you to rotate by an arbitrary angle.
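
This is easy to check numerically with the unitary ("ortho") normalization, under which applying the DFT twice time-reverses the signal and four times is the identity:

    import numpy as np

    x = np.random.default_rng(1).standard_normal(16)
    F = lambda v: np.fft.fft(v, norm="ortho")  # unitary DFT

    x2 = F(F(x))             # two transforms: time reversal, x2[n] == x[(-n) mod N]
    assert np.allclose(x2, np.roll(x[::-1], 1))

    x4 = F(F(F(F(x))))       # four transforms: back to the original
    assert np.allclose(x4, x)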



This does read like a joke. I had never heard of it either, and I'm wondering if many people use this at all.

Operations on cepstra are labelled quefrency analysis (or quefrency alanysis[1]), liftering, or cepstral analysis. It may be pronounced in the two ways given, the second having the advantage of avoiding confusion with kepstrum.



Almost all speech recognizers until this latest crop (of end-to-end DL NN ASR) operated on cepstral coefficients (and their deltas and delta-deltas) as their feature vectors.


> [...] I wasn't quite sure what that would mean when applied to a non-transformed, "regular" function.

Have you gained some intuition/understanding for this?

I tried a few inputs in WolframAlpha, but unless I manually type in the integral for the inverse transform there's not even a graph :) (and I have no idea whether it's even the same thing without putting a `t` in the exponent and wrapping it in an f(t) = ... )

https://www.wolframalpha.com/input?i=integral+%28sin%28x%29+...



Not parent (but GP) and intuition can mean many things but what helped me was keeping in mind:

Every continuous periodic function turns into a discrete aperiodic one when transformed. Works both ways.

Continuous aperiodic stays continuous aperiodic. Discrete periodic stays discrete periodic.
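
For reference, the standard pairing across the four classical Fourier variants:

    continuous & periodic  (Fourier series)     <->  discrete & aperiodic spectrum
    continuous & aperiodic (Fourier transform)  <->  continuous & aperiodic spectrum
    discrete & aperiodic   (DTFT)               <->  continuous & periodic spectrum
    discrete & periodic    (DFT)                <->  discrete & periodic spectrum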



A Fourier transform basically gives you an infinite number of sine waves with different amplitudes/phases at every frequency. If you add them all back together (the inverse Fourier transform), you get back your original signal. Audio compression in this case would just be excluding the sine waves that are too high in frequency to hear when you add them all back. I always hate how people try to make the Fourier transform sound more complex than it actually is (and yes, there is more nuance to compression than this, but this is the basic idea).
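
A minimal sketch of that idea (the tones and the cutoff are arbitrary): transform, zero every bin above the cutoff, add the survivors back together.

    import numpy as np

    def crude_lowpass(x, sample_rate, cutoff_hz):
        """Toy "compression": drop every sinusoid above cutoff_hz, keep the rest."""
        X = np.fft.rfft(x)                             # amplitude/phase per frequency
        freqs = np.fft.rfftfreq(len(x), 1 / sample_rate)
        X[freqs > cutoff_hz] = 0                       # discard the inaudible components
        return np.fft.irfft(X, len(x))                 # sum the remaining sine waves

    sr = 44100
    t = np.arange(sr) / sr                             # one second of audio
    x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 18000 * t)
    y = crude_lowpass(x, sr, 10000)                    # the 18 kHz tone is gone
    assert np.allclose(y, np.sin(2 * np.pi * 440 * t), atol=1e-8)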


The DFT has quite severe limitations that do not appear in the continuous Fourier transform. In particular, the Nyquist criterion, that there be zero signal energy at ALL frequencies above half the sampling frequency, can only be approximated, and must be enforced BEFORE sampling, i.e. in the time domain.


In my engineering courses, Fourier transforms were taught in the context of discrete Fourier transforms, because sampling is a thing that matters (computer audio is discrete data points, not an actual wave).

The Fourier transform of a discrete signal repeats in the frequency domain. For example, [1, -1, 1] could be a sinusoid at exactly half the sampling frequency, going from 1 to -1 and back to 1 once. Or it could be a sinusoid at 1.5x the sampling frequency that actually swings from 1 to -1 and back an extra time within the gap between the first two samples. Or 2.5x the sampling frequency, 3.5x, and so on. The solution is to keep only the part of the transform below the Nyquist limit, because the sampling rate is too low to distinguish the higher frequencies, so we assume they don't exist. This also means that if the source signal WAS in fact at 1.5x the sampling frequency, we will see a spike at 1/2 the sampling frequency in the Fourier transform, and when we re-create the signal, it will be completely wrong.
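
A quick numpy check of that ambiguity: sinusoids at f and f + fs produce literally identical samples, so no transform of the samples can tell them apart.

    import numpy as np

    fs = 8.0                    # sampling frequency, Hz
    t = np.arange(16) / fs      # sample times

    s_low = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz
    s_high = np.sin(2 * np.pi * 9.0 * t)  # 9 Hz = 1 Hz + fs: an alias

    assert np.allclose(s_low, s_high)     # indistinguishable after sampling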

So unless you have analog hardware for measuring the Fourier transform (or are working purely in a non-physical mathematical domain, like "I have a sine wave", which can be represented perfectly), you are naturally going to be taking discrete samples of a signal to measure the Fourier transform, which means you are going to lose any part of the signal that doesn't respect the sampling rate.

Because my engineering courses were so heavily focused on digital signal processing, when I hear "Fourier transform" I immediately think of the discrete Fourier transform, where loss is immediately applicable.



It can be lossy and not just because of truncation. Convergence is somewhat finicky in Fourier analysis and was not well understood mathematically before the 60's. Engineers made great use of it anyway.

Remember that it is an integral transform. Basically, any data on a set of vanishing measure can be lost or corrupted. Unfortunately it can even be the case that the deviation around some points is unbounded.



A square wave would take an infinite number of sinusoidal waves to perfectly reconstruct. To approximate it, one truncates the coefficients. This is done in any engineering application where memory isn't infinite. Which is all of them.
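
A toy numpy illustration of that truncation: partial Fourier sums of a square wave, where the Gibbs overshoot near the jump (about 9% of the jump height) never shrinks no matter how many terms you keep.

    import numpy as np

    t = np.linspace(0, 1, 10000, endpoint=False)

    def square_partial_sum(t, n_terms):
        """First n_terms odd harmonics of a unit square wave:
        (4/pi) * sum over odd k of sin(2*pi*k*t) / k."""
        s = np.zeros_like(t)
        for k in range(1, 2 * n_terms, 2):   # k = 1, 3, 5, ...
            s += np.sin(2 * np.pi * k * t) / k
        return 4 / np.pi * s

    for n in (10, 100, 1000):
        overshoot = square_partial_sum(t, n).max() - 1.0
        print(f"{n:5d} harmonics: overshoot = {overshoot:.4f}")  # stays near 0.179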


Of course, a discrete, finite sampling of a square wave at a set of points in time only requires a finite number of coefficients to perfectly reconstruct.


Which is the point. A discrete Fourier transform can never recreate a square wave unless you have infinitely many samples. It can only recreate the finitely sampled signal (which is only an approximation of the square wave, not a real square wave).

This means that sure, the Fourier transform itself isn't lossy (garbage in, garbage out), but Fourier transforms are used in contexts where losses are introduced. If I have a real, perfect square wave, and I want to take a Fourier transform of it, the sampling is going to introduce loss, so it is fair to associate sampling losses with the transform itself. A real square wave run through a DFT program on my computer is going to spit out an approximation of a square wave -- loss.



The good news of course is that if you were sampling a real signal, then that signal was not actually a perfect square wave. So the fact that you can't (re)construct a perfect square wave is somewhat moot...


Generally yes, but it's a perfectly reasonable assumption that a natural source could generate a signal that is beyond the bounds of what we can record. Any real signal generated by a computer is going to fit within the constraints of what we can generate, but inevitably something like a whale, or a quasar, will generate a wave that can only be recorded with loss.

But also, the question this is all responding to was effectively "why would engineers associate Fourier transforms with loss" and the answer is simply "because the techniques used in calculating most Fourier transforms are going to inherently put a frequency limit and anything beyond that will be lost or show up as an artifact". Engineers work with real world constraints and tend to be hyper aware of those constraints even if they often don't matter.



I used to think of it like another basis too, but nowadays I think this basis analogy is a bit fraught, or at least not the whole story.

In particular, for multidimensional spaces, the usual multidimensional Fourier transform only really works if you have a flat metric on that space (i.e. no curvature). That's a bit of a warning signal given that our universe itself is curved.

There was some very interesting work recently where it was shown how to generalize Fourier series to certain hyperbolic lattices [1], and one important outcome of that work is that the analog of the Fourier space is actually higher dimensional than the position space.

Furthermore, the dimensionality of the 'Fourier space' in this case depends on the lattice discretization. One 2D lattice discretization may have a 4D frequency-like domain, and another 2D lattice might have an 8D frequency-like domain.

[1] https://arxiv.org/abs/2108.09314 or https://www.pnas.org/doi/full/10.1073/pnas.2116869119



Not the whole story indeed, but you have to dive into representation theory somewhat to get more: the Fourier transform is more or less the representation theory of the (abelian) group of the translations of your space, thus the homogeneity requirement. The finite-lattice version[1] (a discretized torus, basically) may serve to hint at what's in store here.

[1] https://www-users.cse.umn.edu/~garrett/m/repns/notes_2014-15... (linear algebra required at least to the degree that one is comfortable with the difference between a matrix and an operator and knows what a direct sum is)
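
To make that concrete for the cyclic case: the characters of Z/NZ are

    \chi_k(n) = e^{2\pi i k n / N}, \qquad k = 0, 1, \dots, N - 1,

and the DFT \hat{x}(k) = \sum_n x(n)\,\overline{\chi_k(n)} is exactly the decomposition of x into these one-dimensional sub-representations: a translation x(n) -> x(n - m) acts on the k-th component as multiplication by the scalar \overline{\chi_k(m)}.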



If you like this topic, I strongly recommend you read the references I attached to my comment.

In uniformly curved 2D hyperbolic spaces, it turns out that there is a higher dimensional non-Abelian Fuchsian translation group defined on a higher genus torus.



> That’s a bit of a warning signal given that our universe itself is curved.

What does this have to do with whether they form a different basis in cases where we don't account for curvature? This seems completely irrelevant; sure, the tool can't be used in some cases, but it can be used as a basis change in other cases.



It's not an analogy. It's literally just another basis.

> In particular, for multidimensional spaces, the usual multidimensional Fourier transform only really works if you have a flat metric on that space

What the hell does the metric of space-time have to do with this? When computing a Fourier transform, we're not working in 3+1 dimensional space-time; we're working in either an N-dimensional (in the discrete case) or \infty-dimensional (in the continuous case) vector space. While that term contains the word "space", these spaces DO NOT, in this context, have anything to do with Euclidean space or the pseudo-Riemannian manifold that GR treats space-time as.



I wanted to know more about this too, and I hate to make meta comments, but I'm afraid your confrontational approach may make the other person think this conversation isn't worth the hassle

Which would be a bad thing, reading this kind of conversation is what makes this site worthwhile



> What the hell does the metric of space-time have to do with this?

Maybe calm down for a moment and try not being such a hot-headed ass. You seem to have missed the point entirely.

I’m well aware that these functions can be described as vectors in an infinite dimensional Hilbert space.

The problem I’m bringing up is that the domains of these functions (i.e. not the vector itself) typically have geometric properties we care about.

The problem is that if one has a manifold with a non-trivial intrinsic geometry, then functions defined on that manifold cannot be faithfully Fourier transformed without losing pretty much all geometrically relevant information.

It turns out that in some cases, there are generalizations of the Fourier transform of a function on a curved manifold, but in those cases, the domain of the transformed function is very different, typically having a higher dimensionality.

This is particularly relevant and problematic in physics, where the Fourier transforms of functions on spacetime are really important and useful, but dont work in curved spacetimes.

E.g. it’s a big problem when doing QFT on a curved spacetime that one cannot separate positive frequencies of a field from negative frequencies.



In the past astronomers believed in the geocentric model of the universe with epicycles. It was extremely accurate, and if more accuracy was needed they added more epicycles. It was a completely wrong model, but they unknowingly used the Fourier series as a function approximator.


It wasn't a wrong model, it was just much more complex than needed, given better understanding of physics. Viewing the universe relative to stationary Earth is a perfectly fine exercise, even if it means you have to DFT the rest of the solar system for the math to work.


I agree overall. But a note - every orthonormal basis partitions the frequency spectrum. It doesn't go away; if you are using e.g. polynomials then you're building functions up out of their frequency components too. The Fourier basis has every element correspond to a specific frequency, which is special in some sense. I would say, rather, that they are each designed for a purpose. A basis change can rearrange the spectrum in such a way that analysis of it is complicated. Then you're analyzing something else (e.g. smoothness). Most functions of interest do have distinctive spectra, even if the Fourier basis doesn't answer all the questions.


Sure, nothing special about sine waves as basis functions for signal decomposition. Not necessarily the best either, depending on what you want to do.

Still, as pertains to whether "the frequency domain is a real place", maybe sine waves are relevant as representing resonant frequencies of physical systems.

There also seems to be something fundamental about the way multiple radio frequencies can simultaneously propagate through a vacuum as long as they are different frequencies.



> one basis could, for example, be two vectors pointing North and East; another could be a vector pointing along a certain road and one perpendicular to it.

And there's no requirement that they be perpendicular, is there? The second just needs 'some amount of perpendicular': North and North-East for example, since any [n, e] can also be described as [n - e, sqrt(2)*e] in the latter. (I think that's right, but my main point is that you can do it, not the particular value there, and if that's way off I'll blame the fever.)



the fourier transform of a periodic signal is composed of a train of dirac deltas, each multiplied by some factor

the delta with smallest frequency is the fundamental frequency, and the others are harmonics

when you do the inverse fourier transform on this train, each delta becomes a sinusoid

that's how you can write any periodic function as a sum of sinusoids, all of them multiples of the fundamental frequency

and that's the fourier series: it's just the fourier transform, followed by an inverse fourier transform, macroexpanded

but the fourier series only works for periodic functions, because only periodic functions have a bunch of isolated, periodic deltas as their fourier transform

so the fourier transform is only half the step of a fourier series (to write down the series you also need the inverse fourier transform) but, at the same time, the fourier transform is a generalization of the fourier series, because it works for nonperiodic functions too
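
in symbols: if f has period T, fundamental frequency w_0 = 2*pi/T and fourier coefficients c_k, then

    \hat{f}(\omega) = 2\pi \sum_{k=-\infty}^{\infty} c_k\, \delta(\omega - k\omega_0),
    \qquad
    c_k = \frac{1}{T} \int_0^T f(t)\, e^{-i k \omega_0 t}\, dt,

and the inverse transform turns each delta back into c_k e^{i k \omega_0 t}, whose sum is exactly the fourier series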



This reminds me of a great conversation I had on a whiteboard during my masters in the dynamical systems group:

"So energy is pumped into the system on the left, and is dissipated over here on the right"

"But the system is rotationally invariant, there is no left and right"

"I meant in frequency space"

"Oh I thought you were talking in real space"

"ARE YOU STUPID, WHO THE HELL THINKS IN REAL SPACE??!?"



Wait a second-

is there even such a thing as left and right in frequency space?

It's an abstract representation; I'd say it doesn't have any relationship to spatial dimensions in terms of left, right, up, down.



The Fourier basis is unique in that the complex exponential basis functions are the eigenvectors of linear time-invariant (LTI) systems. No other transform has this property. Many real-world systems (circuits, communication channels, antennas, etc.) are LTI. This property ensures, for example, that signals transmitted over different frequencies do not interfere. That is why the Fourier transform is so useful and is used instead of other transforms. There is also the connection with quantum physics, in using a Fourier pair as the wave functions of position and momentum, which other transforms don't have.
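
A small numpy check of the eigenvector property (the 3-tap FIR filter is an arbitrary stand-in for any LTI system): a complex exponential comes out as itself, scaled by a single complex number, the frequency response at that frequency.

    import numpy as np

    h = np.array([0.5, 0.3, 0.2])          # impulse response of some LTI system
    N = 64
    n = np.arange(N)
    k = 5                                   # pick one DFT frequency
    x = np.exp(2j * np.pi * k * n / N)      # complex exponential input

    y = np.convolve(x, h)[:N]               # run it through the system

    # Past the startup transient, y == H(w_k) * x: the input is an eigenvector
    # and the eigenvalue is the frequency response at w_k.
    H_k = np.sum(h * np.exp(-2j * np.pi * k * np.arange(len(h)) / N))
    assert np.allclose(y[len(h):], H_k * x[len(h):])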


I'm surprised you're one of the only commenters to bring this up. I have an electrical engineering background -- for analysis, lots of systems are assumed to be either linear or very weakly nonlinear, and a lot of our signals are roughly periodic. Fourier transforms are a no-brainer.

Convolution turns into multiplication, differentiation wrt time of the complex exponential turns into multiplication by j*omega. I don't know about you, but I'd rather do multiplication than convolution and time derivatives.
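
The convolution-to-multiplication part in a few lines of numpy (circular convolution, which is what the DFT natively computes):

    import numpy as np

    rng = np.random.default_rng(2)
    a, b = rng.standard_normal(64), rng.standard_normal(64)

    # Pointwise multiplication in frequency == circular convolution in time
    via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
    direct = np.array([sum(a[m] * b[(n - m) % 64] for m in range(64))
                       for n in range(64)])
    assert np.allclose(via_fft, direct)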

As a corollary, once you accept "we use the Fourier representation because it's convenient for a specific set of common scenarios", the use of any other mathematical transform shouldn't be too surprising (for other problems).



I’ve been thinking a lot lately about how useful it might be to represent Prometheus data in a frequency domain, to visualize weekly, daily, and yearly access patterns in capacity planning. Autoscaling can avoid brownouts but it can’t tell you what your annual budget should be. Or why.

But Prometheus data doesn't really have a fixed sampling interval. Even if each machine in your cluster is reporting on an interval, they aren't synchronized.



A real place?

There’s an optics experiment I did, bloody fiddly, where a picture goes through some lenses, and there’s a plane of the frequencies, and it goes through further lenses and is projected on a screen.

By blocking out areas in the frequency plane, you can change the image. It was extremely fiddly, so huge thanks to Dr Bruce Sinclair at St Andrews.

Physics lab work is where you get to see how things work, although if you go through the theory several months after you’ve done the lab work you’re a bit lost.
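
That experiment (a 4f spatial filter) is easy to mimic numerically: 2D-FFT an image, block part of the frequency plane, transform back. A toy numpy sketch with a synthetic picture:

    import numpy as np

    # Synthetic "picture": coarse stripes (period 64 px) plus fine stripes (period 4 px)
    y, x = np.mgrid[0:256, 0:256]
    img = np.sin(2 * np.pi * x / 64) + np.sin(2 * np.pi * x / 4)

    # "Lens" to the frequency plane, block everything outside a small aperture,
    # then "lens" back to the image plane
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = 128, 128
    mask = np.zeros_like(spectrum)
    mask[cy - 20:cy + 21, cx - 20:cx + 21] = 1    # aperture around the center
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

    # The fine stripes are gone; the coarse ones survive
    assert np.allclose(filtered, np.sin(2 * np.pi * x / 64), atol=1e-8)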



Yep, that feature of getting the frequency domain representation through optics is pretty convenient for the various microscopies and spectroscopies performed at light sources.


The cochlea actually supports the point the article makes, as while it does transform to the frequency domain it doesn't do (or even approximate) a Fourier transform. The time->frequency domain transform it "implements" is more like a wavelet transform.

Edit: To expand on this, to interpret the cochlea as a Fourier transform is to make the same mistake as thinking eyes have cone cells that respond only to red, green or blue light. The reality is that each cell has a varying response to a range of frequencies. Cone cells have a range that peaks in the low, medium or high frequency area and tails off at the sides. Cochlear hair cells have a more wavelet-like response curve with secondary peaks at harmonics of their peak response frequency.

Caveat: I'm not an expert in this, only an enthusiastic amateur, so I eagerly await someone well-akshuallying my well-akshually.



Any kind of discrete Fourier transform, and also any device that generates the Fourier series of a periodic signal, even when done in an ideal way, must have outputs that are generated by a set of filters that have "a varying response to a range of frequencies".

Only a full Fourier transform, which has an infinity of outputs, could have (an infinite number of) filters with an infinitely narrow bandwidth, but those would also need an infinite time before producing their output.

So what you have said does not show that the eye cone cells do not perform a Fourier transform (more correctly a partial expansion in Fourier series of the light, which is periodic in time at the time scales comparable to its period).

The right explanation is that the sensitivity curves of the eye cone cells are a rather poor approximation of the optimal sensitivity curves of a set of filters for analyzing the spectral distribution of the incoming light. (Most animals other than mammals have better sensitivity curves; mammals have lost some of them, and the ancestors of humans re-developed two filters, for red and green, from a single inherited filter, so there has not been enough time to do a job as good as in our distant ancestors.)



Sure but the article asks the question about the frequency domain generally then constrains itself to Fourier transforms. Fourier has a lot of baggage from making large assumptions. Transforms like wavelet and laplace are closer to "real world" because of fewer non-physical assumptions and have actual physical implementations. It doesn't get much more real than seeing it with your own eyes.


> Transforms like wavelet and laplace are closer to "real world" because of fewer non-physical assumptions and have actual physical implementations.

Could you expand on this a bit please? Especially as it relates to the Laplace transform.



I'm not certain the secondary peaks would matter very much though? It seems to me that maybe the most useful model would be not a wavelet transform but some form of DCT?

At any rate, the point is that the frequency domain matters a lot, since our brain essentially receives sound data converted to the frequency domain in the first place...



It's easy to forget how grounded in physics biology is. When I was in college, we had an issue where our stem cell lines were differentiating into bone. Turns out, the hardness of the environment is a signal stem cells can transduce, and the hard dish was telling them they were supposed to be bone cells.


> To turn the Hadamard matrix in the nicely-ordered flavor showcased earlier, we need to sort the rows based on their sequency. I’m not aware of an algorithm more elegant than counting the number of zero crossings

By staring at the matrix, I guessed a pattern and algorithm already known according to https://en.wikipedia.org/wiki/Walsh_matrix:

> The sequency ordering of the rows of the Walsh matrix can be derived from the ordering of the Hadamard matrix by first applying the bit-reversal permutation and then the Gray-code permutation:
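
A sketch of both orderings in Python, checking the bit-reversal-plus-Gray-code route against the naive zero-crossing sort (the composition order below is my reading of the Wikipedia description):

    import numpy as np

    def hadamard(n):
        """Sylvester's recursion: H_{2m} = [[H_m, H_m], [H_m, -H_m]]."""
        H = np.array([[1]])
        for _ in range(n):
            H = np.block([[H, H], [H, -H]])
        return H

    n = 4
    H = hadamard(n)

    # Ordering 1: sort rows by sequency (number of sign changes)
    crossings = (np.diff(H, axis=1) != 0).sum(axis=1)
    walsh_sorted = H[np.argsort(crossings)]

    # Ordering 2: Gray-code the row index, then bit-reverse it
    def bit_reverse(i, bits):
        return int(format(i, f"0{bits}b")[::-1], 2)

    perm = [bit_reverse(i ^ (i >> 1), n) for i in range(2 ** n)]
    assert np.array_equal(walsh_sorted, H[perm])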



Except sinusoids are special in that they are natural solutions to the Helmholtz wave equation. There are other problems too, like square waves having infinite energy. This article might make sense to a mathematician or computer scientist but neglects the underlying physics of sound and waves.


Sinusoids are also special because they are eigenfunctions of the derivative operator.

The physics result is actually probably a consequence of that.

At the end of the day the whole lesson of modern math is that its useful to view things from many perspectives.



Excellent point, lots and lots and lots and lots of physical objects are harmonic oscillators. That does have pretty fundamental grounding in physics.

I can think of lots of other places I'd use Fourier analysis (at least qualitatively, as with doing diffusion modeling in my head), but you're right that sinusoids are more physically "real", whereas being representable in any basis set is more "valid", if that makes any sense.

Not quite sure what the right word is on this one, but I agree "real" kind of suggests real oscillators underlying the phenomena. Square waves are less physical because of discontinuities in both the signal and derivative; nature really doesn't care for discontinuities.



Frequency domain also makes the math really easy for linear, time-invariant operations, which (approximately) describe a lot of systems that exist in nature.

The Gibbs phenomenon, for example, falls out naturally from the IFT of a frequency response where all the frequencies above some cutoff are zero.

I'm curious how the square wave frequency domain would describe the Gibbs phenomenon -- I think you'd have harmonics of the fundamental square frequency showing up as if the system were nonlinear.



The article asks a very general/philosophical question, but then goes on to say the FD is not really that special because we can find other sets of orthogonal bases and transforms between them.

I would argue that despite this fact the frequency domain, and by extension the FT, is special compared to many other transforms, because we can actually observe them in nature. Two examples: a lens will perform a 2D FT of an input image on a collimated beam, which we can observe with e.g. a screen; and we can measure the wavelength (or frequency) of light by projecting the output of a grating or prism onto a CCD, again a direct measurement of the FD (a similar measurement can be done for RF waves).



During undergrad physics / math I came to the conclusion that knowing the value of a function f(x) at infinitely many points x is equivalent to knowing the frequency content of f at infinitely many frequencies. Both representations are equally "real" in a philosophical sense. Some problems are easier to solve in one representation than the other.


Fully agree. Switching from the time domain to the frequency domain is just like switching coordinate systems.

If you have a signal with a single narrow peak in the time domain you can represent it in a very compact or sparse way just using a delta at the position of a peak. If you try to represent it in the frequency domain you will not find such compact representation. Similarly if you have a sinusoidal signal in the time domain, you won't get a compact representation there, but you will get it in the frequency domain where you just have a couple of deltas.

Time and frequency are two ways of representing the same thing. Sometimes it's easier to represent something in one domain, sometimes it's easier in the other domain.

It can be proven that anything with bounded support in the time domain will have unbounded support in the frequency domain, and vice versa. So compact stuff in one domain always spreads when represented in the other domain.

Beautifully, quantum mechanics tells us that position and momentum are conjugate variables (like time and frequency in the example above), and therefore if something has a bounded position (we know where it is) its momentum will be unbounded (we won't know its speed), and vice versa.

That's the main idea of Heisenberg's uncertainty principle.
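
Quantitatively: if \sigma_t and \sigma_\omega are the standard deviations of |f(t)|^2 and of |\hat{f}(\omega)|^2, then

    \sigma_t \, \sigma_\omega \;\ge\; \tfrac{1}{2},

with equality exactly for Gaussian pulses; substituting p = \hbar k turns this into Heisenberg's \Delta x \, \Delta p \ge \hbar / 2.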



I used to wonder if imaginary numbers were "real" (in the common sense, rather than the mathematical one). This only intensified after I learnt that they're required in quantum mechanics to explain any physical behaviour at all.

Now I consider them to be just as "real" as the integers - which is not at all. Both just human-invented concepts with no fundamental physical basis.

As you point out - useful, though!



I think "real" in the common sense is just too vague to really apply to things as abstract as numbers. 'Real' numbers seem real in a common sense because counting is a universal human experience. Adding in the imaginary plane allows for "counting" systems which have 2 interconnected components like waves, which also turn out to occur very often when you look at nature a bit more closely. Thus why you can derive all of trigonometry from Euler's formula. It just isn't a universal human experience to look at nature that closely yet.


I do not believe that it is possible to claim that integer numbers or imaginary numbers do not have a fundamental physical basis. The small integer numbers are not invented by humans, because many other animals can count up to some small number (like 5 or 6).

Both are abstractions. That means that they are properties of real physical objects, which are obtained by ignoring all the other properties of those physical objects that are irrelevant in the context of the application.

Therefore an abstract property is an equivalence class of physical objects, where all their other properties are ignored, so they are equivalent if they have the same value for the property of interest.

Non-negative integer numbers are equivalence classes of collections of physical objects, integer numbers are equivalence classes of pairs of such collections.

The imaginary unit is the equivalence class of all rotations by a right angle in the 2-dimensional space. Humans, like many animals, have an innate ability to recognize right angles, as well as certain small numbers, so looking around you can perceive imaginary units as easily as you perceive the number 3.

The complex numbers are the equivalence classes of all geometric transformations of the 2-dimensional space that can be decomposed in rotations and similarities (a subset of the affine transformations). In contrast, the 2-dimensional vectors are the equivalence classes of all translations of the 2-dimensional space (another subset of the affine transformations).

All the things that are equivalent from the point of view of an integer number or a complex number, so they are the basis from which such numbers are abstracted, are things that you can see with your own eyes in the physical world (similarity transformations appear in optical projections, e.g. in the shadows of physical objects, and the eyes are based on them).



Much earlier than quantum mechanics, they are extremely useful in electrical engineering. And even before that (historically) - their 4D generalization, quaternions, are extremely useful to describe 3D rotations.


> I learnt that they're required in quantum mechanics to explain any physical behaviour at all.

You can do QM without complex numbers as people are used to using them.

But it gets really awkward really fast.



sometimes i think that our existence in the phyiscal world subject to limitations of spacetime is the time domain and after death, we get out of the time domain into frequency domain aka the soul, the reason perhaps we cannot prove of the existence of ghosts or soul is because we are using instruments and techniques of the time domain to measure entities in the frequency domain, just a shower thought


I know I may get downvoted for this - and I know pre-emptively whining about downvotes is incredibly lame and deservedly frowned upon - and I know self-awareness of that fact doesn't make it any better - but: I'm sorry, this is just woo-woo. A replier linked Donald Hoffman, who also espouses a woo-woo theory about root reality being the realm of consciousness.

There're not only no empirical but no theoretical grounds to believe anything like this. The mind is almost certainly wholly defined by the physical processes of the brain, in the same spacetime realm all other known physical processes reside in.



I think you're digging into a philosophy question with the wrong shovel. You're not wrong, but OP is debating something that is fundamentally unmeasurable (as I understand them). Science can only help us understand things within spacetime. Anything beyond that is philosophy.

I believe this is basically just the dualism vs. materialism debate on consciousness. Consciousness is a fascinating topic. There's plenty of paradoxes or thought experiments to fry your brain on. It's not just about the electrochemical processes in the brain. It's about identity, the continuity thereof, etc.



Science can help us understand things within spacetime, and mathematics and logic within abstract but rigorously defined spaces. Anything beyond that is philosophy, aesthetics, politics, religion, ...


This is just more woo woo.

Just because something is unmeasurable doesn’t mean it cannot be proven wrong.

Just because something is derived from philosophy doesn’t mean it cannot be proven to be wrong in the real world.

> Science can only help us understand things within spacetime. Anything beyond that is philosophy.

Even if one takes this statement to be correct, it doesn’t imply that any specific philosophical idea about “beyond space time” is correct.

And frankly even the “philosophy” of an after life can easily be dispensed with. There’s absolutely no reason to suggest an after life, or a duality between the body and “identity” exists other than “we would like to believe so”.

It’s not just awful science but also bad philosophy.



Why would you be downvoted for speaking facts?

Also why should you restrain from pointing out weaknesses in other people's comments due to the fear of negative karma? Karma is meant to be burned.



I always found it so strange how the top comments on blogposts like this are trying to answer the ‘question’ without engaging with the article at all.

Do they and the people upvoting them not realize that it’s a blog post?



Sometimes the title itself generates a kind of excitement in your brain and you start answering it before you even read the blog post. And in this case, as others have noticed, it's not exactly answered in the way some of us would answer it.

I do know I've thought about this, and even had one of those "moments of realization" on the drive home from my first lecture on DCTs, where I thought I could transform all of human history from time domain to frequency domain and how this would bring out certain patterns and truths that could not be understood otherwise. I swear I wasn't on drugs! (Though the lecture was in fact given by Prof. Marshall of Rutgers, the father of the lyricist for the rock band Phish, at whose concerts I have imbibed certain substances... but I digress.) The frequency domain is just as "real" as any other mathematical construction that can help us understand everything.



For the math aficionados in this thread, I have a frequency domain related set of ideas I'd like to develop into a more rigorous mathematical theory. Basically, represent a curve as a Chebyshev series: T_1 represents a line, T_2 an arc, T_3 an Euler spiral, etc. Smooth curves have rapidly decreasing Chebyshev coefficients, and this whole thing is potentially a lot easier to work with than Fréchet distance, which is the usual error metric but very annoying.

This is conceptual and theoretical, but potentially has immediate application for computing a better offset of a cubic Bézier, used for stroke expansion.

If this sounds intriguing, a good starting point is the Zulip thread[1] I'm using to write down the ideas. I'd especially be interested in a collaborator who has the experience and motivation to coauthor a paper; I can supply the intuition and experimental approach, but the details of the math take me a long time to work out. (That said, I'm starting to wonder if engaging that slog myself might not actually be a good way to level up my math skills)

[1]: https://xi.zulipchat.com/#narrow/stream/260979-kurbo/topic/E...
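
As a toy illustration of the coefficient-decay claim (numpy.polynomial.chebyshev is standard; the curve and degree here are arbitrary):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Fit a smooth function on [-1, 1] with a Chebyshev series
    xs = np.cos(np.pi * (np.arange(200) + 0.5) / 200)   # Chebyshev nodes
    coeffs = C.chebfit(xs, np.exp(np.sin(3 * xs)), 30)

    # Smoothness shows up as rapid decay of the coefficient magnitudes
    print(np.abs(coeffs)[:12])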



Is it a place? Yes. A moving wave of some kind, propagating through space, has a certain wavelength, which is related to the frequency. The wavelength is a distance between wave crests, and those have a time-dependent location as the wave propagates. When the wave is periodic, the concept of frequency summarizes the state of a large number of the wave crests existing at equally-spaced locations.

We can have a standing wave, e.g. vibrating string. Frequencies then translate to concrete place in space, such as the nodes where the string appears to be still, (like the exact middle if it is excited with the second harmonic).



If you want to view paradise /

Simply look around and tune it. /

Anything you want to - tune it. /

Want to change the channel? nothing to it.

There is no SDR /

To compare with foreign number stations /

Casting there, you'll be free /

If you tune to ninety eight point three



Love this article but would like to dispute the author's notion of "real". In the post, he shows that the frequency domain is not special, in the sense that there are infinitely other equally valid representations.

But many places are real without being special - other than to those who make special use of them!

I'd argue a real place is one that affords the operators that allow us to inhabit and interact there -- stuff like object permanency, adjacency and distance. If things can be organized and sustained there, are they not real?

It's fun to imagine what kinds of structures can inhabit the frequency domain - or any other.



The scope of a domain is defined by its bounds. Expressed mathematically, the bounds can be written using any variable type that can enclose an area or a field of elements or sets. The range of the bounds can extend to infinity. An abstract unspecified domain may have no associated elements, sets or operators.


In music, it is common to think of the structures in frequency space rather than time space.

For example, one thinks of the C major triad as C E G, rather than the mixed sound that comes out of C E G played together.



By way of reductio, I think one could make the same argument for the non uniqueness of the time domain and presumably anything else admitting to an isomorphism.


While of course one can use whatever orthonormal basis one likes, and so in particular one can use the one shown in the article,

uh,

I’m not sure what statement that others might believe, the article is contradicting?

The Fourier transform has some nice properties that other bases don’t.

One way to view the Fourier transform, is as decomposing “the space of functions on [some domain], regarded as a representation of [the group of translations in that domain]”, into one-dimensional sub-representations. (Or, if you want to stay over the real numbers instead of complex numbers, then two dimensional representations)

I don’t know that there’s a nice analogy of this for the basis described in the article.

Maybe there is? Like, maybe if one uses some group with 2^n elements other than Z/((2^n)Z), maybe (Z/(2Z))^n, then maybe the basis you would get would be like the one shown?

Idk.

But I don’t see this as showing frequency space as less real.



Which brings us to the question of whether the complex numbers are real, despite them not being real numbers.

This question has a long and fascinating history.



This is one example of a whole family of orthogonal wavelet transforms that let you trade off between frequency and scale resolution.


This is disappointing.

The title prompted a science (or, maybe, philosophy of science) question, but the article gives only applications.

Maths don't have to be real to be useful.



The mundane answer to the philosophical question is that the frequency domain is not any less or more "real" than the time domain; they're different mathematical abstractions of change. Just like base-10 numbers are not any more or less "real" than base-2 or base-8. Cartesian coordinate locations are not any more or less "real" than polar coordinate locations. Some representations are just more convenient for specific calculations than others (mostly by simplifying specific calculations).


The trite response is "get better at maths". I honestly don't mean that facetiously - the whole understanding that underpins "neat tricks" and how to use them is "maths". Once you can use said neat tricks, you are better at maths.

So, I would suggest a better question is: where can I find an introduction to this stuff that is better aligned with my current understanding and knowledge. To that end, here's a great intro book: https://www.dspguide.com/

I read that cover to cover when I was 17 and I would not have been described as particularly mathematically gifted.



This article is actually good at that.

Don't look too much at the Greek letters, look at the tables and code instead. A DCT is just a bunch of multiplications, additions and a lookup table. The approach to derive the Walsh-Hadamard transform is actually very computer sciencey. It is made using a recursive algorithm, it then takes advantage of the fact that a+b==b+a to reorder the rows. There is even a trick that uses a bitwise AND.

As a programmer I find it much easier to start from here: tables, loops, and simple operations like multiplication and addition. And when I finally understand how a computer does the thing, maybe go back to the maths to see if I can get more insight.

In the end, the only appropriate answer is "get good at maths". The question is how to achieve that. And if you have a programmer mindset like I do (and I guess like many people here) and struggle with maths, I recommend trying the bottom up approach. Write the code, have it actually show visuals and play sounds, play with the parameters and see what changes and what doesn't change.

The other part is overcoming the language barrier, I am still struggling with that. Mathematicians use barbaric names like "eigenvectors" for "stuff that won't change", and they write an addition in a loop with a weird sigma thing and call it summation, or with a weird "S" thing and call it integration when there are a lot of iterations over very small numbers. But in the end, it is not that different from reading Perl :)
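
For example, here is the DCT-II written exactly that way, as two loops over a cosine lookup table, checked against scipy's implementation of the same transform:

    import numpy as np
    from scipy.fft import dct

    def dct2_naive(x):
        """DCT-II as plain loops: multiply, add, repeat."""
        N = len(x)
        # the lookup table: cos(pi * (n + 0.5) * k / N)
        table = [[np.cos(np.pi * (n + 0.5) * k / N) for n in range(N)]
                 for k in range(N)]
        out = []
        for k in range(N):             # for each output frequency...
            acc = 0.0
            for n in range(N):         # ...accumulate input * table entry
                acc += x[n] * table[k][n]
            out.append(2 * acc)
        return np.array(out)

    x = np.random.default_rng(3).standard_normal(8)
    assert np.allclose(dct2_naive(x), dct(x, type=2))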



Just play around with audio tools, so you can get to throw ideas at a thing and get results that you can hear?

The things you can hear scale pretty well all the way down to DC, and all the way up to daylight (or beyond, I suppose).

After spending a few (or dozens, or hundreds) of hours tinkering with audio-range frequencies for fun, maybe you'll have a real place with which to associate the maths that are involved in electronics and programming, and having that place might make that math (and whatever is needed to accomplish it) feel a lot more worthwhile to actually-learn.



> might make that math (and whatever is needed to accomplish it) feel a lot more worthwhile to actually-learn.

I agree. Start with small hobby projects using what you already know.

They should be things where you get immediate feedback, like graphics or sound. That makes them very fun to work and iterate on. It's addicting when you see/hear the output, make a change in a few seconds, and immediately see/hear the results of your change. That's the stuff that keeps you up all night at the computer.

You'll start hitting barriers where you need more math to get better results. Then you'll be much more motivated to start learning, and it will be easier to retain the knowledge when you actually use it in practice.

For instance, use Python to output audio samples. Start with simple sine wave tones, colored noise, and work your way up to implementing simple FIR and IIR filters to modify input audio like voice and music. Use Audacity to see the change between input and output as a spectrogram.

If you're into music, use an existing Python library to read MIDI files with songs you like. Generate audio output files for those songs. First with just sine waves for the notes, then you can start emulating digital and analog synths. Write code that takes sampled audio files and emulate different guitar sound effect pedals or tube amplifiers.
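
A starter along those lines, using only numpy and the standard library's wave module (all parameters are arbitrary):

    import numpy as np
    import wave

    sr, freq, secs = 44100, 440.0, 2.0
    t = np.arange(int(sr * secs)) / sr
    tone = (0.5 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)

    with wave.open("tone.wav", "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(sr)
        w.writeframes(tone.tobytes())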



The title is clickbait-y.

TLDR: the Fourier basis is just one choice among many; we can build other representations, for example out of square waves.

联系我们 contact @ memedata.com