In Russian, light blue is essentially "blue" and dark blue is "indigo". Russian still has seven colors in the rainbow; it's just that in English, colloquially, nobody uses indigo.
---

Yes, well that's what I mean. Culturally, Russians think and speak about colors differently, dividing them up differently than the West.

> Russian does not have a single word referring to the whole range of colors denoted by the English term "blue". Instead, it traditionally treats light blue (голубой, goluboy) as a separate color independent from plain or dark blue (синий, siniy), with all seven "basic" colors of the spectrum (red–orange–yellow–green–голубой/goluboy (sky blue, light azure, but does not equal cyan)–синий/siniy ("true" deep blue, like synthetic ultramarine)–violet), while in English the light blues like azure and cyan are considered mere shades of "blue" and not different colors.

> Blue: plava (indicates any blue) and modra; in the eastern speaking areas modra indicates dark blue, in some of the western areas it may indicate any blue, etc.

Both quotes are from https://en.wikipedia.org/wiki/Blue%E2%80%93green_distinction...

I am not deeply knowledgeable on Russian (I failed Russian in high school); I'm just going off my surface-level knowledge of linguistic relativity regarding color, and discussions with a friend from that part of the world, so I might not know what I'm talking about here.
---

For a single individual, all of the above is true, but for a large enough sample size the answers may be more generally useful, because you account for all of those rounding errors.
---

Fun, I got 174, and when I saw the results my reaction was "but that is not turquoise!", which I suppose means either I don't know what turquoise is, or my screen has bad calibration/gamut.
---

I actually disliked the conclusion, because it forced me to classify turquoise as either blue or green, when it's a mix more than anything. It lacks a "can't classify" option that would make it a better tool.
---

CMY and RYB are both valid primary color sets. RYB, being taught in grade school, has a lot of influence on how people perceive and name colors, which is what this conversation is about.
---

I classified cyan as green because, well, it's greener than pure blue, and it's also the greenest you can get relative to blue in RGB space without losing any blue :)
---

In the USA:

Primary additive colors: red, green, blue. Primary subtractive colors: cyan, magenta, yellow.

But before digital color displays became popular, the average person was exposed mostly to subtractive (paint) colors. US school children are taught from an early age that the primary subtractive colors are red, yellow, and blue, simply because those words are easier to pronounce, and so magenta is a weird "red" and cyan is a weird "blue", until the children discover on their own, or in specialized print/paint schools, that red and blue are not primary subtractive colors. Humans are terrible at naming things.

And to bring it back to the Current Thing: Google AI cites this source for its red/yellow/blue claim, even though this source explicitly says that Google gives the wrong answer: https://science.howstuffworks.com/primary-colors.htm#:~:text....

Will GenAI's aggressive ignorance kill sarcasm and nuance in writing? Or will people learn to ignore AI input like they ignore banner ads?
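As a quick illustration of the additive/subtractive relationship described above, here is a small Python sketch (my own, purely illustrative, not from any of the comments): each subtractive CMY primary is the complement of an additive RGB primary, which is why cyan is both green-plus-blue light and "minus red" ink.

```python
def mix_additive(*lights):
    """Additive mixing (light): channels add, clipped at 1.0."""
    return tuple(min(1.0, sum(c[i] for c in lights)) for i in range(3))

def cmy_from_rgb(rgb):
    """A subtractive CMY primary is the complement of an additive RGB primary."""
    return tuple(1.0 - c for c in rgb)

RED, GREEN, BLUE = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

# Cyan is what you get by adding green and blue light...
assert mix_additive(GREEN, BLUE) == (0.0, 1.0, 1.0)
# ...and, equivalently, the subtractive complement of red.
assert cmy_from_rgb(RED) == (0.0, 1.0, 1.0)    # cyan
assert cmy_from_rgb(GREEN) == (1.0, 0.0, 1.0)  # magenta
assert cmy_from_rgb(BLUE) == (1.0, 1.0, 0.0)   # yellow
```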
---

Blue and green are primary and secondary colors; cyan is not. The author decided to cut off the color list at secondary colors. There is nothing wrong with that.
---

"The author decided..."

'The author decided' is not physics. Suggest you look at the Wikipedia page under 'Wavelength': https://en.m.wikipedia.org/wiki/Color_vision

Green: 500–590 nm; cyan: 485–500 nm; blue: 450–485 nm.

Color vision theory is far too complicated to discuss here, and I'm not going to debate cyan as a mixed color of blue and green wavelengths versus a fixed wavelength that sits in between the two. What the author provided was, at best, misleading, but nonsense as far as science is concerned. If the author had said he was an artist and presented the colors as a preferential list, it would have been a different matter.

BTW, I don't mind being voted down (it happens to me regularly), but here those who did are only showing their ignorance. I'd add that the author, who posted here, ought to explain his reasoning in much more detail.
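The wavelength bands quoted above can be turned into a toy lookup table; this is just an illustrative sketch of the point being made (the band edges are the approximate figures cited, and real perceptual category boundaries are fuzzy, not hard cut-offs):

```python
# Approximate wavelength bands for the blue-green region, in nanometers.
BANDS = [
    ("blue", 450, 485),
    ("cyan", 485, 500),
    ("green", 500, 590),
]

def name_for_wavelength(nm):
    """Return the color name whose band contains the given wavelength."""
    for name, lo, hi in BANDS:
        if lo <= nm < hi:
            return name
    return "outside blue-green range"

assert name_for_wavelength(470) == "blue"
assert name_for_wavelength(490) == "cyan"   # cyan gets its own band here
assert name_for_wavelength(530) == "green"
```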
---

Also: deliberate software blue-light filters. Mine is always on, both on the desktop and on the phone. Many people may forget that they are even using one.
---

I mean, it really just tests arbitrary word usage. I have no fucking clue whether turquoise is supposed to be "green" or "blue"; it's turquoise!
---

I did it on an IPS laptop display and got 175. On my OLED phone I got 179. I agree more with the phone results, but the turquoise on the phone looked even greener to me.
---

I'd prefer blue/green/neither. With the third colour, I just thought "no, that's teal", and my decision was (as you suggested) semi-arbitrary.
---

Then, by that framing, the test is asking you to decide which hue value is the "zero" between positive/negative blue/green. Is the wording imperfect? Sure, but the intent was still entirely clear.
---

I think the main point of this test was to determine the position of teal in your case, as your definition of teal is the midpoint(-ish range) between blue and green. (For me it's more blue, though.)
---

Why do you think that would be the case? One person's 'blue' activity could be different from another's while still corresponding to the same wavelength of light and the same general perception.
---

No real need for the snark; if we dismiss the notion of human divinity and look at ourselves as broadly fixed macro-structure computational machines (like any other broadly deterministic machine), similar signals propagating over the same sets of sub-computers will generally (accepting the undetectable, such as steganographically hidden homomorphic compute contexts) be reflective of similar underlying operations.

If I were to imagine a warrior, and his general perception of the colour red, I might find that his brain processes the colour more like a rival warrior's than like that of his wife, the gardener.

A real-world example: London taxi drivers and bus drivers show distinct patterns of changes to the hippocampus. https://pubmed.ncbi.nlm.nih.gov/17024677/

The way that the mapping data is stored will be heavily biased towards being spatially reflective of its real-world counterpart. Note the bias will be towards a degree of structural isomorphism: one internal 2D + 1T spatiotemporal surface map of the city might be a rotation and/or reprioritisation of another, but they will have a shared basis (convergent compute simulations of biased subsets of the same real-world structures), and when navigating from point A to point B, the path and nature (though not the propagation vector) of the electrical activity of both will be a reflection of the same real-world surface map. Now, I say spatiotemporal because the driver going from A to B in the morning will develop different expectations of the levels of traffic at different parts of the journey.
---

If that were the case, then there's no way they'd be able to extract images from people's neural activity, and yet they've started doing that very thing.
---

Agreed. It would be more accurate to show the final gradient (without the curve) and let people choose where the boundary is. It wasn't even clear what the actual task was.
---

Author here. I added fields so you can specify your first language (relevant link: https://en.wikipedia.org/wiki/Blue%E2%80%93green_distinction...) and colorblindness.

FAQ:

* I can't know your monitor's calibration, your ambient light, or your phone's brightness. Obviously, this will affect the results. However, I am tracking local time of day and device type, from which we should be able to infer whether night mode and default calibration have any aggregate effects. Anecdotally, thus far, I haven't found any effects of Android vs. iPhone (N=34,000).
* The order is randomized. Where you start from can influence the outcome, but methodologically it's better to randomize so the aggregate results average over starting points. You can run the test several times to see how reliable it is for you.
* It's common practice in psychophysics to use two alternatives rather than three (e.g. blue, green, something in the middle). A third option would be a fun extension, which you could handle with an ordered logistic regression. The code is open if you want to take a shot at it: https://github.com/patrickmineault/ismyblue
* I will release aggregate results on my blog, https://neuroai.science
* I am aware of most of the limitations of this test. I have run psychophysics experiments in a lab on calibrated CRTs during my PhD in visual neuroscience. *This is just entertainment*. I did this project to see if I could make a fun webapp in Vue.js using Claude Sonnet, and later Cursor, given that I am not highly proficient in modern webdev. A secondary goal was to engage people in vision science and get them to talk and think about perception and language. I think it worked!
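To give a feel for how a blue-green threshold can be estimated from two-alternative responses, here is my own illustrative simulation (not the app's code; the observer model, threshold, and noise values are made up): simulate a noisy observer answering "green" or "blue" at each hue, then find where the proportion of "green" answers crosses 50%.

```python
import random

random.seed(0)

TRUE_THRESHOLD = 175.0  # hue (degrees) where this simulated observer flips
NOISE = 6.0             # spread of trial-to-trial perceptual variability

def respond_green(hue):
    """Simulated observer: answers 'green' if the noisy hue falls below threshold."""
    return hue + random.gauss(0, NOISE) < TRUE_THRESHOLD

hues = [150 + i for i in range(51)]  # test hues from 150 (greenish) to 200 (bluish)
p_green = []
for h in hues:
    trials = [respond_green(h) for _ in range(200)]
    p_green.append(sum(trials) / len(trials))

# Crude threshold estimate: first hue where p(green) drops below 0.5.
estimate = next(h for h, p in zip(hues, p_green) if p < 0.5)
print("estimated threshold near hue", estimate)
```

With enough trials per hue, the estimate lands near the observer's true flip point; a real analysis would fit a full psychometric curve (e.g. a logistic) instead of reading off the first sub-50% point.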
---

When done on my Xperia cell phone, even a small shift in screen orientation turned the green-leaning colors obviously blue. It might be worthwhile to capture phone orientation if you can.
---

That's a lot! Now I noticed: "I am tracking local time of day [...] infer whether night mode [...] any aggregate effects." So you've thought about that already :-) (it's evening here)
---

I would guess the Hacker News crowd has a higher percentage of blue-filter installs, since that is a very common topic here. Probably also more aggressive settings for the blue filter.
---

You can estimate that if you can determine at which point the color becomes too ambiguous to call blue on one side, or green on the other. Different people will have a different range. If you want to identify a threshold, you can take the midpoint of the range.

Either of these approaches may be bad. The third paragraph of this page explains why: https://en.wikipedia.org/wiki/Two-alternative_forced_choice

My suggested approach might not be much better, though; it still relies on presenting a single stimulus. It's not clear how the two-alternative forced choice can be used to find someone's blue-green threshold.

I think a better experiment would be to show the user gradients and ask them to move a bar to where they think the midpoint of the blue-green transition is. Subsequent gradients center on the user's previously identified midpoint, but zoom in more.

There is also this question: along which path do we interpolate from blue to green? Let's imagine the CIELAB color space. Say that our pure green lies on the red-green axis, all the way at the green end, and blue lies at the extreme of blue-yellow. Do we interpolate between these linearly, or what? And using what luminance value? I suspect that for every given, fixed luminance value, the blue-green boundary is a contour: there are many paths we can take between blue and green, and along each path there is a boundary point. If we join those points, we get the contour. Then, if we do that for different luminance values, the contour becomes a 3D surface in the color space.
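The zooming-gradient procedure sketched above can be written down in a few lines; this is my own illustrative code, not anything from the actual app, and `pick_midpoint` is a stand-in that simulates the user clicking their perceived boundary on the displayed gradient:

```python
def pick_midpoint(lo, hi, user_boundary):
    """Stand-in for the user clicking their perceived blue-green boundary;
    a real UI would return the clicked position on the shown gradient."""
    # The user can only click inside the currently displayed hue range.
    return min(max(user_boundary, lo), hi)

def find_boundary(user_boundary, lo=120.0, hi=240.0, zoom=0.5, steps=8):
    """Repeatedly show a hue gradient [lo, hi], re-center on the user's
    pick, and shrink the range by `zoom` each step."""
    for _ in range(steps):
        mid = pick_midpoint(lo, hi, user_boundary)
        half = (hi - lo) * zoom / 2
        lo, hi = mid - half, mid + half
    return (lo + hi) / 2

# The procedure converges on the simulated user's true boundary.
assert abs(find_boundary(178.0) - 178.0) < 1e-6
```

In practice each pick would be noisy, so you would average the last few midpoints rather than trust a single converged value.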
---

Yes! I've had some lengthy discussions with UI designers trying to get them to understand this exact point. I can see that they're red and green; I just don't notice that they're red and green.
---

I got 174 ('true neutral') by choosing 'blue' or 'not blue'. The 'green' here looks to me like a light yellowy-orange: the color I have learned to associate with unripe bananas.
---

In the UK, the yellow light is officially an "amber" light in terms of driver regulations and statutes, such that some anally retentive type is always bound to correct anyone who dares say "yellow".

But another problem is with displaying the colors essentially full-window, which will be nearly full-screen for many users. When we stare at a screen with a particular tint, our eyes quickly do an "auto white balance" that skews the results. It's the mechanism behind a bunch of optical illusions.

To address that last problem, I think the color display area should be much smaller, or you should be shown all hues at once and asked to position a cut-off point.