(comments)

Original link: https://news.ycombinator.com/item?id=41430258

Across cultures, the words for "blue" and "green" are either combined or treated separately depending on cultural and linguistic factors. For example, in Japanese both colors were historically referred to as "ao", while in Russian light blue and dark blue are considered different colors. In Western cultures, light blues are usually labeled as mere shades of blue, whereas in Russian culture they are treated as distinct colors. This suggests that our understanding of color is significantly shaped by linguistic and cultural factors. As an illustrative example, the author discusses personal preferences in distinguishing blue from green, suggesting that colors such as cyan, which sit between the traditional definitions of blue and green, may come to be recognized as distinct colors by future generations. In short, the perception of color is determined not only by physical characteristics but also by history and culture.


Original text


I suspect it tests your monitor and monitor calibration as much as your color perception. In particular, sRGB displays have a pretty severely limited green gamut. If you have a wide-gamut display, the test is probably gonna appear different.

But another problem is with displaying the colors essentially full-window, which is going to be nearly-full-screen for many users. When we're staring at a screen with a particular tint, our eyes quickly do "auto white balance" that skews the results. It's the mechanism behind a bunch of optical illusions.

To address that last problem, I think the color display area should be much smaller, or you should be shown all hues at once and asked to position a cut-off point.



Author here, yes, it tests a mix of your monitor calibration and colour naming. The two types of inferences you can make with this are:

1. If two people take the test with the same device, in the same lighting (e.g. in the same room), their relative thresholds should be fairly stable.
2. If you average over large populations, you can estimate population thresholds, marginalizing over monitor calibrations.

The most interesting thing for me is that while cyan (#00ffff) is nominally halfway between blue and green, most people's thresholds, averaged over monitor calibrations, imply that cyan is classified as blue. I was not expecting that the median threshold (hue 174) would be so deep into the greens.
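
For anyone who wants to check the arithmetic, here is a minimal sketch (Python, standard-library colorsys only), under the assumption that the test's hue numbers are ordinary HSL hue degrees with green at 120, cyan at 180 and blue at 240:

    import colorsys

    def hsl_hue_degrees(hex_color: str) -> float:
        """Return the HSL hue (0-360 degrees) of an RGB hex color like '#00ffff'."""
        r, g, b = (int(hex_color.lstrip('#')[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
        h, _lightness, _saturation = colorsys.rgb_to_hls(r, g, b)  # colorsys orders it H, L, S
        return h * 360.0

    print(hsl_hue_degrees('#00ffff'))  # 180.0, the nominal halfway point
    # A median threshold of ~174 means hues between 174 and 180 (slightly
    # greener than cyan) are still called "blue" by the median respondent.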



I got hue 175. It's interesting to note that some older cultures, Japan for example, didn't always have separate words for blue and green, both were the same color ("ao" in Japanese). You can see the effects of this even today with things like traffic lights in Japan, which are considered "green" by their standards but blue by many others' standards.

There are also other cultures, such as Russia, where light blue / dark blue (simplification) are effectively considered separate colors.

All this to say, personally, I think we will continue to evolve to recognize more distinct "colors" such as teal, which is neither blue nor green but somewhere between. A lot of this recognition power is rooted in linguistics and culture, it's not as strictly biological as one might think.



In Russian light blue is “blue” and dark blue is “indigo” essentially. It still has seven colors in the rainbow. It’s just that in English colloquially nobody uses indigo.



Yes, well that's what I mean. Culturally, Russians think and speak about colors differently, dividing them up differently than the West.

> Russian does not have a single word referring to the whole range of colors denoted by the English term "blue". Instead, it traditionally treats light blue (голубой, goluboy) as a separate color independent from plain or dark blue (синий, siniy), with all seven "basic" colors of the spectrum (red–orange–yellow–green–голубой/goluboy (sky blue, light azure, but does not equal cyan)–синий/siniy ("true" deep blue, like synthetic ultramarine)–violet) while in English the light blues like azure and cyan are considered mere shades of "blue" and not different colors.

> Blue: plava (indicates any blue) and modra; in the eastern speaking areas modra indicates dark blue, in some of the western areas it may indicate any blue

etc. from https://en.wikipedia.org/wiki/Blue%E2%80%93green_distinction...

I am not deeply knowledgeable on Russian, I failed Russian in high school, just going off of my surface-level knowledge of linguistic relativity regarding color, and discussions with a friend from that part of the world, so I might not know what I'm talking about here.



The color name question here doesn't have a clear answer because most of the respondents would call this "teal", "blue–green", "turquoise", "cyan", "aqua", or some similar name. You'd get somewhat similar results asking whether an orange (the fruit) is really "red" or "yellow", or whether an eggplant is really "blue" or "red".

An individual person's answers on this kind of question are likely to vary from day to day, are context dependent (i.e. whether one object or another appears more "green" or "blue" depends on what kind of object it is), and colors this intense are very sensitive to changes in eye adaptation and technical details of the display and software, as well as inter-observer metamerism.

So in addition to the color naming difficulties, it's not even a very good test of color naming, if you want to get reliable psychometric/linguistic data.



For a single individual, all of the above is true, but for a large enough sample size, the answers may be more generally useful because you account for all of those rounding errors.



No, because if my case holds more generally (and I suspect it does), the answers are given in part out of sheer frustration, and are therefore prone to being similar to the last one given.

I am not afraid to say this is poorly designed.



I didn't exactly rage quit but did think it was silly.

I wouldn't describe teal as blue or green any more than I'd describe purple as red or blue, so being forced to pick felt silly. Like being forced to choose my seventh favorite Norwegian glacier - technically it's a valid question but my answer is necessarily going to be arbitrary.



That’s like asking which way a Necker cube is oriented. It’s both and neither. For blue and green, there’s a range of shades for which that ambiguity is true and you can “flip” it in your mind.

I would actually find it more practical to determine the thresholds on both sides where I find it to become ambiguous.



Not as far as I can tell. The phrasing of the question does not acknowledge such ambiguity to start with, and by forcing users to answer one way or the other the test does not allow them to signal perceived ambiguity even if they wanted to.

So how could the point of this exercise possibly be to find the range of ambiguity?



Fun, I got 174 and when I saw the results my reaction was "but that is not turquoise!" which I suppose means I either don't know what turquoise is, or my screen has bad calibration/gamut.



I actually disliked the conclusion, because it forced me to classify turquoise as either blue or green. When it's a mix more than anything.

It lacks the "can't classify" to make it a better tool.



That wasn't clearly part of the test. To be ultra-pedantic (this is HN after all), the user's choices don't say "This is more-blue-than-green" and "This is more-green-than-blue". The choices are only "This is green" and "This is blue" forcing you to just pick one, where there is no clearly correct choice. When the color on the screen is neither green nor blue, many people will just pick a random answer.

I bet if the choices actually said "This is more green than blue" the results would be different.



> When the color on the screen is neither green nor blue, many people will just pick a random answer.

Or people will naturally intuit that they should choose whichever answer they think is closer to true.



On such a random internet doodad most users will pick a random answer, period. To see what this thingy tries to do without wasting any time on it. I hope it doesn't try to gather any meaningful data.

Personally I "tried" to answer truthfully at first and then went absolutely "ok f u, don't care no more" when it showed turquoise :D



It's different when you show something to someone with intent. Of course they will pay attention. Especially your family, come on.

I'm talking about random day to day browsing when you stumble on something random on the internet.



According to conversion rates and engagement metrics of most apps I've seen (not even mentioning social media where 2-3% engagement is the norm) most users are ¯\_(ツ)_/¯. Unless said app is a work/hobby tool, but that shouldn't be really called engagement.



> most users will pick a random answer period.

Taking how you behave, extrapolating that to everyone (and furthermore being unable to accept that other people might behave differently), is not a winning strategy for life.



Turquoise doesn't feel either more-green-than-blue or more-blue-than-green. It feels neither blue nor green, and I don't see any way to compare it to either.

It's clearly more turquoise than blue. Or green.

Turquoise on a computer monitor is always missing part of itself, so maybe I should've answered based on that, but I don't think the computer monitor was the point.



180 and blue, and I suspect that language also plays a part (I was brought up in an environment where the word for turquoise starts with "green", but now live in a turquoise-producing state where the finished product looks far bluer).



It looks like my default is that if there is 40% green in it, it is green. Thus it told me that turquoise, for me, is green. And if I look at turquoise the RGB color, that is green. If I look at turquoise the mineral, about half the time it is green and half the time blue.



Logically, a color, green etc., is a 'simple' notion and cannot be explained in terms of anything simpler. With color we have to revert to a different description, here wavelength. But wavelength is not human perception (and we can't explain such perception in simpler terms).



Is a burrito a sandwich?

(Yes in New York and Indiana, no in Massachusetts, and the law is silent elsewhere. Personally I believe that because the torta exists, the burrito may have some characteristics of a sandwich but should be considered a wrap)



I'd love a last step in the test where you're presented with the gradient, but before showing the distribution and the user's score. Allow the user to select where they consider their threshold, then display the final results.



I really wanted to be able to drag my vertical bar on the distribution to the right just a bit. :)

When I could see the entire gradient, I actually thought green continued to the right a bit more than where my line was.



A sorting interface would be another neat step! And yeah, I think most would gravitate toward the middle. Seeing how "far off" you are would be fun :)

Ooh maybe have the user slide a gradient left and right inside a window, aligning the center of the window with where they think the line is between blue and green (i.e., instruct the user to fill the window with equal amounts of green and blue).



It tells me to rotate my device, implying it should work on my phone, but I can't figure out how to move the colors. Holding and sliding doesn't work. Tapping doesn't seem to do anything.

Does it not actually work on mobile?



> The most interesting thing for me is that while cyan (#00ffff) is nominally halfway between blue and green, most people's thresholds, averaged over monitor calibrations, imply that cyan is classified as blue.

Perceptually (that is, in CIE-LCh color space, for example), the hue component of #00ffff is a lot closer to #00ff00 than it is to #0000ff. But the website doesn't ask which color is closer, it asks if it's "green" or "blue". And how we use those words has more to do with culture than with perception. We also call the color of a clear afternoon sky "blue", even though that is perceptually extremely far away from #0000ff.
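
For the curious, the LCh hue angles can be computed directly. A self-contained sketch (Python, standard math only, assuming sRGB with a D65 white point) gives roughly 196° for #00ffff, 136° for #00ff00 and 306° for #0000ff, so the cyan hue is indeed much closer to the green primary than to the blue one:

    import math

    def srgb_to_lch_hue(hex_color: str) -> float:
        """Return the CIE-LCh hue angle (degrees) of an sRGB hex color (D65 white)."""
        # hex -> linear RGB
        r, g, b = (int(hex_color.lstrip('#')[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
        lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4 for c in (r, g, b)]
        # linear RGB -> XYZ (sRGB matrix)
        x = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2]
        y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
        z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2]
        # XYZ -> Lab (D65 reference white), then hue = atan2(b*, a*)
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        a_star, b_star = 500 * (fx - fy), 200 * (fy - fz)
        return math.degrees(math.atan2(b_star, a_star)) % 360

    for c in ('#00ffff', '#00ff00', '#0000ff'):
        print(c, round(srgb_to_lch_hue(c)))  # ~196, ~136, ~306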



> while cyan (#00ffff) is nominally halfway between blue and green, most people's thresholds, averaged over monitor calibrations, imply that cyan is classified as blue

Yes, because (at least for me) the thought went "well that's cyan, it's not really blue but if forced to pick, cyan is more like blue so I'll click that". It's like rounding up at 0.5.



>For me, if forced to pick between two choices that were not correct, I'd just pick one randomly. I think this is a wording problem more than anything.

That's what I'd do if I were being paid to take the survey. Instead I just closed the window as soon as it popped up cyan and only gave me blue and green as options.



It is interesting to test people on just one device.

I used my phone on a mount, and completed the test with my wife, my children and myself. I was interested (though not surprised) at what an outlier I was, as I am colour blind in various combinations. And though my wife scored 'bang in the middle', it was interesting that that wasn't common.

My kids were both to the left of the scale fwiw - I was further right than 98% of people.



CMY and RYB are both valid primary color sets.

RYB, being taught in grade school, has a lot of influence on how people perceive and name colors, which is what this conversation is about.



I mean, I was taught in grade school that George Washington cut down a cherry tree and then said he couldn't tell a lie. That didn't make it true.

I would hope that here on HN, people are aware of RGB primaries, and then maybe CMYK. Saying that cyan is "not primary or secondary" is just wrong. Even Wikipedia explains in the first paragraph that the RYB model has a "lack of scientific basis":

https://en.wikipedia.org/wiki/Primary_color



By the way, "cyan" is a very poor name to use for #00ffff. The term "cyan" refers to the kind of slightly greenish blue used in 4-color printing (CMYK), and was just a Greek word for "blue" chosen to be a jargon word to avoid confusion with the English color name. It has a totally different color than the equal mixture of typical G and B primaries in a computer display.

Similarly, "magenta" is a poor name to use for #ff00ff. The term "magenta" is a jargon word for the slightly purplish printer's red, which was chosen to avoid confusion with the English word "red". It has a completely different than the equal mix of RGB R and B primaries.

("Red", "green", and "blue" are also very poor names for the RGB primaries, which are substantially orangish red, yellowish green, and purplish blue.)



I checked in at hue 174, the median, which is interesting to me as I know that my wife will test to a very different hue as we have occasional disagreements on whether something is 'blue' or 'green' :)



> I was not expecting that the median threshold (hue 174) would be so deep into the greens.

You're not asking the gender of the test taker. Your results will be skewed because you're probably getting more men than women. Women in general have more ability to detect green vs blue.



Even more fundamentally, red-green colorblindness is a recessive trait on the X chromosome, thereby affecting biological males in far greater number than females.

It could be a high enough percentage to make the results from this site noticeably different between the sexes.



Not that surprising. To most people, pure RGB-blue looks a bit violet. People are used to ink (subtractive) blue more than light (additive) blue. People call the sky blue and water blue; both are closer to cyan. Most people think of a neutral blue as something like #0080ff.



I classified cyan as green because, well, it's greener than pure blue, and it's also the greenest you can get, in RGB space, without losing any blue :)



In USA:

Primary Additive Colors: Red, Green, Blue

Primary Subtractive Colors: Cyan, Magenta, Yellow

But, before digital color displays became popular, the average person had, by far, mostly exposure to subtractive (paint) colors.

US school children are taught from birth that the primary subtractive colors are red, yellow, and blue, simply because those words are easier to pronounce, and so magenta is a weird "red" and cyan is a weird "blue", until the children discover on their own, or in specialized print/paint schools, that red and blue are not primary subtractive colors.

Humans are terrible at naming things.

And to bring it back to the Current Thing: Google AI cites this source for its red/yellow/blue claim, even though this source explicitly says that Google gives the wrong answer.

https://science.howstuffworks.com/primary-colors.htm#:~:text....

Will GenAI's aggressive ignorance kill sarcasm and nuance in writing? Or will people learn to ignore AI input like they ignore banner ads?



>most people's thresholds, averaged over monitor calibrations, imply that cyan is classified as blue.

I think that's just due to your test forcing people to pick either blue or green even though cyan is both; they are just going to pick blue because it's the first option and more likely to be picked randomly.



OP, have you considered doing a version of this to test contemporary Greek native speakers vs. others (a "control" group) for differentiation of blues?

I remember reading that modern Greek has two color-names for sky- and dark- blue (not sure what the prototypes are for each nor if they have hue components, maybe the "sky" blue is green-shifted?)... always been fascinated by the discussion of "weak Sapir-Whorf" around this and would be quite interested to see if there are any differences in discrimination...

The classic cognitive/perceptual psych data to gather would be time-to-discriminate, with the prediction being that Greek speakers make faster judgements than others because they have higher/faster discrimination.

Not sure how you'd pose the question to non-Greek speakers tho :)



> 2. If you average over large populations, you can estimate population thresholds, marginalizing over monitor calibrations.

This might be one case where it might make sense to cluster between the reported operating system. At the moment I only have a family of Macs to test, but I can imagine that Windows users with their different default gamma get back different results.



This test is useless or of very limited value.

I kept pressing green until the end because you had no 'cyan' button to press when clearly many colors were actually cyan. Cyan is not blue.

Incidentally, my color vision is perfect on all Ishihara tests.



Blue and Green are primary and secondary colors.

Cyan is not. The author decided to cut off the colors list at secondary colors. There is nothing wrong with that.



"The author decided..."

'The author decided' is not physics. Suggest you look at the Wiki page under 'Wavelength': https://en.m.wikipedia.org/wiki/Color_vision

Green: 500 - 590nm, Cyan: 485 - 500nm, Blue: 450 - 485nm.

Color vision theory is far too complicated to discuss here, and I'm not going to debate cyan as a mixed color of blue and green wavelengths versus a fixed wavelength that's in between both of them.

What the author provided was, at best, misleading, and nonsense as far as science is concerned.

If the author said he was an artist and presented colors as a preferential list it would have been a different matter.

BTW, I don't mind being voted down (it happens to me regularly), but here those who did are only showing their ignorance. I'd add that the author—who commented here—ought to explain his choices in much more detail.



Not to be mean, but I think every assertion in your comment is wrong.

Blue and Green are English words which sometimes describe primary or secondary additive colors. Cyan is (an English word that describes) a primary subtractive color.

Colors are not English words. They're physical reactions inside our eye-brain systems, affected by varying wavelengths of light. (Actually that's not the most accurate description of color either, but it's a more useful model.)



Ambient light will also affect the result.

Not necessarily because the ambient light would affect what the screen shows (it's emissive, not reflective), but because the brain also does "auto white/colour balance".

For a fun experiment, get your hands on some heavily yellow-tinted party glasses and go outside on a clear day with a bright blue sky.

When you put them on, everything will be starkly yellow-tinged (and the blue sky will be completely off, like green or pink, can't recall which), but after a little while going about your business, perception adjusts and only a much less dramatic yellowish veil is in effect. You'd look at the sky and see almost-blue.

The kicker is when you remove the glasses: the sky will suddenly be a glorious pink! (or green, can't recall) Only moments later it'll adjust back to blue.

A certain wavelength may be absolute blue of a certain kind, but the perceptual system is all relative: "wait, I know this sky should be blue because that's what I've always seen, so let's compensate".

The same kind of effect - although less dramatic - can be achieved with lights that can be adjusted from say 2400K to 6500K and having as reference an object that is known to be "pure white", like an A4/letter sheet of paper.

This effect, in turn, adjusts how "absolutely displayed" colours are identified by way of biasing the whole perceptive system. AIUI that's the rationale behind Apple's True Tone thingy, aiming to compensate for that.

So the result of this test should be somewhat different depending on ambient lighting temperature.



During some heavy dust clouds from nearby wildfires, the sky was a deep and unsettling yellow. However, I couldn’t get a picture of it, because the automatic color balance removed the yellow overcast altogether.



The same problem occurs with photographing the yellow sky when dust from a Sahara sandstorm (presumably coming across the strait of Gibraltar) blows over Europe every few years. But you can set the white balance manually in the camera.



> AIUI that's the rationale behind Apple's True Tone thingy, aiming to compensate for that.

No idea what "AUIU" is, but yes, generally displays should do automatic white balance like iPhones do. I don't know why most Android phones don't seem to do it (pretty sure mine doesn't), and generally TVs/monitors also don't do it. (The required color temperature sensor can't be that expensive?)



> I don't know why most Android phones don't seem to do it (pretty sure mine doesn't), and generally TVs/monitors also don't do it.

The rageguy answer would be either patents or "whoa the colors really pop I want that shut up here's my $$$" uncancellable LOOKATMEIAMTHESHINY mall mode, but via Occam's razor I think it's mostly because they (the manufacturers) simply don't care (about consumers, or about making a good product at all).

TVs/monitors (or laptops even, and more phones than you'd believe) with just a simple auto-brightness are stupendously rare even though Apple has done it since forever and a half ago.



Yeah, laptops and TVs not even doing automatic brightness is even more absurd. Though Android phones have had automatic brightness since forever, so why do many not have automatic color temperature (white balance)? The color temperature sensor can't be much more expensive than a brightness sensor. It's logically just an RGB brightness sensor.

Android does have a night mode which changes the white balance of the screen at sunset and sunrise, but this is just a binary thing and doesn't respond to actual ambient light.
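
As a rough illustration of the "it's just an RGB brightness sensor" point above (an assumption-laden sketch, not how any particular phone does it): given roughly linear R, G, B readings, you can convert to CIE xy chromaticity with the sRGB matrix and apply McCamy's approximation to get a correlated colour temperature in kelvin:

    def cct_from_linear_rgb(r: float, g: float, b: float) -> float:
        """Estimate correlated colour temperature (K) from linear RGB readings,
        treating them as sRGB-linear and using McCamy's approximation."""
        big_x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        big_y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        big_z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        x = big_x / (big_x + big_y + big_z)   # chromaticity coordinates
        y = big_y / (big_x + big_y + big_z)
        n = (x - 0.3320) / (y - 0.1858)
        return -449 * n ** 3 + 3525 * n ** 2 - 6823.3 * n + 5520.33

    print(round(cct_from_linear_rgb(1.0, 1.0, 1.0)))  # ~6500 K: equal readings correspond to the sRGB/D65 white point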



At least I know that cartoon. But generally people strongly overestimate how many people know various abbreviations. For years I didn't care to look up what "IANAL" means. I since have forgotten it again.



> Ambient light will also affect the result.

Also deliberate software blue light filters. Mine is always on, both on the desktop and on the phone. Many people may forget that they are even using one.


This is pretty much the same way that a calibrator works (if you have ever watched a color calibrator running, you know what I mean), but a calibrator doesn't get biased, like the human eye.

In order for it to be a true "neutral" test, each test would need to be preceded by a "palate-cleanser" gray screen, or something, and there would probably need to be a neutral border.

> you should be shown all hues at once and asked to position a cut-off point.

This is actually the way I have seen this stuff tested, before.



I tried it twice, once on each of my two different monitors (a Dell S2817Q and Dell S2409W) made a few years apart and with completely different settings; and I got 175 on one and 174 on the other. So pretty close even given the difference.



I mean, it really just tests arbitrary word usage. I have no fucking clue if turquoise is supposed to be "green" or "blue", it's turquoise!



Parent was a joke about the Costco fixed price hotdog.

UK Costco hotdogs are £1.50, which is not equal to $1.50, reflecting both its arbitrary nature and that UK purchasing power is weaker than the exchange rate would appear. (Computer books are a frequent offender here of having the same $ and £ prices)



> To address that last problem, I think the color display area should be much smaller, or you should be shown all hues at once and asked to position a cut-off point.

If you're doing this on a phone, try holding your phone at arm's length and against a white background (such as the wall or ceiling) and doing the test that way. Assuming you have redshift/night mode disabled, I suspect you'll end up closer to the median.



I did it on an IPS laptop display and got 175. On my OLED phone I got 179. I am more in agreement with the phone results, but the turquoise on the phone looked even greener to me.



I only realized after seeing your comment. As usual, when I turned it off to compare, the hue it shifted to looked super unnatural and I had to re-enable it.

I always forget how much white-balancing my vision does.



These sorts of tests also need to be done in controlled background lighting. Whether people are doing this in a dark room, in a sunny kitchen, or under green led lighting would be a greater factor than anything being tested.



>> These sorts of tests also need to be done in controlled background lighting. Whether people are doing this in a dark room, in a sunny kitchen, or under green led lighting would be a greater factor than anything being tested.

Whether it's a dark room or a sunny kitchen, I'm not sure turquoise is ever going to be blue or green. The entire question seems more like wordplay.



I don't think that's necessary for an informal test. Human color perception is extremely good at compensating for that, and modern screens are relatively uniform besides. Cultural differences like the person downthread saying they consider anything with the slightest hint of green to be "green" seem far more impactful.



I think this is flawed. You quickly end up on a color that's clearly not "blue" or "green" and you're unlikely to keep hitting "this is green" several times in a row, conceding that ok, fine, maybe this is blue, whatever. You're basically measuring how many times people are willing to click the same button in a row.

Edit: Possible improvements: changing the wording to "this is MORE green" and "this is MORE blue" and randomizing the order in which they are shown, somehow. I realize you're just doing some kind of binary search, narrowing the color range.

This is not to mention color calibration of your monitor, or your eyes adjusting / fatiguing to the bold color over time...



The order is randomized. Hit reset and you'll get a different sequence. The sequence is also adaptive (not a binary search---it's hitting specific points of the tail of a sigmoid in a logistic regression it's building as you go along). Try it a few times and you'll see how reproducible it is for you.

It of course depends on the calibration of your monitor. One of the reasons I did this project is I wanted to see if there were systematic differences in color names and balance in the wild, for example, by device type (desktop vs. Android vs. iPhone), time of day (night mode), country (Sapir-Whorf), etc.
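
To make the "tail of a sigmoid" idea concrete, here is a rough sketch of such an adaptive procedure (not the actual code from the repo; Python with numpy, a crude grid-search fit standing in for a proper regression solver, and made-up parameter ranges). It refits p(blue | hue) after every answer and places the next stimulus at chosen quantiles of the fitted curve:

    import numpy as np

    def fit_sigmoid(hues, said_blue):
        """Crude max-likelihood fit of p(blue | hue) = 1 / (1 + exp(-(hue - mu) / s))."""
        hues, said_blue = np.asarray(hues, float), np.asarray(said_blue, float)
        best, best_ll = (180.0, 10.0), -np.inf
        for mu in np.arange(120.0, 241.0, 1.0):           # candidate thresholds
            for s in (2.0, 5.0, 10.0, 20.0):              # candidate slopes
                p = np.clip(1.0 / (1.0 + np.exp(-(hues - mu) / s)), 1e-6, 1 - 1e-6)
                ll = np.sum(said_blue * np.log(p) + (1 - said_blue) * np.log(1 - p))
                if ll > best_ll:
                    best, best_ll = (mu, s), ll
        return best

    def next_hue(hues, said_blue, target_p):
        """Place the next stimulus where the fitted curve predicts p(blue) = target_p."""
        mu, s = fit_sigmoid(hues, said_blue)
        return mu + s * np.log(target_p / (1 - target_p))

    hues = [130, 230, 170, 190, 180]       # hues shown so far
    said_blue = [0, 1, 0, 1, 1]            # 0 = "this is green", 1 = "this is blue"
    for p in (0.1, 0.9):                   # probe both tails of the fitted sigmoid
        print(round(next_hue(hues, said_blue, p), 1))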



The sequence itself should be converging however, right? I feel that there should be some random jumps outside of the current confidence interval so that contextual aspects can be filtered out or at least recognized.



Yes, exactly this. Because it seems to be converging right now, I quickly get the feeling that there's no meaningful choice, after the first three prompts you end up with something that's neither green nor blue. Re-taking the test gave me a very different score.

It might work better for me to do some contrastive questioning: show a definite green followed by an intermediary color, then a definite blue followed by an intermediate color.



The whole point of asserting where your border between green and blue is, is to ask about colors that are in between the two. It doesn't make sense to ask is RGB(0,0,255) blue to you? Well, unless you are color blind it is.



Of course, that's clear as day; the idea is to reset your presumptions from the previous trial and sample the ambiguous colors in a more consistent way, by priming you from the extreme ends of the green/blue scale.

See it as a way to avoid perceptual hysteresis.



I'd prefer blue/green/neither.

With the third colour, I just thought "no, that's teal", and my decision was (as you suggested) semi-arbitrary.



It is common practice in psychometrics to use two levels in a forced choice and model responses as a logistic regression, which is what's done here. Adding an N/A option turns the thing into an ordered logistic regression with unknown levels, which is tricky to fit, but it's possible. Having done a lot of psychophysics, having more options generally doesn't make the task easier.
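
To illustrate the ordered-logistic extension mentioned above (a sketch only, not the site's code, and with made-up toy data): responses can be treated as ordered green < neither < blue, and the model fits two cutpoints on the hue axis; their midpoint is one reasonable definition of the blue/green threshold.

    import numpy as np
    from scipy.optimize import minimize

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def neg_log_lik(params, hues, resp):
        """Proportional-odds model; resp codes 0 = green, 1 = neither, 2 = blue."""
        c1, log_gap, log_s = params
        c2 = c1 + np.exp(log_gap)                 # enforce c1 < c2
        s = np.exp(log_s)                         # common slope
        p_le_green = sigmoid((c1 - hues) / s)     # P(resp <= green)
        p_le_neither = sigmoid((c2 - hues) / s)   # P(resp <= neither)
        p = np.select([resp == 0, resp == 1, resp == 2],
                      [p_le_green, p_le_neither - p_le_green, 1.0 - p_le_neither])
        return -np.sum(np.log(np.clip(p, 1e-9, 1.0)))

    # Toy data: hues shown and hypothetical answers.
    hues = np.array([130, 150, 165, 172, 178, 185, 195, 215, 235], float)
    resp = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
    fit = minimize(neg_log_lik, x0=[165.0, np.log(20.0), np.log(5.0)],
                   args=(hues, resp), method="Nelder-Mead")
    c1 = fit.x[0]
    c2 = c1 + np.exp(fit.x[1])
    print("green/neither cutpoint:", round(c1, 1))
    print("neither/blue cutpoint:", round(c2, 1))
    print("implied blue/green threshold:", round((c1 + c2) / 2, 1))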



The way that XKCD did it is the best, you ask people to give a name to each color then the responses are entirely natural and unprompted.

I don’t think that forced choice can give accurate results if a substantial number of people perceive green and blue as being non-adjacent - i.e. there exists a color between green and blue (turquoise/cyan/teal).

Otherwise it’s like asking people whether a color is red or yellow, when it’s clearly a shade of orange.



Yes but saying that a shade of orange is closer to yellow is different from saying that it is yellow.

Orange is closer to green than blue but I wouldn’t say that it’s a shade of green. It’s just orange.



Are you sure that it is common practice for a problem that has three valid answers A, B and C, to only allow people to answer A or C?

Your website is not talking about "levels" of colour.

It's asking "is this blue or green", not "is this closer to blue or closer to green".

The question (1) "is this blue or green" has three valid answers: blue, green or neither.

The question (2) "is this closer to blue or green" only has two valid answers.

I would assume that with these types of surveys, the first thing to do is to qualify the proper categorization of the question.

Sorry to say, but to me it seems that almost all of the confusion in the discussion here is because you're asking question (1) (which has three valid answers) but expecting an answer from (2) (which indeed has two valid answers).



But teal isn't a single point, it's a range. You can have teals that are more blue or more green than each other; they can't all be zero. Whichever one you choose to be the true transition point between blue and green, there will be teals that are more blue or green than that one.



Sure, but there's also a subrange at the (subjective) centre of that range that will not be perceived as either more blue or more green.

And the teal that I referenced in my earlier comment was (for me) such a colour.



Then by that framing, the test is asking you to decide what hue value is the "zero" between the positive/negative blue/green. Is the wording imperfect? Sure, but the intent was still entirely clear.



Saying it’s a subrange implies you can perceive differences in tone within it. In which case, reframe the question as “is this shade of teal closer to the blue or green end of the subrange” if you like.



That's not how it works.

Maybe if I'm given two colors inside that range, I can say which is bluer and which is greener. Given just one color, I simply cannot say that it's green or blue, or even if it's more green than blue or vice versa.

I stopped at the 3rd or 4th color because I couldn't give an honest answer. That makes the test useless. I can't complete it with correct answers, and if I give incorrect answers, the conclusion is useless.



No it absolutely doesn't.

It's a well known fact that people are unable to distinguish colours that are too close together.

You could even have a smooth gradient from colour 'a' through colour 'b' to colour 'c', where it's possible to distinguish 'a' from 'c' but not to distinguish 'b' from either 'a' or 'c'.



I think the main point of this test was to determine the position of teal in your case, as your definition of teal is the midpoint(-ish range) between blue and green. (For me it's more blue though.)



I mean, a good test would be able to detect that neither-blue-nor-green range and approximate midpoint as well, and it should be fair to say the midpoint is indeed the threshold between blue and green. (I don't think the current version of test can do this, though.)



I actually checked that at the end of the test (when it shows the gradient image with the response overlay).

There were two distinct points, one for blue and one for green, where my mind would place the transition to the colour in between.

(And yes, on one end it's bluer and on the other end greener, but (much like a shade of orange is neither red nor yellow) the colours are still not either green or blue.)



No, I'm saying that the sliver of a chasm between the colour in isolation, and what I subconsciously imagine the midpoint to be, is so damned thin that were I to look at the colours side by side, I could not distinguish one from t'other.

And (even if I could) a bluish teal would no more be a blue than a reddish orange a red.



I definitely have the bias you mention. In my case I don't think it's mainly due to not wanting to push the same button many times in a row, but because I compare with the previous color, so if previously I was already somewhat unsure but I chose green and now it became slightly bluer, it "must" be blue, right?

I think I can get over it, but it requires conscious effort and even then, who knows. Bias is often unconscious.

Another possible improvement would be to alternate the binary search colors with some randomly-generated hues. Even if those answers are outright ignored, and the process becomes longer, I think they would help to alleviate that bias. At least you wouldn't be directly comparing to the previous color.



VFX engineer here. Yes, we used to calibrate monitors and work in the dark.

However, one of the key people that built our colour pipeline was also colour blind, so it's not actually a requirement, so long as you use the right tools.

Most people aren't that sensitive to colour, especially if it's out of context. A minority of people aren't that good at relative chromaticity either (as in: is this colour bluer/greener/redder than that one?). But a lot of people are.

Language affects how you perceive colour as well.

But to say the experiment is flawed I think misses the nuance, which is capturing how people see colour _in the real world_. Sure, some people will have True Tone on, or some other daily colour balance fiddling. But that's still how people see the world as it is, rather than in isolation.



I once worked for a company that had a designer who was color blind. He would always show up wearing the exact same outfit every day: turns out that he was REALLY color blind, and so he just gave up and bought 7 long sleeved shirts and 7 pants, all black. Didn't work out so well for him in the designs... most companies don't want monochrome websites.



Likewise. I think for me there's quite a wide band of colours in the middle that I consider to be "neither/either", so I'm basically just picking a random answer for those.

A modified version of the test that finds two boundaries (green/neither/blue) could be interesting.

Or maybe it just needs to take more samples, in a more random order.



Same. Some of them are neither obviously blue nor obviously green, so what the test was measuring for me was what I was thinking about at the time, the decision I'd previously made, whether my mouse was currently hovering over "blue" or "green", etc.



>I think this is flawed. You quickly end up on a color that's clearly not "blue" or "green" and you're unlikely to keep hitting "this is green" several times in a row, conceding that ok, fine, maybe this is blue, whatever.

I agree with you, the whole thing is flawed when it could be better. When you ask the question "is my blue your blue?", you are evoking the old philosophical question, and it's a question about color perception, not words. This test did not test color perception, it tested "what word do you use?"

I think of blue as a pure color, and green as a wide range of colors all the way to yellow, to me another pure color. So if there's any green at all in it, I'm going to call it green. (Maybe it's left over from kindergarten blending "primary colors". Also, while I like green grass, I don't like green as a color, so any green I see is likely to make me think, ew, green.) But in terms of what I see, I can only assume I'm seeing the same thing as everybody else, because the test is not testing it. Just because I call something green doesn't mean I don't see all the blue in it.

>Edit: Possible improvements: changing the wording to "this is MORE green" and "this is MORE blue" and randomizing the order in which they are shown, somehow. I realize you're just doing some kind of binary search, narrowing the color range.

yes, the test should show you pure blue, then a turquoise mix, then pure green, and a ... etc. It should also retest you on things you already answered to measure where you are consistent.



I do think that the philosophical question could potentially be approachable in a modern context;

Show people a colour and map their brain activity - the level of similarity between two people's colour perceptions should be reflected by similarities in the activity.



Why do you think that would be the case?

One person's ‘blue’ activity could be different than another's while still being the same wavelength of light and general perception.



The philosophical question is not dealing with the objective external reality;

It's a question of subjective experience - and that experience should be reflected in electrical activity.

Given the fact that the broad structure of the brain is largely shared across members of the species, similar stimulation should trigger similar activity in the same regions of the brain.

If the same colour triggers markedly different activities, it would not be unreasonable to conclude that the subjective experiences are not the same.



No real need for the snark; if we dismiss the notion of human divinity and look at ourselves as broadly fixed macro-structure computational machines (like any other broadly deterministic machine) similar signals propagating over the same sets of sub-computers will generally (accepting the undetectable, such as steganographically hidden homomorphic compute contexts) be reflective of similar underlying operations.

If I were to imagine a warrior, and his general perception of the colour red, I may find the way his brain processes the colour more closely to a rival warrior than his wife the gardener.

A real world example; London taxi drivers and bus drivers show distinct patterns of changes to the hippocampus.

https://pubmed.ncbi.nlm.nih.gov/17024677/

The way that the mapping data is stored will be heavily biased towards being spatially reflective of the real world counterpart.

Note the bias will be towards a degree of structural isomorphism; one internal 2D + 1T spatiotemporal surface map of the city might be a rotation and/or reprioritisation of another - but they will have a shared basis (convergent compute simulations of biased subsets of the same real world structures), and when navigating from point A to point B, the path and nature (though not the propagation vector) of the electrical activity of both will be a reflection of the same real-world surface map.

Now I say spatiotemporal - because the driver going from A to B in the morning will develop different expectations of the levels of traffic at different parts of the journey.



Except the internal structure is randomly seeded for each instance.

Or do you think fingerprints are the most random thing in humans?

There may be general patterns from above, but the actual details vary immensely when you zoom in.

Large populations may still roughly conform to a normal curve, but the volume under the deviations is still huge. And the dispersion is immense.



Except that’s literally not how humans are wired or develop - even nerve paths and other fine grained details in our bodies show significant divergence, and there are major macro level differences readily apparent even based on gender, color blindness, etc.

Honestly, it would be shocking if it were even a little true beyond ‘frontal cortex’ levels of granularity. And even then, Phineas Gage type situations make it clear that may not actually be required either.

And that means completely different individual activity can trigger similar subjective experiences as much as similar activity can trigger different subjective experiences, no?



If that were the case then there's no way that they'd be able to extract images from people's neural activity, and yet they've started doing that very thing.



Agreed. It would be more accurate to show the final gradient (without the curve) and let people choose where the boundary is. It wasn't even clear what the actual task was.



Yeah, it felt like a trick question to me.

Because the second color I saw was somewhat like turquoise and the site is called 'Is My Blue Your Blue,' I decided that everything like that would be blue and everything else would be green. I never saw a green until the result was displayed :D



Author here. I added fields so you can specify your first language (relevant link: https://en.wikipedia.org/wiki/Blue%E2%80%93green_distinction...) and colorblindness.

FAQ:

* I can't know your monitor's calibration, your ambient light, or your phone's brightness. Obviously, this will affect the results. However, I am tracking local time of day and device type, from which we should be able to infer whether night mode and default calibration have any aggregate effects. Anecdotally, thus far, I haven't found any effects of Android vs. iPhone (N=34,000).

* The order is randomized. Where you start from can influence the outcome, but methodologically it's better to randomize so the aggregate results average over starting point. You can run the test several times to see how reliable this is for you.

* It's common practice in psychophysics to use two alternatives rather than three (e.g. blue, green, something in the middle). It would be a fun extension, which you can handle with an ordered logistic regression. The code is open if you want to take a shot at it: https://github.com/patrickmineault/ismyblue

* I will release aggregate results on my blog, https://neuroai.science

* I am aware of most of the limitations of this test. I have run psychophysics experiments in a lab on calibrated CRTs during my PhD in visual neuroscience. *This is just entertainment*. I did this project to see if I could make a fun webapp in Vue.js using Claude Sonnet, and later cursor, given that I am not highly proficient in modern webdev. A secondary point was to engage people in vision science and get them to talk and think about perception and language. I think it worked!



This is a fantastic site.

My partner and I were well aware of the limitations, but it has clearly demonstrated our difference in perceptions in a way we were both happy with. Being able to see where your partner lands relative to you is deeply satisfying.



My partner and I regularly disagree on blue vs green as the colours become more gray - it might be interesting to randomise the brightness of the colours being displayed and then see if the skew towards people perceiving blue vs green changes as the colours become closer to gray.



I also often disagree on blue vs purple, which is inconvenient when we name the same coat two different colors.

I think my "blue" is a way more specific shade than most people (hue 192 here, whatever that means on an uncalibrated display). Likewise, I'll usually say "purple" before others.



When done on my Xperia cell phone, even a small shift in screen orientation made the green-leaning colors look obviously blue. It might be worthwhile to capture phone position if you can.



It was fun but I messed up the statistics! I had Redshift running, which (maybe you know) makes the colors more reddish. And I got a result bluer than 98% of the population. Turning off Redshift ... makes my result greener instead.



That's a lot! Now I noticed: "I am tracking local time of day[...] infer whether night mode [...] any aggregate effects."

So you've thought about that already :- ) (it's evening here)



I would guess the hackernews crowd has a higher percent of blue-filter installs since that is a very common topic. Probably also more aggressive settings for the blue filter.



I stopped at the first one I could not call blue or green.

If I were to call it blue or green, it would not only not reflect what I think, but I could not guarantee that if I'm shown the exact same color again, I will go the same way. So I felt there was no point in continuing.

This is a problem in the method; there needs to be a third choice, so that the user can always answer (at least if the test color is always in the blue-green gamut).

It could work with two choices if the user were instructed to randomly choose in the event of indecision. I mean, truly randomly, like by means of a fair coin toss. But that could just be implemented for them by a third button. That button could then just record their indecision rather than randomly choose between blue and green, so you have better data.

Without a third choice, or properly randomized behavior, you have bias problems. For instance, a certain user who likes the blue color might always say blue when not able to decide. Another one might always go for green. Yet, those two users might exactly coincide in what they unmistakably call blue, green and what triggers hesitation/indecision.

(I realize that no matter how many bins we have, there are boundary indecisions, like not being able to decide between green and blue-green. What range constitutes indecision is also subjective.)



That exactly is the point of the test though. Not to test whether most people call 100% blue blue, or 100% green green. It is to test at which point of the "inbetween" colors people switch from blue to green or vice versa. It forces you to decide whether the color you see is "more blue" or "more green", since after all they're all just a mix of blue and green.



Well for me, personally, blue and green are simply not adjacent, so there's no point where green turns to blue without going through an intermediate color. This might well be due to my extreme exposure to computer colors, where the in-between color is usually called cyan, or sometimes teal or aqua. When I see cyan, I cannot sincerely say that it looks “more blue” or “more green” to me, any more than an orange tastes “more apple” or “more banana”.



Light can absolutely be more blue or more green in an objective sense. Either it is closer to blue on the spectrum or it's closer to green. It doesn't matter if you have intermediate categories in between.

To poke a hole in your analogy, a more apt comparison would be to a gradient of sweetness, where one can indeed describe a flavor as "more sweet" or "less sweet" relative to apples and bananas.



You can estimate that if you can determine at which point the color becomes too ambiguous to call blue on one side, or green on the other. Different people will have a different range. If you want to identify a threshold, you can take the midpoint of the range.

Either of these approaches may be bad. The third paragraph of this page explains why:

https://en.wikipedia.org/wiki/Two-alternative_forced_choice

My suggested approach might not be much better though; it still relies on presenting a single stimulus.

It's not clear how the two-alternative forced choice can be used to find someone's blue-green threshold.

I think a better experiment would be to show the user gradients and ask them to move a bar to where they think the midpoint of the blue-green transition is. Subsequent gradients center on the user's previously identified midpoint, but zoom in more.
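
A minimal sketch of that zooming procedure (hypothetical; get_user_midpoint stands in for whatever UI callback returns the hue where the user placed the bar on the currently shown gradient):

    def estimate_threshold(get_user_midpoint, lo=120.0, hi=240.0, rounds=5, zoom=0.5):
        """Repeatedly show a hue gradient [lo, hi], let the user pick its perceived
        blue/green midpoint, then re-center a narrower gradient on that pick."""
        for _ in range(rounds):
            mid = get_user_midpoint(lo, hi)            # user drags a bar, returns a hue in [lo, hi]
            half_width = (hi - lo) * zoom / 2
            lo, hi = mid - half_width, mid + half_width
        return (lo + hi) / 2

    # Example with a simulated user whose true boundary is at hue 174:
    fake_user = lambda lo, hi: max(lo, min(hi, 174.0))
    print(estimate_threshold(fake_user))               # converges toward 174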

There is also this question: by which path do we interpolate from blue to green?

Let's imagine the CIELAB color space. Say that our pure green lies on the red-green axis, all the way on the green end. Blue lies on the extreme of blue-yellow. Do we interpolate through these linearly or what? And using what luminance value?

I suspect that for every given, fixed luminance value, the blue-green boundary is a contour. There are many paths we can take between blue and green, and along each path there is a boundary point. If we join those points we get this contour. Then if we do that for different luminance values, the contour becomes a 3D surface in the color space.



I'm red/green colourblind, so this was interesting to compare my green against my blue.

The thing I find being colourblind is that I value colour less than shade. Colour signals, even when I can tell them apart, are just less important to me than to non-colourblind people.

I most recently noticed this playing Valheim with my wife. There are red mushrooms in the game, surrounded by green foliage. I noticed that I have trouble spotting them, even though I have no problem seeing that they are red and the foliage is green. To her, the mushrooms stand out as being very visually different from the background and immediately noticeable. To me, they just aren't that distinct and get quite hard to spot.

So while I got the green/blue distinction to within 80% of the population, despite my shitty colour perception, it just didn't matter. At some point in the process I got to "I really don't care. I would ignore the signal that any further difference in colour is sending".

As you can guess, I have fascinating talks with designers and artists, to whom the differences really matter. I understand that colour is really important to them. I just don't see it.



I am also red/green colorblind, so I cannot tell whether the colour-coded graphs in many articles (more than not) are as shitty for everyone else as they are for me, but not choosing distinct colours (ones I would have no trouble differentiating) for thin lines defeats the purpose (understanding), I believe. Even if I had no trouble with the colours (being close to darker shades of brown), I would perhaps use thicker lines and vary the style of the lines, so the information screams out. Putting similarly shaded colours on a graph, with a colour legend in the corner telling you which thin line means what, is just something I mentally throw away, it being so difficult to navigate.



I've got normal color vision, and it's bad for me too. If there's more than about a half dozen lines on a graph, chances are two of them are going to be so close together that it's a pain to figure out which is which. Visually distinguishing information in graphs can be a very tricky problem, but at the same time, people could easily do a much better job at it if they tried.



Interesting. Red next to green creates a different kind of contrast. It looks like it's glowing (vibrant border), the same way our eyes perceive something very close compared to something far away. That is just my observation, I'm not sure if there is some scientific evidence for that.



I have normal color vision, and color just doesn’t matter to me (I can never remember the colors of things, and distinction by color doesn’t help me much). I’m not discounting your theory, but I think there must be a little more to it.



Not the person you're responding to, but also colorblind and I strongly relate to what they're expressing. It's different than not being able to remember colors. I can see (most) differences, but I need to actively focus on seeing to do it. For example, one CI system uses red/green stoplight emojis for test status. A given run might have 50-100 of them. Trying to see which ones are red means actively looking at each individual status and thinking "what color is that?" because my brain simply doesn't register reds as "jumping out" in the sea of green.



Yes! I've had some lengthy discussions with UI designers trying to get them to understand this exact point. I can see that they're red and green, I just don't notice that they're red and green.



Interesting, does playing a lot of games with a toddler asking them to distinguish between colors reduce the chance that they have your type of colourblindness? Since you can see the individual colors but need to concentrate on them, I wonder if playing such games makes the child learn to notice the colors?



Like the other person said, most forms of colorblindness are caused by genetics--specifically, recessive traits. So, it's the sort of trait that will run in the family.

To help explain our experience, it's like trying to distinguish between two similar shades of yellow. It'll be clear and obvious that both are the color yellow. When there's only one example of each standing next to the other, it'll be easy to tell which shade is the lighter one, even if it's only slightly different. But if you had a sea of examples and were asked to pick out which yellows are slightly lighter than the other ones, then it might cause you to stop and study them for a while to figure it out.

It's just like that for the common forms of colorblindness (where the color cones in the eyes are bent, but not missing), but instead of this metaphorical "yellow" it's this special "red-and-green" color that we see that's different from what everyone else sees. It's like trying to distinguish between two different shades of the same color, where it's obvious which is which when there's only two examples to compare to but not so much when your entire field of vision has bits of one hidden amongst a sea of the other. It's like red and green are a spectrum of the same color rather than being two separate ones.



Mine is genetic, inherited from my maternal grandfather.

My mother was an artist, spent ages testing my colour range with a set of Pantone colour swatches, just out of curiosity rather than as an attempt to cure it. That's how I know I see shade better than colour - she would show me two swatches that differed slightly in colour and then two that differed only in shade (or shade/tone/tint to be accurate). I could tell the shade differences apart better than the colour differences.

So I'm not sure that early training would help. But it couldn't hurt



>For example, one CI system uses red/green stoplight emojis for test status. A given run might have 50-100 of them. Trying to see which ones are red means actively looking at each individual status and thinking "what color is that?" because my brain simply doesn't register reds as "jumping out" in the sea of green.

Fellow CVD person here, I have that same problem at work. That and when there are up/down arrows and whether up or down is good changes based on the metric and they use color to let you know. They all look samey unless I actually stare at them for a while and the color difference sorta bubbles up.

It's so annoying too because it'd be trivial to use different signals instead of color, but no one cares about the 1/12 of us that are colorblind. It's crazy that the ADA doesn't recognize CVD as needing accommodation when it's far more common than most other disabilities.



I got 174 ('true neutral') by choosing 'blue' or 'not blue'. The 'green' here looks to me like a light yellowy-orange. The color that I have learned to associate with unripe bananas.



I've taken the test multiple times, and ended up with my boundary being both greener than >70% of the population and bluer than >70% of the population in separate attempts. And I know my color perception to be good at distinguishing hue - it's just that I don't have strong opinions about categorizing it in this space.

I'm pretty sure there's some hysteresis going on - if we randomly end up in the ambiguous zone on the bluer side, we'll be pressing "blue" every time a small change happens, because it's basically the same color. Until the changes add up so much that we're out of the ambiguous zone on the green side - and now our "border" is far on the green side. But if we started on the other side, entering the ambiguous zone from the green side, it'd take a big cumulative change before we press "blue".



I got "Your boundary is at hue 167, greener than 86% of the population. For you, turquoise is blue". I think I consider darker and yellower colours as green - for instance tennis balls are firmly green to me, but a lot of people say they're yellow.

I wonder if this has anything to do with your upbringing. I grew up on a farm in a dry part of Australia, where the grass didn't often get very green. Most of the year it was yellow. If you associate green with grass and the grass is yellow, maybe you associate green with a yellower colour?



I think this might be a bit overblown. "Why do we call the signal blue?" is a common question from 3-5 year olds in Japan.

Old Japanese traffic signals had blue tinted lenses, like ultramarine blue. Those lenses were used in conjunction with warm yellow incandescent lamps, technology available at the time. Deep blue + warm yellow = green.

Over time the green color must have normalized, with laws and slogans not reflecting that change. And nowadays they're green LEDs.



The blue-green distinction is something that tends to come late in most or maybe all language families. Ancient Greek also used the same word for blue and green. As I recall, the first color words a language gains are black and white, followed by red. Blue-green is one of the last distinctions made.



Thank gods at least red is red.

In all rulebooks, lights are red-yellow-green, but in many places, I can see red-amber-turquoise. Now a sure way to get a traffic police officer livid is to call the yellow light “amber” or “orange”…



My friend got a "Running an Amber" ticket when we were teens outside metro Detroit, MI. I had never heard it called that color before but that small memory is always on my mind when the light changes as I'm crossing.



In the UK, the yellow light is officially an "amber" light in terms of driver regulations and statutes, such that some anally retentive type is always bound to correct anyone who dares say "yellow".



I got a very high "green" threshold too - 95% averaged across three runs, since my first result seemed surprisingly high.

It's funny though - I feel like I'm less likely to go green on the other direction too. I'd probably say a tennis ball is right on the line, and seems more yellow than green to me too.

Maybe I'm some sort of green gatekeeper, and I don't want to dilute my personal definition with lesser greens. Green is my favorite color, I'd say, so maybe that's something to do with it.



Yes, and I'd like to see a breakdown of the answers per country.

I'm French and my boundary is at 167 apparently (though I have a poor screen and depending on where I look, I could say that even further towards the green side is still blue). But a regular occurrence at home is my wife (who speaks a different language, we don't live in France) talking about « the green table » while I'm trying hard to find any green table around us, until I realize she's talking about that turquoise table that I call the blue table. Also happens on the red/pink and pink/purple boundaries.
