03-13-2015, 10:00 PM   #36
Rosa
Senior Member
47-hour Marathon 2016 Kickstarter Backer
Join Date: Jan 2015
Posts: 129
Quote:
Originally Posted by Mark Sandiego
Rosa's comment that color only exists in our brain is not entirely accurate. The dress in the news article reflects specific wavelengths of light that can be measured with instrumentation. Each wavelength can be assigned a color name. Similarly, you can use Photoshop to take a sample of the picture of the dress, and this will give a numeric value corresponding to a color. So color can and does exist independent of our brain or experience of it. The confusion comes when our eyes lack the proper cone receptors to accurately interpret the wavelength. The reality is that there are many wavelengths of light that we cannot see at all, not because they don't exist but because we lack the proper receptors to detect them. Examples would be infrared light or ultraviolet light. Though our eyes and brains cannot see these colors, our electronic devices can.
Hey there... I realize that a certain asshat is going to perceive what is about to happen as a big wank session. The thing is, he won't be wrong. I love color. I love science. And I have spent the last 7 years studying the neural underpinnings of color perception, so any opportunity to dispel common color misconceptions and drop some knowledge does get me wet.

First. Color does not exist outside of the brain. It just doesn't. And I sincerely hope I can convince you of this right now...

When you shine a light on an object, that object will absorb some wavelengths and reflect others. The visual system uses (some of*) that reflected light to generate a representation of that object (that is, color is a sensation the brain invented to make the external world internally comprehensible, and the information it uses is the relative reflectance of objects in the environment). *I say "some of" because, as Mark points out, humans are sensitive to a very narrow range of the electromagnetic spectrum (other animals, like bees, are sensitive to UV light and see vibrant patterns on flowers that we perceive as simply white).

But it isn't as simple as detecting specific wavelengths of light and assigning color values to them. And here's why: depending on the lighting conditions in the environment at any given moment (bright sunlight, candle light, shadow, incandescent light, fluorescent light, morning light, evening light, etc.), the actual wavelengths reflected by a given object will be wildly different, yet the object will appear the same color. For example, morning light and evening light both contain more long-wavelength light (long wavelengths are typically perceived as redder), while bright mid-day light contains more short-wavelength light (short wavelengths tend to be perceived as bluer). The result is that the same apple sitting on a picnic table will reflect different wavelengths over the course of the day, yet it will always appear red. To be clear, this means that different wavelengths can give you the same red percept, so how could we ever label a particular wavelength 'red'? This phenomenon is called color constancy. And it is the result of a complex series of neural computations that happen in the brain (not the retina) and rely on both the available sense data and an internal model of the world (generated from learned and/or natively-coded expectations about how light behaves in a 3D environment).
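If it helps to see that spelled out, here's a minimal numpy sketch. All the spectra below are made-up toy numbers, not real measurements: the point is just that the light reaching your eye is the product of the surface's reflectance and the illuminant, so the stimulus changes with the light even though the surface never does, and recovering the stable surface property amounts to dividing the illuminant back out.

[code]
import numpy as np

# Toy spectra sampled at a few wavelengths (nm). All values are
# invented for illustration; real spectra are continuous measurements.
wavelengths = np.array([450, 500, 550, 600, 650])

# Surface reflectance of a "red" apple: reflects mostly long wavelengths.
apple = np.array([0.05, 0.05, 0.10, 0.60, 0.80])

# Two illuminants: blue-heavy mid-day light vs. red-heavy evening light.
midday  = np.array([1.00, 0.90, 0.80, 0.60, 0.50])
evening = np.array([0.30, 0.40, 0.60, 0.90, 1.00])

# The light reaching the eye is reflectance * illuminant, wavelength by
# wavelength, so the stimulus differs across the day...
print(apple * midday)    # [0.05  0.045 0.08  0.36  0.4 ]
print(apple * evening)   # [0.015 0.02  0.06  0.54  0.8 ]

# ...yet the apple looks red all day. Recovering the stable surface
# property means (implicitly) dividing out an estimate of the
# illuminant; that division is the color-constancy computation.
print(np.allclose((apple * evening) / evening, apple))  # True
[/code]

The brain never gets the illuminant handed to it as a vector, of course; estimating it from cues is the hard part, and that's exactly where The Dress comes in below.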

Further, you can take the same object, under the same illuminant, but place it in a new context (i.e., change the background) and make it appear a different color. To be clear, this means you can have the exact same wavelengths entering your eye, yet now you see a different color. This is called color contrast, and again it is the result of the way in which the brain combines and pits photoreceptor signals against each other. It's also why you always see bananas for sale in blue boxes (it makes them look yellower).
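You can give yourself a crude version of this effect in a few lines. This matplotlib sketch (my own toy colors, not Lotto's stimuli) draws a numerically identical gray square on a blue surround and on a yellow surround; most people see the square on yellow as slightly bluish and the square on blue as slightly yellowish.

[code]
import numpy as np
import matplotlib.pyplot as plt

gray = np.array([0.5, 0.5, 0.5])            # the one-and-only patch color
blue_bg   = np.array([0.2, 0.3, 0.9])
yellow_bg = np.array([0.9, 0.8, 0.2])

def panel(bg):
    """A 200x200 surround with the identical gray square at its center."""
    img = np.tile(bg, (200, 200, 1))
    img[70:130, 70:130] = gray
    return img

fig, axes = plt.subplots(1, 2)
for ax, bg in zip(axes, (blue_bg, yellow_bg)):
    ax.imshow(panel(bg))
    ax.axis("off")
plt.show()
[/code]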

Take this illusion from Beau Lotto. The blue tiles in the left image are physically identical to the yellow tiles in the right image (they reflect the same wavelengths). If you don't believe me, you can import the image into Photoshop and convince yourself:

[image: Beau Lotto's cube illusion, two panels side by side]

For those of you without Photoshop, I went ahead and created an opaque white mask with holes at those tile locations and overlaid it directly on the image to eliminate the color-contrast cues:
[image: the same illusion under an opaque white mask that reveals only the tiles]

It turns out those tiles are actually gray (pixel-value-wise).
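(If you don't have Photoshop, a couple of lines of Python with the Pillow library will do the same check. The filename and the tile coordinates below are placeholders; point them at the actual image and tile locations.)

[code]
from PIL import Image

img = Image.open("lotto_cubes.png").convert("RGB")  # placeholder filename
left_tile  = img.getpixel((120, 85))   # a "blue" tile in the left panel
right_tile = img.getpixel((420, 85))   # a "yellow" tile in the right panel
print(left_tile, right_tile)           # same RGB triplet: a middling gray
[/code]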

SO. We just saw that the same patch of an image, reflecting the same wavelengths of light, can give rise to three different color percepts (blue, yellow, and gray). Are you convinced yet?


Now. That goddamned dress...

The Dress generated an enormous amount of hype: the article outlining the hypothesis my collaborator Bevil and I developed was published in WIRED on the first day of the viral explosion and became that publication's most-viewed article of all time within 18 hours (over 22 million views). Many factors contributed to the hoopla over The Dress, but from our perspective the most important is that this image is the most compelling bistable color image known. That is, two people can be looking at the same image on the same screen at the same time and experience two legitimately different percepts. There are many bistable shape illusions, such as the face-vase illusion and the Necker cube, but there are almost no compelling bistable color illusions. Without exaggeration, The Dress is to color vision what the discovery of a new planet in our solar system would be to astronomy.

What everyone has been dying to know is how it works and why people tend to fall into one camp or the other (typically with bistable illusions you can switch your percept pretty easily).

I believe the WIRED report was the first scientific explanation published, followed shortly by an op-ed Bevil published in The Guardian. But several other articles were published with competing theories having to do with shitty S-cones or other retinal differences (the kind of account Mark refers to). One problem with the retinal accounts is that they don't explain why some people do experience switching.

So what is our hypothesis? And how did I test it? Our hypothesis has to do with the problem of color constancy: the brain's attempt to discount the illuminant in the image and recover the object's "true color."

Because sense data is often noisy and ambiguous, perception is always a process of unconscious inference, in which the brain generates a best guess about some property of a scene given the available sense data and an internal model of the world (such as expectations about the content of typical illuminants and how they behave in a 3D world). Most of the time our visual system does a remarkable job of inferring the ambient lighting conditions at any given moment and discounting their contribution to color computations (as discussed above with the red apple on the picnic table). The visual system relies on many visual cues to do this (there's a toy sketch of the inference right after this list):

-- scene structure (Where is the light coming from? Is the object in shadow? Or is it directly illuminated? Shadows are typically illuminated by diffuse scattered blueish skylight, so the visual system knows it has to remove some 'blue' from objects in shadow. Objects directly lit by sunlight tend to have a yellow/orange bias, so the visual system knows to remove some yellow from directly lit objects).
-- the relative reflectances of other objects in the room
-- the object's other surface properties (is it a matte, glossy, or rough material?)
-- etc...
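To make the "best guess from sense data plus an internal model" idea concrete, here is a deliberately tiny Bayesian toy. Every number in it is invented for illustration (this is not our model or our data); the point is only that the same bluish sense data supports different surface-color conclusions depending on your prior over the illuminant.

[code]
# Toy Bayesian "illuminant guess". The dress's pixels look bluish; that is
# consistent with a white/gold dress in cool shadow light OR a blue/black
# dress in warm bright light. All probabilities are invented.
likelihood = {
    ("white/gold", "cool/shadow"): 0.8,  # white cloth in blue shadow looks bluish
    ("white/gold", "warm/bright"): 0.1,
    ("blue/black", "cool/shadow"): 0.3,
    ("blue/black", "warm/bright"): 0.7,  # blue cloth in warm light looks bluish
}

def surface_posterior(prior_cool):
    """Posterior over surface color, marginalizing over the illuminant."""
    priors = {"cool/shadow": prior_cool, "warm/bright": 1.0 - prior_cool}
    scores = {
        surface: sum(likelihood[(surface, ill)] * p for ill, p in priors.items())
        for surface in ("white/gold", "blue/black")
    }
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

# A brain that assumes cool shadow light lands team white-gold; a brain
# that assumes warm bright light lands team blue-black. Same data.
print(surface_posterior(prior_cool=0.9))  # white/gold wins (~0.68)
print(surface_posterior(prior_cool=0.1))  # blue/black wins (~0.80)
[/code]

Shift the prior and the winning percept flips; that, in cartoon form, is the hypothesis.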

But in the original image of the dress, the cues to the lighting conditions are particularly ambiguous, forcing people's brains to make a guess. And it turns out different brains make different guesses...

If you look at the original image, you will see that there is a very bright light source in the background. It may be that white-gold people interpret this light as backlighting the dress, casting the front of the dress in relative shadow (and so bathing it in blue-biased light). If so, the brain thinks it needs to remove some of the blue from the image, yielding a white-gold percept. The rest of us (I am team blue-black) may be taking that same bright light as a cue that the whole room is brightly lit, with everything in the space directly illuminated (casting a yellow-biased light on the dress). In that case, we infer a yellow-biased illuminant and discount some of the yellow from the image, yielding a blue-black percept.

[image: the original photo of the dress]

It seems the question everyone's brain is struggling with is: Is the light illuminating the dress bright and warm (yellowish) or is it dim and cool (blueish)? Your brain has to make a guess.
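A standard cartoon of that "removal" step is von Kries-style scaling: divide each channel of the signal by your estimate of the illuminant. Here's a sketch with an invented pixel value for the dress's lighter stripes and invented illuminants (none of these numbers are measured from the photo):

[code]
import numpy as np

pixel = np.array([0.55, 0.55, 0.70])        # ambiguous: pale blue? warm white?

cool_shadow = np.array([0.80, 0.85, 1.00])  # assumed blueish shadow light
warm_bright = np.array([1.00, 0.90, 0.75])  # assumed yellowish direct light

# Von Kries-style discounting: divide out the assumed illuminant.
print(pixel / cool_shadow)   # ~[0.69 0.65 0.70]  roughly neutral -> "white"
print(pixel / warm_bright)   # ~[0.55 0.61 0.93]  blue-dominant   -> "blue"
[/code]

Same pixel, two illuminant guesses, two different "true colors."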

To test this hypothesis, I whipped up a demo using Beau Lotto's cube illusion. The demo puts the dress in two contexts that convey clear cues to the illuminant. If the issue is really about illumination guesses under ambiguous conditions, then providing unambiguous cues should force most people to see BOTH percepts (some folks may stick to their original interpretation; brains can be sticky that way and we don't yet understand why).

In both panels of the image the dress itself is IDENTICAL (it is the original image!).

The background and the model's skin tone have been set to provide strong cues to the ambient light (either yellow or blue). Since the brain can tell what the light source is and assumes it is adding extra yellow or extra blue to the dress, it will discount that yellow or blue from the dress... yielding either a blue/black or a white/gold percept.

The dress on the left should look blue/black. The dress on the right should look white/gold. (In fact, THEY ARE PHYSICALLY IDENTICAL.) If you are having a hard time seeing both percepts, try zooming in, or staring for a little while. It really should kick in...

[image: the demo: the original dress composited into two scenes with unambiguous yellow vs. blue illumination cues]

If that doesn't work for you, I like the cropped version better:

[image: cropped version of the demo]

I put the image up in an informal poll on Facebook for a night last week, and 94% of my first 130 respondents said they were able to see both percepts, many of them for the first time. We have now collected over 1000 data points on Mechanical Turk and have submitted a manuscript detailing our findings. The data strongly suggest that the bi-stability of the original image is a product of differences in illumination estimates, and that it arises both from the ambiguity of the image and from the fact that its pixel values fall along the daylight axis. There are other interesting pieces to the story... but this has been quite the wank session and I really must go clean myself up.

Disclaimer: this of course does not resolve the question of why people infer different illuminants in the original image, or why their percepts tend to remain so stable (typically, ambiguous illusions flip back and forth readily). It could have something to do with differences in individuals' priors, or perhaps their looking patterns, but it does not seem to depend much on what monitor they are using or on their viewing conditions (two people looking at the same screen can have different percepts).

If you would like to share this image, please credit me (Rosa Lafer-Sousa) and Beau Lotto (who provided the backgrounds).

Last edited by Rosa; 03-14-2015 at 09:43 AM.