Our own vision uses it...
I heard our eyes only capture high resolution and color fidelity at the very center of the visual field (the fovea), while the periphery mostly picks up coarse, nearly colorless patterns and motion... *while our brains fill in the gaps*...
So image classification could use the same principle our eyes do...?
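Something like that "foveated" idea can be sketched in a few lines: keep a full-resolution patch at the center and only a coarse, downsampled copy of everything else. This is a toy illustration with NumPy; the function name, patch size, and downsampling factor are all made up for the example, and real foveated pipelines would blur before subsampling.

```python
import numpy as np

def foveate(img, fovea=32, factor=4):
    """Toy 'foveated' encoding: a full-resolution central crop (the
    'fovea') plus a coarse downsampled copy of the whole frame (the
    'periphery'). Names and sizes are illustrative, not standard."""
    h, w = img.shape[:2]
    cy, cx = h // 2, w // 2
    # full-detail central patch
    center = img[cy - fovea // 2: cy + fovea // 2,
                 cx - fovea // 2: cx + fovea // 2]
    # crude downsampling by striding; a real pipeline would low-pass first
    periphery = img[::factor, ::factor]
    return center, periphery

frame = np.random.rand(128, 128, 3)
center, periphery = foveate(frame)
# the two views together hold far fewer pixels than the original frame
print(center.shape, periphery.shape)   # (32, 32, 3) (32, 32, 3)
```

A classifier could then spend most of its capacity on the `center` view while a cheaper branch scans the `periphery`, which is roughly what attention-style models do.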
A similar principle is already exploited in MP3 encoding: the psychoacoustic model (with VBR layered on top) spends fewer bits on sounds we can't hear anyway. So why not apply it to speech recognition too?
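The crude version of that idea is easy to show: transform the audio, throw away frequency bins below some loudness floor, and reconstruct. This sketch uses a single global threshold, which is far simpler than a real psychoacoustic masking model; the function name and threshold are invented for the example.

```python
import numpy as np

def drop_quiet_bins(signal, threshold_db=-40.0):
    """Crude nod to perceptual coding: discard frequency bins whose
    magnitude falls below a fixed floor relative to the loudest bin.
    Real codecs use frequency-dependent masking, not one threshold."""
    spectrum = np.fft.rfft(signal)
    mags = np.abs(spectrum)
    floor = mags.max() * 10 ** (threshold_db / 20)
    kept = np.where(mags >= floor, spectrum, 0)
    return np.fft.irfft(kept, n=len(signal)), int((mags >= floor).sum())

# a loud 440 Hz tone plus a faint 3 kHz component, 1 s at 8 kHz
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) + 0.001 * np.sin(2 * np.pi * 3000 * t)
approx, kept_bins = drop_quiet_bins(tone)
# the faint component sits below the floor, so almost no bins survive
print(kept_bins)   # 1
```

A speech recognizer fed `approx` instead of `tone` would see essentially the same loud content with the inaudible detail already stripped.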
Image recognition for specific cases will even drop color entirely and work on black-and-white patterns, for things like checking whether a bottle cap is seated correctly. I heard Coca-Cola uses a camera to verify their specific Pantone red when printing cans, but otherwise they don't use color at all. Applying the same principles to neural networks seems like the next logical step.
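Dropping color is a one-liner in practice: weight the RGB channels by the standard ITU-R BT.601 luminance coefficients and you get a single channel, a third of the input data for the network. A minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def to_grayscale(img):
    """Collapse RGB to luminance using the ITU-R BT.601 weights.
    One channel instead of three: a third of the input for free."""
    return img @ np.array([0.299, 0.587, 0.114])

rgb = np.random.rand(64, 64, 3)
gray = to_grayscale(rgb)
print(rgb.shape, "->", gray.shape)   # (64, 64, 3) -> (64, 64)
```

For a cap-presence check like the bottle example, the shape information survives the conversion untouched; only the color cue is gone, which that task never needed.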