Color Correction #24
@jakiestfu what color correction techniques are you using above?
@ngoldman From the wiki article:
This would be awesome.
@mdcarter Color correction, yes; Rayleigh correction, not sure...
Just a layman's observation based on reading *Correction of Rayleigh scattering effects in cloud optical thickness retrievals*: in order to correct for Rayleigh scattering (which affects the apparent optical thickness of clouds), it's necessary to measure that thickness more directly.
The question is: what colour correction methods can get close enough to the same result?
You're not going to be able to get quite the same result as above working with a single full-disk image, but you could use @celoyd's color tweak suggestions from his Himawari 8 animation tutorial as a starting point. He's using `convert`, which is part of ImageMagick, so that tool is already available to this project with its current dependencies.
@ngoldman do you think you could provide a sample picture or some code that outputs that picture?
This code works fine; however, it doesn't do much to the image other than brighten it and add contrast:

```
convert original.jpg -channel R -gamma 1.2 -channel G -gamma 1.1 +channel -sigmoidal-contrast 3,50% updated.jpg
```
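For intuition, here is a rough Python sketch of what those two operations do to a single pixel. This is an illustrative helper only (the `adjust` name and the normalization to [0, 1] are assumptions), and the S-curve is an approximation of ImageMagick's `-sigmoidal-contrast`, not its exact implementation:

```python
import math

def gamma(x, g):
    # ImageMagick-style gamma on a channel value in [0, 1];
    # g > 1 lifts mid-tones (brightens that channel)
    return x ** (1.0 / g)

def sigmoidal_contrast(x, contrast=3.0, midpoint=0.5):
    # Smooth S-curve around the midpoint, rescaled so 0 -> 0 and 1 -> 1
    def sig(v):
        return 1.0 / (1.0 + math.exp(contrast * (midpoint - v)))
    return (sig(x) - sig(0.0)) / (sig(1.0) - sig(0.0))

def adjust(r, g, b):
    # Mirrors: -channel R -gamma 1.2 -channel G -gamma 1.1
    #          +channel -sigmoidal-contrast 3,50%
    r, g = gamma(r, 1.2), gamma(g, 1.1)
    return tuple(sigmoidal_contrast(c) for c in (r, g, b))
```

Because red gets the strongest gamma lift and blue gets none, mid-gray pixels shift slightly warm, which is why the command "tones down blues" in relative terms.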
Right. It tones down blues (and greens) too, but it is extremely simple compared to what CIRA RAMMB has done to the image on Wikipedia. Their code is based on what’s used with MODIS and VIIRS, which is nontrivial. They have a paper in review, so more details should be public soon. I’m pretty confident that it’s more complex than anything that makes sense to implement here. (But skip to the “However” heading for some ideas.)

## Rationale for simple adjustment

### We don’t have good access to data for complex adjustment

Our options are limited because we don’t have the bands that CIRA does. They’re able to calculate optical depth, cloud height, etc., from information that’s at best only kinda present in the RGB PNGs we’re looking at. And they can mix some of the NIR channel into the green channel to account for the green band being to the short-λ (blue) side of the 550 nm peak visible reflectance of chlorophyll.

Sidebar: Why? Because the data is produced by the government of Japan, and they haven’t licensed and distributed it that way, as far as I can tell. I’ve signed up for their P-Tree service, but its TOS is vague and refers to other lengthier and confusinger TOSes. They seem to imagine forecasters, researchers, and resellers as the only potential users – which is completely understandable, but frustrating in our position. So while I’m very grateful that they’ve done as much as they have to license and distribute this data – they’ve clearly worked hard to serve their intended users well – I would like to convince them to do a little more. I don’t have the language skills, the contacts, or (at the moment) the time to make a persuasive case for truly open data here. If someone else does, I’d be happy to contribute some sort of amicus brief based on professional experience with these issues. End sidebar.

We just don’t have the raw information that CIRA RAMMB does. If we could get it, it’s not clear (to me, yet) that we could “publish” it.
### We don’t necessarily want complex adjustment

However! CIRA’s path is not necessarily ideal. While I would use the NIR→green trick if I could, everything else they’re doing is more science- than esthetics- or realism-oriented. A person in space would see Rayleigh scattering making the atmosphere bluer toward the horizon, for example, and BRDF effects, both of which the correction deliberately and efficiently removes. They would also see the halo of the atmosphere, which CIRA’s correction cuts out, and they would not see the smudgy artifacts that the correction sometimes introduces near the terminator (dawn and dusk): look west of India in the example image at full size. What CIRA’s doing is extremely impressive, cutting-edge correction. But it’s not necessarily what makes sense here – at least as I’ve been envisioning it.

## However

I’m all for experiments in more elaborate adjustment!
You can model the atmosphere as a spherical shell of known radii (in pixel dimensions) and do some very light trig to find the distance over which a given pixel’s ray intersects it, then use that field to weight a correction. And you can add low-res gridded elevation data (e.g.) to account for the fact that, for example, the Tibetan Plateau extends above most of the optical atmosphere. Instead of estimating, you can look up measured optical depth, from MODIS or from ground-based weather reports. Or you could pull in something like 6S, which is a standard atmospheric corrector that’s actually related to what CIRA is using. It’s really just a question of how far you want to go.

## So

That’s why I’m pretty happy to use a simple/simplistic static adjustment. For input to
Which is a bit light and low-contrast by itself, but whatever random color profiles

## Just one last point and I’ll give you back the mic

You can download CIRA’s images if you prefer them for any reason! There could even be flags to produce, say:
Okay, I’ll be quiet now
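The spherical-shell idea above – intersecting each pixel’s view ray with an atmosphere of known inner and outer radii – really is just light trig. A minimal sketch, assuming an orthographic view and pixel-unit radii (the function name and parameters are illustrative, not anything from the project):

```python
import math

def atmosphere_path_length(rho, R, h):
    """One-way path length of a view ray through a shell of thickness h.

    rho: impact parameter (pixel distance from the disk centre).
    R:   planet radius in pixels; R + h is the top of the shell.
    Rays hitting the surface (rho < R) cross the shell once; rays that
    graze the limb (R <= rho <= R + h) pass through it twice.
    """
    top = R + h
    if rho > top:
        return 0.0  # ray misses the atmosphere entirely
    outer = math.sqrt(top * top - rho * rho)
    if rho < R:
        return outer - math.sqrt(R * R - rho * rho)
    return 2.0 * outer
```

The resulting field grows steeply toward the limb, so it could weight a blue/haze correction per pixel as suggested.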
Thanks @celoyd for taking time to get into the details! 😁 👌 💯
Yes. This is mainly because CIRA does NIR→green mixing to account for the green channel not being on the chlorophyll peak.
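For illustration only, that kind of “hybrid green” mix is a simple linear blend. The blend weight below is a guess for demonstration, not CIRA’s published coefficient, and the function is hypothetical:

```python
def hybrid_green(green, nir, f=0.07):
    # Mix a fraction f of the near-infrared band into the green band to
    # recover some vegetation signal near the 550 nm chlorophyll peak.
    # f = 0.07 is an illustrative weight, not CIRA's actual value.
    return (1.0 - f) * green + f * nir
```

Since vegetation is much brighter in NIR than in visible green, even a small `f` makes vegetated land noticeably greener.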
Of the two, or theoretically? Could you be more specific?
Is there a more correct colouring of the two?
Provide an optional parameter to color correct the resulting imagery to better reflect images such as the following: