Tone equalizer 2025-04-06 preview version #18656
base: master
Conversation
The detailed explanation is here now: https://discuss.pixls.us/t/tone-equalizer-proposal/49314
macOS build fails.
Does this preserve existing edits? I just imported some existing tone equalizer presets. Applying them has no effect, therefore I believe this version isn't backward compatible and will break existing edits.
src/iop/toneequal.c (Outdated)
@@ -126,35 +128,60 @@
DT_MODULE_INTROSPECTION(2, dt_iop_toneequalizer_params_t)
You need to update the version here to support old edits - legacy_params() only gets called if the stored version is less than the version given here (and new edits would get confused with old ones without the version bump).
Looks like a missing version bump: the code to convert old edits is present, but darktable doesn't call it, since it thinks they are still the current version.
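To make the gating explicit, here is a toy illustration of the mechanism described above (simplified stand-in code, not darktable's actual core implementation): migration only runs when the version stored in the edit is lower than the version declared in `DT_MODULE_INTROSPECTION(...)`. While the macro still declares version 2, stored v2 params look current and the new conversion code is never reached.

```c
#include <assert.h>

/* Simplified sketch of the version gating: legacy_params() is invoked
   only when the edit's stored version is lower than the version the
   module declares via DT_MODULE_INTROSPECTION(...). */
static int migration_runs(int stored_version, int module_version)
{
  return stored_version < module_version;
}
```

With the macro left at 2, a stored v2 edit is never converted; bumping the declared version to 3 is what makes the new conversion chunk run.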
First very quick review (important as otherwise this will break old edits).
Not tested yet.
src/iop/toneequal.c (Outdated)
n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

// V3 params
Please keep this as new_version = 2. legacy_params() will be called multiple times until we reach the last version. The rule here is that we never have to touch old migration code, we just add a new chunk that goes one step towards the final version.
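The convention described above can be sketched as follows. This uses simplified stand-in structs and defaults (the `1.414f` smoothing default mirrors the `sqrtf(2.0f)` seen in the diff; everything else is hypothetical, not darktable's real definitions): each if-block converts exactly one version step and reports the version it produced, and the caller loops until the current version is reached, so old migration chunks never need to change.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical simplified param structs for illustration only. */
typedef struct { float a; } params_v1_t;
typedef struct { float a; float smoothing; } params_v2_t;
typedef struct { float a; float smoothing; float post_scale; } params_v3_t;

/* Each chunk does a single version step and reports it via *new_version. */
static int legacy_params(const void *old_params, int old_version,
                         void **new_params, size_t *new_params_size,
                         int *new_version)
{
  if(old_version == 1)
  {
    const params_v1_t *o = old_params;
    params_v2_t *n = malloc(sizeof(params_v2_t));
    n->a = o->a;
    n->smoothing = 1.414f;             /* default for the field added in v2 */
    *new_params = n;
    *new_params_size = sizeof(params_v2_t);
    *new_version = 2;                  /* one step only: v1 -> v2 */
    return 0;
  }
  if(old_version == 2)
  {
    const params_v2_t *o = old_params;
    params_v3_t *n = malloc(sizeof(params_v3_t));
    memcpy(n, o, sizeof(params_v2_t)); /* old fields are a prefix of v3 */
    n->post_scale = 0.0f;
    *new_params = n;
    *new_params_size = sizeof(params_v3_t);
    *new_version = 3;                  /* one step only: v2 -> v3 */
    return 0;
  }
  return 1; /* unknown version */
}

/* Caller loop as the core would run it: repeat until the current version. */
static params_v3_t migrate_from_v1(params_v1_t p1)
{
  void *cur = &p1;
  int version = 1;
  size_t size = sizeof(p1);
  while(version < 3)
  {
    void *next = NULL;
    int err = legacy_params(cur, version, &next, &size, &version);
    assert(err == 0);
    (void)err;
    if(cur != (void *)&p1) free(cur);
    cur = next;
  }
  params_v3_t out = *(params_v3_t *)cur;
  free(cur);
  return out;
}
```

The point of the pattern is that adding v4 later only means appending one more if-block; the v1 and v2 chunks stay untouched.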
src/iop/toneequal.c (Outdated)
const dt_iop_toneequalizer_params_v2_t *o = old_params;
dt_iop_toneequalizer_params_v3_t *n = malloc(sizeof(dt_iop_toneequalizer_params_v3_t));
Since all your new fields are at the end of the struct, just do:
memcpy(n, o, sizeof(dt_iop_toneequalizer_params_v2_t));
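The reason this works can be shown with simplified stand-in structs (these are not the real `dt_iop_toneequalizer_params_*` definitions): when every new field is appended at the end, the v2 struct is a byte-for-byte prefix of the v3 struct, so a single `memcpy` of `sizeof(v2)` bytes replaces the long field-by-field copy, and only the new fields need explicit initialisation.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical simplified structs: v2's members form an identical
   prefix of v3, because the new fields are appended at the end. */
typedef struct { float smoothing; float quantization; } v2_t;
typedef struct { float smoothing; float quantization;
                 float post_scale; float post_shift; } v3_t;

static void upgrade_v2_to_v3(const v2_t *o, v3_t *n)
{
  memcpy(n, o, sizeof(v2_t)); /* copies all v2 fields at once */
  n->post_scale = 0.0f;       /* then initialise only the new fields */
  n->post_shift = 0.0f;
}

static v3_t demo_upgrade(void)
{
  const v2_t o = { 1.5f, 0.25f };
  v3_t n;
  upgrade_v2_to_v3(&o, &n);
  return n;
}
```

Note the prefix assumption is essential: the trick is only safe while the old members keep the same order, types, and alignment at the front of the new struct.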
n->quantization = o->quantization;
n->smoothing = o->smoothing;

// V3 params
And keep only this section below:
Thanks for your feedback. I will provide a new version in a few days which includes the changes that were suggested here. Some advice on my two major roadblocks would be helpful:
@TurboGit please note there has been a lot of discussion in the thread on pixls. Please take that into consideration.
I am late to the party and I will also keep silent again for private reasons IRL. From my perspective, I do not have many issues with ToneEQ. There is only one thing that has bothered me since day one of this module:
I pretty much dislike it and had long arguments with the author Aurélien Pierre back in the day. He kept trying to convince me that it is not doable differently for this and that reason (as he usually behaves). That bar indicator on the masking tab did not guide me as precisely as he always claimed it would. Nowadays I have mapped those two sliders to two rotaries on my MIDI board (X-Touch Mini) and I can attenuate while looking at the advanced tab. The luminance estimator / preserve details is another thing slightly above my head, but from time to time I use it. All the rest I barely touch. After setting the mask, I hover the mouse over my image and scroll the wheel (for this I sometimes need to toggle the module off/on, as the rotaries seem to mess with the module focus). For me the "simple" tab can just be hidden totally. Besides, please do not clutter the advanced tab. I hope my workflow (above) will not be destroyed, and neither will the old edits. In other words: never change a running system.
@paperdigits : Yes, I've seen and followed the discussion a bit, but it became so heated that it has almost no value to me at this point, so I don't follow it on pixls anymore. We'll see if part of it is moved here in a PR.
Existing user-defined presets no longer work. As for the included presets, whenever I apply one the histogram contracts to take half of the horizontal space and moves to the left edge. If I move the histogram back where I want it and apply a preset again, the same problem occurs.
Ad 1) Nope, you must not use the parameters for keeping data while processing the module. We have the global module data; you might use that to keep data shared by all instances of a module. But you'd have to implement some locking and "per instance" handling on your own. Unfortunately we currently don't have "global_data_per_instance" available. Ad 2) Yes, exports and the CLI interface.
@wpferguson Are you sure that the settings were the same? The version in this PR had the bug with the version number introspection, so it did not convert existing settings correctly. It is probably better to wait for a new version to test this.
So it probably does not make sense at all to depend on values that come from the GUI during export. The core problem is the scale-dependence of the guided filter. An alternative approach for exporting would be to create a downscaled copy of the image (sized like the GUI preview), apply the GF, get the values I need and then apply them to the full image, essentially simulating the preview calculation. Not the most efficient approach, but it would only be needed during export when auto_align is used.
Not sure I understood the last post correctly, but if I did, in my opinion there would be a problem. Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)
Definitely not, it won't generally work.
I didn't check/review your code in its current state, but are you sure you set it up correctly? There are some issues, but we use the feathering guide all over using masks and the results are pretty stable for me. About keeping data/results per module instance: this has daunted me too on some other projects. I will propose a solution pretty soon that might help you...
As far as I understand it, the preview thread calculates the navigation image shown in the top left of the GUI. However the actual image that the thread sees is much bigger (something like 1000px wide), so the navigation image must be a downscaled version. I hope (but have not yet checked) that the size of this internal image is constant. |
The code in the PR is outdated and has known problems, not much use looking at it now. I will update it as soon as I have a somewhat consistent state.
Of course it is possible that I might have broken something, but as far as I am aware, I did not change anything about the guided filter calculation but only modified what happens with the results.
This sounds nice, but my next attempt will be trying to avoid this.
Note that your code can figure out how much the image it has been given has been downscaled in order to determine scaled radii and the like to simulate appropriate results. A bunch of modules with scale-dependent algorithms do this. It isn't perfect, but does yield pretty stable and predictable results. Look for
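The pattern described above can be sketched like this (illustrative names, not darktable's actual API): a radius tuned in full-resolution pixels is rescaled by the current pipe's downscaling factor, so a small PREVIEW buffer and a full-size export approximate the same physical neighbourhood.

```c
#include <assert.h>

/* Hedged sketch: scale a full-resolution radius by the pipe's scale
   factor. roi_scale is output pixels per full-resolution pixel, e.g.
   0.25 when the pipe works on a 4x-downscaled buffer. Clamped to 1 so
   the filter never degenerates to a zero-radius window. */
static int effective_radius(float full_res_radius, float roi_scale)
{
  const float r = full_res_radius * roi_scale;
  return r < 1.0f ? 1 : (int)(r + 0.5f);
}
```

This does not make a scale-dependent filter exactly scale-invariant (the guided filter still sees fewer, pre-averaged pixels in a small buffer), but it keeps results across pipe sizes in the same ballpark.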
I have been playing around with the tone equalizer some more and ran into a bit of a roadblock. To recap, what I do is: Image => Grayscale + Guided filter => Mask => Mask Histogram => Percentile values => Auto Scale/Shift values. The buffers in the different pipelines (PREVIEW with GUI and the export) have different sizes and the guided filter is scale-dependent, so when I compute statistics over the mask, it is expected that there is a systematic error. The final calculated values deviate so much that there is a visible difference between the image that is shown in DT and the export. My idea to overcome this was to downscale the image during export to the same size as the preview and use the downscaled version to calculate the statistics (essentially simulating the preview pipe during export). However the results were still different, even though the calculation was done with images of the same size and with the exact same parameters. Then I added debugging code to write the internal buffers into files and I discovered the reason:
However the left image (PREVIEW) is still a lot blurrier than my downscaled version. I would have expected them to be mostly the same. Using a blurry version of the image is not ideal for my purposes. However the bigger problem is that I need to know what happened to the image in the preview pipe if I want to replicate the same steps in the export pipe. I tried to find the relevant parts in the DT code, but had no success so far. I am also interested in the calculation of the size of the PREVIEW pipe image; it seems like the height fluctuates the least and is between 880 and 900 pixels.
The demosaic module downscales if the requested output dimensions to the following module are small enough. FULL is almost certainly getting more data out of demosaic than PREVIEW, which reflects in the sharper image after downscaling. Run with
Force-pushed from 7c707c2 to 64fc0de.
I finally have a version that I consider good enough to show it publicly. It would be nice if someone would take the time to look at my code, e.g. there are a few "TODO MF" markers with open questions. The module should be (if there are no bugs) compatible with old edits. I have checked that with parameters "align: custom, scale: 0, shift: 0" the results are the same as in 5.0.1. Most of my changes were not that much effort. Scaling the curve after the guided filter, coloring the curve, changing the UI (even though it still has a few issues) was not that difficult. The one thing that was hard and got me stuck for weeks is the auto alignment feature:
The data that is available to the pixelpipes during GUI operations is different from the export. The FULL pipe may not see the whole image, so it is not suitable to calculate mask exposure/contrast. The PREVIEW pipe sees the image completely, but it is scaled down and pretty blurry. The guided filter is sensitive to both of these effects, so statistics on the PREVIEW luminance mask deviate significantly from statistics of the whole image. The (unfortunately not so nice) solution to this problem is to request the full image in PIXELPIPE_FULL when necessary. Of course this has an impact on performance. However in practice I found it acceptable and it only occurs when the user explicitly requests auto-alignment of the mask, so users who use the module like before should not experience performance degradation (unless I accidentally broke something, e.g. OpenMP). Known issues:
Other notes: I have seen #18722 and to be honest I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image: a version from after demosaic, but before any modules that users typically use to change the image. If this was possible, I would switch calculating the luminance mask to this input if the user requests auto-alignment, so re-calculating the mask would not be necessary that often. Providing modules with access to an early version of the image would also have other use cases, like stable parametric masks that don't depend on the input of their respective module.
And only once when the auto-align is changed, right?
Indeed, you can even resize to negative size of the graph :)
Indeed, we need the hierarchical preset naming back. Should be easy.
A question for @jenshannoschwalm I suppose. During my testing I found the auto-align a very big improvement indeed. I would probably have used fully-fit as the default, or maybe "auto-align at mid-tone". BTW, since the combo label is "auto align mask exposure", I would use simplified naming for the entries:
Some changes proposed while doing a first review.
To me the new ToneEQ UI is nice and better than what we have in current master.
I would suggest addressing the remaining minor issues (crash when resizing the graph) plus the changes proposed here, and doing a clean-up of the code to remove dead and/or commented-out code, probably also the debug code. Then I'll do a second review.
From there we should also have some other devs testing it, such drastic changes may raise some issues with others. Again to me that's a change in the right direction.
n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

*new_params = n;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v2_t);
*new_version = 2;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v3_t);
Looks like this is wrong, we do a one-step update from v1 to v2. As said previously, we don't want to change the older migration path.
As said previously, this change should be reverted. We want to keep the step migration from v1 to v2.
I don't think fully fit is the problem. The histogram and the changes in all modes do not correlate properly. At least not in an intuitive way.
I made a comparison with the exposure module by masking an area with highlights ...and reducing the exposure there by 1 EV: In legacy mode you get a similar result by darkening the corresponding area "linearly": In fully fit or custom mode the area remains much brighter: I only get a similar result when I use "at shadows" mode and increase mask contrast to +1:
Isn't this wrong (or at least not expected)? I mean, to match an exposure of -1 EV the ToneEq curve should not be a linear down slope but a straight line at -1 EV. My point is that we had a habit of working around the fact that ToneEq had non-linear masking, fine by me... But if the new masking is not exactly the same, is that a big deal? Maybe instead of having a uniform down slope we now need a slope more in S form. Again, I'm not in favor of one or another solution, just trying to understand where we come from and what we will have next. My only strong point is that an OLD edit must be kept fully identical to ensure we get the same export.
That is indeed the case. Here we have a linear gradient and I have reduced the exposure to -2 EV. I've also drawn a red line to be able to compare it with TE: legacy TE: custom TE: Both behave as expected. It becomes more difficult when you want to darken a certain area. Here with the exposure module and the parametric mask: Legacy with mask exposure compensation: Fully fit: custom with mask brightness/shift histogram: If I only want to darken highlights evenly. Legacy: That works well. Here (custom) this is not possible:
Thanks a lot for the thorough analysis. I need some time to figure this out.
@s7habo : Indeed, something is wrong... Thanks for the detailed analysis.
One thing I found out is that in auto-fit mode the ToneEq module recomputes the mask on every change in other modules. This has a big impact on responsiveness, of course, and is something that we don't want in dt, that is, a module which changes itself when some other modules are adjusted. My proposal for the auto-fit mode would be to compute it the first time the module is enabled and keep it as-is afterwards. From there we can:
I think I prefer 2, but we can open the discussion. What do you think?
@s7habo I think I have fixed the bug. But I will take my time before I upload a new version. @TurboGit This is a new requirement, but I can see where you are coming from. But I don't like a solution that effectively means going back to the magic wands. Suggestions:
@AxelG-DE One thing that I thought about... what is the reason for leaving gaps left and right of the histogram? This way the values that you set at 0 and at -8 EV are not fully applied to any pixels. (By the way, your bad experience was probably due to the bug. I think your test from the video will work better with the next version.)
@TurboGit Maybe removing the automatic adaptation to upstream changes is not that bad. It has caused a number of problems for me in the code. I will think about it and suggest something.
Sample pic, no ToneEQ. Snapshot: left is A), right is B). Snapshot: left is B), right is C). [Admit: in this example one can argue, but I did not spend much effort now to really, really edit. You see it more clearly when you use the "old" way of comparing snapshots and slide the separator over it...]
I noticed another behaviour: with this PR in place, when moving the mouse cursor (toneEQ active) out of the image, e.g. down to the thumbnails (the image was zoomed), the main image jumps up and down by quite some pixels. While typing this, the code was removed already. I injected this PR again to reproduce, but the phenomenon is gone.
@AxelG-DE The number 80% is outdated. It was mentioned in videos and it is still in the documentation, but in the current (5.0) code it is 90%. Things also work a little differently. The bar thing on the masking page represents not the whole histogram, but only the center 90%. Aurélien's magic wands try to align these center 90% to the -7 to -1 range. They often fail, but that is a different story. The actual histogram will in many cases go outside the border of the -8 to 0 range. This happens with Aurélien's wands when they work as intended and also with my "fully fit". As far as I understood Aurélien in his videos, he does not consider this to be a problem, as long as it affects only a relatively small number of pixels. The outside pixels will be treated the same as the pixels exactly at -8 or 0. TE starts to display warnings when the center 90% leaves the -8 to 0 range. I think the "curve back to 0" discussion is situational. I have messed around with some pictures here, and on a sunset picture it was clearly not the right approach. I also remember that @s7habo often uses a curve that is just a straight diagonal line down, so it does not curve back towards 0 at all.
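The alignment arithmetic described above can be sketched as follows. This is illustrative only (not the actual 5.0 wand code): given the central 90% of the mask histogram as two percentile values in EV, compute the affine scale/shift that maps that span onto the -7..-1 EV target range. Function and struct names are made up for the example.

```c
#include <assert.h>

/* Map the histogram's central span [p05_ev, p95_ev] onto the target
   range [-7, -1] EV with an affine transform ev' = scale * ev + shift. */
typedef struct { float scale; float shift; } alignment_t;

static alignment_t align_to_target(float p05_ev, float p95_ev)
{
  const float target_lo = -7.0f, target_hi = -1.0f;
  alignment_t a;
  a.scale = (target_hi - target_lo) / (p95_ev - p05_ev);
  /* shift chosen so the span's center lands on the target center (-4 EV) */
  a.shift = 0.5f * (target_lo + target_hi)
          - a.scale * 0.5f * (p05_ev + p95_ev);
  return a;
}

static float apply_alignment(alignment_t a, float ev)
{
  return a.scale * ev + a.shift;
}
```

With this mapping, pixels outside the central 90% land outside -7..-1 (and possibly outside -8..0), which matches the behaviour described above: the overshoot is tolerated as long as it affects few pixels.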
Responding to @TurboGit's last comment: I will make some bigger changes to the module. Apparently I tried to do things that do not fit well into darktable's overall philosophy. The next version will internally be much more similar to the old TE and it will lose some functionality, especially the auto-adaptation to upstream changes. This means no more elaborate calculations in the EXPORT pipeline that have to exactly match the same calculations in GUI mode; also, the module will no longer need to request the full image in the FULL pipeline. All of this really complicated stuff was only necessary for the live adaptation. On the other hand, I think I have some good ideas for things to add instead that fit DT better and improve usability further. This is a bigger rework and it will take some time. Since some people like to try the new module right now, I don't want to leave the known broken version that produces incorrect results as the newest commit, giving people a wrong impression of the module's capabilities. So the above commit is a quick hotfix for this single bug. @s7habo and @AxelG-DE: Please check if it works better for you now. The new modes should now produce results that are similar to legacy mode. Apart from that, I would like to ask everyone to wait for the next version before giving additional feedback, especially regarding the UI.
@marc-fouquet I understand your points. "Curve back to 0" is something I also often ignore. However, the spreading of the histogram from 0 to -8 is something where the results are less pleasing compared to, let's call it, 0.x to -7.y (see above); in that case the corner pixels are exactly not treated the same but have a reverse slope. Thus I still think it would be good to squeeze the fully fit just a tad.
@marc-fouquet
@AxelG-DE Can you try to get results that you like with the custom controls? The same parameters/curve will not be identical to legacy mode, but it should be possible to achieve a similar result with the same process. In the bugged version, the changes in the extreme highlights/shadows were much too weak. I am not so concerned with "fully fit" right now, as it will have to turn from a mode into a button (like the magic wands) anyway. So "fully fit" will be a starting point, but if you think that the histogram is too wide, you will have the sliders to compress it. Also, I have an idea what might cause the results to be better if the histogram is a bit smaller. I plan to investigate this further, but likely not in the next version but at a later point.
I tried with smoothing diameter, and you can actually get the dark zone around the sun a bit better, but it easily washes out the picture. Edges refinement on the other hand does not help much. What I cannot achieve is the shadows part. Here the fully fit mode, to me, feels like it does almost nothing.
I couldn't really follow the internal changes and discussions, but the last example with the dark ring around the sun might point to a fundamental problem of your approach: when and how you apply the guided filter.
@AxelG-DE @jenshannoschwalm This is the inverted brightness effect from my very first post on the forum (pixel A should be brighter than B, but the downward slope of the curve is so steep that B is brighter than A). This is also what the red curve warning is for. I have played around with a similar image; it is possible to run into the same effect on 5.0.1. TE has always produced various artefacts when pushing things far, and these kinds of sunset pictures do this. I will continue keeping an eye on this, but at the moment I don't think that this is an indication of a bug in the module.
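The inversion can be made concrete with a toy model (this is an illustration, not the module's code): the tone equalizer adds an EV-dependent gain, so out_ev = ev + curve(ev). The output stays monotonic in the input only while the curve's slope is greater than -1; a steeper downward slope makes a brighter input pixel come out darker than a dimmer one.

```c
#include <assert.h>

/* Toy model: a linear gain curve through 0 with a given slope, applied
   additively in EV. For slope > -1 the mapping is monotonic; for
   slope < -1 the ordering of input brightnesses is inverted. */
static float out_ev(float ev, float slope)
{
  return ev + slope * ev;
}
```

With pixel A at -2 EV and pixel B at -3 EV (A brighter), a gentle slope of -0.5 keeps A brighter, while a slope of -1.5 flips the order, which is the dark-ring-around-the-sun artefact described above.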
@marc-fouquet The TE now works as expected. Thanks for the quick fix!
Yes, I can confirm that. That was one of the reasons why I don't like to have too much variation in the curve. I have a suggestion in this regard. I just need to think carefully about how best to formulate it.
I also think this is not a bug. From my (user's) point of view, it just comes from the not-squeezed histogram :) If I start with fully-fit first, then switch to legacy and do not change the compression, I am rather sure I will get the same things. (Actually I found another funny thing: when lifting the shadows in fully-fit and then moving to the sun, the mouse cursor showed that striped overexposure indicator, but there was no triangle on the histogram yet.)
Thanks, I have added this to the list of things to look at. I also just found one more issue. The red warnings do not react correctly to legacy contrast boost.
What works much better in this version of TE than in legacy is the mask contrast / scale histogram function. @marc-fouquet You have offered different modes for these purposes: fully fit, at shadows, at highlights, at mid-tones and custom. Wouldn't it be possible to have only one mode instead of these different "focus" modes for mask contrast, where you have an additional slider next to mask brightness and mask contrast to set the fulcrum for mask contrast dynamically? Let me illustrate this with an example. Here I would like to darken the highlights: I now turn on the TE, it centers the histogram and we have an additional slider with which we can control the fulcrum of the mask contrast. This is displayed, for example, with a vertical line in the histogram (here both in red): Now I darken the highlights and move the mouse over the area I want to protect from darkening (which should not be changed): Now I put the fulcrum right there and increase mask contrast, which gives me even better darkening of highlights: In this way, we only need one mode and three sliders: mask brightness, mask contrast and mask fulcrum.
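The suggested three-slider scheme can be sketched as a single affine remap of mask EV values (hypothetical code, not anything that exists in the module): brightness shifts all EVs, contrast expands or compresses them around the user-chosen fulcrum, and the value at the fulcrum itself is unaffected by the contrast slider.

```c
#include <assert.h>

/* Hypothetical single-mode remap: brightness shifts the mask histogram,
   contrast scales it around the fulcrum. At ev == fulcrum the contrast
   slider has no effect, which is the point of a movable fulcrum. */
static float remap_mask_ev(float ev, float brightness, float contrast,
                           float fulcrum)
{
  return fulcrum + contrast * (ev - fulcrum) + brightness;
}
```

The existing "at shadows / at mid-tones / at highlights" modes would then just be presets for the fulcrum position, which is why one mode plus a fulcrum slider could replace them.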
I like the idea. I will have to think about the details.
BTW: I did those only to test and clarify. It was clear that high dynamic range is the most challenging. Usually I would have raised the exposure first, at least half of what is needed, and then used filmic to compress the highlights 😄
This is a preview of my proposed changes to the tone equalizer module. There is a lot of debug code still in it and there are known problems, see below.
This message only contains code aspects. I will write a detailed post from a user perspective for pixls.us.
Changes
- New `post_scale`, `post_shift` and `post_auto_align` settings, which allow adjusting mask exposure and contrast AFTER the guided filter calculation.
- `post_scale` and `post_shift` are derived from this histogram.
- With `post_auto_align=custom`, `post_scale=0`, `post_shift=0`, the results are the same as with the old tone equalizer (I was not able to get a byte-identical export, but in GIMP the difference between my version and 5.0 was fully black in my tests).
- Changed `dt_dev_pixelpipe_piece_hash` to `dt_dev_hash_plus` after I noticed that the module constantly re-computed the guided filter, even though this was not necessary.
- Renamed (`compute_lut_correction` to `compute_gui_curve`) and sorted the functions. The consequence is that I have touched almost all lines of code, so diffs will not be helpful in tracking my changes.

Known issues
Tbh, I had more problems with the anatomy of a darktable module (threads, params, data, GUI data, all the GTK stuff) than with the actual task at hand.
Known problems are:
- When `post_auto_align` is used to set the mask exposure, the values for `post_scale` and `post_shift` are calculated in PREVIEW and used in FULL. However, other pixel pipes (especially export) calculate the mask exposure on their own and may get a different result that leads to a different output.

Things I noticed about the module (a.k.a. issues already present in 5.0)
Related Discussion
Issue #17287