Tone equalizer 2025-04-06 preview version #18656

Open · wants to merge 6 commits into master

Conversation

@marc-fouquet (Contributor)

This is a preview of my proposed changes to the tone equalizer module. There is a lot of debug code still in it and there are known problems, see below.

This message only contains code aspects. I will write a detailed post from a user perspective for pixls.us.

Changes

  • Introduced post_scale, post_shift and post_auto_align settings, which allow adjusting mask exposure and contrast AFTER the guided filter calculation.
    • The histogram calculation is now split into two steps. First, a very detailed histogram is calculated over a large dynamic range. The GUI histogram and internal parameters used for automatically computing post_scale and post_shift are derived from this histogram.
    • The new parameters are not actually applied to the luminance mask, but to the look-up table that is used to modify the image.
    • With post_auto_align=custom, post_scale=0, post_shift=0, the results are the same as with the old tone equalizer (I was not able to get a byte-identical export, but in GIMP the difference between my version and 5.0 was fully black in my tests).
  • Changed upstream pipe change detection from dt_dev_pixelpipe_piece_hash to dt_dev_hash_plus after I noticed that the module constantly re-computed the guided filter, even though this was not necessary.
  • Added experimental coloring to the curve in the GUI; it now turns orange or red when the user does something that is probably not a good idea:
    • Raising shadows/lowering highlights with the guided filter turned off.
    • Lowering shadows/raising highlights with the guided filter turned on. The user probably expects a gain in contrast here, but the guided filter will work against this.
    • Setting the downward slope of the curve to be too steep.
  • UI changes:
    • Sliders (previously on the "simple" page) are now located in a collapsible section beneath the histogram.
    • Made the histogram/curve graph resizable (see issues!).
  • In my efforts to understand the code, I renamed things that I found confusingly named (e.g. compute_lut_correction to compute_gui_curve) and sorted the functions. As a consequence I have touched almost all lines of code, so diffs will not be helpful in tracking my changes.
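For illustration, the kind of alignment described above can be pictured as a linear map from measured histogram percentiles onto a target EV range. This is my own simplified sketch, not the PR's code: the function name, the linear form, and the parameters are all assumptions (in the PR, post_scale/post_shift are offsets on top of the old behavior, with 0/0 meaning "unchanged").

```c
#include <assert.h>
#include <math.h>

/* Hypothetical sketch: given two percentile values of the mask
 * histogram in EV (e.g. the bounds of the center 90%), derive a linear
 * map that places them onto a target EV range such as [-7, -1]. */
static void compute_post_align(const float p_lo, const float p_hi,
                               const float target_lo, const float target_hi,
                               float *post_scale, float *post_shift)
{
  /* scale stretches the histogram (mask contrast) ... */
  *post_scale = (target_hi - target_lo) / (p_hi - p_lo);
  /* ... and shift moves it (mask exposure) */
  *post_shift = target_lo - p_lo * (*post_scale);
}
```

Applying the resulting scale/shift to the look-up table rather than to the luminance mask itself, as the PR describes, keeps the expensive guided-filter result untouched.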

Known issues

Tbh., I had more problems with the anatomy of a darktable module (threads, params, data, gui-data, all the GTK stuff) than with the actual task at hand.

Known problems are:

  • Pulling the histogram/curve graph small causes DT to crash. I am clearly still missing something here. There is also no horizontal bar on mouseover to indicate that the graph can be resized.
  • When post_auto_align is used to set the mask exposure, the values for post_scale and post_shift are calculated in PREVIEW and used in FULL. However, other pixel pipes (especially export) calculate the mask exposure on their own and may get a different result that leads to a different output.

Things I noticed about the module (a.k.a issues already present in 5.0)

  • Resetting the module multiple times makes the histogram disappear until the user moves a slider.

Related Discussion

Issue #17287

@marc-fouquet (Contributor, Author)

The detailed explanation is here now: https://discuss.pixls.us/t/tone-equalizer-proposal/49314

@MStraeten (Collaborator)

macos build fails

~/src/darktable/src/iop/toneequal.c:1343:94: fatal error: format specifies type 'long' but the argument has type 'dt_hash_t' (aka 'unsigned long long') [-Wformat]
 1343 |     printf("toneeq_process PIXELPIPE_PREVIEW: hash=%ld saved_hash=%ld luminance_valid=%d\n", current_upstream_hash, saved_upstream_hash,
      |                                                    ~~~                                       ^~~~~~~~~~~~~~~~~~~~~
      |                                                    %llu
1 error generated.
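A portable fix for this warning is to print the 64-bit hash with the `PRIu64` macro from `<inttypes.h>` (or cast to `unsigned long long` and use `%llu`) instead of `%ld`. A minimal self-contained sketch, with `dt_hash_t` stood in by a local typedef:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for darktable's dt_hash_t; the error message says it is
 * unsigned long long on macOS, i.e. a 64-bit unsigned integer. */
typedef uint64_t dt_hash_t;

/* Format the debug line portably: PRIu64 expands to the correct
 * conversion specifier for uint64_t on every platform. */
static int print_hashes(char *buf, size_t len,
                        const dt_hash_t current_upstream_hash,
                        const dt_hash_t saved_upstream_hash)
{
  return snprintf(buf, len,
                  "toneeq_process PIXELPIPE_PREVIEW: hash=%" PRIu64 " saved_hash=%" PRIu64,
                  current_upstream_hash, saved_upstream_hash);
}
```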

@wpferguson (Member)

wpferguson commented Apr 6, 2025

Does this preserve existing edits?

Just imported some existing tone equalizer presets. Applying them has no effect; therefore I believe this version isn't backward compatible and will break existing edits.

@@ -126,35 +128,60 @@

DT_MODULE_INTROSPECTION(2, dt_iop_toneequalizer_params_t)
(Collaborator)

You need to update the version here to support old edits - legacy_params() only gets called if the stored version is less than the version given here (and new edits would get confused with old ones without the version bump).

@ralfbrown (Collaborator)

Does this preserve existing edits?

Looks like a missing version bump - the code is present to convert old edits, but darktable doesn't call it since it thinks they're still the current version.

@TurboGit (Member) left a comment

First very quick review (important as otherwise this will break old edits).

Not tested yet.

n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

// V3 params
(Member)

Please keep this as new_version = 2. The legacy_params() will be called multiple times until we reach the last version. The rule here is that we never have to touch old migration code; we just add a new chunk to go one step toward the final version.
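The stepwise migration pattern described here can be sketched as follows. The struct layouts and the helper's signature are stand-ins for illustration, not darktable's actual types; only the v2 → v3 chunk is shown. Because the new fields sit at the end of the struct, the old part can be copied wholesale and only the additions initialized.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in parameter structs: v3 extends v2 at the end only. */
typedef struct { float quantization, smoothing; } params_v2_t;
typedef struct
{
  float quantization, smoothing;            // v2 fields, same layout
  float post_scale, post_shift;             // new in v3
  int post_auto_align;                      // new in v3
} params_v3_t;

/* One migration step: copy the old fields verbatim, then set defaults
 * for the new fields that reproduce the old behavior exactly. */
static void migrate_v2_to_v3(const void *old_params,
                             void **new_params, size_t *new_params_size, int *new_version)
{
  const params_v2_t *o = old_params;
  params_v3_t *n = malloc(sizeof(params_v3_t));
  memcpy(n, o, sizeof(params_v2_t));        // keep every v2 field as-is
  n->post_scale = 0.0f;                     // neutral values: old rendering
  n->post_shift = 0.0f;
  n->post_auto_align = 0;                   // "custom"
  *new_params = n;
  *new_params_size = sizeof(params_v3_t);
  *new_version = 3;
}
```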


const dt_iop_toneequalizer_params_v2_t *o = old_params;
dt_iop_toneequalizer_params_v3_t *n = malloc(sizeof(dt_iop_toneequalizer_params_v3_t));

(Member)

Since all your new fields are at the end of the struct, just do:

memcpy(n, o, sizeof(dt_iop_toneequalizer_params_v2_t));

n->quantization = o->quantization;
n->smoothing = o->smoothing;

// V3 params
(Member)

And keep only this section below:

@marc-fouquet (Contributor, Author)

Thanks for your feedback. I will provide a new version in a few days which includes the changes that were suggested here.

Some advice on my two major roadblocks would be helpful:

  1. The auto-alignment (post_scale/post_shift) is calculated in PIXELPIPE_PREVIEW and I would like to use the results in all pipes. It is easy to get this data to the main window (PIXELPIPE_FULL) by storing it in g. However I also need a way to apply the same values to exports (which don't have g). Since the guided filter is not scale-invariant, the results would differ somewhat if I re-calculated the alignment with pipe-local data during export.
  • Does it make sense to store this data in p (from process running PIXELPIPE_PREVIEW) using fields that are not associated with GUI elements? These fields could be floats that are initialized to NaN and replaced with real values when they are available, commit_params would copy them to d.
  • Are there cases in which an image has never seen a GUI at all, but still needs to process in a pipe correctly?
  2. The problem with resizing the graph. I have only done two things:
  • In gui_init
  g->area = GTK_DRAWING_AREA(dt_ui_resize_wrap(NULL,
    0,
    "plugins/darkroom/toneequal/graphheight"));
  • and in _init_drawing
  g->allocation.height -= DT_RESIZE_HANDLE_SIZE;
  • After that, resizing the graph works, but no handle is displayed and it is possible to drag the graph too small, which results in a crash.
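One way to avoid the crash (a hypothetical guard of my own, untested against the actual GTK code, with the constant's value a stand-in) would be to clamp the drawable height to a floor after subtracting the resize handle, so later drawing code never receives a zero or negative height:

```c
#include <assert.h>

#define DT_RESIZE_HANDLE_SIZE 5   /* stand-in value; the real constant lives in dt's GUI code */

/* Hypothetical guard: never let the drawable area shrink below a sane
 * minimum once the resize handle has been subtracted. */
static int clamp_graph_height(const int allocated_height)
{
  const int min_height = 100;     /* arbitrary floor for this sketch */
  const int h = allocated_height - DT_RESIZE_HANDLE_SIZE;
  return h < min_height ? min_height : h;
}
```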

@paperdigits (Contributor)

@TurboGit please note there has been a lot of discussion in the thread on pixls. Please take that into consideration.

@AxelG-DE

I am late to the party and I will also keep silent again for private reasons IRL.

From my perspective, I do not have many issues with ToneEQ.

There is only one thing that has bothered me since day one of this module:

  • mask exposure/contrast compensation sliders are on masking tab
  • histogram is on the advanced tab
  • For precise attenuation one needs to forward/backward jump between those two tabs.

I pretty much disliked this and had long arguments with the author Aurélien Pierre at the time. He kept trying to convince me that it was not doable differently for this and that reason (as he usually did). That bar indicator on the masking tab did not guide me as precisely as he always claimed it would.

Nowadays I have mapped those two sliders to two rotaries on my midi-board (X-touch mini) and I can attenuate while looking at the advanced tab.

The luminance estimator / preserve details is another thing slightly above my head but from time to time I use it.

All the rest, I barely touch

After setting the mask, I hover the mouse on my image and scroll the wheel (for this I sometimes need to off/on the module as the rotaries seem to mess with the module-focus).

For me the “simple” tab can just be hidden totally. Besides, please do not clutter the advanced tab.

I hope my workflow (above) will not be destroyed and nor the old edits.

In other words: Never change a running system
Thank you!

@TurboGit (Member)

@paperdigits: Yes, I've seen and followed the discussion a bit, but it became so heated that it has almost no value to me at this point, so I don't follow it on pixls anymore. We'll see if part of it is moved here in a PR.

@wpferguson (Member)

Existing user defined presets no longer work.

As for the included presets, whenever I apply one the histogram contracts to take half of the horizontal space and moves to the left edge. If I move the histogram back where I want it and apply a preset again the same problem occurs.

@wpferguson (Member)

Showing the mask with preserve details shows no difference between no, EIGF, and averaged EIGF.

Masks are vastly different between tone equalizer on current master and this PR.

New masks...

new 1
new 2
new 3
new 4
new 5

versus current master

old 1
old 2
old 3
old 4
old 5

@jenshannoschwalm (Collaborator)

  • Does it make sense to store this data in p (from process running PIXELPIPE_PREVIEW) using fields that are not associated with GUI elements? These fields could be floats that are initialized to NaN and replaced with real values when they are available, commit_params would copy them to d.

  • Are there cases in which an image has never seen a GUI at all, but still needs to process in a pipe correctly?

Ad 1) Nope, you must not use the parameters for keeping data while processing the module. We have the global module data; you might use that to keep data shared by all instances of a module. But you'd have to implement some locking and "per instance" handling on your own. Unfortunately we currently don't have "global_data_per_instance" available.

Ad 2) Yes, exports and the cli interface.

@marc-fouquet (Contributor, Author)

@wpferguson Are you sure that the settings were the same?

The version in this PR had the bug with the version number introspection, so it did not convert existing settings correctly. It is probably better to wait for a new version to test this.

@marc-fouquet (Contributor, Author)

  • Are there cases in which an image has never seen a GUI at all, but still needs to process in a pipe correctly?

Ad 2) Yes, exports and the cli interface.

So it probably does not make sense at all to depend on values that come from the GUI during export.

The core problem is the scale-dependence of the guided filter. An alternative approach for exporting would be to create a downscaled copy of the image (sized like the GUI preview), apply the GF, get the values I need and then apply them to the full image - essentially simulating the preview calculation. Not the most efficient approach, but it would only be needed during export when auto_align is used.

@rgr59

rgr59 commented Apr 12, 2025

Not sure I understood the last post correctly, but if I did, in my opinion there would be a problem.

Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)

@jenshannoschwalm (Collaborator)

So it probably does not make sense at all to depend on values that come from the GUI during export.

Definitely not, it won't generally work.

The core problem is the scale-dependence of the guided filter.

I didn't check/review your code in its current state, but are you sure you set it up correctly? There are some issues, but we use guided-filter feathering on masks all over and the results are pretty stable for me.

About keeping data/results per module instance: this has been daunting me too on some other project. I will propose a solution pretty soon that might help you ...

@marc-fouquet (Contributor, Author)

Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)

As far as I understand it, the preview thread calculates the navigation image shown in the top left of the GUI. However the actual image that the thread sees is much bigger (something like 1000px wide), so the navigation image must be a downscaled version. I hope (but have not yet checked) that the size of this internal image is constant.

@marc-fouquet (Contributor, Author)

I didn't check /review your code in it's current state

The code in the PR is outdated and has known problems; not much use looking at it now. I will update it as soon as I have a somewhat consistent state.

but are you sure you did setup correctly? There are some issues but we use feathering guide all over using masks and results are pretty stable to me.

Of course it is possible that I might have broken something, but as far as I am aware, I did not change anything about the guided filter calculation but only modified what happens with the results.

About keeping data/results per module instance: this has been daunting me too on some other project. I will propose a solution pretty soon that might help you ...

This sounds nice, but my next attempt will be trying to avoid this.

@ralfbrown (Collaborator)

Note that your code can figure out how much the image it has been given has been downscaled in order to determine scaled radii and the like to simulate appropriate results. A bunch of modules with scale-dependent algorithms do this. It isn't perfect, but does yield pretty stable and predictable results.

Look for piece->iscale and roi_in->scale in e.g. sharpen.c and diffuse.c. Most modules access these in process(), but it looks like tone equalizer actually accesses and caches this info in modify_roi_in().
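The compensation ralfbrown describes can be sketched like this. The struct definitions are stand-ins for darktable's pipe types, and the exact formula varies per module; this just shows the idea of converting a radius given in full-image pixels into the current buffer's pixels.

```c
#include <assert.h>
#include <math.h>

/* Stand-ins for the relevant fields of darktable's pipe structs. */
typedef struct { float iscale; } piece_t;  /* scale of the pipe's input */
typedef struct { float scale; } roi_t;     /* scale of the region of interest */

/* Convert a radius defined at full resolution into buffer pixels:
 * a preview buffer at half size should use half the radius so the
 * filter behaves similarly at every pipe size. */
static float scaled_radius(const float radius, const piece_t *piece, const roi_t *roi_in)
{
  const float zoom = roi_in->scale / piece->iscale; /* effective downscaling */
  return radius * zoom;
}
```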

@marc-fouquet (Contributor, Author)

I have been playing around with the tone equalizer some more and ran into a bit of a roadblock.

To recap, what I do is:

Image => Grayscale + Guided filter => Mask => Mask Histogram => Percentile values => Auto Scale/Shift values

The buffers in the different pipelines (PREVIEW with GUI and the export) have different sizes and the guided filter is scale-dependent, so when I compute statistics over the mask, a systematic error is expected. The final calculated values deviate so much that there is a visible difference between the image that is shown in DT and the export.

My idea to overcome this was to downscale the image during export to the same size as the preview and use the downscaled version to calculate the statistics (essentially simulating the preview pipe during export). However, the results were still different, even though the calculation was done with images of the same size and with the exact same parameters.

Then I added debugging code to write the internal buffers into files and I discovered the reason:

(screenshot: side-by-side crop comparison)

  • The image on the left is a crop from the input of the PREVIEW pipeline.
  • The image on the right is from the export pipeline. The input buffer was downscaled to the same size as PREVIEW using dt_interpolation_resample. I tested the different interpolation types; this example uses DT_INTERPOLATION_BILINEAR, which to my understanding should be the least sharp option.

However the left image (PREVIEW) is still a lot blurrier than my downscaled version. I would have expected them to be mostly the same.

Using a blurry version of the image is not ideal for my purposes. However, the bigger problem is that I need to know what happened to the image in the preview pipe if I want to replicate the same steps in the export pipe.

I tried to find the relevant parts in the DT code, but had no success so far. I am also interested in the calculation of the size of the PREVIEW pipe image; it seems like the height fluctuates the least and is between 880 and 900 pixels.

@ralfbrown (Collaborator)

The demosaic module downscales if the requested output dimensions to the following module are small enough. FULL is almost certainly getting more data out of demosaic than PREVIEW, which reflects in the sharper image after downscaling. Run with -d perf to see the image sizes being passed along the pixelpipe.

@marc-fouquet marc-fouquet force-pushed the 2025-04_toneequal_preview branch from 7c707c2 to 64fc0de Compare May 1, 2025 08:32
@marc-fouquet (Contributor, Author)

I finally have a version that I consider good enough to show publicly. It would be nice if someone would take the time to look at my code; e.g. there are a few "TODO MF" markers with open questions.

The module should be (if there are no bugs) compatible with old edits. I have checked that with parameters "align: custom, scale: 0, shift: 0" the results are the same as in 5.0.1.

Most of my changes were not that much effort. Scaling the curve after the guided filter, coloring the curve, changing the UI (even though it still has a few issues) was not that difficult. The one thing that was hard and got me stuck for weeks is the auto alignment feature:

  • If requested by the user, the module should determine the mask exposure and contrast automatically.
  • These values should automatically adapt to upstream image changes.
  • The result should be the same during GUI operations and during export.

The data that is available to the pixelpipes during GUI operations is different from the export. The FULL pipe may not see the whole image, so it is not suitable to calculate mask exposure/contrast.

The PREVIEW pipe sees the image completely, but it is scaled-down and pretty blurry. The guided filter is sensitive to both of these effects, so statistics on the PREVIEW luminance mask deviate significantly from statistics of the whole image.

The (unfortunately not so nice) solution to this problem is to request the full image in PIXELPIPE_FULL when necessary. Of course this has an impact on performance. However, in practice I found it acceptable, and it only occurs when the user explicitly requests auto-alignment of the mask - so users who use the module like before should not experience performance degradation (unless I accidentally broke something, e.g. OpenMP).

Known issues:

  • UI graph resizing is still broken. It is possible to drag the graph too small and crash the program.
  • In the auto-align case, the UI needs both PIXELPIPE_PREVIEW and PIXELPIPE_FULL to be completed to draw the histogram. If one is missing (which can easily happen) no histogram is drawn. (Even in 5.0.1 there are similar situations, e.g. no histogram is drawn after resetting the module twice.)
  • My version was forked from master a while ago. I have looked through the changes on master and applied most of them. The folders for the default presets are still missing, but these are not a problem. The upstream change detection (hash) has been modified on master; in my version I had also noticed that the original code did not work correctly, but my fix is different.

Other notes:

I have seen #18722 and to be honest I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this was possible, I would switch the luminance-mask calculation to this input when the user requests auto-alignment, so re-calculating the mask would not be necessary as often.

Providing modules with access to an early version of the image would also have other use-cases, like stable parametric masks that don't depend on the input of their respective module.

@TurboGit (Member)

TurboGit commented May 3, 2025

However in practice I found it acceptable and it only occurs when the user explicitly requests auto-alignment of the mask

And only once when the auto-align is changed, right?

  • UI graph resizing is still broken. It is possible to drag the graph too small and crash the program.

Indeed, you can even resize the graph to a negative size :)

  • The folders for the default presets are still missing, but these are not a problem.

Indeed, we need the hierarchical preset naming back. Should be easy.

I have seen #18722 and to be honest I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this was possible, I would switch the calculating the luminance mask to this input, if the user requests auto-alignment, so re-calculating the mask would not be necessary that often.

A question for @jenshannoschwalm I suppose.

During my testing I found the auto-align a very big improvement indeed. I would probably have used "fully fit" as default, or maybe "auto-align at mid-tones". BTW, since the combo label is "auto align mask exposure" I would use simplified naming for the entries:

  • custom
  • at shadows
  • at mid-tones
  • at highlights
  • fully fit

@TurboGit (Member) left a comment

Some changes proposed while doing a first review.

To me the new ToneEQ UI is nice and better than what we have in current master.

I would suggest addressing the remaining minor issues (crash when resizing the graph) plus the changes proposed here, and doing a clean-up of the code to remove dead and/or commented-out code, probably also the debug code; then I'll do a second review.

From there we should also have some other devs test it; such drastic changes may raise issues with others. Again, to me this is a change in the right direction.

n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

*new_params = n;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v2_t);
*new_version = 2;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v3_t);
(Member)

Looks like this is wrong; we do a one-step update from v1 to v2. As said previously, we don't want to change the older migration path.

(Member)

As said previously, this change should be reverted. We want to keep the step migration from v1 to v2.

@TurboGit TurboGit added this to the 5.2 milestone May 3, 2025
@TurboGit TurboGit added the priority: medium core features are degraded in a way that is still mostly usable, software stutters label May 3, 2025
@s7habo

s7habo commented Jun 9, 2025

I will have a look at the histogram alignment for "fully fit".

I don't think fully fit is the problem. The histogram and the changes in all modes do not correlate properly. At least not in an intuitive way.
I am in the process of trying to understand this using various examples.

@s7habo

s7habo commented Jun 9, 2025

I made a comparison with the exposure module by masking an area with highlights

grafik

...and reducing the exposure there by 1 EV:

grafik

In legacy mode you get a similar result by darkening the corresponding area “linearly”:

grafik

In fully fit or custom mode the area remains much brighter:

grafik

I only get a similar result when I use at shadows mode and increase _mask contrast_ to +1:

grafik

@TurboGit (Member)

TurboGit commented Jun 9, 2025

In legacy mode you get a similar result by darkening the corresponding area “linearly”:

Isn't this wrong (or at least not expected)? I mean, to match an exposure of -1 EV the ToneEq curve should not be a linear down slope but a straight line at -1 EV. My point is that we had a habit of working around the fact that ToneEq had non-linear masking, fine by me... But if the new masking is not exactly the same, is that a big deal? Maybe instead of a uniform down slope we now need a slope more in S form.

Again, I'm not in favor of one or the other solution, just trying to understand where we come from and what we will have next. My only strong point is that an OLD edit must be kept fully identical to ensure we get the same export.

@s7habo

s7habo commented Jun 9, 2025

Isn't this wrong (or at least not expected)? I mean, to match an exposure of -1 EV the ToneEq curve should not be a linear down slope but a straight line at -1 EV.

That is indeed the case. Here we have a linear gradient and I have reduced the exposure to -2 EV. I've also drawn a red line to be able to compare it with TE:

grafik

legacy TE:

grafik

custom TE:

grafik

Both behave as expected. It becomes more difficult when you want to darken a certain area.

Here with the exposure module and the parametric mask:

grafik

Legacy with mask exposure compensation:

grafik

Fully fit:

grafik

custom with mask brightness/shift histogram:

grafik

If I only want to darken highlights evenly (legacy):

grafik

That works well.

Here (custom) this is not possible:

grafik

@marc-fouquet (Contributor, Author)

Thanks a lot for the thorough analysis. I need some time to figure this out.

@TurboGit (Member)

TurboGit commented Jun 9, 2025

@s7habo : Indeed, something wrong... Thanks for the detailed analysis.

@TurboGit (Member)

One thing I found out is that in auto-fit mode the ToneEq module recomputes the mask on every change in other modules. This of course has a big impact on responsiveness and is something that we don't want in dt: a module which changes itself when other modules are adjusted.

My proposal for the auto-fit mode would be to compute it the first time the module is enabled and keep it as-is afterwards. From there we can:

  1. Have a button to force the recomputation of the mask.
  2. When auto-fit is done, the mode is changed to Custom, allowing the user to adjust the mask if needed. And if a new auto recomputation is needed, just select the auto-fit mode, which will again be reset (in the combo box) to Custom when done.

I think I prefer 2, but we can open the discussion.

What do you think?

@marc-fouquet (Contributor, Author)

@s7habo I think I have fixed the bug. But I will take my time before I upload a new version.

@TurboGit This is a new requirement, but I can see where you are coming from.

But I don't like a solution that effectively means going back to the magic wands. Suggestions:

  1. A checkbox. The module will auto-adapt to upstream changes when it is checked.
  2. Another idea that I had mentioned above is using a "stable early in the pipe" version of the image for calculating the TE mask. This would need to be a version after demosaic and denoise but before anything that users would change regularly. Is there any way to make this possible?

@AxelG-DE One thing that I thought about... what is the reason for leaving gaps left and right of the histogram? This way the values that you set at 0 and at -8 EV are not fully applied to any pixels.

(By the way, your bad experience was probably due to the bug. I think your test from the video will work better with the next version.)

@marc-fouquet (Contributor, Author)

@TurboGit Maybe removing the automatic adaptation to upstream changes is not that bad. It has caused a number of problems for me in the code. I will think about it and suggest something.

@AxelG-DE

@marc-fouquet

@AxelG-DE One thing that I thought about... what is the reason for leaving gaps left and right of the histogram? This way the values that you set at 0 and at -8 EV are not fully applied to any pixels.

  1. the result is "better" the way I did it (better as in smoother transitions; see my edits, IIRC one can see that in my video as well)
  2. as it is computed today, fully fit looks to me like it is kind of clipping (see screenshots below), so I am not very sure; there may be data to the right of it (yes, I know the yellow triangle is not there, but I like some safety buffer)
  3. actually this is my understanding of what the "old" wands tried to achieve, when it is said "try to distribute to represent 80%..." (currently I cannot open the docs to copy the exact wording)
  4. like I said before, the original author Aurélien Pierre at the time recommended adjusting the mask so that the histogram spreads from about -1 to -7 EV, keeping the +0 and -8 EV nodes on the horizontal +0 of the Y-axis. Experience dictates that this results in better transitions

grafik
grafik

sample pic no ToneEQ
grafik
A) fully fit
grafik
B) "my way" but even not pulled the nodes 0 and -8 to the center line
grafik
C) and finally "my way" incl. node 0 and -8 nailed to the center
grafik

Snapshot, left is A) right is B)
grafik

Snapshot, left is B) right is C) [admittedly one can argue about this example, but I did not spend much effort now on a really thorough edit; you see it more clearly when you compare snapshots the "old" way and slide the separator over it...]
grafik

@AxelG-DE

I noticed another behaviour: with this PR in place, when moving the mouse cursor (ToneEQ active) out of the image, e.g. down to the thumbnails (with the image zoomed), the main image jumps up and down by quite a few pixels.

While typing this, the code was already removed. I injected this PR again to reproduce, but the phenomenon is gone.

@AxelG-DE

another example (with current ToneEQ)

left is where I left the nodes as you can see them in the module; right is where I dropped nodes +0 and -8 back to the ground line (sorry, no pic of the module)

grafik

@marc-fouquet (Contributor, Author)

@AxelG-DE The number 80% is outdated. It was mentioned in videos and it is still in the documentation, but in the current (5.0) code it is 90%. Things also work a little differently.

The bar-thing on the masking page represents not the whole histogram, but only the center 90%. Aurélien's magic wands try to align this center 90% to the -7 to -1 EV range. They often fail, but that is a different story.

The actual histogram will in many cases go outside the borders of the -8 to 0 range. This happens with Aurélien's wands when they work as intended, and also with my "fully fit". As far as I understood Aurélien in his videos, he does not consider this a problem, as long as it affects only a relatively small number of pixels. The outside pixels are treated the same as the pixels exactly at -8 or 0. TE starts to display warnings when the center 90% leaves the -8 to 0 range.
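For readers following along, the percentile-alignment idea can be sketched numerically. This is a rough illustration of the principle only, under my own assumptions (the function name, the midpoint-matching math, and the normal-distributed test mask are all invented here); it is not darktable's actual wand code:

```python
import numpy as np

def align_mask_exposure(mask_ev, target_lo=-7.0, target_hi=-1.0, center=0.90):
    """Find an exposure shift and contrast scale so that the central
    `center` fraction of the mask histogram (in EV) lands on
    [target_lo, target_hi]. Rough sketch of the percentile idea only."""
    tail = (1.0 - center) / 2.0
    lo, hi = np.quantile(mask_ev, [tail, 1.0 - tail])
    # Contrast: stretch/compress the observed spread to the target spread.
    scale = (target_hi - target_lo) / (hi - lo)
    # Exposure: move the midpoint of the spread onto the target midpoint.
    shift = (target_lo + target_hi) / 2.0 - scale * (lo + hi) / 2.0
    return scale, shift

# Synthetic mask whose central 90% spans roughly -9 to -2 EV.
rng = np.random.default_rng(0)
mask = rng.normal(-5.5, 2.0, 100_000)
scale, shift = align_mask_exposure(mask)
aligned = scale * mask + shift
# The central 90% of `aligned` now spans -7 to -1 EV; the tails can
# still fall outside -8..0, matching the behaviour described above.
```

Note that the tails of `aligned` deliberately stay where they land: the sketch, like the wands as described, only constrains the central 90%.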

I think the "curve back to 0" discussion is situational. I have messed around with some pictures here and on a sunset picture it was clearly not the right approach. I also remember that @s7habo often uses a curve that is just a straight diagonal line down, so it does not curve back towards 0 at all.

@marc-fouquet
Contributor Author

Responding to @TurboGit's last comment: I will make some bigger changes to the module. Apparently I tried to do things that do not fit well into darktable's overall philosophy. The next version will internally be much more similar to the old TE and it will lose some functionality, especially the auto-adaptation to upstream changes. This means no more elaborate calculations in the EXPORT pipeline that have to exactly match the same calculations in GUI mode; also, the module will no longer need to request the full image in the FULL pipeline. All of this really complicated stuff was only necessary for the live adaptation.

On the other hand I think I have some good ideas of things to add instead that fit DT better and improve usability further.

This is a bigger rework and it will take some time. Since some people like to try the new module right now, I don't want to leave the known broken version that produces incorrect results as the newest commit, giving people a wrong impression of the module's capabilities. So the above commit is a quick hotfix for this single bug.

@s7habo and @AxelG-DE: Please check if it works better for you now. The new modes should now produce results that are similar to legacy mode.

Apart from that, I would like to ask everyone to wait for the next version before giving additional feedback - especially regarding the UI.

@AxelG-DE

@marc-fouquet I understand your points. "Curve back to 0" is something I also often ignore. However, spreading the histogram from 0 to -8 gives results that are less pleasing compared to, let's call it, 0.x to -7.y (see above); in that case the corner pixels are exactly not treated the same, but have a reverse slope. Thus I still think it would be good to squeeze the fully fit just a tad.

@AxelG-DE

> @s7habo and @AxelG-DE: Please check if it works better for you now. The new modes should now produce results that are similar to legacy mode.

At your service. Likely has to wait until Sunday. Lots to do IRL :)

@AxelG-DE

AxelG-DE commented Jun 13, 2025

@marc-fouquet
I was too eager :-) (but once my wife finishes her day job, she will get demanding, unfortunately for homework ;-) )
fresh build:
[screenshot]

fully fit:
[screenshot]

legacy:
[screenshot]
missing histogram:
[screenshot]

side by side:
[screenshot]


@AxelG-DE

another comparison between fully fit (left) and legacy (right). See that dark "ring" around the sun on the left, fully fit...
[screenshot]

@marc-fouquet
Contributor Author

@AxelG-DE Can you try to get results that you like with the custom controls? The same parameters/curve will not be identical to legacy mode, but it should be possible to achieve a similar result with the same process. In the bugged version, the changes in the extreme highlights/shadows were much too weak.

I am not so concerned with "fully fit" right now, as it will have to turn from a mode into a button (like the magic wands) anyway. So "fully fit" will be a starting point, but if you think that the histogram is too wide, you will have the sliders to compress it.

Also, I have an idea what might cause the results to be better if the histogram is a bit smaller. I plan to investigate this further, but likely not in the next version but at a later point.

@AxelG-DE

I tried with smoothing diameter, and you can actually get the dark zone around the sun a bit better, but it easily washes out the picture.

Edge refinement, on the other hand, does not help much.

What I cannot achieve is the shadows part. Here the fully fit mode, to me, feels like it does almost nothing.

@jenshannoschwalm
Collaborator

@AxelG-DE @marc-fouquet

I couldn't really follow the internal changes and discussions, but the last example with the dark ring around the sun might point to a fundamental problem with your approach: when and how do you apply the guided filter?

@marc-fouquet
Contributor Author

@AxelG-DE @jenshannoschwalm This is the inverted brightness effect from my very first post on the forum (Pixel A should be brighter than B, but the downward slope of the curve is so steep that B is brighter than A). This is also what the red curve warning is for.

I have played around with a similar image, it is possible to run into the same effect on 5.0.1.

TE has always produced various artefacts when pushing things far, and these kinds of sunset pictures do this. I will continue keeping an eye on this, but at the moment I don't think that this is an indication of a bug in the module.
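The inversion is easy to reproduce on paper: the module's mapping in EV space is roughly output = input + gain(input), which stays monotonic only while the curve's downward slope is shallower than -1. A hypothetical sketch (the piecewise curve and the numbers are made up for illustration, not taken from the module):

```python
def apply_gain(ev, gain):
    """Tone-EQ-style mapping in EV space: output = input + gain(input).
    Monotonic only while the slope of `gain` stays shallower than -1."""
    return ev + gain(ev)

# Hypothetical gain curve: shadows lifted by +3 EV, highlights untouched,
# connected by a ramp with slope -1.5 between -4 and -2 EV.
def steep_gain(ev):
    if ev <= -4.0:
        return 3.0
    if ev >= -2.0:
        return 0.0
    return 3.0 * (-2.0 - ev) / 2.0

a, b = -2.5, -3.5                    # pixel A is 1 EV brighter than pixel B
out_a = apply_gain(a, steep_gain)    # -2.5 + 0.75 = -1.75
out_b = apply_gain(b, steep_gain)    # -3.5 + 2.25 = -1.25
# out_b > out_a: B now comes out brighter than A, the inversion described above.
```

Since 1 + slope = 1 + (-1.5) < 0 on the ramp, any two pixels inside it swap their tonal order, which is exactly what the dark ring around the sun shows.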

@s7habo

s7habo commented Jun 14, 2025

@marc-fouquet The TE now works as expected. Thanks for the quick fix!

> TE has always produced various artefacts when pushing things far, and these kinds of sunset pictures do this. I will continue keeping an eye on this, but at the moment I don't think that this is an indication of a bug in the module.

Yes, I can confirm that. That was one of the reasons why I don't like to have too much variation in the curve.
A less steep and/or linear slope gives the best results.

[screenshot]

I have a suggestion in this regard. I just need to think carefully about how best to formulate it.

@AxelG-DE

I also think this is not a bug. From my (user's) point of view, it just comes from the not-squeezed histogram :)

If I start with fully-fit first, then switch to legacy and do not change the compression, I am rather sure I will get the same thing.

(Actually I found another funny thing: when lifting the shadows in fully-fit and then moving to the sun, the mouse cursor showed the striped overexposure indicator, but there was no triangle on the histogram yet.)

@marc-fouquet
Contributor Author

> (Actually I found another funny thing: when lifting the shadows in fully-fit and then moving to the sun, the mouse cursor showed the striped overexposure indicator, but there was no triangle on the histogram yet.)

Thanks, I have added this to the list of things to look at.

I also just found one more issue. The red warnings do not react correctly to legacy contrast boost.

@s7habo

s7habo commented Jun 14, 2025

What works much better in this version of TE than in legacy is the mask contrast / scale histogram function.

@marc-fouquet You have offered different modes for these purposes: full fit, at shadows, at highlights, at mid-tones, and custom.

Wouldn't it be possible to have only one mode for mask contrast instead of these different "focus" modes, with an additional slider next to mask brightness and mask contrast to set the fulcrum for mask contrast dynamically?

Let me illustrate this with an example. Here I would like to darken the highlights:

[screenshot]

I now turn on the TE, it centers the histogram and we have an additional slider with which we can control the fulcrum of the mask contrast. This is displayed, for example, with a vertical line in the histogram (here both in red):

[screenshot]

Now I darken the highlights and move the mouse over the area I want to protect from darkening (which should not be changed):

[screenshot]

Now I put the fulcrum right there and increase mask contrast, which gives me even better darkening of highlights:

[screenshot]

In this way, we only need one mode and three sliders: mask brightness, mask contrast, and mask fulcrum.
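For what it's worth, the three sliders map naturally onto an affine transform of the mask in EV space. A minimal sketch of the idea, with the function and parameter names invented here for illustration:

```python
def adjust_mask(mask_ev, brightness=0.0, contrast=1.0, fulcrum=-4.0):
    """Shift the mask by `brightness` EV and scale its contrast around
    `fulcrum`: values at the fulcrum stay put, values away from it move
    away from it (contrast > 1) or towards it (contrast < 1)."""
    return fulcrum + contrast * (mask_ev - fulcrum) + brightness

# Fulcrum placed on the area to protect (here -2 EV, the mouse-over value
# from the example): raising contrast leaves that area untouched while
# pushing the shadows further down the mask.
protected = adjust_mask(-2.0, contrast=1.5, fulcrum=-2.0)  # stays at -2.0
shadow    = adjust_mask(-6.0, contrast=1.5, fulcrum=-2.0)  # moves to -8.0
```

The three sliders are then exactly the three parameters, and the "focus" modes reduce to presets for where the fulcrum sits.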

@marc-fouquet
Contributor Author

I like the idea. I will have to think about the details.

@AxelG-DE

> That was one of the reasons why I don't like to have too much variation in the curve.
> A less steep and/or linear slope gives the best results.

BTW: I did those only to test and clarify. It was clear that high dynamic range is the most challenging. Usually I would have raised the exposure first, at least by half of what is needed, and then used filmic to compress the highlights 😄

Labels
documentation-pending · feature: redesign · priority: medium · release notes: pending · scope: image processing · scope: UI