
Attempt to restart noise density averaging when frequency is changed #119

Open
wants to merge 1 commit into
base: main
Conversation

scottnewell
Contributor

No idea if this is the right way to reset. Freeing the array seems brutal.

@ka9q
Owner

ka9q commented Feb 12, 2025

I think that's too aggressive; even a tiny change in frequency will force averaging to restart, and the long time constant is needed to get an accurate estimate. This is a long-standing problem I'm not sure how to fix without a lot of ad-hoc constants. A while ago I surveyed various trend estimators, looking for one that could detect non-stationary statistics, which is what we're trying to do here, i.e., tell whether a change in the noise estimate is just a normal random fluctuation or indicates an actual change in the average. But I didn't find anything that looked like the obvious answer, so I've just tolerated the slow adaptation after a big frequency change. Besides, noise is usually flat on VHF/UHF, so I don't want to restart averaging there at all on retuning.
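For reference, the per-bin exponential averaging with a long time constant discussed here can be sketched as a single-pole IIR filter. The 0.001 smoothing constant is the one quoted later in this thread; the macro and function names are illustrative, not taken from ka9q-radio:

```c
/* Sketch of a single-pole IIR ("exponential") averager.  NOISE_ALPHA is
   the 0.001 smoothing constant mentioned in this thread; with one update
   per FFT block, the time constant is roughly 1/NOISE_ALPHA blocks. */
#define NOISE_ALPHA 0.001f

static float smooth_noise(float avg, float energy) {
  /* Nudge the running average a small fraction of the way toward the
     new bin energy sample. */
  return avg + NOISE_ALPHA * (energy - avg);
}
```

The long time constant (~1000 blocks) is what makes the estimate accurate in steady state, and also what makes it slow to recover after any restart.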

@ka9q ka9q closed this Feb 13, 2025
@ka9q ka9q reopened this Feb 21, 2025
@ka9q
Owner

ka9q commented Feb 21, 2025

I've been doing some more experiments here. I didn't want to restart all the noise averaging even for small changes in frequency, so I tried shifting the bins by the amount of the frequency change so that they could continue to average the same radio frequencies unaffected. Then I reinitialized each newly "exposed" bin to the first energy measurement without smoothing. This works most of the time, but the estimated noise level can drop by as much as 10 dB, followed by a slow increase to the proper average. I've been pursuing several explanations, but none so far completely explains the effect.
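The bin-shifting idea can be sketched as follows, assuming the retune offset is expressed in whole bins and using illustrative names (this is not the actual ka9q-radio code):

```c
#include <string.h>

#define NBINS 1024  /* illustrative bin count */

/* Shift the per-bin noise averages by the retuning offset so that
   surviving bins keep averaging the same radio frequencies, and seed
   each newly exposed bin with its first raw energy measurement. */
static void shift_noise_bins(float avg[NBINS], const float energy[NBINS], int shift) {
  float shifted[NBINS];
  for (int i = 0; i < NBINS; i++) {
    int j = i + shift;                /* bin that previously covered this frequency */
    if (j >= 0 && j < NBINS)
      shifted[i] = avg[j];            /* carry the old average along */
    else
      shifted[i] = energy[i];         /* newly exposed: start from the raw sample */
  }
  memcpy(avg, shifted, sizeof shifted);
}
```

The "exposed" bins seeded from a single raw sample are where the ~10 dB dip described above shows up.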

  1. The energy in a single FFT bin follows an exponential distribution, i.e., chi-squared with 2 degrees of freedom (the corresponding amplitude is Rayleigh). It's skewed compared to a Gaussian, with the median and mode (most common values) below the mean; the mean is dominated by a few relatively large energy values that are unlikely to occur as the very first sample after a reset. I.e., the estimator is biased low at the start, and it takes time to recover to the true mean.
  2. I tried compensating by biasing a new bin by +10 dB so it wouldn't likely be considered as the least-energy bin, but that doesn't work all the time either.
  3. I considered possible floating-point truncation effects in the exponential averaging: a small difference between the new value and the average is made even smaller by multiplication by a constant (0.001) and then added to the average, and there might not be enough bits of precision to actually move the average. So I tried double precision for everything, but it doesn't seem to make a big difference either. (This kind of stands to reason, as large errors in the estimate should still be corrected fairly quickly.)
  4. Instead of zeroing the "freshly exposed" bins after a retuning, I simply left them as before. This actually helps a lot for small retunings, probably because there's a lot of correlation between adjacent bins so the previous value is a better starting estimate than the very first new energy sample.
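Point 1 can be checked in closed form: if I and Q are Gaussian, bin energy is exponentially distributed, so a single fresh sample is more likely than not to land below the long-term mean, and the median sits about 1.6 dB low. A small numeric sketch (illustrative, not part of the codebase):

```c
#include <math.h>

/* For a unit-mean exponential (chi-squared, 2 DoF) energy distribution:
   probability that one sample falls below the mean is the CDF at 1,
   i.e., 1 - exp(-1), about 63%. */
static double prob_below_mean(void) {
  return 1.0 - exp(-1.0);
}

/* The median of an exponential is ln(2) times the mean, so a "typical"
   first sample starts roughly 10*log10(ln 2) = -1.6 dB below the mean. */
static double median_offset_db(void) {
  return 10.0 * log10(log(2.0));
}
```

This accounts for part of the low starting bias, though not by itself for a full 10 dB dip.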

All this makes me wonder whether any of my changes are really helping at all; I might be better off just leaving things as they were. I'm looking at adaptive smoothing strategies but many appear rather ad hoc and/or difficult to implement. I'll keep playing with this.
