Different voltage readings for connected voltage inputs #88
It's very odd to see that much of a difference between the 2 voltage readings when they are coming from the same AC transformer. I would recommend calibrating the voltages separately, and changing the calibration values so they both read about the same. Otherwise the power reading is going to be a bit off on one side of the meter.
Makes sense - thanks!
I'd like to second this. It is not just a calibration issue; maybe a separate issue should even be created. I did a simple test with three meter boards, all connected to similar AC power supplies (they could differ a little, of course). I changed calibration values until I got very close readings from all meters. The values (for the ESPHome config) were as follows:
Then I power-cycled all modules (moved them to another extension cord) and the voltage difference between the lowest and highest reading changed from less than 0.5V to around 5V! The M90E32AS Vrms voltage accuracy is ±0.5% by spec, which means it was pure luck that these calibration values gave me the same voltage readings. Now I'm troubleshooting this issue with all values. For this graph I measured the voltage every second.
I first suspected some sort of initialization problem because not all setup registers are written at boot. However, per the spec this should not be a problem, and default values are used in that case. I have done many chip restarts and read all registers afterwards, but haven't seen any difference in the config registers. Another idea came from the M90E32AS datasheet, section 6.4 POWER ON RESET TIMING, where it says:
This meter connects
@Cougar that is very interesting. I haven't observed that behavior myself. You may be onto something with the reset timing. Are you doing a reset via software or power cycling? If software, I'm wondering if it is not updating the calibration values properly. You may want to try a reset via software and compare to power cycling to see if there is a difference. I'm going to bet there is, especially with the second IC (assuming there is a natural bit of delay in initializing the second IC). You are correct that the board puts the metering ICs in normal mode only, but I haven't seen problems with this in the past. It may be a matter of reinitializing and writing to the config register with a POR. The code for that in ESPHome is located here.
@CircuitSetup My first post was a power cycle. I moved the whole setup to another room and lost all my "good" calibrations. Most resets are software resets that ESPHome does when uploading new firmware. I hope to find something, and I keep adding more debug output to this firmware. Now I dump all registers to the log after chip initialization. The only differences are the actual measurement registers; all config remains unchanged. Now I did a 10 sec power cycle for all three meters and this is how it looks: "Line Voltage" is UrmsA of the first IC and "Line2 Voltage" is the second IC. I also tried to set all the offset registers based on this code, but it didn't change anything.
I would expect the 3 voltage measurements per IC to be similar to each other, since they share the same voltage dividers (there's a set of voltage dividers for each IC). Depending on when you received your meter, those resistors are either 1% or 0.5%; more recent batches are 0.5%. I'm guessing the variance between individual IC voltage readings is either the IC itself, or the small delay before ESPHome actually takes the reading. Either way, if you're reading power from the meter, and each voltage is calibrated, you should still get very accurate results. FYI, I believe the offset registers only apply to the accumulated power registers. The datasheet is a bit confusing regarding them and how they should be used.
I 100% agree. I read only the first voltage from every chip. I will try to read all three to see if there is any inconsistency between them too. Of course the reading times are slightly different, but it would still be visible from the graph if the difference always had the same offset. I don't think voltage divider accuracy is the problem right now, because the change always happens with a chip reset. I'll add some more knobs to ESPHome and check if it also happens if I just send
If I understand correctly, all config goes to flash and you don't need to resend it at all. This should be the reason for the config CRC register and the dedicated WarnOut pin. Isn't that how simple pulse-counting meters with this chip (and no MCU) should work?
I tried to reset the chips during operation and this didn't change anything. However, a whole ESPHome reset almost always did. My transformer is 9.0V 0.67A (6.0W), which should be enough for an ESP32. But the secondary resistance of 2.5 ohm is too big for a stable output given ESP32 current peaks, which per the datasheet are up to 240 mA for WiFi alone. That is at least a 0.6V drop already. Most probably the difference between the two IC readings comes from the timing of whether the ESP32 WiFi is transmitting or not. When I removed the ESP32 from the PCB and powered it from USB (keeping only M90E32AS power from AC), the difference between all voltage readings was only around 0.2-0.4V. Now I'm thinking, should I remove the rectifier from the PCB, keep AC only for voltage sensing, and use a separate USB power adapter to power the boards to get an even more precise reading? It would be great to have dedicated jumper traces for such a configuration in a future PCB revision. Or even better, change JP12 and JP13 to three-position jumpers so that it is possible to cut the J4 AC socket from VA+ and VA- and use J3 for both ICs.
I am not sure whether it is the same issue or not, but it looks like it may be. I have a tiny 9V PCB-mount transformer to provide the reference for V2, and a PCB-mounted 12V 14VA transformer to power the board and ESP32, and provide V1. While I do see a voltage drop of about 0.1V when I connect the ESP/energy monitor to the 12VAC transformer, it tends to be consistent (meaning it seems -0.1V all the time, but it's hard to tell for sure given the fast line voltage fluctuations). I am using a 5 1/2 digit bench DMM set to slow for my measurements. I compare the DMM measurement of line voltage (which for now is powering both transformers) to the voltage reported back for V1 and V2, and calibrated them to show the same reading. Every time I go back to it after some time, the delta between V1 and V2 is off by up to 0.5V, while after calibration it was between 0.0V and 0.1V. I chart the voltage readings in Grafana and it is quite visible that they grow apart. I often noticed that when I reset the board manually or by uploading a new ESPHome firmware, the calibration change I made seemed off, or the calibration I had completed was now off again. This sounds similar to what @Cougar was reporting, although with a smaller delta. I know that in the big scheme of things a voltage measurement error of up to 0.5V on a 120V line is not critical, but I was wondering whether anything could be done. I just purchased my board and have v1.4 rev1. Are there any changes I can make to it to make things better (better resistors, etc)? I am comfortable with soldering and have good enough equipment to do it. Also, is there something I can do to stabilize the secondary output powering the ESP so that V1 won't be affected by its fluctuating power draw?
Based on the graph below, the delta may have something to do with the update delay. When the update interval is set to the normal 10s, the delta between V1 and V2 is 0.4 ~ 0.5V, but when it is set to 2s it is 0.1V. So the calibration should be fine in my case; it is just a matter of timing and the measurements not being taken close enough together in time. I wonder whether there is some way to improve on that without keeping them at 2s, which bogs down my HA with tons of db writes and unnecessary data. In the graph below, the point where the two lines grow nearer is when I uploaded the firmware changed from 10s to 2s. The big L1 dips were when I rebooted the ESP remotely, which may be connected with a higher draw on the transformer while the ESP32 is booting up, causing a minor drop that translates to a bigger drop on this graph.
I modified my board a little bit to remove board power from the AC transformer. The easiest way was to just remove the rectifier from the board and supply power to the ESP and ATM90E32AS via the NodeMCU micro-USB port. I bought a dedicated DIN rail 5V power supply for that. I'm not sure if it is good practice to feed 3.3V to the step-down switcher, but so far it works fine. It might be better to remove L1 as well, to completely disconnect the onboard power supply, but this component is much harder to remove.
@Cougar - Thanks for the tip. For now I have a large capacitor on the 3.3V and 5V rails, but I still have to vet whether it is really helping. I opened a ticket on the ESPHome GitHub reporting a related issue. I have 2 boards... and the 1st chip of both boards alternates having a 0.5V error in the reading. The arrows indicate manual restarts, no changes. For one instant they all matched, but that rarely happens. I can keep restarting, and the L1 V and L1 AO V just keep trading places having the 0.5V error. That is not due to the ESP32 current draw, or voltage fluctuation, etc... it has to be something with how the chips are configured (?) at boot-up.
My conclusion was that it doesn't matter if the problem is visible for one or both chips. These chips are polled at different times, and it is possible that during one operation the ESP32 draws more current than during another. I think it is because of WiFi, but I haven't tried to measure these timings to prove it. WiFi just seems the most probable reason due to its quite high current draw.
@Cougar What I am showing above is not due to WiFi, as only 1 of the 2 voltage channels (chip1 on both boards) connected to the same transformer sees the 0.5V shift, and that shift doesn't change based on anything other than a reboot. The behavior is so repeatable that I am convinced there is a software bug somewhere during initial setup of the chips.
Did you switch off the WiFi? How do you get these readings? Different boards are read at different times. You can try to swap chips under
@Cougar No, WiFi is fully operational. For testing purposes I connected the same 120V feed to both transformers and calibrated them to match what I was reading from my benchtop 5 1/2 digit DMM. That in itself was a bit hard due to the constant fluctuations of line voltage, and I found that late at night it is a bit easier. I also sped up the reporting to 3s (faster would cause it all to choke up) because I figured that the 4 chips taking measurements at different times would account for some variation. Using Grafana graphs to monitor the changes helped, as I focused mostly on it trending in the right ballpark and on all 4 channels being close to each other, since they were all reading the same line voltage. To reword what I said in my previous post: all 4 channels are now calibrated to read the correct voltage; however, each time I reboot the system, ch1 of board 1 or board 2 shifts up by about 0.5V. The shift is constant and comes and goes simply by rebooting the board. Most commonly they trade places... when one is shifted high, the other is correct, and after a reboot they trade places. In one instance, shown in the screenshot I re-pasted below, they both happened not to have the shift, so all 4 channels were reporting the voltage correctly (look at the signals between the red arrows). Before the left red arrow, L1V (green) is shifted high, and after rebooting twice, L1 AO V (blue) was shifted high. This behavior cannot be related to what the ESP is doing... so it must be some software bug in how the chips are set up at boot (or something along those lines).
Ok, now I see that I described a slightly different situation. You have quite a powerful transformer, and the load of the ESP and power measurement boards doesn't affect the reading as much as with my 6W transformer (around a 2 volt, or 1%, drop). I did a test with all three of my boards now. EMON-1 and EMON-2 are using a 9V 0.67A (6W) AC power transformer. EMON-3 has the rectifier removed, and its external transformer doesn't supply power to the board but only to the ATM90E32AS voltage sense inputs; the ESP and energy meter board get their power via the ESP board's 5V USB input. This is a 30 min graph with a 1 second step. I reset all boards via the API at 5 min intervals. The first graph shows the voltage reading from all boards, and then there is a graph for each board where you can see the voltage reading difference between IC1 and IC2, in volts and in percent. You can see that the difference between EMON-1 and EMON-2 is quite big and not very stable. This is where the ESP32 power consumption changes are visible. The fluctuation is in volts. EMON-3 does not use power from the transformer, so its two chips should get exactly the same voltage reading, or one with a small constant offset due to voltage divider resistor tolerances. The difference between the two chips is very stable. Still, every reset changes the difference, and I think this is the error you are seeing too; I don't have any good explanation for it either.
I have now had some time to let my test run for days. My ESPHome is still not stable enough to run for a long time: its measuring scheduler just stops after a while and only a restart helps (even over the API). But this is a different story and I'll take it to ESPHome development. Back to the ATM90E32AS errors now. I still have the issue that almost every ESPHome restart changes the error margin in a range of 2V, and I have no idea what the cause is. I made a simple button from which I can run the IC setup at any time:

button:
  - platform: template
    name: ${node_name} IC Setup
    on_press:
      then:
        lambda: !lambda |-
          id(chip1).setup();
          id(chip2).setup();

AFAIK this button runs exactly the same IC initialization code that an ESPHome reset does, and still only an ESPHome reset changes the voltage reading error. But what is even more interesting is that the difference does not stay constant even between restarts, and its change looks cyclic. I'm totally puzzled now. This is how it looks for 3.4 days of continuous measurement every second, where I have already compensated for the mean difference (0.25 V). Does anyone have any idea what is going on there? I have already suspected everything from ESP32 board interference and room temperature to the rotation of the Earth. I separated the ESP32 board from the ATM90E32AS board with a 20cm cable, but nothing changed. The relative difference stays within ±0.16%, which is very good, and I actually don't worry about it, but it is still interesting.
There is a feature on the energy metering ICs that compensates the readings based on temperature (which is why you can get a temp value from them). It is set to its default settings and shouldn't need to be adjusted, though. Details are here, on pages 35-37. I'm guessing the differences you are seeing are a combination of this and the other variables mentioned above.
Yeah, but the temperature is not changing like that; it is quite stable over time. This was the first thing I checked. My daily routine is not the same every day either. Still, this small change is not an issue, just an interesting thing that you usually don't notice unless you do a lot of measurements. This is how the same voltage difference graph looks now when I send an ESPHome reset every 5 min, and this is a much bigger issue that can't be related to Earth's rotation or anything like that :)
@Cougar I am certain that the issue I am seeing is not caused by any other signal or environmental factor, as I believe you have eliminated those as well. It is so repeatable by simply resetting the board that I'd say it is certainly something to do with the software, or some 'defect' in the chip. The error in the reading seems to toggle on and off at each reset, and while at first I thought it was bouncing from one board to the other, I then saw that at times the voltages match, likely because both errors are on the same leg. In other words, it is something that somewhat randomly affects one leg (voltage measurements of L1 and L2) or the other, often switching which one it affects. Hopefully this makes sense... To be clear, my setup has not been installed yet, as once I do that I won't be able to troubleshoot further. Both transformers are connected to the same outlet and both voltage inputs have been calibrated. A channel reads the voltage correctly unless the error is affecting that channel. If we look at the lower-level code that implements this board/chip, is there any code that could be randomly affecting just one of the 2 voltage measurement channels, or maybe both or neither, somewhat randomly?
Where did you define the chip ids? I don't have any chip-related ids in my YAML. I'd like to test what you did to see if I can get the error to bounce around. EDIT: I added
My starting point shows that L2 and L2 AO match, and that L1 and L1 AO match. The chip setup appears to make no difference (assuming I implemented it correctly). At 11:21 I hit the reset button and the L1 error went away; it now matches the other 3, leaving just L1 AO with the 0.5V error. At 11:26 I hit reset again and L1 got the error back...
Here you have found the same behavior that I see. A chip reset via the setup hook doesn't change anything, but the reset button or an ESPHome restart does. The only thing that happens in the latter case is the SPI setup, and the data collection timing relative to the AC zero crossing is probably different. Could either of these be the reason why the difference changes?
I built a little calibration rig to try to get to the bottom of this issue. While I understand that the voltage difference is minimal and in the big scheme of things irrelevant, I wonder whether there is some way to fix it, given this seems to be a software issue. I implemented a faster SPI clock (1MHz) courtesy of @descipher, and while I have not noticed any drawbacks, I am inclined to think it helps, given that this time around I was able to narrow the gap between the voltages reported by the 2 boards. The one thing that still bugs me is why the L1 voltage reported by the main board and the add-on board swap places, one reporting a slightly higher value, nearly each time the ESP is re-flashed. The graph below only shows L1 V and L1 AO V, so it is easier to see how they swap places. I have a larger transformer feeding L1 on both boards, and a smaller transformer feeding L2 on both boards. Both transformers are currently being powered by the same mains connection (outlet), and I am monitoring the voltage with a benchtop 5 1/2 digit DMM. I am not trying to obtain absolute accuracy, but rather for the 4 measurements to be as close as possible to each other. In this thread there is discussion of the effect of the ESP power draw when using WiFi, but that would not explain why L1V and L1AOV trade places... which is the issue I am trying to fix. I am also wondering whether the 4 voltage readings are actually necessary to read 6 power measurements. Would it be possible to just use the L1 and L2 voltage measurements from one of the two boards for the purposes of the entire system? I am guessing not, as each chip will independently read voltage & current and calculate power by multiplying the two? To calibrate the system I am using a 150W 120V bulb (my load) and power it through a 4x loop, so the current the transformers see is 4x, which, given bulb wattage tolerances, translates to about 4A. The clamp in the picture is what I use to read the current (typically reading around 4.017A).
The caps I soldered on the ESP are just tests to see if I can reduce fluctuations caused by the ESP draw, but they have no effect on what I am trying to fix.
@descipher - I realize this board, and likely the chip, are not meant for high-accuracy applications, and I am OK with some error. Also, not all of my equipment is calibrated, or at least not recently, so my end goal is mostly to make the readings jibe, for lack of a better word. There is a parameter I can change to "calibrate" the voltage measurements, and I can get only so close because L1 on the 2 boards appears to keep swapping an error that skews the readings. If you look at the graph in my last post, you will notice that the top trace starts green, then after a reboot trades its place with blue, and then goes back to green. It doesn't happen at every single reboot; in fact I point out 3 reboots, but the error moved only twice. When this error moves from one input to the other, it messes with my calibration, so I find myself having to correct it in the other direction, which then gets nullified when it swaps again. Since this swap happens pretty reliably, I am hoping it is a software implementation issue that can be addressed. Resistors with high tolerance values, such as the 5% one you mentioned, would affect the readings but not cause the error to move from one board to the other, right?
The best I can do to mitigate this issue is what I show in the picture. The CTs are labeled for the respective circuit branches they will measure, and the black box with numbered 3.5mm connectors will be inside the outdoor electrical panel. The gray wires will exit the panel and enter the box you see in the picture through electrical conduit. In other words, I am trying to do a "system calibration" in my office as if it were installed, because I won't be able to do one once it is installed outdoors. The gray cables will be shorter, but I doubt the reduced resistance will have a significant impact. When the ESP32 reboots, something happens to the main voltage input on one of the two boards, causing them to diverge. On rare occasions whatever happens makes them "agree". Your 1MHz SPI tweak seems to have had an impact, as I was never able to get the V readings so close before, but many other things may have changed too, so I am not sure.
I was planning on implementing a
This sounds like something that can be improved in the custom component that supports this chip in ESPHome. I don't have the ability to do so myself; hopefully someone will contribute it in the future. However, I am also guessing that it would not help with the error jumping from one board to the other randomly. Below is the latest revision of the YAML I am using. The update rate is set to 3s only to make it easier to calibrate the measurements. As you mentioned in another thread, I typically would keep it at a higher value, but not so high that I can't see a relevant change fast enough for it to be useful in troubleshooting. Regarding the snapshot on Offsets, aren't those what I am changing? See my
There are some code issues that I suspect need to be addressed. The code does a SoftReset register write and fails to wait the required time before writing out other register values.
There are no checks to see if the subsequent write is acknowledged after the reset. This could be why you are seeing differing results. The ATM90E32 has a verification method where all that is needed is a compare of LastSPIData against the buffer value. The code should be changed so that the next write operation after a reset is verified, and it needs to wait 5ms + 1ms: 5ms is the minimum, and the extra 1ms applies to an already powered-on state. I have increased the SPI clock to 2MHz and see no issues. I have also added the delay and check in the code of my branch named component.atm90e32
I don't think this would be necessary, since you'd want the fastest rate possible. Slowing it down would only produce slightly less accurate results.
I honestly forgot about this for the voltage. The only way around it is to power the ESP32 separately on startup. I think it is more applicable to the current channels anyway.
@descipher circling back around to this, as I have some users reporting issues with, I think, the current offset calibration, here: #179
@CircuitSetup I have not observed that issue. However, after looking at the code, I'm not sure why this was overlooked: we need a bool variable for controlling the offset calibration function at will, so the offsets can be sampled and saved to be used at startup. That would allow the user to run a calibration cycle with no voltage input and then apply it at startup, versus collecting it at startup. I will do a PR for that function at some point to correct it. So in this case it will not cause the effect seen in #179; I think that's the component accuracy variance of the input RMS divider resistors. The offset calibration input variance will be very small and almost not observable. It's capturing the delta values between many voltage input samples in a very short period during startup. The only way this delta could be significant is if the local voltage is subject to very large current load changes, which may drop or increase the sample voltage during that very short sample period. The comment on the larger-value input capacitor is not relevant; a larger capacitance would simply smooth the RMS ripple more, resulting in better accuracy rather than less. I will do some tests to see what that delta looks like with non-zero inputs. I don't expect much, and it is easy to log that activity for validation purposes.
That's what I was thinking. It doesn't make sense to recalculate the offset at every startup, especially if the CTs are reading something.
The variance they're seeing is on the current inputs, though. It still could be the variance in components - the burden resistors, 22ohm (0.1%), and 100ohm array resistor for 2 of the 3 inputs on each chip, then single 100ohm resistors for the remaining input (both 1%).
I have yet to see the offset registers affect anything, but in theory, shouldn't they correct for non-zero values when readings should be 0? The guidance in the application note isn't very descriptive.
That's what I figured, starting at a lower frequency.
Speaking of, I was trying to see what was getting put into the offset registers by setting the log level to VERY_VERBOSE and watching the logs while resetting everything, but was unable to see anything coming from here: https://github.com/esphome/esphome/blob/1f3754684adccccc54a4795d8a685d13ba59e352/esphome/components/atm90e32/atm90e32.cpp#L274 Thanks again for your insight and help!
IMO this cannot be the resistors, as the residual current values which are reported change every time the meter is reset. Resistor values do not fluctuate that much. It rather looks like an uninitialized variable / register.
I think you are correct; the setup priority is at level "IO", so that is likely well before the log component is started. We can do straight printf() calls to catch that output for debugging that element.
You need to have the CTs in circuit to do any calibration or diagnosis; without them in place the input behaves like an antenna, possibly injecting random noise into the ATM90 ADCs.
@SzymonSlupik Looks like normal noise variance to me. If you would like to rule out the offset calibration, I have commented out that function in this branch. I do not have a test rig available from where I am ATM, so feel free to run this as a test on that rig you have connected.
Thanks, I'll give it a try, probably tomorrow. Anyway, 10 watts is hardly noise :) It is a lot of power. I'll report back once I have the new results.
@SzymonSlupik If you work through the Atmel-published 0.1% accuracy figure, you will find that it's well within spec: 0-100A = 0-24000W at 240V, and 0.001 * 24000 = 24W, so basically you are seeing noise-floor levels. When we correct the offset 0-level calibration function, it will cancel that noise to some degree at 0 current and voltage levels. That's all the offset calibration is supposed to correct.
Yes, I agree with what you're saying. I'm looking for the truth, though, which means finding where these discrepancies come from. Look, this is not noise, as these values do not change, only when you reset the circuit. And some channels show pure zero while some show random (but static) numbers. Let's find the root cause. I'll have some time tomorrow to experiment more.
@SzymonSlupik I'm not certain what you are observing at this point.
If the reported values are now static, then it comes back to the in-circuit component tolerances, e.g. voltage divider resistors etc., as described in my original assessment; however, your observation described them as changing with every restart. Is that changing output no longer the case with a shunt on the CT input circuit? Please validate and confirm what the current observation is. Power cycles can result in 0-level changes based on the ADC + noise at the time of init within the ATM90E32 IC. Restarts would not normally do that, with the exception of my comment on the load-based change varying the calibration sample, which should be gone with the commented-out code during real input sampling and with the shunts; however, the voltage input is still an unknown. We will need to know which phases have a voltage reference, since those and the current channels should be 0 during the calibration sampling for us to confirm that offset calibration works as expected. Your screenshot shows 4 phases having a reference; however, I see only one AC input. Are those all shunted to the same AC input in the screenshot? If so, you need to adjust your gain to bring them in sync based on an actual external voltage meter check. Based on the voltage reported, it certainly has some variation in component tolerance, and that's a possible reason for variance as well, because each voltage phase offset calibration is done separately, but the basis would not be correct using an invalid voltage gain setting.
That's the issue - he's saying that regardless of the shunt, the current values, which should be 0 anyway, change on reset.
On the meters, the 3 voltage channels are tied to the 1 input internally. The 4 readings are coming from the main board + add-on board's 4 ATM90E32s.
Yep. It is all good now - just noise at the 1W level, and this is regardless of whether the inputs are shorted or open. Now we need to figure out what goes wrong in the calibration. And why...
@SzymonSlupik Thanks for the bench work; this gives us the key piece of info needed. We have non-zero voltages on all inputs at the time we are running offset calibrations, while the current is zero on all inputs. When we look at the ATM90E32 application guide details, they give us some insight identifying the observed error state. Section 4.2.5 is based on this formula: Ub = Uc = Un, Ua = 0, Ia = 0, where Ua is the channel-a voltage and Ia is the channel-a current. The code performs steps 1, 2, 3, 4 correctly, but we are not at 0 volts on any channel, and that's the issue here. As previously indicated, we must take the calibration code out of the startup area and use a bool option flag to allow any user to meet the requirements for calibration: they can power the ATM90E32 separately, without AC line voltage, and capture the input noise offset values during 0-volt sampling. Those values can then be stored in flash and applied during startup if desired. We could also just remove the capability, since the application guide indicates it's not essential for our use case.
@SzymonSlupik Did you happen to do some calibration of the voltage gain for each phase? I see some variance in the screenshot. You should be able to bring those to within ±0.02 volts.
ATM I have just one power adapter. I plan to move to three once we have things sorted out here (which I believe we now almost have - thanks for the explanation). BTW, it is a bit weird that the chips show different voltages, but I understand this is due to the tolerances of the components, and that is why separate calibration for each chip is needed.
Not just each chip - there are 3 voltage inputs PER chip that could potentially be different. This is why the ESPHome config has a `gain_voltage` setting for each phase. Luckily, each voltage phase per chip uses the same on-board voltage dividers (2 sets for each board), so they should be really close regardless. Any difference you see may just be in the timing of actually getting the data from the chip. |
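A minimal sketch of that per-phase layout, assuming the option names from the ESPHome `atm90e32` component docs (pins and gain values here are placeholders, not measured calibration values):

```yaml
sensor:
  - platform: atm90e32
    cs_pin: 5               # placeholder pin
    line_frequency: 60Hz
    phase_a:
      voltage:
        name: Voltage Phase A
      gain_voltage: 7305    # placeholder - calibrate this input
    phase_b:
      voltage:
        name: Voltage Phase B
      gain_voltage: 7305    # placeholder - calibrate this input
    phase_c:
      voltage:
        name: Voltage Phase C
      gain_voltage: 7305    # placeholder - calibrate this input
```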
Oh, thanks for explaining that. I was wondering why the phase voltages were nested under each chip definition - my understanding was that the phase voltages represented the independent power supplies. Now it all makes sense. I guess I'll do that calibration once I move the meter to the electrical cabinet, where I have access to all 3 phases, and will implement the proper 3-phase configuration. |
I think it would be beneficial to write up an overall calibration guide for everyone that explains how it works. I will place it on https://github.com/gelidusresearch/device.docs when completed over the next few weeks after I create a PR for the offset calibration issue. It will apply to any ATM90E32/ESPHome based devices. |
I have updated the code to handle optional offset calibration now at:
The code adds the ability to run a calibration or clear a calibration for both the voltage and current sensors. The calibration must be run when all inputs are 0. I have not tested it yet since I am not near my test bench to do it but will be in the coming weeks. Feel free to test it and validate. The calibration feature is fully optional and this is the basic YAML to use it:
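A sketch of what that basic config could look like, assuming the `enable_offset_calibration` option name from the ESPHome `atm90e32` component docs (verify against the actual PR; pins are placeholders):

```yaml
sensor:
  - platform: atm90e32
    id: chip1
    cs_pin: 5                        # placeholder pin
    line_frequency: 60Hz
    enable_offset_calibration: true  # assumed option name
    phase_a:
      voltage:
        name: Voltage A
```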
The use of id is only required when more than one instance of the atm90e32 component is defined.
then based on the atm90e32_id you can add the run and clear buttons:
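A sketch of the matching buttons, again assuming the names from the ESPHome `atm90e32` button platform docs; `atm90e32_id` references the id given to the sensor instance:

```yaml
button:
  - platform: atm90e32
    atm90e32_id: chip1   # id of the atm90e32 sensor instance
    run_offset_calibration:
      name: Run Offset Calibration
    clear_offset_calibration:
      name: Clear Offset Calibration
```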
|
Thanks so much for doing all of this. It looks perfect! btw, I was looking further into the calibration procedure, and noted something in the eval kit documentation (https://www.microchip.com/en-us/development-tool/ATM90E32AS-DB), located here: https://ww1.microchip.com/downloads/aemDocuments/documents/SE/ProductDocuments/BoardDesignFiles/AutoCalibration_Ver1.0.zip A few things:
Note that the right shift of 7 bits isn't done with the power offsets; otherwise the process is the same. I'm wondering if the power offset, with voltage connected and no power going through the CTs, would be more useful than the voltage and current offset, or if they're needed at all. FYI, I looked in the demo firmware code for any kind of offset function and didn't see anything other than writing default values to the offset registers. The only other thing I saw was the phase angle offsets. This would be impossible to do without external equipment, though, as it needs a constant power factor. |
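For reference, the zero-input offset computation discussed above can be sketched as follows. This is my own reading of the procedure, not code from this repo or the demo firmware - the averaging count and the exact register handling are assumptions; only the 7-bit shift and two's-complement step come from the calibration instructions.

```python
def rms_offset_register(zero_input_samples: list[int]) -> int:
    """Compute a 16-bit Uoffset/Ioffset-style value from raw RMS readings
    taken while the corresponding input is held at zero: average the
    readings, right-shift by 7, then take the two's complement and keep
    the low 16 bits."""
    avg = sum(zero_input_samples) // len(zero_input_samples)
    shifted = avg >> 7
    return (~shifted + 1) & 0xFFFF

def power_offset_register(zero_load_samples: list[int]) -> int:
    """Same idea for Poffset/Qoffset-style values, except the 7-bit
    right shift is not applied (as noted above)."""
    avg = sum(zero_load_samples) // len(zero_load_samples)
    return (~avg + 1) & 0xFFFF
```

With zero noise both functions return 0, i.e. no correction is written.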
@CircuitSetup That information is excellent and just what we needed - good find, and thanks for this. Some of the methods in this code require that the manual V and I gain and scale settings we currently use be ignored when calibration is enabled; those elements would be replaced by the calculated output of the calibration process. Some of the methods do require special equipment, so this needs further evaluation with regard to viability. We may only do a basic adoption, given that complexity level.
|
@descipher No problem - I'm glad you're finding that useful! I didn't think the calibration functions were as relevant, since they're basically doing what is outlined in the calibration instructions, just slightly more automated. If you're up for integrating them in some way, awesome! They would just need a way for the user to input constants for the voltage and current. FYI, the functions you copied above are for the ATM90E26 single-phase IC. Take a look in the \AutoCalibration_Ver1.0\Auto Calibration\Firmware\ATM90E3x+Calibration+SAM4L\src\at90e_xx folder |
@CircuitSetup Yes, totally agree - the demo is targeting large-volume automated calibrations, which we do manually. The specific information I found useful was the clarity the code provides for this noise-floor offset tuning: the code details calibration of the power offsets, while what I was doing was just the voltage/current offset to account for noise levels. This is something we can do without a special rig providing source signals. How much value it has depends on how much DC noise is present within the AFE circuits. The clarity is in the confirmations of what the inputs should be: the code confirms the use of those formulas. In our case we need only to zero all input values and read the noise offset. In other words, we are now doing the right calibration, optionally. We could go further for those who would like commercial accuracy levels, but not in this round of changes - it's a significantly bigger effort, mostly on the bench validations. |
Right, got it.
I'm wondering if the power offset registers (Poffset, Qoffset) have more to do with the random low and negative values that are seen with no current load than the voltage/current ones (Uoffset, Ioffset) do. Especially since the demo doc doesn't even mention them.
The only way to know is to check - I can add that calibration testing code and see what impact it has when I do the bench work for the current PR. |
It looks like the fix for the offset calibration of voltage and current was just merged. Thanks @descipher! |
I'm in the US with split phase 120V/240V AC power. I just set up my first expandable meter, starting with one of my subpanels where all I want to measure right now are a few 240V HVAC appliances which seem to be balanced (all newly wired with 2 wires plus ground, so no neutrals), so I figured I'd be okay with one AC transformer. So I have not cut the jumpers to enable separate transformers (I'll end up doing that for the meter on my main panel). If I understand correctly, that means they are hardwired together, and I should get the same exact reading for V1 and V2. However, I am seeing ~117V for V1 and ~127V for V2. The voltage calibration values in EmonESP are the same for both.