Very interesting Power measurement IC, calibrate using LUTs??

fernando_g

Senior Member
An unknown -at least to me- Chinese IC manufacturer (Hiliwi) produces a line of highly integrated AC powerline measurement devices.

Some background first: I've designed and built powerline meters for over 25 years now. Hiliwi's most advanced IC, the HLW8112, performs a myriad of measurements which back then would have required close to a couple hundred US$ worth of components. Now you can have these features for US$0.81 (single quantities) in a 16-pin SSOP package.

But I digress; those require more computing power than a PICAXE can provide.
Thus, I am going to focus on the little brother of the family, the HLW8012, which is still very capable, providing true-RMS power, voltage and current.
I include the datasheet.

My question, which I would like to discuss in this thread, is how to calibrate it.
 

Attachments

fernando_g

Senior Member
As you can see from the datasheet, the devices provide an output frequency which is proportional to the current and voltage values, as shown in the equations of paragraph 3.2, which I fully understand.

The issue I face is how to calibrate it. Due to limitations of the internal voltage reference and clock, the overall tolerance is +/- 20%, waaaaay too much to be useful. It requires calibration.
Of course, I could calibrate the old fashioned way, with pots to adjust the input voltage, but that is soooo 20th century. The question is how to calibrate in software.

Of course, the straightforward way would be to measure the frequency using COUNT, and then multiply the value by a calibration constant:
Cal_Value = Raw_Value * Cal_Const / 100, where Cal_Const range would be from 80 to 120.

The problem is that Raw_Value can reach a frequency of almost 3500 Hz, so the simple instruction above would overflow a word variable (3500 x 120 = 420,000, far beyond a word's 65,535).
Of course, the obvious fix would be to use a shorter COUNT period, let's say 1/10 second, for a maximum Raw_Value of 350. But resolution would suffer.
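To make it concrete, a sketch (assuming the CF output on input C.1 of an M2 part):

' What I'd like to do - but the multiply overflows a word:
count C.1, 1000, w0        ' 1 second gate, w0 can reach ~3500
w1 = w0 * 120 / 100        ' 3500 * 120 = 420000 > 65535: overflow!

' The 1/10 second workaround, at the cost of resolution:
count C.1, 100, w0         ' w0 now tops out around 350
w1 = w0 * 120 / 100        ' 350 * 120 = 42000, fits in a word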
Could there be a better way, let's say using a lookup table or similar?
 

papaof2

Senior Member
Maybe a lookup table with each range of frequencies selecting a value from the table?

Divide the range between minimum frequency and 3500Hz into 40 steps of 87 or 88Hz (or whatever (max freq - min freq)/40 works out to; I'm doing this in my head) and point each block of frequencies to the proper place in the table (see the sketch below).

Or do a long series of CASE statements?

Or look back for some examples of 32 bit arithmetic?
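Something along these lines, just as a sketch: I've used 10 buckets instead of 40 to keep it short, the input pin is assumed, and the watt values are made up purely to show the mechanism:

symbol raw   = w0
symbol watts = w1
symbol index = b4

main:
    count C.1, 1000, raw        ' 1 s gate, raw = 0 to ~3500
    index = raw / 350 max 9     ' which 350 Hz bucket are we in?
    lookup index, (0, 210, 420, 630, 840, 1050, 1260, 1470, 1680, 1890), watts
    goto main

With 40 entries you'd get proportionally finer steps, at the cost of a longer table.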
 

fernando_g

Senior Member
That is interesting.
In terms of execution speed, which of the two approaches would be faster?

I remember reading a long time ago that the CASE statements were the slowest.
 

Buzby

Senior Member
I can't see where you get the 20% error from; the datasheet says 0.2%. But I've not read it closely yet.

However, whatever error you want to correct, you will still need accurate known values at the V and A inputs in order to calculate how much correction is needed.

One website I looked at recommended using a toaster to provide a load for calibration. 1800W is a hell of a lot of power to be using just to calibrate a meter, so I'll tell you how the big boys do it!

( I might be teaching Granny to suck eggs, but it could be interesting to others anyway. )

Many years ago I worked for a company that made electricity meters. These meters need their calibration testing before shipping. They were tested at various power levels, from very low to very high. Very high meant something like 200A @ 300V, or something in that region. As the meters were tested 400 at a time, that's 400 x 200 x 300 watts, i.e. 24MW, which is a lot of waste heat to get rid of.

To allow testing with a sensible power usage, the meters had a removable link or an extra terminal, which separated the voltage and current circuits. ( I can't remember the exact method, it was a long time ago. )

The trick was then to supply the meter from two separate 'pseudo mains' supplies.

One was a high current, low voltage supply, which put a lot of amps through the current sensor. The other was a low current, high voltage supply, which put a lot of volts on the voltage sensor.

So, if the current sensor saw 200A and the voltage sensor saw 300V, the meter would report 60kW being used. The actual power being consumed was more like ( 200A x 0.01V ) + ( 0.000001A x 300V ), which is just over 2W.

This was for testing the calibration of electricity meters, and the method has been used for very many years, including for those 'spinning disk' meters from our youth.

Looking at the datasheet of the HLW8012, it looks like you could do something similar, with the added feature that the chip can show what it is reading on both sensors, as well as the overall measured power.

I see there are breakout boards available on all the usual sites, I might try one soon.

Cheers,

Buzby
 

fernando_g

Senior Member
Buzby, thanks for your comments.
The 0.2% quoted in the datasheet I believe is linearity, not actual accuracy.
The accuracy error I quote arises from the Vref tolerance (2.43V +/-5%) and the master clock tolerance (3.579 MHz +/-15%).

Since both appear in the frequency equations shown on page 7, I assume that any deviation from nominal will also be reflected in the actual output frequency.
I say 'assume' because I haven't yet received my order from LCSC to start experimenting. Let me see if my assumption is correct; but based on my previous work with ICs from Analog Devices, the reference and clock do have a direct relationship with the output frequency.

Your trick with the pseudo supplies is very valuable, and I have also used it on previous designs. Because, as you mention, otherwise I would need a diesel gen-set to power my home lab!!
 

Buzby

Senior Member
This was the datasheet I looked at; is it the right one?


Section 1.1 says ...

High frequency pulse CF, indicating active power, meet the accuracy of 50/60Hz IEC 687/1036 standards, in the range of 1000:1 to reach 0.2% accuracy.

I assumed that meant it was good enough to meet energy measuring standards, but I can't find the contents of IEC 687/1036, just references to it.

The frequency and Vref values do have wide tolerances, but I'm not sure that they matter so much. It's been years since I was involved in the metering business, but I vaguely remember that the result of multiplying the two values was reliable, because both ADC circuits were using the same clock and reference. I also think it somewhat odd that a chip designed for use in commercial power measuring devices would be 20% inaccurate.


There is another trick which might come in useful, but it depends on whether the chip is 'too clever' or not.

The 'pseudo' supplies can be DC. It's a lot easier to measure DC accurately, and you need to measure the inputs very accurately.

By supplying accurate DC, then reading the ADC outputs ( Sigma 1 and 2 in the HWL block diagram ) we could determine the corrections needed to the ADCs. Once the ADC correction tables were populated the rest of the system was software, and that doesn't vary.

( Note: This technique was possible because we had control of the software, and was needed because the analogue side of the meter was all discrete components. No single-chip solutions in those days! )

This will only work if the 8012 doesn't treat DC as some kind of supply failure.
 

PhilHornby

Senior Member
Of course, the straightforward way would be to measure the frequency using COUNT, and then multiply the value by a calibration constant:
Cal_Value = Raw_Value * Cal_Const / 100, where Cal_Const range would be from 80 to 120.

The problem is that Raw_Value can reach a frequency of almost 3500 Hz, so the simple instruction above would overflow a word variable.
Of course, the obvious fix would be to use a shorter COUNT period, let's say 1/10 second, for a maximum Raw_Value of 350.
If you're reading the frequency o/p of the 8012 using count, doesn't adjusting the 'period' parameter let you manipulate the result in the way you want?

(count is another of those functions I've never actually used, so I may be misunderstanding the way it works :unsure: )
 

AllyCat

Senior Member
Hi,
Of course, I could calibrate the old fashioned way, with pots to adjust the input voltage, but that is soooo 20th century. The question is how to calibrate in software.
Indeed, IMHO there is very little need/justification to use (Preset) pots in any microcontroller application which has an EEPROM. I largely gave up on Preset Pots in the 1980s (perhaps when they were at last getting more reliable than the horrible Skeleton Carbon Composition type). ;)

With PICaxe Basic, precise calibration is very easy using the ** operator, which multiplies by your calibration constant and then automatically divides by 65536. Thus your calibration constant is effectively a 16-bit "fractional binary" value (which can be calculated on any pocket calculator or mobile smartphone). Admittedly that limits you to calibrating/scaling downwards, but it's easy to premultiply by 2 or 10 as appropriate, or if you know a slight scale-up is required then you can use something like CalibratedW0 = w0 ** CALFRACT + w0 (for effective multipliers between 1.0 and 1.99998).
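For example (the fractions here are picked arbitrarily):

' ** returns the high word of the 32-bit product,
' i.e. an implicit divide by 65536.
symbol CALFRACT  = 60948    ' 0.93 * 65536, to scale down by 7%
symbol CALFRACT2 = 4588     ' 0.07 * 65536, for a 1.07 multiplier

w1 = w0 ** CALFRACT         ' w1 = w0 * 0.93 (near enough)
w2 = w0 ** CALFRACT2 + w0   ' w2 = w0 * 1.07 (near enough)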

Similarly, I rarely (so far never) use the COUNT instruction, primarily because it is "blocking" and prevents the program doing anything else (including interrupts) at the same time. In most cases it's better to use PULSIN and then, if necessary, calculate the frequency by inversion (division); at "high" frequencies (probably some hundreds of Hz upwards) accumulate multiple measurements, or revert to the COUNT instruction. Of course the inversion does require division which, with native PICaxe Basic, is limited to dividing a (large) 16-bit value by an approximately 8-bit divisor (i.e. around 255) to give an 8-bit result with about 0.4% resolution. But.....
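A sketch of the PULSIN route (again assuming the CF signal on C.1 of an M2 part at 4 MHz):

symbol width = w0
symbol freq  = w1

pulsin C.1, 1, width      ' high half-cycle, in 10 us units at 4 MHz
if width > 0 then         ' 0 means timeout / no pulse seen
    freq = 50000 / width  ' ~50% duty, so f = 1 / (2 * width * 10 us)
endif

' At high frequencies width gets small (3500 Hz gives ~14),
' so accumulate several readings or fall back to COUNT.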

To answer another question above, Lookup Tables can indeed be a very fast way to avoid (some) "messy" calculations (and I have written some 16-bit Interpolation Code Snippets for more complex calculations such as Trig functions), but generally I just resort to 32-bit maths calculations: Multiplication to a 32-bit result is natively handled by PICaxe's * (for Low word) and ** (for High word) operators and simple Subroutines or Macros for Addition and Subtraction are possible (but rarely needed). The only "problem" is division and several solutions are available, but my mainstay is this Code Snippet. It executes in a fairly constant 100ms (at 4 MHz clock) which is no longer than your "short" COUNT instruction.
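(For anyone unfamiliar with that trick, the full 32-bit product of two words is simply:

symbol prodHi = w2
symbol prodLo = w3

prodHi = w0 ** w1    ' top 16 bits of w0 * w1
prodLo = w0 * w1     ' bottom 16 bits of w0 * w1

... which the addition, subtraction and division routines then work on.)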

Cheers, Alan.
 

Buzby

Senior Member
If this were my project, I'd do it differently!

I'd use a 20X2 or better, one with externally clockable counters. ( Probably 28X2, as that has two externally clockable counters. )

Put a very accurate fixed frequency into one counter, say 10 kHz, then use that to trigger an interrupt, say, every 2 sec.

Use the second counter to count the power output signal. Just let this count freely and roll over when it gets full.

On each interrupt, measure how much the power counter has increased in the last time slot.

By using the freely counting method, any pulses that don't get counted at the end of one period will be counted in the next period.
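Something like this, very roughly. I've not tried it, so check the SETTIMER 'count' mode syntax and the timer input pin for your part in Manual 2; the PAUSE is a crude stand-in for the accurate 10 kHz reference interrupt:

#picaxe 28x2

symbol snapshot = w10
symbol lastsnap = w11
symbol delta    = w12

settimer count 0                 ' timer ticks on external pulses

main:
    pause 2000                   ' crude 2 s slot
    snapshot = timer             ' grab the free-running count
    delta = snapshot - lastsnap  ' word maths handles the rollover
    lastsnap = snapshot
    goto main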

The method of calibration correction, if needed, doesn't need to be fast. It looks like the HLW8012 takes up to 2 sec to respond to load changes. ( I saw this on an Arduino page, by a guy who has used this chip already. )

Another counting method would be to apply the accurate frequency to the CLK pin of the PICAXE, which is then used to clock an internal counter to trigger the interrupt. This needs a bit of studying of the PIC datasheet to understand how the internal counters are clocked, but does mean you only need one externally clocked counter.

A third method is to put the output to an RC analogue integrator, then feed that to a PICAXE ADC pin. As the input signal has a 50% duty cycle, it should be fairly easy to do the maths that convert frequency to voltage.

Whatever method you use to measure the power signal, I think trying to calibrate it is the biggest problem, and I'm still not sure calibration is needed.

We need to wait and see what you get when the chip arrives.

Cheers,

Buzby
 

fernando_g

Senior Member
There is another trick which might come in useful, but it depends on whether the chip is 'too clever' or not.

The 'pseudo' supplies can be DC. It's a lot easier to measure DC accurately, and you need to measure the inputs very accurately.

By supplying accurate DC, then reading the ADC outputs ( Sigma 1 and 2 in the HWL block diagram ) we could determine the corrections needed to the ADCs. Once the ADC correction tables were populated the rest of the system was software, and that doesn't vary.
At least some of the Analog Devices parts work correctly that way. Way back, I built a fuel gauge for a robot which used an AD7755. Working with DC makes finding suitably accurate sources far easier.

My concerns about the calibration may be unfounded.
 

fernando_g

Senior Member
Thus your calibration constant is effectively a 16-bit "fractional binary" value (which can be calculated on any pocket calculator or mobile smartphone). Admittedly that limits you to calibrating/scaling downwards.......

Cheers, Alan.
The solution is actually quite simple, and it doesn't require any computing.
The resistor ratios shown in the datasheet should be recalculated so that, at nominal line voltage, the divider provides a slightly higher voltage at the device's inputs.
Then the calibration correction will always be downwards.
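For example (numbers for illustration only): with a worst-case raw error of +/-20%, sizing the divider to read 25% high at nominal means even a -20% unit still reads 1.25 x 0.8 = 1.00 times the true value, so the software correction factor is always 1.0 or less, which is exactly what the ** operator handles.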
 

AllyCat

Senior Member
Hi,

Initially, I was slightly "surprised" that the device is designed with a potential raw error as high as 20% (which IMHO it does appear to be), but 15% frequency and 5% voltage errors seem quite appropriate for integrated circuit technology. PIC(axe) chip tolerances are intrinsically worse than that, but are improved (particularly the frequency) by factory production-line calibration. Normally one would design in the capability to (optionally) use external Frequency and Voltage References, but the HLW8012 is only an 8-pin chip! Which is one reason why I wouldn't be adding a 20+ pin PIC(axe). Incidentally, there probably is a reasonably accurate "Reference Frequency" available to us, i.e. better than +/- ~0.5%: the 50 or 60 Hz mains itself!

However, the application circuit diagram shows a one-milliohm shunt resistor which, I'm sure, even with the differential inputs to the chip, is not going to be precise to within the necessary few micro-ohms, so calibration is probably essential anyway. An additional +/-20% to correct is then unlikely to add much cost, compared with the savings of using a couple of 8-pin chips. A PICaxe 08M2 has almost all of the (necessary) capabilities of its bigger brothers.

And yes, it may have been I who observed that the SELECT... CASE construct adds some time delays (as do Subroutine Calls and even the ELSE option) compared with the more basic IF and GOTO constructs. But only by some milliseconds, which pale into insignificance compared with a COUNT instruction, for all except the very highest frequencies (and low accuracy).

Cheers, Alan.
 

hippy

Technical Support
Staff member
I haven't absorbed the full thread, nor read the datasheet, but when it comes to determining a frequency with a PICAXE, I too would use PULSIN or COUNT, adjusting SETFREQ so the units of measurement are shorter time periods.

And, if that's not good enough, there are means to measure pulse times very accurately using the internal timers which can be used as counters -


For run-time determination the same tricks can be used; measure at one SETFREQ and, if the result is too high or too low, or exceeds the period, adjust SETFREQ up or down until you get a reading which can be dealt with.
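For example, a sketch for an M2 part (pin assignment assumed):

symbol width = w0

main:
    pulsin C.1, 1, width        ' 10 us units at the default 4 MHz
    if width < 100 then         ' too short to resolve well?
        setfreq m32             ' 8x clock gives 1.25 us units
        pulsin C.1, 1, width
        setfreq m4
    endif
    ' ... convert width to frequency, remembering which
    ' units it was measured in
    goto main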
 

hippy

Technical Support
Staff member
Having now grabbed the HLW8112 datasheet, I am not convinced it wouldn't be possible for a PICAXE to use it, but I haven't studied it.

I would agree that the HLW8012 is a much simpler chip which is far easier to interface to, so it would seem to be the better choice.

Looking at that datasheet, it also seems to me that the variance of the internal Vref and clock is what may make calibration necessary.

The nominal voltage is said to be 2.43V, minimum 2.23V ( which is -0.20V, -8% ), the maximum 2.55V ( which is +0.12V, +5% ), so somewhat +/-6.5% overall.

That seems rather contradictory to the datasheet claim that the chip has a "high precision band-gap reference source" to me.

The nominal frequency is said to be 3.579 MHz, minimum 3.04 MHz ( which is -0.539, -15% ), maximum 4.12 MHz ( which is +0.541, +15% ), so +/-15%.

Both would seem to suggest a reading could be widely and wildly out, but it could be that this is simply the worst case, much like the internal PICAXE operating frequency having a wide tolerance but always being what would be expected whenever measured. Or perhaps the output shows less error than the internal variances would suggest.

Perhaps the best way to test it is to see what the results are when used with a test circuit or in comparison to calibrated equipment. That will determine how much of a problem there is.

If the variance does cause inaccuracies, I am not sure how one could determine which source it was, or how to apply compensation, since the readings seem to depend on both according to the equations quoted.
 

fernando_g

Senior Member
Based on my limited exposure to them, Chinese companies are building many low-cost, exciting and very clever chips. But their datasheets are, to put it quaintly, full of "Easter eggs" for one to stumble upon and discover.
Part of the fun which helps keep one's brain active. (y)

They definitely are not the Linear Technology or Motorola Semiconductor datasheets of yore, chock-full of data tables, graphs, app notes, representative circuits, layout and component suggestions, warnings, etc.

But hey! They are ultra cheap. Not much is lost if they don't meet requirements.
 

Buzby

Senior Member
... the application circuit diagram shows a one-milliohm shunt resistor which, I'm sure, even with the differential inputs to the chip, is not going to be precise to within the necessary few micro-ohms, ...
I was pleasantly surprised when I looked on the RS website. Shunts of this resistance are available from Murata with an accuracy of 0.25% for less than £30. See : https://uk.rs-online.com/web/p/shunts/8103267

For even less than that you can buy a complete power meter, like this : https://www.amazon.co.uk/SONOFF-Wireless-Monitoring-Control-Appliances/dp/B09XB3RZB9/ref=asc_df_B09XB3RZB9/

( I was amused to read that this device has all the basic functions expected from such a device, such as WiFi control, voice control, LAN control, timing schedule, etc. There was me thinking an example of a basic function would be a realtime display of power usage. )
 

papaof2

Senior Member
I was pleasantly surprised when I looked on the RS website. Shunts of this resistance are available from Murata with an accuracy of 0.25% for less than £30. See : https://uk.rs-online.com/web/p/shunts/8103267

For even less than that you can buy a complete power meter, like this : https://www.amazon.co.uk/SONOFF-Wireless-Monitoring-Control-Appliances/dp/B09XB3RZB9/ref=asc_df_B09XB3RZB9/

( I was amused to read that this device has all the basic functions expected from such a device, such as WiFi control, voice control, LAN control, timing schedule, etc. There was me thinking an example of a basic function would be a realtime display of power usage. )
A KillAWatt with wi-fi interface and on/off switching?
 