This repository was archived by the owner on May 17, 2021. It is now read-only.

Conversation

@robnielsen
Contributor

Added support to the Netatmo binding for min_, max_, and sum_rain for the intervals 30 minutes, 1 hour, 3 hours, 1 day, 1 week, and 1 month. The interval is configurable and defaults to 1 day.

The pull request #2820 that I created didn't have any scaling of converted values, which gave the false impression of higher precision than is actually available (many digits of precision). The values are treated as intermediate values and scaled according to the Significant Figures and Significance Arithmetic rules, which are widely used with scientific data.

Fix for issue #1820: if the token was expired, it was refreshed and a recursive call was made to re-execute the binding, which caused null pointer exceptions in the original execute call to the binding.

Fix for issue #3242: if the Netatmo API returns a null altitude, the binding was throwing a null pointer exception.

Added support to also refresh the token if the call to Netatmo indicated the token was invalid.

Added a better error message if the StartCom CA certificate is not installed.

Contributor

isPressure

Contributor Author

Fixed spelling mistake, thanks @watou

@robnielsen robnielsen changed the title Netatamo: Added additional types along with intervals to the binding Netatmo: Added additional types along with intervals to the binding Jul 10, 2015
@teichsta
Member

I would go with @watou: precision should be held as long as possible (within a reasonable range). Only the UI should decide how many digits to show. Think about rules calculating things with the various values …

@teichsta teichsta added this to the 1.8.0 milestone Jul 10, 2015
@teichsta
Member

@watou do you have Netatmo devices at hand?

@watou
Contributor

watou commented Jul 10, 2015

@teichsta no, I've used the binding in the past against the demo system located in the makers' Paris office, but not recently.

@robnielsen
Contributor Author

@watou, @teichsta, the scales are at or greater than the precision of the devices. My reasoning for the change is that values were being reported like:

Temp: 79.16000000000000116351372980716405436396598815917968750 vs 79.16 degrees
Rain: 2.5393700761499998878878425756511205690912902355194091796875 vs 2.54 inches
Pressure: 29.9788433201520011783802743821070180274546146392822265625000 vs 29.98 inHg
Pressure: 762.812731610999978704157342690450605005025863647460937 vs 762.8 mmHg

I realized that I don't need to scale the default values, so I removed scaling from these, but it is needed for the conversions I added in a prior pull request.

@watou
Contributor

watou commented Jul 10, 2015

@robnielsen, I think the issue with precision is that, even though Netatmo's native interface shows converted values rounded to tenths or hundredths, your Metric to "U.S." conversion multipliers in NetatmoUnitSystem.java are precise to 7 and 10 digits after the decimal. When you multiply by such precise numbers, as you definitely should, but then round the result to 1 or 2 digits, you are introducing rounding errors that can end up noticeably skewing any computations that are later performed on sets of those rounded numbers. So even though the input Metric values are only accurate to one or two digits of precision, when you convert them to "U.S." values, the U.S. equivalents need to be as accurate as the input values, which means preserving many more digits of precision in the output U.S. values. Only then will any functions like averageSince, sumSince, etc., remain as accurate as the original Metric values.

For example, if you pass in MathContext.DECIMAL64 to the BigDecimal.multiply function, it will properly constrain the amount of precision to something "reasonable," and yet allow nearly exact results between averaging your U.S. values, vs. averaging your metric values and then converting that to U.S. units. With the current rounding, those results will likely be very different.
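
(For illustration, a minimal sketch of the MathContext.DECIMAL64 approach described above; the class name and sample value are hypothetical, not binding code:)

import java.math.BigDecimal;
import java.math.MathContext;

public class Decimal64Sketch {
    public static void main(String[] args) {
        BigDecimal mm = new BigDecimal("64.5");
        // the double constructor drags the binary-fraction artifact into the factor
        BigDecimal mmToInches = new BigDecimal(0.0393700787);
        // unconstrained multiply keeps every digit of that artifact (the 55-digit values above)
        System.out.println(mm.multiply(mmToInches));
        // DECIMAL64 rounds to 16 significant digits (HALF_EVEN), ~2.539370076150000
        System.out.println(mm.multiply(mmToInches, MathContext.DECIMAL64));
    }
}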

@robnielsen
Contributor Author

@teichsta, what do you consider a reasonable range? For example, if the device is accurate to 0.04 of an inch, is 0.01 a reasonable value? The code is currently scaling 2.5393700761499998878878425756511205690912902355194091796875 inches of rain to 2.54 inches of rain, which seems reasonable to me; anything beyond the 2nd digit is well beyond the accuracy of the device. If we reported further precision, wouldn't that give somebody the indication that the values are indeed accurate to the level of precision provided?

BTW, I've also found out that switching the constructor from:

private static final BigDecimal MM_TO_INCHES = new BigDecimal(0.0393700787);

to

private static final BigDecimal MM_TO_INCHES = new BigDecimal("0.0393700787");

helps considerably with precision: the same calculation produces the value 2.53937007615. Below is a note from the http://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#BigDecimal(double) constructor documentation:

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
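
(A minimal sketch contrasting the two constructors, per the note above; the class name is hypothetical:)

import java.math.BigDecimal;

public class ConstructorSketch {
    public static void main(String[] args) {
        // double constructor: the binary-fraction artifact becomes part of the value
        System.out.println(new BigDecimal(0.1));
        // -> 0.1000000000000000055511151231257827021181583404541015625

        // String constructor: exactly 0.1 (unscaled value 1, scale 1)
        System.out.println(new BigDecimal("0.1"));
        // -> 0.1

        // BigDecimal.valueOf(double) goes through Double.toString and is also exact here
        System.out.println(BigDecimal.valueOf(0.1));
        // -> 0.1
    }
}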

@watou
Contributor

watou commented Jul 12, 2015

Since the BigDecimal class gives you the ability to achieve exact results with only the needed degree of precision when multiplying your conversion factors constructed from strings instead of doubles, I recommend you use those values without further rounding. Since you expressed your conversion factors with 7 and 10 digits of precision, BigDecimal will keep all further computations accurate w.r.t. those conversion factors. Introducing further rounding will lead to odd forms of inaccuracy when users try to compute totals, averages, etc. from those values.

@robnielsen
Contributor Author

@watou, for C to F you get a reasonable result if you use the String constructor, since multiply will use the scales of both multiplicands. But this still gives odd results: for example, the web service returns values such as 0 or 0.1, and the conversions are 32.0 and 32.18. If they were additionally scaled to 2, you'd get 32.00 and 32.18.
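
(A minimal sketch of that scale behavior; the class name is hypothetical:)

import java.math.BigDecimal;

public class CToFSketch {
    public static void main(String[] args) {
        BigDecimal nineFifths = new BigDecimal("1.8");
        BigDecimal thirtyTwo = new BigDecimal("32");
        // multiply's result scale is the sum of the operand scales,
        // so 0 converts to 32.0 and 0.1 converts to 32.18
        System.out.println(new BigDecimal("0").multiply(nineFifths).add(thirtyTwo));   // 32.0
        System.out.println(new BigDecimal("0.1").multiply(nineFifths).add(thirtyTwo)); // 32.18
    }
}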

The problem comes with the other conversions. You want to use as precise a number as possible when multiplying, but then scale to a reasonable value, which is what I asked @teichsta in my previous comment. If a device is only accurate to 0.04" (0.1 cm), it's just silly to report accuracy higher than 0.01" for rain or barometric pressure. For example, say we use 1 mm, which is 0.0393700787" unscaled or 0.04" scaled, and we had 1000 readings of that value; we get the following:

0.0393700787 * 1000 = 39.3700787000
0.04 * 1000 = 40.00

Maybe I'm way off base here, but 40.00 is what I'd expect in this case, not 39.3700787000

@watou
Contributor

watou commented Jul 12, 2015

In your example, I would expect the value to be recorded as 39.3700787000", but I would probably format it for display as 39.4". Or if I were interested in only inches, it would be formatted as 39". A result of 40" is less accurate than 39", and it would bother me as a weather station operator to know that my values' accuracy was being rounded away for each recorded value, causing me to doubt accuracy over past ranges of values.

When Netatmo says their rain gauge is only accurate to 0.1 cm, I don't interpret that to mean that it makes sense to further sacrifice accuracy by rounding it away when that value is recorded in some other unit system. When you do, you actually worsen accuracy. The only place where I'm aware that the extra digits are unwanted is in formatting for display, which is well handled with formatting strings in the label text. Is there some other problem created by allowing the full precision of your conversions into the item state and persistence? Am I incorrect that computations (like average and sum) will not begin to veer away from the values reported by Netatmo due to rounding each sample?

If Netatmo's native reporting units of measure were reversed (Fahrenheit, feet and inches) and were converting to Metric units, I would make the same points.

@robnielsen
Contributor Author

@watou, check out https://en.wikipedia.org/wiki/Significance_arithmetic it has the following:

Significance arithmetic is a set of rules (sometimes called significant figure rules) for approximating the propagation of uncertainty in scientific or statistical calculations. These rules can be used to find the appropriate number of significant figures to use to represent the result of a calculation. If a calculation is done without analysis of the uncertainty involved, a result that is written with too many significant figures can be taken to imply a higher precision than is known, and a result that is written with too few significant figures results in an avoidable loss of precision. Understanding these rules requires a good understanding of the concept of significant and insignificant figures.

This supports my point of view: by reporting values with 10 digits of precision, we falsely give the indication of considerably higher precision than the device supports (which is 2 digits).

The article also refers to the https://en.wikipedia.org/wiki/Significant_figures which has the following under https://en.wikipedia.org/wiki/Significant_figures#Arithmetic:

When performing a calculation, do not follow these guidelines for intermediate results; keep as many digits as is practical (at least 1 more than implied by the precision of the final result) until the end of calculation to avoid cumulative rounding errors.

@teichsta, this basically answers my question on what's reasonable (one additional digit). Do you agree? BTW, 0.001" is 0.0254 mm, and the device reports values in 0.1 mm.
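
(A minimal sketch of the one-extra-digit reading of that rule; the class name and sample value are hypothetical:)

import java.math.BigDecimal;

public class GuardDigitSketch {
    public static void main(String[] args) {
        BigDecimal mmToInches = new BigDecimal("0.0393700787");
        // full-precision intermediate: 2.53937007615
        BigDecimal in = new BigDecimal("64.5").multiply(mmToInches);
        // keep one more digit (3) than the final result while working...
        BigDecimal intermediate = in.setScale(3, BigDecimal.ROUND_HALF_UP); // 2.539
        // ...and round to the device's precision (2) only at the end
        System.out.println(intermediate.setScale(2, BigDecimal.ROUND_HALF_UP)); // 2.54
    }
}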

@watou
Contributor

watou commented Jul 12, 2015

I read those quoted passages differently.

If you formatted for display all digits of your unit conversion, then you would indeed be presenting misleading precision. But that is easily avoided by giving examples and documentation that clearly state that you are merely faithfully converting unit systems (which I believe users would expect of the binding implementation), not implying that the original measurements were accurate to that degree of precision. (The current wiki page for the Netatmo binding, like all the binding documentation I've read, applies formatting strings to DecimalType values, such as %.1f, so the number of decimal places is limited for display, which is sensible.) Think of it as merely projecting the Metric values through a lens as a convenience to the user who wants non-Metric units. The user would expect the lens to project a clear picture, not one that was altered before they even had access to the actual data.

Also, the second passage supports my position: that it is necessary to maintain "as many digits as is practical" in intervening calculations, such as when your goal is to compute sums, averages or other calculations that depend on the most accurate sample values. If you discard whatever accuracy you have by rounding each sample, you will further lose accuracy when adding up rainfall, computing average temperatures, etc.

Users would see diminished value in the binding if they had lower faith in the accuracy of the reported values, and perhaps even open a new issue against it so they could restore the available accuracy. In the meantime, the values already recorded in their persistence stores would be dubious.

@robnielsen
Contributor Author

I actually discussed this with my son, who is a junior in Electrical Engineering at a university ranked in the top 20 in the US. He's the one who provided the info above. Also, I wrote some code to show that 3 digits (possibly 4) is enough:

When converting 0.0, 0.1, ... 99.9 from mm to Inches and back to mm, you need a precision of 3:

import java.math.BigDecimal;

public class Conversion {
    private static final BigDecimal MM_TO_INCHES = new BigDecimal("0.0393700787");
    private static final BigDecimal INCHES_TO_MM = new BigDecimal("25.4");

    public static void main(String[] args) {
        mmToInToMm(2);
        mmToInToMm(3);
    }

    // round-trips every value 0.0, 0.1, ... 99.9 through mm -> inches -> mm,
    // rounding the inches value to the given number of decimal places, and
    // counts how many values survive the round trip unchanged
    public static void mmToInToMm(int precision) {
        int num = 0;
        int same = 0;
        for (int i = 0; i < 100; i++) {
            for (int j = 0; j < 10; j++) {
                num++;

                BigDecimal mm = new BigDecimal("" + i + "." + j);
                BigDecimal in = mm.multiply(MM_TO_INCHES).setScale(precision, BigDecimal.ROUND_HALF_UP);
                BigDecimal mm2 = in.multiply(INCHES_TO_MM).setScale(1, BigDecimal.ROUND_HALF_UP);

                if (mm.equals(mm2)) {
                    same++;
                }
                //System.out.println(mm + ":" + mm2 + ":" + (mm.equals(mm2) ? "same" : "different"));
            }
        }

        System.out.println("num: " + num + " same: " + same);
    }
}

produces the result:

num: 1000 same: 394
num: 1000 same: 1000

So for straight conversion to/from mm/inches, 3 digits of precision is sufficient; any more precision provides no value.

I also wrote the following program to calculate sums using no scaling, and 3 and 4 digits of scaling, when summing:

import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;

public class Conversion {
    private static final BigDecimal MM_TO_INCHES = new BigDecimal("0.0393700787");
    private static final BigDecimal INCHES_TO_MM = new BigDecimal("25.4");
    private static final BigDecimal ONE_HUNDRED = new BigDecimal("100");

    public static void main(String[] args) {
        sum(1000000);
    }

    // sums random mm-to-inches conversions three ways: unscaled, scaled to
    // 3 decimal places per sample, and scaled to 4 decimal places per sample
    public static void sum(int iterations) {
        System.out.println("summing " + iterations + " values");
        BigDecimal sum1 = BigDecimal.ZERO;
        BigDecimal sum2 = BigDecimal.ZERO;
        BigDecimal sum3 = BigDecimal.ZERO;

        Random r = new Random();
        int i = 0;
        while (i++ < iterations) {
            // random number between 0.0 and 99.9
            BigDecimal mm = new BigDecimal("" + r.nextInt(1000)).movePointLeft(1);
            //System.out.println(mm);

            BigDecimal in = mm.multiply(MM_TO_INCHES);
            sum1 = sum1.add(in);

            BigDecimal in2 = in.setScale(3, BigDecimal.ROUND_HALF_UP);
            sum2 = sum2.add(in2);

            BigDecimal in3 = in.setScale(4, BigDecimal.ROUND_HALF_UP);
            sum3 = sum3.add(in3);
        }

        // scale back to 2 digits
        BigDecimal sum1Scaled = sum1.setScale(2, BigDecimal.ROUND_HALF_UP);
        BigDecimal sum2Scaled = sum2.setScale(2, BigDecimal.ROUND_HALF_UP);
        BigDecimal sum3Scaled = sum3.setScale(2, BigDecimal.ROUND_HALF_UP);

        System.out.println("sum1 scaled: " + sum1Scaled + " unscaled: " + sum1);
        System.out.println("sum2 scaled: " + sum2Scaled + " unscaled: " + sum2);
        System.out.println("sum3 scaled: " + sum3Scaled + " unscaled: " + sum3);

        BigDecimal sum2diff = sum1.subtract(sum2);
        BigDecimal sum3diff = sum1.subtract(sum3);
        System.out.println("sum1 sum2 scaled diff: " + sum2diff + " accuracy: "
                + sum2diff.multiply(ONE_HUNDRED).divide(sum1, MathContext.DECIMAL64) + "%");
        System.out.println("sum1 sum3 scaled diff: " + sum3diff + " accuracy: "
                + sum3diff.multiply(ONE_HUNDRED).divide(sum1, MathContext.DECIMAL64) + "%");
    }
}

Here are the results of a few different runs:

summing 1000000 values
sum1 scaled: 1966739.42 unscaled: 1966739.41925376831
sum2 scaled: 1966739.00 unscaled: 1966739.004
sum3 scaled: 1966739.39 unscaled: 1966739.3948
sum1 sum2 scaled diff: 0.41525376831 accuracy: 0.00002111381732855886%
sum1 sum3 scaled diff: 0.02445376831 accuracy: 0.000001243365952327248%

summing 1000000 values
sum1 scaled: 1964500.88 unscaled: 1964500.87988597288
sum2 scaled: 1964500.37 unscaled: 1964500.373
sum3 scaled: 1964500.86 unscaled: 1964500.8575
sum1 sum2 scaled diff: 0.50688597288 accuracy: 0.00002580227772203500%
sum1 sum3 scaled diff: 0.02238597288 accuracy: 0.000001139524706209313%

summing 1000000 values
sum1 scaled: 1966658.47 unscaled: 1966658.47437196111
sum2 scaled: 1966658.29 unscaled: 1966658.294
sum3 scaled: 1966658.49 unscaled: 1966658.4900
sum1 sum2 scaled diff: 0.18037196111 accuracy: 0.000009171493854193497%
sum1 sum3 scaled diff: -0.01562803889 accuracy: -7.946493554245969E-7%

summing 1000000 values
sum1 scaled: 1965534.50 unscaled: 1965534.49799515481
sum2 scaled: 1965534.59 unscaled: 1965534.586
sum3 scaled: 1965534.44 unscaled: 1965534.4441
sum1 sum2 scaled diff: -0.08800484519 accuracy: -0.000004477400181974162%
sum1 sum3 scaled diff: 0.05389515481 accuracy: 0.000002742010118111540%

Over a million iterations, this isn't much of a difference. I don't see how somebody could have lower faith in the accuracy of the binding. Again, 3 should be sufficient.

@watou
Contributor

watou commented Jul 12, 2015

It's no doubt true that the amount of accuracy lost by rounding at hundredths or thousandths of an inch is small, but I still don't understand what problem is solved by knowingly losing accuracy by rounding values you convert from Metric units. I asked earlier but didn't see your answer. I would like to know, as a user, that any values I was handed by the binding were equivalent in all possible ways for the purposes of any calculations I wanted to perform against them, regardless of the unit system they were presented in. If one is rounded off before I get it and the other isn't, then they are not equivalent, and I will know that any averages, sums, etc. using them will be needlessly inaccurate. As a user, this would bother me.

I feel I've run out of ways to present my points to you. If I can be of any further assistance, please reference @watou in your comment and I will chime in. All the best! - John

@robnielsen
Contributor Author

@watou, haven't you essentially done the same thing at https://github.com/openhab/openhab/blob/master/bundles/binding/org.openhab.binding.ecobee/src/main/java/org/openhab/binding/ecobee/internal/messages/Temperature.java#L162? It has:

public final BigDecimal toCelsius() {
    return temp.movePointLeft(1).subtract(THIRTY_TWO).divide(NINE.divide(FIVE), MathContext.DECIMAL32);
}

which is essentially no different from the code:

public final BigDecimal toCelsius() {
    return temp.movePointLeft(1).subtract(THIRTY_TWO).divide(NINE.divide(FIVE), MathContext.DECIMAL128).setScale(5, BigDecimal.ROUND_HALF_UP);
}

In this case, a decision was made that 5 digits of precision was reasonable. Just to show that this works out the same way, the code:

BigDecimal nine = new BigDecimal("9");
BigDecimal five = new BigDecimal("5");
BigDecimal thirtyTwo = new BigDecimal("32");

BigDecimal temp = new BigDecimal("789");
System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).divide(nine.divide(five), MathContext.DECIMAL32));
System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).divide(nine.divide(five), MathContext.DECIMAL128).setScale(5, BigDecimal.ROUND_HALF_UP));
System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).divide(nine.divide(five), MathContext.DECIMAL64));
System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).divide(nine.divide(five), MathContext.DECIMAL128));

produces the result:

26.05556
26.05556
26.05555555555556
26.05555555555555555555555555555556

Using your argument, the code at https://github.com/openhab/openhab/blob/master/bundles/binding/org.openhab.binding.ecobee/src/main/java/org/openhab/binding/ecobee/internal/messages/Temperature.java#L162 should be using MathContext.DECIMAL128 instead of MathContext.DECIMAL32, shouldn't it?

Also, once the code is simplified to use a single multiplication instead of two divisions, you'll run into the same problem, since 5/9 is 0.555555555…; when do you stop? For example, the code:

BigDecimal temp = new BigDecimal("789");
BigDecimal thirtyTwo = new BigDecimal("32");
BigDecimal zeroPointFives = new BigDecimal("0.5555555556");
BigDecimal mmToInches = new BigDecimal("0.0393700787"); // using same number of digits for both

System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).multiply(zeroPointFives, MathContext.DECIMAL32));
System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).multiply(zeroPointFives).setScale(5, BigDecimal.ROUND_HALF_UP));
System.out.println(temp.movePointLeft(1).subtract(thirtyTwo).multiply(zeroPointFives));

produces the result:

26.05556
26.05556
26.05555555764

So again, I'm back to the same question: what is a reasonable level of precision? I don't think we can solve it here. I'll post a question to the group.

@watou
Contributor

watou commented Jul 13, 2015

@robnielsen, you chose 7- and 10-digit multipliers for your unit conversions. The Ecobee binding code you referenced has to use an infinite-digit divisor, so in order to avoid an ArithmeticException (#2793), I chose 7 digits of precision. I still don't understand what problem you are solving by rounding your conversions to so many fewer digits of precision than are available. I hope the wider audience of Netatmo users can help you resolve this. Regards, John
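
(A minimal sketch of the divide vs. multiply distinction described here; the class name is hypothetical:)

import java.math.BigDecimal;
import java.math.MathContext;

public class DivideSketch {
    public static void main(String[] args) {
        BigDecimal f = new BigDecimal("78.9").subtract(new BigDecimal("32"));
        // (78.9 - 32) / 1.8 has a non-terminating decimal expansion,
        // so an unbounded divide throws ArithmeticException
        try {
            f.divide(new BigDecimal("1.8"));
        } catch (ArithmeticException e) {
            System.out.println("unbounded divide: " + e.getMessage());
        }
        // bounding the precision with a MathContext avoids the exception
        System.out.println(f.divide(new BigDecimal("1.8"), MathContext.DECIMAL32)); // 26.05556
    }
}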

@robnielsen
Contributor Author

Interesting, @watou, so you are basically doing the same thing I am. I could use your own argument against you: you should be using MathContext.DECIMAL128.

@robnielsen robnielsen changed the title Netatmo: Added additional types along with intervals to the binding Netatmo: Added additional types, fix for #1820, and other improvements Jul 15, 2015
@robnielsen
Contributor Author

@teichsta, I've implemented the scaling following https://en.wikipedia.org/wiki/Significant_figures#Arithmetic. Since the value is consumed downstream, I've treated it as an intermediate result; because it's published, I don't want to give downstream consumers the indication of greater precision. If this is acceptable, I'll also update the wiki to indicate this.

The PR also includes a fix for defect #1820 which I was able to reproduce.

@robnielsen robnielsen changed the title Netatmo: Added additional types, fix for #1820, and other improvements Netatmo: Added additional types, fix for #1820, #3242, and other improvements Oct 4, 2015
@nesys
Contributor

nesys commented Oct 4, 2015

Fantastic piece of code, Rob :) it works like a charm. Please add it in the final release as soon as possible!

@teichsta
Member

@robnielsen @watou thanks a lot for arguing all the pros and cons better than I would be able to! But I wouldn't take it too philosophically :-)

Since you never know when to treat a value as intermediate or as a final result (states can be stored using the PersistenceService and recalculated in another context), I would simply use what BigDecimal provides without any further scaling (except in the UI).

The cited False Precision definition talks about the number of significant figures used in the presentation of data, which definitely shouldn't imply better precision than the real sensors can provide.

Although I agree that, from the mathematician's point of view, it might not be 100% correct to leave the precision as is (False Precision), following this general implementation and review rule might be simpler in most cases, even though it leads to a false impression of accuracy.

Can we agree on that?

@watou
Contributor

watou commented Oct 28, 2015

@teichsta I agree.

@robnielsen
Contributor Author

@teichsta, @watou is essentially scaling to IEEE 754R Decimal32 format at https://github.com/openhab/openhab/blob/master/bundles/binding/org.openhab.binding.ecobee/src/main/java/org/openhab/binding/ecobee/messages/Temperature.java:

/**
 * Convert this Temperature to Celsius.
 *
 * @return temperature in Celsius
 */
public final BigDecimal toCelsius() {
    return temp.movePointLeft(1).subtract(THIRTY_TWO).divide(ONE_POINT_EIGHT, MathContext.DECIMAL32);
}

I assume that if the same scaling is used with this pull request, it would be acceptable?

@watou
Contributor

watou commented Oct 29, 2015

As explained in July, this is the only way I am aware of to avoid the ArithmeticException created by an infinite-digit divisor. Since your unit conversions only involve BigDecimal.multiply, they are not at risk of throwing ArithmeticExceptions, and so no rounding or truncation would be needed.
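
(A minimal sketch of why a plain multiply never needs a MathContext; the class name and values are hypothetical:)

import java.math.BigDecimal;

public class MultiplySketch {
    public static void main(String[] args) {
        // multiply is always exact: the result scale is just the sum of the
        // operand scales (1 + 10 = 11), so it cannot throw ArithmeticException
        System.out.println(new BigDecimal("0.3").multiply(new BigDecimal("0.0393700787")));
        // -> 0.01181102361
    }
}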

@robnielsen
Contributor Author

Yes @watou, but you chose to use MathContext.DECIMAL32; why not MathContext.DECIMAL64 or MathContext.DECIMAL128? I'll remove the scaling, but I'm also going to use less precise values in the calculation, since 9-10 digits of precision make no sense when we are talking about measurements in degrees or inches.

…e intervals 30 minutes,

1 hour, 3 hours, 1 day, 1 week and 1 month. The interval is configurable and defaults to 1 day.

The pull request #2820 that I created didn't have any scaling of converted values, this was giving
the false impression of higher precision than is actually available (many digits of precision).
The values are treated as intermediate values scaled according to the Significant Figures and
Significance Arithmetic rules, which are widely used with scientific data.

Fix issue #1820, if the token was expired, the token was refreshed and a recursive call
was made to re-execute the binding, this would cause null pointer exceptions in the original
execute call to the binding.

Fix issue #3242, handle case when Netatmo provides a null altitude

Added support to also refresh the token if the call to Netatmo indicated the token was invalid.

Added a better message when the StartCom CA certificate is missing
@robnielsen
Contributor Author

@teichsta, I removed the scaling and used a multiplier with a precision of 4 decimal places for the conversion calculations. I hope this is acceptable.

The commits have also been squashed and are ready to be merged if you approve.

teichsta added a commit that referenced this pull request Oct 29, 2015
Netatmo: Added additional types, fix for #1820, #3242, and other improvements
@teichsta teichsta merged commit 52b86fd into openhab:master Oct 29, 2015
@teichsta
Member

Yes, thanks a lot @robnielsen and @watou!

@robnielsen
Contributor Author

Thanks @teichsta, I'll get the wiki updated this weekend with the new functionality.
