Matt,
I recommend this website for definitions and steps for analyzing uncertainty:
http://www.itl.nist.gov/div898/handbook/mpc/section5/mpc5.htm
If you really want to get into it, here is the official ISO guide:
http://www.bipm.org/en/publications/guides/gum.html
But particular to your question: what are the specifics of why standard deviation is considered an unacceptable parameter for measurement uncertainty? Though it may be considered "naive" to include only the error associated with repeated measurements, the standard deviation is a simple calculation to make and does give a good idea of the spread of isotope ratios associated with a particular sample. It is an important part of the "Type A" evaluation of uncertainty as defined in the links above. However, if you need to report errors associated with between-run variation, normalization, and other sources such as sample inhomogeneity, a more complex evaluation must be made, which necessarily requires more data. Reporting of error should be fit for purpose for your research question.
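For illustration, here is a minimal Python sketch of that Type A calculation, using a handful of hypothetical replicate delta values for a single sample (the numbers are made up, not from any real run):

import statistics

# Hypothetical replicate d13C values (permil, VPDB) for one sample.
replicates = [-25.12, -25.08, -25.15, -25.10, -25.11]

n = len(replicates)
mean = statistics.mean(replicates)
s = statistics.stdev(replicates)   # sample standard deviation: spread of the replicates
u_A = s / n ** 0.5                 # Type A standard uncertainty of the mean, per the GUM

print(f"mean = {mean:.3f} permil  s = {s:.3f} permil  u_A = {u_A:.3f} permil")

The standard deviation s describes the spread of individual measurements; dividing by sqrt(n) gives the standard uncertainty of the mean, which is what the Type A evaluation contributes to a combined uncertainty budget.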
One further comment: if you are reporting delta values, I warn against using relative standard deviations, for what may be obvious reasons. There is no reason to believe a normalized delta value of d13C(VPDB) = 10‰ ± 0.1‰ is a hundred times more "precise" than a value of 0.1‰ ± 0.1‰, yet that is exactly what the relative standard deviations would imply. Likewise, a negative mean delta value produces a nonsensical negative relative standard deviation.
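To see the arithmetic, a short Python sketch (again with hypothetical numbers) of how the relative standard deviation misbehaves on the delta scale:

def rsd(mean, sd):
    # Relative standard deviation: sd divided by the mean.
    return sd / mean

print(rsd(10.0, 0.1))   # 0.01  -> looks very "precise"
print(rsd(0.1, 0.1))    # 1.0   -> looks 100x worse, despite identical spread
print(rsd(-5.0, 0.1))   # -0.02 -> negative, meaningless as a precision

The absolute spread (0.1‰) is identical in all three cases; only the arbitrary position of the mean on the delta scale changes, which is exactly why the RSD is misleading for normalized delta values.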
John D. Howa