Stable Isotope Geochemistry


Simon Prosser <[log in to unmask]>
25 Jul 95 06:58:12 EDT
As a representative of a mass spectrometer company I'd hate to think I was
responsible for misleading people into believing that their data are more
precise/accurate than they really are. External precision can mean a great many
things.
	When we quote it for a dual-inlet IRMS we mean the precision (1 standard
deviation) that can be obtained from flooding the manifold with gas and
measuring each port. I quite agree with James O'Neil that it has no bearing on
the precision to which a real sample can be extracted and prepared and even less
on the accuracy to which this can be done. It is quoted as a statement that once
you get your sample gas into the instrument, the instrument itself should not
add to the errors by more than this amount.
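The zero-enrichment figure described above is simply the 1-sigma scatter of repeated measurements of identical aliquots of the same gas. A minimal sketch of the arithmetic (the ten delta values below are made-up illustrative numbers, not real instrument data):

```python
import math

def external_precision(deltas):
    """1-sigma (sample) standard deviation of repeated
    zero-enrichment delta measurements of the same gas."""
    n = len(deltas)
    mean = sum(deltas) / n
    var = sum((d - mean) ** 2 for d in deltas) / (n - 1)
    return math.sqrt(var)

# e.g. ten aliquots of the same CO2 measured against itself, in permil
runs = [0.012, -0.008, 0.003, 0.015, -0.011,
        0.006, -0.002, 0.009, -0.005, 0.001]
sigma = external_precision(runs)
```

The point of the original paragraph stands: this number only bounds what the instrument itself adds, not what sample extraction and preparation add on top.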
	When we quote external precision, for example, for a continuous-flow
combustion IRMS, then it means something quite different. It is the precision to
which an ideal, well homogenised, easily combustible sample can be combusted and
measured - maybe one step closer to reality but still no guarantee of precision
for other types of sample.
	We quote these figures to show how well the instruments work under ideal
conditions and because they are figures of merit that the isotope community
demands and uses to compare instruments when making a purchasing decision. I had
hoped that we had been clear about what we meant in each instance - perhaps we
should be a bit more explicit in the future.

The 'V' debate led me to another issue. Precision is one problem, accuracy is
another. In addition to the question of standardisation procedure there is a
muddle one stage further back - the process of converting delta 45s and 46s to
delta 13Cs and 18Os. There are a number of different methods currently being
used - I know of at least three; the original method proposed by Craig, an
updated Craig method with the correction constants changed to reflect newer
absolute PDB ratios, and a quadratic equation method (I am unsure of the origin
of this method - I first came across it in 1983 through Ian Wright at the Open
University though I have also heard John Hayes' name used in conjunction with
it).  These different methods all give a different final answer even if the same
standards are used. I prefer the quadratic method because it makes fewer
approximations than the Craig method and is fundamentally more accurate. It also
retains its accuracy at the extreme enrichments encountered in tracer
experiments, so it offers a standardisation of method between the different
isotope communities: the geochemists, biologists, agronomists, and
environmental scientists.
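For what it's worth, the 17O-correction arithmetic behind such a quadratic approach can be sketched as below. This is my own illustrative reconstruction, not the canonical algorithm: the mass-dependence exponent and the absolute reference ratios here are placeholder assumptions that each lab would replace with its adopted values.

```python
# Sketch of a 17O-corrected reduction of CO2 ion-current ratios
# (45/44, 46/44) to 13C/12C and 18O/16O ratios. All constants are
# ILLUSTRATIVE placeholders, not recommended values.

A = 0.516                 # assumed mass-dependence exponent: 17R = K * 18R**A
R13_REF = 0.0112372       # illustrative 13C/12C of the reference CO2
R18_REF = 0.0020790       # illustrative 18O/16O of the reference CO2
R17_REF = 0.0003860       # illustrative 17O/16O of the reference CO2
K = R17_REF / R18_REF ** A

def ratios_45_46(r13, r18):
    """Mass-balance: 45/44 and 46/44 ion-current ratios of CO2."""
    r17 = K * r18 ** A
    r45 = r13 + 2.0 * r17
    r46 = 2.0 * r18 + 2.0 * r13 * r17 + r17 * r17
    return r45, r46

def solve_13_18(r45, r46, tol=1e-15, max_iter=50):
    """Invert the two mass-balance equations for 13R and 18R by
    fixed-point iteration: guess 18R, get 17R from the mass-dependent
    relation, 13R from the 45 balance, then update 18R from the 46
    balance."""
    r18 = r46 / 2.0       # crude starting guess (ignores 17O terms)
    for _ in range(max_iter):
        r17 = K * r18 ** A
        r13 = r45 - 2.0 * r17
        r18_new = (r46 - 2.0 * r13 * r17 - r17 * r17) / 2.0
        if abs(r18_new - r18) < tol:
            return r13, r18_new
        r18 = r18_new
    return r13, r18

# Round trip: delta values (permil vs the reference) in and back out
d13_in, d18_in = 25.0, -10.0          # an arbitrary test sample
r13 = R13_REF * (1.0 + d13_in / 1000.0)
r18 = R18_REF * (1.0 + d18_in / 1000.0)
r45, r46 = ratios_45_46(r13, r18)
r13_out, r18_out = solve_13_18(r45, r46)
d13_out = (r13_out / R13_REF - 1.0) * 1000.0
d18_out = (r18_out / R18_REF - 1.0) * 1000.0
```

The point about methods disagreeing follows directly: change the exponent or the absolute reference ratios and the recovered deltas shift, even with identical raw 45/46 data.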
	All the methods used depend on knowledge of the absolute ratios for PDB.
There is even confusion over this. R13 seems beyond reproach at the moment, R18
was updated some 15 years ago but, to my knowledge, R17 has not changed (which
seems odd if the original 18O/16O value was so far out). The absolute
measurement of R17 of SMOW by ion probe by Fahey et al. in 1987 (0.00038288 +/-
0.00000028) suggests that the Craig value for R17 of PDB is 23.5 permil light.
Is there a new accepted value for R17 PDB?
	A while ago there was an IAEA symposium to sort this out, and come up
with recommendations so that we can all use the same method. It was reported on
in Brisbane last year, but since then I have heard nothing. Has a decision been
reached yet? Until we all use the same data reduction routines, data will not be
completely comparable between different labs/instruments even if we all
normalise to the same standards - whether they have a 'V' in front of them or
not.

Simon Prosser
Europa Scientific