On this page, we will take a brief look at some of the complications encountered when attempting to define units of measurement with high accuracy.

The **second** is defined as the time taken by 9,192,631,770
oscillations of the microwave radio frequency emitted by an atom of
Cesium-133, in its ground state, when the atom makes the transition from the
upper hyperfine level of that state to the lower one. This definition was
chosen to make the length of the SI second the same as that of the second of
Ephemeris Time, and so when this
second began to be used in civil timekeeping (the changeover to "atomic time"),
the use of "leap seconds" became necessary immediately.

The 1952 definition of the second of Ephemeris Time was based on the instantaneous length of the tropical year at the beginning of the year 1900 being 31,556,925.9747 seconds; that epoch corresponds to Noon GMT of December 31st, 1899, reckoned in Ephemeris Time rather than civil time. However, while the length of the year at that epoch was the basis for the standard, the second was not 1/86,400 of the length of the day at that time; it was instead based on the average length of the day from 1750 to 1892, as previously worked out by Simon Newcomb.

At one time, the **metre** was defined as 1,650,763.73 wavelengths
of the orange-red line in the spectrum of Krypton-86, which corresponds to
the radiation emitted by an electron making the transition between the levels
2p₁₀ and 5d₅ in an unperturbed fashion. This definition was adopted on October 14, 1960, at the 11th CGPM (Conférence Générale des Poids et Mesures, or General Conference on Weights and Measures).

However, a scientist proceeded to measure the speed of light by performing an accurate measurement of the ratio between the wavelengths (and/or frequencies) of these two types of radiation. In Zen-like fashion, this convinced those responsible for the standards of the fundamental absurdity of the situation, and so now the definition of the second stands, but the definition of the metre has been replaced; now, the metre derives from the second, through the speed of light, which is, by definition,

2.99792458 * 10^8

metres per second.

This definition was adopted on October 21, 1983, at the 17th CGPM.

Subsequently, Planck's constant was defined to be exactly

6.62607015 * 10^-34

kilogram-square metres per second, effective May 20, 2019, at the 26th CGPM.

The purpose for doing this was to allow the fundamental unit of mass, the kilogram, to be derived from physical constants rather than depending on a prototype kilogram. As part of this, the Avogadro constant was also redefined, to be exactly

6.02214076 * 10^23

with its unit being the reciprocal mole (that is, elementary entities per mole).

This leads to the length of a metre being approximately 30.6633189885 wavelengths of the Cesium-133 microwave radiation noted above. The fact that the wavelength is not a tiny fraction of a metre is why this radiation had previously been considered more suitable as a standard of time than as one of distance: a radio frequency, which can be manipulated by electronics, is more accessible than an optical frequency.
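Since both defining constants are exact integers, this figure can be checked directly; a minimal sketch in Python:

```python
from fractions import Fraction

# Both constants are exact by definition, so the ratio is an exact fraction.
CS_OSCILLATIONS_PER_SECOND = 9_192_631_770   # defines the second
METRES_PER_SECOND_OF_LIGHT = 299_792_458     # defines the metre

# Wavelengths of the caesium radiation that fit in one metre:
wavelengths_per_metre = Fraction(CS_OSCILLATIONS_PER_SECOND,
                                 METRES_PER_SECOND_OF_LIGHT)
print(float(wavelengths_per_metre))          # ~30.6633189885
```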

The speed of light in a vacuum can also be given in units of the uniform US/British inch of 2.54 centimetres, which leads to light travelling 186,282 miles, 698 yards, 2 feet, and 5 21/127 inches every second in a vacuum.
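Because the inch is exactly 2.54 centimetres, this Imperial breakdown is exact rational arithmetic, and can be verified with Python's fractions:

```python
from fractions import Fraction

# One inch is exactly 2.54 cm, so the conversion is exact.
C_METRES_PER_SECOND = 299_792_458
INCH_CM = Fraction(254, 100)

total_inches = (C_METRES_PER_SECOND * 100) / INCH_CM  # metres -> cm -> inches

miles = total_inches // 63_360        # 63,360 inches per mile
rest = total_inches - miles * 63_360
yards = rest // 36
rest -= yards * 36
feet = rest // 12
inches = rest - feet * 12             # remaining inches, as an exact fraction

print(miles, yards, feet, inches)     # 186282 698 2 656/127 (= 5 21/127 in)
```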

It is more convenient, however, to measure the length of waves of light through interference fringes, and the time between oscillations of radio waves through electrical circuitry. So the dual definitions allowed both time and distance to be defined more accurately, at least at the time they were in use. Presumably, though, ways have since been found to derive more accurate length standards from the Cesium-133 frequency or wavelength.

A recent news item notes that one resonance of the Scandium nucleus (presumably the Scandium-45 nucleus, as this element has only one stable isotope) at 12.38959 keV is so narrow that it should allow for standards of time and length that are accurate to one part in 10^19.

The researchers investigating the possibility of this further improvement in our standards of measurement improved our knowledge of the value of this resonance by a factor of 250, but note that it's still a seven-digit value, and we need a nineteen-digit one to even define a standard. And not only is it hard to make the oscillations of an X-Ray drive pulses in a clock, it's also hard to see them in an interferometer!

However, it's noted that they used "crystal optics" to get the new value of the resonance. And since it is possible to perform X-Ray diffraction experiments, perhaps one way to make use of this as a standard would be, first, to measure the angle by which these X-Rays are diffracted by some crystal; the spacing between the atoms of that crystal would then be known to an accuracy of one part in 10^19, and that spacing could in turn be measured directly, thus allowing these X-Rays to calibrate measuring rods and clocks after all.

Incidentally, as far back as 1927, a definition of the metre in terms of a light wavelength existed, but that definition was based on a line in the spectrum of cadmium: the length of the metre was defined as 1,553,164.13 times the wavelength of the 6438.4696 Ångström cadmium red line.

A. A. Michelson had found the cadmium red line to be particularly monochromatic; he had made a measurement of the International Prototype Metre in 1892 which indicated the length of the metre as 1,553,163.5 times the wavelength of the cadmium red line, the wavelength of which was taken to be 6438.472 Ångström units.

The figure used in the 1927 standard of 1,553,164.13 wavelengths per metre corresponds closely to a wavelength of 6438.4696 Ångströms for the cadmium red line; that the cadmium red line has a wavelength of exactly 6438.4696 Ångström units was adopted as a standard for the definition of the international Ångström unit in 1907, based on measurements in 1906 by Benoit, Fabry, and Perot of the length of the metre in terms of the cadmium red line. The supplemental standard adopted in 1927, however, derives from one proposed by Michelson in 1908.
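The relationship between the two wavelength figures and the two wavelength counts per metre can be checked directly; a sketch (the historical figures agree with the straight division only to within a few hundredths of a wavelength, reflecting rounding at the time):

```python
# The metre expressed in wavelengths of the cadmium red line, for the two
# historical wavelength values (in international Ångströms, 1 Å = 1e-10 m).
ANGSTROMS_PER_METRE = 1e10

count_1892 = ANGSTROMS_PER_METRE / 6438.472    # Michelson's wavelength value
count_1906 = ANGSTROMS_PER_METRE / 6438.4696   # Benoit, Fabry, and Perot
print(round(count_1892, 2), round(count_1906, 2))
```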

Cadmium, however, has eight stable isotopes, the most common of which, Cadmium-114, has a natural abundance of 28.73%, so the accuracy of a standard based on natural cadmium would be limited by the small variations in the wavelengths of the same spectral line between different isotopes.

Prior to the use of Krypton-86 for the standard length of the metre, another possibility that was considered was to use Mercury-198. This isotope of mercury was created in 1946 by bombarding gold with neutrons, as gold, like aluminum, has only one stable isotope, and this method of creating isotopically-pure mercury was simpler than attempting to separate the isotopes of mercury by their minuscule difference in weight. Its 5460.753 Ångström spectral line was the one considered for use as a standard. Also, a current secondary standard for the metre is an iodine-stabilized helium-neon laser, the light from which has a wavelength of 6329.9139822 Ångströms.

**It is necessary** at this point
to add that 3515.3502 wavelengths of the cadmium red line would, by the
1927 definition, span some 2.2633475317 millimetres (as opposed to 2.2633485174
millimetres), which seems to indicate a discrepancy in the definition of the
**Potrzebie**. As a little arithmetic shows that this definition of the Potrzebie
would lead to a metre of 1,553,163.45 wavelengths of the
red line of cadmium in length, it seems apparent that Donald Knuth, then a
19-year-old high school student, had converted from millimetres to wavelengths
of cadmium light using a reference giving the 1892 figure.
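That little arithmetic can be sketched in Python:

```python
WAVELENGTHS = 3515.3502           # the Potrzebie, in cadmium red line wavelengths
CD_RED_1927_M = 6438.4696e-10     # 1927 wavelength value, in metres
KNUTH_MM = 2.2633485174           # the Potrzebie's metric definition, in mm

# Length of the Potrzebie under the 1927 wavelength standard:
mm_1927 = WAVELENGTHS * CD_RED_1927_M * 1000
print(round(mm_1927, 10))         # ~2.2633475 mm, not 2.2633485 mm

# Wavelengths per metre implied by taking the metric definition as exact:
implied_per_metre = WAVELENGTHS / (KNUTH_MM / 1000)
print(round(implied_per_metre, 2))  # ~1,553,163.45, the 1892-style figure
```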

In my efforts to sort out the mystery of that discrepancy, which led me to finding out about the 1892 figure, I had encountered a biographical essay on A. A. Michelson by Robert A. Millikan giving 6438.472 Ångströms as the wavelength of the red line of cadmium as determined by Michelson, which would indeed lead to a standard of about 1,553,163.5 wavelengths per metre. In the essay, the resulting standard was given as 1,555,165.5, which I took to be a misprint. Thus, the mystery of the difference between the length of the Potrzebie as defined in terms of the metre and that as defined in terms of the cadmium red line appeared to be solved, and further searching led me both to confirmation that 1,553,163.5 was the figure arrived at by Michelson in 1892 and to the finding that the later standard derived from the 1906 measurements of Benoit, Fabry, and Perot as noted above.

It may also be noted that in 1964 (or, according to some sources, 1959), an agreement was reached between the U.S. and Britain to define the inch as 2.54 centimetres.

Prior to 1964, the inch was defined in the U.S. on the basis that a metre was exactly 39.37 inches long, which led to the inch being about 2.540005 centimetres long, and in Britain the inch was 2.539997 centimetres in length. (One older reference gives the metre being 39.37079 inches, and the inch therefore being 2.539954 centimetres in length.)

Or is the British inch 2.539996 centimetres? There are sources giving both values for the British inch. I found the 2.539997 figure in the Encyclopedia Britannica article on weights and measures, while the 2.539996 figure occurs in news stories from 1938 and 1958 about the ongoing negotiations to establish agreement between Britain and the United States on the inch of 2.54 centimetres, so I would have been inclined to believe the 2.539997 figure to be the more accurate one. Yet another source gives the length of the inch based on the British "Bronze Yard #11" to be 2.53999944 centimetres.

Since this was written, I have found a document on the Web, entitled "Which Inch", which sorts out a considerable amount of confusion.

In the case of the United States, there were several standard values for the inch:

The Troughton Bar, given by Britain to the United States in 1815, was carefully measured, and based on that, in 1832, a standard inch 2.54006833 centimetres in length was adopted.

Then, in 1856, the U.S. received the British "Bronze Yard #11", which was conformant with the new standard British inch adopted in 1855. This gave the new standard American inch of 2.53999944 centimetres noted above.

The definition of the inch that appears in so many older textbooks, where one metre equals 39.37 inches, was adopted in 1893, and the yard based on this standard is called the "Mendenhall Yard".

And then the modern standard of 2.54 centimetres to the inch was adopted in 1959.

In Canada, on the other hand, the inch was officially 2.54 centimetres long ever since 1951. However, as I have recently learned from Wikipedia, instead of being defined as 2.54 centimetres of the metre embodied in the platinum-iridium bar in France, it was defined as 2.54 centimetres according to the 1927 supplementary standard based on cadmium light. This particular definition of the inch, however, did not originate either in 1951 or in Canada.

Instead, it was proposed in 1930 by the British Standards Institute, and it was adopted by industry in several countries within a few years as a convenient and, most importantly, reproducible standard. However, I have learned from Wikipedia that the origin of the inch of 2.54 centimetres goes back to 1912. Carl Edvard Johansson made gauge blocks for inch measure based on the inch of 2.54 centimetres, with a reference temperature of 20 degrees Celsius (68 degrees Fahrenheit), as a compromise between the American and British standards (the reference temperature for measurements in Britain being 62 degrees Fahrenheit, while that in the United States was 68 degrees); as the discrepancy between the different inches was so tiny, his gauge blocks and the definition on which they were based became popular in industry.

In 1946, the British Commonwealth Scientific Conference recommended that Commonwealth nations make this inch the official standard, and Canada was simply the first and only country to do so prior to the agreement between the United States and Britain to move to an inch of 2.54 centimetres.

The United States was the first country to join the agreement, in 1959; Australia and Britain signed on later, but in addition made the change to the new standard effective on January 1, 1964, which explains why some sources date the agreement from 1959 and others from 1964.

Unlike the case of the U.S. inch, where the inch was defined so that 39.37 inches equalled one metre, so that the U.S. inch must therefore be 2.54000508001016002032004064... centimetres, the British inch was defined by a separate physical standard, so there is no inherent exact ratio between the British inch and the metre.
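The curious digit pattern arises because 39.37 times 2.54 is 99.9998, so 100/39.37 equals 2.54/(1 - 0.000002), a geometric series whose six-digit blocks keep doubling; a sketch:

```python
from decimal import Decimal, getcontext

# 100/39.37 = 10000/3937 = 2.54/(1 - 2e-6): a geometric series whose
# six-digit blocks double: 000508, 001016, 002032, 004064, 008128, ...
getcontext().prec = 40
us_inch_cm = Decimal(10000) / Decimal(3937)
print(us_inch_cm)
```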

A 1928 scientific paper, titled "A New Determination of the Imperial Standard Yard to the International Prototype Metre", by Sears, Johnson, and Jolly, gave the ratio of 1 metre to 39.370147 British inches, which gives rise to an inch of 2.5399955596... centimetres. Incidentally, the International Prototype Metre was made in 1872, while the Imperial Standard Yard dates back to 1855.

An earlier measurement, from 1895, gave the length of the metre as 39.370113 British
inches, leading to a British inch of 2.53999778969... centimetres. That rounds to 2.539998,
and so it cannot alone be the source of the other value, despite the fact that one source
notes that the 39.370113 figure was the *de facto* standard for the length of the
British inch for many years.

The Imperial Standard Yard is made from a bronze alloy, 82% copper, 13% tin, and 5% zinc, and it is defined to have its standard length at a temperature of 62 degrees Fahrenheit; the International Prototype Metre is made of 90% platinum and 10% iridium, and its length is valid at 0 degrees Celsius (32 degrees Fahrenheit, or the freezing point of water).

Incidentally, the Imperial Standard Yard was noted as having shrunk by one part per million in a span of 20 years, calling its accuracy into question. I have recently encountered a paper that noted that a pair of 10-foot secondary standards, made from wrought iron, when measured again in 1953, were found not to have shrunk appreciably, unlike the Imperial Standard Yard, which is encouraging news to those wanting to produce measuring rods with long-term dimensional stability who can't quite afford the price of platinum-iridium alloy.

Looking at photographs of old slot machines, their mechanisms, and reel strips, usually it isn't possible to derive from them a precise size for the items in the photograph. I had tried calculating estimates based on the fact that the dimensions of coinage are standardized, but I was not able to achieve the level of precision I would have liked. However, an image of an uncut sheet with reel strips offered for sale gave a result consistent with one description of a reel strip for an old slot machine that did give a size.

On at least some older mechanical slot machines, the spacing between the symbols appears to have been just under 1 1/4 inches, which is consistent with the diameter of the reels of the slot machine being 7 7/8". As it happens, though, 7 7/8 inches is almost exactly equal to 20 centimetres.

It is slightly more. An inch is 2.54 centimetres; the old American standard had the metre as having a length of 39.37 inches; had 7 7/8" been exactly 20 cm, then the metre would have been 39.375 inches, making the inch 2.5396825... centimetres, significantly shorter than even the British value. 7 7/8", thus, actually is 20.0025 cm.
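The arithmetic is easy to check; a sketch:

```python
# The reel diameter of 7 7/8 inches, in centimetres, and the inch that
# would result if 7 7/8 inches were instead exactly 20 centimetres.
reel_inches = 7 + 7 / 8
reel_cm = reel_inches * 2.54                 # 20.0025 cm

hypothetical_inch_cm = 20 / reel_inches      # ~2.5396825 cm
hypothetical_metre = 100 / hypothetical_inch_cm  # 39.375 such inches per metre
print(round(reel_cm, 4), round(hypothetical_inch_cm, 7))
```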

Charles Fey was of German ancestry; could he have chosen to make the reels of his original slot machine to a metric dimension, which the other early full-size slot machines all copied? Of course, it is also possible the diameter of the reels was a nice round 8 inches, even if my calculations, based on data which is still imprecise, suggested 7 7/8". If the spacing between symbols needed to be very close to 1 1/4", but the size of the reels still something easy to measure on a ruler, 7 31/32" would be a very close fit.

I came across a photo of reel strips from which it was easier to form an estimate of their size, and while at first, due to an error in calculation, I obtained a larger reel size, double-checking has instead confirmed the spacing of the symbols is close to 1 1/4" on the reel strips for a Mills slot machine.

It may be noted that the Pyramid Inch was claimed (by Charles Piazzi Smyth) to be 1.00106 English inches, so that would make it about 2.5426894 centimetres long. The Pyramid inch was said to be 1/25th of a royal cubit. Earlier, John Taylor, who had supplied the inspiration for Charles Piazzi Smyth, gave the Pyramid Inch as being 1.00133 English inches instead.

In fact, an ordinary cubit, about 18 inches long (so they were at least right that cubits related better to Imperial measure than to the metric system), was divided into six spans (each three inches long), which were in turn divided into four digits (each 3/4 of an inch; and, indeed, the keys on our typewriter keyboards have 3/4 of an inch spacing even today, even when it's specified as 19.05 mm). A royal cubit is seven spans instead of six, and so, nominally, it should be 21 inches long, but then standards of measure were less accurate in those days. The royal cubit was used in the construction of the Great Pyramid; thus, its sides had a rise of one royal cubit for a run of five and one-half spans, and the ratio of run to rise, multiplied by four, gives 22/7, or 3 1/7, giving the appearance that pi is involved in the construction of the Pyramids.

In fact, though, serious archaeologists and historians now know that the Egyptian royal cubit was about 52.63 centimetres (give or take 3 mm), or 20.72 inches in length - so it was indeed near to 21 inches, but actually somewhat smaller, and thus not 25.0265 inches long. To the extent, therefore, that such a thing as an ancient Egyptian inch has any meaning, it would be about 98 2/3 percent of an inch, not 1.00106 inches, in length.

The reference by Richard Lepsius which mentions measurements of several Egyptian royal cubit rods is available online. Fourteen of them were mentioned, and their lengths were, as best I can make out:

1) 523.5 mm
2) 523 mm on one side, 525 mm on the other
3) 525 mm (but noted as being in 7 fragments)
4) 524 mm
5) no length given
6) 526.5 mm
7) 528.5 mm (based on it being 5 mm longer than the first one)
8) 523 mm
9) 21.21 feet (528.7 mm)
10) 525.98 mm
11) no length given
12) no length given
13) no length given
14) 524.451 mm

It is also noted elsewhere that Flinders Petrie found that the length of the Egyptian cubit appears to have gradually increased over time, presumably due to errors in repeated copying. Thus, the fact that the most precisely measured length is one of the shorter ones may not be a bad thing; 524.451 mm works out to 20.64768 inches, 98.32% of 21 inches.

While the Egyptian measure relates to Imperial measure by a factor of about 0.9867, or 0.9832, which tempts one to settle on 0.985, the currently accepted value for the Roman foot makes it .971 feet long; thus, while the foot grew on its way to Britain, in the middle, as it passed through Rome, it shrank. And the Romans did divide the foot into 16 digits as well as 12 inches, and so linking the cubit to the inch as I have done is legitimate.

I think it is unfortunate that they missed their chance to define the inch as being about 2.540002 centimetres in length, so that the diagonal of a square 152 inches on a side would be exactly 546 centimetres, or the diagonal of a square 273 centimetres on a side would be exactly 152 inches. After all, supporters of the metric system have always criticized the Imperial system as irrational; and it would be convenient if having two systems of measurement allowed one, by using both of them, to measure exactly both the sides of a square and its diagonal.

Of course, lengths of 152 inches and 273 centimetres are somewhat unwieldy. However, as rulers measuring inches are often divided into tenths of an inch, one could relate 15.2 inches to 273 millimetres. But inch rulers are more often divided into sixteenths of an inch.

If one were to use the same method to relate the sixteenth of an inch to a millimetre, defining the inch as about 2.5399946 centimetres would lead to a square 284 millimetres on a side having a diagonal of 15 and 13/16 of an inch.

However, failing changes in our systems of measurement, one can always simply make use of the fact that 20 squared is 400, 21 squared is 441, and 29 squared is 841, the sum of the first two, to come reasonably close to a 45 degree angle and still use exact distances.

Also, even if redefining the inch is excluded, if only a single unit of measurement is used, comparable ratios to approximate the square root of two would be 239:169 and 577:408, which are in error by 0.000875 percent and 0.000150 percent respectively, while, using the inch of 2.54 centimetres, the ratio 152 inches to 273 centimetres approximates the square root of two with an error of 0.000078 percent.
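The three error figures can be verified directly; a sketch:

```python
import math

# Relative error, in percent, of each approximation to the square root of two.
SQRT2 = math.sqrt(2)
approximations = {
    "239:169": 239 / 169,
    "577:408": 577 / 408,
    "152 inches : 273 cm": (152 * 2.54) / 273,  # mixed-unit trick
}
errors = {name: abs(value - SQRT2) / SQRT2 * 100
          for name, value in approximations.items()}
for name, error in errors.items():
    print(f"{name}: {error:.6f}%")
```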

The diagonal of a square 273 centimetres on a side is 386.0803025278548... centimetres, while 152 inches of 2.54 centimetres each are 386.08 centimetres, so there is an excess of 0.0003025278548... centimetres. The older U.S. inch, such that 39.37 inches equal one metre, is still in use for survey purposes, and this inch is equal to 2.54000508001016... centimetres. If, of the 152 inches of the diagonal, 59 and 9/16 of those inches were measured using the older U.S. inch, and the other 92 and 7/16 of those inches were measured using the current inch of 2.54 cm, an even closer approximation to the square root of two would be obtained.

A more approximate measurement of the diagonal of the square can be obtained using much simpler numbers. The diagonal of a square 9 centimetres on a side is 5.0109929... inches in length. For comparison, the diagonal of a square 7 centimetres on a side is 9.8994949... centimetres in length. The discrepancy, in addition to being in the opposite direction, is very nearly 3.6 times as large. So, the diagonal of a square 39.4 centimetres on a side, which is 55.7200143574999... centimetres, is very close to 10 centimetres (roughly the diagonal of a square 7 centimetres on a side) plus 18 inches (roughly the diagonal of the remaining 32.4 centimetres), since 18 inches is 45.72 centimetres.

Since these are all even numbers, we can halve them and note that 9 inches plus 5 centimetres is approximately the diagonal of a square 19.7 centimetres on a side.

Since those words were written, it occurred to me that there might be another irrational number that should instead be used to define the relationship between the centimetre and a modified inch.

If one decides that a computer keyboard should be tilted at an angle of ten degrees, then while the spacing of circuit traces for the different keys would be, horizontally, three-quarters of an inch, vertically they would be that distance divided by the cosine of ten degrees. So, if that could be made something reasonably easy to specify...

With an inch of 2.54 centimetres, the distance in question is 1.93438769564234419687... centimetres approximately. If, for example, we want that distance to instead be 1.9344 cm exactly, we would have to change the inch to 2.540016156569... centimetres.
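Both figures follow from a line or two of trigonometry; a sketch:

```python
import math

# Vertical trace spacing on a keyboard tilted ten degrees:
# 3/4 inch divided by the cosine of ten degrees.
spacing_cm = 0.75 * 2.54 / math.cos(math.radians(10))

# The modified inch that would make that spacing exactly 1.9344 cm.
modified_inch_cm = 1.9344 * math.cos(math.radians(10)) / 0.75
print(spacing_cm, modified_inch_cm)
```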

On one page on this site, I note that the Egyptians had a royal cubit of seven spans as opposed to the regular cubit of six spans. But I note at least one page that claims the original form of the royal cubit in Egypt was a measuring unit for measuring diagonals.

Of course, seven spans would approximate the diagonal of a square that was five
spans (rather than six spans, or one cubit) on a side. So I suppose one *could* imagine
a royal cubit of about 7.0710678 spans in length. (Indeed, I've recently run across a claim that the
ratio between the royal cubit and a Nippur cubit was the square root of 2.)

On the other hand, the diagonal of a square 18 inches on a side would be 25.455844 inches, and one twenty-fifth of that would be 1.0182337649 inches, so it would be larger than a standard inch by slightly more than the "pyramid inch" proposed by pyramidologists.

I have since encountered, in a book from 1885, a scientific definition of the length of an inch from first principles instead of from a standard yard that, being made of iron, had a distressing tendency to shrink over time.

The claim is that at sea level, and at the latitude of London, the length of a pendulum which oscillates once a second is exactly 39.13929 inches.

The standard acceleration of gravity used in textbooks, 9.80665 metres/(second^2) is for the latitude of Paris, however.

Using the GRS-67 equation, and the latitude of 51.4769 degrees N for the Royal Observatory of Greenwich, one gets 9.812006677... metres/(second^2), leading to a length of 0.99416413 metres for the seconds pendulum, and thus an inch of 2.54006684956 centimetres - which is close to the 2.54006833 from the Troughton Bar. So it is possible to imagine a length for the inch which is defined independently of the metre directly from first principles; thus, the yard might be defined as 1,509,498.08 wavelengths of the same spectral line of Krypton-86 as used to define the metre.
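This calculation can be sketched as follows, assuming the usual form of the GRS-67 international gravity formula (the constants below are that formula's standard values, not taken from the text):

```python
import math

# GRS-67 normal gravity at the latitude of Greenwich, then the length of a
# seconds pendulum (period two seconds, so L = g / pi^2), and the inch
# implied by 39.13929 such inches per pendulum.
lat = math.radians(51.4769)
g = 9.780318 * (1 + 0.0053024 * math.sin(lat) ** 2
                  - 0.0000059 * math.sin(2 * lat) ** 2)
pendulum_m = g / math.pi ** 2
inch_cm = pendulum_m * 100 / 39.13929
print(g, pendulum_m, inch_cm)
```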

Another unit of length which involves light is, of course, the light year.

From Wikipedia, I learned that the light year *does* have a formal definition;
it is the distance light travels in a *Julian* year of 365.25 days.

The Astronomical Unit was defined in 2012 as exactly 149,597,870.7 kilometres; until then, its value was periodically updated by observations. The Paris Conference of 1896, which established a unified system of astronomical constants, for example, adopted values which implied an Astronomical Unit of 149,504,000 kilometres; in 1950, the value of 149,530,000 kilometres was found by Eugene Rabe through a study of many observations, and then bouncing radio signals off of Venus and Mercury in the early 1960s led to more accurate values in the vicinity of 149,598,000 kilometres.

Unlike the light year, the definition of the parsec was not mysterious, as it
is derived directly from the length of the astronomical unit by a little trigonometry:
it is one half of an astronomical unit divided by the tangent of one half of a second of arc,
thus it is the distance at which a star shows a parallax of *one* full arcsecond when
observed using the entire diameter of the Earth's orbit as a baseline.

Thus, a light year is 9,460,730,472,580.8 kilometres, and a parsec is approximately 30,856,775,814,853.23354382251... kilometres, so there are about 3.261563777161... light years in a parsec.
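Both figures follow from the definitions given above; a sketch:

```python
import math

# The light year is exact: light speed times the seconds in a Julian year.
C_KM_PER_S = 299_792.458
ly_km = C_KM_PER_S * 86_400 * 365.25        # ~9.4607e12 km

# The parsec, per the definition above: half an AU divided by the tangent
# of half an arcsecond, with the AU exact since 2012.
AU_KM = 149_597_870.7
pc_km = (AU_KM / 2) / math.tan(math.radians(0.5 / 3600))
print(ly_km, pc_km / ly_km)                  # ~3.2615638 light years per parsec
```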

On an earlier page of this section, due to its importance in defining the Didot point, I discussed the length of the French foot, which was at one point standardized at about 32.4839385 centimetres. In Germany, many areas used their own definition of a foot, but one that was in widespread use was the Prussian foot, and apparently its length, 31.38536 centimetres, was established to a greater accuracy than that of the others.

The U.S. pound was redefined as 453.59237 grams in 1959, having previously been defined on the basis of 2.20462234 pounds equalling one kilogram exactly, leading to a pound of about 453.5924277 grams, nearly identical to the British pound, which was 453.59243 grams before also being redefined to the international standard avoirdupois pound of 453.59237 grams, which Britain adopted in 1963.

A pound, that is, the normal 453.59-gram pound
used to weigh food, is 7000 grains in weight. Thus, if you have peas to weigh, you use
this pound, which is called the *avoirdupois* pound. The troy ounce,
which is used to weigh gold, however, is 480 grains in weight, and
there are twelve troy ounces in a troy pound.

Thus, the relevant conversions are:

|                   | Grains | Grams        |
|-------------------|--------|--------------|
| Avoirdupois Pound | 7000   | 453.59237    |
| Troy Pound        | 5760   | 373.2417216  |
| Troy Ounce        | 480    | 31.1034768   |
| Avoirdupois Ounce | 437.5  | 28.349523125 |
| Grain             | 1      | 0.06479891   |
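Since 1959 the grain has been exactly 0.06479891 grams (453.59237 divided by 7000), so the whole table follows by exact arithmetic; a sketch:

```python
from fractions import Fraction

# The modern grain, as an exact fraction of a gram.
GRAIN_G = Fraction(45359237, 100000) / 7000

units_in_grains = {
    "Avoirdupois Pound": Fraction(7000),
    "Troy Pound": Fraction(5760),
    "Troy Ounce": Fraction(480),
    "Avoirdupois Ounce": Fraction(875, 2),   # 437.5 grains
    "Grain": Fraction(1),
}
grams = {name: float(g * GRAIN_G) for name, g in units_in_grains.items()}
for name, value in grams.items():
    print(f"{name}: {value} g")
```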

Incidentally, the modern avoirdupois pound is the result of a
reform of weights and measures brought in by Elizabeth I. Prior
to that reform, the avoirdupois pound was 7,200 grains in weight,
and was divided into *fifteen* ounces, these being the
same 480-grain ounce twelve of which made a Troy pound, and so
had the avoirdupois pound not been lightened, the relationship
between troy weight and common weight would have been less
confusing.

And so we can make this chart of ounces versus grams for some common chocolate bar sizes:

| Ounces  | Grams  |
|---------|--------|
| 1       | 28.35  |
| 1 1/2   | 42.52  |
| 1.58733 | 45     |
| 1 3/5   | 45.36  |
| 1 3/4   | 49.61  |
| 2       | 56.7   |
| 2 1/2   | 70.87  |
| 2.64555 | 75     |
| 3       | 85.05  |
| 3 1/2   | 99.22  |
| 3.5274  | 100    |
| 4       | 113.4  |

The atomic weight of gold is 196.9665, and its one stable isotope is Gold-197. An atomic mass unit is

1.6605655 * 10^-24

grams.

Thus, one gram of fine gold contains 3.0573954960689... * 10^21 atoms of gold.

Using the **old** value of 2.20462234 pounds to the kilogram, a kilogram would
equal 15432.35638 grains exactly; so one grain would be 6.479891824... * 10^-2 grams,
and thus one grain of fine gold would contain approximately 1.98115921 * 10^20 atoms of gold.

Assuming a ten dollar gold coin made up of 258 grains of metal, being 9/10 gold, 1/20 silver, and 1/20 copper, taking gold to be worth 16 times as much as silver by weight, and taking copper to be worth 20 1/8 cents per avoirdupois pound, what can we infer about the value of gold?

One avoirdupois pound is 7000 grains. So the copper in such a coin is worth 0.0370875 cents, leaving the rest of the coin at 999.9629125 cents to be accounted for.

The rest of the coin consists of 232.2 grains of gold and 12.9 grains of silver; the silver, at 1/16 the value of gold, is equivalent to an additional 0.80625 grains of gold.

Thus, 233.00625 grains of gold equal 999.9629125 cents; and consist of approximately 4.6162248 * 10^22 atoms of gold.

That leads to a dollar being worth the same as approximately 4.6163960 * 10^21 atoms of gold.

However, one wants the bank value of gold, not the coin value. Thus, a further adjustment needs to be made for seigniorage; the only figure I have available is the one that adds 1 1/2d of seigniorage to 3 pounds, 17s 9d. That works out to 933 pence, and so the factor desired would be 934.5/933, raising the "real" amount of gold in a dollar to approximately 4.6238178 * 10^21 atoms of gold.

Thus, one could, if one wished, define a dollar as the pecuniary value of some

4,623,817,850,000,000,000,000

atoms of Gold-197, or, as they say,

¹⁹⁷₇₉Au

contained in a
*good delivery* gold bar, which is a bar of gold that is at least 99.5% fine,
and which has a mass of approximately 400 troy ounces, ranging from 350 troy
ounces to 430 troy ounces (the actual fineness and the weight being marked on the
bar). 400 troy ounces is about 12,441.4 grams, and the range would be from a low of
10,886.22 grams to a high of 13,374.49 grams.

But it might be more sensible to round such a figure to a *whole* number of atoms.
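The chain of arithmetic in the last few paragraphs can be reproduced in a few lines; a sketch, using the Mendenhall ratio of 2.20462234 pounds per kilogram and the older atomic-mass-unit value quoted above:

```python
# Reproducing the gold-dollar arithmetic.
ATOMIC_WEIGHT_AU = 196.9665
AMU_GRAMS = 1.6605655e-24                    # the older value used in the text
atoms_per_gram = 1 / (ATOMIC_WEIGHT_AU * AMU_GRAMS)   # ~3.0574e21

grain_grams = (1000 / 2.20462234) / 7000     # old pound of ~453.5924277 g

coin_grains = 258
gold_grains = coin_grains * 9 / 10           # 232.2 grains of gold
silver_grains = copper_grains = coin_grains / 20      # 12.9 grains each
copper_cents = copper_grains / 7000 * 20.125          # copper at 20 1/8 c/lb
gold_equivalent = gold_grains + silver_grains / 16    # silver at 1/16 of gold
cents = 1000 - copper_cents                  # 999.9629125 cents to account for

atoms_per_dollar = gold_equivalent * grain_grams * atoms_per_gram / (cents / 100)
bank_value = atoms_per_dollar * 934.5 / 933  # seigniorage adjustment
print(bank_value)                            # ~4.6238e21 atoms per dollar
```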

Taking the pre-1933 U.S. dollar as the basis for a definition of the value of a dollar, of course, has the virtue of being something well-defined and close at hand. But if one is seeking out an ur-dollar, two other candidates come to my mind, at least at first.

Speaking of the kilogram, I have read news items noting that the international standard kilogram has experienced changes in its mass, and that an effort was underway to make a more accurate standard from a sphere of silicon.

The metre and the second can be defined in terms of spectral lines or microwave emissions from specific substances. Could the gram be physically defined?

Originally, the litre was defined as the volume of a kilogram of water. As well, a litre was
nominally 1,000 cubic centimetres of water. However, because water has a pronounced tendency to
dissolve, at least to a limited extent, almost any other substance, it is not a good material to
use for a physical standard. Thus, for a time, a litre, instead of being exactly 1,000 cubic centimetres
or one cubic decimetre, was instead officially defined as 1.000028 cubic decimetres, because this was, as
near as could be determined, the volume that a kilogram of water *actually* occupied; this anomaly was
corrected in 1964, and thus now one millilitre (ml) is exactly the same as a cubic centimetre
(often noted by the non-SI abbreviation cc instead of cm^{3}).

Thus, if it were desired to give the kilogram a reproducible physical definition based on water, it would have to be redefined so that the new kilogram was 0.999972 existing kilograms, and this would not be acceptable, being too large a change, and affecting the derived units of energy and force, and hence the various electrical units.
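The 0.999972 factor follows directly from the old litre anomaly; a quick sketch of the arithmetic, with names of my own choosing:

```python
# The pre-1964 litre was 1.000028 cubic decimetres, so a kilogram of
# water occupied 1.000028 dm^3. A kilogram redefined as the mass of
# exactly one cubic decimetre of water would therefore be 1/1.000028
# of the existing kilogram.
old_litre_dm3 = 1.000028
water_kg_factor = 1 / old_litre_dm3   # about 0.999972
```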

Liquid measure has always been one of the more confusing areas of measurement.

In the United States, a gallon is 231 cubic inches by definition.

In the British imperial system, a gallon was originally defined as the volume of ten pounds of water. Subsequently, the British gallon was defined in terms of metric units as 4.54609 litres; this was in 1985, after the litre had become exactly a cubic decimetre.

The Imperial gallon is larger than the U.S. gallon, but the Imperial fluid ounce is smaller than the U.S. fluid ounce, due to one difference between the two systems:

| U.S. Customary | British Imperial |
| --- | --- |
| 1 gallon = 4 quarts = 3.785411784 litres | 1 gallon = 4 quarts = 4.54609 litres |
| 1 quart = 2 pints = 946.352946 ml | 1 quart = 2 pints = 1.1365225 litres |
| 1 pint = 2 cups = 473.176473 ml | 1 pint = 2 1/2 cups = 568.26125 ml |
|   |   |
| 1 cup = 8 fluid ounces = 236.5882365 ml | 1 cup = 8 fluid ounces = 227.3045 ml |
| 1 fluid ounce = 8 drams = 29.5735295625 ml | 1 fluid ounce = 8 drams = 28.4130625 ml |
| 1 tablespoon = 4 drams = 14.78676478125 ml | 1 tablespoon = 4 drams = 14.20653125 ml |
| 1 teaspoon = 4 scruples = 4.92892159375 ml | 1 teaspoon = 4 scruples = 4.73551041667 ml |
| 1 dram = 3 scruples = 3.6966911953125 ml | 1 dram = 3 scruples = 3.5516328125 ml |
| 1 scruple = 1.2322303984375 ml | 1 scruple = 1.18387760416667 ml |

the space in the table dividing the part in which the U.S. units are smaller from the part in which the U.S. units are larger. Often, the tablespoon is now approximated by 15 ml in recipes, although I've read that in Australia, 20 ml is instead used as the approximation. Thus, one could speak of a metric scruple of 1.25 ml.
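Every entry in the table can be derived by subdivision from the two exact gallon definitions. A sketch, checking the fluid ounce and scruple at the two ends (variable names are mine):

```python
# Derive fluid-ounce and scruple values in the table above from the
# two exact gallon definitions: 231 cubic inches (U.S.) and 4.54609
# litres (Imperial). 1 inch = 2.54 cm exactly, so 1 in^3 = 16.387064 ml.
US_GALLON_ML  = 231 * 16.387064    # 3785.411784 ml
IMP_GALLON_ML = 4546.09

us_fl_oz  = US_GALLON_ML / 128     # 4 qt x 2 pt x 2 cups x 8 fl oz
imp_fl_oz = IMP_GALLON_ML / 160    # 4 qt x 2 pt x 20 fl oz
us_scruple  = us_fl_oz / 24        # 8 drams of 3 scruples each
imp_scruple = imp_fl_oz / 24
```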

Given that the *avoirdupois* pound is 453.59237 grams, this is the mass of
453.60507058636 ml of water, which is neither a U.S. Customary pint of 473.176473 ml nor a
British Imperial pint of 568.26125 ml, although it comes close to the former. As the *troy* pound,
based on the heavier troy ounce, is only *twelve* of those ounces, it will not help
to create a closer alignment between the pound and the pint.

However, as the British Imperial pint is 20 fluid ounces, rather than 16, and the weight of such a pint of water is about 25% greater than a pound, that means that the weight of a British Imperial fluid ounce of water is quite close to an avoirdupois ounce: the former is about 28.412267 grams, and the latter is 28.349523125 grams, making a fluid ounce of water only about 2/9 of 1% too heavy.
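This comparison can be checked directly; the sketch below uses the old litre anomaly (1 kg of water occupying 1.000028 dm³) as the density of water, and the names are my own:

```python
# Compare the weight of water filling an Imperial fluid ounce with
# the avoirdupois ounce, as in the paragraph above.
IMP_FL_OZ_ML = 4546.09 / 160    # 28.4130625 ml
AVDP_OZ_G    = 453.59237 / 16   # 28.349523125 g

water_weight_g = IMP_FL_OZ_ML / 1.000028    # about 28.412267 g
excess = water_weight_g / AVDP_OZ_G - 1     # about 0.22%, roughly 2/9 of 1%
```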

And it has just dawned on me that there is something else that closely approximates liquid measure.

Four inches is 101.6 millimetres, so sixty-four cubic inches is just slightly over a litre, and thus a good candidate for a definition of a quart!

This would lead to a system of liquid measure like this:

1 gallon = 256 cubic inches = 4,195.088384 ml
1 quart = 64 cubic inches = 1,048.772096 ml
1 pint = 32 cubic inches = 524.386048 ml
1 cup = 16 cubic inches = 262.193024 ml
1 fluid ounce = 2 cubic inches = 32.774128 ml

The quart in this system, although slightly larger than a litre, would be smaller than the British Imperial quart but larger than the U.S. Customary quart, like the litre.

The cup and the fluid ounce, though, would be larger than the corresponding units in both systems.
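The whole cubic-inch system above follows mechanically from the exact definition of the inch; a minimal sketch generating the table (the dictionary layout is my own):

```python
# Generate the 64-cubic-inch liquid measure above from the exact
# inch: 2.54 cm, hence 16.387064 ml per cubic inch.
ML_PER_CUBIC_INCH = 2.54 ** 3

cubic_inches = {"gallon": 256, "quart": 64, "pint": 32,
                "cup": 16, "fluid ounce": 2}
in_ml = {unit: n * ML_PER_CUBIC_INCH for unit, n in cubic_inches.items()}
```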
