
Uncertainty in Global Temperature Assessments

Assessments of global temperature anomalies and trends over time are influenced by a wide variety of considerations, including:

  • Sensor type and calibration
  • Sensor redundancy and accuracy
  • Sensor shielding and aspiration
  • Sensor height above ground
  • Ground cover
  • Site exposure
  • Proximity to water bodies
  • Orographic influences
  • Spatial coverage and representativeness
  • Number and timing of observations
  • Data completeness
  • Accuracy of site location information
  • Station relocation
  • Changes in any of the above influences over time
  • To adjust or not to adjust?

These issues directly affect the accuracy and representativeness of temperature measurements. Below I will briefly address each of these influences on global temperature measurement uncertainty while attempting to avoid turning this post into a book.

Sensor type and calibration
Mercury thermometers were long considered the reference standard for temperature measurements. However, in recent decades thermistors have gradually replaced liquid-in-glass thermometers in most weather and climate station networks. Linearity over typical measurement ranges and proper calibration are critical to minimizing potential bias. In practice this is usually a small source of uncertainty, ideally less than 0.5 degrees Celsius (C), but it can sometimes cause biases of 1C or more for extended periods. Any time an instrument is replaced there is a chance of a small change in sensor bias that can affect trends over time, and typically there is little or no documentation of such changes.

Sensor redundancy and accuracy
Ideally, multiple independent sensors should be collocated to better assess measurement precision and to improve data return if an individual sensor fails. However, most weather and climate stations have only one temperature sensor. Sensors should also be calibrated, or at least compared to a reference standard, periodically to ensure accuracy, but in practice quality control for temperature measurements varies widely, and this is yet another small source of uncertainty.
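To illustrate what collocated sensors buy you, here is a minimal sketch, using synthetic data, of estimating single-sensor precision from the differences between two collocated sensors (the 0.2C noise level is an assumption for illustration):

```python
import numpy as np

# Sketch: estimate single-sensor precision from two collocated sensors.
# With independent, equal-variance errors, the variance of the paired
# difference is twice the single-sensor error variance.
rng = np.random.default_rng(2)
truth = 20 + 5 * np.sin(np.linspace(0, 20, 1000))   # shared true temperature
s1 = truth + rng.normal(0, 0.2, truth.size)         # sensor noise sigma = 0.2 C
s2 = truth + rng.normal(0, 0.2, truth.size)

diff_std = np.std(s1 - s2)
single_sensor_sigma = diff_std / np.sqrt(2)
print(f"estimated per-sensor precision: {single_sensor_sigma:.3f} C")
```

With only one sensor, as at most stations, no such internal consistency check is possible.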

Sensor shielding and aspiration
Proper shielding is critical to avoid a high bias during periods of intense sunshine and a low bias during periods of rain. Likewise, adequate aspiration is important to ensure that the sensor samples ambient air rather than stagnant air inside the measurement housing, which can differ in temperature from the outside air if not properly ventilated. If unaddressed, these influences can cause high biases on the order of 1C to 3C during sunny conditions and similar low biases if the sensor gets wet from fog or rain.

Sensor height above ground
The recommended measurement height is 1.25 to 2 meters above ground, which most weather and climate stations meet. However, some stations are located on the tops of buildings, which can introduce a high bias at night during temperature-inversion conditions as compared to nearby stations at the ideal measurement height. This influence could cause a high bias of as much as 2C to 4C or more on many nights, depending on the height of the building and the frequency and intensity of ground-based temperature inversions.

Ground cover
Ground cover near and below the sensor location can greatly influence the measurements. Ideally the nearby ground should be covered in uniform low-growing vegetation, such as routinely mowed grass. But many sensors are sited over or near concrete, asphalt, buildings, bare soil, or rocks and these types of ground cover can introduce high biases during periods of intense sunshine as compared to the ideal ground surface. They can also add heat to the air at night after sunny days. These effects may add uncertainty on the order of 1C to 3C depending on the type, proximity, and extent of ground cover variations.

Site exposure
The overall site exposure, including the proximity of buildings, trees, and localized terrain, can influence the representativeness of measurements at the site. If wind flow is obstructed, wind-aspirated temperature measurements may more frequently be biased high during sunny conditions. Wind-flow obstruction may be less of a factor for properly shielded, motor-aspirated sensors, but it can still allow heat to build up in the air over less-than-ideal ground surfaces in the immediate area. Exposure also matters for representativeness: a site whose exposure is not typical of the larger surrounding area may not represent that area well.

Proximity to water bodies
For land measurements, nearby ponds, lakes, creeks, rivers, and oceans can exert considerable influence on temperature measurements, depending on the size and proximity of the water body. Most water bodies more than about a kilometer across can cause lake-breeze or sea-breeze effects during the day and land-breeze effects at night that affect temperatures at nearby sites. Because water temperature changes much more slowly than air temperature, and water stores heat far more effectively than air, a water body will influence nearby air temperatures in ways not seen at locations farther away. This influence is mainly a concern for the representativeness of the site measurement on a larger scale, and it may affect trends if a site is moved closer to or farther from the water.

Orographic influences
Terrain can drive large temperature effects through both elevation differences and slope orientation relative to the sun angle. In morning sunshine, east-facing slopes heat up faster than flat or west-facing slopes. Since warm air rises, this heating causes a local updraft, also called an up-slope wind. Later in the day the same effect occurs on west-facing slopes.

At night with clear skies, as air near the ground cools by radiating heat to space, the colder, denser surface air sinks to lower elevations wherever the terrain is not flat. The resulting down-slope or drainage flows cause cold air to pool at lower elevations and leave warmer air at higher locations. Thus stations in valleys tend to be much colder under clear-sky radiational cooling at night than hillside or hilltop locations, whereas during the day, or in cloudy, windy conditions, the temperatures may be very similar if all other considerations are equal. If the elevation difference is large enough, higher-elevation sites can be significantly colder during windy weather with a near-neutral atmospheric lapse rate. These terrain influences are mainly a concern for the representativeness of a site on a larger scale, and they can add considerable uncertainty to trends if a site is relocated from hilltop to valley or vice versa.
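As a rough rule of thumb for the elevation effect alone, here is a tiny sketch using the dry adiabatic lapse rate of about 9.8C per kilometer, which applies to a well-mixed, near-neutral atmosphere (the elevation differences shown are arbitrary examples):

```python
# Temperature difference implied by elevation alone in a well-mixed
# (near-neutral) atmosphere, using the ~9.8 C/km dry adiabatic lapse rate.
DRY_ADIABATIC_C_PER_KM = 9.8

for dz_m in (100, 500, 1000):  # example hilltop-to-valley elevation differences
    print(f"{dz_m:>5} m higher -> ~{DRY_ADIABATIC_C_PER_KM * dz_m / 1000:.1f} C colder")
```

Under stable nighttime drainage conditions the valley-versus-hilltop difference can be larger still, and of the opposite sign from what elevation alone would suggest.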

Spatial coverage and representativeness
One of the largest problems in estimating global temperature anomalies is the severe lack of spatial coverage over large areas of the globe, mainly over the oceans and in remote uninhabited areas such as deserts, jungles, and polar regions. My guess is that poor spatial representativeness likely accounts for the largest of all the various sources of uncertainty. The issue cuts two ways: some areas have no measurements at all, and the measurements that do exist must stand in for much larger surrounding areas.
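To see how coverage gaps feed into a global average, here is a toy sketch of an area-weighted global mean computed from a gridded anomaly field with and without missing cells (all values are synthetic assumptions, and the 40% missing fraction is arbitrary):

```python
import numpy as np

# Toy 5-degree grid of annual anomalies; all numbers are synthetic.
rng = np.random.default_rng(1)
lats = np.arange(-87.5, 90, 5)                       # 36 cell-center latitudes
anoms = rng.normal(0.5, 0.8, (lats.size, 72))        # 36 x 72 anomaly field
missing = rng.random(anoms.shape) < 0.4              # pretend 40% of cells unobserved

# Grid-cell area shrinks toward the poles, so weight each cell by cos(latitude).
weights = np.cos(np.radians(lats))[:, None] * np.ones_like(anoms)

full = np.average(anoms, weights=weights)
observed = np.average(anoms[~missing], weights=weights[~missing])
print(f"all cells: {full:+.3f} C   observed only: {observed:+.3f} C")
# The gap between the two is a crude coverage error; real gaps are
# systematic (poles, oceans), not random, which makes matters worse.
```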

Number and timing of observations
Early temperature measurements were spot observations made no more than a few times a day, without the min/max thermometers or continuous electronic data acquisition that came later. Much of the temperature record used for estimating global temperature anomalies consists of measurements made once per day with a min/max thermometer, with the resulting minimum and maximum averaged to estimate the daily average temperature. This method is subject to biases depending on the time of day the observation is made, the so-called “time of observation bias”. The largest biases occur when the observation time is near either the daily minimum temperature (low bias) or the daily maximum temperature (high bias). This influence is especially important for trends if the time of observation changes or when the measurement method changes from a min/max thermometer to continuous electronic thermistor measurement.
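To make the effect concrete, here is a minimal sketch using a synthetic hourly temperature series (the diurnal shape, weather variability, and observation hours are all assumptions for illustration) showing how the once-daily reset time of a min/max thermometer shifts the long-term mean:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
seasonal = 10 * np.sin(2 * np.pi * hours / (24 * 365))
diurnal = -5 * np.cos(2 * np.pi * ((hours % 24) - 3) / 24)  # min ~03:00, max ~15:00
synoptic = np.repeat(rng.normal(0, 3, 365), 24)             # day-to-day weather swings
temps = 15 + seasonal + diurnal + synoptic                  # synthetic hourly record

def minmax_mean(temps, obs_hour):
    """Mean of daily (min+max)/2 when a min/max thermometer is reset
    once per day at obs_hour; each 24-hour window ends at the reset."""
    daily = temps.reshape(-1, 24)
    means = []
    for d in range(1, len(daily)):
        window = np.concatenate([daily[d - 1, obs_hour:], daily[d, :obs_hour]])
        means.append((window.min() + window.max()) / 2)
    return float(np.mean(means))

true_mean = temps.mean()
for h in (7, 17, 24):  # morning, late-afternoon, and midnight observers
    print(f"obs at {h:02d}:00  bias = {minmax_mean(temps, h) - true_mean:+.2f} C")
# A late-afternoon reset tends to double-count warm maxima (high bias);
# a morning reset tends to double-count cold minima (low bias).
```

A station that switched its observation time from afternoon to morning would therefore show a spurious cooling step even if the climate did not change at all.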

Data completeness
Many historical temperature records are incomplete and methods for infilling missing data can introduce uncertainty. Greater amounts of infilled data result in greater uncertainty in the data and trends.

Accuracy of site location information
Accurate site location information is important for determining site characteristics when evaluating how well a site represents an area. It is also important for evaluating the impacts a station move may have on temperature trends. An error of only 0.1 degree of latitude corresponds to about 11 kilometers (roughly 7 miles), which can make a big difference in site characterization.
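The conversion behind that figure is sketched below (the 45N example latitude is an arbitrary choice):

```python
import math

EARTH_RADIUS_KM = 6371.0  # Earth's mean radius

def latlon_error_km(dlat_deg, dlon_deg, lat_deg):
    """Ground distance (km) implied by small latitude/longitude errors."""
    km_per_deg_lat = math.pi * EARTH_RADIUS_KM / 180              # ~111.2 km per degree
    km_per_deg_lon = km_per_deg_lat * math.cos(math.radians(lat_deg))
    return dlat_deg * km_per_deg_lat, dlon_deg * km_per_deg_lon

dlat_km, dlon_km = latlon_error_km(0.1, 0.1, 45.0)
print(f"0.1 deg latitude ~ {dlat_km:.1f} km; 0.1 deg longitude at 45N ~ {dlon_km:.1f} km")
# ~11.1 km (about 6.9 miles) of latitude; longitude error shrinks with cos(latitude)
```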

Station relocation
In the historical climate record there are many cases where measurement locations were moved, sometimes only a few meters and other times a kilometer or more. As mentioned previously, these moves can introduce a variety of uncertainties through changes in ground cover, site exposure, terrain, and nearby water influences. Attempts to correct for these changes may introduce further error if incorrect assumptions are made.

Changes in any of the above influences over time
Even stations that began with ideal siting may gradually become less ideal over time as nearby ground cover, vegetation, buildings, and the associated wind flow change. Sometimes the changes are sudden, as when a building is constructed close to the site or a nearby grassy field becomes a parking lot. Since many stations sit in urban and suburban environments, increasing urbanization may significantly affect long-term temperature trends. Sites that were once rural often become suburban or urban over time and are thus subject to a high bias in the temperature trend from a growing urban heat island.

To adjust or not to adjust?
Ideally, if temperature data are going to be adjusted, the adjustment should be carefully documented and justified on a case-by-case basis, including an explanation of what correction was applied to what period and why. However, this approach is tedious and time-consuming, and often there is not enough information to determine what adjustment, if any, should be made. Consequently, most organizations that estimate global temperature trends resort to complex automated algorithms that “homogenize” the data. This approach could actually add to the uncertainty rather than reduce it if incorrect assumptions are applied. My preference would be to adjust the data only when well-documented information justifies the adjustment on a case-by-case basis, and when the adjustment itself is well documented. Otherwise, the data should be left with all of their warts and blemishes, since most of the time we cannot be sure what is a blemish and what is real.
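For a feel of what automated homogenization does, here is a deliberately crude sketch using synthetic data (real algorithms, such as pairwise homogenization, are far more elaborate): scan a candidate-minus-neighbor difference series for the step change that best splits it.

```python
import numpy as np

def best_breakpoint(diff):
    """Crude changepoint scan on a candidate-minus-neighbor difference
    series: pick the split that maximizes the step between segment means,
    a toy stand-in for the pairwise tests real homogenization uses."""
    best = (0, 0.0)
    for k in range(12, len(diff) - 12):          # require 12 points per side
        step = diff[k:].mean() - diff[:k].mean()
        if abs(step) > abs(best[1]):
            best = (k, step)
    return best

rng = np.random.default_rng(3)
diff = rng.normal(0, 0.3, 120)   # 120 months of candidate-minus-neighbor differences
diff[60:] += 0.8                 # simulated station move adds a 0.8 C step
k, step = best_breakpoint(diff)
print(f"break at index {k}, estimated step {step:+.2f} C")
# If the 0.8 C step were real local climate rather than a station change,
# "correcting" it would erase a genuine signal -- the risk noted above.
```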

Ocean temperatures
About 70% of the earth’s surface is covered by oceans, and actual air temperature measurements there are relatively sparse, especially before the buoy and satellite era. Prior to the deployment of weather and ocean-measurement buoys and satellites, all ocean measurements, including air temperature and water temperature, came from ships. These measurements are subject to many of the same problems as those from land stations.

Most of the ships were moving, so there are no long-term measurement records from fixed locations. Furthermore, most ships follow shipping lanes, so large areas away from those lanes have little or no coverage, especially in the southern hemisphere. A few weather ships were deployed to fixed locations from the 1940s into the 1970s, and since the 1970s they have been replaced by more numerous fixed weather buoys. By far the greatest number of buoys at present are drifting buoys. However, the number of both fixed and drifting buoys is much smaller than the number of land stations, despite the much larger surface area. Thus the spatial and temporal coverage of air temperature measurements over the oceans remains poor even today. Consequently, many assessments of global surface air temperature use ocean surface water temperature as a proxy for surface air temperature, which adds another degree of uncertainty over a very large portion of the globe.

Since the late 1970s, satellites have been measuring ocean surface water temperatures, but even these measurements are only possible when skies are clear, and they are subject to uncertainties related to the clarity of the air column as well as instrument calibration drift. Satellite observations have greatly improved the spatial coverage of water temperatures, but gaps in temporal coverage, caused mainly by cloud cover, still add some uncertainty.

Considering all of the problems noted above, estimating surface air temperatures over the oceans may be the largest source of uncertainty in estimates of global surface temperature, especially before the buoy/satellite era that began in the late 1970s.

Estimated uncertainties in global temperature assessments
The HadCRUT4 data set, provided by the Climatic Research Unit at the University of East Anglia in conjunction with the Hadley Centre at the United Kingdom Meteorological Office, includes estimates of uncertainty with its data. The graph of annual global temperature anomalies in Figure 1 includes the estimated uncertainty ranges provided with the HadCRUT4 data set.

Figure 1. Estimated annual global temperature anomalies relative to a 1961-1990 baseline in red, with upper and lower 95% confidence interval bounds shown in blue, as provided by the UK Hadley Centre.

According to information provided with the HadCRUT4 data set, the trend in Figure 1 represents the “medians of regional time series computed for each of the 100 ensemble member realizations” and the “uncertainties are computed by integrating across the distribution described by the 100 ensemble members, together with additional measurement and sampling error and coverage uncertainty information”. The total uncertainty provided for the estimated 2014 global temperature anomaly of 0.56C is plus or minus 0.09C.

These uncertainties seem subjectively much too low to me, considering all of the various types of uncertainty previously described. So I took the HadCRUT4 estimated annual uncertainties and multiplied them by a factor of three to produce the graph in Figure 2. I believe this results in a conservative estimate of the full uncertainty in the data; the true factor could be as much as five rather than the three I selected for the graph.
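For anyone who wants to reproduce this kind of plot, here is a minimal sketch. The file name hadcrut4_annual.txt and its column layout are placeholders; the actual HadCRUT4 download has a different format, so adjust the column indices to match.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed columns: year, median anomaly, lower 95% bound, upper 95% bound.
year, med, lo, hi = np.loadtxt("hadcrut4_annual.txt",
                               usecols=(0, 1, 2, 3), unpack=True)

factor = 3.0                    # subjective inflation factor used for Figure 2
half = (hi - lo) / 2            # published half-width of the 95% interval
lo3, hi3 = med - factor * half, med + factor * half

plt.plot(year, med, "r", label="median anomaly")
plt.plot(year, lo3, "b", label="95% bounds x 3")
plt.plot(year, hi3, "b")
plt.ylabel("anomaly (C) vs 1961-1990")
plt.legend()
plt.show()
```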

Figure 2. Estimated annual global temperature anomalies relative to a 1961-1990 baseline in red, with upper and lower 95% confidence interval bounds shown in blue but multiplied by three from what was provided by the UK Hadley Centre.

I do not believe these estimates of global temperature anomalies have enough accuracy to clearly and confidently resolve trends in this data set over the temperature anomaly ranges that have occurred so far. At best, they provide only a somewhat uncertain hint at possible trends over periods of decades to a century or more.  Most of the year-to-year variation is easily in the noise range.

Update 2016 January 26

Global temperature is a concept and not something we can measure directly.  The standard surface temperature measurement height in the US is 2 meters above the ground. In England it is 1.25 meters above the ground. So, what is the surface temperature anyway? Is it the surface air temperature 1.25 to 2 meters above:

  1. The ground in the shade of my backyard?
  2. My concrete patio?
  3. The roof of my house?
  4. A suburban city street?
  5. The roof of a skyscraper?
  6. The street in a downtown skyscraper street canyon?
  7. A paved parking lot?
  8. The top of 2 meters of snow on the ground?
  9. A glacier?
  10. Desert sand?
  11. The ground under the canopy of a jungle or forest?
  12. An open field of grass?
  13. The tops of the highest mountain peaks?
  14. A pond?
  15. A river?
  16. A lake?
  17. An ocean?
  18. The troughs between waves in an ocean storm with 20 meter waves?
  19. The top of each wave in an ocean storm with 20 meter waves?

This list could go on and on.

In reality, when looking at the earth from straight above, all of these locations are at the “surface”, and together locations around the globe combine to represent the “global surface temperature”. The surface of the earth is about 510 million square kilometers, so we would need 510 million temperature sensors evenly spaced around the globe just to have one measurement per square kilometer. Even then, there are many areas with complex terrain and/or highly variable surface features, including the middle of a large city, where a single measurement is unlikely to represent the average temperature over a square kilometer very well. This situation is part of why I believe that our best estimates of global temperature and global temperature anomalies are woefully inadequate and come with a large uncertainty.
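The arithmetic behind that 510 million figure, for reference:

```python
import math

R_KM = 6371.0                             # Earth's mean radius in km
surface_km2 = 4 * math.pi * R_KM ** 2     # surface area of a sphere
print(f"surface area: {surface_km2 / 1e6:.0f} million km^2")   # ~510 million
print(f"sensors at 1 per km^2: {surface_km2:,.0f}")
```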
