CCNet CLIMATE SCARES & CLIMATE CHANGE, 22 May 2002
--------------------------------------------------
[The sources from which the blurbs below are taken are not unbiased and the reader
should, as with any other source of information, consider the influence of vested interest. bobk]
 
"Splashy science news reports draw eyeballs and move policy, but
sometimes the scientific heart of the news comes up short. Worse, it
can be dead wrong. So what happens in the news when a study is found to be
faulty or false and ends up being retracted or thrown out? Not much,
usually. Science news revolves around news -- new studies, discoveries and
achievements. The discovery that previous research has been disproven or
shown to be worth less than the paper it was printed on just does not
register as news to most journalists, no matter how said research was
originally hyped to the public."
--Howard Fienberg, Tech Central Station, 20 May 2002


(1) HOLOCENE CLIMATE RECORDS
    CO2 Science Magazine, 22 May 2002

(2) LIMITATIONS OF CLIMATE MODELS AS PREDICTORS OF CLIMATE CHANGE
    NATIONAL CENTER FOR POLICY ANALYSIS, 16 May 2002

(3) DEMISE OF CARBON SEQUESTRATION "FREE RIDE" GREATLY EXAGGERATED
    CO2 Science Magazine, 22 May 2002

(4) IT'LL COST YA': HOW EUROPE WILL SUFFER
    Tech Central Station, 21 May 2002

(5) SOLAR DELUSIONS
    Tech Central Station, 21 May 2002

(6) BAD SCIENCE NEVER DIES
    Tech Central Station, 20 May 2002


==============
(1) HOLOCENE CLIMATE RECORDS

From CO2 Science Magazine, 22 May 2002
http://www.co2science.org/subject/h/summaries/holoceneasia.htm

To assess the significance of the global warming of the past century or so,
i.e., to determine whether or not it is man-induced, it is necessary to see
how the warming of this period compares with that of earlier periods of
indisputable natural warming.  Within this context, we here review some
recent studies of climate reconstructions of the current interglacial for
different parts of Asia.

Naurzbaev and Vaganov (2000) developed a continuous near-surface air
temperature record from tree-ring data obtained from 118 trees growing near
the timberline in Siberia that covers the period 212 BC to AD 1996, as well
as a 700-year record for the period 3300 to 2600 BC.  Because the
temperature fluctuations they derived agreed well with air temperature
variations reconstructed from Greenland ice-core data, they concluded that
"the tree ring chronology of [the Siberian] region can be used to analyze
both regional peculiarities and global temperature variations in the
Northern Hemisphere."  So what did they find?

The scientists discovered a number of several-hundred-year warm and cool
periods, including the Medieval Warm Period (AD 850 to 1150), the Little Ice
Age (AD 1200 through 1800), and the current Modern Warm Period. In regard to
the warming between the latter two of these periods, Naurzbaev and Vaganov
say it is "not extraordinary" and that "the warming at the border of the
first and second millennia [i.e., AD 1000] was longer in time and similar in
amplitude."  They also note that temperatures of the mid-Holocene were
warmer yet, averaging about 3.3°C higher than those of the past two
millennia.

In another tree-ring study - this one of the Pakistan/Afghanistan border
region near China and India - Esper et al. (2002) employed more than 200,000
ring-width measurements from 384 trees obtained from 20 individual sites
ranging from the lower to upper timberline to reconstruct the climatic
history of Western Central Asia since AD 618. Their work revealed that early
in the seventh century, the Medieval Warm Period was already firmly
established. Between AD 900 and 1000, tree growth was exceptionally rapid,
at rates that they say "cannot be observed during any other period of the
last millennium."  Between AD 1000 and 1200, however, growing conditions
deteriorated; and at about AD 1500, minimum tree ring-widths were reached
that persisted well into the seventeenth century. Towards the end of the
twentieth century, ring-widths increased once again; but the scientists
report that "the twentieth-century trend does not approach the AD 1000
maximum." In fact, the Medieval Warm Period was far more conducive to good
tree growth than the Modern Warm Period. Summing up their work, Esper et al.
say that "the warmest decades since AD 618 appear between AD 800 and 1000."

Zhuo et al. (1998) reviewed what is known about the mid-Holocene period in
China, noting that temperatures during the Climatic Optimum in that part of
the world were also warmer than they are currently, by anywhere from 2-6°C.
They additionally reported that many glaciers across the country retreated
during this period, and that some in eastern China disappeared altogether.
Also, the warmer temperatures of the mid-Holocene resulted in a retreat of
the southern permafrost limit to 100 km north of its current position.

Yafeng et al. (1999) analyzed a 2000-year high-resolution δ18O record
obtained from the Guliya ice cap located in the Qinghai-Tibet Plateau of
China.  Their data clearly depicted the Dark Ages Cold Period of the middle
of the first millennium AD, the warmth of the subsequent Medieval Warm
Period, and the following "well-defined 'Little Ice Age'," which in that
part of the world appeared to last until 1930.  Perhaps the most striking of
their findings, however, was the occurrence of over 30 abrupt climatic
shifts on the order of 3°C that took place over but two or three decades.

The Holocene in Asia, as depicted by these several records, was a period of
millennial-scale climatic oscillations, the warm and cool nodes of which are
typified by the Medieval Warm Period and Little Ice Age. Another distinctive
feature of the Holocene was its peak mid-period warmth, when temperatures
were considerably higher than they are currently, but when atmospheric CO2
concentrations were much lower. Also evident in the Holocene record of Asia
are many rapid shifts in temperature, which - on the Guliya ice cap, at
least - were even more dramatic than the "unprecedented" warming that is
claimed by the IPCC to have occurred during the last decades of the 20th
century.

In view of these real-world observations, there appears to be nothing
unusual about the planet's current climatic state or its recent climate
dynamics, particularly in Asia. In fact, the data of Esper et al. suggest
that the Modern Warm Period still has a long way to go before it can be
said to be equivalent to the Medieval Warm Period. Hence, there would appear
to be little reason to suggest that the hand of man is evident in the global
warming of the past century or so.  Indeed, we would say there is no reason
to make such an inference.

References
Esper, J., Schweingruber, F.H. and Winiger, M.  2002.  1300 years of
climatic history for Western Central Asia inferred from tree-rings.  The
Holocene 12: 267-277.

Naurzbaev, M.M. and Vaganov, E.A.  2000.  Variation of early summer and
annual temperature in east Taymir and Putoran (Siberia) over the last two
millennia inferred from tree rings.  Journal of Geophysical Research 105:
7317-7326.

Yafeng, S., Tandong, Y. and Bao, Y.  1999.  Decadal climatic variations
recorded in Guliya ice core and comparison with the historical documentary
data from East China during the last 2000 years.  Science in China Series D
- Earth Sciences 42 Supp.: 91-100.

Zhuo, Z., Baoyin, Y. and Petit-Maire, N.  1998.  Paleoenvironments in China
during the Last Glacial Maximum and the Holocene Optimum.  Episodes 21:
152-158.

Copyright 2002.  Center for the Study of Carbon Dioxide and Global Change


============
(2) LIMITATIONS OF CLIMATE MODELS AS PREDICTORS OF CLIMATE CHANGE

From NATIONAL CENTER FOR POLICY ANALYSIS, 16 May 2002
http://www.ncpa.org/pub/ba/ba396/

by David R. Legates

World leaders are making critical decisions based upon predictions of
General Circulation Models or Global Climate Models (GCMs) that humans are
causing global climate change or global warming. Global climate models
attempt to describe the earth's climate and are used in a variety of
applications. These include the investigation of the possible causes of
climate change and the simulation of past and future climates. But these
models are limited in important ways, including:

an incomplete understanding of the climate system,
an imperfect ability to transform our knowledge into accurate mathematical equations,
the limited power of computers,
the models' inability to reproduce important atmospheric phenomena, and
inaccurate representations of the complex natural interconnections.

These weaknesses combine to make GCM-based predictions too uncertain to be
used as the bases for public policy responses related to future climate
changes.


The Limits of Human Knowledge.
 
The world's best scientists have only an incomplete understanding of how the
various atmospheric, land surface, oceanic and ice components interact. Even
if their understanding of the climate system were perfect, scientists would
still face challenges. Consider that while scientists do have a general idea
of the complex interrelationships of the atmosphere and the oceans,
expressing this knowledge mathematically is very difficult.

The Limits of Computing Power.
 
GCMs are limited in important ways. Global climate is produced through a
variety of processes and interactions that operate on a wide range of
scales, including molecular, regional, continental and global. Changes in
climate occur from physical interactions that take place on any or all of
these scales. The changes, and the resulting weather patterns, can occur
nearly instantaneously or they can take decades or millennia to develop.
Unfortunately, the computers and programs that run the GCMs are limited to
gross representations of the geographic, geologic and atmospheric details
that they use to run climate simulations. Thus, many small-scale features,
such as a temporary but significant shift in the prevailing winds or
unusually dry surface conditions due to increased evaporation from forest
fires and high winds, cannot be represented, even though they may
significantly impact the local, regional, or even global climate.

Indeed, GCMs can at best represent only a thumbnail sketch of the real
world, with spatial resolutions no finer than regional areas a thousand
miles square. Many topographical, geological, atmospheric and biological
variations can occur within any contiguous thousand square miles. For
instance, GCMs might average rainfall amounts and wind velocity over large
diverse land surfaces which could include arid mountain plateaus, lowland
deserts and temperate coastal rainforests. But, even modest topographic
changes - for instance, a new housing development that paves over farmland
and drains a wetland area - could render a model of land-surface
interactions inaccurate.

Resulting Model Breakdowns.
 
Given the limitations noted, GCMs simply cannot reliably reproduce climate
systems. Commonplace events like precipitation and the passage of typical
weather fronts are difficult enough to depict; truly complex phenomena, such
as hurricanes, thunderstorms, and tornadoes, may be represented so poorly
that they simply cannot be relied upon. El Niño, La Niña and the Pacific
Decadal Oscillation are examples of complex climate patterns that are
inadequately reproduced or completely absent in GCMs.

In addition, global average temperature is measured by three different
instruments - ground-based thermometers, weather balloons and global
satellite observations - with each system covering a slightly different
range of the earth's atmosphere. The data they provide is conflicting.
Whereas both the global satellite network and weather balloon observations
show a modest cooling trend during the past 25 years, the ground-based
thermometers show a modest warming of approximately 0.13 degrees Celsius per
decade.

The GCMs display two flaws related to measured global temperatures. First,
they show global temperatures rising across all levels of the atmosphere, a
finding not reflected in reality. [See figure.] Second, the lowest warming
predicted by the GCMs is nearly three times the temperature rise
measured by ground-based thermometers. Thus, the GCMs
do not reflect the temperature differences or the direction of temperature
change within various levels of the atmosphere, nor do they show the actual
amount of temperature change.

Finally, GCMs ignore the interconnected nature of climate processes and how
an inaccurate simulation of one introduces errors into every other related
process. A simple model for precipitation involves scores of variables. But
a single error, say in representing atmospheric moisture or deciding what
mechanism is causing precipitation, will make the simulation "wrong." For
example, precipitation requires moisture in the atmosphere and a mechanism
to force it to condense (i.e., by forcing the air to rise over mountains, by
surface heating, as a result of weather fronts or by cyclonic rotation). Any
errors in representing either the atmospheric moisture content or the
precipitation-causing mechanisms will produce an erroneous simulation. Thus,
GCM simulations of precipitation will be affected by limitations in the
representation and simulation of topography.

Inaccuracies in simulating precipitation will, in turn, adversely affect the
simulation of virtually every other climate variable. Condensation releases
heat to the atmosphere and forms clouds, which reflect energy from the sun
and trap heat from the earth's surface - and both sources of heat affect air
temperature. This in turn affects winds, atmospheric pressure and
atmospheric circulation. Since winds drive the upper currents of the ocean,
the simulation of ocean circulation also is adversely affected.
Additionally, inadequate simulations of precipitation lead to inaccurate
assessments of soil moisture. Since vegetation also responds to
precipitation, the entire representation of the biosphere becomes open to
question. This is not to say that climate scientists lack skill or
dedication; it is to reiterate the extraordinary difficulty of producing
accurate climate models.

More than just long-term average and seasonal variations go into estimating
the extent of climate change. Climate change is likely to manifest itself in
small regional fluctuations. Moreover, year-to-year variability is
important. Much of the character of the earth's climate is in how it varies
over time. GCMs that simulate essentially the same conditions year after
year, as virtually all climate models do, miss an important aspect of the
earth's climate. Thus GCMs' predictive powers must be evaluated in light of
each model's ability to represent the global climate's holistic and variable
nature.

Although GCMs are not weather prediction models, climate is nevertheless an
ensemble of weather events. The utility of a climate model is not in
predicting whether it will rain in northern Florida on a certain afternoon.
What is of interest is to determine the long-term probability that future
precipitation will be significantly different - in frequency and/or
intensity - from what it is today. Will the winter of 2048 be warmer or
colder, wetter or drier than present conditions, and if so, by how much? If
climate models cannot simulate processes known to drive daily weather
patterns, to what degree can GCMs' climate predictions be believed?

Conclusion.
 
Climate is to some degree a representation of the average of weather events
that occur. If the frequency and locations of weather events are simulated
inaccurately or not at all, the reliability of climate change
prognostications is undermined. While GCMs cannot be expected to simulate
future weather, they should be able to accurately depict the earth's present
climate and vitality. Since they cannot, GCM predictions of climate change
are statistical exercises with little bearing on reality.
 
David R. Legates is Director of the Center for Climatic Research at the
University of Delaware, Newark, and an adjunct scholar with the NCPA.

===========
(3) DEMISE OF CARBON SEQUESTRATION "FREE RIDE" GREATLY EXAGGERATED

From CO2 Science Magazine, 22 May 2002
http://www.co2science.org/edit/v5_edit/v5n21edit.htm

A Duke University press release dated 15 May 2002 heralds the "end of [the]
'free ride' on ecosystem CO2 absorption."  The free ride to which the
document refers is the historical and still-ongoing increase in the capacity
of the world's soils to store carbon as a consequence of the historical and
still-ongoing increase in the growth of the planet's vegetation that has
been driven by the historical and still-ongoing increase in the air's CO2
content that has been associated with the demise of the last great ice age
and the inception and progression of the Industrial Revolution.

Stimulated by the gradually intensifying aerial fertilization effect of this
rise in the atmosphere's CO2 concentration, earth's plants extracted ever
increasing amounts of the CO2 that went into the air during this long period
of time and sequestered its carbon in their tissues and the soils in which
they grew.  Consequently, were it not for this increasingly voracious
appetite of the globe's plants for carbon dioxide - the more CO2 there is in
the air, the more of it they absorb - the air's CO2 content would have risen
much higher than it actually has, and it currently would be rising nearly
twice as fast as it actually is.

The Duke University press release appears to readily accept all of this past
good news.  With respect to the future, however, it has nothing encouraging
to say.  In fact, it states that the CO2-enhanced sequestration of carbon
will shortly come to an end, and that we will therefore soon be looking at
yearly atmospheric CO2 increases "that are double what they are now."

The basis for this pessimistic and, we believe, erroneous contention is the
recently published paper of Gill et al. (2002), wherein the authors describe
their multi-year study of carbon sequestration in the soils of several
portions of a Texas grassland ecosystem that were maintained under a variety
of atmospheric CO2 concentrations, ranging from just over 220 to just over
560 μmol CO2 per mol air (μmol mol-1).  The Duke University press release
refers to this experiment as a "precise ecosystem study," and in nearly all
respects, that description is correct.  In the case of the most crucial
measurement of all (soil organic carbon content), however, the word precise
is totally inappropriate, if not outright wrong; for it is notoriously
difficult to accurately measure the relatively small changes in soil organic
carbon content that occur over periods of a few short years, which in this
specific instance numbered but three.

Consider, for example, the data of Figure 1, which we have carefully
extracted from a figure in the Gill et al. paper.  The scatter in the data
is tremendous, especially in the mid-CO2-range, where the difference between
the high and low extremes of the three-year soil organic carbon content
change is considerably greater than the difference between what would be
calculated from the beginning and end points of any reasonable trend line
that might be fit to the data.  So what did Gill et al. do to conclude -
from these highly imprecise data - that the ability of earth's ecosystems to
continue to enhance their capacity to sequester carbon in response to future
increases in the air's CO2 content would soon be coming to an end?

Figure 1. [ http://www.co2science.org/edit/v5_edit/v5n21edit.htm ] The entire
suite of 1997-2000 changes in the organic carbon content of the top 15 cm of
soil plotted against atmospheric CO2 concentration. From Gill et al. (2002).

The group's fatal misstep was trying to coax more out of their data than
could realistically be delivered.  Accepting all of their widely-dispersed
data points as equally valid, Gill et al. fit a second-order polynomial to
them, as shown in Figure 2.  This functional representation - which rises
rapidly from the lowest CO2 concentration employed in the study to a
concentration of approximately 400 μmol mol-1, but then levels out - serves
as the basis for their contention that the CO2-induced stimulation of carbon
sequestration that is evident over the first half of the CO2 concentration
range essentially disappears above an atmospheric CO2 concentration of 400
μmol mol-1.  That's what their representation of the data implies, alright.
But is this representation correct?
 
Figure 2. http://www.co2science.org/edit/v5_edit/v5n21edit.htm The entire
suite of 1997-2000 changes in the organic carbon content of the top 15 cm of
soil plotted against atmospheric CO2 concentration, together with the trend
line fit to the data by Gill et al.

In broaching this question, we begin as Gill et al. did in another place in
their paper and divide the data of Figure 2 into two groups: a
sub-ambient-CO2 group (comprised of the 10 data points with the lowest
atmospheric CO2 concentrations) and a super-ambient-CO2 group (comprised of
the 10 points with the highest atmospheric CO2 concentrations).  But instead
of calculating the mean three-year change in the soil organic carbon content
of each of these groups, as they did, we derive best-fit linear regressions
for each group, as shown in Figure 3.

Figure 3. http://www.co2science.org/edit/v5_edit/v5n21edit.htm The
sub-ambient-CO2 and super-ambient-CO2 changes in the organic carbon content
of the top 15 cm of soil plotted against atmospheric CO2 concentration,
together with the linear regression lines we have fit to the two groups of
data.
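The two-group regression procedure described above can be sketched in a few lines of code. Note that the numbers below are invented placeholders chosen only to illustrate the method; they are not the actual Gill et al. measurements, which would have to be read from the published figure.

```python
import numpy as np

# Hypothetical stand-in data: atmospheric CO2 concentration (umol per mol)
# and the three-year change in soil organic carbon (arbitrary units).
co2 = np.array([230, 260, 280, 300, 320, 340, 355, 365, 380, 395,   # sub-ambient
                410, 425, 445, 460, 480, 500, 515, 530, 545, 560])  # super-ambient
d_soc = np.array([0.2, 0.5, 0.1, 0.6, 0.3, 0.4, 0.2, 0.5, 0.3, 0.4,
                  0.9, 1.0, 0.8, 1.1, 0.9, 1.2, 1.0, 1.3, 1.1, 1.4])

# Split at roughly the ambient concentration (~400 umol per mol) and fit
# each group separately with an ordinary least-squares straight line.
sub = co2 < 400
slope_sub, intercept_sub = np.polyfit(co2[sub], d_soc[sub], 1)
slope_sup, intercept_sup = np.polyfit(co2[~sub], d_soc[~sub], 1)

print(f"sub-ambient slope:   {slope_sub:+.5f} per umol/mol")
print(f"super-ambient slope: {slope_sup:+.5f} per umol/mol")
```

With scattered data like these, the sub-ambient slope can land near zero even when the pooled data suggest a strong low-end response, which is the crux of the critique above.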

In viewing these results, it can be seen that the sub-ambient-CO2 data - by
themselves - exhibit absolutely no trend in carbon sequestration with
increasing atmospheric CO2 content (because, of course, of their high
imprecision); yet this is the CO2 range over which Gill et al. claim the
strongest - actually, the only - positive carbon sequestration response to
atmospheric CO2 enrichment.  In point of fact, however, the only way they
can support this contention is via the help they receive from their
super-ambient-CO2 data; and it is the uncritical way in which they used
these data that led them to draw their unwarranted conclusion about rising
CO2 concentrations having little effect upon soil carbon sequestration above
a concentration of 400 μmol mol-1, as we will now demonstrate.

First of all, it is important to note the great discontinuity that exists
between the two data sets of Figure 3, i.e., the great jump in the change in
soil organic carbon content that occurs where the relationship describing
the first data set ends and the relationship describing the second data set
begins.  One way to resolve this problem - and, hopefully, make the
discontinuity disappear - would be to acquire many more data points across
the entire range of atmospheric CO2 concentration that was investigated.
Unfortunately, the data displayed are all the data that exist.  Hence, a
different approach must be employed; and that approach is to use less data,
specifically, to delete from the highly-variable population of imprecise
data points a small number of the most aberrant points.

So, which are "the most aberrant points"?  Logic would suggest they are the
ones that make the biggest contribution to the patently unreal discontinuity
between the two groups of data.  Can you guess which ones they are?  That's
right.  They are the first two (lowest-CO2-value) data points of the
super-ambient-CO2 group.  Removing those two data points does more to make
the two groups of data compatible with each other than does the removal of
any other two data points of either group.

When these highly aberrant data points are thus deleted, as shown in Figure
4, the data that remain are best described by a simple straight line; and
that straight line implies that there is no detectable change in the
CO2-induced stimulation of soil carbon sequestration over the entire range
of atmospheric CO2 concentration investigated by Gill et al.  And that is
the only conclusion that can validly be drawn from their data.

Figure 4. http://www.co2science.org/edit/v5_edit/v5n21edit.htm The changes
in the organic carbon content of the top 15 cm of soil plotted against
atmospheric CO2 concentration, with the two most aberrant of the imprecise
data points removed, together with the linear regression we have fit to the
data.

To emphasize this point, we have also fit a second-order polynomial (the
functional form preferred by Gill et al.) to the data of Figure 4, obtaining
the result depicted in Figure 5.  Interestingly, although the result is
indeed a curve, one really has to squint to see it.  In fact, the curvature
of the line is so slight that its graphical representation in Figure 5 is
virtually indistinguishable from the straight line of Figure 4; and it
possesses essentially the same R2 value.  Hence, and once again, it is
abundantly clear there is no defensible basis for claiming anything about
the response of soil carbon sequestration in this Texas grassland ecosystem
to the range of atmospheric CO2 enrichment employed in this study beyond
what is suggested by the results of Figures 4 and 5, i.e., that the response
is linear.

Figure 5. http://www.co2science.org/edit/v5_edit/v5n21edit.htm The changes
in the organic carbon content of the top 15 cm of soil plotted against
atmospheric CO2 concentration, with the two most aberrant of the imprecise
data points removed, together with the second-order polynomial we have fit
to the data.
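The linear-versus-quadratic comparison is straightforward to reproduce in principle: fit both forms to the same points and compare their R² values. The data below are invented, near-linear placeholders, not the trimmed 18-point Gill et al. set, but they show how a quadratic fit to essentially linear data yields almost no gain in R².

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination for a set of fitted values."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented near-linear data standing in for the trimmed data set.
rng = np.random.default_rng(0)
x = np.linspace(220, 560, 18)                    # CO2 concentrations
y = 0.003 * x + rng.normal(0.0, 0.05, x.size)    # linear trend plus noise

# Fit a straight line and a second-order polynomial to the same data.
lin = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)

print(f"linear    R^2 = {r_squared(y, lin):.4f}")
print(f"quadratic R^2 = {r_squared(y, quad):.4f}")
```

Because the quadratic nests the straight line as a special case, its R² can never be lower on the fitted data; the question is whether the improvement justifies the extra curvature, and for near-linear data it is negligible.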

The end result of this more realistic and critical treatment of the data -
wherein we identify and discard the two most aberrant data points (10% of
the original 20) - contradicts Gill et al.'s conclusion that the
sequestration of carbon in soils is more responsive to atmospheric CO2
increases at the low end of the CO2 concentration range they investigated
than at the high end of that range.  In light of the gradual weakening of
the aerial fertilization effect of atmospheric CO2 enrichment that is often
observed in experiments as the air's CO2 content is sequentially elevated,
however, one might logically have expected to see something along the lines
of what Gill et al.  wrongly concluded, although of a much more muted
nature.  Nevertheless, such was not observed.  Why?

Part of the answer to this question undoubtedly resides in the nature of the
photosynthetic responses of the grassland plants to atmospheric CO2
enrichment.  It is interesting to note, for example, that the CO2-induced
stimulation of the maximum photosynthetic rates of the plants did not taper
off, as might have been expected, as the air's CO2 content rose, even at the
highest CO2 concentration investigated.  Rather, the photosynthetic response
was linear, just like that of the change in soil organic carbon content.
What is even more interesting - even fascinating - in this regard, is that
the linear responses were maintained in the face of what Gill et al. say was
a "threefold decrease in nitrogen availability" as the air's CO2 content
went from its lowest to highest level.

In light of these striking experimental observations, we fiercely disagree
with Gill et al.'s conclusion that the ability of soils to continue as
carbon sinks will be severely limited in the near future by impending
nutrient limitations.  Indeed, their own data, when properly analyzed,
indicate that even with their "threefold decrease" in soil nitrogen
availability, the soil of the grassland they studied continued to sequester
ever greater amounts of carbon as the air's CO2 content continued to rise to
close to 200 μmol mol-1 above the atmosphere's current CO2 concentration.

How sad it is, therefore, that Duke University ecologist Robert B. Jackson,
on the basis of the Gill et al. paper of which he was a co-author, would
publicly state, as reported in the Duke University press release, that "the
current lack of interest by the United States in participating in the Kyoto
accords is especially unfortunate."  In point of fact, if there is anything
unfortunate here, it is that so many people have been so egregiously misled
by Jackson's unsubstantiated declaration; for when properly considered, the
data of Gill et al. actually imply that earth's vegetation will yearly
sequester ever more carbon as the CO2 concentration of the atmosphere
continues to rise, thereby exerting an increasingly more powerful brake on
the rate of increase in the air's CO2 content and reducing the potential for
deleterious global warming.

So climb aboard the fossil-fueled biospheric train, folks, the ride is still
free!

Dr. Sherwood B. Idso, President 
Dr. Keith E. Idso, Vice President 

Reference
Gill, R.A., Polley, H.W., Johnson, H.B., Anderson, L.J., Maherali, H. and
Jackson, R.B.  2002.  Nonlinear grassland responses to past and future
atmospheric CO2.  Nature 417: 279-282.
 
Copyright 2002.  Center for the Study of Carbon Dioxide and Global Change


===========
(4) IT'LL COST YA': HOW EUROPE WILL SUFFER

From Tech Central Station, 21 May 2002
http://www.techcentralstation.com/1051/envirowrapper.jsp?PID=1051-450&CID=1051-051702A

By Duane D. Freese 05/17/2002 
 
When serious people examine a problem, they usually look at it from all
angles.

Many leaders on the world scene probably figured they had all the angles
covered when 100 nations agreed last fall to implement a program to reduce
emissions of so-called greenhouse gases. But recent studies demonstrate they
missed a big one -- economics.

The United States government had to explore the economics of climate change
after Vice President Al Gore negotiated the Kyoto protocol in 1997 to cut
greenhouse gas emissions below 1990 levels. Prior to Kyoto, the Senate
had voted 95-0 to warn the Clinton administration not to send it a
treaty that would substantially harm the U.S. economy. So the administration
handed over to the Energy Department the job of figuring out what
implementing Kyoto would cost.

The result didn't satisfy Kyoto backers within the administration. The study
turned up a cost by 2010 -- at the midpoint of the 2008-12 period for
meeting Kyoto's strictures -- of 2% to 4% of GDP. In other words, in the
best of circumstances, it would cost about $200 billion and millions of jobs
to cut U.S. emissions from the burning of fossil fuels 7% below 1990 levels.
No astute political observer believed then or now that the Senate would
ratify such a blow to the economy. And Clinton, despite some attempts by
administration economists to brighten Kyoto's numbers, never submitted the
protocol to the Senate for approval.

Despite this, when President Bush -- relying on the Clinton Energy
Department and other studies indicating the deal provided plenty of economic
pain for almost no environmental gain -- dumped the deal three months after
taking office, howls emanated from Kyoto backers, both here and in Europe.

So the decision to go ahead with the protocol last fall despite the United
States' withdrawal from the agreement prompted the Bush administration to
come up with an alternative. Bush's Clear Skies proposal, calling for a
voluntary emissions trading program as part of an effort to reduce greenhouse
gas intensity by 18% over the next decade, didn't win the president any points
in Europe.

Germany's Environment Minister Juergen Trittin called the proposal a major
disappointment. His counterpart in France, Yves Cochet, called on the
European Union to push Bush to ratify Kyoto "without delay." And a European
Union spokesman reiterated that the Kyoto protocol remained "the best
framework for taking action."

Well, it's pretty easy to push for action in the abstract. Nothing needs to
be done immediately; compliance, after all, begins six years hence. But
since last fall, few governments have stepped forward to take concrete steps
to reduce their nation's emissions.

The European Parliament -- not to be confused with the sovereign governments
of each of the European Union members -- has "ratified" Kyoto. But it
remains to be seen whether all the 15 members of the EU will ratify the
protocol by June 1, as an unexpected development has cropped up that may
force EU nations to actually make emissions cuts that many didn't intend.

On April 30, the European Environment Agency (EEA) reported that the
downward trend in emissions that Europe enjoyed through most of the 1990s
had reversed. In 1999, emissions for the EU as a whole had dropped by 3.8%
from 1990 levels, nearly half the 8% target for the union. That success was
built on three things: Germany cleaned up the inefficient, high-polluting
factories and utilities it inherited in reunifying with formerly communist
East Germany; Britain converted its utilities to North Sea natural gas; and
Europe's population growth generally stagnated.

Last year, though, emissions crept up 0.3%, with Britain, which submitted
Kyoto to Parliament for ratification in March, seeing its emissions rise by
1.2%.

The point is that the easy cuts - those that sold the politicians in Europe
on Kyoto in the first place - appear to be over. The next round of cuts will
come at the expense of economic growth and employment.

A study by DRI-WEFA for the American Council for Capital Formation (ACCF)
presented at a roundtable in Brussels on April 25 - before the new EU
numbers on emissions were delivered - provided an eye-opener about that
reality for many political leaders there.

The report estimates that Germany, for example, will see home heating oil
prices rise 14%, gasoline and diesel prices rise 14% and 20% respectively,
and natural gas prices for industry rise by 40% by 2010 to meet its Kyoto
target. Bottom line: GDP lower by 5.2% and unemployment up 1.8 million from
what it otherwise would be. In a nation suffering 10% unemployment, that
can hardly be heartening news.

Britain faces a 4% GDP loss from the baseline, as industry faces a whopping
117% increase in natural gas costs. Job loss during the 2008-12
implementation phase of Kyoto would amount to 1 million annually.

Spain, which has seen its emissions increase 34% since 1990, faces similar
difficulties bringing them down to merely 15% above 1990 levels, as the EU
has ordered. The 5% loss in GDP will cost 4 million Spaniards employment by
2012.

Some environmental groups dispute such findings. But the fact is that most
nations that signed onto the Kyoto protocol never have done a close
examination of the issue. After finally doing so this year, Canada put off a
decision on ratifying Kyoto from August until later this year. It wants to
renegotiate the Kyoto deal with Europe to permit allowances for such things
as natural gas sales to the United States as a contribution to emissions
reductions for itself.

You can make a bet that similar deals will start to be offered once Kyoto
strictures begin to pinch economies, if politicians let it get that far.

Many European business leaders don't think that their bureaucrats will
actually go ahead with putting such an enormous drag on their economies.
Unlike the United States, where passage of Kyoto would create an inflexible
mandate for bureaucrats to enforce and give private groups the right to sue,
Europe's bureaucratic machinery bows to the will of the parliamentary winds.
As economist Margo Thorning of ACCF has noted, they have the flexibility to
let businesses off the hook if they don't live up to the goals.

That is probably why politicians there can sound so alarmed about global
warming: they know they can back away before it hurts too much. That also
may explain why they embraced draconian global warming solutions before
looking carefully at the costs and benefits.

Environmentalists have demonstrated that they don't care much about the
costs of their programs. One of the reasons that former Greenpeace member
Bjorn Lomborg, author of "The Skeptical Environmentalist," has drawn such
ire from environmental advocates is that he constantly raises questions
about costs. Raising the question of costs forces scientists and
environmentalists who fear global warming to justify their positions. And
who likes to do that?

The world's politicians, though, and their constituents may come to regret
that they didn't study the costs earlier, as was done here. If they back
away from their rhetoric, they'll deservedly be labeled hypocrites; if they
push ahead to the point of scuttling their economies, they'll be fools.
You've got to check the angles.

Copyright 2002, Tech Central Station
 
============
(5) SOLAR DELUSIONS

From Tech Central Station, 21 May 2002
http://www.techcentralstation.com/1051/envirowrapper.jsp?PID=1051-450&CID=1051-052102A

By Sallie Baliunas 05/21/2002 
 
Editor's note: This article is the second in a series.

Solar power proponents tout sunlight as an energy source that is abundant,
free of noxious pollutants and carbon dioxide emission. They claim that if
only sunlight were harnessed, plenty of clean, inexpensive and abundant
energy would be available to improve the human condition while preventing
environmental degradation.

When asked why the fantastical promise of solar power over the last several
decades has not led to very much of it -- less than 0.1% of total energy
supplied in the United States -- Ralph Nader in an interview could only
explain, "Because Exxon doesn't own the sun."

Nader and I agree on one implication of his statement: capitalism works.
Beyond that, Nader ignores some down-to-earth realities about converting the
sun's energy for human use.

Sunny Promises

People want electricity when they want it. Electricity cannot economically
be stored at scale; it must be generated and delivered as needed. Flicking
"on" a light switch
instructs the local power distribution system to locate and deliver
electricity that courses from power plants through a grid of miles of wires
to light the bulb in a fraction of a second. In the case of power from
fossil fuels -- which at present supply about 70% of the U.S.'s electricity
needs -- those fuels are burned to generate the electricity at nearly the
moment the switch is flicked. No ready power generation; no light.

Can't sunlight do that job just as well?

As we learned in the last column, energy can be neither created nor
destroyed, only transformed. To get electricity from sunlight, humans must
do a lot of work to transform and deliver electricity to their homes and
businesses. That work is a major barrier in cost-effectiveness of solar
electricity compared to the current price of conventional sources of
electricity.

Taking In The Rays

How do we transform sunlight? This episode focuses on fixed photovoltaic
cells, which transform sunlight for local use. (Another way to transform
sunlight is to concentrate and store its heat so it can create electricity
through a generator -- solar thermal power. And hydroelectric and wind power
owe their power to the sun. Those energy sources will be covered in another
installment.)

Photovoltaic cells are remarkable semiconductor devices, producing an
electrical current when sunlight strikes them. For example, my decades-old
calculator works when light from the sun or a lamp shines on its blocky gray
solar cell.

Most cells are manufactured from silicon and can convert up to about 20% of
the sunlight illuminating them to electrical energy. The more expensive the
cell, the more electricity it will yield. Even higher efficiencies may be
possible with more exotic and expensive alloys employed as semiconductors,
e.g., indium gallium arsenide nitride.

Highly efficient photovoltaic cells are excellent for space applications,
where smaller size and lighter weight are favored despite higher cost. But
even in space, where sunlight is undimmed by clouds and atmosphere, the real
estate required for useful applications of solar arrays is enormous.

To illustrate, consider the panels of ganged-together solar cells for
operating the International Space Station that are, according to NASA, the
largest electricity-generating arrays in space. They cover eight wings
spanning more than 32,000 square feet -- nearly three-quarters of an acre. Even
so, the more than 250,000 solar cells deliver a theoretical peak power of
only 246 kilowatts - in the sunlit portion of the 90-minute orbit.

The director of NASA solar system space exploration says of planet-exploring
satellites, "We currently operate with a light bulb's worth of power on
board," which can limit science experiments. For spacecraft in deep space,
faint sunlight means that solar panels are prohibitive in terms of size and
weight. Expectations are for nuclear power systems aboard spacecraft to
provide the kilowatts for improved science in deep space exploration.

The limitation of photovoltaics in space is mirrored on Earth with even
greater trade-offs. Even the least expensive panels are quite expensive.
According to the Federal Energy Management Program, photovoltaic
solar systems average about 25 to 50 cents per kilowatt-hour at remote
locations, over a system lifetime of 20 years. The national average of
conventional power delivered from the grid costs 4 to 8 cents per
kilowatt-hour. More power can be had when the panels track the sun rather
than remain fixed, but tracking adds to capital and upkeep costs. For now,
less efficient, fixed systems will be favored, though they require more
square footage for light collection.

At Home with Solar Arrays

Here's a practical example showing the impracticality of operating a
fixed-panel system in New England.

A home clothes dryer draws about 5,000 watts (5 kW) of power and takes about
one hour to dry one batch of laundry. Now, my New England neighbors might
save resources (money, most importantly) by hanging laundry outdoors. The
downside is New England's weather: one must have a back-up plan in case of,
e.g., freezing temperatures, blizzards or rain. An alternative would be to
use the waste heat from a home furnace to dry clothing hung near the
furnace.

The inconvenience is unappealing to many people. Could photovoltaics run the
clothes dryer? Practically speaking, no, because sunlight shines feebly and
intermittently in New England.

Accounting for both day and night, seasons throughout a year, and incidence
of clear weather, a typical New England yard receives about 15 watts of
sunlight per square foot. There is no changing that -- it is the flux from
sunshine that one can expect, on average, in New England.

So to run just the clothes dryer off sunshine would take about 3,300
square feet of cells if the system delivers electricity at a good 10%
efficiency. True, the times of peak sunlight could deliver enough
electricity to operate the dryer with less area of solar cells, but then the
inconvenience of planning to work only during peak power - as in drying
clothes in the air - returns.
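The arithmetic behind that 3,300-square-foot figure is simple enough to check. A minimal sketch, using the wattage, flux and efficiency numbers the article itself supplies:

```python
# Area of solar cells needed to run a 5 kW clothes dryer, using the
# article's figures: ~15 W of average sunlight per square foot in New
# England, and a system delivering 10% of that as electricity.
dryer_watts = 5_000
flux_w_per_sqft = 15
system_efficiency = 0.10

delivered_w_per_sqft = flux_w_per_sqft * system_efficiency  # 1.5 W/ft^2
area_sqft = dryer_watts / delivered_w_per_sqft

print(f"{area_sqft:,.0f} square feet of panels")  # ~3,333 square feet
```

The result rounds to the article's "about 3,300 square feet."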

In short, the diluteness and intermittency of sunlight means that solar
collecting devices require land area, storage devices and back-up sources of
electricity. On-demand electricity is convenient, and solar panels alone
cannot provide it.

To operate more electrical appliances at the same time -- like the furnace,
hot water heater, air conditioner, lights or computer -- would require ever
more thousands of feet of panels. At night, nothing would run, unless there
were significant energy storage capabilities. And what about those majestic
cedar, sugar maple, ash, hickory, alder, beech, white pine, oak and giant
sequoia trees? To keep shadows at bay during the day, those tall trees that
sequester carbon and provide woodland habitat for animals from hawks to
fishers would have to be chopped down so they do not block sunlight falling
on thousands of square feet of panels.

The expanse of panels - much larger than the area of the typical home's
south-facing roof - would need the support of sturdy steel and concrete
structures to survive outdoor hazards. New England experiences hurricanes
with winds over 100 miles per hour, tornadoes (a July 9, 1953, twister
killed 90 people in Worcester, Mass., within one minute), heavy snowfalls of
three to five feet, hailstorms and, although on very rare occasions,
earthquakes (a magnitude 6 shock struck Cape Ann in 1755 and 6.5 in central
New Hampshire in 1638). The panels also will need to be kept clean with
periodic washing. Other drawbacks: in hot weather, the panels are less
efficient, and as they age, their efficiency declines.

Still, couldn't solar panels offset some electrical use from other sources,
so that, for example, less coal would be used to generate electricity, and
wouldn't that be better for the environment?

Solar arrays may be economically worthwhile at isolated, sunny sites, or for
small demand at peak sunlight times, far from the electrical grid. But on
the electrical grid, solar arrays are not yet cost-attractive. The capital
cost of installed panels (on a roof) is about $10 per watt -- $30,000 for
3 kW, still not enough to run the clothes dryer. Conventional electricity
from the grid is available at costs of roughly one-fifth to one-tenth that
of the solar panel power, so the homeowner won't recover the capital costs
even in the sunniest climates over a system's 20-year lifetime of collecting
"free sunlight."

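That payback claim can also be sanity-checked. In this sketch the $10-per-watt installed price and the 8-cent grid rate come from the article; the 15% capacity factor (average output as a fraction of peak rating) is our assumption for a fixed New England array:

```python
# Rough check: can a homeowner recover the capital cost of rooftop
# panels over a 20-year lifetime? Capacity factor is an assumption.
capital_cost = 10 * 3_000      # $10/watt installed, 3 kW array = $30,000
capacity_factor = 0.15         # assumed average-to-peak ratio
grid_price = 0.08              # $/kWh, article's high end for grid power
years = 20

kwh_generated = 3 * capacity_factor * 24 * 365 * years  # 3 kW peak rating
grid_value = kwh_generated * grid_price

print(f"Electricity displaced over {years} years: {kwh_generated:,.0f} kWh")
print(f"Value at grid prices: ${grid_value:,.0f} vs ${capital_cost:,} capital cost")
```

Under these assumptions the displaced electricity is worth roughly $6,300 against a $30,000 outlay, consistent with the article's conclusion that the capital cost is never recovered.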
Thus small solar panel systems aren't likely to draw a lot of customers to
reduce the need for non-renewable sources like coal. As for larger systems
that might provide economies of scale to reduce costs, they have a major
cost in the land needed for arrays. The problems homeowners face with small
arrays compound when one looks at a hypothetical system that might serve
communities.

Consider solar energy to serve Pennsylvania's 12 million people. How much
land would be needed for solar panels at current energy usage, assuming the
panels deliver 10% of the incident sunlight as electricity, averaged around
the year, day and night? The answer: about 1,100 square miles snuggled
together in one massive ecosystem-robbing swath -- a land area roughly the
size of Rhode Island -- all of it clear-cut and of necessity kept bald.

That 1,100 square miles doesn't include the land needed to accommodate more
panels to make up transmission losses, for service roads, for buildings and
high power transmission lines, and for the inevitable storage devices when
the sun sets.
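The 1,100-square-mile figure can be cross-checked against the flux and efficiency numbers used earlier in the article; the per-resident output it implies is our inference, not a figure from the piece:

```python
# Sanity check of the ~1,100-square-mile estimate for Pennsylvania,
# using the article's 15 W/ft^2 average flux and 10% delivered efficiency.
SQFT_PER_SQMILE = 5280 ** 2
flux_w_per_sqft = 15
efficiency = 0.10
area_sqmi = 1_100
population = 12_000_000

delivered_watts = area_sqmi * SQFT_PER_SQMILE * flux_w_per_sqft * efficiency
per_capita_kw = delivered_watts / population / 1000

print(f"Average output: {delivered_watts / 1e9:.0f} GW")   # ~46 GW
print(f"Per resident: {per_capita_kw:.1f} kW")             # ~3.8 kW
```

About 46 GW averaged around the clock, or roughly 3.8 kW per resident, which is in the range of total per-capita energy (not just electricity) demand.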

The footprint of a massive "clean" photovoltaic facility serving any large
community's energy needs would raise serious environmental issues all its
own. That is one reason that even as photovoltaics may have a small niche
market in rural areas distant from existing electricity lines, they for now
appear incapable of delivering the power for a 21st century nation.

Next: Solar Towers

So, fixed solar arrays appear to be too environmentally costly to replace a
typical 1,000-megawatt utility plant. The cost of photovoltaic cells may
drop and their efficiency increase, but the sun is too feeble to make a
substantial increase in capacity, without the technological breakthrough of
inexpensive storage devices.

Could solar thermal towers, which concentrate and store the sun's heat, be
less costly? The answer, again, is "No," as we shall see in the next
installment.

Copyright 2002, Tech Central Station

=============
(6) BAD SCIENCE NEVER DIES

From Tech Central Station, 20 May 2002
http://www.techcentralstation.com/1051/envirowrapper.jsp?PID=1051-450&CID=1051-052002A

By Howard Fienberg 05/20/2002 
 
Splashy science news reports draw eyeballs and move policy, but sometimes
the scientific heart of the news comes up short. Worse, it can be dead
wrong. So what happens in the news when a study is found to be faulty or
false and ends up being retracted or thrown out?

Not much, usually. Science news revolves around news -- new studies,
discoveries and achievements. The discovery that previous research has been
disproven or shown to be worth less than the paper it was printed on just
does not register as news to most journalists, no matter how said research
was originally hyped to the public.

This is understandable. After all, journalists often work for publications
that don't worry much about correcting the public record. Most newspapers
and magazines print correction columns, but they can be hard to find. Few
publications admit in big type that they were wrong.

Of course, when the media make a mistake, it usually is not earth
shattering. By contrast, scientific errors can spread and leave even more
bad science in their wake.

A study by Dr. John M. Budd et al. in the Journal of the American Medical
Association (Jul. 15, 1998) examined 235 scientific journal articles that
had been formally retracted due to error, misconduct, failure to replicate
results, or other reasons. The researchers reported that, "Retracted
articles continue to be cited as valid work in the biomedical literature
after publication of the retraction."

Budd and his colleagues acknowledged that there is sometimes a significant
time lag (an average of 28 months) between publication and retraction. But
they found that the flawed articles were cited in the scientific literature
an astonishing 2,034 times after they had been retracted. The vast majority
of these post-retraction citations treated the work as still valid, making
no reference to the retraction.

At a certain level, these studies have become urban myths. Despite no longer
possessing scientific authority, their repeated publication has let them
take on a life of their own -- regardless of any grounding in truth. Such
scientific myths are worse than simple scare stories about kidney stealing
or the influence of the full moon, because future researchers unwittingly
depend upon their (invalidated) analyses.

For example, take the finding back in November that genetically modified
corn had infiltrated regular strains in Mexico and was scrambling DNA chains
(see Mexican Jumping Genes, Mar. 18). Scientists were suspicious of the
study's claims from the start, but the examination of the research took
time. Nature, the journal that had published the study, then spent time
reviewing and considering the arguments of the critics as well as the
counter-claims of the original authors. When Nature finally printed letters
in April from two teams of scientists pointing out the extensive
deficiencies in the research, the journal all but retracted the original
article, admitting that "the evidence available is not sufficient to justify
the publication of the original paper."

However, when the media responded to this news, they concentrated on
political aspects, not the important scientific ones. The Washington Post
reported that one of the authors of the original article "believed the
effort to undermine" the study "was the work of biotechnology advocates,
some of whom had personal reasons for attacking him." Rather than laying out
the science and the criticism, the Post reduced the matter to a 'he said,
she said' narrative, concentrating on personal and political motives rather
than the merits of the research. This kind of narrative may follow the
dictates of allegedly objective journalism, but it doesn't explain very
much.

Further in the political vein, Guardian writer George Monbiot (May 14)
dedicated a lengthy article to investigating the supposed public relations
campaign against the article, a campaign "so severe" that it "persuaded" the
journal to retract it (never mind the methodological problems in the
article, which Monbiot called "hardly unprecedented in a scientific
journal").

And, as if to demonstrate the conclusions of JAMA's 1998 study of
retractions, the Washington Times (Apr. 30) ran a center-page spread on the
Mexican corn infiltration, along with photos from an anti-GM crop
demonstration outside the Mexican consulate in San Francisco and a photo of
one of the original articles. While the Times did admit, deep inside the
article, that the journal might have erred in publishing the study "because
of a technical issue," understatement was not the biggest problem. In
continuing to accept the retracted Nature article as gospel, the newspaper
was simply following in the well-worn footsteps of news coverage earlier
that month. When media coverage reduces a scientific retraction to just
another installment of political controversy, it spares other journalists
the need to worry about the scientific part of the problem.

Since even scientists must rely on the news media for much of their science
news (the endless array of journals defies any sane person to keep track of
them all), they might miss the Nature retraction, too. It won't be long
before other journal articles cite the (retracted) study.

When initial research becomes received wisdom and subsequent criticism and
retractions fail to enter the public consciousness, journalism fails in its
duty to both science and the public. As long as science is news, journalists
should learn to take the mundane footnote with the exciting headline.
 
Copyright 2002, Tech Central Station

--------------------------------------------------------------------
CCNet is a scholarly electronic network. To subscribe/unsubscribe, please
contact the moderator Benny J Peiser < b.j.peiser@livjm.ac.uk >. Information
circulated on this network is for scholarly and educational use only. The
attached information may not be copied or reproduced for
any other purposes without prior permission of the copyright holders. The
fully indexed archive of the CCNet, from February 1997 on, can be found at
http://abob.libs.uga.edu/bobk/cccmenu.html. DISCLAIMER: The opinions,
beliefs and viewpoints expressed in the articles and texts and in other
CCNet contributions do not necessarily reflect the opinions, beliefs and
viewpoints of the moderator of this network.


