Carbon Dioxide has many benefits!

Via GREENIE WATCH.

11 April 2014

Report: CO2 Is Not a Pollutant, Provides ‘Beneficial Impacts’ to Planet

Carbon dioxide is not a pollutant but a naturally occurring chemical compound that benefits plants and thus the planet and its inhabitants, according to a lengthy report released Wednesday by the free-market Heartland Institute.

“Carbon dioxide is an aerial fertilizer that provides many beneficial impacts,” said Craig Idso, one of the lead authors of the report, when CNSNews.com asked him to name the most salient finding of the 37 scientists from 12 countries who contributed to it.

“You can look at thousands of studies – real world data studies that have actually been conducted that demonstrate beyond any doubt that higher levels of CO2 are going to increase the productivity of plants,” Idso said.

“They’re real,” Idso said of the benefits of CO2. “They’re not imagined. They’re not projected. They’re real, and they’re occurring now.”

In December 2009, the Environmental Protection Agency (EPA) issued a final regulation listing CO2 as one of the greenhouse gases considered a pollutant that “endangers public health.” The regulation is part of what the EPA says is required under the Clean Air Act.

The EPA relies heavily in its environmental assessments on the climate change reports produced by the United Nations’ Intergovernmental Panel on Climate Change (IPCC), which issued its fifth report in September 2013.

Joseph Bast, president of the Heartland Institute, said the IPCC report has been “largely discredited” by the Nongovernmental International Panel on Climate Change’s (NIPCC) “Climate Change Reconsidered II” series of reports, including a 1,000-page report on the physical science of climate change that was released in 2013.

Fred Singer, report co-author and an atmospheric and space physicist and climate change expert, said at the press conference that the models used by IPCC do not reflect the real-world data about the planet and its warming and cooling trends.

Idso provided dramatic examples of how CO2 impacts plants, showing images of small and underdeveloped plants that were exposed to a small amount of the compound compared with thriving plants with generous leaves and blossoms and expansive root systems.

“One of the overall important findings of our report is that atmospheric CO2 is not a pollutant,” Idso said. “It is a colorless, odorless, tasteless gas that offers many biospheric benefits.

“Probably chiefly known among all of these benefits is that elevated levels of atmospheric CO2 tend to increase the biomass and productivity of nearly all plants and ecosystems on earth,” Idso said.

Some of the other findings in the biological impacts report summary include:

* The ongoing rise in the air’s CO2 content is causing a great greening of the Earth.

* Rising levels of CO2 are increasing agricultural productivity around the world, therefore increasing food security.

* Where temperatures have warmed, terrestrial ecosystems have thrived around the world, as have amphibians, birds, butterflies, other insects, reptiles and mammals.

* A modest warming of the planet will result in a net reduction of human mortality from temperature-related events.

http://cnsnews.com/news/article/penny-starr/report-co2-not-pollutant-provides-beneficial-impacts-planet


Climate Science is never settled!

Lovejoy’s 99% ‘confidence’ vs. measurement uncertainty

By Christopher Monckton of Brenchley

It is time to be angry at the gruesome failure of peer review that allows the publication of papers such as the recent effusion of Professor Lovejoy of McGill University, billed thus in the gushing, widely-circulated press release that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days:

“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”

One thing anyone who studies any kind of physics knows is that claiming results to 99% confidence – between two and three standard deviations – requires, at minimum, that the data underlying the claim be exceptionally precise and trustworthy and, in particular, that the measurement error be minuscule.
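For reference, a two-sided 99% confidence level corresponds to roughly 2.58 standard deviations of a normal distribution, while three standard deviations correspond to about 99.7%. The correspondence can be checked directly with the Python standard library:

```python
from statistics import NormalDist

# Two-sided confidence level implied by k standard deviations of the
# normal distribution: P(|Z| < k) = 2*Phi(k) - 1.
for k in (2.0, 2.576, 3.0):
    conf = 2 * NormalDist().cdf(k) - 1
    print(f"{k} sigma -> {conf:.2%}")
# 2σ ≈ 95.45%, 2.576σ ≈ 99.00%, 3σ ≈ 99.73%
```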

Here is the Lovejoy paper’s proposition:

“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:

ΔTglobe(t) = ΔTanth(t) + ΔTnat(t) + Δε(t)    (1)

ΔTglobe(t) is the measured mean global temperature anomaly, ΔTanth(t) is the deterministic anthropogenic contribution, ΔTnat(t) is the (stochastic) natural variability (including the responses to the natural forcings), and Δε(t) is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”
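The error estimate Lovejoy describes – the spread of the individual observed series about their common mean – can be sketched with synthetic data. The three series, the linear signal, and the 0.03 K scatter below are illustrative assumptions, not the actual datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic "observed" global anomaly series: hypothetical stand-ins
# for independent datasets sharing one underlying signal but differing by
# small dataset-specific errors (the 0.03 K scatter is an assumption).
months = 120
signal = np.linspace(0.0, 0.2, months)             # common signal, K
series = [signal + rng.normal(0.0, 0.03, months)   # per-dataset scatter
          for _ in range(3)]

# Lovejoy-style estimate: measurement error ~ spread of each series
# about the mean of all series at each time step.
stack = np.vstack(series)
residuals = stack - stack.mean(axis=0)
error_estimate = residuals.std()

print(f"estimated measurement error ≈ ±{error_estimate:.3f} K")
```

With this construction the recovered spread is, unsurprisingly, of the same order as the scatter that was put in; the question raised below is whether the real datasets justify so small a figure.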

Just how likely is it that we can measure global mean surface temperature over time either as an absolute value or as an anomaly to a precision of less than 1/30 Cº? It cannot be done. Yet it was essential to Lovejoy’s fiction that he should pretend it could be done, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result using speculative modeling would have visibly failed at the first fence.

Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s to make warming this century seem worse than it really was are a great deal larger than a thirtieth of a Celsius degree.

Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:


Figure 1. Annual New Zealand national mean surface temperature anomalies, 1990-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.


Figure 2: Tampering with the U.S. temperature record. The GISS record from 1990-2008 (right panel) shows 1934 as 0.1 Cº higher and 1998 as 0.3 Cº lower than in the same record’s original 1999 version (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error mentioned by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two panels are repeatedly shown one after the other as a blink comparator.

Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread. There are similar examples from many records in several countries.
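The multiples quoted in the two figure captions are simple arithmetic against Lovejoy’s stated ±0.03 K, and can be checked directly (the input numbers are those given in the captions):

```python
# Check the multiples quoted in the figure captions against Lovejoy's
# stated measurement error of +/-0.03 K.
lovejoy_error = 0.03  # K

# Figure 1: NZ trend raised from 0.3 to 1.0 C/century by "adjustment".
nz_adjustment = 1.0 - 0.3  # C/century
print(f"NZ adjustment / error = {nz_adjustment / lovejoy_error:.1f}")  # ~23

# Figure 2: GISS shifts of 0.1 C (1934) and 0.3 C (1998), 0.4 C combined.
giss_shift = 0.1 + 0.3  # C
print(f"GISS shift / error = {giss_shift / lovejoy_error:.1f}")  # ~13
```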

But what is quantified, because Professor Jones’ HadCRUT4 temperature series explicitly states it, is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.

Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.

His campaign was so successful that the U.S. climate community were shamed into establishing a network of several hundred ideally-sited temperature monitoring stations with standardized equipment and reporting procedures. That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement uncertainty.

Coverage uncertainty arises from the fact that temperature stations are not evenly spaced either spatially or temporally. There has been a startling decline in the number of temperature stations reporting to the global network: there were 6000 a couple of decades ago, but now there are closer to 1500.

Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.


Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.

Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined uncertainties also shown. At present, the combined uncertainties are ±0.15 Cº, or almost a sixth of a Celsius degree up or down, a total interval of 0.3 Cº. This value, too, is five times greater than the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).

The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996.


Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.
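The test applied here – whether an entire least-squares trend line stays inside the ±0.15 Cº combined-uncertainty band – can be sketched on synthetic monthly data. The 0.005 K/yr trend, the 0.1 K noise level, and the seed below are illustrative assumptions, not the real HadCRUT4 series:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 218 months of anomalies (Jan 1996 - Feb 2014):
# a small warming trend plus monthly noise. The trend, noise level and
# seed are illustrative assumptions only, not the real HadCRUT4 data.
months = 218
t = np.arange(months) / 12.0                      # time in years
anoms = 0.005 * t + rng.normal(0.0, 0.1, months)  # anomalies, K

# Least-squares linear trend through the monthly anomalies.
slope, intercept = np.polyfit(t, anoms, 1)
total_change = slope * (t[-1] - t[0])             # rise over period, K

# Does the fitted trend line stay inside a combined-uncertainty band of
# +/-0.15 K, i.e. is its total rise smaller than the 0.3 K band width?
uncertainty = 0.15  # K
inside = abs(total_change) < 2 * uncertainty
print(f"trend change: {total_change:+.3f} K; within band: {inside}")
```

A trend whose total rise is smaller than the width of the uncertainty band cannot be distinguished, on this criterion, from no trend at all – which is the point the figure is making.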

Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only 0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties that are five times larger.

Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity either to the claimed precision of 0.01 Cº or to 99% confidence from the temperature data.

For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The understatement of the measurement uncertainties by a factor of five is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.