Chemical Carcinogens and the Risks Thereof
I have always been attentive to global environmental issues such as global air pollution and acid rain (225,244). My interest in chemical carcinogens arose in Denver, Colorado, in summer 1974. I had given a talk on comparing the risks (to health) of energy systems at a conference on energy arranged by Edward Teller at the “Edward Teller Center for Science Technology and Political Thought” at the University of Colorado. In addition to a “School” with twenty participants, there was also a large public meeting in Denver of 300 people. This was part of what I call “my right wing summer”. The Vice President of Air Products and Chemicals, Ross Adams, came up to me and commented that he liked my approach and that they needed it in the chemical industry. “We had a problem with cancers caused by vinyl chloride,” he said. “We think we are doing the right thing but are having difficulties in explaining this to the public. Can you help us?” “I know nothing about vinyl chloride,” I said. “But I will look into it. If I think you are right you will have made a friend. If I think you are wrong you will have made an enemy. That is the risk you take.” He took the risk and I became a consultant. I read an excellent EPA paper by Dr McGaughy, who had looked both at the human data and at animal data in which Maltoni, in Bologna, Italy, had kept Sprague-Dawley rats in an atmosphere containing vinyl chloride and they too had developed angiosarcoma. The next year was the year my father was dying of lung cancer, so when I went to see him in Oxford I also called on Sir Richard Doll, then Regius Professor of Medicine and a member of the SCR at my college, Christ Church in Oxford, and then I went to visit the VP of Imperial Chemical Industries in Welwyn Garden City.
The risk assessment was simple. I presented it at a meeting of the American Chemical Society in San Francisco in 1975. Exposures in the workplace were very high. No one had taken care to seal leaks, and when the public complained of the smell the factory managers just shut the windows, reducing the exposures outside but increasing them inside. Some exposures were as high as 10% (which exceeded the anaesthetic level, and the workers passed out); most were over 1000 ppm. Workers began to get a rare liver disease, angiosarcoma. The Risk Ratio (RR) for angiosarcoma, the incidence among those exposed divided by the incidence among those unexposed, was 400. That was huge! But there were not many people exposed. I was given access to the vinyl chloride registry of some two hundred victims, worldwide, over the previous 20 years. Industry, labour, and regulators all did the right thing. The plants were sealed to avoid fugitive emissions and occupational exposures were reduced 1000-fold. Industry actually made money: they avoided the loss of 5% of the vinyl chloride. The problem was solved, although some environmental advocates and, alas, the head of OSHA continued to raise the issue as late as 1980 and demand further unnecessary action. It was a pity these environmental advocates could not move on. There were so many more problems that needed their attention.
One problem became apparent to me. Epidemiologists tended to talk to themselves and not to the public, not even to scientists in other disciplines. A typical, otherwise first-rate, paper on the effects of pollutant X might end with a statement that the Risk Ratio is 3.3, with no discussion of what that might mean for public policy. What matters in establishing that a pollutant is troublesome is the Risk Ratio. But to the individual what matters more is the risk itself, and what matters for society is the risk summed over all people exposed. The Risk Ratio for vinyl chloride of 400 did not mean a widespread problem, because very few people were exposed. But a much smaller risk ratio of 3.0, with a much more widespread exposure, might. In 1977 the FDA came out with a fine discussion of saccharin. No person has ever been known to have any sickness from saccharin exposure, so any calculation of a problem had to use animal data. Rats get bladder cancer when exposed to high doses. From this bitter experience of rats the FDA estimated that 500 persons a year might develop cancer in the USA, more than the number of angiosarcomas worldwide in 30 years! THIS was an issue that deserved my attention. Alas, 30 years later I still have trouble explaining it to people.
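To make the distinction concrete, here is a back-of-envelope calculation with illustrative numbers of my own; they are not taken from the original assessments. The burden on society is roughly the excess risk per person multiplied by the number of people exposed.

```latex
% Expected excess cases per year in an exposed population
% (illustrative numbers only, not from the original assessments)
\[
  \text{excess cases per year} \;\approx\; (\mathrm{RR}-1)\times I_0 \times N_{\text{exposed}}
\]
% Vinyl chloride: RR = 400, baseline angiosarcoma rate I_0 of order 10^{-7}/yr,
% N of order 5 x 10^4 workers  =>  roughly 2 excess cases per year.
\[
  (400-1)\times 10^{-7}\,\mathrm{yr}^{-1}\times 5\times 10^{4} \;\approx\; 2
\]
% A hypothetical agent with RR = 3 for a common cancer (I_0 of order 10^{-4}/yr)
% to which 10^8 people are exposed  =>  roughly 20,000 excess cases per year.
\[
  (3-1)\times 10^{-4}\,\mathrm{yr}^{-1}\times 10^{8} \;\approx\; 2\times 10^{4}
\]
```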
When I went to see Richard Doll in 1976 to discuss vinyl chloride, he suggested to me that there was a real need to compare animal data on carcinogens with human data, to see to what extent one could use animal data to predict the behavior in man and choose a “safe” exposure level. This we did. I had just hired as a research fellow for energy studies Dr Edmund A.C. Crouch, who had worked with Richard Eden at the Cavendish Laboratory. He changed the field of risks of chemical carcinogens and we wrote a couple of seminal papers. Edmund was, and is, imaginative and careful. He was always correcting me when I was careless in distinguishing between the estimate and the estimator and suchlike niceties. I get tied up in knots about whether to divide by N or N-1 when calculating the standard deviation. Edmund doesn’t. He is also first rate with computers, both software and hardware. Indeed, a few years later the head of the Statistics Department at Harvard said of him: “He is the best statistician I know: he gets it right the first time.” The first paper, published in 1979 with Edmund Crouch, was a comparison of carcinogenic potency in rats and mice and hence (hopefully) in man (211). Edmund wrote several more papers about this on his own (Crouch, 1983a, 1983b).
These papers compare potency in several species, principally rat and mouse. It is shown that when animals are fed the same daily amount as a fraction of body weight, the lifetime incidence of cancer is comparable. There are variations with the particular strain of rat or mouse, but these are usually small factors. Better and more data, published in the second edition of our book on risk-benefit analysis (717), confirm this general conclusion. There are limited data on humans, but where they exist they are in general agreement with this result, with an important exception that we noted at the time: arsenic. We attributed this, tentatively but erroneously, to the idea that the exception was particular to the mode of entry into the body, by inhalation. It was ten years before the tragedy of this worldwide misunderstanding hit, but I was among the first to realize it and its implications. The tragedy that no one was looking carefully at the effects of chronic doses of arsenic when the use of ground water rather than surface water was urged is discussed elsewhere; most attribute the tragedy in Bangladesh to blindness on the part of the World Bank, UNICEF, or the British Geological Survey. But in my view the problem lies deeper. Toxicologists the world over had implicitly but incorrectly assumed that rats and mice were always good indicators of human responses, and were not alert to exceptions.
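A minimal sketch of what “comparable potency per unit body weight” means in practice is given below; the one-hit formula and the bioassay numbers are my own illustrative choices, not the estimator or data used in (211) or (717).

```python
# Toy comparison of carcinogenic potency in two species, with dose expressed
# per unit body weight (mg/kg/day).  All numbers are hypothetical.
import math

def one_hit_potency(p_dosed, p_control, dose_mg_per_kg_day):
    """Potency beta in the one-hit model P(d) = 1 - (1 - p0) exp(-beta d).

    One common convention; not necessarily the estimator used in the
    papers cited in the text.
    """
    excess = (p_dosed - p_control) / (1.0 - p_control)   # excess lifetime risk
    return -math.log(1.0 - excess) / dose_mg_per_kg_day  # per (mg/kg/day)

# Hypothetical lifetime tumor incidences at the same dose per body weight:
beta_rat = one_hit_potency(p_dosed=0.30, p_control=0.05, dose_mg_per_kg_day=10.0)
beta_mouse = one_hit_potency(p_dosed=0.24, p_control=0.06, dose_mg_per_kg_day=10.0)

print(f"rat potency   ~ {beta_rat:.3f} per mg/kg/day")
print(f"mouse potency ~ {beta_mouse:.3f} per mg/kg/day")
print(f"ratio rat/mouse ~ {beta_rat / beta_mouse:.2f} (comparable within a small factor)")
```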
As I ponder this issue of animal-man comparisons I also think of asbestos. Here a factor enters that is additional to the chemical properties: the shape and size of the fibers seem to matter. This brings in a physical parameter in addition to the obvious chemical one. Over 30 years scientists have floundered on this issue. Are animals a good test? Most scientists have tended to rely on human data, which unfortunately are extensive because of high past exposures. An attempt by the US EPA in July 2008 to establish a more reasoned risk assessment has foundered because the scientific committee established to examine the problem said, basically, that we do not know enough to make many distinctions. But it does appear, to my surprise, that chrysotile asbestos, a form of serpentine asbestos, which can be distinguished by electron microscopy from amosite, crocidolite, and tremolite, which are amphiboles, is a little less potent in causing lung cancer than the amphiboles and a lot less potent in causing mesothelioma (906). What does this mean for the materials that are being made by our enthusiastic nanotechnologists? I do not believe that anyone has any really good idea of how to address these new risks.
The role of bioassays in understanding risks of chemicals
Although animal bioassays have been used for half a century to discover which chemicals cause cancer, their use in quantitative assessment of risk is only about 25 years old. The U.S. Food and Drug Administration (FDA) and the Environmental Protection Agency (EPA) have made the most extensive use of quantitative risk assessments, and the assumptions they make dominate the field. However, the assumptions are rarely clearly stated. The four most crucial assumptions are listed below (a numerical sketch of assumptions 2 and 3 follows the list):
(1) that a substance that is carcinogenic in animals is carcinogenic with a similar potency (measured in appropriate units) in humans;
(2) that there is a linear (proportional) relation between dose and carcinogenic response (probability of developing cancer);
(3) that the slope of the dose-response relationship at low doses can be derived from data at high doses; and
(4) that it is reasonable to treat all carcinogens, regardless of proposed mechanism of action, in the same way.
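The sketch below, with hypothetical numbers of my own, shows what assumptions (2) and (3) amount to in practice: a straight line drawn from a single high-dose bioassay result down to environmental doses.

```python
# Linear high-to-low-dose extrapolation (hypothetical numbers).

# High-dose bioassay result: 25% lifetime tumor incidence at 50 mg/kg/day,
# against a 5% background.
p_dosed, p_control, high_dose = 0.25, 0.05, 50.0          # dose in mg/kg/day
excess_risk = (p_dosed - p_control) / (1.0 - p_control)   # excess lifetime risk

# Assumptions (2) and (3): risk is proportional to dose, with the slope
# taken from the high-dose data point.
slope = excess_risk / high_dose                           # risk per (mg/kg/day)

# Extrapolate to a low environmental exposure, e.g. 0.001 mg/kg/day.
low_dose = 1e-3
predicted_risk = slope * low_dose
print(f"slope ~ {slope:.2e} per mg/kg/day")
print(f"predicted lifetime risk at {low_dose} mg/kg/day ~ {predicted_risk:.1e}")
# ~4e-6: the sort of 'one in a million' number that regulators then multiply
# by the number of people exposed to get an expected number of cases.
```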
Studies on animals are used to detect potential human carcinogens before harm to people is obvious. The usefulness of animal tests is usually justified by reference to the fact that all compounds identified as human carcinogens through epidemiology have been demonstrated to be animal carcinogens, although sometimes after much difficulty and, in at least one case, arsenic, with lower than the expected potency. However, the converse of this argument, that all animal carcinogens are or can be expected to be human carcinogens, has been challenged. Some challenges can be addressed by a careful consideration of potency and exposure. This point has been hard to get across. A paper by Ennever et al. in 1987 listed several animal carcinogens which had not been demonstrated to cause cancer in humans in spite of careful and responsible epidemiological studies. But Goodman and I (479, 482, 483) showed that the carcinogenic potency in humans predicted on the basis of chemical bioassays and the "usual" interspecies relationship is not large enough for tumors to be expected to appear in the small, moderately exposed groups of humans in these studies. I still have to explain to intelligent scientists, who ought to know better, that there is a difference between not finding an effect because it is below the limit of detection and its not being there.
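A toy calculation with hypothetical numbers makes the point quantitative: even when the predicted potency is real, the expected excess in a small cohort can be far below the statistical noise in the background.

```python
# Expected excess cases versus what a small cohort can detect (hypothetical).
import math

n_workers = 2000          # size of an epidemiological cohort
baseline_risk = 0.04      # expected lifetime risk of the tumor in question
predicted_excess = 1e-3   # excess lifetime risk predicted from the bioassay

expected_background = n_workers * baseline_risk      # ~80 cases
expected_excess = n_workers * predicted_excess       # ~2 cases

# Roughly, an excess is detectable only when it exceeds about two standard
# deviations of the Poisson fluctuation in the background count.
detection_threshold = 2.0 * math.sqrt(expected_background)  # ~18 cases

print(f"expected background cases : {expected_background:.0f}")
print(f"expected excess cases     : {expected_excess:.0f}")
print(f"rough detection threshold : {detection_threshold:.0f}")
# 2 << 18: a null result here says the potency is below the limit of
# detection, not that it is zero.
```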
Each of the major regulatory assumptions listed above has come to be challenged by the scientific community. In many ways the objections hinge on assumption 4: one should start by assuming that all carcinogens are carcinogenic at some level, but they can differ by factors of 10 million in their potency. Empirical studies have demonstrated that there are instances when assumptions 1 and 2 are indeed reasonable, as shown in our first paper (211), and instances when they are not. In particular, assumption 1 is wrong for arsenic, and in my view an adherence to this assumption, without realizing that it is an assumption, is a major reason for the tragedy in Bangladesh and in SE Asia generally (900). I and my friends and colleagues have been searching for ways in which differences between carcinogens can be demonstrated empirically, hoping to help bridge the gap between scientists and regulatory scientific policy.
Our colleague, the late Professor Fiering, suggested that one of his graduate students, Lauren Zeise, work with us for a PhD thesis. This she did, using a small grant I got from DOE. Lauren was poor in statistics, but Edmund taught her. Indeed, when Lauren got her first job in the California regulatory system, she found, with wonder (so she told me), that she seemed to be the only person in the department who understood statistics! She may still be. As a student Lauren noticed a paper by Parodi in Italy pointing out an unexpected correlation between acute toxicity and carcinogenic potency. Her PhD thesis at Harvard University in 1984, under my direction, then became confirming and expanding this result, using the CBDS database for carcinogenic potency and the RTECS database for acute toxicity. It was hard to get it published. We tried several journals. Typically we got two extreme (but both negative) reviews. The first said this is obvious and not worth publishing; the second said these are two different medical end points and there cannot be any connection! That told me that we were on to something important, but Lauren was in despair. I cheered her up with the story, probably oversimplified but told to me by Rosalyn Yalow herself, of her early work on radioimmunoassay. Rosalyn had tried to get it published in three separate journals and was turned down. Finally her collaborator persuaded the editor of the third journal to publish the paper provided that one controversial conclusion, about the mechanism of insulin I believe, was eliminated. Rosalyn told me that the offending sentence was finally published in her Nobel Prize speech. Lauren did get her work published (306, 346, 364), but not in a Nobel Prize speech, an honor she has yet to receive.
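The kind of analysis involved can be sketched with synthetic data, not the CBDS and RTECS entries actually used: potency and acute toxicity are compared on a log-log scale, and the slope and correlation are examined.

```python
# Log-log correlation of carcinogenic potency with acute toxicity,
# on synthetic data (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n_chemicals = 200

# Synthetic acute toxicities: log10(LD50) spread over several orders of magnitude.
log_ld50 = rng.uniform(0.0, 4.0, n_chemicals)                 # log10(mg/kg)

# Synthetic carcinogenic doses (TD50-like) correlated with LD50 plus scatter.
log_td50 = log_ld50 + rng.normal(0.0, 0.8, n_chemicals) + 1.0

slope, intercept = np.polyfit(log_ld50, log_td50, 1)
r = np.corrcoef(log_ld50, log_td50)[0, 1]
print(f"log-log slope ~ {slope:.2f}, correlation r ~ {r:.2f}")
# A slope near one with substantial scatter is the qualitative pattern
# described in the text; the real analyses also had to deal with censoring
# and detection limits.
```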
But perhaps the most controversial, the most misunderstood, and potentially the most important idea was our proposed use of this correlation for the regulation of carcinogens. The Society for Risk Analysis, with its journal “Risk Analysis”, had just started, and Edmund and I broached the subject in the first paper of the first issue (258). Professor Richard Zeckhauser, an economist at the Kennedy School of Government, was a reviewer, and he discussed it with us at the Science Center cafeteria. Our idea was, and is, to accept this correlation as a statement of fact, although not a statement of causation, and nonetheless to use it in estimates of risk for regulatory purposes. This, we felt, was being properly cautious, although as noted in the section on my work with the Atlantic Legal Foundation and the brief we submitted in Joiner, this should not mean that animal data should by themselves be used for assigning blame or awarding damages. If one has data on carcinogenesis in animals, then there is a reasonable prediction of what the carcinogenic potency in people will be, with an uncertainty range attached. Much of the toxicological community had been doing this qualitatively for a century without clearly saying so. The danger of the approach is to forget the assumptions and limitations. The terrible example is arsenic. The world had basically assumed that if rats and mice don’t get cancer, why should people? That arsenic is an exception to our rule leads me to ask: what other exceptions are lurking around the corner that will cause another catastrophe like the arsenic catastrophe in SE Asia? I am still, in 2007, having a tough time trying to get scientists to think about the problem in this way.
We tried to extend this approach to situations where all we know is the acute toxicity, maybe only in rats or mice. We suggested that this can give a first estimate of carcinogenic potency in people. Of course, we never suggested, nor do we believe, that this should replace more direct data when those data can be collected. But it can guide anyone producing a new chemical in deciding whether to abandon it or to go on to a carcinogenic bioassay and maybe a clinical trial. This led also to another suggestion for which at the time (25 years ago) I was much criticized. I noted that of the chemicals found to be carcinogenic in animals, most clustered just at the limit of detection, and none were found to be 100 times more carcinogenic than the detection limit. Bruce Ames and collaborators showed that half of all tested chemicals had a measurable (statistically significant) carcinogenic potency, and this did not, and does not, depend on whether the chemicals are naturally present in the environment or are man-made. I postulated, and still postulate, that if we had perfect measurements a plot of the number of chemicals versus log potency would be normal, with half the chemicals having a detected potency and the other half a potency that is real but undetectable with present procedures. This postulate led to several suggestions. The first is that we should not ask “Is this chemical carcinogenic?” but should assume that it is carcinogenic and estimate the potency by all means available. We should also never say that a certain chemical is a non-carcinogen, but say that it is a chemical with a potency less than X, where X is the best estimate from all the data, direct or indirect. I thought, and think, that this is a simple way of guiding us in what to do next. But one distinguished toxicologist, for example, said: “It is the craziest idea I have ever heard.” I recently heard, 20 years later, that some important toxicologists now think it less crazy than they did at first. Maybe they will tell me directly and credit me with the thought. But on this I am not optimistic.
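The postulate can be illustrated with a toy Monte Carlo model of my own; the detection limit and the width of the distribution are arbitrary choices, not fitted values.

```python
# Toy model: log-normal potencies censored by a detection limit.
import numpy as np

rng = np.random.default_rng(1)
n_chemicals = 10_000

# Assume log10(potency) is normal with its median at the detection limit,
# so about half of all chemicals are detectable as carcinogens.
detection_limit = 1.0                    # arbitrary potency units
log_potency = rng.normal(np.log10(detection_limit), 0.7, n_chemicals)
potency = 10.0 ** log_potency

frac_detected = (potency >= detection_limit).mean()
frac_far_above = (potency >= 100.0 * detection_limit).mean()

print(f"fraction detected as carcinogens     : {frac_detected:.2f}")   # ~0.5
print(f"fraction >100x above detection limit : {frac_far_above:.3f}")  # rare
# The undetected half are not 'non-carcinogens'; they are chemicals whose
# potency is below the limit of detection of the assay.
```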
Very few people have used the NTP CBDS database for addressing problems and questions about carcinogenesis. This database records the tumors and other lesions in over 300,000 rodents exposed to over 400 chemicals. There have been several recent discussions suggesting that regulation based upon these animal bioassays causes unnecessary panic in the American populace. I disagree, and in a letter to the Washington Times in 2005 (880) I claimed that the problem is the attempt to regulate minuscule risks. Alas, the industry has never come to grips with this.
Alas, we got into a little conflict and misunderstanding with Bruce Ames and Lois Gold. They had undertaken the important and massive task of listing essentially all chemicals alleged to cause cancer in animals; I call it the Gold book. Bruce picked up on Lauren’s correlation and attached more meaning to it than I believe is justified, or even correct. He claimed that the carcinogenic effects are due to the toxic effects of dosing chemicals at too high a dose. Unfortunately that claim does not work out in practice. More importantly, he and Lois claimed that the interspecies comparison of carcinogenic potency is a tautological consequence of the previously known correlation between acute toxic responses in rats and mice. We argued about this and wrote a couple of papers thereon (340, 461). The last was a careful statistical (Monte Carlo) study of the relationship between acute toxicity and carcinogenic potency, to address the extent to which it is real and the extent to which it might be a statistical artifact (461). The main point is that there exist very few, if any, chemicals with a high carcinogenic potency but low acute toxicity, and this fact was known well before anyone did any statistical analysis.
We also looked at other aspects of the data on carcinogenesis in rats and mice. Byrd et al. (1988, 1990) studied the concordance between different organ sites for different species (437, 500) and demonstrated that rare tumors in one species are better predictors of tumors in another species than are common tumors; this was the first definitive quantitative justification for a procedure adopted by both IARC and FDA. Contrary to our initial presumption, liver tumors in B6C3F1 mice are as good a predictor of tumors in rats as other common tumors. Moreover, benign liver tumors (adenomas) are as predictive as malignant tumors (carcinomas) (500). With George Gray we showed that concordance across sites and species improves when common regulatory criteria (as used by the FDA, for example) are applied; this is a quantitative justification for these procedures (620). Then we used Monte Carlo modeling to study false-positive detection rates in long-term rodent bioassays (650) and showed that anticarcinogenic effects (called hormesis by Calabrese) are common in the CBDS database (642, 650 and erratum). One important issue is the tumor site concordance between humans and laboratory animals. This was explored directly by my colleagues Gray and Evans in 1992. Between humans and mice there was no evidence of tumor site concordance. A similar comparison of humans with rats results in a statistically significant level of discordance; that is, a response in one tissue in one species is actually statistically associated with the lack of that response in the other species. In this data set there is no statistically significant association (at the p = 0.05 level) of tumor site between rats and mice for those human carcinogens tested in both species, although evidence of discordance is present with 0.05 < p < 0.10. We began to study these site-specific correlations in detail with a view to comparing the predictions of an Absolute Risk (AR) and a Relative Risk (RR) model.
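At their simplest, these concordance analyses reduce to two-by-two tables of the kind sketched below; the counts are hypothetical, not those of Byrd et al. or of Gray and Evans.

```python
# Two-by-two site-concordance test between two species (hypothetical counts).
from scipy.stats import fisher_exact

# Rows: chemicals producing a tumor at a given site in species A (yes / no);
# columns: the same site in species B (yes / no).
table = [[12, 8],    # positive in A: 12 also positive in B, 8 not
         [10, 30]]   # negative in A: 10 positive in B, 30 not

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.1f}, p ~ {p_value:.3f}")
# An odds ratio above one indicates concordance between the species at that
# site; below one indicates the kind of discordance mentioned in the text.
```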
Interestingly, there is an anti-correlation between the incidence of liver tumors in control (non-dosed) rodents and lymphomas. This seems to be most pronounced for histiocytic lymphoma and adenomas. We produced the most detailed study to date of this (642, 650, 743, 744). We then began to examine in a little detail the effects of tumor groupings (classifications) on the fraction of the chemicals being studied that are assigned as carcinogens or anti-carcinogens. A preliminary report on this was presented as a poster at the Society of Toxicology (SOT) meeting in March 1999 and was then published (745, 748). In addition, members of the group have been authors of a number of other papers on specific chemicals, or on asbestos (207, 209, 212, 218, 240, 258, 304, 309, 317, 327, 332, 343, 346, 347, 378, 391, 408, 414, 440, 451, 455, 495, 499, 701). Notwithstanding the arguments just above, we discussed, in an unpublished lecture at Professor Calabrese’s annual seminar on hormesis, that in the same animal species one can often have both anti-carcinogenic effects at one tumor site and carcinogenic effects at another tumor site. When the cancer rates are added, the total cancer rate has a dip. This was obviously true in one of the early experiments on dioxin by Kociba and others, but often ignored.
I tried, unsuccessfully, to get Edmund Crouch appointed in the School of Public Health, so in 1987 he left for a small, very good company, Cambridge Environmental. We let these ideas sit undeveloped for some years. Then a graduate student, Francesco (Frank) Pompei, came upon a very promising development. He had noticed that cancer incidence above age 70 flattened and fell. I said: “Nonsense. That cannot be.” The Armitage-Doll multistage theory, about which I had just lectured to the graduate class on Risk Analysis, predicts that incidence rises indefinitely. I was right about the Armitage-Doll prediction but wrong about the data. This led at once to another discovery that we are finding it hard to get the “experts” to accept.
It has been the conventional wisdom for a long time that if a person does not die of something else (car accident, heart disease, etc.) he or she will die of cancer. This view is being challenged by the fact that the age-specific incidence of cancer does not seem to rise indefinitely, but flattens off about age 80 and even falls above that age. This was pointed out by Frank, who successfully defended his Harvard PhD thesis on this subject in April 2002. Papers followed (805). Better data are presented in a comment on a paper by Campisi (868), and these were expanded in a conference poster. Some implications were presented by Pompei and Wilson (872). But the most comprehensive data, on 20 cancers in males and females for three different observational periods, are in a paper published in the leading journal “Cancer Research” in 2008. All but one show a similar fall of the incidence to zero at age 100.
Then we were told by critics that this fall-off does not appear in animals. True, it mostly does not. That is because the “regulatory” bioassay is to study rats and mice up to age two years and then to kill them in a “terminal sacrifice.” That corresponds to about age 80 in people. For studies where animals live longer there is a flattening and fall-off, as Pompei, Polkanov and Wilson showed (805). There is one data set, the so-called “megamouse” study, or the ED01 data, where animals are followed to death at 800 days. But these are old data, and our attempt to decode the data from the file that we were given led to a gross error that unfortunately we published. The cancer incidence does not fall off as fast as we first thought. We are now examining the data more carefully. We note that the animals dosed with high levels of 2-AAF show a high incidence of cancer in the age period of two to three years. I tentatively postulate that this is because 2-AAF acts to reduce senescence, allowing cancers to develop. If this postulate is true, there should be a concomitant life extension. So far it seems that the life extension is small. We repeatedly urge the chemical industry and the National Toxicology Program to carry out their bioassays until the end of life so that this phenomenon can be studied further.
The obvious question now is: “What is wrong with the conventional theories, such as that of Armitage and Doll, or the clonal expansion theory of Moolgavkar and Knudson?” Our critics have suggested that the data imply a variation in sensitivity among people. This explanation would have to include the assumption that the fraction of completely non-susceptible persons varies with cancer site in such a way that the turnover happens at the same age for each cancer. This means 99% completely non-susceptible people for a rare cancer and 20% for a common cancer. Some people speculated that the problem lay in the mathematical approximations used. Ritter, Burmistrov, Pompei, and Wilson (876) examined the mathematically exact formulation and showed that it does not alter the above conclusion.
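For the reader unfamiliar with the functional forms, the comparison can be summarized as follows; the parameter values are illustrative only, not fitted.

```latex
% Armitage-Doll multistage prediction: incidence rises as a power of age,
\[
  I_{\mathrm{AD}}(t) \;\approx\; a\,t^{\,k-1},
\]
% which increases without limit.  The modification discussed by Pompei and
% Wilson multiplies this by a linearly decreasing factor,
\[
  I(t) \;=\; (a t)^{\,k-1}\,(1 - b t),
\]
% which peaks at t = (k-1)/(k b) and falls to zero at t = 1/b.
% With k = 6 and b = 0.01 per year, for example, the peak is near age 83
% and the incidence reaches zero at age 100, roughly the pattern described
% above for the population data.
```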