In August 1945, the United States dropped two atomic bombs on the Japanese cities of Hiroshima and Nagasaki, ultimately ending the Second World War but causing up to 250,000 direct deaths, as well as countless indirect casualties through genetic damage caused by the high levels of radiation emitted by the bombs. The mutagenic effects of radiation were first demonstrated when mutations were induced in Drosophila melanogaster by exposure to high doses of radiation (Muller, 1927). These effects were demonstrated in plants shortly afterwards, when mutation was induced in barley using x-rays and radium (Stadler, 1928). The bombs dropped on Hiroshima and Nagasaki used uranium and plutonium isotopes respectively, which emitted γ-rays that, on reacting with biomolecules, can produce reactive ions (ionizing radiation). Interaction of ionizing radiation with biomolecules can lead to breakage of one or both DNA strands, damage to and/or loss of nucleotides, and crosslinking of DNA and proteins. Over 60 years after the bombings, the effects can still be seen in the victims and their descendants in the form of cancers, physical and mental retardation, reduced fertility, increased mortality, and impaired growth and development. Numerous studies have been conducted, and continue to be conducted, on the effects of the atomic bombs' radiation on genes with regard to somatic and germline mutations. This review aims to examine the effects the atomic bombs have had on human genetics by looking at papers published on some of the mutations caused, up to the present day. Human genetics is the most widely studied subject in relation to the genetic effects of the bombs, as the effects are much more prominent in humans than in plants and other animals; the latter will also be examined in this review, but to a lesser extent owing to the lack of research conducted.
Ionizing Radiation and Associated Effects:
On the electromagnetic spectrum, ionizing radiation spans high-frequency ultraviolet light, x-rays and γ-rays. The most prominent source of ionizing radiation is radon gas, which occurs naturally from the decay of uranium and, according to the World Health Organization, is responsible for tens of thousands of deaths from lung cancer each year. Radon accounts for 43% of the ionizing radiation to which the world's population is exposed. Artificial ionizing radiation is most prominently used for medical purposes such as x-rays; radiation in medicine accounts for 14% of the total ionizing radiation we are exposed to and is the largest source of artificial radiation (UNSCEAR, 2000).
As mentioned above, the mutation-inducing effects of ionizing radiation were first demonstrated by Muller in 1927 using x-rays. This discovery was the first of many experiments examining the relationship between genes and radiation. One such experiment demonstrated the relationship between the level of ionizing radiation and the rate of mutation by exposing Drosophila to different levels of background radiation (Babcock and Collins, 1929). The investigation was carried out in two locations fifteen miles apart: a tunnel in San Francisco beneath 140 feet of sandstone and a laboratory in the genetics department of the University of California at Berkeley, the former location having twice as much background radiation as the latter. The mean percentage of lethal mutations in the laboratory was 0.251%, and almost twice as high in the tunnel (with twice as much radiation) at 0.526%.
One of the most common long-term consequences associated with ionizing radiation is cancer, which will be examined later in this review. However, since the discovery of radiation-induced mutagenesis, several other significant consequences have been observed, such as reduced fertility, immune disorders, birth defects and mental illness; these will also be examined here. Some consequences are more predominant than others because different tissues require different levels of radiation exposure to induce a mutation, which accounts for the range of frequencies of the various diseases and abnormalities caused by radiation.
The Law of Bergonie and Tribondeau is an early prediction of the radiosensitivity of cells. It suggests that radiosensitivity is proportional to a cell's rate of division and inversely proportional to its degree of differentiation. Bone marrow, whose cells divide rapidly and have a low degree of differentiation, is accordingly highly radiosensitive; the relationship between radiation and leukaemia, cancer of the blood-forming tissues, was observed in epidemiological studies by March (1944). Conversely, nervous tissue is highly radioresistant, as it consists of cells with a low rate of division. The law would suggest that those most vulnerable to radiation are embryos at the early stages of development, which combine a high rate of cell division with cells of a low degree of differentiation. This review will therefore also examine studies conducted on survivors exposed to the atomic bombs in utero.
The Atomic Bomb Casualty Commission (ABCC) and the Radiation Effects Research Foundation (RERF):
The ABCC was set up in 1947 by the National Academy of Sciences - National Research Council (NAS-NRC) to examine the long-term effects of the bombs on the survivors in both cities; the first study, by Neel in 1947, examined the haematological effects. Owing to the increasing costs of operation, the ABCC was forced to terminate its operations by the early 1970s. The Japanese government was encouraged to support its work, and in 1975 the Radiation Effects Research Foundation (RERF) was formed to continue the work of the ABCC; it remains in operation to the present day.
Incidence of Leukaemia among Atomic Bomb Survivors:
As mentioned above, leukaemia is one of the most prominent radiation-induced diseases owing to the low degree of differentiation and high rate of cell division in bone marrow tissue. It therefore comes as no surprise that it was the first cancer observed at an above-normal frequency among survivors (Folley et al, 1952). This observation was made by comparing incidence and death rates of leukaemia between 1948 and 1950 among those exposed to the bombs and those not exposed. For both Hiroshima and Nagasaki, the increased incidence of leukaemia coincided with subjects who were exposed to radiation within 2 kilometres of the hypocentre (ground zero), whereas subjects exposed beyond 2 kilometres of the hypocentre showed no evidence of severe radiation injury. Folley et al also observed indications that the radiation from the bombs favoured the development of certain types of leukaemia regardless of distance from the hypocentre, with acute and myelocytic leukaemias being the most predominant.
A study by Lange et al (1954) outlines the occurrence of leukaemia among survivors, dividing the subjects by sex and age group. The subjects were obtained from three sources: patients discovered through routine medical surveys; patients referred to the ABCC by local physicians or visited by ABCC doctors; and death certificates of cases showing evidence of radiation exposure, for which blood and bone marrow smears were available for study. Of 124 leukaemia patients studied, 75 were exposed (present in one of the two cities during the bombings). The exposed subjects had an almost equal ratio of males to females at 1.03:1; sex was not seen to influence the incidence of leukaemia among the survivors. From statistical analysis of those exposed in Hiroshima, it was concluded that younger survivors had the same susceptibility to radiation-induced leukaemogenesis as older survivors, contradicting experiments by Furth and Upton (1954) suggesting that younger mice were more susceptible than older mice to leukaemia after exposure to ionizing radiation. However, the authors noted that few young children were exposed to the bombs, many having been evacuated prior to the explosions, so the study was somewhat biased. The study was consistent with that of Folley et al regarding the predominant types of leukaemia seen among survivors: myelogenous leukaemia accounted for 68% of the exposed cases, 57.3% of exposed cases were acute, and there was only one case of chronic lymphatic leukaemia in the exposed group. The authors noted that chronic lymphatic leukaemia is relatively rare in Japan, which could account for the single case in this study and in that of Folley et al. Based on this, they concluded that there is no evidence to suggest that radiation-induced leukaemogenesis favours one form of leukaemia over another.
This study was closely followed by Moloney and Lange (1954), comprising ten case reports. Seven cases were acute or sub-acute and the remaining three were chronic, again reinforcing the observation that acute manifestations predominate. The paper mentions a mechanism that could explain the high frequency of myelogenous leukaemia and its delayed onset: studies on the bone marrow of those killed by the atomic bombs showed the presence of radioactive phosphorus, at levels inversely proportional to distance from the hypocentre (Shimomoto and Unno, 1953). This suggests that radioactive phosphorus may have persisted in the bone marrow of survivors, exposing myeloid tissue to prolonged localized radiation and eventually inducing leukaemogenesis.
Two separate studies, by Heyssell et al (1960) and Tomonaga et al (1962), examined leukaemia among survivors in Hiroshima and Nagasaki respectively over a period of almost 15 years. The two studies agreed with the above papers on the strong evidence for radiation-induced leukaemogenesis in the two cities (Fig. 1). Both studies drew their leukaemia patients from a so-called 'Master Sample' of survivors resident in either city on 1 October 1950, recorded by the ABCC.
Both studies noted a correlation between symptoms of acute radiation syndrome (ARS) and leukaemia. In Hiroshima, 20 of the 29 people from the Master Sample who were in the exposed area (within 1500m of the hypocentre) had experienced symptoms of ARS, with most of the 9 who had not experienced symptoms having lived near the boundaries of the exposure area. In Nagasaki, 8 of the 15 people from the Master Sample who were in the exposed area had experienced symptoms, with a higher estimated incidence per 100,000 population per year (Table 1), suggesting that if there is a threshold dose for leukaemogenesis, it must be below that for ARS.
The study by Tomonaga et al also observed that the onset of leukaemia among those from the Master Sample who were exposed within 1500m of the hypocentre occurred on average 1.75 years earlier than among those exposed beyond 1500m, suggesting that the time between exposure and onset of leukaemia is somewhat dose-dependent.
Regarding sex, Tomonaga et al contradicted the earlier report by Lange et al that there is no significant difference between males and females in the development of leukaemia. Their findings indicated that, among the exposed group of the Master Sample, the ratio of male to female leukaemia sufferers was 3:1, with 3 times the expected incidence of leukaemia among females and 7-8 times the expected incidence among males. This contradiction may be due to the time elapsed between the two studies. The ratio among Hiroshima survivors was not significant enough to conclude that one sex was more affected than the other (Table 2), which was put down to differences in shielding between the sexes, with females in Hiroshima receiving less shielding than males.
Shortly before publication of the above two papers, the ABCC received air dose data from the Oak Ridge National Laboratory estimating the neutron and gamma-ray air doses at various distances from the hypocentre, with gamma rays delivering a higher radiation absorbed dose (rad) than neutrons at comparable distances. Using estimates of distance from the hypocentre and degree of shielding to determine the dose received, the authors were able to construct similar dose-response curves (Fig. 2) showing the increased frequency of leukaemia correlating with proximity to the hypocentre and with air dose estimates.
The reports also examined the incidence of the various types of leukaemia among survivors: in Hiroshima, chronic granulocytic leukaemia was the most common, whereas in Nagasaki acute granulocytic leukaemia was most common. Hiroshima's data appeared to contradict the above findings that acute types of leukaemia were more common, again possibly because the time elapsed since the previous studies had allowed chronic granulocytic leukaemia to develop and become more prominent. In a report examining leukaemia among survivors between 1950 and 1966 (Ishimaru et al, 1971), the authors note a relationship between the type of leukaemia present and radiation dose. For instance, in both cities all forms of acute and chronic granulocytic leukaemia occurred more frequently among those who received >100 rad than among those who received <5 rad. Hiroshima survivors exposed to between 5 and 99 rad showed an increased risk of acute lymphocytic and chronic granulocytic, but not acute granulocytic, leukaemia; Nagasaki showed no such pattern. The authors also point out the threshold doses that put survivors at elevated risk of leukaemia: 20-50 rad in Hiroshima and around >100 rad in Nagasaki. Among the explanations put forward for this difference between the two cities is that the type of leukaemia induced depends on differences between neutron and gamma radiation. Hiroshima received significantly more neutron radiation than Nagasaki because of differences between the bombs: in Nagasaki's bomb, protons efficiently slowed the neutrons, while in Hiroshima's bomb, heavy iron atoms from the bomb's nose scattered neutrons with little absorption. This explanation suggests that neutron radiation can induce both chronic and acute leukaemias, while gamma radiation can induce only acute leukaemia.
A more recent study of the incidence of leukaemia among survivors between 1950 and 2000 showed that 103 of 310 deaths among 86,611 atomic bomb survivors were due to radiation-induced leukaemogenesis (Richardson et al, 2009). In the final decade of the study (1991-2000), 34% of the deaths by leukaemia among survivors exposed to more than 0.5 rad were caused by radiation, demonstrating that after five decades the leukaemogenic effects of the bombs persisted.
An increased risk (2-4 times normal) of the disease among siblings of leukaemia sufferers, and 20% concordance among identical twins, hints at a heritable element of leukaemia. The possibility of increased risk of leukaemia among survivors' offspring conceived after the bombings was examined by Hoshino et al (1967) in children born between May 1946 and 1963; no significant evidence of increased risk was found. In fact, the number of cases was generally lowest among offspring whose parents were within 2000m of the blasts, with the majority of cases having parents who were in neither city at the time of the explosions (Table 3). Ishimaru et al (1981) agreed with these findings in a technical report for the RERF, showing no significant increase in leukaemia among the offspring of survivors. Nomura (1982) conducted an experiment examining the heritability of induced cancers in mice: while lung cancer was most frequently inherited, inheritance of lymphocytic leukaemia was also observed, at a lower frequency but a similar ratio to the lung cancer. As yet, there is no hard evidence that leukaemia induced in atomic bomb survivors can be passed on to their offspring. The effects of the atomic bombs on survivors' offspring conceived after the explosions will be further examined later in this review.
Genetic Risk in utero at the Time of the Bombings:
As described by the Law of Bergonie and Tribondeau, the risk of radiation-induced mutation depends on a cell's degree of differentiation and rate of division. Accordingly, those exposed to the bombs in utero would be at a greatly increased risk of mutation compared with those exposed after birth. Among the first studies of children exposed in utero was an investigation of congenital anomalies among 205 children from Hiroshima exposed in utero during the first half of gestation (Plummer, 1952). 68 of the pregnant mothers reported a total of 101 symptoms, including burns, injury, purpura, fever and gastrointestinal bleeding, consistent with the effects of radiation. 28 congenital anomalies were found among the children (Table 4). There were 6 cases of microcephaly with mental retardation, all born to mothers exposed within 1200m of the hypocentre, and 2 cases of Down syndrome, one of which also had microcephaly and was born to a mother exposed within 1200m, the other born to a mother exposed between 2500 and 3000m. Microcephaly occurred only in those born to mothers exposed within 1200m of the hypocentre. No correlation was found between the mother's age or the week of gestation and the incidence of microcephaly in this study. Of the eleven mothers exposed within 1200m of the hypocentre, seven delivered microcephalic and mentally retarded children; it was noted that these mothers had received inadequate shielding from the blasts. The rest of Plummer's data are consistent with other population studies in Japan. The report indicates a significantly increased risk of morphological and developmental defects among those exposed in utero within 1200m without adequate shielding. Miller (1956) further examined the incidence of microcephaly, pointing out a relationship between the week of gestation and microcephaly with mental retardation.
Among 33 children exposed in utero with microcephaly, 24 were between weeks 7 and 15 of gestation at the time of the bomb. His results were consistent with Plummer's in that the severity of microcephaly was inversely proportional to distance from the hypocentre.
Another study of microcephaly and mental retardation among Hiroshima survivors exposed in utero expressed reservations about Miller's findings owing to the lack of validation by further observation (Wood et al, 1967). This study re-evaluated the earlier observations through annual examinations of 1613 subjects exposed in utero (5 of whom are excluded in later studies using this sample because their birth dates fell outside the required period), assessing microcephaly and mental retardation. Building on the conclusions of Miller and Plummer above, after observing these survivors for over 20 years, Wood et al deduced that mortality was increased among children with mental retardation compared with normal children, and also among all subjects exposed within 1500m, with or without mental retardation or microcephaly.
Between 1958 and 1959, a study of 286 Nagasaki adolescents exposed in utero investigated the relationship between exposure to radiation and the rapid adolescent increases in height and weight, in the hope of finding biochemical anomalies attributable to radiation (Burrow et al, 1964). The 286 adolescents were divided into three groups: Group I consisted of 100 children whose mothers were exposed within 2000m of the hypocentre, Group II of 99 whose mothers were between 3000m and 4999m from the hypocentre, and the remaining 87 constituted Group III, whose mothers were not in the city at the time of the bombing. The study revealed 9 children to be mentally retarded: 6 males and 3 females. 5 of the males were from Group I and 1 from Group II; of the females, there was one from each group. Other malformations, such as cleft uvulas and spinal defects, were noted in 17 subjects, again with a majority (14) being males, 9 of them from Group I. The authors state that females from Group I had received a higher dose of radiation than the males, eliminating radiation dosage as the explanation. It is noted that Group I had an imbalanced sex ratio, with the majority being males, possibly accounting for these differences. An unbalanced sex ratio among children of atomic bomb survivors had been observed previously (Schull and Neel, 1958) but was later argued to be a small early effect, as a further study failed to find evidence of an effect of radiation on sex ratio (Schull et al, 1966).
The growth and development of 1259 of the 1613 children from both cities exposed in utero who were annually examined by the ABCC were later assessed at the age of 17 by Wood et al (1967), the diminished number of subjects being due to dispersion of the sample. The authors chose this age group because most significant growth would have been completed. The main growth effects observed were below-average head circumference, height and weight. For those exposed within 2000m of the hypocentres, mean head circumference was reduced, with the exception of Nagasaki females. The study showed no relationship between impaired development and symptoms of ARS in the mothers, but significant differences were observed in the closest distance group (0-2000m from the hypocentre) when skull circumference samples were subdivided into high dose (0-1499m) and low dose (1500-1999m) subgroups, with the exception of Nagasaki males. Within 2000m of the hypocentre, mean standing height and weight were also lower than among those exposed at greater distances, with the exception of Nagasaki females; in the high dose subgroup, the means were lower for both sexes in both cities. With regard to the stage of gestation at exposure, there were no significant patterns within the distance groups, consistent with Plummer's findings and contrasting with Miller's earlier report of a relationship between week of gestation and incidence of microcephaly.
As mentioned above, Wood's 1967 study was later re-examined excluding 5 of the original sample of 1613 people exposed in utero (Otake and Schull, 1984); 9 more were excluded because the exposure status of their mothers was unknown. This study showed an increased incidence of mental retardation among those who had received the highest doses of radiation, as well as among those exposed within 8-15 weeks of gestation. Of those in both cities who received the greatest radiation dose (100 to >200 rad), 36.8% suffered from mental retardation: none of those at gestational ages of 0-7 weeks, 5 of the 8 (62.5%) at 8-15 weeks, 1 of the 6 (16.7%) at 16-25 weeks and 1 of the 4 (25%) at >26 weeks, indicating that the most radiosensitive gestational period was 8-15 weeks. When the samples were separated by city, Hiroshima was consistent with the overall result, with all 3 cases in the greatest exposure group suffering from mental retardation at 8-15 weeks. Nagasaki, however, had its highest frequency of mental retardation among those exposed at 16-25 weeks (50%) and the second highest in the 8-15 weeks group (40%). This higher frequency may be because the 16-25 weeks group was much smaller, with only 2 subjects, whereas the 8-15 weeks group was comparatively larger, with 5 subjects. The authors suggest that the absence of an effect early in gestation may be due to the readiness of cells at that stage to replace themselves following radiation damage, compared with cells at later stages, or to the disruption of a process present only in later foetal development, such as cell migration.
The majority of studies on those exposed to the bombs in utero have concerned physical and mental development; recent publications, however, have been epidemiological. A study examining the incidence of thyroid disease in 328 individuals exposed in utero was conducted by Imaizumi et al (2008). Tests for thyroid diseases yielded 123 positive results (37 men and 86 women); the diseases comprised solid nodules, cysts, hypothyroidism, Graves' disease and the presence of antithyroid antibodies. The study was unable to reveal any relationship between the incidence of thyroid disease and week of gestation, nor were there any significant findings relating radiation dose to autoimmune thyroid diseases or thyroid nodules. However, these findings are subject to further investigation, as the sample may not have been large enough to yield conclusive results. A contemporary study (Preston et al, 2008) examined the incidence of solid cancers among survivors exposed in utero and revealed evidence that such exposure increases the risk of solid cancers (primarily breast, lung and stomach cancers) in adulthood, albeit less than exposure in early childhood does (Table 5); again, given the size and average age of the sample, further follow-up may be required to provide more concrete evidence.
Incidence of Solid Cancers among Atomic Bomb Survivors:
As well as an increased incidence of leukaemia among survivors, an excess risk of solid cancers has been observed, including but not limited to stomach, breast, colorectal, lung and lymphatic cancers. An early study of the incidence of stomach cancer (Murphy and Yasuda, 1957) examined 535 cases of stomach cancer in Hiroshima obtained from surgical and autopsy records dated between December 1948 and June 1957. After dividing the sample into an exposed group (within 10,000m of the hypocentre) and a non-exposed control group (beyond 10,000m), no significant differences were observed in mean age of onset, life expectancy following surgery, or behaviour or location of the tumours, with an almost equal incidence in both groups. The exposed group was then redefined as those within 2500m of the hypocentre, and the remaining subjects added to the control group, owing to concerns that the original exposed group might be concealing a significant effect with a threshold distance much closer to the hypocentre. Again, no significant differences were seen between the two groups. A more recent study has shown a relationship between radiation dose and incidence of stomach cancer among Hiroshima survivors (Sauvaget et al, 2005). Using a sample of 1139 cases of stomach cancer from the ABCC's Life Span Study (LSS), which comprises 93,000 exposed survivors and 27,000 non-exposed, the incidence of stomach cancer among those exposed to high doses of radiation was significantly higher than among those who were not exposed (Table 6). One reason the latter results may differ from Murphy and Yasuda's could be the different samples used: the LSS provides a much larger sample, giving more accurate results than the smaller sample of the earlier study. The two studies were, however, consistent with each other regarding incidence among males and females, with Sauvaget et al also showing an increased risk among males.
Another recent paper, by Yamamoto et al (2009), comparing the outcome of stomach cancer among survivors with that in a non-exposed group, revealed a poorer prognosis for survivors of the bombs, whose tumours ranged between 1.76 and 8.34cm in size, compared with the control group, whose tumours ranged between 1.34 and 8.02cm. The exposed group also showed a lower ten-year survival rate than the control group. The difference in tumour size was put down to a higher rate of residual tumour among survivors following surgery. The differences between the two groups led to the suggestion that stomach cancer in survivors develops along a different path from that in the control group.
Another cancer that has shown a significant increase is breast cancer. An epidemiological study by Okuyama and Mishina (1988) of cancer patients in Nagasaki divided its subjects into 3 groups: epithelial (to which breast cancer belonged), non-epithelial and gonadal. The study observed a definite increase in the incidence of cancers (including breast cancer) in Nagasaki compared with 3 control cities: Fukuoka, Miyagi and Osaka (Table 7). Another study of solid cancers by Thompson et al (1994) in the LSS cohort encountered 529 cases of breast cancer, 329 in Hiroshima and 140 in Nagasaki. The sample was divided into 3 groups: 234 subjects in the non-exposed control group, 252 in the low-dose exposed group and 43 in the high-dose exposed group. The respective incidence rates of breast cancer for these groups were 4.3, 5.2 and 16 cases per 10,000 person-years (person-years being the sum, over all subjects, of each subject's years of follow-up), displaying a rate of incidence proportional to dose received. The authors also observed that the estimated Excess Relative Risk (ERR) was five-fold higher among females under 10 years of age at the time of the bombings than among women over 40.
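The rate and excess-risk measures used above can be illustrated with a short calculation. The sketch below is ours, not the authors' method, and the case counts and person-years are hypothetical, chosen only to show how a rate per 10,000 person-years and an excess relative risk (ERR) are derived.

```python
# Illustrative sketch of epidemiological rate arithmetic; all input
# figures are hypothetical, not taken from the studies cited above.

def incidence_rate(cases, person_years, per=10_000):
    """Cases per `per` person-years of follow-up."""
    return cases / person_years * per

def excess_relative_risk(exposed_rate, control_rate):
    """ERR = (exposed rate / control rate) - 1; 0 means no excess risk."""
    return exposed_rate / control_rate - 1

# Hypothetical follow-up data for a control and a high-dose group.
control = incidence_rate(cases=43, person_years=100_000)
high = incidence_rate(cases=160, person_years=100_000)

print(round(control, 1))                              # 4.3 per 10,000 PY
print(round(high, 1))                                 # 16.0 per 10,000 PY
print(round(excess_relative_risk(high, control), 2))  # 2.72
```

An ERR of 0 would mean the exposed group has the same rate as the controls; an ERR of 2.72 means roughly 3.7 times the control rate.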
A report by Land et al (2003) on 1093 breast cancer cases (776 exposed) in Hiroshima and Nagasaki from the LSS cohort, among which 34 were double primary cases, was consistent with previous reports of a relationship between the dose of radiation received and an increased ERR. It was also consistent with the findings of Thompson et al in observing a higher ERR among women below 20 years of age at the time of the bombings than among those over 40. The authors suggest that the less differentiated state of breast cells in younger females, compared with the terminally differentiated cells of more mature females, may be responsible for the increased ERR, in keeping with the law of Bergonie and Tribondeau (1906). Of the 34 double primary breast cancer cases, 26 fell into the exposed group; 15 of these were less than 20 years old at the time of the bombs, with the largest subset of the 15 having received the highest dose. There was no relationship between the proportion of subjects and radiation dose in the older group.
As mentioned above, radon is a major source of background radiation and causes tens of thousands of lung cancer deaths. It would therefore be expected that the radiation emitted by the atomic bombs would produce an increased incidence of lung cancer in the populations of both cities. An epidemiological study based on survivors' autopsy results between 1961 and 1970 by Cihak et al (1974) listed 204 cases of lung cancer from 3778 autopsies. When the results were divided by age group and dose received, the 20-39 age group showed an ERR of 2.4 following a dose of >200 rad, while the 40-59 age group showed a similar ERR of 2.3 at the same dose. In the over-60 age group, there were no cases of lung cancer in the highest dose group, but one case in the second highest dose group (100-199 rad), with an ERR of 1.7. With the exception of two outliers in the 100-199 rad and 1-49 rad dose groups, the ERR values followed the expected pattern, being proportional to the dose of radiation received.
One protein involved in the regulation of cell radiosensitivity is the Epidermal Growth Factor Receptor (EGFR). The length of the CA repeat in the first intron of the EGFR gene has been found to be inversely proportional to EGFR production, leading to the hypothesis that those with longer repeats may be more susceptible to lung cancer (Zhang et al, 2006). Yoshida et al (2009) tested this hypothesis on atomic bomb survivors. Their sample consisted of 2160 subjects randomly selected from a cohort of 4647 bomb survivors in which the relationship between gene polymorphisms and the development of cancer is examined. Among these 2160 subjects, 486 cancers were found, including 62 cases of lung cancer, with a higher proportion of male sufferers (54.8%). The length of the CA repeats was determined using PCR, after which the sample was divided into long and short genotypes in two ways. The first classified each genotype by its total repeat count: long genotypes had 38 or more repeats in total and short genotypes 37 or fewer. The second classified the two alleles separately, a long allele having 18 or more repeats and a short allele 17 or fewer, giving individuals 3 possible genotypes: long/long, long/short or short/short. Under the first classification, those who were not exposed to radiation had an increased risk of lung cancer with a short genotype. Among exposed survivors with long genotypes, risk increased with radiation dose, while with the short genotype, risk did not increase with dose. The allele-based classification produced similar results.
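The two classification schemes can be expressed as simple decision rules. The sketch below is our own rendering of the cut-offs described (38 total repeats; 18 repeats per allele); the function and variable names are illustrative, not taken from Yoshida et al.

```python
# Sketch of the two CA-repeat genotype classification schemes described
# for the EGFR intron-1 polymorphism; cut-offs as stated in the text.

def classify_by_total(allele1, allele2):
    """Genotype is 'long' if the two alleles total 38 or more repeats."""
    return "long" if allele1 + allele2 >= 38 else "short"

def classify_by_allele(allele1, allele2):
    """Each allele is 'long' at 18+ repeats, giving three genotypes."""
    n_long = sum(1 for a in (allele1, allele2) if a >= 18)
    return {2: "long/long", 1: "long/short", 0: "short/short"}[n_long]

print(classify_by_total(19, 20))   # long (39 repeats in total)
print(classify_by_allele(19, 16))  # long/short (19 >= 18, 16 < 18)
```

Note that the two schemes need not agree for a given subject: alleles of 20 and 17 repeats give a "short" genotype by total (37 < 38) but "long/short" by allele, which is one reason the study reported both classifications.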
It is well known that skin cancer has an increased incidence following exposure to ionizing radiation. The study by Thompson et al (1994) mentioned above examined the incidence of skin cancer among 79972 subjects from the LSS cohort and recorded 191 cases, 13 of which were melanoma and the remaining 168 non-melanoma. When examining the non-melanoma cases, the sample was divided into three dose groups. Contrary to an earlier report by Johnson et al (1969), which found no increased incidence of skin cancer among survivors, the incidence of the disease appeared to be proportional to the dose received. As with breast cancer, the ERR for skin cancer was inversely proportional to the age of the subjects at the time of the bombings. The two most common types of non-melanoma skin cancer were basal and squamous cell carcinomas, with 78 and 69 cases respectively. The incidence of skin cancer was further examined by one of the collaborators on the above paper in a study of 208 skin tumours found between 1958 and 1987 (Ron et al, 1998). Of the 208 tumours, 172 were non-melanomas, again with basal and squamous cell carcinomas predominating, at 80 and 69 cases respectively. The authors reasoned that the basal layer contains the majority of skin stem cells and would therefore have a higher level of radiosensitivity, which would explain these figures. An increased ERR was also found for all skin cancer types except squamous cell carcinoma, a possible explanation being the low doses of radiation received by sufferers. As in the previous study, the authors also found a decrease in ERR with increasing age at the time of the bombings.
Relationship between Somatic Mutations and Carcinogenesis among Atomic Bomb Survivors:
Mutations in four genes with an indirect effect on carcinogenesis have been closely studied among atomic bomb survivors (Akiyama et al, 1996). These are GPA, which encodes the sialoglycoprotein Glycophorin-A bearing the MN blood group antigenic determinant; HPRT, which encodes hypoxanthine-guanine phosphoribosyltransferase, an enzyme involved in purine metabolism; HLA-A, which encodes human leukocyte antigen A, a component of MHC class I cell surface receptors; and the TCR α and β genes, which encode T-cell surface receptors. A report by Akiyama et al (1991) summarises the assay methods described below, which were used to calculate the frequency of mutation of these genes among survivors.
Using flow cytometry as an assay method, Kyoizumi et al (1989) examined the frequency of GPA variants in blood samples from 68 atomic bomb survivors from Hiroshima and 4 healthy donors, all possessing the MN blood group, using fluorescent markers to identify the variants. The three variants assayed were the hemizygous M and N cell types and the homozygous MM cell type. The sample was divided into two groups depending on the radiation dose received: the first group consisted of 43 subjects exposed to between 0.11 and 5.02 Gy, while the second consisted of 21 subjects who received <0.05 Gy. The findings gave respective M, N and MM frequencies of 1.1 × 10^-5, 1.8 × 10^-5 and 1.1 × 10^-5 for the four healthy donors, with similar results for the least exposed group. In the exposed group the frequency of variants was seen to increase with radiation dose. The authors also noted a linear relationship between the frequency of variants and the frequency of lymphocytes containing chromosomal aberrations among survivors.
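The variant frequencies quoted are simply the proportion of variant erythrocytes among all cells scored by the flow cytometer; a sketch with illustrative counts (not data from Kyoizumi et al, 1989):

```python
def variant_frequency(variant_cells, total_cells_scored):
    """Fraction of scored cells showing the GPA variant phenotype."""
    return variant_cells / total_cells_scored

# Illustrative: 11 hemizygous-M variant cells among 10^6 erythrocytes scored
# gives a frequency of 1.1 × 10^-5, the order of magnitude reported above.
print(variant_frequency(11, 1_000_000))  # 1.1e-05
```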
Mutation frequencies at the HPRT locus among atomic bomb survivors were assayed by incubating T-lymphocytes with phytohaemagglutinin, interleukin 2 and 6-thioguanine; T-cells showing resistance to 6-thioguanine carry the mutated HPRT gene. Hirai et al (1995) used this assay system on atomic bomb survivors. T-cells from the sample were plated onto microtiter plates at an average of 1 cell/well for the non-selective control group (Cont) and 5 × 10^4 cells/well for the 6-thioguanine selective group (TG). The mutation frequency (MF) was defined as the ratio of the cloning efficiency of TG to that of Cont. Their findings unexpectedly showed no increase in HPRT mutation with radiation dose, although chromosomal aberrations did increase with dose. This unexpected result was suggested to be due to the young age of the sample: HPRT mutation frequency is known to increase with donor age, and the fact that the high-dose group contained the majority of younger subjects suggests the radiation effect may have been masked. However, in the report by Akiyama et al (1991) mentioned above, a similar assay produced similar results, and the authors put forward the explanation that selection may have occurred against mutant T-cells in vivo in the years since the bombings.
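The MF calculation rests on the cloning efficiency of each group. In limiting-dilution assays of this kind, cloning efficiency is commonly estimated from the Poisson zero term, CE = -ln(P0)/(cells per well), where P0 is the fraction of wells with no colony growth. The sketch below assumes this standard method and uses illustrative well counts, not data from Hirai et al (1995):

```python
import math

def cloning_efficiency(negative_wells, total_wells, cells_per_well):
    """Poisson zero-term estimate: CE = -ln(P0) / cells per well,
    where P0 is the fraction of wells showing no growth."""
    p0 = negative_wells / total_wells
    return -math.log(p0) / cells_per_well

# Illustrative plate counts:
ce_cont = cloning_efficiency(30, 96, 1)    # non-selective, 1 cell/well
ce_tg = cloning_efficiency(90, 96, 5e4)    # 6-thioguanine, 5 × 10^4 cells/well

# MF is the ratio of the two cloning efficiencies, as defined above
mf = ce_tg / ce_cont
print(f"{mf:.2e}")  # 1.11e-06
```

A frequency on the order of 10^-6 mutant T-cells is the magnitude such assays typically report.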
Similar to the GPA assay, flow cytometry can be used to assay the frequency of TCR mutations. If one of the TCR genes (α or β) is mutated, the mutant T-cell can be detected because it is CD3-negative: CD3 is a co-receptor complex that associates with the TCR on the cell surface to generate the activation signal, so surface CD3 expression depends on a functional TCR. Staining the cells with a fluorescently labelled anti-CD3 antibody therefore allows mutant T-cells to be identified. When assayed by Kyoizumi et al (1992), no relationship between mutation frequency and radiation dose could be seen. As with the HPRT mutation assay, the explanation of negative selection against mutants was put forward in the report by Akiyama et al (1991).
In the case of HLA-A mutations, fluorescently tagged monoclonal antibodies (anti-HLA and anti-CD3) are used to detect mutant cells lacking A2 or A24, the two most common HLA alleles among the Japanese; donors are required to be heterozygous for either the A2 or the A24 allele. The assay produced results similar to those for the TCR mutants, with no definite explanation. One suggestion was that, as in the GPA assay, a relationship between dose and mutation frequency was masked by the old age of the sample, since the authors noted that the frequency of mutation increased with age.