
Page 1: Young Scientists Journal Issue 15

YOUNG SCIENTISTS

Dispelling the "Mozart Effect"

ISSUE 15 – JAN 14 - JUNE 14

SUPPORTED BY

Biomimetics & Roof Design

Your 5 a day could be the key to outsmarting pathogens

Counterfeit Coin Detection

The Schmallenberg Virus

WWW.YSJOURNAL.COM

Page 2: Young Scientists Journal Issue 15


Contents

ORIGINAL RESEARCH

4 Effect of Acidic Beverages on Teeth - Deepika Kubsad

16 Investigation into Novel Antimicrobials - Shannon Guild

21 Biomimetics and Optimal Design of Roof Structures - Tina Kegl

32 Assessing the Viability of X-ray Fluorescence as a Method of Counterfeit Coin Detection - Keir Birchall

42 Diminishing Cancer Proliferation - Meriame Berboucha

REVIEWS

8 A Comparative Study of Methods to Combat Antibiotic Resistance - Paul-Enguerrand Fady

39 Cloud Storage: Virtual Databases - Christine Li

46 Isotopes - Elissa Foord

49 From Mozart to Myths: Dispelling the 'Mozart Effect' - Alexandra Cong

54 Consonance - Rosie Taylor

56 The Schmallenberg Virus - Ellie Price

61 How and Why Do We Taste? - Vera Grankina

Page 3: Young Scientists Journal Issue 15


Editorial

Young Scientists Journal is proud to present Issue 15. This issue is published at a very exciting time for the journal as we adjust to a new website with a new look and publication system. As a young scientist, whether you have written a long piece of original research or a shorter review article, all you have to do is submit it into the new template and it will be reviewed before being published directly on the website. We are always looking to receive articles from young scientists between the ages of 12 and 20, and I hope that this new system encourages you to submit your work.

We were honoured to have Professor Harry Kroto as the keynote speaker at the first ever Young Scientists Journal Science and Communication Conference in March. The day was a very successful one and all the poster presentations done by students can be viewed on the conference page of the website.

A wide range of science topics feature in this issue, ranging from an introduction to the isotopes of various elements to how cloud storage can be used to give access to information from any location or device. I am also pleased to say that there are a large number of original research articles, such as Tina Kegl's research into the use of biomimetics in architecture, showing how we can look to nature for inspiration when designing buildings, as well as Keir Birchall's investigation into how X-ray fluorescence can be used as a technique to detect counterfeit coins.

On the subject of music and the ways in which it relates to science, Rosie Taylor has written a short review article about consonance and harmony, discussing reasons why some combinations of notes sound nice whilst others don't. I am sure that those of you who enjoy listening to Mozart would agree that his music certainly features a lot of consonance – but have you ever thought about the effect of listening to classical music on the brain? Alexandra Cong's article presents evidence to dispel the 'Mozart Effect', the idea that listening to classical music can make babies more intelligent.

The subject of antimicrobials and antibiotic resistance is a widely debated and concerning area, as the number of bacterial species resistant to one or more types of antibiotic has been on the rise in recent years. Shannon Guild's research investigated the potential of various vegetables, fruits and teas as novel antimicrobials, whilst Paul-Enguerrand Fady discusses the mechanisms by which resistance develops in bacteria, current antimicrobial therapies and the modes of action of new antimicrobials.

Two very different diseases are considered in this issue. Ellie Price's article introduces the Schmallenberg virus, a disease that emerged in 2012, is transmitted via midges and causes birth defects in sheep, cattle and goats. In contrast, Meriame Berboucha's study looks at inhibition of choline kinase alpha, an enzyme involved in cell proliferation that is over-expressed in cancer cells. Cancer is a disease which humans have been fighting for many years.

Finally, Vera Grankina has written about taste and smell and how these senses combine to give us what we call 'flavour'. These are perhaps the least understood of the sensory systems, yet we take them for granted at every meal. Deepika Kubsad writes about her research into the effects of acidic beverages on our teeth.

This issue is my last as Chief Editor – I hand over to Ed Vinson and a dedicated team of senior editors who I am sure will do a fantastic job. I would like to thank everyone who has helped me and contributed to the work of the journal over the past year.

Sophie Brown, Chief Editor


[email protected]

Page 4: Young Scientists Journal Issue 15


ORIGINAL RESEARCH I ACIDIC BEVERAGES

Effect of Acidic Beverages on Teeth

DEEPIKA KUBSAD, Grande Cache Community High School. Email: [email protected]

ABSTRACT

The purpose of this research was to investigate the effects of drinking carbonated and acidic beverages on human teeth. As human teeth were not obtainable, deer teeth were used as a replacement. The experiment was conducted over a period of 25 days on three beverages to simulate the effects of sipping beverages throughout the day over a long time period. The results revealed a drastic colour change at the roots of the teeth. The texture and smoothness of the teeth varied from the beginning to the end of the experiment. A general trend in the variation of the mass of the teeth was observed, indicating that over time the sodas do corrode tooth enamel.

BACKGROUND

Pops and sodas are becoming increasingly popular. An average Canadian drinks about 120 liters of pop every year. [1] It is a well-known fact that these beverages are not very good for your health due to their high sugar levels. [2] However, the effect of the acid in these drinks on your teeth is far worse. [3] This research investigates the effect of these beverages on human teeth.

Figure 1: Parts of a tooth (http://www.cosmeticdentistblog.com/the-parts-of-a-tooth/)

Parts of the tooth are shown in Figure 1 above. [4] Enamel is the semi-translucent outer layer of the teeth. [5] It is the strongest substance in the human body but is also brittle. [6] The loss of enamel is irreversible. Dentin is a bone-like tissue that supports the enamel. The chemical composition of dentin and enamel is almost entirely calcium phosphate.

Dentin is much softer and acts as a shock absorber when you are chewing. [7] Within the dentin is the pulp, the soft central tissue that supplies nourishment and transmits signals to the brain. Meanwhile, cementum and the periodontal ligaments connect the root of the teeth to the jaw.

Sodas contain phosphoric acid and citric acid, substances that erode tooth enamel. [3] It is believed that pops and sodas with a pH of 5.5 cause little damage; however, as the pH drops below 5.5, the drink will erode teeth. [2] When a soda is consumed, the liquid remains between the teeth until the saliva (a neutralising agent) washes it away. [3]
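To put these pH values in perspective, the short sketch below (not part of the original study) converts pH to hydrogen ion concentration using the definition pH = -log10[H+]; the 5.5 threshold and a cola pH of about 3 are taken from this article, and the factor is simply computed from that definition.

```python
# Illustrative only: how much more acidic a pH 3 cola is than the pH 5.5
# threshold below which enamel is said to erode. pH = -log10([H+]).

def hydrogen_ion_concentration(ph: float) -> float:
    """Hydrogen ion concentration in mol/L for a given pH."""
    return 10 ** (-ph)

threshold_ph = 5.5  # approximate erosion threshold quoted in the text
cola_ph = 3.0       # pH of Coca-Cola as measured later in this study

factor = hydrogen_ion_concentration(cola_ph) / hydrogen_ion_concentration(threshold_ph)
print(f"pH {cola_ph} has about {factor:.0f} times the H+ concentration of pH {threshold_ph}")
# prints roughly 316
```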

Page 5: Young Scientists Journal Issue 15


The experimental research described here addresses further effects of drinking acidic beverages, using twelve whole deer teeth, and was conducted over 25 days. An experiment conducted in 2009 by the research team of Dr. Wolff (Professor and Chairman of the Department of Cariology & Comprehensive Care at New York University College of Dentistry) used cow teeth that were split in half and immersed in sports drink or water for periods of 75 to 90 minutes. The teeth immersed in the sports drink developed numerous tiny holes, whereas no damage was done to the teeth in water. [8]

PURPOSE

The objective of this research is to compare and contrast how the different pH levels of various beverages affect teeth. The work also records other changes occurring to the teeth before and after they were placed in the carbonated drinks. This experiment is done to simulate the effect of drinking these beverages on human teeth.

HYPOTHESIS

Based on my research, I believe that the more acidic a substance is, the more it will corrode teeth and the more damage it will cause. The mass of the teeth will decrease over time. There will be a slight colour change at the crown of the teeth, as teeth are semipermeable.

PROCEDURE AND EXPERIMENTAL WORK

Since human teeth are considered biohazardous, it was not possible to obtain human teeth samples, so deer teeth were used as a replacement in this experiment. The word 'liquid' in this paper refers to the different types of pop.

The teeth were extracted from the deer jaw using extraction forceps and a dental elevator. Gloves were used for sanitary purposes and masks were used for the smell. The room temperature was recorded. The pH values of the Coca-Cola, Diet Coke and Monster were measured using a universal indicator and recorded.

Twelve clean 4 L milk can lids were used. Four lids were assigned numbers from 1 to 4 for each drink. The deer teeth were measured using a scale that measured to tenths of a gram and the initial measurements were recorded. After measuring each tooth, they were placed in their assigned lid. Once all the teeth were in their assigned lids, 14 ml of the assigned drink was poured into each lid.

The teeth were left alone for 12 hours (7:00 a.m. to 7:00 p.m.) in the liquid and then measured again. When the teeth were taken out of the lids, the amount of liquid remaining in each lid was measured and more liquid was added to bring the volume back up to 14 ml.

When measuring the teeth, latex gloves were used for sanitary reasons. Each individual tooth was taken out one at a time; any liquid was wiped off the tooth and then it was measured on the scale. The measurements were recorded every twelve hours for 25 days.

RESULTS

There was a considerable amount of plaque on the teeth when they were first extracted. The overall colour of the teeth was white, but by the end of the experiment the colour had altered in all the pops in which the teeth were placed, and there was almost no plaque left on the teeth.

DIET COKE

The pH of Diet Coke is 3.5. The temperature was 17 degrees Celsius. There was an increase in the viscosity of the Diet Coke by the 20th reading, which had a consistency similar to maple syrup. At the beginning of the experiment the tooth's surface was uneven to the touch, but by the end of the experiment the surface seemed to have


Page 6: Young Scientists Journal Issue 15


evened out. The average decrease in the mass of the teeth was approximately 8%.

Figure 2: Example of a deer tooth that was placed in Diet Coke during the experiment.

MONSTER

The pH of Monster is 2.5. The temperature was 17 degrees Celsius. There was an increase in the viscosity of the Monster by the 20th reading, to a maple-syrup-like consistency. There were colour changes throughout the teeth from root to tip: the root turned from baby pink to yellow and the top had a white coating. At the beginning of the experiment the tooth's surface was uneven to the touch, but by the end of the experiment the surface seemed to have evened out. The average decrease in the mass of the teeth was approximately 13%.

Figure 3: Example of a deer tooth that was placed in Monster during the experiment.

COCA-COLA

The pH of Coca-Cola is 3. The temperature was 17 degrees Celsius. There was an increase in the viscosity of the Coca-Cola by the 20th reading, to a maple-syrup-like consistency. There was a colour change from pink to brownish-black at the roots of the teeth. At the beginning of the experiment the tooth's surface was uneven to the touch, but by the end of the experiment the surface seemed to have evened out. The average decrease in the mass of the teeth was approximately 25%.

Figure 4: Example of a deer tooth that was placed in Coca-Cola during the experiment.

CONCLUSIONS

There were many effects on the teeth due to the acidic beverages. The colour change on all the deer teeth indicated that drinking pops and sodas over time could result in staining of human teeth, as teeth are semi-permeable. There was a texture change on the surface of the teeth from irregular to polished, indicating that the acid levels in the beverages were corroding the outer layer of the teeth. The change in viscosity of the soda pops that the teeth were placed in suggested that the teeth released some sort of chemical. This conclusion was drawn because sodas kept in another container were exposed to the same conditions but the viscosity of the substance remained the same. Visibly, the Monster drink seemed to have the most impact on the teeth, coating the crown and the root with a thin layer of white substance. Visibly, the Coca-Cola had the least impact on the teeth, darkening the root of the teeth and causing slight staining on the crown. The hypothesis made was only partly true, as not all the masses of the teeth decreased. The drastic colour change at the root of the teeth was unexpected because it was not covered in my background research.


Page 7: Young Scientists Journal Issue 15


Figure 5: Mean percentage change in mass of tooth for each drink.
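As a rough illustration of how the mean values plotted in Figure 5 can be obtained, the sketch below computes the percentage change in mass from initial and final readings; the masses in it are invented placeholders, not the data recorded in this study.

```python
# Minimal sketch of the Figure 5 calculation. The (initial, final) masses in
# grams below are made-up placeholders, not the study's measurements.

def percent_change(initial: float, final: float) -> float:
    """Percentage change in mass relative to the initial mass."""
    return (final - initial) / initial * 100

readings = {
    "Diet Coke": [(2.5, 2.3), (2.4, 2.2), (2.6, 2.4), (2.3, 2.1)],
    "Monster":   [(2.5, 2.2), (2.4, 2.1), (2.6, 2.2), (2.3, 2.0)],
    "Coca-Cola": [(2.5, 1.9), (2.4, 1.8), (2.6, 1.9), (2.3, 1.7)],
}

for drink, pairs in readings.items():
    changes = [percent_change(initial, final) for initial, final in pairs]
    mean_change = sum(changes) / len(changes)
    print(f"{drink}: mean change in mass {mean_change:.1f}%")
```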

This research was important because it can simulate the effects of drinking carbonated drinks throughout the day over the course of a person's life. It will help us to have a better understanding of the different effects caused by drinking acidic beverages. This experiment had limitations, such as the number of deer teeth available. The scale only measured to tenths of a gram, so minute changes in the mass of the teeth could not be recorded. If the liquid was not completely wiped off the teeth, it would alter the mass. I used deer teeth assuming they had an identical composition to human teeth and that the results could be related back to human teeth. The universal indicator used could not determine the pH of the beverages very precisely, as their pH values were close together. There might be many other influencing factors contributing to the corrosion of the teeth besides the acid concentration of the pops and sodas.

ACKNOWLEDGEMENTS

I would like to thank Mr. Ewald for providing me with the deer jaw and the equipment for extraction. I would like to thank Ms. Reuer, my grade 10 science teacher, for presenting me with this opportunity, investing her time in mentoring me in science and inspiring me to do my best. I would also like to thank my parents, Vijay Kubsad and Roopa Kubsad, for supporting me throughout this process.

REFERENCES

1. Food Statistics. (2002). Retrieved February 1, 2012, from NationMaster.com: http://www.nationmaster.com/graph/foo_sof_dri_con-food-soft-drink-consumption

2. Kaiser, A. (2009, April 5). Digital Bits Skeptic. Retrieved November 2, 2011, from Sugar, acid and teeth: http://www.dbskeptic.com/2009/04/05/sugar-acid-and-teeth/

3. Peterson, D. (2006, February 6). Pop and cavities in a can. Retrieved November 14, 2011, from Family Gentle Dental Care: http://www.dentalgentlecare.com/diet_soda.htm#POP AND CAVITIES

4. Levy, M. (n.d.). Parts of Tooth. Retrieved December 20, 2011, from Mark Levy D.D.S: http://www.cosmeticdentistblog.com/the-parts-of-a-tooth/

5. Parker, S. (1988). Skeleton. In S. Parker, Skeleton (pp. 26-27). Toronto: Stoddart Publishing Co. Limited.

6. Teeth and gums. (n.d.). Retrieved November 21, 2011, from Healthy Teeth: http://www.healthyteeth.org/index.html

7. Goho, A. (2005, May 14). Something to chew on. Retrieved October 29, 2011, from CBS; Business library: http://findarticles.com/p/articles/mi_m1200/is_20_167/ai_n13806539/pg_2/?tag=content;col

8. Sports Drink Consumption Can Cause Tooth Decay. (2009, April 3). Retrieved February 1, 2012, from ScienceDaily: http://www.sciencedaily.com/releases/2009/04/090403122016.htm

ABOUT THE AUTHOR

DEEPIKA KUBSAD is in grade eleven at Grande Cache High School in Alberta, Canada. This research project was inspired by the increasing consumption of acidic beverages amongst teenagers and was carried out in 2011. Deepika has been interested in science from a young age. In her free time, she enjoys playing sports such as volleyball and badminton.

Page 8: Young Scientists Journal Issue 15


REVIEW ARTICLE I ANTIBIOTIC RESISTANCE

A Comparative Study of Experimental Methods to Combat Antibiotic Resistance

PAUL-ENGUERRAND FADY, St Paul's School, Barnes, London, United Kingdom. Email: [email protected]

ABSTRACT

The rise in the number of resistant-bacteria-induced infections is a concerning one; so much so that it has been likened to terrorism in its potential to take lives and sow fear among the world's populace. Current medical treatments are becoming increasingly obsolete as bacteria continually evolve new mechanisms to combat them. Poor antimicrobial stewardship, in particular the careless over-prescription of antibiotics, has led to a dramatic increase in the rate of mutation and adaptation of bacteria in the wild. In this review I will provide an overview of three things: the social causes and physiological mechanisms by which resistance develops in bacteria; current antimicrobial therapies at our disposal; and finally, new antimicrobial therapies and their modes of action.

1 INTRODUCTION

In recent years we have seen the rise of bacterial species exhibiting resistance to one or more forms of antibiotic treatment. The most famous of these is doubtless Methicillin-Resistant Staphylococcus aureus — better known as "MRSA". Indeed, the number of MRSA infections per 1,000 hospital discharges from US Academic Medical Centres more than doubled in 5 years: 20.9 cases were observed in 2003 compared to 41.7 in 2008 (Boucher and Corey, 2008).

In and of itself, this rise should, of course, cause some concern. The real cause for concern, however, is the fact that only two new antibiotics have been approved since 2009, and the number of novel antibiotics annually approved for sale in the United States continues to decline (Boucher et al., 2013). In addition, whilst there was a slight resurgence of research in the early years of the 21st century, with the discovery of three new classes of antibiotics (each with very few FDA-approved members), there were no new classes of antibiotic discovered between 1987 and 2000. Despite all this, progress remains rare and relatively insignificant.

In essence, the world has grown complacent against the rising threat of antibiotic-resistant bacteria, contributing to the growing fear within the scientific community as to how to deal with these bacteria. The general consensus is that "the pharmaceutical industry, large academic institutions or the government are not investing the necessary resources to produce the next generation of newer, safe and effective antimicrobial drugs" (Alanis, 2005). This is in response to a threat which the United Kingdom's chief medical officer has stated "is as serious as that of terrorism".

2 HOW RESISTANCE ARISES

2.1 SOCIAL MECHANISM

Key to understanding why the number of cases of resistant bacteraemia is on the rise is the realisation that humans as a species have had a drastic effect on our environment. Whilst we can easily observe this effect on a macroscopic level — we can see forests being chopped down, for example — we are far worse at understanding

Page 9: Young Scientists Journal Issue 15


the effects that we have on the microscopic world. The lower level of awareness concerning the extent to which we impact this "invisible" world has led to the development of current clinical practice and public health policy which have made the situation orders of magnitude worse. These practices are discussed in the subsections below.

2.1.1 OVER-PRESCRIPTION

It is indisputable that medical professionals tend to over-prescribe antibiotics. There are many reasons for this, usually linked to "parental expectation to receive antibiotics" (Barden et al., 1998) with regard to paediatric patients; this was determined to play an important role in influencing the overuse of antibiotics (Barden et al., 1998). It should be noted, however, that parents in the same study indicated that they would be satisfied with the consultation alone, provided the physician explained the reasons for the decision not to prescribe any antibiotics (Barden et al., 1998). This clearly demonstrates a problem in communication between physicians and patients.

Over-prescription of antibiotics can also be attributed to an inherent difficulty in differentiating bacterial and viral infections — for instance, in pharyngitis or tonsillitis. Both of these can be caused either by a streptococcal bacterium or by a viral agent, but will manifest with very similar clinical presentations. Upon analysis of one particular sample, however, "group A streptococci were isolated from only 12% of children, whereas viral infection was documented from 31%" (Putto, 1987). Consistent with the aforementioned similarity, "clinical analysis of the illness [...] did not reveal differences that could help in differentiating bacterial from viral tonsillitis" (Putto, 1987). Another analysis of antibiotic prescription compared to actual disease showed that almost 76% of patients with pharyngitis received antibiotic treatment — whilst 85% of those with acute tonsillitis or pharyngitis were not in need of a course of antibiotics (Wang et al., 1999).

Clearly the decision as to whether or not to prescribe antibiotics is one which must be subject to thorough discussion between the parent and physician. Ultimately, however, diagnostic tests must become more widely used — and better, more immediate tests must be devised for rapid action to be taken in cases where it is needed.

2.1.2 NON-COMPLIANCE

Non-compliance with treatment is quite a serious issue the world over. Up to 44% of patients in one particular study undertaken in China admitted to having failed to comply fully with treatment (Pechere et al., 2007). Furthermore, 60% of patients with upper respiratory tract infections and 55% of patients with diarrhoea were found to be non-compliant with antibiotic therapy in a Mexican study (Reyes et al., 1997). Whilst this is not representative of all patients prescribed antibiotics the world over, it is quite alarming, and when considering sources of low-dose exposure to antibiotics, non-compliance is a non-negligible factor. It serves to place a "selective pressure" on the bacteria causing the infection (explained in a later section), tying into the molecular mechanism of resistance evolution.

2.1.3 IMPROPER DISPOSAL OF ANTIBIOTICS

This is an oft-overlooked issue which remains central to the problem of increasing resistance. In essence, there is always the possibility that the presence of antibiotic agents in the environment might increase the level of resistance in natural bacterial populations (Sahoo, 2008), as these agents remain biologically active even at low levels. In fact, this phenomenon has been shown to occur in the field: bacteria in water samples taken downstream from a sewage discharge displayed resistance against all of ciprofloxacin, tetracycline, ampicillin, trimethoprim, erythromycin and trimethoprim/sulphamethoxazole (Costanzo et al., 2005). The contamination of coastal waters — samples of which also contain bacteria which display a degree of multiple antibiotic resistance (Dash et al., 2007) — is indicative of a larger issue. This water will be treated and put back into general circulation for public consumption — and indeed, resistance genes have been found in bacteria in tap water (Xi et al., 2009). This process places a "selective pressure" on certain bacteria (as mentioned previously) and ultimately promotes bacterial resistance.

Page 10: Young Scientists Journal Issue 15


2.1.4 PROPHYLACTIC USES

IN A CLINICAL SETTING — In certain countries and states, surgical interventions require prophylactic antibiotic treatment. To phrase it another way, "there is a widely accepted agreement that antibiotic prophylaxis in clean-contaminated, contaminated and dirty wounds is warranted" (Holzheimer, 2001). Indeed, this makes sense: why wait until a patient is infected to begin treatment rather than simply prevent infection in the first place? Unfortunately, this one-off dose of antibiotics serves only to kill bacteria in the host microbiomes, yet again applying a selective pressure to these bacteria. It would seem, however, that the risk of creating human reservoirs of resistant bacteria has been deemed less important than that of potential iatrogenic complications — and most likely, rightly so.

IN AN AGRICULTURAL SETTING — "The most common non medical use of antibiotics in animals is as growth promoters in feeds" (Sahoo, 2008). Indeed, it is striking to note that "8 times more antimicrobials are used for nontherapeutic purposes in the three major livestock sectors than in human medicine" in the United States (Mellon et al., 2001). Whilst this may seem irrelevant to human health, it was estimated as early as 2002 that between 8,000 and 10,000 patients presented with fluoroquinolone-resistant infections acquired from poultry and were subsequently treated with fluoroquinolones (Lipsitch et al., 2002). A minor change in public health policies could reduce this number hugely, curbing the impact of agriculture on rates of bacterial mutation.

2.2 MOLECULAR MECHANISM

Development of resistance to antibiotics is a natural genetic process; whilst it is true that modern attitudes and behaviours have exacerbated the problem (as mentioned), bacteria tens of thousands of years old have been shown to possess resistance genes similar to those found in bacteria today. The most notable example of this was the discovery of "30,000-year-old Beringian permafrost sediments" which, upon analysis, led to "the identification of a highly diverse collection of genes encoding resistance to β-lactam, tetracycline and glycopeptide antibiotics" (D'Costa et al., 2011). It is worth noting that "β-lactam" is the name of the square ring group responsible for penicillin's antimicrobial action, as we will discuss later. Figure 1 in the "Appendix" shows the ring's position in multiple different commonly used antibiotics.

The rise in cases of infections by resistant bacteria is not solely due to random genetic mutations and the passing down of resistance genes through bacterial generations; there is no doubt surrounding the fact that there is a serious anthropogenic effect at play. This effect has many causes which were discussed above (see section 2.1), but in this example, we will assume that the "selective pressure" is a result of noncompliance with prescribed treatment.

In this case, the bacteria in the systemic microbiome of noncompliant patients — in particular those in their "gut" — are exposed to a low dose of, for example, penicillin. This creates a "selective pressure": in any given sample "some of the microbes will be more resistant to the penicillin than others" (Aldridge, 1998). This is a direct consequence of random mutation being so prevalent — a few of the bacteria will, in most cases, have a mutated gene which leads to some aspect of penicillin resistance. These bacteria have certain morphological or physiological traits which make it more difficult for the penicillin to break them down. In this example, the genes in one particular bacterium could code for the production of β-lactamase enzymes. These enzymes break down the main chemical group in penicillin which causes it to be antimicrobial — the β-lactam ring, which was mentioned earlier. This ring is what normally allows penicillin to block the enzyme glycopeptide transpeptidase, "which co-ordinates the forging of cross links" (Aldridge, 1998) in the bacterial cell wall. As the ring is broken down, this weakening of the cell wall cannot take place, and the bacterium survives.

This is not to say that penicillin is not able to kill off the bacteria in vitro; it is very important to note that if sufficient penicillin were added to a bacterial sample, all the bacteria would die out — even the "resistant" ones. This is because, again using the example above, there would simply be more β-lactam rings weakening the bacterial wall than β-lactamase enzymes preventing them from doing so; the antibiotic would "outcompete" the bacteria. The difficulty arises when the bacteria are not in vitro, but rather residing in a human host. Each antibiotic has a "therapeutic index", a measure of how many multiples of the minimum effective dose can be given before toxic effects set in. The problem arises for the following reason: "For some drugs, such as the antibiotic gentamicin —


Page 11: Young Scientists Journal Issue 15


which is used to treat [...] life threatening septicaemia — toxic effects kick in fairly quickly after the minimum effective dose. Such drugs are said to have a small therapeutic index." (Aldridge, 1998)
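For readers unfamiliar with the term, a minimal sketch of the usual textbook form of the therapeutic index is given below; the ratio definition and the example doses are assumptions for illustration and are not taken from Aldridge (1998).

```python
# A common textbook formulation of the therapeutic index (TI); the cited source
# may phrase the definition slightly differently.

def therapeutic_index(td50: float, ed50: float) -> float:
    """TI = TD50 / ED50: ratio of the dose toxic to 50% of subjects to the
    dose effective in 50% of subjects."""
    return td50 / ed50

# Hypothetical doses (arbitrary units), chosen only to illustrate a small TI:
print(therapeutic_index(td50=8.0, ed50=4.0))  # -> 2.0
# A small TI means toxicity begins close to the effective dose, so the dose
# cannot simply be raised to overwhelm partially resistant bacteria.
```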

In short, if a low dose of antibiotics is consumed, the only bacteria that will survive are those exhibiting resistance to that antibiotic. As a direct result of the poor microbial stewardship detailed in section 2.1, we have become reservoirs and breeding grounds for these resistant bacteria. We do, however, have treatments available to combat resistant bacterial infections; a brief explanation as to how these work can be found below.

3 CURRENT TREATMENTS

First of all, the simplest treatment for bacteria which produce β-lactamase enzymes is a drug which goes by the trade name "Augmentin". In reality, this is not a single compound, but two: amoxicillin and clavulanic acid. The amoxicillin is an antibiotic derived from penicillin (therefore possessing a β-lactam ring), whilst the clavulanic acid acts as a β-lactamase inhibitor. The latter does so by not only having a β-lactam ring itself, but also an arrangement of other atoms in the molecule such that it is able to bind to the β-lactamase enzymes more tightly. Essentially, the clavulanic acid acts as a diversion of sorts to "disarm the β-lactamase, leaving the amoxicillin to go in for the kill" (Aldridge, 1998). This is a very clever method, but is prone to the same problem as the original antibiotic — improper use of Augmentin may cause bacteria to experience selective pressures, leading to a mutation which may usher in a β-lactamase enzyme better adapted to cleaving clavulanic acid's β-lactam ring. Regarding MRSA in particular, the "last resort" antibiotic vancomycin has become more freely administered to combat MRSA-induced bacteraemia. Not only is vancomycin a dangerous drug — "it can be toxic to the ears and kidneys, and sometimes causes flushing, chills and fever" (Aldridge, 1998) — it is also an expensive one. Moreover, certain strains of Staphylococcus aureus have already developed resistance to vancomycin; its days as a not-quite-miracle cure are thus numbered.

Clearly, this is a race we can only lose. The current stance of governmental agencies — in particular the FDA — is outdated and dangerous. Newer ideas have begun arising which deal with the issue very differently, and hopefully, funding will follow these ideas.

4 FUTURE TREATMENTS

4.1 ANTIMICROBIAL PEPTIDES

The recent discovery of so-called "Antimicrobial Peptides" — also known as "AMPs" for short or "host defense peptides" — created quite a buzz in certain circles. These are short chains of amino acids (usually fewer than 50) which are part of the host organism's immune response, acting on the bacteria infecting the organism in various ways (as detailed below). These peptides are present in all animals: they are said to be found in virtually every life form (Hancock and Sahl, 2006), and samples have been found "representing virtually every kingdom and phylum, ranging from prokaryotes to humans" (Yeaman and Yount, 2003). This is a huge advantage of AMPs, as the vast range of organisms in which they are found means that researchers will be able to isolate multiple different AMPs to act on each microbe. This is because these peptides likely evolved in completely different ways depending on the organism, and could therefore have vastly different mechanisms to deal with the same microbe.

An example might be as follows: two organisms, one of the Arthropoda phylum and the other of a completely different one such as the Mollusca phylum, are somehow affected by the same bacterial infection. These organisms develop completely different peptides to deal with this infection based on their environment, which may have affected the kinds of mutations which altered the respective genomes. The arthropod may therefore develop a peptide which acts on cell walls — as has previously been discovered in certain organisms — creating a pore in the cell wall and lysing the cell; they can do so "by 'barrel-stave', 'carpet' or 'toroidal-pore' mechanisms" (Brogden, 2005).

The mollusc, however, may develop a peptide which passes through the bacterial cell wall and disrupts the formation of the cytoplasmic membrane septum, inhibits the synthesis of nucleic acids and proteins — including those proteins which compose the cell wall — or inhibits enzymatic activity (Brogden, 2005). An example of a non-pore-forming AMP is buforin II and its analogs, which pass through the cell membrane [but] do not make it permeable (Park et al., 2000).

Page 12: Young Scientists Journal Issue 15


Rather, these AMPs have intracellular action, targeting substances within the cell — most probably nucleic acids — without significantly increasing the permeability of the cell membrane (Park et al., 2000). The fact that we already have a broad range of AMPs, each with its own individual antimicrobial mechanism, is a clear indication of the sheer variety of peptides and modes of action available to us.

The variety of mechanisms at our disposal is not the only advantage conferred by using this novel method. One major advantage is the fact that host-defense peptides act rapidly and are potent; in addition, they tend to possess a broad spectrum of activity (Hancock and Sahl, 2006). This is also a direct consequence of the vast pool from which to draw and isolate these peptides, and allows for less targeted antimicrobial action — essentially, AMPs can be used both when the pathogen is known and when it is not, allowing for effective treatment at all stages of infection.

Another such advantage is that not only do these AMPs act as antimicrobial agents, they also appear to have direct functions in innate immunity — in particular, they upregulate the expression of multiple genes in eukaryotic cells which code for various parts of the immune response (Hancock, 2001). This means that other parts of the host immune system are stimulated by the AMPs, magnifying the natural immune reaction as well as helping reduce the threat. With this combined effect, it seems clear that infections ought to be beaten much more rapidly than with current antibiotic treatment.

There are disadvantages to using AMPs, and these must be taken into account when considering whether or not to develop the technology further. Firstly, some microbes already exhibit some innate resistance to certain AMPs: "Wild-type strains bearing additional copies of the dlt gene [...] were less sensitive to antimicrobial peptides" (Peschel et al., 1999). This is a result of the dlt gene changing the charge distribution of the bacterial membranes, preventing the AMPs from either creating pores or passing through the membrane to target cellular components. The result demonstrates that resistance to these peptides can — and will — develop.

Furthermore, another major disadvantage is that artificially evolved resistance to a candidate AMP drug provided S. aureus with cross-resistance to human neutrophil defensin-1 — a structurally similar human AMP which forms a key part of the human innate immune response to infection (Habets and Brockhurst, 2012). If the use of AMPs leads not only to a roadblock because the bacteria develop a resistance to the peptides themselves, but also to a worsening in future infections because bacteria are now resistant to part of our innate immune system, it may not be worth investing in AMPs.

It is still very much the case that “a clearer recognition of these opposing themes will significantly advance our understanding of how antimicrobial peptides function in defense against infection” (Yeaman and Yount, 2003); to that end, funding would be well spent in this area.

4.2 BACTERIOPHAGE THERAPY

"Phage" is the common abbreviation for "bacteriophage", or "bacteria-eating organism". The term is now used almost exclusively in reference to viruses which infect bacteria and kill them. These viruses have evolved over millions of years to be particularly well suited to infecting certain kinds of bacteria — see Figure 2 in the "Appendix" section. Bacteriophages are not in any way unique or novel; in fact, bacteriophages are the most abundant micro-organisms on the planet and among those exhibiting the most diversification (Labrie et al., 2010). Moreover, the earliest research into phage therapy dates back to the early 20th century, and it has been in continued use "in the former Soviet Republic of Georgia, [where] phage therapy traditions and practice continue to this day" (Kutter et al., 2010). In fact, a major study was commissioned by the USSR, involving no fewer than 30,769 children (Sulakvelidze et al., 2001). This study was a resounding success for proponents of bacteriophage-based treatments: "Based on clinical diagnosis, the incidence of dysentery was 3.8-fold higher in the placebo group than in the phage-treated group (6.7 and 1.76 per 1,000 children, respectively) during the 109-day study period" (Sulakvelidze et al., 2001). This is illustrated by Figure 3 in the appendix. The fact that we know this method to work — and well at that — continues to be central to the argument for developing bacteriophage-based treatments.

Another clear advantage is outlined succinctly in this phrase: "Owing to their host specificity, phages may be used to achieve the targeted destruction of bacterial pathogens with minimal effects on the beneficial microbial flora" (Lu and Koeris, 2011). This is a significant advantage over traditional antibiotic treatment. It is


Page 13: Young Scientists Journal Issue 15


now common knowledge that the intestinal microbiome plays a significant role in maintaining good health by performing important and specific metabolic and protective functions, including "production of vitamin K", "development [...] of the immune system" and "protection against pathogens [by] the barrier effect" (Guarner and Malagelada, 2003). Traditional antibiotic treatment kills all bacteria which are not resistant to it, regardless of their effects, be they helpful or harmful. Indeed, use of antibiotics can disrupt the ecological balance and allow a potentially pathogenic species to outcompete others present in the environment (Guarner and Malagelada, 2003), thus creating a new problem by solving the previous one. Bacteriophage therapy, on the other hand, is far more targeted and much less disruptive to the fragile microbiome — though any kind of external influence on this ecosystem is likely to effect a change of some sort.

As with all treatments, there are disadvantages to using phage therapy. The greatest of these is oft-touted as one of the best aspects of phage-based treatments — the phage's ability to evolve. Whilst "phage resistance mechanisms are widespread in bacterial hosts" (Labrie et al., 2010), the phage is an organism capable of evolving and adapting to the new bacterial morphology. This leads to a major issue: that of how to ensure that bacteriophages used therapeutically do not mutate into pathogenic viruses, or pass lab-induced resistances and traits into native populations. Unfortunately, this disadvantage is so great that to overlook it would border on the insane.

5 CONCLUSION

Whilst it is clear that "there are still many new antibiotics to be discovered in microbes themselves" (Aldridge, 1998), it would be naïve to believe that continuing to research antimicrobial drugs in the way that we have up until now will lead to any progress. As we have seen, this is a very limited option, and new methods, in particular the use of antimicrobial peptides, show much more promise. It is my belief that their rapid action and paucity of disadvantages make clear the fact that AMPs are the most sensible and logical choice of treatment moving forward. Whilst careful work will need to be undertaken to ensure minimal cross-resistance of structural homologues, these are far less dangerous and just as useful as bacteriophage-based treatments.

Furthermore, it is my contention that not only should further research be undertaken by private commercial parties, but also that national governments should distribute grants to promote and fund research both in the lab and in the field by academics and other non-commercial interests. All of this needs to be done in as small a timescale as possible if antimicrobial peptides are to help quell the rise in antibiotic-resistant bacterial infections successfully.

APPENDIX

Figure 1: Commonly used β-lactam antibiotics, with the position of the ring highlighted (Rubtsova et al., 2010).

Page 14: Young Scientists Journal Issue 15


Figure 2: The life cycle of a bacteriophage. Source: http://www.nature.com/scitable/content/a-schematic-of-the-lifecycle-of-a-35875

Figure 3: The incidence of clinical dysentery, culture-confirmed dysentery, and diarrheal disease of undetermined etiology in phage-treated and phage-untreated (placebo) children 6 months to 7 years of age (Sulakvelidze et al., 2001)

REFERENCES

1. Alfonso J. Alanis. Resistance to antibiotics: Are we in the post-antibiotic era? Archives of Medical Research, 36(6):697–705, November 2005. doi: 10.1016/j.arcmed.2005.06.009. (I)

2. Susan Aldridge. Magic Molecules: How Drugs Work. Cambridge University Press, 1998. (IV, V, X)

3. Louise S. Barden, Scott F. Dowell, Benjamin Schwartz, and Cheryl Lackey. Current attitudes regarding use of antimicrobial agents: Results from physicians' and parents' focus group discussions. Clinical Pediatrics, 37(11):665–671, November 1998. doi: 10.1177/000992289803701104. (II)

4. Helen W. Boucher and G. Ralph Corey. Epidemiology of methicillin-resistant staphylococcus aureus. Clinical Infectious Diseases, 46(Supplement 5):S344–S349, June 2008. doi: 10.1086/533590. (I)

5. Helen W. Boucher, George H. Talbot, Daniel K. Benjamin, John Bradley, Robert J. Guidos, Ronald N. Jones, Barbara E. Murray, Robert A. Bonomo, David Gilbert, and for the Infectious Diseases Society of America. 10 × '20 progress: development of new drugs active against Gram-negative bacilli, an update from the Infectious Diseases Society of America. Clinical Infectious Diseases, April 2013. doi: 10.1093/cid/cit152. (I)

6. Kim A. Brogden. Antimicrobial peptides: pore formers or metabolic inhibitors in bacteria? Nature Reviews Microbiology, 3(3):238–250, March 2005. doi: 10.1038/nrmicro1098. (VII)

7. Simon D. Costanzo, John Murby, and John Bates. Ecosystem response to antibiotics entering the aquatic environment. Marine Pollution Bulletin, 51(1-4):218–223, 2005. doi: 10.1016/j.marpolbul.2004.10.038. (III)

8. Sashi Kanta Dash, Chandi C. Rath, Pratima Ray, and S.P. Adhikary. Effect of antibiotics and some essential oils on marine vibrios isolated from the coastal water of the Bay of Bengal at the Orissa coast. Journal of Pure and Applied Microbiology, 1(2):247–250, October 2007. (III)

9. Vanessa M. D’Costa, Christine E. King, Lindsay Kalan, Mariya Morar, Wilson W. L. Sung, Carsten Schwarz, Duane Froese, Grant Zazula, Fabrice Calmels, Regis Debruyne, G. Brian Golding, Hendrik N. Poinar, and Gerard D. Wright. Antibiotic resistance is ancient. Nature, 477(7365):457–461, September 2011. doi: 10.1038/nature10388. (IV)

10. Francisco Guarner and Juan-R Malagelada. Gut flora in health and disease. Lancet, 361(9356):512–519, February 2003. (IX)

11. Michelle G. J. L. Habets and Michael A. Brockhurst. Therapeutic antimicrobial peptides may compromise natural immunity. Biology Letters, 8(3):416–418, June 2012. doi: 10.1098/rsbl.2011.1203. (VIII)

12. Robert E. W. Hancock and Hans-Georg Sahl. Antimicrobial and host-defense peptides as new anti-infective therapeutic strategies. Nature Biotechnology, 24(12):1551–1557, December 2006. doi: 10.1038/nbt1267. (VII)

13. Robert E. W. Hancock. Cationic peptides: effectors in innate immunity and novel antimicrobials. Lancet Infectious Diseases, 1(3):156–164, October 2001. (VIII)

14. RG Holzheimer. Surgical Treatment: Evidence-Based and Problem-Oriented. Zuckschwerdt, 2001. (III)

15. Elizabeth Kutter, Daniel De Vos, Guram Gvasalia, Zemphira Alavidze, Lasha Gogokhia, Sarah Kuhl, and Stephen T. Abe-don. Phage therapy in clinical practice: Treatment of human


Page 15: Young Scientists Journal Issue 15


infections. Current Pharmaceutical Biotechnology, 11(1):69–86, 2010. doi: 10.2174/138920110790725401. (IX)

16. Simon J. Labrie, Julie E. Samson, and Sylvain Moineau. Bacteriophage resistance mechanisms. Nature Reviews Microbiology, 8(5):317–327, March 2010. doi: 10.1038/nrmicro2315. (IX)

17. Marc Lipsitch, Randall S. Singer, and Bruce R. Levin. Antibiotics in agriculture: When is it time to close the barn door? Proceedings of the National Academy of Sciences, 99(9):5752–5754, April 2002. doi: 10.1073/pnas.092142499. (IV)

18. Timothy K Lu and Michael S Koeris. The next generation of bacteriophage therapy. Current Opinion in Microbiology, 14(5):524–531, October 2011. (IX)

19. Margaret G Mellon, Charles Benbrook, and Karen Lutz Benbrook. Hogging it: estimates of antimicrobial abuse in livestock. Union of Concerned Scientists, 2001. (IV)

20. Chan Bae Park, Kwan-Su Yi, Katsumi Matsuzaki, Mi Sun Kim, and Sun Chang Kim. Structure and activity analysis of buforin II, a histone H2A-derived antimicrobial peptide: The proline hinge is responsible for the cell-penetrating ability of buforin II. Proceedings of the National Academy of Sciences, 97(15):8245–8250, July 2000. doi: 10.1073/pnas.150518097. (VII)

21. Jean-Claude Pechere, Dyfrig Hughes, Przemyslaw Kardas, and Giuseppe Cornaglia. Non-compliance with antibiotic therapy for acute community infections: a global survey. International Journal of Antimicrobial Agents, 29(3):245–253, March 2007. (III)

22. Andreas Peschel, Michael Otto, Ralph W. Jack, Hubert Kalbacher, Gunther Jung, and Friedrich Goetz. Inactivation of the dlt operon in staphylococcus aureus confers sensitivity to defensins, protegrins, and other antimicrobial peptides. Journal of Biological Chemistry, 274(13):8405–8410, March 1999. doi: 10.1074/jbc.274.13.8405. (VIII)

23. Anne Putto. Febrile exudative tonsillitis: Viral or streptococcal? Pediatrics, 80(1):6–12, July 1987. (II)

24. Hortensia Reyes, Hector Guiscafre, Onofre Munoz, Ricardo Perez-Cuevas, Homero Martinez, and Gonzalo Gutierrez. Antibiotic noncompliance and waste in upper respiratory infections and acute diarrhea. Journal of Clinical Epidemiology, 50(11):1297–1304, November 1997. (III)

25. M.Yu. Rubtsova, M.M. Ulyashova, T.T. Bachmann, R.D. Schmid, and A.M. Egorov. Multiparametric determination of genes and their point mutations for identification of beta-lactamases. 75(13):1628–1649, 2010. doi: 10.1134/S0006297910130080. (XIV)

26. Krushna Chandra Sahoo. Antibiotic use, environment and antibiotic resistance: A qualitative study among human and veterinary health care professionals in Orissa, India, 2008. (III, IV)

27. Alexander Sulakvelidze, Zemphira Alavidze, and J. Glenn Morris. Bacteriophage therapy. Antimicrobial Agents and Chemotherapy, 45(3):649–659, March 2001. doi: 10.1128/AAC.45.3.649-659.2001. (IX, XVI)

28. Elaine E. L. Wang, Thomas R. Einarson, James D. Kellner, and John M. Conly. Antibiotic prescribing for Canadian preschool children: Evidence of overprescribing for viral respiratory infections. Clinical Infectious Diseases, 29(1):155–160, July 1999. (II)

29. Chuanwu Xi, Yongli Zhang, Carl F. Marrs, Wen Ye, Carl Simon, Betsy Foxman, and Jerome Nriagu. Prevalence of antibiotic resistance in drinking water treatment and distribution systems. Applied and Environmental Microbiology, 75(17):5714–5718, September 2009. doi: 10.1128/AEM.00382-09. (III)

30. Michael R. Yeaman and Nannette Y. Yount. Mechanisms of antimicrobial peptide action and resistance. Pharmacological Reviews, 55(1):27–55, March 2003. doi: 10.1124/pr.55.1.2. (VII, VIII)

ABOUT THE AUTHOR

PAUL-ENGUERRAND FADY is a 17-year-old who attends St Paul's School. He is currently studying Biology, Chemistry, Maths, and Spanish. He hopes to study Microbiology with Spanish at Imperial College London. His main scientific interests include all aspects of microbiology, cell biology and synthetic biology. He is heavily involved in the iGEM synthetic biology competition. Non-scientific interests include European languages and linguistics as well as computer programming.

Page 16: Young Scientists Journal Issue 15


ORIGINAL RESEARCH I ANTIMICROBIALS

Investigation into Novel Antimicrobials
COULD YOUR 5-A-DAY BE THE KEY TO OUTSMARTING PATHOGENS?

SHANNON GUILD, St Mary's Catholic High School, Chesterfield, Derbyshire, UK. Email: [email protected]

ABSTRACT

"Threat posed by resistance to antibiotics 'ranks alongside terrorism'" – The Telegraph, March 2013. With rising reports of drug-resistant pathogens, the health industry is facing a dilemma. This project investigates the potential of various vegetables, fruits and teas as novel antimicrobial agents. The agents were tested against various strains of bacteria including S.aureus and E.coli and their activity measured at different concentrations. Excitingly, the results exposed one particular plant-based product which could be developed into a new antibiotic drug and play an important role in the fight against antibiotic resistance.

THE PROBLEMS ASSOCIATED WITH ANTIBIOTIC RESISTANCE

Worryingly for the health industry, the prevalence of resistant strains of pathogens is on the rise and has been for many years. Resistance often develops as a result of extensive use of antibiotics. In intensive care units, for example, antibiotic treatment is high due to the tendency of infection to occur. This creates an environment in which the selective pressure on the bacteria favours antibiotic resistance. Bacteria that have mutated so that they are no longer susceptible to the antibiotic are more likely to survive and reproduce. Bacteria reproduce asexually (e.g. by binary fission) and, on average, will divide (replicate) every 15 to 20 minutes; thus a mass of bacteria will grow exponentially. The significance of this is that even if only a few resistant forms of bacteria are present, once the non-resistant form is wiped out by the antibiotics and competition for nutrients is greatly reduced, the resistant bacteria can thrive.
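A minimal sketch of the exponential growth described above is given below; it assumes a fixed 20-minute doubling time (the upper end of the quoted range) and a single starting cell, both of which are illustrative assumptions rather than values from this study.

```python
# Illustrative only: exponential growth of a bacterial population that divides
# every 20 minutes, starting from a single cell.

doubling_time_min = 20
start_cells = 1

for hours in (1, 4, 8, 12):
    divisions = hours * 60 // doubling_time_min
    population = start_cells * 2 ** divisions
    print(f"after {hours:>2} h: {population:,} cells")
# After 12 h the single cell has doubled 36 times (~6.9e10 cells), which is why
# even a few resistant survivors can quickly come to dominate a culture.
```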

Resistance in bacteria is well understood in strains such as MRSA (Methicillin-resistant S.aureus). The S.aureus bacterium has a group of proteins called penicillin-binding proteins (PBPs) which incorporate into the cell wall and contribute to its synthesis. Methicillin kills S.aureus bacteria by binding to the PBPs and partially inhibiting their function so that cell wall synthesis is prevented. As a result of the presence of the mecA gene (originally derived from a mutation), an alternative PBP, PBP2a, is produced instead. It has a low affinity for binding β-lactam antibiotics such as methicillin, and this creates the methicillin-resistant form of S.aureus.

In resistant forms of E.coli, which is a gram-negative bacterium, genes located on plasmids code for the formation of β-lactamases, enzymes that inactivate β-lactam antibiotics. The fact that the resistance gene is located on a plasmid means that it can be transferred relatively easily to other bacteria by conjugation (Fig. 1), which leads to rapid spreading of resistance.

The scale of the problem is evident when we consider how cases of resistance have moved from being isolated in the ICU environment to within the community. With antibiotic resistance having high costs in terms of both finance and lives, there have been extensive studies into the potential use of plants (particularly vegetables and fruits) as novel antimicrobials.

Page 17: Young Scientists Journal Issue 15


PREVIOUS STUDIES INTO ANTIMICROBIAL ACTIVITY OF FRUITS, VEGETABLES, AND HERBS

Most of us are familiar with the importance of eating fruits and vegetables as part of a balanced, healthy lifestyle. The question is: just how useful could our 5 a day be in reducing our susceptibility to certain pathogens?

One study at the University of California presented some particularly interesting findings. Various fruit and vegetable extracts were tested on common pathogens, including some resistant strains. It was reported that whilst green vegetables showed no antimicrobial potential, the anthocyanins in many red and purple fruits meant that inhibition of bacteria was frequent. For example, beetroot, cherry and cranberry all inhibited the growth of S.epidermidis and K.pneumoniae.

It was, however, garlic and tea that gave the most exciting results. Both were tested on strains of bacteria including MRSA and proved effective. Raw garlic juice (cooked garlic loses its inhibitory factor) showed inhibition of MRSA with a minimum inhibitory dilution (MID) of 128, and of non-resistant S.aureus with an MID of 256. Green tea also showed strong inhibition against MRSA, at a concentration of 3.1 mg/ml.
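MID values like these are normally read off a two-fold serial dilution series; since the cited study's exact protocol is not reproduced in this article, the sketch below is only a generic illustration of how such a series is set up and interpreted.

```python
# Generic sketch of a two-fold serial dilution series, the format from which a
# "minimum inhibitory dilution" (MID) such as 128 or 256 is usually read off.
# The cited study's exact protocol is not described here, so this is purely
# illustrative.

def dilution_factors(steps: int) -> list[int]:
    """Reciprocal dilution factors of a two-fold series: 2, 4, 8, ..."""
    return [2 ** i for i in range(1, steps + 1)]

for factor in dilution_factors(9):
    print(f"1:{factor}")
# The MID is typically the highest dilution (largest factor) at which growth is
# still inhibited, e.g. 1:128 for MRSA and 1:256 for non-resistant S.aureus.
```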

DETERMINING THE ANTIMICROBIAL PROPERTIES OF FRUIT AND VEGETABLE EXTRACTS

Methods

The first week of investigation focused on examining various fruits and vegetables against two common strains of bacteria, Escherichia coli (E.coli) and Staphylococcus aureus (S.aureus). The four extracts involved in my investigation were red cabbage, white cabbage, carrot and pomegranate. Vegetable juices were added directly to liquid agar at three different concentrations (10, 20 and 40%). Each plate was divided in half by drawing a line on the underside of the plate. One half was used to test S.aureus and the other to test E.coli. Three drops (each 3 µl in volume) of the corresponding bacterial strain were added to each half plate. Following addition of the drops, the plates were incubated at 37°C for 24 h.

Results

After 24 hours of incubation, we found that there was similar growth of both E.coli and S.aureus for all the vegetable extracts. There was no observational difference across the range of concentrations used (10%, 30% and 40%) for any vegetable extract on either bacterial strain.

Discussion

From the results, it could be suggested that none of the extracts tested have antimicrobial properties against the strains used in the investigation. This is surprising when compared to the results found by the University of California study mentioned above. Whilst I expected little or no inhibition by white cabbage, the presence of

Figure 1. Conjugation in bacteria

Figure 2. Petri dish


the anthocyanin pigments in the red cabbage and the pomegranate predicted microbial inhibition. Contrastingly, the University of California study found that red cabbage had mild activity against Staphylococcus epidermidis (S.epidermidis). S.aureus and S.epidermidis are not only both gram positive, they are also both members of the Staphylococcus genus. It would therefore be expected that red cabbage would have antimicrobial properties against S.aureus according to that study. What is even more surprising is that red cabbage showed inhibitory effects against S.epidermidis at a 1:2 dilution (i.e. 33% concentration). The highest concentration used in my investigation was 40%, yet there was no inhibition.

The differences between the results of my experiment and the one by the University of California are surprising. However, they could be explained by differences in the method used, or by the fact that the exact strains of bacteria tested against were not the same; small differences between similar strains could account for differences in resistance. This depends on the mechanisms of inhibition.

COMPARING THE ANTIMICROBIAL PROPERTIES OF IODINE AND GREEN TEA

It is well known that iodine has antimicrobial properties and it has many medicinal uses. For example, as elemental iodine is an irritant, it is used in the form of povidone-iodine (PVP-I) in wound dressings. The iodine is delivered to the wound in the PVP-I complex and free iodine is slowly released to the wound. The overall concentration of free iodine is very low, which minimises irritation whilst still preventing infection. I decided to compare the antimicrobial effectiveness of iodine against green tea which, as mentioned in my introductory section, has previously been found to have antimicrobial potential.

Methods

Concentrations of iodine were made up using 10% iodine solution and PBC, with concentrations of 0.3, 0.6 and 0.9% iodine used. To mimic the dressing from which the iodine would be released, filter paper disks were soaked in the iodine solutions for 1 hr and sterilized under UV light. The disks were placed in the centre of agar plates which had been spread with a lawn of the relevant bacteria

(E.coli, S.aureus, P.aeruginosa or one of bacterial isolates A-E (obtained from laboratory sink swabs)).
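For reference, such dilutions follow from the standard relation C₁V₁ = C₂V₂. As a worked example, assuming a 10 ml final volume for each dilution (the final volume is not stated in the article), the 0.9% solution would be prepared as:

$$C_1 V_1 = C_2 V_2 \;\Rightarrow\; V_1 = \frac{C_2 V_2}{C_1} = \frac{0.9\% \times 10\ \text{ml}}{10\%} = 0.9\ \text{ml of stock,}$$

made up to 10 ml with diluent; the 0.3% and 0.6% solutions follow in the same way with 0.3 ml and 0.6 ml of stock.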

To test the green tea, concentrations of 1, 5 and 10% were used, with 10 ml of each concentration made. For example, to make the 1% solution, 0.1 g of Twinings Pure Green Tea was added to 9.7 ml of water. The mixture was then steamed for an hour, allowing the water to boil and brew the tea. The solution was filtered and filter paper disks were soaked in each solution for 1 hr, as with the iodine. Following UV sterilization, disks were placed in the centre of plates lawn-spread with bacteria. The strains tested were again S.aureus, E.coli, P.aeruginosa and isolates A-E.

I used a variety of laboratory techniques in order to identify the species of pathogen in isolates A-E. Gram staining was used to distinguish between gram positive and gram negative strains. PCR amplification and sequencing of the 16S rRNA gene was used for a more definite identification. This way, I could better determine the specific activity of the agents tested against various strains.

Figure 3. Photographs of 40% vegetable extract plates after 24hrs incubation


The results of the sequencing showed:
Isolate A - 99% match to Bacillus licheniformis
Isolate B - 99% match to Bacillus licheniformis
Isolate C - 99% match to Staphylococcus warneri
Isolate D - 99% match to Staphylococcus aureus
Isolate E - 100% match to Bacillus thuringiensis

Results

Figure 4 (above) shows the effect of 1% green tea solution compared to 0.9% iodine solution on the tested bacterial strains.
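The averages behind a chart like Figure 4 are straightforward to compute. A minimal sketch is shown below; the zone diameters used are hypothetical, since the raw measurements are not listed in the article.

```python
# Hypothetical zone-of-inhibition diameters (mm) per strain/agent pair;
# the article does not reproduce the raw replicate values.
zones = {
    ("S.aureus",  "green tea 1%"): [12, 13, 11],
    ("S.aureus",  "iodine 0.9%"):  [15, 16, 14],
    ("S.warneri", "green tea 1%"): [18, 17, 19],
    ("S.warneri", "iodine 0.9%"):  [16, 15, 17],
}

# Average the replicate measurements for each strain/agent pair,
# which is what a chart like Figure 4 would plot.
averages = {key: sum(vals) / len(vals) for key, vals in zones.items()}

for (strain, agent), mean_zone in sorted(averages.items()):
    print(f"{strain:10s} {agent:14s} mean zone = {mean_zone:.1f} mm")
```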

Discussion

All of the isolates were found to be gram positive species. Isolates A, B and E were of the Bacillus genus, which includes soil-dwelling, rod-shaped bacteria. Bacillus licheniformis has high levels of enzyme secretion and can exist in spore form to resist harsh environments. Isolates C and D were of the Staphylococcus genus. S.warneri is a normal constituent of the human skin flora and rarely causes disease. S.aureus was also tested as a known laboratory sample. The B.licheniformis showed mild inhibition by green tea and the B.thuringiensis showed moderate inhibition. The S.aureus showed moderate inhibition whilst the S.warneri showed high inhibition. Gram negative P.aeruginosa also showed moderate inhibition. The lowest inhibition shown by the green tea was against gram negative E.coli. The results on green tea as an antimicrobial agent suggest that it would be an effective treatment against a wide range of gram positive bacteria.

In terms of inhibition of gram negative bacteria, it is difficult to draw a conclusion. If given the opportunity to test a wider range of gram negative bacteria, I would predict that, in general, inhibition of gram negative bacteria would be less than that of gram positive bacteria. This prediction is based on the presence of efflux pumps, which are more commonly found in gram negative bacteria. Efflux pumps are intrinsic, membrane-bound proteins (spanning the whole membrane) that act as carrier proteins. Their role is to actively expel harmful molecules such as antibiotics from the cell interior. This allows the bacteria to withstand the selection pressures of antibiotics and other harsh environments. Whilst some efflux pumps can only expel a certain type of molecule, others can remove a wide variety of unrelated molecules and are named multi-drug efflux pumps. Such multi-drug efflux pumps are used by both E.coli and P.aeruginosa. This could explain the low inhibition of P.aeruginosa with 0.9% iodine and of E.coli with 1% green tea.

CONCLUSIONS

My results show that, of all the extracts tested, green tea has the most potential as an antimicrobial agent. It showed inhibition of a variety of gram positive bacteria as well as the gram negative P.aeruginosa. The results for green tea were comparable to those for iodine, suggesting that green tea has similar antimicrobial strength to iodine. Although green tea is suggested to be an overall weaker antimicrobial agent than iodine at

Figure 4. A chart showing the average inhibition zone of bacteria at 1% green tea solution compared to 0.9% iodine concentration.


a similar concentration, the fact that green tea has little or no toxicity to humans means it could be consumed or applied at a much higher concentration. Whilst 10% iodine would cause significant irritation to the skin, 10% green tea could be applied without irritation while providing very high antimicrobial activity. Green tea could also be consumed safely, allowing potential use as an oral treatment. The investigation could be extended to investigate the effect of green tea on more strains of gram negative bacteria, to find out whether the presence of efflux pumps reduces the antimicrobial activity of the agent. Furthermore, it would be useful to extract the active compound in green tea to see if it has potential for use in a novel antibiotic drug.

REFERENCES

1. Bailey, R. (n.d.). Bacterial Reproduction. Retrieved August 2012, from About: http://biology.about.com/od/bacteriology/a/aa080907a.htm

2. Farnsworth, D. S. (2001). The value of plants used in traditional medicine for drug discovery. Environmental Health Perspectives, 7.

3. Chastre, J. (2008). Evolving problems with resistant pathogens. Clinical Microbiology and Infection, 12.

4. Kunduhoğlu, M. K. (n.d.). Antimicrobial activity of fresh plant juice on the growth of bacteria and yeasts. 6.

5. Elgayyar, M., F. D. (2001). Antimicrobial activity of essential oils from plants against selected pathogenic and saprophytic microorganisms. Journal of Food Protection, 7.

6. Nikaido, H. (1996). Multidrug efflux pumps of Gram-negative bacteria. Journal of Bacteriology.

7. Piddock, M. A. (2003). The importance of efflux pumps in bacterial antibiotic resistance. J. Antimicrob. Chemother., 51(1), 9-11.

8. Wikipedia. (2012, August 27). Bacillus. Retrieved September 2, 2012, from Wikipedia: http://en.wikipedia.org/wiki/Bacillus

9. Wikipedia. (2012, July 4). Bacillus licheniformis. Retrieved September 1, 2012, from Wikipedia: http://en.wikipedia.org/wiki/Bacillus_licheniformis

10. Wikipedia. (2012, July 1). Pseudomonas aeruginosa. Retrieved September 2, 2012, from Wikipedia: http://en.wikipedia.org/wiki/Pseudomonas_aeruginosa

11. Wikipedia. (2012, May 30). Staphylococcus warneri. Retrieved September 2, 2012, from Wikipedia: http://en.wikipedia.org/wiki/Staphylococcus_warneri

12. Yee-Lean Lee, P. T. (2003). Antibacterial activity of vegetables and juices. Nutrition, 2.

13. Zgurskaya, H. N. (2001). AcrAB and related multidrug efflux pumps of Escherichia coli. J. Mol. Microbiol. Biotechnol.

ABOUT THE AUTHOR

SHANNON GUILD finished studying Biology, Chemistry, Mathematics, French and Philosophy & Ethics at St Mary's Catholic High School in Chesterfield in June 2013. At 17, after spending her summer working in the laboratories of Sheffield Hallam University, she took this Novel Antimicrobials Project to the 2013 Big Bang Fair in London. In her free time, Shannon loves to explore her musical side through piano and dance, or delve into different cultures by studying foreign languages. Most of all, she has a passion for science and will begin Biomedical Sciences at Oxford University in Autumn 2014, from where she aspires to go on into the medical research field.

Figure 4. Photograph of inhibition zone of 1% green tea on isolate E. coli.


Biomimetics and Optimal Design of Roof Structures

TINA KEGL
Rugby School, Warwickshire UK. Email: [email protected]

ABSTRACT

This paper deals with biomimetics, a discipline which studies natural solutions in order to address technical problems. Attention is focused on the usage of biomimetics in architecture, especially the usage of nacre shell design as inspiration for computer-supported roof design. For this reason, the structural requirements for a roof structure and some mechanical properties of a common nacre shell (Mytilus edulis) are presented. Among the modern methods of architectural design, the parametric modelling and optimal design methods are used to design a shell-like roof in order to fulfil the requirements of functionality, economy and ecology. Firstly, the strength analysis of a flat and a wavy roof, computed with the STAKx research program, is presented. Next, the displacements, the total strain energy, and the volume of material are compared. An optimal design procedure, using the iGOx research program, is also performed. The design variables are determined by the thickness and by the shape of the roof. The constraints are determined by the maximal displacement and the maximal volume of the roof. The objective function to be minimized is defined as the total strain energy. The obtained optimal design of the roof is a wavy, shell-like design and has excellent properties such as low mass, high load capacity, low strain energy and low cost. Some practical implementations of this optimal roof design are also proposed.

INTRODUCTION

Many features considered advantageous for various structures, such as energy conservation, beauty, functionality and durability, have already been observed in the natural world. By first examining and then imitating or drawing inspiration from models in nature, biomimetics aims to solve problems in medicine and technology. Biomimetics is the term used to describe the substances, equipment, mechanisms and systems by which humans imitate natural systems and designs.

Today, biomimetics finds applications in all areas, including architecture and building. Biological models may be emulated, copied, learnt from or taken as starting points for new technologies. Through studies of biological models, new forms, patterns and building materials arise in architecture. Because of their properties, biomimetic nanomaterials, biomimetic technical textiles and biomimetic self-curing materials often outperform conventional materials and constitute future innovations for architecture. [1]

Many natural systems are studied and imitated in the field of architecture. [1,2,3,4,5] Radiolaria and diatoms, organisms that live in the sea, are virtual catalogues of ideal solutions to architectural problems. In fact, these tiny creatures have inspired many large-scale architectural projects. The Pavilion in Montreal, inspired by the radiolarians, is just one example [Figure 1].

Figure 1: Radiolaria (left) and Pavilion in Montreal (right) [2,3]


The Crystal Palace in London was designed by the landscape designer Joseph Paxton, who drew inspiration from Victoria amazonica, a species of water lily [Figure 2]. Each leaf of this lily has radial ribs stiffened by slender cross-ribs. Paxton thought these ribs could be replicated as weight-bearing iron struts, and the leaves themselves as the glass panes. In this way, he succeeded in constructing a roof made of glass and iron, which was very light yet still very strong.

Figure 2: Water lily leaves (left) and Crystal Palace in London (right) [2,3]

Some spiders spin webs [Figure 3] that resemble a tarpaulin covering thrown over a bush. The web is borne by stretched threads, attached to the edges of a bush. [2,3,4] The load-bearing system lets the spider spread its web wide, while still making no concessions in terms of its strength. This marvellous technique has been imitated by man in many structures to cover wide areas. Some of these include the Munich Olympic Stadium [Figure 3], the Sydney National Athletic Stadium, the Jeddah Airport's Pilgrim Terminal, zoos in Munich and Canada, Denver Airport in Colorado, and the Schlumberger Cambridge Research Centre building in England.

Figure 3: A spider's web (left) and the Olympic Stadium in Munich (right) [2,3]

Dragonfly wings [Figure 4] are one three-thousandth of a millimetre thick. [2,3,5] Despite being so thin, they are very strong since they consist of up to 1000 sections. Due to this compartmental structure, the wings do not tear and are able to withstand the pressure that forms during flight. The roof of the Munich Stadium [Figure 4] was designed along the same principle.

Figure 4: Dragonfly wings (left) and Olympic Stadium in Munich (right) [2,3]

Various shells [Figure 5] resemble wavy hair because of their irregular shapes. This shape allows the shell, despite being very lightweight, to withstand enormous pressure. Architects have imitated the shells' structure for designing various roofs and ceilings. For example, the roof of Canada's Royan Market [Figure 5] was designed with the oyster shell in mind.

Figure 5: A sea shell (left) and Royan Market in Canada (right) [2,3]

All the aforementioned and many other examples show that for the imitation of living beings, good observation is needed. Furthermore, frequently some coincidence is necessary, as evidenced by the invention of Velcro. The idea came to George de Mestral one day, after returning from a hunting trip with his dog in the Alps. [3] He took a close look at the burrs (seeds) of burdock that kept sticking to his clothes and his dog's fur. He examined them under a microscope, and noted their hundreds of "hooks" that caught on anything with a loop, such as clothing, animal fur, or hair. He saw the possibility of binding two materials reversibly in a simple fashion, if he could figure out how to replicate the hooks and loops. In a similar way, one can admire the beautiful shapes of shells at the sea coast. If one steps on a shell, it does not break under one's weight. This was one of the reasons to start to investigate the possibility of designing roofs of buildings similar to mollusc shells: thin and unbreakable.

In this paper, first some properties of the common nacre shell and some structural requirements for a roof structure are presented. Then, two modern


methods of architectural design, parametric modelling and optimal design, are presented. This is followed by the strength analysis of a flat and a shell-like roof, computed with the STAKx research program. The displacements, the total strain energy, and the volume of material are compared. An optimal design procedure for a 1000 m² pavilion, using the iGOx research program, is also performed, and the roof properties such as mass, load capacity, strain energy and cost are analysed. Finally, some practical implementations of this optimal roof design are proposed.

2 NACRE SHELL DESIGN

Many researchers have investigated the properties of nacre shells. [6,7] They have established that the tensile strength in the direction perpendicular to the layered structure can be explained by the presence of mineral bridges [Figure 6].

Figure 6: Microstructure of nacre shell [2]

These bridges, having a diameter of approximately 50 nm, have a tensile strength no longer determined by the critical crack size, but by the theoretical strength. Their number is such that the tensile strength of the tiles is optimized for the tile thickness of 0.5 μm. A higher number of bridges would result in tensile fracture of the tiles, with loss of the crack deflection mechanism. The elastic modulus of the shell material is about 100 GPa. [6] It is interesting to note that the thickness of the shell changes throughout the year. [7] The average thickness of mussel shells is 0.97±0.030 mm during spring and summer, and 1.23±0.034 mm during autumn and winter. Accordingly, the strength of a shell also varies with the seasons. Mechanical tests with loading applied perpendicular to the plane of the organic layers reveal a tensile strength lower than 10 MPa, whereas the compressive strength is approximately 190-550 MPa.

All mechanical experiments and analyses of nacre indicate that mineral bridges effectively increase the stiffness, strength and toughness of the organic matrix layers, and demonstrate that the effect of the mineral bridges on the organic layers and the wavy design of the shell are of significant importance.

3 ROOF REQUIREMENTS

There are many factors that influence roof design and construction [Figure 7]. However, perhaps the most important requirements related to roof design are: [9]
• Strength and resistance to earthquake, fire and weather conditions, as well as durability,
• Aesthetic appearance,
• Architectural adjustment to other objects, the surroundings and the landscape characteristics of the building location,
• Economy.

In this paper we concentrate mainly on the roof strength necessary to resist external loads, primarily induced by snow. For buildings being designed in cold regions, local building codes provide guidelines to determine loads attributed to snow accumulation. These guidelines necessarily apply to a range of generic situations that are fairly simple. Model testing and computer analyses, utilizing detailed meteorological histories, have now developed to the point where many variables that affect snow loading can be accounted for, even in complex situations, leading to more accurate loading predictions. The variables affecting snow loading include: [8] building shape; building orientation; building insulation; wind speed; wind direction; precipitation (both snow and rain); temperature (melting, re-freezing); cloud cover; and solar exposure [Figure 7]. Given Slovenian weather conditions, especially wind and snow action, such analysis plays an instrumental role in the design and optimization of roof structures. For that purpose, the dead and permanent loads, as well as snow

Figure 7: Factors influencing roof design


and wind have to be considered for the roof strength analysis. The snow loads under Slovenian conditions are in the range of about 3000 N/m².

4 ARCHITECTURE DESIGN METHODS

The basic parameters of roof analysis and construction are related to:
• Geometry or design (thickness, width, height, shape …),
• Material – fire-baked brick, steel, concrete (elastic modulus, Poisson's ratio, density …),
• Loads (direction, intensity …),
• Supports (position, type …),
• Discretization of the roof structure (number and type of elements …).

In this paper the focus will be on geometry and design. To create new architectural designs of buildings, various numerical methods may be used. Nowadays, techniques like parametric modelling and optimal design are very promising.

4.1 PARAMETRIC MODELLING

Generally, in parametric design, geometric shapes are defined by suitable parameters and the corresponding equations. This has to be done in such a way that a variation of any parameter changes the shape of the structure but preserves the validity of the design, i.e. it preserves the imposed requirements and constraints. [10] Frequently, the parameters used are related to simple quantities like length, width and thickness, but also to more sophisticated ones like control point positions. The essence of parametric modelling is to simplify the variation of design and shape in such a way that shape variation can easily be performed by computer-supported procedures.
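As a minimal illustration of the idea (this is a generic sketch, not the STAKx design-element parameterization), a wavy roof surface can be generated from a handful of parameters so that changing any one of them reshapes the surface while keeping it a valid roof. The parameter names below are illustrative.

```python
import numpy as np

def wavy_roof(length=25.0, width=20.0, amplitude=1.0, n_waves=3, n=50):
    """Return a grid of (x, y, z) points describing a simple wavy roof surface.

    length, width, amplitude and n_waves play the role of design parameters:
    varying any of them produces a new, still-valid shape.
    """
    x = np.linspace(0.0, length, n)
    y = np.linspace(0.0, width, n)
    X, Y = np.meshgrid(x, y)
    # Sinusoidal corrugation across the width, mimicking a shell-like waviness.
    Z = amplitude * np.sin(n_waves * np.pi * Y / width)
    return X, Y, Z

# Two design variants generated from the same parametric model:
flat_roof = wavy_roof(amplitude=0.0)          # degenerate case: a flat roof
wavy_roof_variant = wavy_roof(amplitude=1.5, n_waves=4)
```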

4.2 OPTIMAL DESIGN PROCEDURE

An optimal design procedure is a systematic computer-supported search for the best solution. In the scope of this paper this means that one has to determine the optimal values of the design variables (variable parameters) in order to obtain the desired properties (measured by the objective function) and fulfil the requirements (measured by the imposed constraints) of the structure. All methods of optimal design search for the minimum of the objective function – the optimum point. Optimum points can be local or global. The goal of optimal design methods is to find the global optimum; in reality, the procedure typically finds the nearest optimum point, which can often be only a local optimum.

Gradient-based methods, which are commonly used to solve engineering optimization problems, use the function gradients, evaluated at the current point, to compute a better (more optimal) point. Such a method would give a local optimum if starting from point A, but would find the global optimum if starting from point B [Figure 8].
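This dependence on the starting point can be shown with a tiny numerical experiment (a generic sketch, not the iGOx algorithm): plain gradient descent on a one-dimensional function with two minima converges to a different optimum depending on where it starts, exactly as in Figure 8.

```python
def grad_descent(df, b0, step=0.01, iters=2000):
    """Plain gradient descent: repeatedly move against the gradient."""
    b = b0
    for _ in range(iters):
        b -= step * df(b)
    return b

# Illustrative objective with one local and one global minimum.
f  = lambda b: b**4 - 3*b**3 + 2*b**2        # minima near b = 0 and b = 1.64
df = lambda b: 4*b**3 - 9*b**2 + 4*b         # its derivative

b_a = grad_descent(df, b0=-0.5)   # starting point "A"
b_b = grad_descent(df, b0=2.5)    # starting point "B"
print(b_a, f(b_a))   # converges to the shallower (local) minimum near 0
print(b_b, f(b_b))   # converges to the deeper (global) minimum near 1.64
```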

Figure 8: Local and global minimum of the objective function

Other common types of optimization technique include evolutionary methods, such as the genetic algorithm. By using evolutionary methods it is very probable that one may find the global optimum, and no gradient calculations are necessary. However, evolutionary methods typically require an enormous number of structural analysis computations. For a problem with a long-running analysis computation, this can be a huge burden.

In order to define a structural optimization problem, the following items need to be properly selected or defined:
• Design variables (the parameters that are varied by the optimizer, here denoted as b_i),
• Objective function (the quantity that has to be minimized, e.g. strain energy, mass …),
• Constraint functions (the quantities that are limited, e.g. displacements, stresses …), and
• Response equation (the equation needed to compute the response of the structure).

Once the optimization problem for a roof structure is defined, one can solve it by applying an iterative proce-dure, e.g., by the one depicted in Figure 9.


Figure 9: Schema of the optimal design procedure

In this paper the optimal design problem is defined as follows: one has to find such values of the design variables b_i that the objective function g_o is minimized, while the behavioural constraints g_i ≤ 0 are fulfilled.

Since we know that good designs have low strain energy Π, the latter is adopted as the objective function. In other words, we define g_o = Π. The constraints are related to the total volume of the roof and to the displacement of its most exposed point. The volume V is limited by V ≤ V_max and the displacement Δz by |Δz| ≤ Δz_max. The response equation is the finite element model of the employed finite element software.
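A minimal sketch of how such a problem can be posed numerically is shown below. This is not the STAKx/iGOx workflow: the response functions are hypothetical smooth stand-ins for the finite element analysis, and the limits are simply the V_max and Δz_max values used later in Section 7 (600 m³ and 0.2 m).

```python
import numpy as np
from scipy.optimize import minimize

V_MAX, DZ_MAX = 600.0, 0.2   # V_max (m^3) and Δz_max (m), as used in Section 7

# Hypothetical smooth surrogates for the structural response. In the paper these
# quantities (Π, V, |Δz|) come from the STAKx finite element analysis of the
# parameterized roof; the functions below only stand in for it.
def strain_energy(b):                      # objective Π(b)
    return float(np.sum((b - 0.3) ** 2)) + 1.0

def volume(b):                             # V(b)
    return 500.0 + 50.0 * float(np.sum(b ** 2))

def max_displacement(b):                   # |Δz(b)|
    return 0.3 * float(np.exp(-0.1 * np.sum(b)))

b0 = np.zeros(21)                          # initial design: all design variables zero

result = minimize(
    strain_energy,
    b0,
    method="SLSQP",                        # a gradient-based method, in the spirit of iGOx
    constraints=[
        {"type": "ineq", "fun": lambda b: V_MAX - volume(b)},             # V - V_max <= 0
        {"type": "ineq", "fun": lambda b: DZ_MAX - max_displacement(b)},  # |Δz| - Δz_max <= 0
    ],
)
print(result.fun, result.x.round(3))
```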

In architecture a lot of commercial programs can be used for parametric modelling and strength analysis, e.g. AutoCAD, Roofs design, Abaqus, Ansys, etc. However, these commercial programs are not easily integrated into an optimal design procedure. For this reason, in this work the structural analysis program STAKx has been used, which is easily combined with the gradient-based optimization program iGOx. Both of these research programs were developed at the Faculty of Mechanical Engineering at the University of Maribor. STAKx is a finite element program for static analysis of elastic structures. The speciality of this program is its strong orientation towards the shape parameterization of structures and the possibility to compute the gradients of response quantities. The employed parameterization is based on so-called design elements, whose shape is determined by the positions of control points. The program iGOx is a gradient-based optimizer which enables interactive optimization by making use of external response analysis programs like STAKx.

5 NACRE SHELL DESIGN

First, parametric modelling of a shell [Figure 10] was done using STAKx in order to confirm that the strength of a wavy shell is higher than that of a flat one. For this purpose, a wavy and a flat shell were modelled using 27 control points. The shell is simply supported along all three short sides in the vertical direction. It is loaded in the vertical direction by a distributed load of 10⁶ N/m². The thickness of the shell is 2 mm, the elastic modulus of the material is 210∙10⁹ N/m² and its density is 7800 kg/m³.

Figure 10: Model of the wavy shell in the program STAKx

The calculated surface of the flat shell is 78.2 cm² and the needed material volume is 15.7 cm³, which corresponds to a mass of 123 g. Meanwhile, the surface of the wavy shell is 94.6 cm² and the needed material volume is 18.9 cm³, corresponding to a mass of 147 g. The computed displacements of the flat and wavy shell under 100% load are presented in Figure 11. One can see that the displacements of the flat shell are approximately 10 times larger than those of the wavy shell.

Figure 11: Displacements of the flat and wavy shell, computed by STAKx


Furthermore, the distribution of strain energy Π of the flat and wavy shell under full load is presented in Figure 12.

Figure 12: The distribution of strain energy Π of flat and wavy shell, computed by STAKx

The value of the total strain energy of the flat shell is 38.7 Nm. This is about eight times higher than the strain energy of the wavy shell, which is about 4.9 Nm. It is important to note that by reducing the thickness of the wavy shell, one can get a lighter (less weight), stiffer (smaller displacement) and better (lower strain energy) structure compared to the flat design [Table 1].

Characteristic                    Flat, 2 mm    Wavy, 2 mm    Wavy, 1 mm
Material volume (cm³)             15.7          18.9          9.45
Maximal displacement z (mm)       37.1          3.55          8.5
Total strain energy Π (Nm)        38.7          4.9           12.08

Table 1: Some characteristics of the flat and wavy shells

According to the presented results, one can conclude that the wavy, shell-like design has advantages over the flat shell regarding the needed material, displacements and strain energy. For this reason, it seems advisable to enlarge the wavy shell to cover, for example, an exposition pavilion with an area of 1000 m².

6 STRENGTH ANALYSIS OF A FLAT AND A WAVY SHELL OF THE ENLARGED MODEL – ROOF OF EXPOSITION PAVILION

In order to analyse the enlarged model, the shell model was scaled appropriately [Figure 13], where the coordinates of the control points are given in metres and the supported edges of the roof are marked by thick lines.

Figure 13: The coordinates of control points of the enlarged model

A substantial difference between the small and the large model is that for the latter the weight of the structural material has a significant influence on the loading and has to be taken into account. In addition to the weight of the structure, a snow load of 3000 N/m² was imposed. The materials analysed were steel and concrete; of course, only concrete can be considered a practically usable material.

First, a flat and a wavy steel roof of 30 cm thickness were analysed. For the flat roof the actual surface is 1030 m², its volume is 309 m³ and the mass of steel is 2413 tonnes. The maximum displacement is 4.56 m and the total strain energy is over 20 MNm under full load.

For the wavy roof the actual surface is 1481 m², its volume is 445 m³ and the corresponding mass is 3467 tonnes. The maximum displacement is 6.6 cm and the total strain energy is over 0.4 MNm under full load [Figure 14].


Figure 14: Displacements and distribution of strain energy of the wavy roof

The influence of various thicknesses of steel and concrete wavy roofs on their characteristics was also analysed. In the case of concrete, the modulus of elasticity is 30∙10⁹ N/m² and the concrete density is taken as 3000 kg/m³. The influences of material and thickness are shown in Figure 15.

Figure 15: Influence of thickness on mass, displacement and strain energy of the wavy roof

It has to be pointed out that the analysed wavy shell is only a very rough approximation of the nacre shell. In spite of this, it is clear that a roof design similar to the nacre shell exhibits good strength, small displacements and low strain energy. Therefore, it seems reasonable to formulate an adequate optimization problem to make the design even better, i.e. optimal.

7 OPTIMAL DESIGN OF ROOF

In order to obtain a roof design that is really optimal with respect to specific criteria, it is necessary to enrich the parametric modelling with one of the optimal design methods.

For this purpose a flat concrete shell with a 1000 m² cover area was chosen as the initial design of the roof. The imposed snow load was 3000 N/m², and the actual weight of the roof was also taken into account, depending on the current roof design (thickness and surface). The shape of the roof was described by 33 control points, as depicted in Figure 16. The shell response and sensitivity analyses were performed using the program STAKx.

Figure 16: Control point positions of the roof, program STAKx

In order to optimize the shape of the roof, we assumed that the optimal roof will have a minimal total strain energy Π. Therefore, Π was selected to be the objective function, which has to be minimized. Since the aim was to obtain the optimal shape of the roof by using a limited quantity of material, a constraint on the volume V of the structure was imposed. Furthermore, to assure a practically usable design, another constraint on the maximal displacement Δz was also imposed. More precisely, the optimal design problem was defined as follows: minimize the total strain energy Π subject to the constraints V - V_max ≤ 0 and |Δz| - Δz_max ≤ 0, where V_max = 600 m³ and Δz_max = 0.2 m.

The selected design variables b_i, i = 1…21, were related to the roof thickness, d = 1 + b_1, and to the x, y or z coordinates of some of the 33 control points, as follows:

z2,4,6,8,10,24,26,28,30,32 = 0.1 + b2, z17 = 1 + 25∙b3, z16,18 = 15∙b4, z15,19 = 1 + 20∙b5, z14,20 = 10∙b6, z13,21 = 1 + 12∙b7, y17 = 15 + b8, y16,18 = 14 + b9, y15,19 = 13 + b10, y14,20 = 12 + b11, y13,21 = 11 + b12, y12,22 = 10 + b13, x16,18 = 4 + 0.4∙b14, x15,19 = 8 + 0.4∙b15, x14,20 = 12 + 0.4∙b16, x13,21 = 16 + 0.4∙b17, x27,29 = 5 + 0.4∙b18, x26,30 = 10 + 0.4∙b19, x25,31 = 15 + 0.4∙b20, and x24,32 = 20 + 0.4∙b21.

Optimization was performed with the program iGOx, which can run the program STAKx in order to perform the response and sensitivity analysis of the structure. iGOx is an interactive gradient-based optimization program [Figure 17], which enables continuous monitoring and eventual adjustments during the optimization process.

Figure 17: Interactive optimization program iGOx

The initial values of all design variables were zero: b_i^ini = 0, i = 1…21. The corresponding initial thickness and design-dependent coordinates of the control points (in metres) were:

[(1.0000, 0.1000, 1.0000, 0.0000, 1.0000, 0.0000, 1.0000, 15.0000, 14.0000, 13.0000, 12.0000, 11.0000, 10.0000, 4.0000, 8.0000, 12.0000, 16.0000, 5.0000, 10.0000, 15.0000, 20.0000)]^T

The displacements and distribution of strain energy of the initial roof design are presented in Figure 18.

The optimization process converged nicely after 30 iterations. The final (optimal) values of the design variables correspond to the optimal thickness and design-dependent coordinates of the control points of the roof.

The obtained optimal design of the roof, which looks similar to the sea-shell design, is presented in Figure 19.

Figure 19: Optimal design of the roof

The response of the optimal roof, i.e. the displacements and the distribution of strain energy, is presented in Figure 20.


Figure 20: Displacements and distribution of strain energy of the optimal roof

For comparison, the response and some other parameters of the initial and optimal designs are presented in Table 2. It is evident that through optimization both the strain energy and the volume of the roof decreased. Furthermore, all constraints have been fulfilled by the optimization process. If one considers a price of approximately 100 €/m³ for the concrete, the optimal design offers a significant saving in material costs.

Table 2: Parameters of response analysis and optimization for the initial and optimal roof designs

Parameter                          Initial roof design    Optimal roof design
Π (Nm)                             3 252 607              496 688
Max. constraint violation          0.46774                < 0
|Δz| (m)                           0.668                  0.1999211
V (m³)                             871.47                 531.12
Mass (kg)                          2 614 417              1 593 361
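As a rough check of the cost claim, using the volumes in Table 2 and the quoted price of about 100 €/m³ for concrete, the material saving works out to approximately:

$$\Delta V = 871.47\ \text{m}^3 - 531.12\ \text{m}^3 \approx 340\ \text{m}^3, \qquad 340\ \text{m}^3 \times 100\ \text{€/m}^3 \approx 34\,000\ \text{€}.$$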

The obtained optimal roof design is obviously similar to the nacre shell. On the basis of the properties of the optimal roof design, given in Table 2, it is evident that the roof, which imitates the nacre shell design, has low mass and good functionality. Thus, it represents a good design from ecological and economical aspects.

8 EXPERIMENTAL VERIFICATION OF OPTIMAL ROOF DESIGN

To verify the correctness of the strength analysis and optimal design procedure, models of the initial and optimal roof, reduced by a factor of 200, were manufactured [Figure 21]. For this purpose rapid prototyping technology on the machine EOSINT P800 was used. The material used is the polyamide-based composite PA 2200. Its elastic modulus is 1.7 GPa and its density is about 930 kg/m³. A surface load of 300 N/m² was used for the numerical simulation and for the experiment. For the experiment this load was approximated by a water-filled plastic bag [Figure 22].

Figure 21: The manufactured reduced models of initial and optimal roof

Figure 22: Measurement of maximal displacements of the manufactured model


A comparison of the computed and measured displacements for the initial and optimal roof designs is presented in Table 3. The measurements of the optimal roof model were in the range between 0.3 and 0.4 mm. Therefore, an approximate mean value of 0.35 mm is listed in the table.

              Manufactured model of initial (flat) roof design    Manufactured model of optimal (nacre shell) roof design
              STAKx        Experiment                             STAKx        Experiment
|Δz| (mm)     6.0          7.0                                    0.3          0.35

Table 3: Comparison of numerically (STAKx) and experimentally obtained displacements

As one can see from Table 3, the numerically and experimentally obtained displacements agree quite well. This is especially true if one takes into account that the experimental loading is quite far from an ideal, constant distributed load. Furthermore, for practical reasons the supports in the experiment could not be realized as prescribed in the numerical simulation. Taking only these two quite significant sources of error into account, one can say that the agreement is good enough to conclude that the numerical simulation was accurate within reasonable limits.

9 PRACTICAL APPLICABILITY

The roof obtained by the optimization is in some respects similar to the nacre shell. The roof structure is assumed to be supported in the vertical direction along all three short edges. Therefore, the front side of the roof can be left open, offering free access, e.g. for visitors, logistics and so on.

Obviously this nacre shell-like roof could potentially be used, e.g., for:
• An exhibition pavilion [Figure 23],
• A commercial building,
• A sports stadium,
• A market building, and so on.

In the case of an exhibition pavilion [Figure 23] the front side can be glazed. This would offer various possibilities for light effects, which are very important to attract visitors and potential buyers.

Figure 23: Usage of a nacre shell-like roof for a car exhibition pavilion

10 CONCLUSIONS

On the basis of the reviewed literature and the results obtained in this work, the following conclusions can be made:
• Many creatures in nature can be imitated for the purposes of modern architecture,
• A nacre shell surely represents an interesting starting point for a roof design,
• Some of the most important quantities related to a free-form roof design are strain energy, displacements, mass and volume of material,
• Parametric modelling and optimal design offer efficient techniques in architecture, where structural and aesthetic aspects have to be taken into account: the combination of the strength analysis program STAKx and the optimization program iGOx is a good example,
• Optimization of a free-form roof design by minimizing the strain energy can yield a lightweight but strong roof structure, capable of covering exhibition pavilions, commercial buildings, sports stadiums, market buildings, and so on.

ACKNOWLEDGEMENTS

This investigation was supported by company CHEMETS, Kranj, d.o.o, Slovenia, where the roof models were manufactured by rapid-prototyping.


REFERENCES

1. Zbašnik-Senegačnik M, Koprivec L. Biomimetics in the architecture of tomorrow. Architecture Research 2009, Vol 1, 40-49.

2. Yahya H. Biomimetics: Technology Imitates Nature. Global Publishing, Istanbul, 2006.

3. Kegl T. Z biomimetiko do učinkovite strehe nad glavo. Research work, II. Gymnasium Maribor, Maribor, 2011.

4. Harmer AMT, Blackledge TA, Madin JS, Herberstein ME. High-performance spider webs: Integrating biomechanics, ecology and behavior. Journal of the Royal Society Interface 2011, Vol 8, 457-471.

5. Usherwood JR, Lehmann FO. Phasing of dragonfly wings can improve aerodynamic efficiency by removing swirl. Journal of the Royal Society Interface 2008, Vol 5, 1303-1307.

6. Meyers MA, Lin AY-M, Chen P-Y, Muyco J. Mechanical strength of abalone nacre: Role of the soft organic layer. Journal of the Mechanical Behavior of Biomedical Materials 2008, Vol 1, 76-85.

7. Nagarajan R, Lea SEG, Goss-Custard JD. Seasonal variations in mussel, Mytilus L., shell thickness and strength and their ecological implications. Journal of Experimental Marine Biology and Ecology 2006, Vol 339, 241-250.

8. Baker HA, Williams CJ, Irwin PA. Experiences with model studies to determine roof snow loading. Proceedings, Annual Conference - Canadian Society for Civil Engineering 2003, Vol 2003, 1687-1695.

9. Rizzo F, D'Asdia P, Lazzari M, Procino L. Wind action evaluation on tension roofs of hyperbolic paraboloid shape. Engineering Structures 2011, Vol 33, 445-461.

10. Stavric M, Marina O. Parametric modeling for advanced architecture. International Journal of Applied Mathematics and Informatics 2011, Vol 5, 9-16.

ABOUT THE AUTHOR

TINA KEGL is 19 years old and is studying at the Faculty of Chemistry and Chemical Engineering at the University of Maribor in Slovenia. She hopes that a better world without war, violence and illness, and where we have more time to learn from our beautiful nature, will soon become reality.


This report assesses how viable X-ray fluorescence is at detecting counterfeit coins. This was first done by looking at the elements in one batch of penny coins dating from 1920 to 2011: their composition was calculated and compared with Royal Mint figures. Thus, we established that the elemental content only changed in 1992, because the coin was remodelled, and, furthermore, the figures provide a composi-tional baseline in this time period, which can be used to determine age and legitimacy.

ABSTRACT

Assessing the Viability of X-ray Fluorescence as a Method of Counterfeit Coin Detection

KEIR BIRCHALL
University of Nottingham, Nottingham, UK. Email: [email protected]

INTRODUCTION

Fluorescence can identify the elements in a material. It arises when an atom absorbs radiation of specific energy, greater than or equal to its ionisation potential: the atom is excited; the high-energy radiation removes an electron from the atom's inner shells. Thereafter, a second electron in a higher energy level drops into a lower energy level, which releases radiation equal in energy to the difference between the energy levels. It is lower than the energy absorbed (illustrated in Figure 1).

Atomic structure limits the number of ways in which secondary radiation is emitted. There are three main types: Kα, Kβ and Lα. In Kα emission, there is a transition from the L (second) shell to the K (first) shell; in Kβ emission, there is a transition from the M (third) shell to the K shell, and, in Lα emission, there is a transition from the M shell to the L shell. Transitions emit distinct radiation and its wavelength is calculated using Planck’s Law:

λ = hc/E
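As a worked example, the copper Kα line discussed later in this report (around 8.04 keV) corresponds to a wavelength of about 0.15 nm. A quick check of the relation:

```python
# Convert a characteristic X-ray energy to its wavelength via λ = h·c / E.
H = 6.626e-34      # Planck's constant (J·s)
C = 2.998e8        # speed of light (m/s)
EV = 1.602e-19     # joules per electronvolt

def wavelength_nm(energy_keV):
    """Wavelength (in nm) of a photon with the given energy in keV."""
    energy_joules = energy_keV * 1e3 * EV
    return H * C / energy_joules * 1e9

print(wavelength_nm(8.04))   # Cu Kα: ≈ 0.154 nm
```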

Theoretically, beryllium is the lightest element which can be analysed: it is relatively transparent to X-rays; it has low density and low atomic mass. Therefore, it is often used in the sample holder of the equipment. It is difficult to detect elements lighter than sodium: their secondary radiation is very long in wavelength; it has low energy and little penetrating power. Primary radiation, which is fired at the sample, is created by bombarding Rh, W, Cu or Mo with high-energy electrons. Their direction of motion changes as they approach the nucleus, resulting in deceleration and loss of kinetic energy. Therefore, the electron emits a photon energetically equal to the energy lost through deceleration. Thus, a range of X-ray energies, a Bremsstrahlung continuum, is emitted.

Glocker and Schreiber first proposed X-ray fluorescence in 1928. Today, this method is used as a routine, non-destructive analytical technique to determine the proportions of different chemical elements present in rocks, minerals, sediments and fluids. It is mainly used in geological research, the mining and petroleum industries, and quality control in manufacturing.

Figure 1. Radiation being absorbed and then emitted by an atom’s energy levels


There are two types of fluorescence detectors, Wavelength Dispersive X-ray Spectroscopy (WDX) and Energy Dispersive X-ray Spectroscopy (EDX). If a sample has many different elements present, WDX is used: the complex emitted X-ray spectrum is broken down into the component wavelengths of each element present through a diffraction grating monochromator, usually a crystal. By varying the angle of incidence, a narrow wavelength band is selected. Figure 2 shows the basic arrangement of a WDX machine.

In EDX, the emitted radiation is directed onto a solid state detector, a 3-5 mm thick silicon diode with a -1000 V bias: it conducts when an X-ray strikes the gold contact at the end of the detector. To ensure optimum results, conductivity is maintained at a low level using liquid nitrogen. The detector produces a series of pulses proportional to the incoming photon energies, which are processed by pulse-shaping amplifiers. The longer a pulse is processed, the better the resolution. However, pulse pile-up or successive photon overlap often occurs. Figure 3 shows the basic arrangement of an EDX machine.

The original objective of this project was to use X-ray fluorescence to determine a difference between pre- and post-decimalisation pennies. The coins changed in size and design. Was there any change in composition? Further literature research indicated a significant change in 1992. Therefore, the study was extended to coins dating up to the current year. Shortly after, the Royal Mint conducted a survey of counterfeiting in the UK in April 2012. The report indicated that around 1 in 32 £1 coins is counterfeit: counterfeiting is on the rise. Currently, the only methods used to identify them are based on the quality of the cast. If the forger has high quality dies, counterfeit coins are difficult to spot and enter circulation. The Royal Mint displays compositions of every coin in circulation on its website. The hypothesis is that X-ray fluorescence will establish a concentration baseline, similar to those from the Royal Mint, to determine a coin's legitimacy.

METHODOLOGY

The EDX machine used had an operating range of 0-35 keV. Initial research established reference values for the emission energies of elements within the operating range.

Figure 3. The basic arrangement of an EDX machine

Figure 2. Arrangement of wavelength dispersive spectrometer


The goniometer was set so that the sample holder was at 45° to the source and the silicon detector, to allow as many of the secondary photons emitted from the coin to reach the detector as possible. The X-ray source was set to 35 keV and 1 mA. 425 pennies (7 pre-decimalisation, 99 post-decimalisation and 319 post-1992) were individually placed into the sample holder of the goniometer and exposed to the radiation for 10 minutes. A graph of the intensity at discrete energy values between 0 and 35 keV was plotted. To work out the elements that make up the coin, the energies of the largest peaks produced were compared to the list of emission energies until a match was found.
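A minimal sketch of that matching step is shown below. The reference energies are the Kα values quoted later in this report; the 0.15 keV tolerance is an assumed detector resolution, not a figure from the study.

```python
# Reference Kα emission energies (keV) quoted later in this report.
K_ALPHA = {"Cu": 8.04, "Zn": 8.6, "Fe": 6.4, "Sn": 24.2}

def identify_peaks(peak_energies, tolerance=0.15):
    """Match measured peak energies (keV) to the nearest reference Kα line."""
    matches = []
    for energy in peak_energies:
        element, ref = min(K_ALPHA.items(), key=lambda item: abs(item[1] - energy))
        if abs(ref - energy) <= tolerance:
            matches.append((energy, element))
        else:
            matches.append((energy, "unidentified"))
    return matches

print(identify_peaks([8.05, 6.45, 24.3]))   # → Cu, Fe, Sn
```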

To work out the concentration, samples of the individual identified elements were placed into the machine and exposed to X-radiation for 10 minutes, to establish the intensity of the major peaks. The concentration, C_a, of an element, a, in a sample can be determined by finding the ratio between the intensity, I_a, of a line of the element in the sample and the intensity, I_e, of the same line of the pure element:

C_a = I_a / I_e

Unfortunately, the equipment used could not cope with samples of differing thicknesses: values of intensity recorded for an element in a coin and its pure sample, usually thinner, were inconsistent. When the above equation was applied to the intensity values obtained, it indicated that one element in the coin had a concentration of over 100%. It also recorded the characteristic radiation of copper, molybdenum and several other elements when there was no sample placed in the machine, as shown in Figure 5. The molybdenum spike can be explained by the fact that the source was made of that element, but none of the other spikes should be there. To overcome this, a handheld X-ray fluorescence analyser had to be used: it gives an immediate estimate of the concentration of each element in a coin. This device was used to analyse the entire sample. Finally, a mean percentage value for each element and acceptable deviations were established. The ranges were calculated by removing the coins that did not fit the pattern and taking the highest and lowest concentrations for each element.
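For completeness, a sketch of how that intensity ratio would be applied in practice is shown below; the intensity values are hypothetical, and, as noted above, in this experiment the approach broke down because of the differing sample thicknesses.

```python
def concentration(sample_intensity, pure_intensity):
    """Estimate an element's concentration from the ratio C_a = I_a / I_e."""
    return sample_intensity / pure_intensity

# Hypothetical Kα peak intensities (counts) for copper in a coin and in a pure standard.
i_coin, i_pure = 45200, 46800
print(f"Estimated Cu concentration: {concentration(i_coin, i_pure):.1%}")
```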

RESULTS

The results are presented above as a series of graphs showing X-ray intensity against energy and as a table of compositions of penny coins. Figures 6-8 show typical examples of the X-ray graphs for coins from 1928, 1984 and 2001 respectively. Figure 9 shows the results from a 1993 coin that had been

Figure 4. Bench top equipment used in X-ray fluorescence

Figure 5. A graph that shows the background reading of the bench top X-ray fluorescence machine


treated to remove approximately half the copper plating from its face.

Results from the handheld machine were presented as a table of concentrations (too numerous to be shown here; the data are summarised in Figure 10).

ANALYSIS

The typical X-ray emission spectra in Figures 6, 7 and 8 all show a single large peak at around 8.04 keV. This corresponds with Kα emission for copper. The large number of counts within this peak shows that the coins contain a very large concentration of copper. There should also be a Kβ peak for copper at 8.9 keV, which is not clearly shown in the charts.

According to the Royal Mint, copper is the major constituent of penny coins, but tin, zinc and iron should also be present. The expected emission energies for these metals are:

Tin: Kα: 24.2 keV; Kβ: 27.3 keV; Lα: 3.3 keV

Zinc: Kα: 8.6 keV; Kβ: 9.6 keV; Lα: 1.0 keV

Iron: Kα: 6.4 keV; Kβ: 7.1 keV; Lα: 0.7 keV

The copper peak recorded is very broad. This, and the failure to detect peaks for tin, zinc and iron, indicates that the bench top machine had poor resolution: it was not sufficiently sensitive to clearly detect peaks of elements at lower concentrations. After 1992 there is a change from bronze to a steel core plated in copper, according to the Royal Mint: Figure 8 should have displayed a peak corresponding to the Kα emission for iron at around 6.4 keV.

Calculations based on the properties of the beam suggested that the radiation from the source was energetic enough to penetrate at most 20 microns. Copper plating is typically 25 microns thick. Therefore, it did not excite any iron within the penny. The plating can be removed, but this treatment had only been applied to a 2p coin. 1p and 2p coins are made from the same type of bronze and were changed at the same time to copper-plated steel. Therefore, the 2p can be used as an acceptable substitute in this situation.
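The order of magnitude of that 20-micron figure can be sanity-checked with a simple Beer-Lambert estimate of how quickly X-rays of roughly the Cu Kα energy are attenuated in copper. The mass attenuation coefficient used below is an assumed approximate literature value, not a figure from this report.

```python
import math

# Assumed approximate values (not from this report):
MU_RHO = 52.0      # mass attenuation coefficient of Cu at ~8 keV (cm²/g)
RHO = 8.96         # density of copper (g/cm³)

mu = MU_RHO * RHO                 # linear attenuation coefficient (1/cm)
attenuation_length_um = 1e4 / mu  # depth at which intensity falls to 1/e, in microns

# Fraction of ~8 keV intensity surviving a 25 µm copper layer.
fraction_at_25um = math.exp(-mu * 25e-4)

print(f"1/e attenuation length ≈ {attenuation_length_um:.0f} µm")     # ≈ 21 µm
print(f"Intensity remaining after 25 µm of Cu ≈ {fraction_at_25um:.0%}")
```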

Figure 6. The composition of a 1928 1d coin (Pre Decimalisation)

Figure 7. The composition of a 1984 1p coin (Post Decimalisation)

Figure 8. The composition of a 2001 1p coin

Figure 9. The composition of a 1993 2p coin (with half the copper face removed)


Figure 9 shows the graph of the treated 1993 2p coin. It has an extra peak at around 6.4 keV which is indicative of the radiation of iron.

Due to the poor resolution and low penetration of the incident beam produced by the bench top machine, a handheld X-ray fluorescence analyser was used. This machine automatically determined the elements present and their relative concentrations. The aim of the work was to see if a baseline of composition could be established in order to determine legitimacy. Figure 10 shows a summary of the results gathered for coins minted in the period 1971-September 1992 and September 1992-present. The initial results showed that penny coins contain mainly copper, zinc, tin, cobalt and iron. They show that there was a dramatic change in composition in September 1992 from bronze to copper-plated steel. These results are consistent with the change from

bronze to copper-plated steel given by the Royal Mint and show that X-ray fluorescence can be used to differentiate between each type of coin. The percentages in Figure 10 show mean composition values and acceptable deviations, determined by the results of the experiment, for the two types of modern 1p coin that are in circulation. However, the range of copper and iron in post-1992 coins is much larger than in earlier ones, which may be due to manufacturing differences. As in the methodology, the source will only emit X-rays up to a maximum value,

50 keV for the handheld Niton. Therefore, the X-rays will only be able to penetrate a certain distance into the coin. If the coins have been used previously, then they will have come into contact with the air for a long period of time and so could have formed an outer layer of oxide, or the surface could have been damaged and worn away. Since the pre-1992 coins are made of a homogeneous alloy, even if a small section was sampled a very similar set of readings will have been recorded, as illustrated in Figure 11a, assuming the energy received by each coin is constant and the machine is calibrated in the same way. The post-1992 coins, in contrast, are not homogeneous alloys; they are made of copper-plated steel, as shown in Figure 11b. If the thickness of the outer layer changed due to oxidation or wear, this would affect how far the X-rays penetrated into each of the layers, as shown in Figure 11c. Hence, this would affect the copper-to-iron ratio and would contribute to large fluctuations in the readings across the sample.

The elements that make up the largest proportion of each coin are shown, and the impurities include elements such as lead, nickel and titanium that the coins will have come into contact with. The coins produced since September 1992 are made of copper-plated steel. Consequently, they contain a large mass of carbon, too light to detect. Therefore, it would not register as part of the results and may account for the missing 8.6% in the total. The impurity value for the pre-1992 coins is three times larger than that for post-1992 coins, a result of a set of readings from coins containing approximately 1% tungsten. This anomaly didn't appear in data gathered before or after this and is probably due to residue left behind from a previous experiment. Since the pre-1992 sample size is smaller than that of the post-1992 coins, it has appeared as a much larger percentage composition value.

Figure 11a: X-rays penetrat-ing a uniform alloy

Figure 11b: X-rays penetrating a non-uniform alloy

Figure 11c: X-rays penetrating a non-uniform alloy with oxide layer (d2 < d1)

Figure 10. Mean values and ranges (after outliers are removed) of each element in the two types of modern 1p
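A baseline like the one summarised in Figure 10 lends itself to a simple screening rule: a coin whose measured composition falls outside the acceptable range of every known coin type is flagged as suspect. A minimal sketch is shown below; the ranges are illustrative placeholders, since Figure 10's exact numbers are not reproduced in the text.

```python
# Illustrative baseline ranges (% by mass); placeholders, not the actual Figure 10 values.
BASELINES = {
    "bronze 1p (1971-1992)":          {"Cu": (95.0, 98.0), "Zn": (1.0, 3.0), "Sn": (0.5, 3.0)},
    "copper-plated steel 1p (1992-)": {"Cu": (55.0, 85.0), "Fe": (10.0, 40.0)},
}

def matches_baseline(measured, baseline):
    """True if every baselined element lies within its acceptable range."""
    return all(lo <= measured.get(el, 0.0) <= hi for el, (lo, hi) in baseline.items())

def classify(measured):
    """Return the coin types consistent with the measured composition, if any."""
    hits = [name for name, baseline in BASELINES.items() if matches_baseline(measured, baseline)]
    return hits or ["suspect / counterfeit"]

print(classify({"Cu": 96.5, "Zn": 1.8, "Sn": 1.2}))   # consistent with the bronze baseline
print(classify({"Cu": 99.5}))                          # outside both ranges → flagged
```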


X-ray fluorescence analysis of pre-1992 coins showed an intensity trend in tin and zinc, illustrated in Figure 12.

In 1920 the penny contained about 4% tin and 0.5% zinc and changed in 1928 to about 3.5% tin and 1.5% zinc. This was consistent with the change in composition in 1925. Due to a shortage of tin during the Second World War, the 1944 and 1945 pennies contain 0.5% tin and 2.5% zinc. Tin and zinc content returned to pre-war concentrations later in 1945, according to research, but there were no coins available from that period. The youngest coins in the pre-decimalisation sample show that, in the 1960s, the composition returns to levels similar to those during the war and after decimalisation.

CONCLUSIONS

X-ray fluorescence can be used to determine the metal content of a penny with reasonable accuracy. The Royal Mint (see 'Email from Royal Mint employee') confirmed that it is an effective method of counterfeit coin detection. The Royal Mint has used it for 15 years as part of a range of analysis techniques.

It is not the best technique for analysing non-homogeneous alloys. This could be alleviated by improving the sampling method used. The Royal Mint could be contacted to obtain a standard ingot used to mint copper coins, which could then be used as a more reliable reference when identifying counterfeits.

In order to reduce variation, there needs to be an improvement in the sampling method. Since there are different images on either side of the coin, the sides could be affected differently by the air or by the impacts the coin receives. The side that is analysed needs to stay the same. The level of tarnishing must be factored in as well; this can be done by removing the oxides by submerging the coin in a solution of dilute acid, but this can often damage the coin as it can remove some of the pure copper that coats the coin, thereby adversely affecting the results.

You could, instead, create a simple reference scale that contains coins with varying degrees of tarnish and give each coin a number; whenever you collect data for a new coin, you can use the scale to record the number that shows a similar level of tarnish. Unfortunately, there were not sufficient pre-decimalisation coins available, so a larger sample should be sourced, particularly of pennies minted in decades that have yet to be addressed. This would allow a detailed timeline of composition values to be established, which could be used to determine a newly uncovered coin's legitimacy or age. Further research is needed to investigate whether similar baselines can be established for coins with a higher incidence of counterfeiting, such as the 50p, £1 or even antique coins.

EMAIL FROM ROYAL MINT EMPLOYEE

“Keir,

Thank you for the information. I can confirm that The Royal Mint has been using XRF, amongst many other techniques for nearly 15 years. I hope that this information helps validate your conclusion. I can confirm that XRF can indeed be used for counterfeit coin detection provided that one has access to a good quality / consistent data set of known genuine articles to test against.

Best Regards,

Greg Clark
The Royal Mint"

Figure 12. A graph to show how dramatically the concentration of tin and zinc has changed in coins minted between 1920 and 1991

ACKNOWLEDGEMENTS

I would like to thank the University of Liverpool for providing me with a placement, and specifically Dr Helen Vaughan and Professor Paul Nolan for mentoring me throughout the project. Thanks also go to Sharon Burke and the Nuffield Foundation for allowing me a place on this amazing scheme. I wish to thank my parents for having the patience to endlessly proofread countless copies of my work. I would also like to thank Greg Clark from the Royal Mint for the invaluable information he gave me about the techniques they use. Thanks to Mike George, from Wythenshawe FM, for allowing me to bore the general public with the contents of this project. And finally, thanks to all my Physics teachers over the years, for helping me to see the wonders of science in all its forms.

REFERENCES

1. http://en.wikipedia.org/wiki/X-ray_fluorescence
2. http://serc.carleton.edu/research_education/geochemsheets/techniques/XRF.html
3. http://en.wikipedia.org/wiki/X-ray_crystallography
4. http://www.csrri.iit.edu/periodic-table.html
5. M. S. Shackley (2011), X-ray Fluorescence Spectrometry (XRF) in Geoarchaeology, Springer
6. http://www.royalmint.com/discover/uk-coins/counterfeit-one-pound-coins
7. http://www.royalmint.com/en/discover/uk-coins/coin-design-and-specifications/one-penny-coin
8. http://www.royalmint.com/en/discover/uk-coins/coin-design-and-specifications/two-pence-coin
9. http://en.wikipedia.org/wiki/Penny_%28British_decimal_coin%29
10. http://en.wikipedia.org/wiki/History_of_the_British_penny_%281901%E2%80%931970%29
11. Personal contact with Royal Mint employee

ABOUT THE AUTHOR

KEIR BIRCHALL is 19 years old and currently studying for an MSci Physics degree at the University of Nottingham. His research on X-ray Fluorescence won him a place in the final at the 2013 National Science and Engineering Competition. When not doing science, he enjoys photography and live music.

Cloud Storage: Virtual Databases

REVIEW ARTICLE I VIRTUAL DATABASES

CHRISTINE LI
Princeton University, New Jersey, USA. Email: [email protected]

ABSTRACT

This article aims to provide an introduction to cloud storage, which is becoming the next-generation storage architecture of the IT enterprise, as it presents users with the ability to store and access information regardless of device or location in the world. Storage capacity is nearly infinite, and data may span multiple servers. Cloud storage providers focus on two key areas, reliability and security, and cloud storage users have the option to choose between three main types: private, public, and hybrid clouds.

OVERVIEW

Cloud storage, contrary to popular belief, is not a reserve of clouds and weather. It is a service model of online databases where data is maintained, managed, and backed up in virtualized pools of storage and made available to users over a network. Dropbox, Google Drive, and Amazon S3 are all examples of online cloud storage services. The process by which the cloud handles the data storage is invisible to the user, whose link to the cloud storage is through an account over the Internet. The main advantages of cloud storage as opposed to a traditional file system include an enormous storage capacity and a variable distance between the storage location and the user's location.

GENERAL ARCHITECTURE

Cloud storage architecture centres on the delivery of storage in a highly scalable and multitenant [1] way. Generic cloud storage architecture has a front end that includes the client's computer and an API [2] to access the cloud computing system. Behind the front end is a layer of middleware that allows network computers to communicate with each other and ensures reliability and security, which are two crucial elements to the success of cloud storage. This layer implements a variety of features, such as replication and data reduction, over the traditional data placement algorithms.1 Finally, the back end contains the physical storage for data—an internal protocol that implements specific features or a traditional back end to the physical disks—and is connected to the front end through a network, usually the Internet [Figure 1].2
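To make the front end concrete, the sketch below shows a minimal Python client for a hypothetical REST-style object storage API. The base URL, bucket name and bearer token are invented for illustration and do not correspond to any real provider's interface; a real service such as Amazon S3 has its own SDK and authentication scheme.

    import requests

    API_BASE = "https://storage.example.com/v1"   # hypothetical endpoint, not a real service
    TOKEN = "replace-with-your-access-token"      # authentication is handled by the middleware layer

    def put_object(bucket, name, data: bytes):
        """Upload an object; the provider decides where it is physically stored."""
        r = requests.put(f"{API_BASE}/{bucket}/{name}",
                         data=data,
                         headers={"Authorization": f"Bearer {TOKEN}"})
        r.raise_for_status()

    def get_object(bucket, name) -> bytes:
        """Download an object from wherever the back end keeps it."""
        r = requests.get(f"{API_BASE}/{bucket}/{name}",
                         headers={"Authorization": f"Bearer {TOKEN}"})
        r.raise_for_status()
        return r.content

    if __name__ == "__main__":
        put_object("notes", "hello.txt", b"stored in the cloud")
        print(get_object("notes", "hello.txt"))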

Figure 1. Cloud Storage Architecture Diagram [Available from http://www.ibm.com/developerworks/cloud/library/cl-cloudstorage/figure1.gif]

SECURITY

Security is one of the largest concerns regarding cloud storage. No client would want to entrust their data to another company without a guarantee that no one apart from themselves can access their information. To ensure remote data integrity, a combination of encryption, authentication, and authorisation processes is used. Encryption uses a complex algorithm to encode information, and often utilizes the classic Merkle hash tree construction. Merkle trees are a type of data structure that contains a tree of summary information about a larger piece of data, and hash functions are algorithms that map large data sets of variable length to smaller data sets of a fixed length. Usually a cryptographic hash function (such as SHA-1, Whirlpool, or Tiger) is used.3 Authentication provides each user with a unique username and passcode that can be used to access data stored on the cloud system. Authorisation allows the client to list people who are permitted to access certain information saved on the database.
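A minimal sketch of the Merkle tree idea is given below in Python, using SHA-256 from the standard hashlib module in place of the hashes named above. It is illustrative only: real cloud services combine such trees with signatures and challenge-response protocols to prove that stored blocks have not been altered.

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(blocks):
        """Hash each data block, then repeatedly hash pairs until one root remains."""
        level = [sha256(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2 == 1:          # duplicate the last node if the level is odd
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"block 0", b"block 1", b"block 2", b"block 3"]
    original_root = merkle_root(blocks)

    blocks[2] = b"tampered block"            # any change to any block changes the root
    assert merkle_root(blocks) != original_root
    print("root:", original_root.hex())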

RELIABILITY

Reliability is just as vital as security to the success of a cloud storage service. Outages and service interruptions can cause entire databases to shut down temporarily, and so need to occur as infrequently as possible. Service reliability lies in the reliability of individual components, particularly hard disk drives (HDDs). To achieve 99.999% availability, redundancy of storage components is essential. High-capacity SAS [3] HDDs offer dual-port functionality—if one port goes down, the second port acts as a backup for the data to pass through.4
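The value of redundancy can be seen with a short availability calculation. The sketch below assumes, purely for illustration, that each independent path is 99.9% available on its own; the figures are not taken from any vendor.

    # Combined availability of n redundant, independent paths: the system is
    # unavailable only when every path is down at the same time.
    def combined_availability(single: float, n: int) -> float:
        return 1.0 - (1.0 - single) ** n

    single_path = 0.999   # assumed availability of one port/path (illustrative)
    for n in (1, 2, 3):
        print(f"{n} path(s): {combined_availability(single_path, n) * 100:.7f}% available")
    # 1 path  -> 99.9000000%
    # 2 paths -> 99.9999000%
    # 3 paths -> 99.9999999%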

TYPES OF CLOUD STORAGE

There are three main models of cloud storage: public, private, and hybrid clouds [Figure 2]. Public cloud is the most commonly used form of storage, in which space is "rented" from the owner of the database system.

Figure 2. Public, Private, and Hybrid Clouds [Available from http://blog.nskinc.com/Portals/62040/images/Hybrid%20Cloud%20Edit.JPG]

It is based on the standard cloud computing model, in which a service provider makes resources such as applications and storage available to the general public over a network, or the Internet.5 Private cloud, otherwise known as internal or corporate cloud, is a proprietary computing architecture that provides hosted services to a limited number of users behind a firewall.6 Storage is exclusive to a certain organisation, which has complete control over the capabilities of the storage hardware.7 Hybrid cloud is a combination of at least one public and one private cloud. It is a cloud computing environment in which an organisation provides and manages some of its own resources and uses other resources provided externally.8

NOTES

1. Multitenant: phrase used to describe multiple customers using the same public cloud.

2. Application Programming Interface (API): a language and message format used by an application program to communicate with the operating system or some other control program.

3. Serial-attached SCSI (SAS) is a method used in accessing computer peripheral devices that employs a serial (one bit at a time) means of digital data transfer over thin cables.

REFERENCES

1. Tim M. Jones, Anatomy of a Cloud Storage Infrastructure, IBM, 30 November 2010, accessed 7 July 2012. http://www.ibm.com/developerworks/cloud/library/cl-cloudstorage
2. Jonathan Strickland, How Cloud Storage Works, HowStuffWorks, accessed 1 July 2012. http://computer.howstuffworks.com/cloud-computing/cloud-storage.htm
3. Hash Tree, Wikipedia, 07 April 2012, accessed 5 July 2012. http://en.wikipedia.org/wiki/Hash_tree
4. Cloud Storage: Reliability is Key, Toshiba Corporate Blog, accessed 5 July 2012. http://storage.toshiba.com/corporateblog/post/2011/06/03/Cloud-Storage-Reliability-is-Key.aspx
5. Margaret Rouse, Public Cloud, SearchCloudComputing, May 2009, accessed 3 July 2012. http://searchcloudcomputing.techtarget.com/definition/public-cloud
6. Margaret Rouse, Private Cloud, SearchCloudComputing, June 2009, accessed 3 July 2012. http://searchcloudcomputing.techtarget.com/definition/private-cloud
7. George Crump, How To Use Cloud Storage, InformationWeek, 20 October 2010, accessed 3 July 2012. http://www.informationweek.com/news/storage/data_protection/229200607
8. Margaret Rouse, Hybrid Cloud, SearchCloudComputing, June 2010, accessed 3 July 2012. http://searchcloudcomputing.techtarget.com/definition/hybrid-cloud

ABOUT THE AUTHOR

CHRISTINE LI is from Palo Alto, California and currently attends Princeton University in Princeton, New Jersey. She is the project chair of Princeton Women in Computer Science and an active member of Princeton’s Pianists Ensemble. A project very close to her heart is KidsConnect, a mentoring program she started that raises money for OperationSmile. In her free time, she enjoys running, baking, solving logic puzzles, going for nature walks, and learning dance choreography.

Diminishing Cancer Proliferation

Evaluation of the relationship between the linker lengths of bis-dimethylaminopyridine compounds and their inhibition of choline kinase alpha and the subsequent effect on cancer cell proliferation.

ORIGINAL RESEARCH I CANCER PROLIFERATION

MERIAME BERBOUCHA WITH SEBASTIAN TROUSIL, ISRAT S ALAM, ERIC O. ABOAGYE

Imperial Centre of Translational and Experimental Medicine, London, UK. Email: [email protected]

ABSTRACT

Cancer cells display an over-expression of Choline kinase alpha (CHKA), an enzyme cooperating in cell proliferation. A decline in cell growth is observed by inhibiting the enzyme with bis-dimethylaminopyridine compounds, thus making it a putative drug target. Determining the most effective inhibitor out of the four compounds (AK001, AK002, AK003 and AK024) was of great interest. A series of experiments was conducted to establish the stated relationship by studying the effect of the inhibitors on cell proliferation and reaction rate, in addition to deriving IC50 and GI50 values. Our findings suggest that the longer the linker length, the greater the effect on cell proliferation; thus AK024 was the most effective. To develop this study further, Nuclear Magnetic Resonance spectroscopy could be carried out to determine whether inhibiting CHKA decreases the production of phosphocholine.

AIM

The aim of the project was to investigate the relationship between the linker length of bis-dimethylaminopyridine compounds and the inhibition of CHKA, as well as to study the effect on cancer cell proliferation.

CANCER

Several characteristics distinguish cancer cells from "normal" cells, as depicted in Figure 1, commonly known as the Hallmarks of Cancer. Cancerous cells are able to divide and grow indefinitely. Additionally, they can undergo sustained angiogenesis, whereby the growth of blood vessels is stimulated to supply the tumour with nutrients. Moreover, they encourage their own growth and have the ability to resist inhibitory signals. Furthermore, cancerous cells evade cell death (apoptosis) and invade surrounding tissue, as well as spreading to different parts of the body (metastasis).

Figure 1: Six main 'Hallmarks of Cancer'

Figure 2: Tumour forming

On the contrary, the characteristics of normal cells are quite dissimilar. They:
• Reproduce in a tightly regulated fashion.
• Adhere to each other correctly.
• Undergo apoptosis (programmed cell death).
• Specialise (mature).

Cancerous cells usually form after the mutation of one of the following types of genes:
• Oncogenes (encourage cell division).
• Tumour suppressor genes (stop cell multiplication).
• DNA repair genes.

Subsequently, the cell begins to divide uncontrollably and may apply pressure to, or damage, surrounding nerves and organs, causing harm. Cancer cells lose the proteins in their membrane responsible for keeping cells in the right place and become detached from surrounding cells, therefore allowing the cancer to spread to other parts of the body, known as secondary cancer or metastasis.

CHKA, a cytosolic enzyme, is highly abundant in a great variety of tumours, causing a rise in intracellular phosphocholine concentrations, as shown in Figure 3. Through an unknown mechanism, this is thought to cause malignant progression. CHKA catalyses the first reaction in the Kennedy pathway, depicted in Figure 3, in which phosphocholine is produced from choline. In subsequent reactions, an activating nucleotide is added to the phosphocholine, which allows ligation with diacylglycerol to form phosphatidylcholine. Genetic and chemical inhibition of CHKA effectively kills cancer cells; this effect is beneficial for cancer therapy.

In order to establish the relationship mentioned above and achieve the aim, the following series of experiments was required to gain essential data:
• Enzyme-based assay to derive IC50 values (the half maximal inhibitory concentration) of four different inhibitors.
• Enzyme kinetics: measuring the rate of reaction.
• Tissue culture for growing HCT-116 (a human colorectal adenocarcinoma cell line, otherwise known as colon cancer cells).
• SRB assay to study the effect of inhibitors on cancer cells by deriving GI50 values.

IC50 values are measures of the concentration of the inhibitor (the drug illustrated in Figure 5) at which a biological process is inhibited by 50%; in this case the biological process is the enzyme-catalysed reaction. By deriving these values, the effectiveness of the four inhibitors can be compared and the most potent drug can be determined. A low IC50 value indicates that an inhibitor is more effective, as a lower concentration of the drug is required to inhibit the biological process by half.
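In practice, IC50 values are read off a fitted dose-response curve rather than taken from a single measurement. The sketch below is a generic illustration using made-up activity data, not the measurements from this project: it fits a four-parameter logistic (Hill) curve with SciPy and reports the concentration at which the response lies halfway between its top and bottom plateaus.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(log_conc, bottom, top, log_ic50, hill):
        """Four-parameter logistic dose-response curve on a log-concentration axis."""
        return bottom + (top - bottom) / (1.0 + 10 ** ((log_conc - log_ic50) * hill))

    # Made-up data: log10 of drug concentration (M) and normalised enzyme activity.
    log_conc = np.array([-9, -8.5, -8, -7.5, -7, -6.5, -6, -5.5])
    activity = np.array([0.98, 0.95, 0.86, 0.62, 0.35, 0.15, 0.07, 0.04])

    params, _ = curve_fit(four_pl, log_conc, activity, p0=[0.0, 1.0, -7.0, 1.0])
    bottom, top, log_ic50, hill = params
    print(f"fitted IC50 ≈ {10 ** log_ic50 * 1e9:.0f} nM")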

The most potent drug can also be verified with an SRB assay, in which the amount of cell proliferation can be measured and GI50 values can be derived. GI50 values are measures of the concentration of the inhibitor at which growth is inhibited by 50%. The most effective inhibitor will reduce the amount of cell proliferation greatly and have a low GI50 value. Carrying out enzyme kinetics verifies once again which inhibitor molecule is the most effective, as the reaction rate of the enzyme-catalysed reaction will be greatly reduced. The nature of the inhibitor can also be established – whether it is competitive or non-competitive.

Figure 3: Part of the Kennedy Pathway showing the formation of Phosphocholine (PCho) and Adenosine diphosphate (ADP) from Choline (Cho) and Adenosine triphosphate (ATP) by the enzyme CHKA

Figure 4: Seeding cells

Figure 5: Bis-dimethylaminopyridine compound - drug molecule

A competitive inhibitor competes with the substrate molecules for the same active site on the enzyme. Thus an increase of one of the competing substances can wash out the other from the binding pocket. Non-competitive inhibitors bind to the enzyme on a site other than the active site and alter its function. Therefore an increase in substrate concentration does not alter the rate of reaction. The latter is shown in Figure 8.

The following prediction was made prior to commencing the project: varying the linker length will affect the efficacy of the inhibitor; the longer the linker length, the better it will fit in the enzyme's active site.

RESULTS

ENZYME-BASED ASSAY:

The enzyme-based assay was repeated twice and the results were plotted as a semi-logarithmic plot, producing typical sigmoidal-shaped dose-response curves. This allows more accurate curve fitting and allows IC50 values to be obtained with greater accuracy. The absorbance of ultraviolet light (y-axis), in arbitrary units, is plotted against the logarithmic drug concentration (x-axis). By analysing the data shown in Figure 6, the most effective drug can be identified. AK024, with an average IC50 value of 179.1 nM, was shown to be the most effective inhibitor, while AK003, with an average IC50 value of 3.678 μM, was the least effective. Both the velocity plots and the Lineweaver-Burk plots use the same raw data, but present it in a different manner: while the velocity plots have a linear x- and y-axis, the Lineweaver-Burk plots are double-reciprocal plots. This results in linearisation of the data and eases data analysis.
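The link between the two presentations follows from the Michaelis-Menten equation, v = Vmax[S]/(Km + [S]): plotting 1/v against 1/[S] (the Lineweaver-Burk form) gives a straight line with slope Km/Vmax and intercept 1/Vmax. The short sketch below uses invented Km and Vmax values, not the kinetic constants measured here, simply to show the linearisation.

    import numpy as np

    def michaelis_menten(s, vmax, km):
        """Initial reaction velocity as a function of substrate concentration [S]."""
        return vmax * s / (km + s)

    vmax, km = 1.0, 0.5            # invented constants, arbitrary units
    s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
    v = michaelis_menten(s, vmax, km)

    # Lineweaver-Burk: 1/v = (Km/Vmax)(1/[S]) + 1/Vmax, a straight line in (1/[S], 1/v).
    slope, intercept = np.polyfit(1 / s, 1 / v, 1)
    print("Km/Vmax ≈", round(slope, 3), "  1/Vmax ≈", round(intercept, 3))
    # A competitive inhibitor raises the apparent Km (slope) but leaves Vmax (intercept) unchanged;
    # a non-competitive inhibitor lowers Vmax, so the intercept moves instead.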

The Choline – Lineweaver-Burk plot portrays a reaction that is dependent on the substrate concentration, as Vmax is constant. Referring to the graph titled Choline – Velocity, at low concentrations of the substrate choline the initial velocity increases almost linearly with increasing substrate concentration. However, as the concentration rises, the curves begin to plateau and eventually overlap, indicating that the rate of reaction is at its maximum, Vmax. The results obtained suggest that the inhibitor was competitive with choline due to its sensitivity to substrate concentration.

The graph titled ATP – Lineweaver-Burk plot demonstrates that the inhibitor is non-competitive with ATP and Vmax varies, as depicted in the graph by the different interceptions of the y-axis. This is a result of the inhibitor binding to a site other than the substrate-binding site and therefore contributing to the lower Vmax value seen in the graphs.

Figure 6: Competitive and non-competitive inhibitors

Figure 7: Enzyme-based assay data

Figure 8: Illustrates the results from the reactions carried out for Enzyme Kinetics

ABOUT THE AUTHOR

MERIAME BERBOUCHA is 18 years old and currently reading Physics at Imperial College London. She hopes to complete her PhD within the field of medical physics and later train to become a medical physicist. Additionally, she aspires to inspire younger students into the STEM (Science, Technology, Engineering and Mathematics) subjects, and particularly to encourage young females into these male-dominated fields.

SRB ASSAY:

AK024, with the lowest GI50 value of 0.337 μM, is suggested to be the most potent inhibitor, having the greatest effect on cell proliferation. This corresponds to the conclusion made with the enzyme-based assay results, as AK024 was also shown there to be the most effective inhibitor, thus increasing the reliability of the results obtained. AK003 is portrayed as the least effective inhibitor in this assay, which is also supported by the results from the enzyme-based assay.

CONCLUSIONS:

From the data collected, the following can be concluded: AK024, with the longest linker length of 21 Å, was the most effective inhibitor, as shown by its low IC50 and GI50 values from the enzyme-based and SRB assays, respectively. Higher concentrations of the inhibitor lead to greater inhibition, and the rate of the reaction decreased significantly, as shown by the enzyme kinetics data. Furthermore, this inhibitor is competitive with choline but non-competitive with ATP. The relationship discovered (the longer the linker length, the better the inhibition and the greater the reduction in cell proliferation) is depicted in the table below.

Inhibitor   Average IC50 (nM)   Average GI50 (μM)   Distance between charges / linker length (Å)
AK024       179.05              0.337               21     (best)
AK002       444.8               3.21                16.5
AK001       773.25              3.71                15
AK003       3678                3.99                13.5   (worst)

Inhibition of CHKA effectively reduces the amount of cancer cell proliferation, as shown in this investigation, making CHKA a potential target for cancer therapy. Elevated phosphocholine formation can further be utilised as an imaging biomarker to aid diagnosis and treatment surveillance of cancer patients.

ACKNOWLEDGEMENTS

First and foremost I offer my sincerest gratitude to my supervisors, Dr. Israt S Alam and Sebastian Trousil, who have made this a unique experience and have supplied me with outstanding help, support, patience and knowledge throughout the project duration. Additionally, a huge thank you goes to Professor Eric O. Aboagye and Ms Lynn Maslen for giving me this invaluable and irreplaceable opportunity. Thanks go to the whole Comprehensive Cancer Imaging Centre team, who have provided on-going assistance. A special thank you goes to the Nuffield Science Bursary Scheme and Tamily Phillips for allowing me to carry out this challenging, rewarding and enjoyable research project. I offer my greatest gratitude to Mr S. Newton, without whose assistance with the Nuffield application form and continuous support and guidance this project would not have been possible. He inspired me to do my best, take on new challenges and achieve my best potential, for which I am very grateful.

REFERENCES:

1. Figure 1: Hanahan, D. and Weinberg, R.A. (January 7, 2000) The Hallmarks of Cancer. Retrieved 28 July 2012 from: http://www.weizmann.ac.il/home/fedomany/Bioinfo05/lecture6_Hanahan.pdf at 16:57
2. Figure 2: Cancer Research UK (November 3, 2011) The cancer cell. Retrieved July 28, 2012 from: http://cancerhelp.cancerresearchuk.org/about-cancer/what-is-cancer/cells/the-cancer-cell at 12:04
3. Figure 8: 3.6 Enzymes. Retrieved 8 September 2012 from: http://www.tokresource.org/tok_classes/biobiobio/biomenu/enzymes/competitive_inhibit_c_la_784.jpg at 08:21
4. Figures 3-7 and 9: These figures were produced or provided by the Imperial Centre of Translational and Experimental Medicine.

Figure 9: SRB data

Isotopes

REVIEW ARTICLE I ISOTOPES

ELISSA FOORD
Rugby School, Warwickshire, UK. Email: [email protected]

ABSTRACT

Isotopes are atoms of an element that have the same atomic number but a different atomic mass. In this article I am going to look at the formation and differences of various isotopes, including carbon and bromine, as well as going on to explore their practical uses.

ISOTOPES

Isotopes are atoms of an element with the same number of protons and different numbers of neutrons; therefore, their atomic masses are different, whereas their atomic number is the same. Many elements have multiple isotopes; for example, tin has ten stable isotopes, the largest number observed for any element1.

Isotopes have varying physical properties, but almost identical chemical properties. The different number of neutrons means certain isotopes are heavier. Another physical difference is stability: protons are held together in the nucleus by the strong nuclear force, which overcomes their electrostatic repulsion. However, the presence of neutrons affects this balance, and too many or too few will cause the nucleus to undergo radioactive decay until it reaches a stable state. Chemical properties are determined by the number and configuration of electrons, which is identical in each isotope. However, isotopes with greater neutron numbers tend to undergo reactions more slowly than lighter isotopes of the same element; this is the kinetic isotope effect. It particularly affects protium and deuterium, as the mass difference is proportionally greater than with other isotopes2. This effect is normally quite small. Another difference which affects their chemical properties is that the vibrational mode (the way the bonds within a molecule move) is changed by the changed mass of isotopes, which affects how photons are absorbed3.

Chlorine has 24 isotopes (from 28Cl to 51Cl), mainly 37Cl and 35Cl (abundances of 24.24% and 75.76% respectively), and most of these isotopes (except 36Cl) have half-lives of under one hour. In many applications (such as a disinfectant) 37Cl and 35Cl are used together. An independent application of 37Cl is in neutrino (neutral sub-atomic particles with a mass very close to zero) detection: absorption of an electron neutrino brings about reactions which cause the detection of Auger electrons (electrons emitted from the atom when extra energy is supplied to it by the movement of a high-energy-level electron), demonstrating that a neutrino event occurred4. 36Cl is used to date ground water which is more than a million years old, as it has a half-life of around 305,000 years.

Carbon has 15 (known) isotopes, which are mainly unstable. Carbon-12 is stable, and constitutes about 98.89% of natural carbon5. Carbon-14 is radioactive and is used for carbon dating: cosmic rays in the atmosphere release neutrons which are absorbed into nitrogen:

14N + n → 14C + p

The carbon-14 reacts with oxygen to form 14CO2. This is taken up by plants and made into different organic compounds which are passed up the food chain; while plants and animals are alive, they are continuously exchanging carbon-14 with the atmosphere, but this ceases at death. The amount of carbon-14 gradually decreases by beta decay (forming 14N). The rate of decay can show the number of C-14 atoms, which is compared to the number in the organism at death, and so the death date can be calculated (it is assumed that cosmic rays are equally active in the atmosphere over long periods of time). 13C can also be used for dating marine carbonates and in NMR.
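Carbon dating reduces to a short calculation: the fraction of carbon-14 remaining after a time t is (1/2)^(t / t_half), so the age follows from the measured fraction. A minimal sketch, using the conventional carbon-14 half-life of roughly 5,730 years:

    import math

    C14_HALF_LIFE_YEARS = 5730          # approximate half-life of carbon-14

    def age_from_c14_fraction(remaining_fraction: float) -> float:
        """Years since death, given the fraction of carbon-14 left relative to a living organism."""
        return -C14_HALF_LIFE_YEARS * math.log(remaining_fraction) / math.log(2)

    # A sample retaining 25% of its original carbon-14 has been through two half-lives.
    print(round(age_from_c14_fraction(0.25)))   # -> 11460 years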

Bromine has two stable isotopes and 30 unstable ones: its most stable radioisotope (though some are metastable) is 72Br, and its stable isotopes are 79Br and 81Br. Many bromine isotopes are products of fission, and several emit delayed neutrons (these are emitted by a fission product a few milliseconds to a few minutes after nuclear fission, when the new nucleus is in an excited state). Bromine isotopes can be used in medicine; for example, Bromine-81 is used to make a krypton isotope used in diagnostics, and Bromine-77 has been suggested for use in radiotherapy.

The isotopic ratio is the proportional abundance of each isotope naturally found on a planet; the isotopic ratio within a substance is the proportion of each isotope present in the mixture. An example is the isotopic ratio of 37Cl to 35Cl, about 1:3, so the RAM = (37 x 1/4) + (35 x 3/4) = 35.5. The isotopic ratio within a substance can be detected using mass spectrometry, which is particularly good at detecting 37Cl and 35Cl 6: detecting different isotopic ratios is especially useful for dating substances and for paleoclimatology. Isotopic ratios are used in carbon dating, where the ratio is compared to the Vienna Pee Dee Belemnite Standard7. Nitrogen, sulphur and oxygen isotopic ratios have similar uses. The natural isotopic ratio is different on different planets; in pre-solar grains the isotopic ratio can be radically different8. Some isotopes are abundant because they are stable and so, once formed, take a long time to disappear (for example, Carbon-12), whereas others are abundant because, although they are unstable, they are being created rapidly by other decays (for example, Samarium-148 and Samarium-147).
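The RAM calculation above generalises to any element whose isotope masses and abundances are known. The small sketch below treats the mass numbers as exact masses for simplicity and uses the chlorine abundances quoted earlier:

    def relative_atomic_mass(isotopes):
        """Weighted mean of isotope mass numbers, weighted by fractional abundance."""
        total_abundance = sum(abundance for _, abundance in isotopes)
        return sum(mass * abundance for mass, abundance in isotopes) / total_abundance

    chlorine = [(35, 0.7576), (37, 0.2424)]   # (mass number, natural abundance) from the text
    print(round(relative_atomic_mass(chlorine), 2))   # -> 35.48, close to the familiar 35.5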

In nature, three isotopes of hydrogen are found: 1H (protium), 2H (deuterium) and 3H (tritium), and five others have been made in the laboratory. This means that some H2O molecules are marginally heavier: H2O containing deuterium is called 'heavy water'. Heavy water is used in some nuclear power stations to slow down neutrons so that they will be more likely to cause radioactive decay by being absorbed by a uranium-235 nucleus; heavy water is better than water with mainly protium atoms as it absorbs fewer of the neutrons, so it keeps the reaction going better. Tritium can be used in nuclear weapons, self-powered lights and also, mainly in the past, as a radiolabel in chemistry and biology. Muonic helium is sometimes considered an isotope of hydrogen too and behaves more like hydrogen than helium in its chemical reactions (muonic helium has a muon (a lepton similar to an electron) instead of one of its electrons, which orbits the nucleus much more closely)9.

C-13 NMR (http://showme.physics.drexel.edu)

CANDU heavy-water reactor (from http://en.wikipedia.org/wiki/CANDU_reactor)

Isotope ratio mass spectrometers normally bend a beam of ionized particles using a magnetic field towards conductive beakers (Faraday cups) which catch charged particles; these produce an electric current as particles collide with them, because of the particles' charge. By this process they can make the measurements, and they have the sensitivity required to detect even very small proportions of isotopes within a substance10.

REFERENCES

1. Trace Sciences International (http://www.tracesciences.com/sn.htm).
2. Westheimer, F. H. "The Magnitude of the Primary Kinetic Isotope Effect for Compounds of Hydrogen and Deuterium." Chemical Reviews 61.3 (1961): 265-273.
3. M.A. Thompson, Cambridge Pre-U Chemistry (2009).
4. F.H. Shu (1982). The Physical Universe: An Introduction to Astronomy. University Science Books. p. 122.
5. General information on elements retrieved from www.webelements.com.
6. M.A. Thompson, Cambridge Pre-U Chemistry (2009).
7. http://www.physics.utoronto.ca/students/undergraduate-courses/course-homepages/jpa305h1-310h1/stableisotopes.pdf
8. http://www.physics.utoronto.ca/students/undergraduate-courses/course-homepages/jpa305h1-310h1/stableisotopes.pdf
9. http://www.physics.utoronto.ca/students/undergraduate-courses/course-homepages/jpa305h1-310h1/stableisotopes.pdf
10. M.A. Thompson, Cambridge Pre-U Chemistry (2009).

ABOUT THE AUTHOR

ELISSA FOORD is studying Chemistry, Maths, Further Maths, Latin and Greek in the Lower Sixth form; she is also keen on modern languages, which she tries to keep going on the side. In terms of science, she is doing the Pre-U course for Chemistry, and is particularly interested in organic chemistry. She plays in a school team for netball, and enjoys running and sailing - she also likes music (she plays the violin and has sung in the choir) and is currently involved in a school musical. She lives in Oxfordshire, and goes to Rugby School, Warwickshire, where she has been for three years.

FROM MOZART TO MYTHS:
Dispelling the "Mozart Effect"

REVIEW ARTICLE I MOZART EFFECT

ALEXANDRA CONG
Lynbrook High School. Email: [email protected]

ABSTRACT

Music is, and has been since ancient times, an integral part of human culture. Its many purposes range from entertainment to career options and it plays a large part in cultural enrichment. It's a commonly accepted idea that listening to music, specifically classical music, can even make babies smarter. This so-called "Mozart effect," named after the composer, whose music is thought to be especially effectual, is common in daily usage; it is not rare to hear mothers speaking of how they have their babies listen to Mozart, even in the uterus, in the hope that it will make their children more intelligent.

DISCUSSION

However, this belief is not the conclusion of scientific experiment, but rather an urban myth with its basis in science; it has been so exaggerated that it no longer resembles the conclusions made by science. In fact, numerous scientific studies 1,7 have been conducted on the topic which actually dispel the idea of the "Mozart effect" and render it mere scientific legend.

On the other hand, studies 2,4,5,7,10 also suggest that actually learning an instrument has real effects on the brain, from the more basic motor and auditory functions to higher level functions. Although the popular belief - that merely listening to certain types of music can have lasting positive effects, such as increased general intelligence (especially in children) - has been disproven by scientific evidence 1,7, musical training, for a period of three to four years, actually does improve verbal abilities in young children.

The Mozart effect was conceived in 1991 by researcher Alfred Tomatis, who wrote the book Pourquoi Mozart? [14], which describes how listening to Mozart could improve speech and auditory disorders and help disabled children improve learning skills. Two years later, the media introduced the concept into popular culture, after neurobiologists Rauscher, Shaw, and Ky conducted a study 9 investigating the effects of listening to music on spatial-temporal reasoning (the ability to visualize and manipulate mental images and sequences). This entailed a group of 36 college students completing tasks measuring spatial Intelligence Quotient (IQ) after 10 minutes of: silence, listening to a relaxation tape, and listening to Mozart's sonata for two pianos in D Major (K. 448). The study demonstrated that listening to Mozart increased spatial IQ by about 8 to 9 points (using the Stanford-Binet Intelligence Scale) over silence or listening to relaxation tapes for a period of 10 to 15 minutes. 9 It was this result that the media took and exaggerated, and this eventually grew and was distorted into the popularized interpretation of the Mozart effect.

After the study was published, it caught the attention of the Associated Press, and soon became headline news across the United States. Instead of reporting that listening to Mozart improved spatial reasoning abilities in college students for a very limited amount of time, the newspapers reported that Mozart increased general intelligence.11 As information is passed from researcher to newspaper and, in turn, to consumer, facts generally tend to become blurred and sensationalized; truths are stretched and rumours are accepted as fact. 1 The Mozart effect and the manner in which it permeated the media and common knowledge, becoming distorted and dramatically different along the way, is a prime example of this phenomenon.

In addition to claiming improved intelligence in general, somehow the notion spread that, in particular, Mozart was beneficial for young children. Rauscher stated in an interview with National Public Radio (NPR), 'Generalising these results to children is one of the first things that went wrong. Somehow or another the myth started exploding that children that listen to classical music from a young age will do better on the SAT, they'll score better on intelligence tests in general, and so forth'. 11 It is worth noting that even in experiments following Rauscher's, young children were not used as test subjects. 6,7,8,12,13 Nevertheless, this myth became so widely known and accepted that in 1998 the governor of Georgia asked for funding to give each child in the state a CD of classical music. 1

The fact that the original study had nothing to do with showing improved general intelligence after listening to Mozart was lost on the general public; the scientific community, however, was sceptical. Disregarding the jump from Mozart improving spatial-temporal reasoning to Mozart improving IQ which the media and society made, whether listening to Mozart truly improved spatial reasoning still remained uncertain. The original experiment was flawed in that it used a relatively small sample size (only 36 people 9), which could be small enough to skew the results. Additionally, it was only one experiment, and therefore cannot provide conclusive results on its own. In order for the conclusion that listening to Mozart improves spatial reasoning abilities to be considered scientifically valid, the results must be reproducible by other scientists. Accordingly, many scientists conducted their own experiments similar to Rauscher's to attempt to replicate the original findings.

Subsequent studies differed greatly in their results. In 1995, Rauscher et al. performed a revised version of their original experiment, using 79 students instead of 36 and minimalist music by Philip Glass, audio-taped stories, and dance music instead of the relaxation tape used previously. [8] The original Mozart sonata and the silence used for the control group were preserved from the first experiment. The results confirmed those of the

original experiment: only the Mozart group showed an appreciable increase in spatial ability, which lasted for 10-15 minutes. 8 On the other hand, researchers Newman et al., who conducted a study on 114 students, found that while the students performed better on the tests after listening to music, the relaxation tape they used (similar to the one used by Rauscher) also produced similar improvements. 7 Stough, Kerkin, Bates, and Mangan also found no correlation between listening to Mozart's music versus dance music or silence in their study of 30 students. 12 However, both of these studies used Raven's Progressive Matrices as opposed to the Stanford-Binet Intelligence Scale used by Rauscher as the tool for measuring spatial intelligence. Raven's Progressive Matrices measures spatial recognition, which is simply the ability to recognize similarities among images, while the Stanford-Binet test measures spatial-temporal reasoning, which is the ability to actually manipulate and transform the images, so this difference in tests could potentially be responsible for the differing results. 13

Regardless of the flaws in using different testing methods, further study indicated that there were other factors which could potentially cause the spatial-temporal reasoning improvement: mental arousal and enjoyment. It was already well documented that mental arousal, as well as a positive mood, improves cognitive function, while negative moods or boredom tend to inhibit mental ability. 13 Researchers hypothesized that the slight increase found by Rauscher was due to the fact that listening to Mozart is more mentally stimulating and enjoyable than listening to relaxation tapes, minimalist music, or silence, and set out to confirm (or disprove) that theory.

When Schellenberg and Nantais performed an experiment in which 84 college students listened to Mozart, Schubert (a composer of a slightly later time period than Mozart, but whose music is of similar complexity and organization), an audio tape of a short story, or silence, they found their results to be consistent with their prediction.6 Performance after listening to Mozart, Schubert, and the short story was all improved over performance after listening to silence,6 implying that listening to something which is of sufficient complexity to stimulate the brain will improve spatial reasoning regardless of what it is. Additionally, the researchers polled the students on their preferences for Mozart or the short story and found that they performed better after listening to whichever one they preferred. 6

A similar study was conducted by Schellenberg, Thompson, and Husain two years later, comparing the effects of listening to the same sonata of Mozart to an adagio by Albinoni. The main contrast between these two pieces is that, while the Mozart is cheery-sounding and upbeat, the Albinoni is slow and described as sounding depressed or dejected. They found that listening to Mozart induced a positive mood and improved spatial performance over silence, while listening to Albinoni induced a negative mood and, as expected, did not affect spatial performance. 13

What all of this indicates, ultimately, is that listening to even complex music such as Mozart's has, at best, a modest and temporary effect on spatial intelligence. As this improvement can likely be attributed to psychological reactions that listening to music triggers, it is therefore highly doubtful that Mozart, as the general public believes, makes one 'smarter'. Rather, one simple study caught the attention of the media and exploded into a myth with a seemingly trustworthy basis – after all, it did come from a scientific experiment.

Subsequent studies have only confirmed the original result that any improved spatial-temporal reasoning is short-lived, and have also provided evidence that the improvement is due to the enjoyment factor in listening to Mozart. However, this is not to say that Mozart, or classical music in general, has no effect on intelligence. Although merely listening to classical music is not enough to affect the brain, musical training for a sufficiently long period of time definitely has a positive impact on the brain. 2,5,10

It makes sense that musical training would shape the brain to be different from those who have never learned music; after all, learning to play an instrument requires precise coordination of the fingers, translation of visual cues into motor functions, and memorisation of anywhere from minutes to hours of music. Few would debate the statement that musical training would improve motor, auditory, and visual functions, as they are directly related to playing music, and the differences in the regions of the brain related to these between musicians and non-musicians have been shown by many scientific studies. 2,3,4,10 Neuroscientists Gaser and Schlaug have shown that the brains of musicians

have significantly more grey matter in the motor, auditory, and visual regions, suggesting that this adaptation is beneficial for instrument performance, and also that musicians have an increased ability to process multimodal sensory information. 3

While this could possibly be the result of nature and not nurture (people with more developed functions relating to music may be predisposed to pursue it), when Schlaug et al imaged the brains of children who intended to start music lessons and compared them to those of children who had no intention to learn music, no significant differences were found, implying that the differences found in adult musicians are the result of their training. 10

Additionally, other studies show evidence for brain plasticity, or the notion that the brain is capable of changing its structure and organization in response to stimuli. Moreno et al. found that only six months of training was sufficient to improve pitch discrimination abilities, both in music and in spoken word, in eight-year-olds, in comparison to children who were given painting lessons.5 Brain scans also showed that there had been neural changes in the music group, as the brain waves taken from before and after differed. 5

The process by which training one function improves others is referred to as transfer. In the cases described, where musical training has an effect on motor, auditory, visual, and multimodal sensory processing, the type of transfer shown is near transfer (when the two domains of transfer are relatively similar), since these skills are all directly involved in learning an instrument.

Near transfer is relatively common and easy to show, 2 compared to far transfer, which is when the transfer domains are not immediately linked. Far transfer is rarer and more difficult to prove, as well as less intuitively believable, [2] than near transfer, since far transfer involves overlap of brain regions and implies that the same areas of the brain can be responsible for unrelated functions. Learning music and becoming better at vocabulary, for instance, is an example of far transfer, and surprisingly, musical training for a few years in young children actually can improve verbal skills.

Hyde et al. did an experiment testing for near and far transfer after 15 months of keyboard lessons in children who were an average of 6 years old at the commencement of training, and found that only near transfer was prevalent. 4 When tested against a control group, the keyboard group performed better on finger motor skills and on melodic and rhythmic discrimination, but showed no improvement on different far transfer tests measuring spatial reasoning and recognition and verbal abilities. 4

Although at first this seems to be evidence for a lack of far transfer as opposed to the presence of it, the likely reason for no improvement being found stems from the short duration of the training. When Schlaug et al. did a similar experiment, training the children in an instrument (the majority chose keyboard) for 14 months and then having them take similar tests, the results were the same, in that only near transfer motor and auditory levels were increased and no far transfer was apparent. [10] However, brain scans showed that there were neural changes consistent with those which would be present if far transfer were to occur. 10 Further testing, of slightly older children aged 9-11 who had had on average four years of musical training, showed that the musicians had significantly better results than matched non-musicians on not only the near transfer tests, but the far transfer test of verbal ability and vocabulary as well. 10

Perhaps the most convincing evidence for musical training improving vocabulary is the study done by Forgeard et al. Near transfer and far transfer tests were done on children who had studied a musical instrument for at least three years, and in addition to better performance on the auditory and motor tests, they were also shown to have improved verbal skills and non-verbal reasoning over the control group. 2 When vocabulary was removed as a factor in testing non-verbal reasoning, the musicians' superiority disappeared, but not vice versa, indicating that while the reason for their better non-verbal reasoning is due to improved vocabulary, the converse is not true.2 In addition, the length of training was found to have a direct relation with the amount of verbal improvement, providing even more evidence for the link between music lessons and increased vocabulary. 2

While these results merely show correlation and not necessarily causation, the evidence for brain plasticity shows that the brain is indeed capable of adapting and changing in response to repeated training, and the correlation between music and vocabulary is so strong that, put together, it is definitive that musical training for a period of at least three to four years improves

verbal ability. 2

CONCLUSIONS

While many believe in the myth that merely listening to Mozart or similar music can increase general intelligence, few are actually aware of the roots of this myth and of the limited, temporary effect. Additional evidence has even shown that the slight improvement can be attributed to enjoyment of Mozart's music, making the effect applicable to anything which is enjoyable to listen to and marking the "Mozart effect" as no more than a scientific myth.

Perhaps the reason why this myth became so popular is laziness: in all honesty, who doesn't like the idea of increasing intelligence while passively listening to music which is, at the very worst, a little dull? However, there is truly no such thing as a free lunch, and what people should be focusing on instead are the real benefits of actually learning a musical instrument. With a few years of dedication and perseverance, in addition to the joys of being able to play timeless classical music, children also gain valuable verbal skills which undoubtedly are beneficial for the rest of their lives.

ABOUT THE AUTHOR

ALEXANDRA CONG, 17 years old, is a 12th grader at Lynbrook High School in San Jose, California. She has been interested in math and science since she was little, and has pursued these interests throughout high school. She has participated in the national semifinals for the USA Biology Olympiad and US National Chemistry Olympiad, as well as in the USA Junior Mathematical Olympiad. Outside of school, she's been playing piano since she was six years old, garnering numerous awards and honors in her area. She also enjoys shopping, watching movies, reading books, taking walks, and staying up late for no good reason.

REFERENCES

1. Bangerter, Adrian and Chip Heath. “The Mozart effect: Tracking the evolution of a scientific legend.” British Journal of Social Psychology (2004): 605–623.

2. Forgeard, Marie, et al. “Practicing a Musical Instrument in Childhood is Associated with Enhanced Verbal Ability and Nonverbal Reasoning.” Public Library of Science One (2008).

3. Gaser, Christian and Gottfried Schlaug. “Brain Structures Differ between Musicians and Non-Musicians.” The Journal of Neuroscience (2003): 9240-9245.

4. Hyde, Krista L., et al. “Musical Training Shapes Structural Brain Development.” The Journal of Neuroscience (2009): 3019-3025.

5. Moreno, Sylvain, et al. “Musical Training Influences Linguistic Abilities in 8-Year-Old Children: More Evidence for Brain Plasticity.” Cerebral Cortex (2009): 712-723.

6. Nantais, Kristin M. and E. Glenn Schellenberg. “The Mozart Effect: An Artifact of Preference.” Psychological Science (1999): 370-373.

7. Newman, J, et al. “An experimental test of “the Mozart effect”: does listening to his music improve spatial ability?” Perceptual and Motor Skills (1995): 1379-1387.

8. Rauscher, Frances H., Gordon L. Shaw and Katherine N. Ky. “Listening to Mozart enhances spatial-temporal reasoning: Towards a neurophysiological basis.” Neuroscience Letters (1995): 44-47.

9. —. “Music and Spatial Task Performance.” Nature (1993): 611.

10. Schlaug, Gottfried, et al. "Effects of Music Training on the Child's Brain and Cognitive Development." Annals New York Academy of Sciences (2005): 219-230.

11. Spiegel, Alix. "'Mozart Effect' Was Just What We Wanted To Hear." 28 June 2010. NPR: National Public Radio. 15 January 2011 <http://www.npr.org/templates/story/story.php?storyId=128104580>.

12. Stough, Con, et al. “Music and spatial IQ.” Personality and Individual Differences (1994): 695.

13. Thompson, William Forde, E. Glenn Schellenberg and Gabriela Husain. "Arousal, Mood, and the Mozart Effect." Psychological Science (2001): 248-251.

14. Tomatis, Alfred. Pourquoi Mozart? Paris: Diffusion, Hachette, 1991. Print

Consonance

REVIEW ARTICLE I CONSONANCE

ROSIE TAYLOR
The King's School Canterbury, Kent, UK. Email: [email protected]

ABSTRACT

The term consonance comes from the Latin word consonare, which means sounding together. The opposite of consonance, or perhaps the absence of it, is described as dissonance. There are many theories behind what makes a number of musical notes either consonant or dissonant, and in this article I will look at one of the leading theories on this matter.

OVERVIEW

Everyone hears music every day, whether intentionally or in the background, but have you ever considered how music actually works? When a note is sung, or played on an instrument, sound waves are produced – travelling compressions in the air. These have a particular set of frequencies: the fundamental frequency and many overtones to this sound. Each overtone is described as a 'partial' of the overall sound, which can be described as either harmonic or inharmonic. A harmonic partial has a frequency that is an integer multiple of the fundamental frequency, whereas an inharmonic partial has a frequency which is not an integer multiple of the fundamental frequency. The musical pitch of a note is generally perceived as the lowest sounding partial of the overall sound wave, or the fundamental frequency.

So then, what is harmony? We know simply that it is when two or more musical notes sound nice, or pleasing with one another. But, why does this happen with some combinations of notes and not with others? Why, to ask the same question in a different way, are some sounds consonant and others dissonant?

Many philosophers say that harmony lies in the perception of order, and this can be explained by the theory of frequency ratios.

Galileo first argued this point looking at pendulums.

Consider two pendulums swinging with frequencies of ratio 2:3, for example two pendulums with lengths 1 m and 4/9 m. If you start swinging these two pendulums together they will immediately get out of step. However, in the time it takes one to do two swings, the other will do three swings, so they will come into step for repetitive instants: at the beginning of each third swing of the longer pendulum, and each fourth swing of the shorter pendulum. They will keep swinging in and out of step in this periodic manner.

In music, one of the most important intervals is called a perfect fifth (the interval between the second and third notes of 'Twinkle, Twinkle Little Star'). This interval has a frequency ratio of 3/2, meaning the frequency of the higher of the two notes is 1.5 times the frequency of the lower note, the analogous situation to that of the two pendulums described above. Could it be that, when judging if some combinations of sounds are consonant, the ear is detecting a pattern of regularity, such as the repeated moment when these two pendulums swing in phase for an instant?

Figure 1. Frequency is inversely proportional to the square root of the length of a pendulum.

A major triad is made up of three notes: the root note with the major third and the perfect fifth above it. This sort of chord is common throughout music through the ages. It is generally thought to sound consonant, and this is partly due to the frequency ratios of these three notes being 4:5:6.
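The 'coming back into step' idea can be made concrete: for two notes whose frequencies are in the ratio p:q (in lowest terms), the combined pattern repeats after q cycles of the lower note and p cycles of the higher one. The sketch below is a generic illustration of that arithmetic rather than anything from the referenced sources; the 45:32 tritone ratio is a commonly quoted just-intonation value.

    from math import gcd

    def cycles_to_realign(p: int, q: int):
        """For a frequency ratio p:q (higher:lower), cycles of each note before they line up again."""
        g = gcd(p, q)
        p, q = p // g, q // g             # reduce the ratio to lowest terms
        return q, p                       # lower note does q cycles while the higher note does p

    for name, (p, q) in {"perfect fifth": (3, 2), "major third": (5, 4), "tritone": (45, 32)}.items():
        lower, higher = cycles_to_realign(p, q)
        print(f"{name}: pattern repeats every {lower} cycles of the lower note ({higher} of the higher)")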

At a certain point, the numbers in the frequency ratio become too large to sound consonant, and the sound is described as dissonant. At this point it takes too many wave cycles for the sound waves of the different notes to line up, and our ears and brain cannot find a regular pattern. This frequency-ratio theory is the most widely held belief among musicians as to why some notes sound consonant with one another, but it remains a theory, as it is hard to prove without a deeper understanding of the brain.

REFERENCES

1. Johnston, Ian: 'Measured Tones: The Interplay of Physics and Music'
2. Heimiller, Joseph: 'Where Math meets Music', available at: http://www.musicmasterworks.com/WhereMathMeetsMusic.html

Figure 2. Three green waves correspond to two red waves in this wave pattern. [Available from http://www.musicmasterworks.com/WhereMathMeetsMusic.html]

ABOUT THE AUTHOR

ROSIE TAYLOR is 18 and doing A2s in Further Maths, Physics and Chemistry at King’s Canterbury. She is planning to study Natural Sciences at University.


REVIEW ARTICLE | The Schmallenberg Virus

The Schmallenberg Virus

ELLIE PRICE, Haybridge Sixth Form, Hagley, Worcestershire. Email: [email protected]

ABSTRACT

The Schmallenberg virus, an arbovirus, caused a relatively new outbreak of disease in 2012. It has been found in Belgium, the Netherlands, Germany and the UK. The name was derived from the German town where it was first confirmed. It mainly causes abortion and birth defects in sheep, cattle and goats, but can also cause mild disease in adult cattle, in which case it is often marked by diarrhoea (scour), pyrexia or a drop in milk yield.

It has been estimated that the virus was transmitted via midges that came to the UK from Europe, as opposed to ticks or mosquitoes, and is thought to have arrived in the late summer/early autumn of 2011. This is thought to have been the beginning of the outbreak, given the effect on the livestock of 2012 and the fact that midges are extremely active during the summer months. The Department for Environment, Food and Rural Affairs (DEFRA) confirmed the disease as present in the UK on 22nd January 2012.

Figures 1 and 2 - Spread of the disease (23rd February 2012)

As shown on the map, the concentration of the virus began in the South East of the UK, but this changed rapidly, with 11 cases being confirmed within a week at one point. Although Cornwall seemed to show an anomaly, the virus may have been found there due to unreported cases along the south coast, or the farm buying an infected animal from a market in the South East. The virus can be spread across the UK through the sale of infected livestock in which the disease has not yet been detected. Statistics show that sheep suffer more than cattle with the virus, and present more symptoms. This may be due to the fact that lambing season starts before the calving season, and therefore cases may be present but not seen in cattle.

The virus was not ‘notifiable’, meaning cases did not have to be reported, but the NFU (National Farmers Union) strongly urges that any new case is reported and that farmers are extremely vigilant for the symptoms in adult animals. Symptoms can be a high temperature (40°C+), scour and a low milk yield, but these are only seen in cattle; adult sheep and goats seem to show no sign of being infected. To prevent the spread, it is recommended that any animal going to market is checked by a vet before being sold on, but this is not required; the decision to sell the animal on is then between the farmer and the veterinarian. Russia banned all livestock imports on 18th January 2012 to prevent the spread of the disease there. There is no indication that the virus can be transmitted to humans, through consumption of meat products or otherwise; this was investigated, and DEFRA confirmed it is not transmissible.

Figure 3 - A lamb born deformed due to the Schmallenberg virus.

THE TRANSMITTING VECTORS

The theory is that the virus was brought over by a plume of midges from Europe. Midges are small parasites that feed off the blood of other living creatures, much like mosquitoes.

The disease is spread by midges biting an infected animal, taking the virus into their blood and moving on to another animal. When they bite the next animal they inject their own saliva, which contains anticoagulants, but they also inject small amounts of the previous animal’s blood. This allows the virus to enter the new animal’s bloodstream and so pass on the disease. As midges travel in “plumes”, many midges can bite a single infected animal and then spread the virus throughout a large number of livestock. A similar mechanism is seen in humans when mosquitoes spread malaria.

Figure 4 - It is thought the disease was brought over by a plume of midges at the end of September 2011. Image: Wikipedia.com

Another disease spread in this way was Bluetongue, a virus affecting ruminants such as cattle and sheep that develops quickly and can cause death within a week. The UK was declared free of Bluetongue in June 2011.

ECONOMIC IMPACT

The virus has had a devastating economic impact on many farms, with many farmers losing approximately 20% of their lambs since the outbreak began. The price of a lamb is currently about £80-£95 per head, so losing 20% of the lambs in a flock quickly adds up to a substantial sum. The farmers also lose the money put into feeding and caring for the ewes in order to produce a lamb that is then born dead because of the Schmallenberg virus.
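As a purely illustrative back-of-the-envelope calculation (the flock size below is an assumption, not a figure from this article), the scale of the loss can be sketched in a few lines of Python:

expected_lambs = 500             # hypothetical flock: lambs expected in a season
loss_fraction = 0.20             # roughly 20% of lambs lost, as quoted above
price_low, price_high = 80, 95   # quoted price range in pounds per head

lambs_lost = expected_lambs * loss_fraction
print(f"Lambs lost: {lambs_lost:.0f}")
print(f"Lost sales: £{lambs_lost * price_low:,.0f} to £{lambs_lost * price_high:,.0f}")
# For 500 expected lambs this is 100 lambs, or £8,000-£9,500 in lost sales,
# before counting the cost of feeding and caring for the ewes.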

In a barn-kept flock, the loss is usually only that of the lambs, with the ewes being kept alive because an assisted delivery is often required due to the limb deformities that are the characteristic trait of the Schmallenberg virus. Farmers who keep sheep in fields or on hills are also likely to lose the ewe, as she will be unable to give birth by herself. Though this is a risk under ordinary circumstances, it is particularly significant with the rise of the Schmallenberg virus, and depends on how severely deformed the lamb is. Many lambs can be expected to be in abnormal positions; without assistance it is impossible for the ewe to have a natural birth.

The fall in lamb numbers will also have knock-on effects on the high street as the price of lamb increases. The agricultural industry is closely linked to urban life through products such as lamb and beef, and so the Schmallenberg virus will not only hit farm economies but also indirectly affect the high street.

SIGNS OF AN AFFECTED ANIMAL

An adult cow will show symptoms of the disease and is usually ill for about two to three weeks before recovering. However, there are no distinguishable symptoms in infected sheep, and the disease is usually only seen in the lambs, which are born either already dead or deformed. The severity of the deformation varies between lambs. Most are undeveloped and have fused limbs, malformations of the brain such as hydrocephalus, or scoliosis of the spine. However, although many of the affected lambs are stillborn or undeveloped, some show signs of the virus but are not so severely deformed; they may, for example, have slightly buckled knees which they are unable to straighten properly, but are otherwise able to function normally. Other issues found in the affected lambs are not external and are known as ‘dummy’ presentations: the lamb appears normal but may suffer ataxia, blindness, an inability to suckle, recumbency and fits due to brain malformations or spinal damage.

It is assumed that the severity of the deformations depends on how far into the pregnancy the ewe was when she became infected, i.e. ewes infected early in pregnancy are more likely to have a lamb severely affected by the Schmallenberg virus than a ewe who contracted it towards the end of her term.

Figure 5 - A midge bites an infected animal, then moves on to feed on the next animal and so spreads the virus. This can be repeated throughout a whole flock to infect almost every animal; the midges then move on and spread it across the countryside.


TREATMENT AND PROTECTION AGAINST THE DISEASE

As of 1st March 2013, 1,500 farms had reported cases of Schmallenberg. In Europe, the Bovilis SBV vaccine has been developed with apparent success and was announced as usable in summer 2013, giving farmers the option to vaccinate their livestock against further infection before the animals become pregnant.

However, it can be speculated that animals previously infected by the virus may have since developed natural immunity to it, due to the nature of the immune system: antibodies develop against new pathogens and remain in the bloodstream for a quick response in case of further infection.

There are also precautions that can be taken by farmers in order to protect their own livestock and farms:
• Being educated about the disease and what symptoms to look out for.
• Avoiding buying new livestock from areas affected by the virus, or at least consulting a veterinarian before doing so.
• Removing all livestock killed by the virus from the rest of the stock and disposing of it properly.
• When introducing new animals into existing healthy stock, carrying out the proper isolation periods and looking for any symptoms of illness.
• Managing flocks properly, ensuring they get the correct nutrients to help them remain healthy.

WOULD A BAN ON THE IMPORT/EXPORT OF LIVESTOCK HELP?

A spokeswoman for DEFRA stated in early 2012 that “An import ban on animals would not have prevented animals in the UK from becoming infected and a trade ban now would be of no benefit.” This is due to the way the virus spreads: through midges that travelled to the UK from Europe. A trade ban would also have a huge economic impact, and farms would eventually become unsustainable. Many farms send livestock to be slaughtered for the meat industry, and so if they were unable to import new livestock they would not be able to replace their losses. This is just one example of the disadvantages of banning the livestock trade.

Figure 6 - Areas of the lamb that are most commonly affected by the virus (highlighted red). Image (template): www.starchefs.com

Figure 7 - Spread of the virus across the UK between February 7th-March 5th 2012

As shown in the graph, the virus spread very rapidly, from 29 cases reported within the first week of February 2012 to 121 cases on 5th March 2012. The number of cases therefore roughly quadrupled within a month, though it is suspected that not all cases were reported because the disease was not notifiable. The rapid increase was probably not caused by the disease spreading between animals; rather, as lambing season progressed, more of the ewes which had contracted the virus gave birth to affected lambs. However, it was not completely ruled out that the disease was being spread between animals by ticks and fleas.
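A quick check of that arithmetic, using only the two case counts quoted in this paragraph (a minimal sketch, not part of the original article):

early_feb_cases = 29     # reported in the first week of February 2012
early_march_cases = 121  # reported by 5th March 2012

factor = early_march_cases / early_feb_cases
print(f"Growth in reported cases over roughly a month: x{factor:.1f}")
# Prints about x4.2, i.e. roughly quadrupled.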

As shown by the previous map, the virus seems to have arrived in the South East of the UK and spread inland, as would be expected, since the plume of midges is suspected to have moved in from Germany.


The virus has not yet been seen to infect humans or to spread between animals, so cases are localised to individual animals rather than spreading through whole flocks. All farms that reported cases stated they had not imported animals from overseas at the time the virus is presumed to have arrived, and there was no way to stop it infecting animals in the UK.

However, it can be argued that there is uncertainty about whether the virus is spread by fleas and ticks, which can be passed between animals and so could also be transmitting vectors. This is not thought to be the case.

Overall, a ban on the livestock trade would not be beneficial; it would have a negative impact rather than supporting the fight against the virus.

The number of farms affected has continued to increase, and as of 9th July 2012, 275 farms had reported cases, with 56 cases in cattle and 219 in sheep.

REFERENCES

1. www.BBC.co.uk
2. www.nfuonline.com
3. www.gov.uk/government/organisations/department-for-environment-food-rural-affairs
4. http://www.nfuonline.com/home/
5. http://www.fwi.co.uk/Articles/2012/02/20/131532/schmallenberg-cases-pass-1000-mark.html
6. http://www.sheep101.info/

ABOUT THE AUTHOR

ELLIE PRICE is an 18-year-old student from Haybridge Sixth Form, currently studying Biology, Chemistry and Maths at A-Level. Ellie is applying to study Veterinary Medicine at University and has a particular interest in Biology and Anatomy. She also plays the piano in her spare time and enjoys wildlife photography.

TIMELINE OF EVENTS TO 19TH MARCH 2012

August/September 2011 Plume of midges moves over from Europe. They feed on the sheep and the virus is introduced.

October/November 2011 Ewes are “tupped” (mated). The virus is now given to the foetuses throughout the gestation period.

23rd January 2012 Lambing season begins, first cases of Schmallenberg are found at four farms in Norfolk, Suffolk and Sussex.

24th January 2012 The NFU begins to urge farmers to take precautions with livestock and be vigilant for cases.

7th February 2012 There are now 29 cases of the virus across the South-East coast of England.

15th February 2012 The figure rises from 29 to approximately 40 cases. It has spread to areas such as Hertfordshire and Hampshire within a week.

18th February 2012 The virus is reported on a farm in Cornwall. It is an isolated case, in an unknown location.

24th February 2012 74 cases have been reported, and a statement is released saying a vaccine is possibly two years away.

27th February 2012 9 more cases are reported, bringing the total to about 83.

28th February 2012 It is declared that the virus will inevitably be found on Welsh farms; however, culling would not be considered as the animals will develop an immunity to the virus.

2nd March 2012 A further 9 cases are found, with the total now being 92 reported cases.

4th March 2012 Kent MP Laura Sandys calls for a ban on livestock export.

7th March 2012 2 more cases are found in Devon. The total is now over 121 reported cases.

9th March 2012 The total number of cases has risen to 145.

15th March 2012 The virus is found in the East Midlands in Leicester and Lincolnshire. There have now been 158 reported cases.

16th March 2012 The virus has now been found in Warwickshire and Greater London. Figures now show it has been identified on 176 farms.


REVIEW ARTICLE | HOW AND WHY WE TASTE

How and Why Do We Taste?

VERA GRANKINA, University College London, London, UK. Email: [email protected]

ABSTRACT

Taste, or ‘gustation’, is one of the chemical senses; the other is smell, or ‘olfaction’.[1] These senses act as messengers between the external world and the internal world of consciousness in our brains. Taste and smell are closely related to emotion, meaning that certain molecules can charm us into a recollection or a mood.[2] But do we know how and why we taste?

OVERVIEW

What we often refer to as taste is in fact flavour. Flavour is a mixture of taste, smell, touch (the texture of a substance) and other physical features, such as temperature.[3]

Taste in mammals is concentrated in the damp region inside the mouth, particularly the tongue. Human tongues usually have 2,000–8,000 taste buds,[4] which are groups of adapted epithelial cells (taste receptor cells) connected to a smaller number of nerve endings. The number of taste buds decreases with age.[5] Taste buds are collected on taste papillae, which can be seen on the tongue as little red dots because they are richly supplied with blood vessels.[3] Figure 1 shows the structure of a taste bud.

Scientists previously theorised that different regions of the tongue correspond to different tastes. We now know that this is wrong: taste cells have receptors for all five ‘tastants’ and are found all over the surface of the tongue and oral cavity, although some areas are more sensitive to certain tastes than others.[3]

The sensation of taste may have initially helped us to distinguish between new sources of food and potential poisons.[3] Interestingly, taste preference often changes when our body requires specific chemicals. There are five basic tastes: sweet, sour, bitter, salty, and umami, the meaty or savoury taste of amino acids. None of these tastes is provoked by a single chemical.[6]

Saltiness is detected by taste receptor cells that respond to sodium chloride. Proteins in the cell membranes attract the sodium ions, making them enter the cells. This causes the release of a neurotransmitter (serotonin) and stimulates nerve cells to carry information to the brain.[7]

Sourness is caused by the presence of protons - positive hydrogen ions released when acid dissolves in water. Protons either enter the sodium channels or block epithelial potassium channels, stimulating the release of the neurotransmitter. The bitterness of quinine and calcium is also detected when potassium channels are blocked in taste receptor cell membranes.[7]

Figure 1. Structure of a taste bud. Image: http://en.wikipedia.org/wiki/File:Taste_bud.svg


In contrast, other tastes, e.g. sweetness, some bitter tastes and umami, are detected when a chemical binds to a specific membrane protein in a taste receptor cell. Chemicals of a particular shape can fit in the binding site and initiate the response. This is called the ‘lock-and-key’ principle. The response is a change in the flow of ions across the membrane, which causes the release of the neurotransmitter, and a nerve impulse is sent to the brain.[7]

While the taste sensory system is not completely understood, a fuller understanding of taste could have many practical uses. It could be possible to lower the salt intake of people with high blood pressure by designing an artificial salt receptor, and elderly people, who often lose their sense of taste, might have the pleasure of eating returned to them, thus improving their quality of life.[8]

REFERENCES

1. Tim Jacob, A tutorial on the sense of smell. http://www.cardiff.ac.uk/biosi/staffinfo/jacob/teaching/sensory/olfact1.html

2. P. W. Atkins, ‘Molecules’, Scientific American Library, 1987, page 105, lines 6-12.

3. Tim Jacob, A tutorial on the sense of taste. http://www.cf.ac.uk/biosi/staffinfo/jacob/teaching/sensory/taste.html

4. Wikipedia, Taste bud. http://en.wikipedia.org/wiki/Taste_bud

5. Savitri K. Kamath PhD, RD. http://www.ajcn.org/content/36/4/766.full.pdf

6. R. Bowen (2006), Physiology of Taste. http://www.vivo.colostate.edu/hbooks/pathphys/digestion/pregastric/taste.html

7. Barbara E. Goodman, Taste Receptors. http://www.chemistryexplained.com/St-Te/Taste-Receptors.html

8. Jane Bradbury, Taste Perception: Cracking the Code (What Next—and Why Study Taste Anyway?) http://www.ncbi.nlm.nih.gov/pmc/articles/PMC368160/

ABOUT THE AUTHOR

VERA GRANKINA is currently a first year student studying Chemistry with Mathematics at UCL. She enjoys yoga and cross-country in her spare time. Her main areas of interest include history of science and chemistry of flavours and fragrances.


YOUNG SCIENTISTS JOURNAL

Chief Editor
Sophie Brown, UK

Editorial Team
Team Leader: Rachel Wyles, UK
Team Members:
Abbie Wilson, UK
Ailis Dooner, USA
Areg Nzsdejan, UK
Chimdi Ota, UK
Claire Nicholson, UK
George Tall, UK
Georgios Topaloglou, UK
Gilbert Chng, Singapore
James Molony, UK
Jenita Jona James, UK
Konrad Suchodolski, UK
Lisa-Marie Chadderton, UK
Mischa Nijland, Netherlands
Rahul Krishnaswamy, USA
Sophia Aldwinckle, UK

Technical Team
Team Leader: Jea Seong Yoon, UK
Mark Orders, UK

Design
Michael Hofmann, UK, Invicton Ltd

International Advisory Board
The IAB is a team of experts who advise the editors of the journal.
Team Leader: Christina Astin, UK
Team Members:
Ghazwan Butrous, UK
Anna Grigoryan, USA/Armenia
Thijs Kouwenhoven, China
Don Eliseo Lucero-Prisno III, UK
Paul Soderberg, USA
Lee Riley, USA
Corky Valenti, USA
Vince Bennett, USA
Mike Bennett, USA
Tony Grady, USA
Ian Yorston, UK
Charlie Barclay, UK
Joanne Manaster, USA
Alom Shaha, UK

Armen Soghoyan, Armenia
Mark Orders, UK
Linda Crouch, UK
John Boswell, USA
Sam Morris, UK
Debbie Nsefik, UK
Baroness Susan Greenfield, UK
Professor Clive Coen, UK
Sir Harry Kroto, FRS, UK/USA
Annette Smith, UK
Esther Marin, Spain
Malcolm Morgan, UK

Young Advisory Board
Steven Chambers, UK
Fiona Jenkinson, UK
Tobias Nørbo, Denmark
Arjen Dijksman, France
Lorna Quandt, USA
Jonathan Rogers, UK
Lara Compston-Garnett, UK
Otana Jakpor, USA
Pamela Barraza Flores, Mexico
Cleodie Swire, UK
Muna Oli, USA

All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. The Young Scientists Journal and/or its published contents cannot be held responsible for errors or for any consequences arising from the use of the information contained in this journal. The appearance of advertising or product information in the various sections of the journal does not constitute an endorsement or approval by the journal and/or its publisher of the quality or value of the said product or of claims made for it by its manufacturer. The journal is printed on acid-free paper.

Websites:
www.butrousfoundation.com/ysj
www.ysjournal.com

Email: [email protected]


YOUNG SCIENTISTS

@YSJournal
www.ysjournal.com

READ AND CONTRIBUTE