bio project


1. CANCER

Cancer /ˈkænsər/, known medically as a malignant neoplasm, is a broad group of diseases involving

unregulated cell growth. In cancer, cells divide and grow uncontrollably, forming malignant tumors, and

invading nearby parts of the body. The cancer may also spread to more distant parts of the body through

the lymphatic system or bloodstream. Not all tumors are cancerous; benign tumors do not invade

neighboring tissues and do not spread throughout the body. There are over 200 different known cancers

that affect humans.

The causes of cancer are diverse, complex, and only partially understood. Many things are known to

increase the risk of cancer, including tobacco use, dietary factors, certain infections, exposure

to radiation, lack of physical activity, obesity, and environmental pollutants. These factors can directly

damage genes or combine with existing genetic faults within cells to cause cancerous

mutations. Approximately 5–10% of cancers can be traced directly to inherited genetic defects. Many

cancers could be prevented by not smoking, eating more vegetables, fruits and whole grains, eating less

meat and refined carbohydrates, maintaining a healthy weight, exercising, minimizing sunlight exposure,

and being vaccinated against some infectious diseases.

Cancer can be detected in a number of ways, including the presence of certain signs and

symptoms, screening tests, or medical imaging. Once a possible cancer is detected it is diagnosed

by microscopic examination of a tissue sample. Cancer is usually treated with chemotherapy, radiation

therapy and surgery. The chances of surviving the disease vary greatly by the type and location of the

cancer and the extent of disease at the start of treatment. While cancer can affect people of all ages, and

a few types of cancer are more common in children, the risk of developing cancer generally increases

with age. In 2007, cancer caused about 13% of all human deaths worldwide (7.9 million). Rates are rising

as more people live to an old age and as mass lifestyle changes occur in the developing world.
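
A quick arithmetic check (not from the source) of the figures above: the 13% share and the 7.9 million deaths together imply a worldwide total of roughly 61 million deaths in 2007.

    # Implied total deaths worldwide in 2007, from the figures quoted above.
    cancer_deaths = 7.9e6      # cancer deaths in 2007
    cancer_share = 0.13        # fraction of all deaths attributed to cancer
    print(cancer_deaths / cancer_share)   # ~6.1e7, i.e. roughly 61 million deaths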

Definitions

There is no one definition that describes all cancers. They are a large family of diseases which form a

subset of neoplasms that show features suggestive of malignancy. A neoplasm or tumor is a

group of cells that have undergone unregulated growth, and will often form a mass or lump, but may be

distributed diffusely.

Six characteristics of malignancies have been proposed: sustaining proliferative signaling, evading growth

suppressors, resisting cell death, enabling replicative immortality, inducing angiogenesis, and activating

invasion and metastasis. The progression from normal cells to cells that can form a discernible mass to

outright cancer involves multiple steps.

Signs and symptoms

Main article: Cancer signs and symptoms

Symptoms of cancer metastasis depend on the location of the tumor.

When cancer begins, it invariably produces no symptoms, with signs and symptoms only appearing as the

mass continues to grow or ulcerates. The findings that result depend on the type and location of the

cancer. Few symptoms are specific, with many of them also frequently occurring in individuals who have

other conditions. Cancer is the new "great imitator". Thus it is not uncommon for people diagnosed with

cancer to have been treated for other diseases to which their symptoms were assumed to be due.

Local effects

Local symptoms may occur due to the mass of the tumor or its ulceration. For example, mass effects

from lung cancer can cause blockage of the bronchus resulting in cough or pneumonia; esophageal

cancer can cause narrowing of the esophagus, making it difficult or painful to swallow; and colorectal

cancer may lead to narrowing or blockages in the bowel, resulting in changes in bowel habits. Masses in

breasts or testicles may be easily felt. Ulceration can cause bleeding which, if it occurs in the lung, will

lead to coughing up blood, in the bowels to anemia or rectal bleeding, in the bladder to blood in the urine,

and in the uterus to vaginal bleeding. Although localized pain may occur in advanced cancer, the initial

swelling is usually painless. Some cancers can cause a build-up of fluid within the chest or abdomen.

Systemic symptoms

General symptoms occur due to distant effects of the cancer that are not related to direct or metastatic

spread. These may include: unintentional weight loss, fever, being excessively tired, and changes to the

skin. Hodgkin disease, leukemias, and cancers of the liver or kidney can cause a persistent fever of

unknown origin.

Specific constellations of systemic symptoms, termed paraneoplastic phenomena, may occur with some

cancers. Examples include the appearance of myasthenia gravis in thymoma and clubbing in lung cancer.

Metastasis

Main article: Metastasis

Symptoms of metastasis are due to the spread of cancer to other locations in the body. They can include

enlarged lymph nodes (which can be felt or sometimes seen under the skin and are typically

hard), hepatomegaly (enlarged liver) or splenomegaly (enlarged spleen) which can be felt in

the abdomen, pain or fracture of affected bones, and neurological symptoms. Most cancer deaths are due

to cancer that has spread from its primary site to other organs (metastasized).

Causes

Cancers are primarily an environmental disease with 90–95% of cases attributed to environmental factors

and 5–10% due to genetics. Environmental, as used by cancer researchers, means any cause that is

not inherited genetically, not merely pollution. Common environmental factors that contribute to cancer

death include tobacco (25–30%), diet and obesity (30–35%), infections (15–20%), radiation (both ionizing

and non-ionizing, up to 10%), stress, lack of physical activity, and environmental pollutants.

It is nearly impossible to prove what caused a cancer in any individual, because most cancers have

multiple possible causes. For example, if a person who uses tobacco heavily develops lung cancer, then

it was probably caused by the tobacco use, but since everyone has a small chance of developing lung

cancer as a result of air pollution or radiation, there is a small chance that the cancer developed

because of air pollution or radiation.

Chemicals

Further information: Alcohol and cancer and Smoking and cancer

The incidence of lung cancer is highly correlated with smoking.

Cancer pathogenesis is traceable back to DNA mutations that impact cell growth and metastasis.

Substances that cause DNA mutations are known as mutagens, and mutagens that cause cancers are

known as carcinogens. Particular substances have been linked to specific types of cancer. Tobacco

smoking is associated with many forms of cancer, and causes 90% of lung cancer.[16]

Many mutagens are also carcinogens, but some carcinogens are not mutagens. Alcohol is an example of

a chemical carcinogen that is not a mutagen. In Western Europe 10% of cancers in males and 3% of

cancers in females are attributed to alcohol.

Decades of research have demonstrated the link between tobacco use and cancer in the lung, larynx,

head, neck, stomach, bladder, kidney, esophagus and pancreas. Tobacco smoke contains over fifty

known carcinogens, including nitrosamines and polycyclic aromatic hydrocarbons. Tobacco is responsible

for about one in three of all cancer deaths in the developed world, and about one in five worldwide.

Lung cancer death rates in the United States have mirrored smoking patterns, with increases in smoking

followed by dramatic increases in lung cancer death rates and, more recently, decreases in smoking rates

since the 1950s followed by decreases in lung cancer death rates in men since 1990. However, the

number of smokers worldwide is still rising, leading to what some organizations have described as

the tobacco epidemic.

Cancer related to one's occupation is believed to represent between 2% and 20% of all cases. Every year, at

least 200,000 people die worldwide from cancer related to their workplace. Most cancer deaths caused by

occupational risk factors occur in the developed world. It is estimated that approximately 20,000 cancer

deaths and 40,000 new cases of cancer each year in the U.S. are attributable to occupation. Millions of

workers run the risk of developing cancers such as lung cancer and mesothelioma from

inhaling asbestos fibers and tobacco smoke, or leukemia from exposure to benzene at their workplaces.

Diagnosis

Chest x-ray showing lung cancer in the left lung.

Most cancers are initially recognized either because of the appearance of signs or symptoms or

through screening. Neither of these leads to a definitive diagnosis, which requires the examination of a

tissue sample by a pathologist. People with suspected cancer are investigated with medical tests. These

commonly include blood tests, X-rays, CT scans and endoscopy.

Most people are distressed to learn that they have cancer. They may become extremely anxious and

depressed. The risk of suicide in people with cancer is approximately double the normal risk.[93]

Classification

Further information: List of cancer types and List of oncology-related terms

Cancers are classified by the type of cell that the tumor cells resemble, which is therefore presumed to be

the origin of the tumor. These types include:

Carcinoma: Cancers derived from epithelial cells. This group includes many of the most common

cancers, particularly in the aged, and includes nearly all those developing in

the breast, prostate, lung, pancreas, and colon.

Sarcoma: Cancers arising from connective tissue (i.e. bone, cartilage, fat, nerve), each of which

develops from cells originating in mesenchymal cells outside the bone marrow.

Lymphoma and leukemia: These two classes of cancer arise from hematopoietic (blood-forming)

cells that leave the marrow and tend to mature in the lymph nodes and blood, respectively. Leukemia

is the most common type of cancer in children, accounting for about 30%.[94]

Germ cell tumor: Cancers derived from pluripotent cells, most often presenting in the testicle or

the ovary (seminoma and dysgerminoma, respectively).

Blastoma: Cancers derived from immature "precursor" cells or embryonic tissue. Blastomas are more

common in children than in older adults.

Cancers are usually named using -carcinoma, -sarcoma or -blastoma as a suffix, with the Latin or Greek

word for the organ or tissue of origin as the root. For example, a cancer of the liver parenchyma arising

from malignant epithelial cells is called a hepatocarcinoma, while a malignancy arising from primitive liver

precursor cells is called a hepatoblastoma, and a cancer arising from fat cells is called a liposarcoma. For

some common cancers, the English organ name is used. For example, the most common type of breast

cancer is called ductal carcinoma of the breast. Here, the adjective ductal refers to the appearance of the

cancer under the microscope, which suggests that it has originated in the milk ducts.

Benign tumors (which are not cancers) are named using -oma as a suffix with the organ name as the root.

For example, a benign tumor of smooth muscle cells is called a leiomyoma (the common name of this

frequently occurring benign tumor in the uterus is fibroid). Confusingly, some types of cancer use the

-noma suffix, examples including melanoma and seminoma.

Some types of cancer are named for the size and shape of the cells under a microscope, such as giant

cell carcinoma, spindle cell carcinoma, and small-cell carcinoma.

Pathology

The tissue diagnosis given by the pathologist indicates the type of cell that is proliferating, its histological

grade, genetic abnormalities, and other features of the tumor. Together, this information is useful to

evaluate the prognosis of the patient and to choose the best

treatment. Cytogenetics and immunohistochemistry are other types of testing that the pathologist may

perform on the tissue specimen. These tests may provide information about the molecular changes (such

as mutations, fusion genes, and numerical chromosome changes) that have happened in the cancer cells,

and may thus also indicate the future behavior of the cancer (prognosis) and best treatment.

An invasive ductal carcinoma of the breast (pale area at the center) surrounded by spikes of whitish scar tissue and

yellow fatty tissue.

An invasive colorectal carcinoma (top center) in a colectomy specimen.

A squamous-cell carcinoma (the whitish tumor) near the bronchi in a lung specimen.

A large invasive ductal carcinoma in a mastectomy specimen.

Prevention

Cancer prevention is defined as active measures to decrease the risk of cancer.[95] The vast majority of

cancer cases are due to environmental risk factors, and many, but not all, of these environmental factors

are controllable lifestyle choices. Thus, cancer is considered a largely preventable disease.[96] Greater

than 30% of cancer deaths could be prevented by avoiding risk factors

including tobacco, overweight/obesity, poor diet, physical inactivity, alcohol, sexually

transmitted infections, and air pollution.[97] Not all environmental causes are controllable, such as naturally

occurring background radiation, and other cases of cancer are caused through hereditary genetic

disorders, and thus it is not possible to prevent all cases of cancer.

Dietary

Main article: Diet and cancer

While many dietary recommendations have been proposed to reduce the risk of cancer, the evidence to

support them is not definitive.[5][98] The primary dietary factors that increase risk

are obesity and alcohol consumption, with a diet low in fruits and vegetables and high in red meat being

implicated but not confirmed.[99][100] Consumption of coffee is associated with a reduced risk of liver cancer.

[101] Studies have linked consumption of red or processed meat to an increased risk of breast

cancer, colon cancer, and pancreatic cancer, a phenomenon which could be due to the presence of

carcinogens in meats cooked at high temperatures.[102][103] Dietary recommendations for cancer prevention

typically include an emphasis on vegetables, fruit, whole grains, and fish, and an avoidance of processed

and red meat (beef, pork, lamb), animal fats, and refined carbohydrates.[5][98]

Medication

The concept that medications can be used to prevent cancer is attractive, and evidence supports their

use in a few defined circumstances.[104] In the general population, NSAIDs reduce the risk of colorectal

cancer; however, due to their cardiovascular and gastrointestinal side effects, they cause overall harm when

used for prevention.[105] Aspirin has been found to reduce the risk of death from cancer by about 7%.[106] COX-2 inhibitors may decrease the rate of polyp formation in people with familial adenomatous

polyposis; however, they are associated with the same adverse effects as NSAIDs.[107] Daily use

of tamoxifen or raloxifene has been demonstrated to reduce the risk of developing breast cancer in high-

risk women.[108] The benefit versus harm of 5-alpha-reductase inhibitors such as finasteride is not clear.[109]

Vitamins have not been found to be effective at preventing cancer,[110] although low blood levels of vitamin

D are correlated with increased cancer risk.[111][112] Whether this relationship is causal and vitamin D

supplementation is protective has not been determined.[113] Beta-carotene supplementation has been found to

increase lung cancer rates in those who are at high risk.[114] Folic acid supplementation has not been found

effective in preventing colon cancer and may increase colon polyps.[115]

Vaccination

Vaccines have been developed that prevent infection by some viruses.[116] Human papillomavirus

vaccine (Gardasil and Cervarix) decreases the risk of developing cervical cancer.[116] The hepatitis B

vaccine prevents infection with hepatitis B virus and thus decreases the risk of liver cancer.[116]

Screening

Main article: Cancer screening

Unlike diagnosis efforts prompted by symptoms and medical signs, cancer screening involves efforts to

detect cancer after it has formed, but before any noticeable symptoms appear.[117] This may

involve physical examination, blood or urine tests, or medical imaging.[117]

Cancer screening is currently not possible for many types of cancers, and even when tests are available,

they may not be recommended for everyone. Universal screening or mass screening involves screening

everyone.[118] Selective screening identifies people who are known to be at higher risk of developing

cancer, such as people with a family history of cancer.[118] Several factors are considered to determine

whether the benefits of screening outweigh the risks and the costs of screening.[117] These factors include:

Possible harms from the screening test: for example, X-ray images involve exposure to potentially

harmful ionizing radiation.

The likelihood of the test correctly identifying cancer.

The likelihood of cancer being present: Screening is not normally useful for rare cancers.

Possible harms from follow-up procedures.

Whether suitable treatment is available.

Whether early detection improves treatment outcomes.

Whether the cancer will ever need treatment.

Whether the test is acceptable to the people: If a screening test is too burdensome (for example,

being extremely painful), then people will refuse to participate.[118]

Cost of the test.

Recommendations

The U.S. Preventive Services Task Force (USPSTF) strongly recommends cervical cancer screening in

women who are sexually active and have a cervix at least until the age of 65.[119] They recommend that

Americans be screened for colorectal cancer via fecal occult blood testing, sigmoidoscopy,

or colonoscopy starting at age 50 until age 75. There is insufficient evidence to recommend for or against

screening for skin cancer, oral cancer, lung cancer, or prostate cancer in men under 75.[124] Routine

screening is not recommended for bladder cancer, testicular cancer, ovarian cancer,[127] pancreatic

cancer, or prostate cancer.

The USPSTF recommends mammography for breast cancer screening every two years for those 50–74 years old;

however, they do not recommend either breast self-examination or clinical breast

examination.[130] A 2011 Cochrane review came to slightly different conclusions with respect to breast

cancer screening stating that routine mammography may do more harm than good.[131]

Japan screens for gastric cancer using photofluorography due to the high incidence there.[6]

Genetic testing

See also: Cancer syndrome

Gene(s)                               Cancer types

BRCA1, BRCA2                          Breast, ovarian, pancreatic

HNPCC, MLH1, MSH2, MSH6, PMS1, PMS2   Colon, uterine, small bowel, stomach, urinary tract

Genetic testing for individuals at high-risk of certain cancers is recommended.[132] Carriers of these

mutations may then undergo enhanced surveillance, chemoprevention, or preventative surgery to reduce

their subsequent risk.[132]

Management

Main article: Management of cancer

Many management options for cancer exist, with the primary ones

including surgery, chemotherapy, radiation therapy, and palliative care. Which treatments are used

depends upon the type, location and grade of the cancer as well as the person's health and wishes.

Palliative care

Palliative care refers to treatment which attempts to make the patient feel better and may or may not be

combined with an attempt to attack the cancer. Palliative care includes action to reduce the physical,

emotional, spiritual, and psycho-social distress experienced by people with cancer. Unlike treatment that

is aimed at directly killing cancer cells, the primary goal of palliative care is to improve the patient's quality

of life.

Patients at all stages of cancer treatment need some kind of palliative care to comfort them. In some cases, medical specialty professional organizations recommend that patients and physicians respond to cancer only with palliative care and not with cancer-directed therapy.[133] Those cases have the following characteristics:

1. patient has low performance status, corresponding with limited ability to care for oneself

2. patient received no benefit from prior evidence-based treatments

3. patient is ineligible to participate in any appropriate clinical trial

4. the physician sees no strong evidence that treatment would be effective

Palliative care is often confused with hospice and is therefore only offered when people approach the end of

life. Like hospice care, palliative care attempts to help the person cope with the immediate needs and to

increase the person's comfort. Unlike hospice care, palliative care does not require people to stop

treatment aimed at prolonging their lives or curing the cancer.

Multiple national medical guidelines recommend early palliative care for people whose cancer has

produced distressing symptoms (pain, shortness of breath, fatigue, nausea) or who need help coping with

their illness. In people who have metastatic disease when first diagnosed, oncologists should consider a

palliative care consult immediately. Additionally, an oncologist should consider a palliative care consult in

any patient they feel has a prognosis of less than 12 months even if continuing aggressive treatment.[135]

[136][137]

Surgery

Surgery is the primary method of treatment of most isolated solid cancers and may play a role in palliation

and prolongation of survival. It is typically an important part of making the definitive diagnosis and staging

the tumor as biopsies are usually required. In localized cancer, surgery typically attempts to remove the

entire mass along with, in certain cases, the lymph nodes in the area. For some types of cancer this is all

that is needed to eliminate the cancer.[138]


Radiation

Radiation therapy involves the use of ionizing radiation in an attempt to either cure or improve the

symptoms of cancer. It is used in about half of all cases and the radiation can be from either internal

sources in the form of brachytherapy or external sources. Radiation is typically used in addition to surgery

and/or chemotherapy, but for certain types of cancer, such as early head and neck cancer, it may be used

alone. For painful bone metastasis, radiation therapy has been found effective in about 70% of patients.

2. PLANT BIOTECHNOLOGY

Biotechnology is the use of living systems and organisms to develop or make useful products, or "any

technological application that uses biological systems, living organisms or derivatives thereof, to make or

modify products or processes for specific use" (UN Convention on Biological Diversity, Art. 2).[1] Depending on the tools and applications, it often overlaps with the (related) fields

of bioengineering and biomedical engineering.

For thousands of years, humankind has used biotechnology in agriculture, food production, and

medicine. The term itself is largely believed to have been coined in 1919 by Hungarian engineer Károly

Ereky. In the late 20th and early 21st century, biotechnology has expanded to include new and diverse

sciences such as genomics, recombinant gene technologies, applied immunology, and development of

pharmaceutical therapies and diagnostic tests.

Definitions of biotechnology

The concept of 'biotech' or 'biotechnology' encompasses a wide range of procedures (and history) for

modifying living organisms according to human purposes — going back to domestication of animals,

cultivation of plants, and "improvements" to these through breeding programs that employ artificial

selection and hybridization. Modern usage also includes genetic engineering as well as cell and tissue

culture technologies. Biotechnology is defined by the American Chemical Society as the application of

biological organisms, systems, or processes by various industries to learning about the science of life and

the improvement of the value of materials and organisms such as pharmaceuticals, crops, and livestock.[4] In other words, biotechnology can be defined as the mere application of technical advances in life

science to develop commercial products. Biotechnology also draws on the pure biological sciences

(genetics, microbiology, animal cell culture, molecular biology, biochemistry, embryology, cell biology).

And in many instances it is also dependent on knowledge and methods from outside the sphere of biology

including:

chemical engineering,

bioprocess engineering,

bioinformatics, a new branch of computer science,

biorobotics.

Conversely, modern biological sciences (including even concepts such as molecular ecology) are

intimately entwined and heavily dependent on the methods developed through biotechnology and what is

commonly thought of as the life sciences industry. Biotechnology can also be described as laboratory

research and development that uses bioinformatics for the exploration, extraction, exploitation and

production of products from living organisms and any source of biomass by means of biochemical

engineering. High value-added products can be planned (reproduced by biosynthesis, for example),

forecast, formulated, developed, manufactured and marketed, with the aims of sustaining operations

(recouping the large initial investment in R&D) and securing durable patent rights (exclusive rights to

sales, which first require national and international approval of animal- and human-trial results,

particularly in the pharmaceutical branch of biotechnology, to guard against undetected side effects or

safety concerns).[5][6][7]

By contrast, bioengineering is generally thought of as a related field with its emphasis more on higher

systems approaches (not necessarily altering or using biological materials directly) for interfacing with and

utilizing living things. Bioengineering is the application of the principles of engineering and natural

sciences to tissues, cells and molecules. This can be considered as the use of knowledge from working with

and manipulating biology to achieve a result that can improve functions in plants and animals.[8] Relatedly, biomedical engineering is an overlapping field that often draws upon and

applies biotechnology (by various definitions), especially in certain sub-fields of biomedical and/or

chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic

engineering.

History

Brewing was an early application of biotechnology

Main article: History of biotechnology

Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the

broad definition of "using a biotechnological system to make products". Indeed, the cultivation of plants

may be viewed as the earliest biotechnological enterprise.

Agriculture has been theorized to have become the dominant way of producing food since the Neolithic

Revolution. Through early biotechnology, the earliest farmers selected and bred the best suited crops,

having the highest yields, to produce enough food to support a growing population. As crops and fields

became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-

products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of

agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new

environments and breeding them with other plants — one of the first forms of biotechnology.

These processes were also used in the early fermentation of beer.[9] They were introduced in

early Mesopotamia, Egypt, China and India, and still use the same basic biological methods. In brewing,

malted grains (containing enzymes) convert starch from grain into sugar, and specific yeasts are then

added to produce beer. In this process, carbohydrates in the grains are broken down into

alcohols such as ethanol. Later, other cultures developed the process of lactic acid fermentation, which

allowed the fermentation and preservation of other forms of food, such as soy sauce. Fermentation was

also used in this time period to produce leavened bread. Although the process of fermentation was not

fully understood until Louis Pasteur's work in 1857, it is still the first use of biotechnology to convert a food

source into another form.

For thousands of years, humans have used selective breeding to improve production of crops and

livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated

to produce offspring with the same characteristics. For example, this technique was used with corn to

produce the largest and sweetest crops.[10]

In the early twentieth century scientists gained a greater understanding of microbiology and explored

ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological

culture in an industrial process, fermenting corn starch with Clostridium acetobutylicum to

produce acetone, which the United Kingdom desperately needed to manufacture explosives during World

War I.[11]

Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the

mold Penicillium. His work led to the purification of the antibiotic compound formed by the mold by

Howard Florey, Ernst Boris Chain and Norman Heatley, to form what we today know as penicillin. In

1940, penicillin became available for medicinal use to treat bacterial infections in humans.[10]

The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's

(Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San

Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by

transferring genetic material into a bacterium, such that the imported material would be reproduced. The

commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when

the United States Supreme Court ruled that a genetically modified microorganism could be patented in the

case of Diamond v. Chakrabarty.[12] Indian-born Ananda Chakrabarty, working for General Electric, had

modified a bacterium (of the Pseudomonas genus) capable of breaking down crude oil, which he proposed

to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of

entire organelles between strains of the Pseudomonas bacterium.)

Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the

biotechnology sector's success is improved intellectual property rights legislation—and enforcement—

worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an

ageing, and ailing, U.S. population.[13]

Rising demand for biofuels is expected to be good news for the biotechnology sector, with

the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel

consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to

rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing

genetically modified seeds which are resistant to pests and drought. By boosting farm productivity,

biotechnology plays a crucial role in ensuring that biofuel production targets are met.[14]

Applications

A rose plant that began as cells grown in a tissue culture

Biotechnology has applications in four major industrial areas, including health care (medical), crop

production and agriculture, non food (industrial) uses of crops and other products (e.g. biodegradable

plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of

organic products (examples include beer and milk products). Another example is using naturally

present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste,

clean up sites contaminated by industrial activities (bioremediation), and also to produce biological

weapons.

A series of derived terms have been coined to identify several branches of biotechnology; for example:

Bioinformatics is an interdisciplinary field which addresses biological problems using computational

techniques, and makes the rapid organization as well as analysis of biological data possible. The field

may also be referred to as computational biology, and can be defined as, "conceptualizing biology in

terms of molecules and then applying informatics techniques to understand and organize the information

associated with these molecules, on a large scale."[15] Bioinformatics plays a key role in various areas,

such as functional genomics, structural genomics, and proteomics, and forms a key component in the

biotechnology and pharmaceutical sector.
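
As a toy illustration of the kind of analysis bioinformatics automates, the following sketch computes the GC content of a DNA sequence; the sequence and function name are invented for the example.

    # Toy bioinformatics sketch: GC content of a DNA sequence.
    # The sequence is made up for illustration.
    def gc_content(seq):
        """Return the fraction of bases that are G or C."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    print(f"GC content: {gc_content('ATGCGCGTATTCCGGATAGC'):.0%}")   # 55%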

Blue biotechnology is a term that has been used to describe the marine and aquatic applications of

biotechnology, but its use is relatively rare.

Green biotechnology  is biotechnology applied to agricultural processes. An example would be the

selection and domestication of plants via micropropagation. Another example is the designing

of transgenic plants to grow under specific environments in the presence (or absence) of chemicals.

One hope is that green biotechnology might produce more environmentally friendly solutions than

traditional industrial agriculture. An example of this is the engineering of a plant to express

a pesticide, thereby ending the need for external application of pesticides. An example of this would

be Bt corn. Whether or not green biotechnology products such as this are ultimately more

environmentally friendly is a topic of considerable debate.

Red biotechnology is applied to medical processes. Some examples are the designing of organisms

to produce antibiotics, and the engineering of genetic cures through genetic manipulation.

White biotechnology, also known as industrial biotechnology, is biotechnology applied

to industrial processes. An example is the designing of an organism to produce a useful chemical.

Another example is the using of enzymes as industrial catalysts to either produce valuable chemicals

or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than

traditional processes used to produce industrial goods.

The investment and economic output of all of these types of applied biotechnologies is termed the

"bioeconomy".

Medicine

In medicine, modern biotechnology finds applications in areas such as pharmaceutical drug discovery and

production, pharmacogenomics, and genetic testing (or genetic screening).

DNA microarray chip – some can do as many as a million blood tests at once

Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how

genetic makeup affects an individual's response to drugs.[16] It deals with the influence of genetic variation

on drug response in patients by correlating gene expression or single-nucleotide polymorphisms with a

drug's efficacy or toxicity.[17] By doing so, pharmacogenomics aims to develop rational means to optimize

drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse

effects.[18] Such approaches promise the advent of "personalized medicine"; in which drugs and drug

combinations are optimized for each individual's unique genetic makeup.[19][20]
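
A minimal sketch of the correlation idea described above: group patients by their genotype at a single hypothetical SNP and compare mean drug response. All genotypes and response scores are invented for illustration.

    # Hypothetical pharmacogenomics sketch: mean drug response by SNP genotype.
    from statistics import mean

    patients = [  # invented data: genotype at one SNP, normalized drug response
        ("CC", 0.82), ("CC", 0.74),
        ("CT", 0.55), ("CT", 0.61),
        ("TT", 0.30), ("TT", 0.25),
    ]

    by_genotype = {}
    for genotype, response in patients:
        by_genotype.setdefault(genotype, []).append(response)

    for genotype, responses in sorted(by_genotype.items()):
        print(f"{genotype}: mean response {mean(responses):.2f} (n={len(responses)})")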

Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it together, and

the histidine residues involved in zinc binding.

Biotechnology has contributed to the discovery and manufacturing of traditional small

molecule pharmaceutical drugs as well as drugs that are the product of biotechnology, known as biopharmaceuticals.

Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The

first genetically engineered products were medicines designed to treat human diseases. To cite one

example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with

a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of

diabetes, was previously extracted from the pancreas of abattoir animals (cattle and/or pigs). The

resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human

insulin at relatively low cost.[21][22] Biotechnology has also enabled emerging therapeutics like gene therapy.
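
Recombinant production like this relies on the host bacterium translating the inserted gene codon by codon. The sketch below illustrates that translation step on a short made-up fragment; the codon table is deliberately truncated and the sequence is not the real insulin gene.

    # Toy sketch of translation: DNA codons -> amino acids.
    # Truncated codon table and invented sequence, for illustration only.
    CODON_TABLE = {
        "ATG": "M", "TTT": "F", "GTT": "V", "AAC": "N", "CAA": "Q",
        "CAC": "H", "TTG": "L", "TGC": "C", "GGC": "G", "TAA": "*",
    }

    def translate(dna):
        protein = []
        for i in range(0, len(dna) - 2, 3):       # read in codons of 3 bases
            amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
            if amino_acid == "*":                 # stop codon ends translation
                break
            protein.append(amino_acid)
        return "".join(protein)

    print(translate("ATGTTTGTTAACCAACACTTGTGCGGCTAA"))   # MFVNQHLCG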

The application of biotechnology to basic science (for example through the Human Genome Project) has

also dramatically improved our understanding of biology and as our scientific knowledge of normal and

disease biology has increased, our ability to develop new medicines to treat previously untreatable

diseases has increased as well.[22]

Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used

to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition

to studying chromosomes to the level of individual genes, genetic testing in a broader sense

includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes

associated with increased risk of developing genetic disorders. Genetic testing identifies changes

in chromosomes, genes, or proteins.[23] Most of the time, testing is used to find changes that are

associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected

genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As

of 2011 several hundred genetic tests were in use.[24][25] Since genetic testing may open up ethical or

psychological problems, genetic testing is often accompanied by genetic counseling.

Agriculture

Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of

which has been modified using genetic engineering techniques. In most cases the aim is to introduce a

new trait to the plant which does not occur naturally in the species.

Examples in food crops include resistance to certain pests,[26] diseases,[27] stressful environmental

conditions,[28] resistance to chemical treatments (e.g. resistance to a herbicide[29]), reduction of spoilage,[30] or improving the nutrient profile of the crop.[31] Examples in non-food crops include production

of pharmaceutical agents,[32] biofuels,[33] and other industrially useful goods,[34] as well as

for bioremediation.[35][36]

Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land

cultivated with GM crops had increased by a factor of 94, from 17,000 square kilometers (4,200,000

acres) to 1,600,000 km2 (395 million acres).[37] 10% of the world's crop lands were planted with GM crops

in 2010.[37] As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160

million hectares) in 29 countries such as the USA, Brazil, Argentina, India, Canada, China, Paraguay,

Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and

Spain.[37]
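
A quick arithmetic check (not from the source) of the growth figures quoted above, using the standard conversion 1 km² ≈ 247.1 acres:

    # Checking the GM crop area figures quoted above.
    area_1996_km2 = 17_000
    area_2011_km2 = 1_600_000
    print(area_2011_km2 / area_1996_km2)   # ~94, the stated growth factor
    print(area_2011_km2 * 247.105 / 1e6)   # ~395 (million acres), as stated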

Genetically modified foods are foods produced from organisms that have had specific changes introduced

into their DNA using the methods of genetic engineering. These techniques have allowed for the

introduction of new crop traits as well as a far greater control over a food's genetic structure than

previously afforded by methods such as selective breeding and mutation breeding.[38] Commercial sale of

genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening

tomato.[39] To date, most genetic modification of foods has primarily focused on cash crops in high

demand by farmers such as soybean, corn, canola, and cotton seed oil. These have been engineered for

resistance to pathogens and herbicides and better nutrient profiles. GM livestock have also been

experimentally developed, although as of November 2013 none are currently on the market.[40]

There is broad scientific consensus that food on the market derived from GM crops poses no greater risk

to human health than conventional food.[41][42][43][44][45] GM crops also provide a number of ecological

benefits.[46] However, opponents have objected to GM crops per se on several grounds, including

environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to

address the world's food needs, and economic concerns raised by the fact these organisms are subject to

intellectual property law.

Industrial biotechnology

An industrial biotechnology plant for the production of modified wheat starch and gluten

Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of

biotechnology for industrial purposes, including industrial fermentation. It includes the practice of

using cells such as micro-organisms, or components of cells like enzymes, to generate industrially useful

products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels.[47] In doing so, biotechnology uses renewable raw materials and may contribute to lowering greenhouse

gas emissions and moving away from a petrochemical-based economy.[48]

Regulation

Main articles: Regulation of genetic engineering and Regulation of the release of genetically modified

organisms

The regulation of genetic engineering concerns the approaches taken by governments to assess and

manage the risks associated with the use of genetic engineering technology and the development and

release of genetically modified organisms (GMO), including genetically modified crops and genetically

modified fish. There are differences in the regulation of GMOs between countries, with some of the most

marked differences occurring between the USA and Europe.[49] Regulation varies in a given country

depending on the intended use of the products of the genetic engineering. For example, a crop not

intended for food use is generally not reviewed by authorities responsible for food safety.[50] The European

Union differentiates between approval for cultivation within the EU and approval for import and

processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs

have been approved for import and processing.[51] The cultivation of GMOs has triggered a debate about

coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for cultivation

of GM crops differ.[52]

Education

In 1988, after prompting from the United States Congress, the National Institute of General Medical

Sciences (NIGMS), part of the National Institutes of Health, instituted a funding mechanism for biotechnology

training. Universities nationwide compete for these funds to establish Biotechnology Training

Programs (BTPs). Each successful application is generally funded for five years then must be

competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then

stipend, tuition and health insurance support is provided for two or three years during the course of

their Ph.D. thesis work. Nineteen institutions offer NIGMS-supported BTPs.[53] Biotechnology training is

also offered at the undergraduate level and in community colleges.

3. NANOTECHNOLOGY

Nanotechnology (sometimes shortened to "nanotech") is the manipulation of matter on

an atomic and molecular scale. The earliest, widespread description of nanotechnology[1][2] referred to the

particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale

products, also now referred to as molecular nanotechnology. A more generalized description of

nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines

nanotechnology as the manipulation of matter with at least one dimension sized from 1 to

100 nanometers. This definition reflects the fact that quantum mechanical effects are important at

this quantum-realm scale, and so the definition shifted from a particular technological goal to a research

category inclusive of all types of research and technologies that deal with the special properties of matter

that occur below the given size threshold. It is therefore common to see the plural form

"nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and

applications whose common trait is size. Because of the variety of potential applications (including

industrial and military), governments have invested billions of dollars in nanotechnology research.

Through its National Nanotechnology Initiative, the USA has invested 3.7 billion dollars. The European

Union has invested 1.2 billion and Japan 750 million dollars.[3]

Nanotechnology as defined by size is naturally very broad, including fields of science as diverse

as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc.[4] The associated research and applications are equally diverse, ranging from extensions of

conventional device physics to completely new approaches based upon molecular self-assembly, from

developing new materials with dimensions on the nanoscale to direct control of matter on the atomic

scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to

create many new materials and devices with a vast range of applications, such as

in medicine, electronics, biomaterials and energy production. On the other hand, nanotechnology raises

many of the same issues as any new technology, including concerns about the toxicity and environmental

impact of nanomaterials,[5] and their potential effects on global economics, as well as speculation about

various doomsday scenarios. These concerns have led to a debate among advocacy groups and

governments on whether special regulation of nanotechnology is warranted.

Origins

Main article: History of nanotechnology

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard

Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of

synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio

Taniguchi in 1974, though it was not widely known.

Inspired by Feynman's concepts, K. Eric Drexler independently used the term "nanotechnology" in his

1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a

nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary

complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is

no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts

and implications.

Thus, emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's

theoretical and public work, which developed and popularized a conceptual framework for

nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the

prospects of atomic control of matter.

For example, the invention of the scanning tunneling microscope in 1981 provided unprecedented

visualization of individual atoms and bonds, and was successfully used to manipulate individual atoms in

1989. The microscope's developers Gerd Binnig and Heinrich Rohrer at IBM Zurich Research

Laboratory received a Nobel Prize in Physics in 1986.[6][7] Binnig, Quate and Gerber also invented the

analogous atomic force microscope that year.

Buckminsterfullerene C60, also known as the buckyball, is a representative member of the carbon structures known

as fullerenes. Members of the fullerene family are a major subject of research falling under the nanotechnology umbrella.

Fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won

the 1996 Nobel Prize in Chemistry.[8][9] C60 was not initially described as nanotechnology; the term was

used regarding subsequent work with related graphene tubes (called carbon nanotubes and sometimes

called Bucky tubes) which suggested potential applications for nanoscale electronics and devices.

In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to

both controversy and progress. Controversies emerged regarding the definitions and potential

implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology.[10] Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular

nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.[11]

Meanwhile, commercialization of products based on advancements in nanoscale technologies began

emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic

control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an

antibacterial agent, nanoparticle-based transparent sunscreens, and carbon nanotubes for stain-resistant

textiles.[12][13]

Governments moved to promote and fund research into nanotechnology, beginning in the U.S. with

the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and

established funding for research on the nanoscale.

By the mid-2000s new and serious scientific attention began to flourish. Projects emerged to produce

nanotechnology roadmaps[14][15] which center on atomically precise manipulation of matter and discuss

existing and projected capabilities, goals, and applications.

Fundamental concepts

Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current

work and concepts that are more advanced. In its original sense, nanotechnology refers to the projected

ability to construct items from the bottom up, using techniques and tools being developed today to make

complete, high performance products.

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon-carbon bond

lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and

a DNA double-helix has a diameter around 2 nm. On the other hand, the smallest cellular life-forms, the

bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken

as the scale range 1 to 100 nm following the definition used by the National Nanotechnology Initiative in

the US. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which are

approximately a quarter of a nm in diameter) since nanotechnology must build its devices from atoms and

molecules. The upper limit is more or less arbitrary but is around the size at which phenomena not observed in

larger structures start to become apparent and can be made use of in the nano device.[16] These new

phenomena make nanotechnology distinct from devices which are merely miniaturised versions of an

equivalent macroscopic device; such devices are on a larger scale and come under the description

of microtechnology.[17]

To put that scale in another context, the comparative size of a nanometer to a meter is the same as that

of a marble to the size of the earth.[18] Or another way of putting it: a nanometer is the amount an average

man's beard grows in the time it takes him to raise the razor to his face.[18]
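
These comparisons are easy to verify numerically; the marble and Earth diameters below are rough assumed values chosen for illustration.

    # Checking the marble-to-Earth analogy for the nanometer scale.
    nanometer = 1e-9              # meters
    marble_diameter = 0.013       # meters (~1.3 cm, assumed)
    earth_diameter = 1.2742e7     # meters (mean Earth diameter)

    print(nanometer / 1.0)                    # 1e-9: a nanometer relative to a meter
    print(marble_diameter / earth_diameter)   # ~1e-9: a marble relative to Earth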

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices

are built from molecular components which assemble themselves chemically by principles of molecular

recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-

level control.[19]

Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved

during the last few decades to provide a basic scientific foundation of nanotechnology.

Larger to smaller: a materials perspective

Image of reconstruction on a clean Gold(100) surface, as visualized using scanning tunneling microscopy. The positions of

the individual atoms composing the surface are visible.

Main article: Nanomaterials

Several phenomena become pronounced as the size of the system decreases. These include statistical

mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where

the electronic properties of solids are altered with great reductions in particle size. This effect does not

come into play by going from macro to micro dimensions. However, quantum effects can become

significant when the nanometer size range is reached, typically at distances of 100 nanometers or less,

the so-called quantum realm. Additionally, a number of physical (mechanical, electrical, optical, etc.)

properties change when compared to macroscopic systems. One example is the increase in surface area

to volume ratio altering mechanical, thermal and catalytic properties of materials. Diffusion and reactions

at the nanoscale, nanostructured materials, and nanodevices with fast ion transport are generally referred to as

nanoionics. Mechanical properties of nanosystems are of interest in nanomechanics research. The

catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
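
The surface-area-to-volume point can be made concrete: for a sphere, SA/V = (4πr²) / ((4/3)πr³) = 3/r, so the ratio grows as the radius shrinks. A short sketch with example radii:

    # Surface-area-to-volume ratio of a sphere: SA/V = 3/r, so it grows as r shrinks.
    import math

    def sa_to_v(radius_m):
        surface = 4 * math.pi * radius_m ** 2
        volume = (4 / 3) * math.pi * radius_m ** 3
        return surface / volume               # equals 3 / radius_m

    for r in (1e-3, 1e-6, 1e-9):              # 1 mm, 1 µm, 1 nm radii
        print(f"r = {r:.0e} m -> SA/V = {sa_to_v(r):.1e} per meter")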

Materials reduced to the nanoscale can show different properties compared to what they exhibit on a

macroscale, enabling unique applications. For instance, opaque substances can become transparent

(copper); stable materials can turn combustible (aluminum); insoluble materials may become soluble

(gold). A material such as gold, which is chemically inert at normal scales, can serve as a potent

chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these quantum

and surface phenomena that matter exhibits at the nanoscale.[20]

Simple to complex: a molecular perspective

Main article: Molecular self-assembly

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules to

almost any structure. These methods are used today to manufacture a wide variety of useful chemicals

such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of

control to the next-larger level, seeking methods to assemble these single molecules into supramolecular

assemblies consisting of many molecules arranged in a well defined manner.

These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry to

cause molecular components to arrange themselves automatically into some useful conformation through a bottom-up approach. The

concept of molecular recognition is especially important: molecules can be designed so that a specific

configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–

Crick basepairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a

single substrate, or the specific folding of the protein itself. Thus, two or more components can be

designed to be complementary and mutually attractive so that they make a more complex and useful

whole.
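
As a loose software analogy for this kind of complementarity (a toy sketch only, not a chemical model), the Watson–Crick pairing rules can be written as a lookup table under which a strand and its partner are mutually determined:

# Toy illustration of Watson-Crick complementarity: A pairs with T, G with C.
# An analogy for molecular recognition, not a physical or chemical model.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-paired partner strand (read in the same direction)."""
    return "".join(PAIR[base] for base in strand)

strand = "ATGCGT"
partner = complement(strand)
print(strand, "pairs with", partner)   # ATGCGT pairs with TACGCA
assert complement(partner) == strand   # the recognition is mutual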

Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper

than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired

assembly increases. Most useful structures require complex and thermodynamically unlikely

arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular

recognition in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions. The

challenge for nanotechnology is whether these principles can be used to engineer new constructs in

addition to natural ones.

Molecular nanotechnology: a long-term view

Main article: Molecular nanotechnology

Molecular nanotechnology, sometimes called molecular manufacturing, describes engineered

nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is

especially associated with the molecular assembler, a machine that can produce a desired structure or

device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context

of productive nanosystems is not related to, and should be clearly distinguished from, the conventional

technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.

When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the

time was unaware of an earlier usage by Norio Taniguchi) it referred to a future manufacturing technology

based on molecular machine systems. The premise was that molecular scale biological analogies of

traditional machine components demonstrated molecular machines were possible: by the countless

examples found in biology, it is known that sophisticated, stochastically optimised biological machines can

be produced.

It is hoped that developments in nanotechnology will make possible their construction by some other

means, perhaps using biomimetic principles. However, Drexler and other researchers[21] have proposed

that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately

could be based on mechanical engineering principles, namely, a manufacturing technology based on the

mechanical functionality of these components (such as gears, bearings, motors, and structural members)

that would enable programmable, positional assembly to atomic specification.[22] The physics and

engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

In general it is very difficult to assemble devices on the atomic scale, as one has to position atoms on

other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno,[23] is that

future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard

Smalley argued that mechanosynthesis is impossible due to the difficulty of mechanically manipulating

individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003.[24] Though biology clearly demonstrates that molecular machine systems are possible, non-biological

molecular machines are today only in their infancy. Leaders in research on non-biological molecular

machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley.

They have constructed at least three distinct molecular devices whose motion is controlled from the

desktop with changing voltage: a nanotube nanomotor, a molecular actuator,[25] and a

nanoelectromechanical relaxation oscillator.[26] See nanotube nanomotor for more examples.

An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee

at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon

monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically

bound the CO to the Fe by applying a voltage.

Current research

Graphical representation of a rotaxane, useful as a molecular switch.

This DNA tetrahedron[27] is an artificially designed nanostructure of the type made in the field of DNA nanotechnology. Each

edge of the tetrahedron is a 20 base pair DNA double helix, and each vertex is a three-arm junction.

This device transfers energy from nano-thin layers of quantum wells to nanocrystals above them, causing the nanocrystals to

emit visible light.[28]

Nanomaterials

The nanomaterials field includes subfields which develop or study materials having unique properties

arising from their nanoscale dimensions.[29]

Interface and colloid science  has given rise to many materials which may be useful in

nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles

and nanorods. Nanomaterials with fast ion transport are related also to nanoionics and

nanoelectronics.

Nanoscale materials can also be used for bulk applications; most present commercial applications of

nanotechnology are of this flavor.

Progress has been made in using these materials for medical applications; see Nanomedicine.

Nanoscale materials such as nanopillars are sometimes used in solar cells, which helps reduce the cost of

traditional silicon solar cells.

Applications incorporating semiconductor nanoparticles are being developed for the next

generation of products, such as display technology, lighting, solar cells and biological imaging;

see quantum dots.

Bottom-up approaches

These seek to arrange smaller components into more complex assemblies.

DNA nanotechnology utilizes the specificity of Watson–Crick basepairing to construct well-defined

structures out of DNA and other nucleic acids.

Approaches from the field of "classical" chemical synthesis (inorganic and organic synthesis) also

aim at designing molecules with well-defined shape (e.g., bis-peptides[30]).

More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and

molecular recognition in particular, to cause single-molecule components to automatically arrange

themselves into some useful conformation.

Atomic force microscope  tips can be used as a nanoscale "write head" to deposit a chemical upon a

surface in a desired pattern in a process called dip pen nanolithography. This technique fits into the

larger subfield of nanolithography.

Top-down approaches

These seek to create smaller devices by using larger ones to direct their assembly.

Many technologies that descended from conventional solid-state silicon methods for

fabricating microprocessors are now capable of creating features smaller than 100 nm, falling under

the definition of nanotechnology. Giant magnetoresistance-based hard drives already on the market

fit this description,[31] as do atomic layer deposition (ALD) techniques. Peter Grünberg and Albert

Fert received the Nobel Prize in Physics in 2007 for their discovery of Giant magnetoresistance and

contributions to the field of spintronics.[32]

Solid-state techniques can also be used to create devices known as nanoelectromechanical

systems or NEMS, which are related to microelectromechanical systems or MEMS.

Focused ion beams can directly remove material, or even deposit material when suitable precursor

gases are applied at the same time. For example, this technique is used routinely to create sub-

100 nm sections of material for analysis in Transmission electron microscopy.

Atomic force microscope tips can be used as a nanoscale "write head" to deposit a resist, which is

then followed by an etching process to remove material in a top-down method.

Functional approaches

These seek to develop components of a desired functionality without regard to how they might be

assembled.

Molecular scale electronics  seeks to develop molecules with useful electronic properties. These could

then be used as single-molecule components in a nanoelectronic device.[33] For an example see

rotaxane.

Synthetic chemical methods can also be used to create synthetic molecular motors, such as in a so-

called nanocar.

Biomimetic approaches

Bionics  or biomimicry seeks to apply biological methods and systems found in nature, to the study

and design of engineering systems and modern technology. Biomineralization is one example of the

systems studied.

Bionanotechnology  is the use of biomolecules for applications in nanotechnology, including use of

viruses and lipid assemblies.[34] [35] Nanocellulose  is a potential bulk-scale application.

Speculative

These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an

agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with

more emphasis on its societal implications than the details of how such inventions could actually be

created.

Molecular nanotechnology is a proposed approach which involves manipulating single molecules in

finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its

proposed techniques are beyond current capabilities.

Nanorobotics  centers on self-sufficient machines of some functionality operating at the nanoscale.

There are hopes for applying nanorobots in medicine,[36][37][38] but doing so will not be easy because of

several drawbacks of such devices.[39] Nevertheless, progress on innovative materials and

methodologies has been demonstrated, and some patents have been granted for new nanomanufacturing

devices for future commercial applications, which gradually advances development

toward nanorobots that use embedded nanobioelectronics concepts.[40][41]

Productive nanosystems are "systems of nanosystems" which will be complex nanosystems that

produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-

emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete

(i.e. atomic) nature of matter and the possibility of exponential growth, this stage is seen as the basis

of another industrial revolution. Mihail Roco, one of the architects of the USA's National

Nanotechnology Initiative, has proposed four states of nanotechnology that seem to parallel the

technical progress of the Industrial Revolution, progressing from passive nanostructures to active

nanodevices to complex nanomachines and ultimately to productive nanosystems.[42]

Programmable matter  seeks to design materials whose properties can be easily, reversibly and

externally controlled through a fusion of information science and materials science.

Due to the popularity and media exposure of the term nanotechnology, the

words picotechnology and femtotechnology have been coined in analogy to it, although these are

only used rarely and informally.

Tools and techniques

Typical AFM setup. A microfabricated cantilever with a sharp tip is deflected by features on a sample surface, much like in

a phonograph but on a much smaller scale. A laser beam reflects off the backside of the cantilever into a set

of photodetectors, allowing the deflection to be measured and assembled into an image of the surface.

There are several important modern developments. The atomic force microscope (AFM) and

the Scanning Tunneling Microscope (STM) are two early versions of scanning probes that launched

nanotechnology. There are other types of scanning probe microscopy. Although conceptually similar to

the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic

microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, newer scanning probe

microscopes have much higher resolution, since they are not limited by the wavelength of sound or light.

The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional

assembly). Feature-oriented scanning methodology suggested by Rostislav Lapshin appears to be a

promising way to implement these nanomanipulations in automatic mode.[43][44] However, this is still a slow

process because of the low scanning velocity of the microscope.

Various techniques of nanolithography such as optical lithography, X-ray lithography, dip pen

nanolithography, electron beam lithography or nanoimprint lithography were also developed. Lithography

is a top-down fabrication technique where a bulk material is reduced in size to a nanoscale pattern.

Another group of nanotechnological techniques include those used for fabrication

of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet

lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic

layer deposition, and molecular vapor deposition, and further including molecular self-assembly

techniques such as those employing di-block copolymers. The precursors of these techniques preceded

the nanotech era, and are extensions in the development of scientific advancements rather than

techniques which were devised with the sole purpose of creating nanotechnology and which were results

of nanotechnology research.

The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as

manufactured items are made. Scanning probe microscopy is an important technique both for

characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling

microscopes can be used to look at surfaces and to move atoms around. By designing different tips for

these microscopes, they can be used for carving out structures on surfaces and to help guide self-

assembling structures. By using, for example, feature-oriented scanning approach, atoms or molecules

can be moved around on a surface with scanning probe microscopy techniques.[43][44] At present, it is

expensive and time-consuming for mass production but very suitable for laboratory experimentation.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule.

These techniques include chemical synthesis, self-assembly and positional assembly. Dual polarisation

interferometry is one tool suitable for characterisation of self-assembled thin films. Another variation of the

bottom-up approach is molecular beam epitaxy or MBE. Researchers at Bell Telephone Laboratories like

John R. Arthur, Alfred Y. Cho, and Art C. Gossard developed and implemented MBE as a research tool in

the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum

Hall effect for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down

atomically precise layers of atoms and, in the process, build up complex structures. Important for

research on semiconductors, MBE is also widely used to make samples and devices for the newly

emerging field of spintronics.

New therapeutic products based on responsive nanomaterials, such as the ultradeformable,

stress-sensitive Transfersome vesicles, are under development and already approved for human use in

some countries.

4.GLOBAL WARMING

Global warming is the rise in the average temperature of Earth's atmosphere and oceans since the late

19th century and its projected continuation. Since the early 20th century, Earth's mean surface

temperature has increased by about 0.8 °C (1.4 °F), with about two-thirds of the increase occurring since

1980.[2] Warming of the climate system is unequivocal, and scientists are 95-100% certain that it is

primarily caused by increasing concentrations of greenhouse gases produced by human activities such as

the burning of fossil fuels and deforestation.[3][4][5] These findings are recognized by the national science

academies of all major industrialized nations.[6][A]

Climate model projections were summarized in the 2007 Fourth Assessment Report (AR4) by

the Intergovernmental Panel on Climate Change (IPCC). They indicated that during the 21st century the

global surface temperature is likely to rise a further 1.1 to 2.9 °C (2.0 to 5.2 °F) for their lowest emissions

scenario and 2.4 to 6.4 °C (4.3 to 11.5 °F) for their highest.[7] The ranges of these estimates arise from the

use of models with differing sensitivity to greenhouse gas concentrations.[8][9]
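
The paired Celsius and Fahrenheit figures are temperature differences, so they convert with a factor of 9/5 and no 32-degree offset; a quick Python check of the quoted ranges:

# A temperature *change* converts as dF = dC * 9/5 (no +32 offset).
def delta_c_to_f(dc: float) -> float:
    return dc * 9.0 / 5.0

for dc in (1.1, 2.9, 2.4, 6.4):
    print(f"{dc} degC -> {delta_c_to_f(dc):.1f} degF")
# 1.1 -> 2.0, 2.9 -> 5.2, 2.4 -> 4.3, 6.4 -> 11.5, matching the ranges above.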

Future climate change and associated impacts will vary from region to region around the globe.[10] [11] The effects of an increase in global temperature include a rise in sea levels and a change in the

amount and pattern of precipitation, as well as a probable expansion of subtropical deserts.[12] Warming is

expected to be strongest in the Arctic, with the continuing retreat of glaciers, permafrost and sea ice. Other

likely effects of the warming include more frequent extreme weather events including heat waves,

droughts and heavy rainfall; ocean acidification; and species extinctions due to shifting temperature

regimes. Effects significant to humans include the threat to food security from decreasing crop yields and

the loss of habitat from inundation.[13][14]

Proposed policy responses to global warming include mitigation by emissions reduction, adaptation to its

effects, and possible future geoengineering. Most countries are parties to the United Nations Framework

Convention on Climate Change (UNFCCC),[15] whose ultimate objective is to prevent dangerous

anthropogenic (i.e., human-induced) climate change.[16] Parties to the UNFCCC have adopted a range of

policies designed to reduce greenhouse gas emissions[17]:10[18][19][20]:9 and to assist in adaptation to global

warming.[17]:13[20]:10[21][22] Parties to the UNFCCC have agreed that deep cuts in emissions are required,[23] and that future global warming should be limited to below 2.0 °C (3.6 °F) relative to the pre-industrial

level.[23][B] Reports published in 2011 by the United Nations Environment Programme [24]  and

the International Energy Agency [25]  suggest that efforts as of the early 21st century to reduce emissions

may be inadequate to meet the UNFCCC's 2 °C target.

Global mean land-ocean temperature change from 1880–2012, relative to the 1951–1980 mean. The black line is the annual

mean and the red line is the 5-year running mean. The green bars show uncertainty estimates. Source: NASA GISS.

The map shows the 10-year average (2000–2009) global mean temperature anomaly relative to the 1951–1980 mean. The

largest temperature increases are in the Arctic and the Antarctic Peninsula. Source: NASA Earth Observatory[1]

Fossil fuel related carbon dioxide (CO2) emissions compared to five of the IPCC's "SRES" emissions scenarios. The dips are

related to global recessions. Image source: Skeptical Science.

Greenhouse gases

Main articles: Greenhouse gas, Greenhouse effect, Radiative forcing, and Carbon dioxide in Earth's

atmosphere

The greenhouse effect is the process by which absorption and emission of infrared radiation by gases in

the atmosphere warm a planet's lower atmosphere and surface. It was proposed by Joseph Fourier in

1824, discovered in 1860 by John Tyndall,[58] was first investigated quantitatively by Svante Arrhenius in

1896,[59] and was developed in the 1930s through 1960s by Guy Stewart Callendar.[60]

Annual world greenhouse gas emissions, in 2005, by sector.

Bubble diagram showing the share of global cumulative energy-related carbon dioxide emissions for major emitters between

1890-2007.[61]

Naturally occurring amounts of greenhouse gases have a mean warming effect of about 33 °C (59 °F).[62]

[C] Without the Earth's atmosphere, the temperature across almost the entire surface of the Earth would be

below freezing.[63] The major greenhouse gases are water vapor, which causes about 36–70% of the

greenhouse effect; carbon dioxide (CO2), which causes 9–26%; methane (CH4), which causes 4–9%;

and ozone (O3), which causes 3–7%.[64][65][66] Clouds also affect the radiation balance through cloud

forcings similar to greenhouse gases.

Human activity since the Industrial Revolution has increased the amount of greenhouse gases in the

atmosphere, leading to increased radiative forcing from CO2, methane, tropospheric

ozone, CFCs and nitrous oxide. According to work published in 2007, the concentrations of CO2 and

methane have increased by 36% and 148% respectively since 1750.[67] These levels are much higher

than at any time during the last 800,000 years, the period for which reliable data has been extracted

from ice cores.[68][69][70][71] Less direct geological evidence indicates that CO2 values higher than this were

last seen about 20 million years ago.[72] Fossil fuel burning has produced about three-quarters of the

increase in CO2 from human activity over the past 20 years. The rest of this increase is caused mostly by

changes in land-use, particularly deforestation.[73] Global CO2 emissions in 2011 from fossil

fuel combustion, including cement production and gas flaring, were estimated at 34.8 billion tonnes (9.5 ± 0.5 PgC), an

increase of 54% above emissions in 1990. Coal burning was responsible for 43% of the total emissions,

oil 34%, gas 18%, cement 4.9% and gas flaring 0.7%.[74] In May 2013, it was reported that readings for

CO2 taken at the world's primary benchmark site in Mauna Loa surpassed 400 ppm. According to

professor Brian Hoskins, this is likely the first time CO2 levels have been this high for about 4.5 million

years.[75][76]
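
The two figures quoted for 2011 express the same quantity in different units: a mass of CO2 converts to a mass of carbon via the molar-mass ratio 12/44. A quick check (standard molar masses assumed):

# Convert emissions given as mass of CO2 into mass of carbon.
# Molar masses: C ~ 12 g/mol, CO2 ~ 44 g/mol, so carbon is 12/44 of CO2 mass.
co2_gt = 34.8                       # 2011 emissions, billion tonnes (Gt) of CO2
carbon_pg = co2_gt * 12.0 / 44.0    # 1 Gt = 1 Pg, so this is PgC
print(f"{co2_gt} Gt CO2 ~ {carbon_pg:.1f} PgC")   # ~9.5 PgC, as quoted above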

Over the last three decades of the 20th century, gross domestic product per capita and population

growth were the main drivers of increases in greenhouse gas emissions.[77] CO2 emissions are continuing

to rise due to the burning of fossil fuels and land-use change.[78][79]:71 Emissions can be attributed to

different regions, e.g., see the figure opposite. Attribution of emissions due to land-use change is a

controversial issue.[80][81]:289

Emissions scenarios, estimates of changes in future emission levels of greenhouse gases, have been

projected that depend upon uncertain economic, sociological, technological, and natural developments.[82] In most scenarios, emissions continue to rise over the century, while in a few, emissions are reduced.[83][84] Fossil fuel reserves are abundant, and will not limit carbon emissions in the 21st century.[85] Emission

scenarios, combined with modelling of the carbon cycle, have been used to produce estimates of how

atmospheric concentrations of greenhouse gases might change in the future. Using the six

IPCC SRES "marker" scenarios, models suggest that by the year 2100, the atmospheric concentration of

CO2 could range between 541 and 970 ppm.[86] This is an increase of 90–250% above the concentration

in the year 1750.
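
Those percentages are relative to the pre-industrial CO2 concentration, commonly taken as roughly 280 ppm in 1750 (an assumed figure here); the arithmetic can be checked directly:

# Check the projected 2100 CO2 range against the quoted percentage increases,
# assuming a pre-industrial (1750) concentration of roughly 280 ppm.
preindustrial_ppm = 280.0
for increase in (0.90, 2.50):       # +90% and +250%
    projected = preindustrial_ppm * (1.0 + increase)
    print(f"+{increase:.0%} over 1750 -> {projected:.0f} ppm")
# ~532 ppm and ~980 ppm, close to the 541-970 ppm model range (the quoted
# percentages are rounded).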

The popular media and the public often confuse global warming with ozone depletion, i.e., the destruction

of stratospheric ozone by chlorofluorocarbons.[87][88] Although there are a few areas of linkage, the

relationship between the two is not strong. Reduced stratospheric ozone has had a slight cooling

influence on surface temperatures, while increased tropospheric ozone has had a somewhat larger

warming effect.[89]

Atmospheric CO2 concentration from 650,000 years ago to near present, using ice core proxy data and direct measurements

Particulates and soot

Ship tracks over the Atlantic Ocean on the east coast of the United States. The climatic impacts from particulate forcing could

have a large effect on climate through the indirect effect.

Global dimming, a gradual reduction in the amount of global direct irradiance at the Earth's surface, was

observed from 1961 until at least 1990.[90] The main cause of this dimming is particulates produced by

volcanoes and human-made pollutants, which exert a cooling effect by increasing the reflection of

incoming sunlight. The effects of the products of fossil fuel combustion – CO2 and aerosols – have largely

offset one another in recent decades, so that net warming has been due to the increase in non-

CO2 greenhouse gases such as methane.[91] Radiative forcing due to particulates is temporally limited due

to wet deposition, which gives them an atmospheric lifetime of about one week. Carbon dioxide has a

lifetime of a century or more, and as such, changes in particulate concentrations will only delay climate

changes due to carbon dioxide.[92] Black carbon is second only to carbon dioxide for its contribution to

global warming.[93] In addition to their direct effect by scattering and absorbing solar radiation, particulates

have indirect effects on the Earth's radiation budget. Sulfates act as cloud condensation nuclei and thus

lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more

efficiently than clouds with fewer and larger droplets, known as the Twomey effect.[94] This effect also

causes droplets to be of more uniform size, which reduces growth of raindrops and makes the cloud more

reflective to incoming sunlight, known as the Albrecht effect.[95] Indirect effects are most noticeable in

marine stratiform clouds, and have very little radiative effect on convective clouds. Indirect effects of

particulates represent the largest uncertainty in radiative forcing.[96]

Soot may cool or warm the surface, depending on whether it is airborne or deposited. Atmospheric soot

directly absorbs solar radiation, which heats the atmosphere and cools the surface. In isolated areas with

high soot production, such as rural India, as much as 50% of surface warming due to greenhouse gases

may be masked by atmospheric brown clouds.[97] When deposited, especially on glaciers or on ice in arctic

regions, the lower surface albedo can also directly heat the surface.[98] The influences of particulates,

including black carbon, are most pronounced in the tropics and sub-tropics, particularly in Asia, while the

effects of greenhouse gases are dominant in the extratropics and the Southern Hemisphere.

5.RECYCLING

Recycling is a process to change (waste) materials into new products to prevent waste of potentially

useful materials, reduce the consumption of fresh raw materials, reduce energy usage, reduce air

pollution (from incineration) and water pollution (from landfilling) by reducing the need for "conventional"

waste disposal, and lower greenhouse gas emissions as compared to plastic production.[1][2] Recycling is

a key component of modern waste reduction and is the third component of the "Reduce, Reuse,

Recycle" waste hierarchy.

There are some ISO standards related to recycling such as ISO 15270:2008 for plastics waste and ISO

14001:2004 for environmental management control of recycling practice.

Recyclable materials include many kinds of glass, paper, metal, plastic, textiles, and electronics. Although

similar in effect, the composting or other reuse of biodegradable waste—such as food or garden waste—

is not typically considered recycling.[2] Materials to be recycled are either brought to a collection center or

picked up from the curbside, then sorted, cleaned, and reprocessed into new materials bound for

manufacturing.

In the strictest sense, recycling of a material would produce a fresh supply of the same material—for

example, used office paper would be converted into new office paper, or used foamed polystyrene into

new polystyrene. However, this is often difficult or too expensive (compared with producing the same

product from raw materials or other sources), so "recycling" of many products or materials involves

their reuse in producing different materials (e.g., paperboard) instead. Another form of recycling is

the salvage of certain materials from complex products, either due to their intrinsic value

(e.g.,lead from car batteries, or gold from computer components), or due to their hazardous nature (e.g.,

removal and reuse of mercury from various items). Critics dispute the net economic and environmental

benefits of recycling over its costs, and suggest that proponents of recycling often make matters worse

and suffer from confirmation bias. Specifically, critics argue that the costs and energy used in collection

and transportation detract from (and outweigh) the costs and energy saved in the production process;

also that the jobs produced by the recycling industry can be a poor trade for the jobs lost in logging,

mining, and other industries associated with virgin production; and that materials such as paper pulp can

only be recycled a few times before material degradation prevents further recycling. Proponents of

recycling dispute each of these claims, and the validity of arguments from both sides has led to enduring

controversy.

Recycling consumer waste

Collection

A three-sided bin at a railway station in Germany, intended to separate paper (left) and plastic wrappings (right) from other

waste (back)

A number of different systems have been implemented to collect recyclates from the general waste

stream. These systems lie along the spectrum of trade-off between public convenience and government

ease and expense. The three main categories of collection are "drop-off centres," "buy-back centres," and

"curbside collection".[2]

Drop-off centres

Drop-off centres require the waste producer to carry the recyclates to a central location, either an installed

or mobile collection station or the reprocessing plant itself. They are the easiest type of collection to

establish, but suffer from low and unpredictable throughput.

Buy-back centres

Buy-back centres differ in that the cleaned recyclates are purchased, thus providing a clear incentive for

use and creating a stable supply. The post-processed material can then be sold on, hopefully creating a

profit. Unfortunately, government subsidies are necessary to make buy-back centres a viable enterprise,

as according to the United States' National Waste & Recycling Association, it costs on average US$50 to

process a ton of material, which can only be resold for US$30.[2]
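
Those figures imply a structural loss on every ton handled, which is what the subsidy has to cover; the arithmetic, using the numbers cited above:

# Per-ton economics of a buy-back centre, using the figures cited above.
processing_cost = 50.0   # US$ to process one ton of material
resale_price = 30.0      # US$ received when that ton is resold
shortfall = processing_cost - resale_price
print(f"Loss per ton: ${shortfall:.0f}")   # $20 gap to be covered by subsidy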

Curbside collection

Main article: Curbside collection

Curbside collection encompasses many subtly different systems, which differ mostly on where in the

process the recyclates are sorted and cleaned. The main categories are mixed waste collection,

commingled recyclables and source separation.[2] A waste collection vehicle generally picks up the waste.

A recycling truck collecting the contents of a recycling bin in Canberra, Australia

At one end of the spectrum is mixed waste collection, in which all recyclates are collected mixed in with

the rest of the waste, and the desired material is then sorted out and cleaned at a central sorting facility.

This results in a large amount of recyclable waste, paper especially, being too soiled to reprocess, but

has advantages as well: the city need not pay for a separate collection of recyclates and no public

education is needed. Any changes to which materials are recyclable are easy to accommodate, as all

sorting happens in a central location.[2]

In a commingled or single-stream system, all recyclables for collection are mixed but kept separate from

other waste. This greatly reduces the need for post-collection cleaning but does require public

education on what materials are recyclable.[2][4]

Source separation is the other extreme, where each material is cleaned and sorted prior to collection.

This method requires the least post-collection sorting and produces the purest recyclates, but incurs

additional operating costs for collection of each separate material. An extensive public education program

is also required, which must be successful if recyclate contamination is to be avoided.[2]

Source separation used to be the preferred method due to the high sorting costs incurred by commingled

collection. Advances in sorting technology (see sorting below), however, have lowered this overhead

substantially—many areas which had developed source separation programs have since switched to

comingled collection.[4]

Distributed recycling

For some waste materials such as plastic, recent technical devices called recyclebots[10] enable a form of

distributed recycling. Preliminary life-cycle analysis (LCA) indicates that such distributed recycling

of HDPE to make filament for 3-D printers in rural regions is energetically favorable to either using virgin

resin or conventional recycling processes, because of reductions in transportation energy.[11]

Sorting

Early sorting of recyclable materials: glass and plastic bottles in Poland

A recycling point in New Byth, Scotland, with separate containers for paper, plastics and differently colored glass.

Once commingled recyclates are collected and delivered to a central collection facility, the different types

of materials must be sorted. This is done in a series of stages, many of which involve automated

processes such that a truckload of material can be fully sorted in less than an hour.[4] Some plants can

now sort the materials automatically, known as single-stream recycling. In such plants, a variety of materials are

sorted, such as paper, different types of plastics, glass, metals, food scraps, and most types of batteries.[12] A 30 percent increase in recycling rates has been seen in the areas where these plants exist.[13]

Initially, the commingled recyclates are removed from the collection vehicle and placed on a conveyor belt

spread out in a single layer. Large pieces of corrugated fiberboard and plastic bags are removed by hand

at this stage, as they can cause later machinery to jam.[4]

 

Recycling sorting facility and processes

Next, automated machinery separates the recyclates by weight, splitting lighter paper and plastic from

heavier glass and metal. Cardboard is removed from the mixed paper, and the most common types of

plastic, PET (#1) and HDPE (#2), are collected. This separation is usually done by hand, but has become

automated in some sorting centers: a spectroscopic scanner is used to differentiate between different

types of paper and plastic based on the absorbed wavelengths, and subsequently divert each material

into the proper collection channel.[4]

Strong magnets are used to separate out ferrous metals, such as iron, steel, and tin-plated steel

cans ("tin cans"). Nonferrous metals are ejected bymagnetic eddy currents in which a rotating magnetic

field induces an electric current around the aluminium cans, which in turn creates a magnetic eddy current

inside the cans. This magnetic eddy current is repulsed by a large magnetic field, and the cans are

ejected from the rest of the recyclate stream.[4]

Finally, glass must be sorted by hand on the basis of its color: brown, amber, green, or clear.[4]
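
A much-simplified sketch of this sorting cascade in Python, modeling each stage (hand-picking, the weight split, spectroscopic paper/plastic separation, magnets, eddy currents, and hand-sorted glass) as a rule applied to a labeled item; every name and rule here is illustrative, not a real facility's logic:

# Much-simplified model of a single-stream sorting cascade. Each item carries
# crude property tags; each stage routes it to an output bin. Illustrative only.
from dataclasses import dataclass

@dataclass
class Item:
    material: str      # e.g. "paper", "PET", "HDPE", "steel", "aluminium", "glass"
    heavy: bool        # stand-in for the weight-based split
    color: str = ""    # used only for glass

def sort_item(item: Item) -> str:
    if item.material in ("cardboard", "plastic bag"):   # pulled by hand up front
        return "hand-picked"
    if not item.heavy:                                  # light stream: paper/plastics
        if item.material == "paper":
            return "paper"                              # spectroscopic split
        if item.material in ("PET", "HDPE"):
            return f"plastic-{item.material}"
        return "light-residue"
    if item.material == "steel":                        # ferrous: caught by magnets
        return "ferrous"
    if item.material == "aluminium":                    # ejected by eddy currents
        return "nonferrous"
    if item.material == "glass":                        # hand-sorted by color
        return f"glass-{item.color}"
    return "residue"

stream = [Item("paper", False), Item("PET", False), Item("steel", True),
          Item("aluminium", True), Item("glass", True, "green")]
for it in stream:
    print(it.material, "->", sort_item(it))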

This process of recycling as well as reusing the recycled material proves to be advantageous for many

reasons as it reduces amount of waste sent to landfills, conserves natural resources, saves energy,

reduces greenhouse gas emissions, and helps create new jobs. Recycled materials can also be

converted into new products that can be consumed again such as paper, plastic, and glass.[14]

The City and County of San Francisco's Department of the Environment offers one of the best recycling

programs to support its city-wide goal of Zero Waste by 2020.[15] San Francisco's refuse hauler, Recology,

operates an effective recyclables sorting facility in San Francisco, which helped San Francisco reach a record-breaking waste diversion rate.

Environmental impact

Economist Steven Landsburg, author of a paper entitled "Why I Am Not an Environmentalist," [71] has

claimed that paper recycling actually reduces tree populations. He argues that because paper companies

have incentives to replenish their forests, large demands for paper lead to large forests, while reduced

demand for paper leads to fewer "farmed" forests.[72]

When foresting companies cut down trees, more are planted in their place. Most paper comes from pulp

forests grown specifically for paper production.[66][73][74][75] Many environmentalists point out, however, that

"farmed" forests are inferior to virgin forests in several ways. Farmed forests are not able to fix the soil as

quickly as virgin forests, causing widespread soil erosion and often requiring large amounts of fertilizer to

maintain while containing little tree and wild-life biodiversity compared to virgin forests.[76] Also, the new

trees planted are not as big as the trees that were cut down, and the argument that there will be "more

trees" is not compelling to forestry advocates when they are counting saplings.

In particular, wood from tropical rainforests is rarely harvested for paper. Rainforest deforestation is

mainly caused by population pressure demands for land.[77]

With many materials that can be recycled, such as fossil fuels and metals, there is only a finite amount of

those resources, and people continue to use more of them. Reports assert that current usage should be reduced

greatly and the materials reused much more efficiently. For example, only 1% of rare earth metals are reused.[78] Such materials cannot easily be recovered when a product that contains them (such as a cell phone)

is deposited in a landfill rather than recycled.

BACTERIAL INFECTION

(1)TETANUS

Tetanus (from Ancient   Greek : τέτανος tetanos “taut”, and τείνειν teinein "to stretch") is a medical

condition characterized by a prolonged contraction of skeletal muscle fibers.[1] The primary symptoms are

caused by tetanospasmin, a neurotoxin produced by the Gram-positive, rod-shaped, obligate anaerobic

bacterium Clostridium tetani.[2]

Infection generally occurs through wound contamination and often involves a cut or deep puncture

wound. As the infection progresses, muscle spasms develop in the jaw (thus the name lockjaw) and

elsewhere in the body.[2] Infection can be prevented by proper immunization or post-exposure prophylaxis.[3]

Signs and symptoms

Tetanus often begins with mild spasms in the jaw muscles—also known as lockjaw. The spasms can also

affect the chest, neck, back, abdominal muscles, and buttocks. Back muscle spasms often cause arching,

called opisthotonos. Sometimes the spasms affect muscles that help with breathing, which can lead to

breathing problems.[3]

Prolonged muscular action causes sudden, powerful, and painful contractions of muscle groups, which is

called "tetany". These episodes can cause fractures and muscle tears. Other symptoms include drooling,

excessive sweating, fever, hand or foot spasms, irritability, swallowing difficulty, and uncontrolled

urination or defecation.

Mortality rates reported vary from 48% to 73%. In recent years, approximately 11% of reported

tetanus cases have been fatal. The highest mortality rates are in unvaccinated people, people over 60

years of age or newborns.[3]

Incubation period

The incubation period of tetanus may be up to several months, but is usually about eight days.[4][5] In

general, the further the injury site is from the central nervous system, the longer the incubation period.

The shorter the incubation period, the more severe the symptoms.[6] In neonatal tetanus, symptoms

usually appear from 4 to 14 days after birth, averaging about 7 days. On the basis of clinical findings, four

different forms of tetanus have been described.[3]

Cause

Tetanus is caused by the tetanus bacterium Clostridium tetani.[9] Tetanus is often associated with rust,

especially rusty nails. Objects that accumulate rust are often found outdoors, or in places that harbour

anaerobic bacteria, but the rust itself does not cause tetanus nor does it contain more C. tetani bacteria.

The rough surface of rusty metal merely provides a prime habitat for C. tetani endospores to reside in, and

the nail affords a means to puncture skin and deliver endospores deep within the body at the site of the

wound.

An endospore is a non-metabolizing survival structure that begins to metabolize and cause infection once

in an adequate environment. Because C. tetani is an anaerobic bacterium, it and its endospores thrive in

environments that lack oxygen. Hence, stepping on a nail (rusty or not) may result in a tetanus infection,

as the low-oxygen (anaerobic) environment is caused by the oxidization of the same object that causes

a puncture wound, delivering endospores to a suitable environment for growth.[10]

Tetanus is an international health problem, as C. tetani spores are ubiquitous. The disease occurs almost

exclusively in persons unvaccinated or inadequately immunized.[2] It is more common in hot, damp

climates with soil rich in organic matter. This is particularly true with manure-treated soils, as the spores

are widely distributed in the intestines and feces of many animals such as horses, sheep, cattle, dogs,

cats, rats, guinea pigs, and chickens.[3] Spores can be introduced into the body through puncture wounds.

In agricultural areas, a significant number of human adults may harbor the organism. The spores can also

be found on skin surfaces and in contaminated heroin.[3] Heroin users, particularly those that inject the

drug, appear to be at high risk for tetanus.

Prevention

Tetanus can be prevented by vaccination with tetanus toxoid.[13] The CDC recommends that adults

receive a booster vaccine every ten years,[14] and standard care practice in many places is to give the

booster to any patient with a puncture wound who is uncertain of when he or she was last vaccinated, or if

he or she has had fewer than three lifetime doses of the vaccine. The booster may not prevent a

potentially fatal case of tetanus from the current wound, however, as it can take up to two weeks for

tetanus antibodies to form.[15]

The World Health Organisation certifies countries as having eliminated maternal or neonatal tetanus.

Certification requires at least two years of rates of less than 1 case per 1000 live births. In 1998

in Uganda, 3,433 tetanus cases were recorded in newborn babies; of these, 2,403 died. After a major

public health effort, Uganda in 2011 was certified as having eliminated tetanus.
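
The Uganda figures imply a case-fatality proportion of roughly 70% among the recorded newborn cases; a one-line check:

# Case fatality among Uganda's recorded neonatal tetanus cases in 1998.
cases, deaths = 3433, 2403
print(f"Case fatality: {deaths / cases:.0%}")   # ~70%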

(2)LEPTOSPIROSIS

Leptospirosis (also known as Weil's syndrome, canicola fever, canefield fever, nanukayami

fever, 7-day fever, Rat Catcher's Yellows, Fort Bragg fever, black jaundice, and pretibial fever) is

caused by infection with bacteria of the genus Leptospira and affects humans as well as other animals.

Leptospirosis is among the world's most common diseases that transmits from animals to

people (zoonosis). The infection is commonly transmitted to humans by allowing water that has been

contaminated by animal urine to come in contact with unhealed breaks in the skin, the eyes, or with

the mucous membranes. Outside of tropical areas, leptospirosis cases have a relatively distinct

seasonality—most cases occur in spring and autumn.

Causes and transmission

Leptospirosis is caused by a spirochaete bacterium called Leptospira spp. At least five

important serotypes exist in the United States and Canada, all of which cause disease in dogs:[2][3][4]

Icterohaemorrhagiae

Canicola

Pomona

Grippotyphosa

Bratislava

Other (more common) lethal infectious strains exist. Genetically different leptospira organisms may be

identical serologically and vice versa. Hence, some argue about strain identification. The traditional

serologic system currently seems more useful from a diagnostic and epidemiologic standpoint—but this

may change with further development and spread of technologies like polymerase chain reaction (PCR).

Leptospirosis is transmitted by the urine of an infected animal, and is contagious as long as the urine is

still moist. Rats, mice, and moles are important primary hosts—but a wide range of other mammals

including dogs, deer, rabbits, hedgehogs, cows, sheep, raccoons, opossums, skunks, and certain marine

mammals carry and transmit the disease as secondary hosts. In Africa, the banded mongoose has been

identified as a carrier of the pathogen, likely in addition to other African wildlife hosts.[5] Dogs may lick the

urine of an infected animal off the grass or soil, or drink from an infected puddle.

House-bound domestic dogs have contracted leptospirosis, apparently from licking the urine of infected

mice in the house. The type of habitats most likely to carry infective bacteria are muddy riverbanks,

ditches, gullies, and muddy livestock rearing areas where there is regular passage of wild or farm

mammals. The incidence of leptospirosis correlates directly with the amount of rainfall, making it seasonal

in temperate climates and year-round in tropical climates. Leptospirosis also transmits via the semen of

infected animals.[6]

Humans become infected through contact with water, food, or soil that contains urine from these infected

animals. This may happen by swallowing contaminated food or water or through skin contact. The

disease is not known to spread between humans, and bacterial dissemination in convalescence is

extremely rare in humans. Leptospirosis is common among water-sport enthusiasts in specific areas, as

prolonged immersion in water promotes the entry of the bacteria. Surfers and whitewater paddlers[7] are at

especially high risk in areas that have been shown to contain the bacteria, and can contract the disease

by swallowing contaminated water, splashing contaminated water into their eyes or nose, or exposing

open wounds to infected water.

Treatment

Doxycycline may be used as a prophylaxis, 200–250 mg once a week, to prevent infection in high-risk

areas.[14] Treatment is a relatively complicated process comprising two main components: suppressing the

causative agent and fighting possible complications. Effective antibiotics include cefotaxime,

doxycycline, penicillin, ampicillin, and amoxicillin. Human therapeutic dosage of drugs is as follows:

doxycycline 100 mg orally every 12 hours for 1 week or penicillin 1–1.5 MU every 4 hours for 1 week. In

dogs, penicillin is most commonly used to end the leptospiremic phase (infection of the blood), and

doxycycline is used to eliminate the carrier state.

Supportive therapy measures (especially in severe cases) include detoxification and normalization of

the hydro-electrolytic balance. Glucose and salt solution infusions may be administered; dialysis is used in

serious cases. Elevations of serum potassium are common, and if the potassium level gets too high

special measures must be taken. Serum phosphorus levels may likewise increase to unacceptable levels

due to renal failure.

Treatment for hyperphosphatemia consists of treating the underlying disease, dialysis where appropriate,

or oral administration of calcium carbonate, but not without first checking the serum calcium levels (these

two levels are related). Corticosteroid administration in gradually reduced doses

(e.g., prednisolone starting from 30–60 mg) during 7–10 days is recommended by some

specialists in cases of severe haemorrhagic effects. Organ-specific care and treatment are essential

in cases of renal, liver, or heart involvement.

Prevention

The Native American lifestyle exposed them to the leptospiral life cycle

Human vaccines are available in a few countries, including Cuba and China.[15] Animal vaccines are only

for a few strains. Dog vaccines are effective for at least one year.[16] Currently, no human vaccine is

available in the US.

(3)GANGRENE

Gangrene is a serious and potentially life-threatening condition that arises when a considerable mass of

body tissue dies (necrosis).[1][2] This may occur after an injury or infection, or in people suffering from any

chronic health problem affecting blood circulation.[2] The primary cause of gangrene is reduced blood

supply to the affected tissues, which results in cell death.[3] Diabetes and long-term smoking increase the

risk of suffering from gangrene.[2][3]

There are different types of gangrene with different symptoms, such as dry gangrene, wet gangrene, gas

gangrene, internal gangrene and necrotizing fasciitis.[1][2] Treatment options include debridement (or, in

severe cases, amputation) of the affected body parts, antibiotics, vascular surgery, maggot

therapy or hyperbaric oxygen therapy.

Causes

Gangrene is caused by ischemia or infection, such as by the bacteria Clostridium perfringens [5]  or

by thrombosis (a blood vessel blocked by a blood clot). It is usually the result of critically

insufficient blood supply (e.g., peripheral vascular disease)[6] and is often associated with diabetes [7]  and

long-term tobacco smoking. This condition is most common in the lower extremities. The best treatment

for gangrene is revascularization (i.e., restoration of blood flow) of the afflicted organ, which can reverse

some of the effects of necrosis and allow healing. Other treatments include debridement and

surgical amputation. The method of treatment is generally determined by the location of affected tissue

and extent of tissue loss. Gangrene may appear as one effect of foot binding.

Treatment

Treatment is usually surgical debridement, wound care, and antibiotic therapy, although amputation is

necessary in many cases.

"Most amputations are performed for ischemic disease of the lower extremity. Of dysvascular

amputations, 15-28% of patients undergo contralateral limb amputations within 3 years. Of elderly

persons who undergo amputations, 50% survive the first 3 years."[12]

In the United States, 30,000–40,000 amputations are performed annually. There were an estimated

1.6 million individuals living with the loss of a limb in 2005; these estimates are expected to more

than double to 3.6 million such individuals by the year 2050.[13] Antibiotics alone are not effective

because they may not penetrate infected tissues sufficiently.[14] Hyperbaric oxygen therapy (HBOT)

treatment is used to treat gas gangrene. HBOT increases pressure and oxygen content to allow

blood to carry more oxygen to inhibit anaerobic organism growth and reproduction.[15] A regenerative

medicine therapy was developed by Dr. Peter DeMarco to treat gangrene using procaine and PVP.

He applied his therapy to diabetic patients to avoid amputations. Growth factors, hormones and skin

grafts have also been used to accelerate healing for gangrene and other chronic wounds.

Angioplasty should be considered if severe blockage in lower leg vessels (tibial and peroneal artery)

leads to gangrene.

Types

Dry

Dry gangrene begins at the distal part of the limb due to ischemia, and often occurs in the toes and feet of

elderly patients due to arteriosclerosis and thus is also known as senile gangrene. Dry

gangrene is mainly due to arterial occlusion. There is limited putrefaction and bacteria fail to survive. Dry

gangrene spreads slowly until it reaches the point where the blood supply is inadequate to keep tissue

viable. The affected part is dry, shrunken and dark reddish-black, resembling mummified flesh. The dark

coloration is due to liberation of hemoglobin from hemolyzed red blood cells, which is acted upon

by hydrogen sulfide (H2S) produced by the bacteria, resulting in formation of black iron sulfide that

remains in the tissues.[8] The line of separation usually brings about complete separation, with eventual

falling off of the gangrenous tissue if it is not removed surgically, also called autoamputation.

Dry gangrene is actually a form of coagulative necrosis. If blood flow is interrupted for a reason other than

severe bacterial infection, the result is a case of dry gangrene. Individuals with impaired peripheral blood

flow, such as diabetics, are at greater risk of developing dry gangrene.

The early signs and symptoms of dry gangrene are a dull ache and sensation of coldness in the affected

area along with pallor of the flesh. If detected early, the process can sometimes be reversed by vascular

surgery. However, if necrosis sets in, the affected tissue must be removed just as with wet gangrene.

Wet

Wet gangrene occurs in naturally moist tissue and organs such as the mouth, bowel, lungs, cervix, and

vulva. Bedsores occurring on body parts such as the sacrum, buttocks, and heels — although

not necessarily moist areas — are also wet gangrene infections. This condition is characterized

by thriving bacteria and has a poor prognosis (compared to dry gangrene) due to septicemia resulting

from the free communication between infected fluid and circulatory fluid. In wet gangrene, the tissue is

infected by saprogenic microorganisms (Clostridium perfringens or Bacillus fusiformis, for example),

which cause tissue to swell and emit a fetid smell. Wet gangrene usually develops rapidly due to blockage

of venous (mainly) and/or arterial blood flow. The affected part is saturated with stagnant blood, which

promotes the rapid growth of bacteria. The toxic products formed by bacteria are absorbed, causing

systemic manifestation of septicemia and finally death. The affected part is edematous, soft, putrid, rotten

and dark. The darkness in wet gangrene occurs due to the same mechanism as in dry gangrene. Wet

gangrene is coagulative necrosis progressing to liquefactive necrosis.

Gas

Gas gangrene is a bacterial infection that produces gas within tissues. It is a deadly form of gangrene

usually caused by Clostridium perfringens bacteria. Infection spreads rapidly as the gases produced by

bacteria expand and infiltrate healthy tissue in the vicinity. Because of its ability to quickly spread to

surrounding tissues, gas gangrene should be treated as a medical emergency.

Gas gangrene is caused by bacterial exotoxin-producing clostridial species, which are mostly found in

soil, and by other anaerobes (e.g., Bacteroides and anaerobic streptococci). These environmental bacteria

may enter the muscle through a wound and subsequently proliferate in necrotic tissue and secrete powerful toxins.

(4)TUBERCULOSIS

Tuberculosis, MTB, or TB (short for tubercle bacillus), in the past also called Phthisis or Phthisis

pulmonalis, is a common, and in many cases lethal, infectious disease caused by various strains

of mycobacteria, usually Mycobacterium tuberculosis.[1] Tuberculosis typically attacks the lungs, but can

also affect other parts of the body. It is spread through the air when people who have an active TB

infection cough, sneeze, or otherwise transmit respiratory fluids through the air.[2] Most infections

are asymptomatic and latent, but about one in ten latent infections eventually progresses to active

disease which, if left untreated, kills more than 50% of those so infected.

The classic symptoms of active TB infection are a chronic cough with blood-tinged sputum, fever, night

sweats, and weight loss (the latter giving rise to the formerly prevalent term "consumption"). Infection of

other organs causes a wide range of symptoms. Diagnosis of active TB relies

on radiology(commonly chest X-rays), as well as microscopic examination and microbiological culture of

body fluids. Diagnosis of latent TB relies on thetuberculin skin test (TST) and/or blood tests. Treatment is

difficult and requires administration of multiple antibiotics over a long period of time. Social contacts are

also screened and treated if necessary. Antibiotic resistance is a growing problem in multiple drug-

resistant tuberculosis (MDR-TB) infections. Prevention relies on screening programs and vaccination with

the bacillus Calmette–Guérin vaccine.

Signs and symptoms

Tuberculosis may infect any part of the body, but most commonly occurs in the lungs (known as

pulmonary tuberculosis). Extrapulmonary TB occurs when tuberculosis develops outside of the lungs,

although extrapulmonary TB may coexist with pulmonary TB as well.

General signs and symptoms include fever, chills, night sweats, loss of appetite, weight loss, and fatigue.[9] Significant finger clubbing may also occur.

Causes

Mycobacteria

[Figure: Scanning electron micrograph of Mycobacterium tuberculosis]

The main cause of TB is Mycobacterium tuberculosis, a small, aerobic, nonmotile bacillus.[9] The

high lipid content of this pathogen accounts for many of its unique clinical characteristics.[18] It divides every 16 to 20 hours, which is an extremely slow rate compared with other bacteria, which

usually divide in less than an hour.[19] Mycobacteria have an outer membrane lipid bilayer.[20] If a Gram

stain is performed, MTB either stains very weakly "Gram-positive" or does not retain dye as a result of the

high lipid and mycolic acid content of its cell wall.[21] MTB can withstand weak disinfectants and survive in

a dry state for weeks. In nature, the bacterium can grow only within the cells of a host organism, but M.

tuberculosis can be cultured in the laboratory.[22]
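
A short back-of-the-envelope calculation shows what this slow division rate means in practice. The Python sketch below assumes an 18-hour doubling time for MTB and a 40-minute doubling time for a typical fast-growing bacterium (both values are assumptions within the ranges stated above) and compares growth over one day:

# Exponential growth: N(t) = N0 * 2**(t / T_d), with doubling time T_d.
# Both doubling times are illustrative assumptions within the ranges in the text.
def population(n0: float, t_hours: float, doubling_time_hours: float) -> float:
    """Cells after t_hours, starting from n0 cells and doubling every T_d hours."""
    return n0 * 2 ** (t_hours / doubling_time_hours)

mtb = population(n0=1, t_hours=24, doubling_time_hours=18)      # M. tuberculosis
fast = population(n0=1, t_hours=24, doubling_time_hours=2 / 3)  # ~40-minute doubler

print(f"MTB after 24 h:            ~{mtb:.1f} cells")   # ~2.5 cells
print(f"fast bacterium after 24 h: ~{fast:.1e} cells")  # ~6.9e+10 cells

A single MTB cell barely manages one doubling in a day while a fast-growing species reaches tens of billions of cells, which is part of why TB cultures take weeks rather than days to grow.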

Using histological stains on expectorated samples from phlegm (also called "sputum"), scientists can

identify MTB under a regular (light) microscope. Since MTB retains certain stains even after being treated

with acidic solution, it is classified as an acid-fast bacillus (AFB).[1][21] The most common acid-fast staining

techniques are the Ziehl–Neelsen stain, which dyes AFBs a bright red that stands out clearly against a

blue background,[23] and the auramine-rhodamine stain followed by fluorescence microscopy.[24]

The M. tuberculosis complex (MTBC) includes four other TB-causing mycobacteria: M. bovis, M.

africanum, M. canetti, and M. microti.[25] M. africanum is not widespread, but it is a significant cause of

tuberculosis in parts of Africa.[26][27] M. bovis was once a common cause of tuberculosis, but the

introduction of pasteurized milk has largely eliminated this as a public health problem in developed

countries.[1][28] M. canetti is rare and seems to be limited to the Horn of Africa, although a few cases have

been seen in African emigrants.[29][30] M. microti is also rare and is mostly seen in immunodeficient people,

although the prevalence of this pathogen has possibly been significantly underestimated.[31]

Other known pathogenic mycobacteria include M. leprae, M. avium, and M. kansasii. The latter two

species are classified as "nontuberculous mycobacteria" (NTM). NTM cause neither TB nor leprosy, but

they do cause pulmonary diseases that resemble TB.

Prevention

Tuberculosis prevention and control efforts primarily rely on the vaccination of infants and the detection

and appropriate treatment of active cases.[7] The World Health Organization has achieved some success

with improved treatment regimens, and a small decrease in case numbers.[7]

Vaccines

The only currently available vaccine as of 2011 is bacillus Calmette–Guérin (BCG) which, while it is

effective against disseminated disease in childhood, confers inconsistent protection against contracting

pulmonary TB.[67] Nevertheless, it is the most widely used vaccine worldwide, with more than 90% of all

children being vaccinated.[7] However, the immunity it induces decreases after about ten years.[7] As

tuberculosis is uncommon in most of Canada, the United Kingdom, and the United States, BCG is only

administered to people at high risk.[68][69][70] Part of the argument against using the vaccine is

that it makes the tuberculin skin test falsely positive, rendering it of no use in screening.[70] A number of

new vaccines are currently in development.

Public health

The World Health Organization declared TB a "global health emergency" in 1993,[7] and in 2006, the Stop

TB Partnership developed a Global Plan to Stop Tuberculosis that aims to save 14 million lives between

its launch and 2015.[71] A number of targets they have set are not likely to be achieved by 2015, mostly

due to the increase in HIV-associated tuberculosis and the emergence of multiple drug-resistant

tuberculosis (MDR-TB).[7] A tuberculosis classification system developed by the American Thoracic

Society is used primarily in public health programs.

(5) Bacterial meningitis

Meningitis (from Ancient Greek μῆνιγξ/méniŋks, "membrane"[1] and the medical suffix -itis,

"inflammation") is an acute inflammation of the protective membranes covering the brain and spinal cord,

known collectively as the meninges.[2] The inflammation may be caused by infection with viruses, bacteria,

or other microorganisms, and less commonly by certain drugs.[3] Meningitis can be life-threatening

because of the inflammation's proximity to the brain and spinal cord; therefore, the condition is classified

as a medical emergency.[2][4]

The most common symptoms of meningitis are headache and neck stiffness associated

with fever, confusion or altered consciousness, vomiting, and an inability to tolerate light (photophobia) or

loud noises (phonophobia). Children often exhibit only nonspecific symptoms, such as irritability and

drowsiness. If a rash is present, it may indicate a particular cause of meningitis; for instance, meningitis

caused by meningococcal bacteria may be accompanied by a characteristic rash.[2][5]

A lumbar puncture diagnoses or excludes meningitis. A needle is inserted into the spinal canal to extract

a sample of cerebrospinal fluid (CSF), the fluid that envelops the brain and spinal cord. The CSF is examined in a

medical laboratory.[4] The first treatment in acute meningitis consists of promptly

administered antibiotics and sometimes antiviral drugs. Corticosteroids can also be used to prevent

complications from excessive inflammation.[4][5] Meningitis can lead to serious long-term consequences

such as deafness, epilepsy, hydrocephalus and cognitive deficits, especially if not treated quickly.[2]

[5] Some forms of meningitis (such as those associated with meningococci, Haemophilus influenzae type

B, pneumococci or mumps virus infections) may be prevented by immunization.[2]

Signs and symptoms

Clinical features

In adults, the most common symptom of meningitis is a severe headache, occurring in almost 90% of

cases of bacterial meningitis, followed by nuchal rigidity (the inability to flex the neck forward passively

due to increased neck muscle tone and stiffness).[6] The classic triad of diagnostic signs consists of

nuchal rigidity, sudden high fever, and altered mental status; however, all three features are present in

only 44–46% of bacterial meningitis cases.[6][7] If none of the three signs are present, meningitis is

extremely unlikely.[7] Other signs commonly associated with meningitis include photophobia (intolerance to

bright light) and phonophobia (intolerance to loud noises). Small children often do not exhibit the

aforementioned symptoms, and may only be irritable and look unwell.[2] The fontanelle (the soft spot on

the top of a baby's head) can bulge in infants aged up to 6 months. Other features that distinguish

meningitis from less severe illnesses in young children are leg pain, cold extremities, and an

abnormal skin color.[8][9]

Nuchal rigidity occurs in 70% of adult cases of bacterial meningitis.[7] Other signs of meningism include a

positive Kernig's sign or Brudziński's sign. Kernig's sign is assessed with the person

lying supine, with the hip and knee flexed to 90 degrees. In a person with a positive Kernig's sign, pain

limits passive extension of the knee. A positive Brudzinski's sign occurs when flexion of the neck causes

involuntary flexion of the knee and hip. Although Kernig's sign and Brudzinski's sign are both commonly

used to screen for meningitis, the sensitivity of these tests is limited.[7][10] They do, however, have very

good specificity for meningitis: the signs rarely occur in other diseases.[7] Another test, known as the "jolt

accentuation maneuver" helps determine whether meningitis is present in those reporting fever and

headache. A person is asked to rapidly rotate the head horizontally; if this does not make the headache

worse, meningitis is unlikely.[7]
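
The trade-off described here, limited sensitivity but very good specificity, follows directly from standard test-performance arithmetic. The Python sketch below makes it concrete; the sensitivity, specificity, and pre-test probability values are illustrative assumptions, not figures from the text:

# Post-test probabilities from sensitivity, specificity, and pre-test probability.
# All three input values are illustrative assumptions, not figures from the text.
def post_test_probabilities(sens: float, spec: float, prev: float):
    """Return P(disease | sign present) and P(disease | sign absent)."""
    p_positive = sens * prev + (1 - spec) * (1 - prev)   # P(sign present)
    prob_if_present = sens * prev / p_positive
    p_negative = (1 - sens) * prev + spec * (1 - prev)   # P(sign absent)
    prob_if_absent = (1 - sens) * prev / p_negative
    return prob_if_present, prob_if_absent

# A low-sensitivity, high-specificity sign, as described for Kernig's/Brudzinski's:
present, absent = post_test_probabilities(sens=0.10, spec=0.95, prev=0.20)
print(f"P(meningitis | sign present): {present:.2f}")  # 0.33, up from 0.20
print(f"P(meningitis | sign absent):  {absent:.2f}")   # 0.19, barely changed

With these numbers a positive sign raises the probability of meningitis from 0.20 to about 0.33, while a negative sign leaves it almost unchanged at about 0.19: exactly the behavior of a specific but insensitive test, useful for ruling the disease in but not for ruling it out.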

Meningitis caused by the bacterium Neisseria meningitidis (known as "meningococcal meningitis") can be

differentiated from meningitis of other causes by a rapidly spreading petechial rash, which may precede

other symptoms.[8] The rash consists of numerous small, irregular purple or red spots ("petechiae") on the

trunk, lower extremities, mucous membranes, conjunctiva, and (occasionally) the palms of the hands or

soles of the feet. The rash is typically non-blanching; the redness does not disappear when pressed with

a finger or a glass tumbler. Although this rash is not necessarily present in meningococcal meningitis, it is

relatively specific for the disease; it does, however, occasionally occur in meningitis due to other bacteria.[2] Other clues to the cause of meningitis may be the skin signs of hand, foot and mouth

disease and genital herpes, both of which are associated with various forms of viral meningitis.

Prevention

For some causes of meningitis, protection can be provided in the long term through vaccination, or in the

short term with antibiotics. Some behavioral measures may also be effective.

Behavioral

Bacterial and viral meningitis are contagious; however, neither is as contagious as the common

cold or flu.[37] Both can be transmitted through droplets of respiratory secretions during close contact such

as kissing, sneezing or coughing on someone, but cannot be spread by only breathing the air where a

person with meningitis has been.[37] Viral meningitis is typically caused by enteroviruses, and is most

commonly spread through fecal contamination.[37] The risk of infection can be decreased by changing the

behavior that led to transmission.

Vaccination

Since the 1980s, many countries have included immunization against Haemophilus influenzae type B in

their routine childhood vaccination schemes. This has practically eliminated this pathogen as a cause of

meningitis in young children in those countries. In the countries where the disease burden is highest,

however, the vaccine is still too expensive.[38][39] Similarly, immunization against mumps has led to a sharp

fall in the number of cases of mumps meningitis, which prior to vaccination occurred in 15% of all cases of

mumps.[11]

Meningococcus vaccines exist against groups A, C, W135 and Y.[40] In countries where the vaccine for

meningococcus group C was introduced, cases caused by this pathogen have decreased substantially.[38] A quadrivalent vaccine combining all four groups now exists. Immunization with the ACW135Y

vaccine against four strains is now a visa requirement for taking part in the Hajj.[41] Development of a vaccine

against group B meningococci has proved much more difficult, as its surface proteins (which would

normally be used to make a vaccine) only elicit a weak response from the immune system, or cross-react

with normal human proteins.[38][40] Still, some countries (New Zealand, Cuba, Norway and Chile) have

developed vaccines against local strains of group B meningococci; some have shown good results and

are used in local immunization schedules.[40] In Africa, until recently, the approach for prevention and

control of meningococcal epidemics was based on early detection of the disease.


CONTENTS:

1: CANCER

2: PLANT BIOTECHNOLOGY

3: NANOTECHNOLOGY

4: GLOBAL WARMING

5: RECYCLING

6: BACTERIAL INFECTION: (1) TETANUS

(2) LEPTOSPIROSIS

(3) GANGRENE

(4) TUBERCULOSIS

(5) BACTERIAL MENINGITIS