Thursday, October 31, 2019

Environmental Conservation Essay Example | Topics and Well Written Essays - 1750 words

Environmental Conservation - Essay Example An environment is made up of resources. Resources are existing means of supplying what is needed, or a stock that can be drawn on. Renewable resources are those that can be regenerated at a constant level, because they recycle rapidly, are alive, or have the capacity to reproduce and grow. Examples include water, which can be recycled, and organisms, which can reproduce. These resources will keep regenerating themselves as long as the rate of use is less than the rate of regeneration and the environment is kept suitable. Non-renewable resources are not regenerated or reformed in nature at rates equivalent to the rate of use, e.g. oil. There is a special category of non-renewable resources that are not affected by the way we use them, e.g. metals, which can be recycled. Environmental conservation is defined as the rational use of the environment to provide the highest sustainable quality of living for humanity. With rapid development there is stress on most resources, including human resources in the developed countries that are registering negative population growth rates. According to the World Commission on Environment and Development (WCED) (1987), sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. The biggest problem facing the world currently is environmental pollution, which has brought about global warming. George Philander (1998) says that global warming is the rise in world temperatures, caused by the accumulation of carbon dioxide and other air-polluting gases that collect in the atmosphere, forming a thickening blanket that traps the sun's heat and warms the planet. It is caused by several factors, with coal burning alone producing about 2.5 billion tons of carbon dioxide every year. Several other human activities contribute to global warming.
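The regeneration condition stated above (a renewable stock persists only while the rate of use stays below the rate of regeneration) can be illustrated with a minimal simulation. The numbers below are illustrative assumptions, not figures from the essay:

```python
# Minimal sketch: a renewable stock regenerates at a fixed rate each year.
# If the rate of use exceeds the rate of regeneration, the stock is
# eventually exhausted; otherwise it persists indefinitely.
def years_until_depleted(stock, regen_per_year, use_per_year, max_years=1000):
    """Return the year the stock hits zero, or None if it survives max_years."""
    for year in range(1, max_years + 1):
        stock += regen_per_year - use_per_year
        if stock <= 0:
            return year
    return None

# Use below regeneration: the stock is never depleted.
print(years_until_depleted(100, regen_per_year=10, use_per_year=8))   # None
# Use above regeneration: depletion is only a matter of time.
print(years_until_depleted(100, regen_per_year=10, use_per_year=12))  # 50
```

With use two units below regeneration the stock never runs out; raise use four units and the same stock is gone within fifty years.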
Growth of rice in rice paddies has been identified as one of the greatest sources of the carbon dioxide and methane gas that cause global warming. Animal faecal material is also thought to contribute a substantial amount of methane. This emission of carbon dioxide, coupled with the depletion of the world's forest cover, is the main cause of the problem. Human activities like the clearing of forests and the burning of charcoal also contribute to global warming. According to John Houghton (1997), global warming is being evidenced in many ways: the disappearance or reduction of ice caps on the major mountains of the world, especially those along the equatorial region, and a documented rise in sea level, with fears expressed that some coastal towns and islands may be submerged in the near future if immediate measures are not taken to avert the situation. Continuing global warming has brought effects like increased incidence of diseases and pests challenging agriculture, shifts in populations and in plant and animal ranges, coral reef bleaching and disrupted alpine meadow ecosystems, heavy downpours and flooding, drought, and fires. These are considered early signs of global warming. Its effects have been felt in every corner of the world, with the famous El Niño and La Niña periods of heavy downpour and prolonged drought respectively, Hurricane Katrina in America, and other extreme weather events.

Tuesday, October 29, 2019

Jurnal Essay Example | Topics and Well Written Essays - 250 words - 1

Jurnal - Essay Example She believes that as a woman she needs not only to look beautiful but also confident and efficient, to show the opposite sex that women have brains that can outsmart them. She also thinks that she can get away with anything! She is a marketing manager in a cosmetics company. She has a dynamic personality, and her confidence in her abilities is an important part of her professional success. Her smiling appearance and ready apologies for lateness just do not let any negative feelings persist. Would my impatient behavior have any long-lasting impact? I do not think so! She comes from a different culture, where personal relationships are important but punctuality is not. This is what I have come to realize, which may or may not be true for others. But she has a heart of pure gold and I love her for her vivacious personality. She bears no hard feelings for others. She is late for the fifth time in a row. I have now resigned myself to the fact that we will be late for the cinema. She is very conscious of her looks. Appearances are very important to her, and dressing smartly ensures that clients and business partners are impressed at first sight. She is, as usual, impeccably dressed.

Sunday, October 27, 2019

TTX and Genotoxicity of Diodon Hystrix Organs

TTX and Genotoxicity of Diodon Hystrix Organs Identification of TTX and Genotoxicity of Diodon hystrix Organs Adwaid Manu K, Vignesh M., Riven Chocalingum Abstract Tetrodotoxin is an alkaloid-based aquatic toxin. These toxins are among the most potent non-proteinaceous toxins as well as the best-known marine natural toxins. Diodon hystrix (porcupine fish) were collected from the Chennai coastal region and dissected under sterile conditions to obtain liver, skin, gonads, intestine, eyes and kidney. 20 g of each organ was macerated in 200 ml of methanol:acetic acid [99:1]. The filtrate was then condensed in a rotary vacuum evaporator to obtain the crude extract. The focus of this study is to confirm the presence of TTX (tetrodotoxin) in six different organs of Diodon hystrix. The analytical techniques used were GC-MS and UV spectroscopy. In addition, the genotoxicity of the crude extracts was analysed using human leukocyte culture and an SCE assay using onion root tips. The results suggest the presence of TTX mainly in the skin, liver and intestine, and that the organ extract has no clastogenic effect but is capable of increasing sister chromatid exchange. Key Words: TTX, Diodon hystrix, genotoxicity, root tip assay. Introduction Tetrodotoxin (TTX) is a very powerful alkaloid neurotoxin that is non-proteinaceous in nature. TTX can withstand very high temperatures and is water soluble, but is affected by extreme pH conditions, i.e., above 8.5 and below 3.0 [1, 2, 3, 4, 5]. These properties make it a dangerous toxin capable of interacting readily with its environment [1, 2, 5]. It is found in both aquatic and terrestrial organisms, and studies have shown that it is synthesized by symbiotic microorganisms (bacteria, precisely) present in the gut, initially acquired through the food chain, or found on the skin of the animals, but its biosynthetic pathway is still unknown [1, 2, 5, 6, 7, 8].
TTX acts as an ion pore blocker, binding to the site 1 sodium channel receptor of the axon membrane, thus inhibiting the influx of sodium ions and leading to the blockage of action potentials [1, 2, 3, 4, 5, 6, 7, 8, 9]. TTX is ten thousand times more poisonous than cyanide and one of the most fatal poisons on Earth; the LD50 is approximately 0.2 μg when injected in mice [2, 5]. On the other hand, alongside these lethal characteristics, clinical trials and research studies have demonstrated that TTX has remarkable therapeutic properties as an analgesic in cancer treatment [2]. Puffer fish, belonging to the order Tetraodontiformes, have been identified as the cause of many mortalities due to food poisoning as a result of TTX intoxication. In many countries, such as Japan and China, puffer fish is regarded as a food delicacy provided that it is prepared by a licensed and well-experienced chef, but cases of poisoning still occur [1, 3, 4, 5, 6, 7, 8, 9, 10]. It has been reported that only a very low dose of TTX in blood is adequate for an immediate impact on the host [5]. Studies have concluded that the most toxic organ of the puffer fish is the liver, followed by the intestine and then the skin and ovary. In addition, TTX is also found in low concentrations in other organs such as the eyes and muscles [3, 5, 8, 10]. This study focuses on Diodon hystrix, a puffer fish belonging to the family Diodontidae, also known as the porcupinefish because of the sharp needle-like structures covering its entire body as a defense mechanism against predators. The presence of TTX has been reported in Diodon hystrix around the world [2, 4, 5], but studies on this animal from the sea off the eastern coast of India, that is, the Bay of Bengal, are yet to be reported.
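As a rough illustration of what the reported mouse LD50 of about 0.2 μg per animal [2, 5] implies on a per-weight basis, the sketch below assumes a typical 20 g laboratory mouse; the body mass is an assumption, not a figure from the text:

```python
# Hypothetical back-of-envelope conversion of a per-animal LD50 to a
# per-kilogram dose. The 20 g mouse body mass is an assumed typical value,
# not one reported in the study.
def ld50_per_kg(ld50_ug_per_animal, body_mass_g):
    """Convert an LD50 expressed per animal to micrograms per kilogram."""
    return ld50_ug_per_animal / (body_mass_g / 1000.0)

print(ld50_per_kg(0.2, 20.0))  # roughly 10 µg/kg under these assumptions
```

Under these assumptions the per-animal figure corresponds to about 10 µg/kg, which is consistent with TTX's reputation as one of the most potent small-molecule toxins known.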
The aim of this research is to identify TTX in crude extracts from Diodon hystrix collected along the Chennai coastline and to investigate the genotoxicity of the crude extracts from the respective organs using human leukocyte culture and onion root tips. Materials and methods Sample collection The puffer fish were collected from the coastal lines of Marina Beach, Chennai, in early July 2014. Identification was done by visual comparison with the online fish database www.fishbase.org. With the database parameters set to the sample collection site, the only possible species available in the Bay of Bengal region with matching morphology were two Diodon species, of which Diodon hystrix was the closest match based on the skin coloration pattern. Organ separation and extraction process The collected puffer fish were dissected, and visceral organs (liver, intestine, kidney, eye and skin) were removed and weighed. For isolation of the tetrodotoxin [3], 10 g of each organ was suspended, without damaging the tissue, in 100 ml of 1% acetic acid in methanol and kept in a refrigerator for 24 hours under sterile conditions as an incubation period. The tissues were then macerated gently in a mortar and pestle, adding chilled solvent as required if the tissue dried out. The slurry was filtered through Whatman No. 1 filter paper, and the filtrate was centrifuged at 12000 rpm for 10 minutes at 4 °C. The supernatant was separated, and the samples were concentrated by lyophilisation to obtain the crude extracts used in this study. Dragendorff's test To identify the presence of alkaloids [10], 5 ml of distilled water was added to 2 mg of crude extract, followed by 2 M hydrochloric acid until an acid reaction occurred. To this, 1 ml of Dragendorff's reagent was added.
The formation of an orange or orange-red precipitate indicates the presence of alkaloids. Gas chromatography-mass spectrometry Gas chromatography (GC) and mass spectrometry (MS) [8][11][12] form an effective combination for chemical analysis. GC-MS analysis is an indirect method for detecting TTX in a crude extract, which is difficult to purify for other advanced analytical methods. In this method, TTX and its derivatives were dissolved in 2 ml of 3 M NaOH and heated in a boiling water bath for 30 min. After cooling to room temperature, the alkaline solution of decomposed compounds was adjusted to pH 4.0 with 1 N HCl and the resulting mixture was chromatographed on a Sep-Pak C18 cartridge (Waters). After washing with H2O first and then 10% MeOH, the 100% MeOH fraction was collected and evaporated to dryness in vacuo. To the resulting residue, a mixture of N,O-bis(trimethylsilyl)acetamide, trimethylchlorosilane and pyridine (2:1:1) was added to generate trimethylsilyl (TMS) "C9-base" compounds. The derivatives were then run on a Hewlett Packard gas chromatograph (HP-5890-II) equipped with a mass spectrometer (AutoSpec, Micromass Inc., UK). A UB-5 column (φ 0.25 mm × 250 cm) was used, and the column temperature was increased from 180 to 250 °C at a rate of 5 or 8 °C/min. The flow rate of the helium carrier gas was maintained at 20 ml/min. The ionizing voltage was maintained at 70 eV with the ion source temperature at 200 °C. Scanning was performed over the mass range m/z 40-600 at 3 s intervals. The total ion chromatogram (TIC) and the fragment ion chromatogram (FIC) were selectively monitored. Ultraviolet (UV) spectroscopy In UV spectroscopy, TTX is generally detected by irradiating a crude toxin with UV light [11][12]. A small amount of each sample was dissolved in 2 ml of 2 M NaOH and heated in a boiling water bath for 45 min. After cooling to room temperature, the samples were examined in the UV spectrum and results were observed in the range 270 nm to 280 nm.
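The GC settings given above fix the timing of the run: a linear ramp from 180 to 250 °C at 5 or 8 °C/min, with full mass scans every 3 s. The arithmetic can be sketched as follows (function names are mine, for illustration only):

```python
# Sketch of the timing implied by the GC parameters in the text: a linear
# temperature ramp from 180 to 250 degC at 5 or 8 degC/min, with one full
# mass scan (m/z 40-600) every 3 s.
def ramp_minutes(t_start, t_end, rate_per_min):
    """Duration of a linear temperature ramp, in minutes."""
    return (t_end - t_start) / rate_per_min

def scans_during_ramp(minutes, scan_interval_s=3.0):
    """Number of full scans completed during the ramp."""
    return int(minutes * 60 // scan_interval_s)

slow = ramp_minutes(180, 250, 5)
fast = ramp_minutes(180, 250, 8)
print(slow, scans_during_ramp(slow))   # 14.0 280
print(fast, scans_during_ramp(fast))   # 8.75 175
```

So the slower ramp gives a 14-minute run with roughly 280 scans, and the faster ramp an 8.75-minute run with roughly 175 scans, which is why the slower rate gives better chromatographic resolution at the cost of run time.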
Genotoxicity Human leukocyte culture Chromosome preparations were obtained from PHA-stimulated peripheral blood lymphocytes [14][15]. To fresh tubes, 5 ml of Hikaryo XL RPMI ready-mix media and 0.5 ml of heparinized blood (50 drops) were added, the contents were mixed gently by shaking, and the tubes were incubated for 72 hours in a standing position in an incubator. At the end of the 48th hour of incubation, the culture was treated with TTX (0.5 µg/ml; 10 µl per 5 ml of culture) and returned to the incubator for another 24 hours. At the end of this 24-hour incubation, the culture was washed thoroughly by centrifuging at 1500 rpm for 5 minutes, discarding the supernatant and adding 5 ml of RPMI 1640 medium. 60 µl of colchicine was then added, and the tubes were incubated for 20 minutes at 37 °C, then centrifuged at 1500 rpm for 10 minutes. The supernatant was removed and 6 ml of pre-warmed 0.075 M hypotonic solution was added. The contents were mixed using a Pasteur pipette and incubated at 37 °C for 6 minutes, then centrifuged at 2000 rpm for 5 minutes. The supernatant was discarded, and 6 ml of Carnoy's fixative was added and mixed vigorously. After fixation, the contents were kept at room temperature for 1-2 hours. The contents were again centrifuged at 1500 rpm, the supernatant removed, and this step repeated until the pellet became white. For slide preparation, new slides were first refrigerated, then the cell button mix was dropped onto the slides, dried immediately on a hot plate, and kept in an incubator for proper drying. The slides were then placed in a coplin jar containing Giemsa stain for 4 minutes and destained in a coplin jar containing distilled water for 1 minute. The slides were dried and then viewed under the microscope for stained chromosomes.
The slides were viewed under the 100X oil immersion objective of the microscope to analyze chromosome aberrations. Onion root tip SCE assay Onion root tips [1], 2-3 cm long, were soaked in 100 µM 5-bromodeoxyuridine (BrdUrd) for almost 20 h, followed by a 1-hour treatment with the crude extract. After a brief wash, the roots were allowed to grow for another round in growing media. The treatments were terminated by washing the roots with distilled water; 0.05% colchicine was then added and the roots incubated for 2.5 h. The roots were washed, excised, fixed in Carnoy's fixative for 1-3 hours and preserved at 4 °C, then processed using cytological methods for SCE analysis. The roots were hydrolysed in 5 N HCl at 25 °C for 92 min and stained with haematoxylin for at least 2 hours. The stained roots [16] were washed in distilled water, squashed in a drop of 45% acetic acid, and tapped for metaphase chromosome separation under coverslips. Tap water controls were included in the assay. The slides were observed at 100X magnification under oil immersion using a light microscope. RESULTS AND DISCUSSION Dragendorff's test Fig 1: Result of sample after Dragendorff's test. The alkaloids present in the puffer fish were precipitated as a complex by Dragendorff's reagent. Dragendorff's test showed very high precipitation in skin and intestine, high precipitation in liver, and very low or almost no precipitation in kidney, gonads and eye. Gas chromatography-mass spectrometry Characteristic peaks were observed at retention times 8.33 and 8.66 in liver, intestine and skin after alkaline treatment, and no characteristic peak was observed in kidney, eyes and gonads. On boiling of TTX-containing samples in alkaline solution (NaOH), the TTX present is converted to the C9-base TMS (trimethylsilyl) derivative.
It is noteworthy that each peak of the selected ions monitored at m/z 376, 392 and 407 appears at the same retention time in the selected-ion-monitored mass chromatogram of the TMS derivatives of the alkali-hydrolyzed toxin. From the samples of liver, kidney and intestine, mass fragment ion peaks were observed at m/z 376, 392 and 407, which are characteristic of the quinazoline skeleton (C9 base) and almost identical to those from the TMS-C9 base derived from authentic TTX. Fig 2: GC-MS spectrum of the TMS derivatives of alkali-hydrolysed toxin from Diodon hystrix. UV spectroscopy In the UV analysis, characteristic peaks were observed in all samples: a shoulder peak in liver, intestine and skin, and declining and inclining peaks in kidney, eyes and gonads. The UV spectrum was analyzed for the characteristic absorptions associated with the C9 base. The shoulder peaks observed at 276 nm indicate the formation of the C9 base, which is specific to TTX or related substances. Fig 3: UV spectra of the crude extracts from various organs of Diodon hystrix; the peak at 276 nm indicates the presence of TTX. Genotoxicity Human leukocyte assay Metaphase plates were obtained and observed at 100X magnification under oil immersion using a light microscope. In all the samples, no chromosomal aberrations, that is, structural or numerical chromosomal modifications, were observed. From this result, it can be reported that the crude extract from Diodon hystrix has no clastogenic (chromosome breakage) or aneugenic (change in chromosome number) effects. Fig 4 (left): Metaphase plate from control leukocytes. Fig 5 (right): Metaphase plate from crude-extract-treated leukocytes.
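The ion-assignment step used in the GC-MS discussion above, checking whether observed fragment ions include the characteristic C9-base ions at m/z 376, 392 and 407, can be sketched as a simple tolerance match. The 0.5 m/z tolerance is an assumed value, not taken from the paper:

```python
# Sketch of the peak-assignment step: flag observed fragment ions that fall
# within a tolerance of the characteristic C9-base ions (m/z 376, 392, 407).
# The 0.5 m/z tolerance is an assumption for illustration.
C9_BASE_IONS = (376.0, 392.0, 407.0)

def matched_ions(observed_mz, tolerance=0.5):
    """Return the characteristic ions matched by the observed m/z values."""
    return sorted(
        ref for ref in C9_BASE_IONS
        if any(abs(mz - ref) <= tolerance for mz in observed_mz)
    )

# A spectrum showing all three ions supports a TTX assignment.
print(matched_ions([376.1, 391.8, 407.2]))  # [376.0, 392.0, 407.0]
print(matched_ions([255.0, 300.4]))         # []
```

In practice the co-elution of all three ions at the same retention time, as described in the text, is what gives the assignment its weight; matching any single ion alone would be far weaker evidence.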
Onion root tip SCE assay The sister chromatid exchange (SCE) assay has been reported to be one of the most sensitive short-term genotoxicity assays because of its capability to identify genotoxins at very low doses (Tucker et al. 1993). It was observed that the crude extracts from skin and intestine enhanced SCE significantly over the control, while those from liver, eye, gonads and kidney had very low effects. It can therefore be put forth that the crude extracts from skin and intestine interfere to a great deal with SCE, and further studies need to be carried out. Fig 6 (left): SCE in control onion root tip. Fig 7 (right): SCE in crude-extract-treated root tip. Conclusion: From this study, it can be reported that Diodon hystrix from the eastern coastal region of India was observed to have accumulated TTX in its organs. It can thus be toxic when ingested, and even lethal to predators. Nevertheless, further studies should be carried out on this fish to confirm the presence of a homologue of TTX and to obtain a purified sample of the TTX. References: Samanta S. Khora, Kamal K. Panda and Brahma B. Panda (1997): Genotoxicity of tetrodotoxin from puffer fish tested in root meristem cells of Allium cepa L. Mutagenesis, vol. 12, no. 4, pp. 265-269. Keyvan Mirbakhsh, Ulf Göransson: Tetrodotoxin - evolutionary selection and pain relief. Course in Biologically Active Natural Products in Drug Discovery A8/C, 5p, distance course, Fall 2004, Department of Medicinal Chemistry, Division of Pharmacognosy, Uppsala University. Firoz Ahmed, Aamir Javed, Anup Baranwal, Annu Kumari, Farnaz Mozafari, Parvathi Chandrappa (2013): Extraction of tetrodotoxin from puffer fish, Diodon liturosus, from South Andaman Sea. G.J B.A.H.S., Vol. 2 (2) 2013: 58-6, ISSN: 2319-5584. Teetske F. Van Gorcum, Max Janse, Marianne E.C. Leenders, Irma de Vries, Jan Meulenbelt (2006): Intoxication following minor stabs from the spines of a porcupine fish. Clinical Toxicology, 2006, 44(4), p. 391-393.
Vaishali Bane, Mary Lehane, Madhurima Dikshit, Alan O'Riordan and Ambrose Furey (2014): Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection. Toxins 2014, 6, 693-755, ISSN 2072-6651. Bragadeeswaran S, Therasa D, Prabhu K, Kathiresan K (2010): Biomedical and pharmacological potential of tetrodotoxin-producing bacteria isolated from marine pufferfish Arothron hispidus (Muller, 1841). The Journal of Venomous Animals and Toxins including Tropical Diseases, ISSN 1678-9199, 2010, volume 16, issue 3, pages 421-431. J. S. Oliveira, O. R. Pires Junior, R. A. V. Morales, C. Bloch Junior, C. A. Schwartz, J. C. Freitas (2003): Toxicity of two puffer fish species (Lagocephalus laevigatus, Linnaeus 1766 and Sphoeroides spengleri, Bloch 1785) from the southeastern Brazilian coast. J. Venom. Anim. Toxins incl. Trop. Dis, vol. 9, no. 1, Botucatu 2003, ISSN 1678-9199. Tamao Noguchi, Kazue Onuki and Osamu Arakawa (2011): Tetrodotoxin Poisoning Due to Pufferfish and Gastropods, and Their Intoxication Mechanism. International Scholarly Research Network, ISRN Toxicology, Volume 2011, Article ID 276939, 10 pages. Niharika Mandal, Soumya Jal, K. Mohanapriya and S. S. Khora (2013): Assessment of toxicity in puffer fish (Lagocephalus lunaris) from South Indian coast. African Journal of Pharmacy and Pharmacology, Vol. 7(30), pp. 2146-2156, ISSN 1996-0816. Md. Moyen Uddin Pk, Rumana Pervin, Dr. Yearul Kabir, Dr. Nurul Absar (2013): Preliminary screening of secondary metabolites and brine shrimp lethality bioassay of warm-water extract of puffer fish organ tissues, Tetraodon cutcutia, available in Bangladesh. Journal of Biomedical and Pharmaceutical Research 2 (5) 2013, 14-18, ISSN: 2279-0594. Nagashima, Y., J. Maruyama, T. Noguchi and K. Hashimoto (1987): Analysis of paralytic shellfish poison and tetrodotoxin by ion-pairing high performance liquid chromatography. Nippon Suisan Gakkaishi 53:1 819-8. Nakamura, M.
and T. Yasumoto (1985): Tetrodotoxin derivatives in puffer fish. Toxicon 23: 271-273. Myoung Ja Lee, Dong-Youn Jeong, Woo-Seong Kim, Hyun-Dae Kim, Cheorl-Ho Kim, Won-Whan Park, Yong-Ha Park, Kyung-Sam Kim, Hyung-Min Kim and Dong-Soo Kim (2000): A tetrodotoxin-producing Vibrio strain, LM-1, from the puffer fish Fugu vermicularis radiatus. Appl. Environ. Microbiol. Vol. 66, no. 4, 1698-1701. Moorhead, P.S., P.C. Nowell, W.J. Mellman, D.N. Batipps and D.A. Hungerford (1960): Chromosome preparations of leucocytes cultured from human peripheral blood. Exp. Cell. Res., 20, 613-616. Hungerford, D.A. (1965): Leukocytes cultured from small inocula of whole blood and the preparation of metaphase chromosomes by treatment with hypotonic KCl. Stain Technol., 40: 333-338. Perry, P. and S. Wolff (1974): New Giemsa method for differential staining of sister chromatids. Nature, 251, 156-158.

Friday, October 25, 2019

Abortion: Pro Choice View

Abortion: Pro Choice View Abortion is a growing issue in America among women and their right to reproduce. Approximately one to three million abortions are performed each year. Women get abortions for many reasons, such as rape, teen pregnancy and health problems. Rape is one of many reasons that cause women to choose abortion to end their pregnancies. Deciding what to do about the pregnancy is unavoidable, although many of them feel they are ending a life. They are wise enough to know how they would treat their illegitimate child. They hate their rapist, and worry that if they kept their babies, they would hate their children for reminding them of such a painful time. Young women between 15 and 19 account for at least 5 million abortions every year -- 1 million of them in the United States. In fact, one of every five pregnancies happens to a teenage girl. In situations like this, some people are sure that they could take care of the child, while others know that they aren't ready or mature enough to take on so much responsibility. In many cases the child would have no one to rely on but a single mother with no schooling, and maybe a non-supportive family. He or she would have a twisted, miserable upbringing, left vulnerable later in life. Another reason that causes women to choose abortion is health problems. There is a range of problems, including the child being born with Down's syndrome, cystic fibrosis, or a disposition to obesity, which can later in life cause clogged arteries and heart failure.

Thursday, October 24, 2019

School Crime And Violence

Crime and violence in schools are issues of significant public concern, especially after the recent series of tragic school shootings. Schools have exercised care in keeping students safe, but many schools now face serious problems, so effective strategies must be devised to prevent school violence and increase school safety (Small and Tetrick). The terms "school violence" and "school safety" are still terms that need to be commonly defined. The authors maintain that "Multiple approaches can prove beneficial as each discipline brings to bear the full force of its knowledge and experience, but they complicate the task of summarizing the state of school violence. For instance, should school violence be considered a subset of youth violence?" (Small and Tetrick). Much of the violence in schools involves gangs. A gang is a group of people who form an allegiance for a common purpose and engage in unlawful or criminal activity. Gangs give members companionship, guidance, excitement and identity. When a member needs something, the others come to the rescue and provide protection. Gang members have significantly lower levels of self-esteem compared to their non-gang peers. They also could name fewer adult role models than their non-gang peers. There is no doubt that America has become a violent society. Television programs alone show gruesome murders and violence as if they were normal incidents in our lives. It is said that children learn to imitate the violence that they see on television. These take root in a lot of issues, foremost of which is gun control. The positive effects of the strict enforcement of gun laws are readily seen. The Brady Campaign, for example, believes that "background checks nationwide stopped over 600,000 felons and other prohibited purchasers from buying handguns from federally licensed firearm dealers." Some say this is one big reason to advocate gun control.
Once people realize that there is a direct correlation between increases in violence and gun possession, they would also be against gun violence in society. However, opinions are at odds on the issue of gun control. Remarkably, both advocates and opponents of gun control policies in the United States use statistics to back up their stance. The Bureau of Justice Statistics reports that: According to the National Crime Victimization Survey (NCVS), in 2003, 449,150 victims of violent crimes stated that they faced an offender with a firearm. Incidents involving a firearm represented 7% of the 4.9 million violent crimes of rape and sexual assault, robbery, and aggravated and simple assault. The FBI's Crime in the United States estimated that 67% of the 16,503 murders in 2003 were committed with firearms ("Gun Control vs. Gun Rights"). Advocates of gun control use statistics such as these to assert that the increase in violence is positively correlated with gun possession. Organizations such as the National Rifle Association of America (NRA) and other proponents of gun rights oppose such views. Alexander, for one, insists that such arguments contradict factual studies. He disputes the correlation, saying that "cities with the most restrictive gun laws, like Washington, D.C., and Atlanta, Georgia, in fact, have the highest murder rates in the nation." At the center of the gun control issue is the Second Amendment to the Constitution: "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed." Arguments usually focus on the interpretation of the law. The Second Amendment is the main banner of gun rights activists. Gun control advocates, on the other hand, feel that gun control opponents misinterpret the law when they assume that it means an absolute right to ownership of guns by private individuals (Krouse).
They assert that the Second Amendment only refers to gun ownership in the context of a "militia" and not for general purposes. Gun control supporters feel that gun possession by just anybody is a contributing cause of the increased incidence of crime in the United States. In Guns and Terror, Berger and Henigan present studies to support their statement that "Gun shows are a breeding ground for gun sales to terrorists [and that] nothing in federal law prevents terrorists from quickly amassing arsenals of weapons" (4). Opponents of the gun control movement also believe that there is not enough factual evidence for the other side's claim that banning the sale of some kinds of guns results in lower crime rates ("Gun Control vs. Gun Rights"). Alexander accuses gun control groups of demagoguing the issue and maintains that the root of the problem is culture, not the gun. He even notes that "many of the problems in question are the result of Leftist doctrines." The Brady Campaign believes that stricter rules on gun ownership will result in communities with minimal crime, since they correlate guns with most crimes committed. Proponents of gun control also base this belief on what they see as the positive effects of enforcement of gun laws. The Brady Campaign, for example, believes that "background checks nationwide stopped over 600,000 felons and other prohibited purchasers from buying handguns from federally licensed firearm dealers." Gun rights proponents maintain that stricter rules could actually cause crime rates to rise, as criminals tend to go to places with stricter gun rules. Alexander quotes Thomas Sowell as saying, "Most criminals aren't that stupid; they tend to go where the guns aren't" (Brady Campaign to Prevent Gun Violence). Proponents of gun control continue to push for the passage of new laws and/or amendments to existing laws to restrict gun ownership.
For one, they are lobbying for legislation covering what they refer to as loopholes, specifically in issues such as juvenile access to firearms and sales at gun shows and through the Internet (Brady Campaign). Opponents are also continuing efforts to gather support to counter the moves of gun control advocates. The NRA is drawing support through a massive information drive advocating Second Amendment rights and the protection of Americans' hunting heritage. The NRA also conducts studies and surveys contradicting the position of gun control supporters. In their survey on what Americans think of Right-to-Carry laws, results show that 79% of voters stood in favor of such laws (National Rifle Association of America). The Brady Campaign notes that "according to an analysis of the FBI Uniform Crime Report, the percentage of violent crimes committed with firearms has declined dramatically after the Brady Law went into effect." Supporters of gun control assert the need for more restrictions on gun ownership, especially at the federal level. They call for stricter background checks for people who intend to purchase licensed firearms. Gun rights supporters do not see additional laws and amendments as a solution to the increase in crime rates. Alexander writes, "Gun restrictions have not protected citizens in Atlanta, Washington, D.C., New York or Boston, much less anyone in Columbine or Red Lake. Nor did such laws protect Jews from Hitler or Stalin or Chinese peasants from Mao, etc., ad infinitum." Alexander also advises politicians and gun controllers alike to look at the cultural aspects of the problem and not the instruments. The National Rifle Association of America has opposed every effort by gun control advocates, especially the Brady Campaign group, which they think will encroach on their rights under the Second Amendment. They maintain that gun ownership is their constitutional right and should not be limited to recreational purposes only.
The NRA, in fact, supports the enactment of laws involving self-defense and the "freedom" to carry guns (Alexander, Mark 2005). Indeed, the right of law-abiding citizens to carry concealed firearms for the purpose of self-defense has become a hot and controversial topic, and one that will continue to be so for a long time. Schools need to protect children from gun-toting individuals and avoid another Columbine or Virginia Tech incident.

Wednesday, October 23, 2019

SCI Case Study

1. Why did Allen's heart rate and blood pressure fall in this time of emergency (i.e. at a time when you'd expect just the opposite homeostatic response)? Pg. 969. This occurred because Allen's spinal cord has decreased perfusion due to damage and a broken vertebral bone. There has also been a disruption of the sympathetic fibers of his autonomic nervous system, so it can no longer stimulate the heart. Allen likely has spinal shock.

2. Upon admission to the hospital, Allen's breathing was rapid and shallow; can you explain why? Pg. 969. Due to Allen's fall, he likely has an incompetent diaphragm from injury to a cervical segment. This would affect the lower motor neurons and the external intercostal muscles, which is why his chest x-ray showed decreased lung expansion. This may have forced Allen to take rapid, shallow breaths to maintain oxygenation. Overall, the interruption of spinal innervation to the respiratory muscles would also explain his acidotic state.

3. Why did Allen lose some sensation in his arms and all sensation from the upper trunk down? This is because Allen's C5 segment was injured, so the dorsal column tracts and spinothalamic tracts were disrupted, causing his lost and decreased sensations.

4. Why did Allen have dry skin and a fever upon admission to the hospital? Pg. 970. The rationale for the dry skin and fever is that Allen lacked sympathetic and hypothalamic control. His body therefore adapted to the temperature of the environment while attempting to increase extracellular fluid. Overall, spinal shock would produce these symptoms, along with decreased sweat production resulting from decreased stimulation by sympathetic motor neurons.

5. Based on the physical exam findings, which vertebral bone do you think was fractured? Give reasons for your answer. Pg. 969. Based on the physical findings, I would say Allen's fracture occurred at C5.
I believe this is where the fracture occurred because Allen had a minimal biceps brachii stretch reflex, was able to raise his shoulders and tighten them, and could tighten his biceps. In addition, Allen could not raise his arms against gravity, had flaccid lower extremities, was without triceps or wrist extensor reflexes, and his other muscle stretch reflexes were absent. If the fracture had been at C4-5, Allen would not have been able to shrug his shoulders, and if it had been at C7 he could have extended his flexed arms.

6. What is the normal pH of blood? Why was Allen's blood pH below normal? Pg. 970-971. The normal blood pH is between 7.35 and 7.45. Allen's blood was acidotic due to decreased lung expansion and an alteration in the perfusion of his spinal cord. He also has an alteration in the spinal innervation of the respiratory muscles, including the phrenic nerve that controls the diaphragm. This would prevent Allen from taking in enough oxygen and blowing off enough CO2 for adequate gas exchange within the alveoli; in short, respiratory failure.

7. What is the primary muscle of respiration? What nerve initiates this muscle? The primary muscle of respiration is the diaphragm. The nerve that initiates this muscle is the phrenic nerve.

8. Which spinal neurons give rise to the nerve you named in question #7? Pg. 969. Cervical spinal nerves C3-C5 form the phrenic nerve. These are the lower motor neurons.

9. By four days after the injury, some of Allen's signs and symptoms had changed. Allen's arm muscles were still flaccid, yet his leg muscles had become spastic and exhibited exaggerated stretch reflexes. Use your knowledge of motor neural pathways to explain these findings. Pg. 969. Allen is experiencing these signs and symptoms because his spinal shock has now resolved. His lower motor neurons are therefore able to fire impulses again, unlike the upper motor neurons, because of the injury at C5.
Therefore, due to his cervical injury, muscle spasticity, bladder activity, and reflex activity will begin. This is called spastic paralysis.

10. Why did Allen suffer from urinary incontinence? Pg. 970. Allen suffered from urinary incontinence because of autonomic dysfunction. Initially, autonomic dysfunction causes an areflexic bladder, also known as a neurogenic bladder, meaning his bladder had no ability to contract. Autonomic dysfunction then leads to urinary retention.
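The acid-base reasoning in questions 2 and 6 hinges on the normal arterial pH range of 7.35-7.45. As a small illustrative sketch (the function name and thresholds are my own, chosen only to mirror the figures quoted above), the classification could be expressed as:

```python
def classify_blood_ph(ph: float) -> str:
    """Classify an arterial blood pH against the normal 7.35-7.45 range."""
    if ph < 7.35:
        return "acidotic"    # e.g. CO2 retention from hypoventilation
    if ph > 7.45:
        return "alkalotic"
    return "normal"

# Hypoventilation from a C5 injury retains CO2 and lowers pH:
print(classify_blood_ph(7.25))  # acidotic
print(classify_blood_ph(7.40))  # normal
```

This is only a shorthand for the ranges stated in the answer, not a clinical tool; real acid-base interpretation also weighs PaCO2 and bicarbonate.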

Tuesday, October 22, 2019

The Red Planet essays

The Red Planet essays Mars, otherwise known as the red planet, is approximately 227,940,000 km from the sun (Arnett 2000). It has two moons, Phobos and Deimos, which are believed to have been asteroids captured by the planet's gravity (Sheehan 220). Mars is the 4th closest planet to the sun, and it is the only planet besides Earth that scientists currently believe could be inhabited by human life. Mars is in many ways like Earth. It has polar ice caps and is tilted on almost the same axis as Earth, and this similarity gives the two planets similar seasons. There have been many unmanned missions to Mars that have sent pictures, video, and information about the planet back to Earth, such as the Surveyor 98 and the Mars Pathfinder (Exploring 2000). The future of exploration includes many more unmanned missions, such as the Mars 2003 Rover Mission and 2001 Mars Odyssey. Mars is approximately 228,000,000 km from the Sun and is the first planet outside of Earth's orbit. It has a diameter of 6,794 km, which is about one half the diameter of Earth, but it is still the 7th largest planet in our solar system (Arnett 2000). Even though Mars is smaller than Earth in size, the land surface area of the two planets is about the same. The gravity on Mars is much less than that of Earth: approximately two fifths of the gravity on Earth (Raeburn 128). The orbit of Mars is much like the orbit of Earth. Both are slightly tilted at nearly the same angle and in the same direction. This tilt gives Mars four seasons similar to Earth's. However, because a Martian year is about twice as long as an Earth year, each Martian season is also twice as long (Raeburn 415). It takes 687 days for Mars to revolve completely around the Sun. Because Mars is much further from the sun than Earth, temperatures on Mars are colder. The average temperature o...
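The figures quoted in the essay can be cross-checked with a few lines of arithmetic. Earth's diameter below is a standard reference value not stated in the essay, and the ratios are my own illustrative calculations:

```python
# Figures quoted in the essay
mars_diameter_km = 6_794
martian_year_days = 687

# Assumed standard values (not stated in the essay)
earth_diameter_km = 12_742
earth_year_days = 365.25

# Mars's diameter is roughly half of Earth's
print(round(mars_diameter_km / earth_diameter_km, 2))   # 0.53

# A Martian year is nearly twice an Earth year, so each Martian
# season lasts roughly twice as long as the Earth equivalent
print(round(martian_year_days / earth_year_days, 2))    # 1.88
```

Both ratios agree with the essay's claims that Mars is "about one half" Earth's diameter and that a Martian year is "about twice as long" as an Earth year.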

Monday, October 21, 2019

Being an Adult VS. Child essays

Being an Adult VS. Child essays When I was a kid, life was so much fun, but still all I wanted was to be an adult. I was always fascinated by all the great and interesting things that adults were able to do. Every year that passed, I grew more excited because I was getting closer to being an adult. Now that I am an adult, I wish I could be a kid again. It's quite amazing how fast your opinion can change. All the activities that I can do now do not compare to even one day as a child. On the other hand, I still enjoy being an adult, and I still have a lot to experience. My life as a child and as an adult is vastly different but still similar in small ways. When I was a child, I was allowed to do basically anything that I wanted to do. I would go outside, play with my friends, and play sports all day long. Being a child was fun and had many positive sides to it. As a child, I never had to worry about having to work or whether I had a test the next day. Life was fairly stress free. All I had to worry about was what time my favorite cartoons were going to be on Saturday morning or what my friends down the road were doing. On Saturdays I would wake up bright and early and sit in the living room in front of the television or go outside to meet my friends for a game of tag. As I sat around the house, all I could smell was my mother cooking breakfast. It was nice to have someone cook for you and not have to worry about feeding yourself. As a child I did not have to worry about money. There was no need to; all the money I needed was to buy some candy or a favorite movie or CD. Every week I would receive an allowance of ten dollars that I was allowed to do anything I pleased with. I thought this was a lot of money when I was young. When you're a child, there is never a thought of having to save up in order to purchase something you really want. Childhood does come with its great moments, but it's not as fun as it sounds. Being ...

Sunday, October 20, 2019

A Look At Sumer in Ancient History

A Look At Sumer in Ancient History In about 7200 B.C., a settlement, Catal Hoyuk (Çatal Hüyük), developed in Anatolia, south-central Turkey. About 6,000 Neolithic people lived there, in fortifications of linked, rectangular, mud-brick buildings. The inhabitants mainly hunted or gathered their food, but they also raised animals and stored surplus grains. Until recently, however, it was thought the earliest civilizations began somewhat further south, in Sumer. Sumer was the site of what is sometimes called an urban revolution affecting the entire Near East, lasting about a millennium and leading to changes in government, technology, the economy, and culture, as well as urbanization, according to Van de Mieroop's A History of the Ancient Near East.

Sumer's Natural Resources

For civilization to develop, the land must be fertile enough to support an expanding population. Early populations needed not only soil rich in nutrients but also water. Egypt and Mesopotamia (literally, the land between rivers), blessed with just such life-sustaining rivers, are sometimes referred to together as the Fertile Crescent. The two rivers Mesopotamia lay between were the Tigris and the Euphrates. Sumer came to be the name of the southern area near where the Tigris and Euphrates emptied into the Persian Gulf.

Population Growth in Sumer

When the Sumerians arrived in the 4th millennium B.C., they found two groups of people: one referred to by archaeologists as the Ubaidians, and the other an unidentified, possibly Semitic people. This is a point of contention Samuel Noah Kramer discusses in "New Light on the Early History of the Ancient Near East," American Journal of Archaeology (1948), pp. 156-164. Van de Mieroop says the rapid growth of population in southern Mesopotamia may have been the result of semi-nomadic people in the area settling down. In the next couple of centuries, the Sumerians developed technology and trade while they increased in population.
By perhaps 3800 they were the dominant group in the area. At least a dozen city-states developed, including Ur (with a population of maybe 24,000; like most population figures from the ancient world, this is a guess), Uruk, Kish, and Lagash.

Sumer's Self-Sufficiency Gave Way to Specialization

The expanding urban area was made up of a variety of ecological niches, out of which came fishermen, farmers, gardeners, hunters, and herdsmen [Van de Mieroop]. This put an end to self-sufficiency and instead prompted specialization and trade, which was facilitated by authorities within a city. That authority was based on shared religious beliefs and centered on the temple complexes.

Sumer's Trade Led to Writing

With an increase in trade, the Sumerians needed to keep records. The Sumerians may have learned the rudiments of writing from their predecessors, but they enhanced it. Their counting marks, made on clay tablets, were wedge-shaped indentations known as cuneiform (from cuneus, meaning wedge). The Sumerians also developed monarchy, the wooden wheel to help draw their carts, the plow for agriculture, and the oar for their ships. In time, another Semitic group, the Akkadians, migrated from the Arabian Peninsula to the area of the Sumerian city-states. The Sumerians gradually came under the political control of the Akkadians, while simultaneously the Akkadians adopted elements of Sumerian law, government, religion, literature, and writing.

Sources

Most of this introductory article was written in 2000.
It has been updated with material from Van de Mieroop, but still depends mainly on the old sources, some of which are no longer available online:

(http://loki.stockton.edu/~gilmorew/consorti/1anear.htm) The Middle East Inner Asia: A World Wide Web Research Institute
(art-arena.com/iran1.html) Map: a black-and-white map showing the Near East from 6000-4000 B.C.
(wsu.edu:8080/~dee/MESO/SUMER.HTM) The Sumerians: a clear, well-written history of the Sumerians, from Richard Hooker's World Cultures site.
(eurekanet.com/~fesmitha/h1/ch01.htm) Genesis in Sumer: Frank Smitha's chapter on the Sumerians includes information on education, religion, slavery, the role of women, and more. [Now at Sumer]

Saturday, October 19, 2019

The fundamentality of Holden's alienation -- a form of self-protection Essay

The fundamentality of Holden's alienation -- a form of self-protection to resist the process of maturity - Essay Example The book was banned in certain communities, however, because of Salinger's free language and frequent use of profanity. The Catcher in the Rye is a book about an adolescent boy caught between the desire to appear grown-up and suave and being repulsed by what he feels is phoniness in the adult world. The novel represents Holden Caulfield's attempts to come to terms with both of these polarities. Throughout the novel, the reader is presented time and time again with Holden's revolving patterns: his attempt to connect and his habit of alienating himself both from the adult world and from the people he meets. This essay will take a look at Holden's patterns of alienation, which in the writer's opinion represent his attempt to avoid the process of maturity. By constantly 'running away,' Holden manages to evade its demands and pressures. The novel describes a period of three days in Holden Caulfield's life. Holden sums it up in his talk with the psychiatrist: "I'll just tell you about this madman stuff that happened to me around last Christmas just before I got pretty run-down and had to come out here and take it easy." (The Catcher in the Rye, 1) "If you really want to hear about it, the first thing you'll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don't feel like going into it, if you want to know the truth" (1). This first interaction that we witness already sets forth some of Holden's patterns. Holden anticipates what the psychiatrist will ask him but has no desire to talk about details. This is one of Holden's repeating patterns throughout the novel: he reveals only as much as he sees fit, but rarely does he tell the whole truth.
He does not feel comfortable revealing his inner self to an adult, and we will see this again and again in the novel. But his attitude about sharing is not limited to adults; Holden often "shoots the bull" with his peers as well, but his words are evasive. The bottom line of his resistance to self-disclosure is that he does not feel comfortable in the world. In fact, he feels he is part of a world in which he does not really belong. This is often a typical adolescent attitude, but what sets Holden apart from his peers is that he searches for the truth: the truth about himself and the truth connected to innocence. Holden's story starts on "the day [he] left Pencey Prep" (The Catcher in the Rye, 2). There is a football game going on, but Holden does not participate. He instead wishes to feel some kind of good-bye as he is leaving the school. Holden is getting kicked out, as he did not "apply [himself]" (4) enough to his subjects. His non-committal attitude towards schoolwork might be puzzling to the reader, as he is very bright, but it reveals his deep resistance to "playing the game" of life by the rules set forth by others. He does not believe

Friday, October 18, 2019

Business plan- Reflective report Essay Example | Topics and Well Written Essays - 3000 words

Business plan- Reflective report - Essay Example However, this paper focuses on the barriers I faced in building organizational strategies to make this business plan a successful one. There were many lessons learned when we executed our marketing plan; we gained tremendous experience, and I was able to evaluate myself as the design director of the company. Even though we had a clear objective for carrying out organizational goals, there can sometimes be fuzziness in focusing on a specific direction. In this regard, there is always a need to identify strategies that could give appropriate solutions in conflicting situations. In my opinion, mutual understanding among the managers is a must to make a business plan successful. Once we identified the purpose of our business, we were ready for its implementation. However, planning events and identifying strategies is the difficult part. I felt that in order to deploy goals and objectives to different members of the workforce, communication and coordination were essential. These communicative strategies negotiate the meaning of situations where there are problems in identifying notions and solutions (Færch & Kasper, 1984). It is also important to track changes to the desired implementation plan and strategies, much as Benjamin Franklin once stated: "if you fail to plan, you are planning to fail but if you fail to track you are definitely going to fail to reach your desired future state" (Cox et al., 2014). This means that, as project director, I had to make sure that all the scheduled tasks were checked on a regular basis in order to reach the destination successfully. With time, I realized that there were actually three fundamental aspects that needed to be examined before building a strategy to execute our business plan. Firstly, we had to determine ourselves by having a clear set of roles and

The Features And Problems Of The Educational Process Essay

The Features And Problems Of The Educational Process - Essay Example He insists that education operates on two levels, the psychological and the sociological, and that the two combine to mold an individual in society to be their own person in the civilized society that is and continues to be; it trains them to make use of all their capabilities. He proposes school as a social institution that should be used to simplify life for young minds so that they do not become confused or have their capabilities develop prematurely (Dewey 463). It expounds and forms the social life that is instilled at home; the information imparted in schools and the lessons learned are a preparation for the future. The teachers in the school should not control or command the kids but rather attend to influences that will positively affect the child's future and to examinations that test the child's fitness for social life. The social life of a child is based on their social activities; thus the subject matter of the education taught in schools should progress civilization (Dewey 464). It should not be objective but should help interpret past experiences and develop new attitudes and interests. The author tries to inform us that when a child is held in a passive and absorbing attitude rather than an active and experimental one, the result is waste and friction (Dewey 465). Laws and regulations should not be imposed, because education is the moral duty of any society in order to shape itself in its own direction. John Dewey uses repetition and examples in his essay My Pedagogic Creed to convince his audience, mainly educators, parents of young students, and the community, that immediate education reform is of utmost importance for the proper social and psychological growth of children. He uses strong emotion and opinions on what the subject matter of education should be and on what he thinks is the duty of the teachers and of society in instilling education in a child.

Heart of Darkness Essay Example | Topics and Well Written Essays - 1000 words - 1

Heart of Darkness - Essay Example He says: He has to live in the midst of the incomprehensible, which is also detestable. And it has a fascination, too, that goes to work upon him. The fascination of the abomination—you know. Imagine the growing regrets, the longing to escape, the powerless disgust, the surrender, the hate. This quote serves as a pre-emptive explanation for why Marlow could not deny the power Kurtz held over other humans, despite his barbarity. Marlow then goes on to establish his love of reason and of things that are real. In describing the appearance of several natives along the shore, Marlow relates: It was something natural, that had its reason, that had a meaning. Now and then a boat from the shore gave one a momentary contact with reality. It was paddled by black fellows. You could see from afar the white of their eyeballs glistening. They shouted, sang; their bodies streamed with perspiration; they had faces like grotesque masks—these chaps; but they had bone, muscle, a wild vitality, an intense energy of movement, that was as natural and true as the surf along their coast. They wanted no excuse for being there. They were a great comfort to look at. For a time I would feel I belonged still to a world of straightforward facts; but the feeling would not last long. Something would turn up to scare it away. The love of the real and tangible, of work, effort and improvement, are themes Conrad returns to again and again through Marlow. The character Marlow likes belonging to a world where things really are as they appear. He does not like intrigues, rumors, or deviousness. He likes steel plates and rivets, honest emotion and truthfulness. The honest work, the sweat and effort of the natives, was a solace to Marlow as he was surrounded by plotting privateers. Marlow's disdain for intrigues and falsehood is embodied by the station manager. Of him, Marlow says: He was obeyed, yet he inspired neither love nor fear, nor even respect. He inspired uneasiness.
That was it! Uneasiness. Not a definite mistrust—just uneasiness—nothing more. This character is so loathsome to Marlow that he doesn't even inspire a single honest emotion. The manager is held in contempt in every way by Marlow. The only possible compliment that can be paid the man is that he survives, but even that is not attributed to any sort of effort on his part; it is simply a result of his constitution. In fact, the whole of the station is repugnant to Marlow. He states: There was an air of plotting about that station, but nothing came of it, of course. It was as unreal as everything else—as the philanthropic pretense of the whole concern, as their talk, as their government, as their show of work. The only real feeling was a desire to get appointed to a trading-post where ivory was to be had, so that they could earn percentages. The station was a mass of plots and intrigues so contrived that they never came to any account. The inhabitants of the station held titles but acted in no manner to accomplish the work associated with the title given. Work, and the importance of it, is mentioned by Marlow on several occasions in telling his story. This is important because it is a vital link between himself and Kurtz. Marlow reveals his feelings towards work when he states: I don't like work—no man does—but I like what is in the work,—the chance to find yourself. Your own reality—

Thursday, October 17, 2019

Nursing Theory Analysis Essay Example | Topics and Well Written Essays - 2250 words

Nursing Theory Analysis - Essay Example Dr. Neuman is a renowned pioneer in the context of active nursing involvement in community mental health (Nurses.info, 2010). Her experiences in the field of medical nursing have created an extensive impact on theory development. It can be affirmed that she became quite successful in bringing greater diversity to this particular field by acquiring in-depth knowledge of medical nursing. Moreover, she developed an efficient model, which has been named The Neuman Systems Model. Owing to her expertise and experience, she was able to make full use of the model and gave a new direction to the overall field of medical nursing (Nurses.info, 2010). 3. Examine Crucial References For The Original and/or Current Work Of The Theorist And Other Authors Writing About The Selected Theory In relation to The Neuman Systems Model, the original work from Dr. Neuman created a greater relevance for this particular model. As the selected theory provided various positive ideas regarding broader aspects of the clinical practice setting, it also provided grounds for other authors and theorists to discuss the relevance of the theory in the field of medical nursing. According to Reed (1993), The Neuman Systems Model provided great aid in delivering an effective framework and in offering better wellness to patients (Reed, 1993). In accordance with the viewpoints of Bomar (2004), the aforesaid model can be understood as a broad tool which values the aspect of health promotion at large. In addition, it can be affirmed that if this particular model is effectively utilized, then it could lead towards... The Neuman Systems Model is duly considered to be a nursing theory fundamentally based on a person's relation to stress, the response to it, and re-structuring aspects that are dynamic in nature.
The theory was developed by Betty Neuman, a professor, community health nurse, and counselor. The fundamental elements of the model comprise several energy resources (genetic structure, organ strength or weakness, and normal temperature range, among others) that are bordered by numerous lines of resistance, i.e. the flexible and the normal lines of defense. In this regard, the normal line of defense represents the person's state of balance, and the flexible line of defense signifies the dynamic nature of the individual's state of balance, which can change speedily over a short phase of time. The Neuman Systems Model is duly considered one of the imperative models that has shown great relevance to the overall field of nursing and medication. Moreover, the model has shown a high level of applicability in the field of nursing. On the positive side, the theory has been analyzed in an in-depth manner, and based on a proper analysis it has been realized that the theory can deliver long-term benefits across the overall field of nursing. In addition, the theory can be applied in several broad areas, including education, administration or informatics, and practice, among others.

Human nature and Western civilization Essay Example | Topics and Well Written Essays - 1000 words

Human nature and Western civilization - Essay Example No one can simply call a place "the West." Civilization: what does it mean? Around three hundred years ago, European intellectuals, inspired by the astonishing cultural changes they had witnessed over the previous century, began to develop the concept of "civilization" as a way of describing the differences they perceived between their manner of understanding the world and that of other peoples. These intellectuals were convinced that their fellow Europeans had recently discovered the one true way of understanding nature, including human nature, and that they had done so by liberating themselves from the prejudices, superstitions, and dogmatic ignorance of those who had preceded them. The label "civilization" was born in this context as a term of collective description connoting the "advanced" beliefs, practices, and cultural habits which the Europeans had acquired. For Europeans, "civilization" was the benefit they had received from the intellectual upheavals which had overturned medieval barbarism and ignorance. Similarly, when Europeans began to travel around the globe on their many voyages of discovery and conquest, they carried their notion of "civilization" with them, using it to describe the differences they saw between their manner of viewing things and those of the people they encountered. The idea of "Western Civilization" was thus born when Europeans began to employ the new concept of "civilization" to contrast the European approach to life and nature (which they believed to be the one, true, "modern" way of viewing things) with that of non-Europeans. In this respect, the concept of "Western Civilization" emerged in... The essay focuses on human nature and Western civilization and their terms. Take, for example, human nature. What is human nature? Is it different from human behavior? Human nature is affected by both genetic and experiential factors.
People develop just the way they are because of the social circumstances they were born into and in the context of their genetic potential. How about civilization? What does it mean? The essay's conclusion focuses on this point: one handles emotions according to society and genetic predisposition, as mentioned. In the end, though, we can determine that the power struggle and the desire to be the leader of the pack still govern the motivations of people throughout the course of human history. Whether in the distant reaches of history or under different forms of government, humans will be humans and will continue to have the same weaknesses, just different manifestations as dictated by time.

Wednesday, October 16, 2019

Nursing Theory Analysis Essay Example | Topics and Well Written Essays - 2250 words

Nursing Theory Analysis - Essay Example Dr. Neuman is a renowned pioneer in the context of active nursing involvement in community mental health (Nurses.info, 2010). Her experiences in the field of medical nursing have been able to create extensive impact in theory of development. It can be affirmed that she became quite successful in forming greater diversity in this particular field by acquiring in-depth knowledge about the field of medical nursing. Moreover, she developed an efficient model which has been named as The Neuman Systems Model. Due to her expertise and experience, she had been able to make full utilization of the model and gave a new direction to the overall field of medical nursing (Nurses.info, 2010). 3. Examine Crucial References For The Original and/or Current Work Of The Theorist And Other Authors Writing About The Selected Theory In relation to The Neuman Systems Model, the original work that appeared from the part of Dr. Neuman created a greater relevance to this particular model. As the selected theo ry provided various positive ideologies regarding broader aspects of effective clinical practice setting, it also delivered grounds for other authors and theorists to discuss about the b relevance of the theory in the field of medical nursing. According to Reed (1993), The Neuman Systems Model provided a greater aid in delivering an effective framework and also in offering better wellness to the patients (Reed, 1993). In accordance with the viewpoints of Bomar (2004), the aforesaid model can be understood as a broad tool which values the aspect of health promotion at large. In addition, it can be affirmed that if this particular model is effectively utilized, then it could lead towards... The Neuman Systems Model is duly considered to be a nursing theory which is fundamentally based on a person’s affiliation to stress, the response to it and re-structuring aspects that are vibrant in nature. 
The theory was developed by Betty Neuman, a professor, community health nurse and counselor. The fundamental elements of the model comprise several energy resources, encompassing genetic structure and organ strength or weakness along with normal temperature range among others, that are bordered by numerous lines of resistance, i.e. the flexible and the normal line of defense. In this regard, the normal line of defense represents the person's state of balance, while the flexible line of defense signifies the vibrant nature of an individual's state of balance, which can change speedily over a short phase of time. The Neuman Systems Model is duly considered to be one of the imperative models which showed greater relevance in the overall field of nursing and medication. Moreover, it has also been viewed that the model showed a high level of applicability in the field of nursing. On a positive side, the theory has been analyzed in an in-depth manner, and based on a proper analysis it has been realized that the theory can be utilized to deliver long-term benefits in the overall field of nursing. In addition, the theory can be applied in several broad areas that include education, administration or informatics, and practice among others.

Tuesday, October 15, 2019

Critically evaluate the current status of the setting including policies and practices Essay Example for Free

Critically evaluate the current status of the setting including policies and practices Essay Self-reflection is a very important tool to be used in order to keep the nursery up to date with current legislation and to raise service standards. By regularly looking at where we are as a setting we can ensure that we continue to offer high quality education to our children. "Research has proven that self-reflection and evaluation both support good practice within a setting as a part of continual development. Importantly this self-reflection supports good outcomes for children." (Barber and Paul-Smith 2009, pg. 8) We have been using the Ofsted SEF to evaluate where we are doing well and assess where we need to improve. "The self-evaluation form is designed to help early years providers to review and improve their practice, so that it is of the highest standard and offers the best experience for young children. Importantly it is a useful tool for you and any assistants or staff to evaluate the impact of what you do on children's welfare, learning and development." (Ofsted 2009, pg. 13)

PEST ANALYSIS

POLITICAL
- Politically unsettled
- Arab spring
- Benevolent dictatorship
- No pressure groups
- Frequent changes of legislation but no clear guidelines
- No official body or organization to refer to
- Government policies are not consistent and not properly disseminated

ECONOMICAL
- Insecure financial world markets
- Fluctuating exchange rates
- Expensive living
- Uncertainties in the economy
- No direct income taxes or VAT
- Several indirect taxes
- Increasing running costs
- Rapid expansion of the Early Years industry
- Increasing competition

SOCIAL
- Increasing number of working mothers
- Wider range of people
- Image of good standard of living and overall safety
- People moving and settling in the area
- Increased focus on Early Years Education
- Attractive area for different nationalities
- Broad spectrum of curriculum covered in the country

TECHNOLOGICAL
- Easier information access (internet)
- Wider audience
- Better ability to reach out to the community
- Blogging (positive and/or negative)
- Personal technology
- CCTV cameras

(name of city) is a safe place, and its economy is considered ever flourishing, but due to the ongoing global economic crisis it has its financial restraints. This analysis has also outlined how a business can be easily set up but also how difficult it is for it to thrive due to frequent changes in the legislation and the lack of consistent guidelines. Nevertheless, the increased interest in Early Years Education will soon allow practitioners to offer all children high standards of learning.

SWOT ANALYSIS

STRENGTHS
- Well trained, qualified and experienced staff
- Impressive facilities
- Outstanding outdoor area
- Central location
- Extra-curricular activities
- Early years themed workshops and professional advisors

WEAKNESSES
- Turnover of staff due to the economic situation
- Policies and Procedures (only a few in place)
- Being tenant of the facilities
- Old building badly maintained
- Lack of training opportunities
- Limited parental involvement
- Limited managerial decision-making ability

OPPORTUNITIES
- Several marketing options
- Exponential growth of the market
- Partnership with professionals in childcare
- Diversified skills of staff with different backgrounds
- Wide professional network

THREATS
- Extremely high price of rent
- Frequent new regulations with high implementation costs
- New nurseries poaching staff and raising salary expectations
- Loss of key staff
- Frequent family relocations

(name of nursery) is a newly established nursery in (name of city) that was initially planned with a greater focus on business rather than learning.
Policies and Procedures were compiled and printed out hastily, without team brainstorming or subsequent effective compliance. The absence of specific legal requirements in the country makes it possible for nurseries and day cares to operate, though in the best interest of children, without proper policies and procedures in place. Going through the process of self-evaluation has been the most valuable eye opener possible for me. I now know where we are and where we want to be. PEST and SWOT analysis have allowed me to celebrate our strengths and pinpoint our weaknesses. Generally all policies will need to be rewritten, implemented and properly disseminated, but priority will be given to the Child Protection and Behaviour Management Policies. A staff training plan also needs to be put in place in order to ensure high standards in the care and education of all children. Last but not least, it will be paramount to monitor progress and ensure that our procedures reflect our policies and that good practice is consistent throughout the academic year by gathering evidence regularly.

Monday, October 14, 2019

Decision Tree for Prognostic Classification

Decision Tree for Prognostic Classification Decision Tree for Prognostic Classification of Multivariate Survival Data and Competing Risks 1. Introduction A decision tree (DT) is one way to represent rules underlying data. It is the most popular tool for exploring complex data structures. Besides that, it has become one of the most flexible, intuitive and powerful data analytic tools for determining distinct prognostic subgroups with similar outcomes within each subgroup but different outcomes between the subgroups (i.e., prognostic grouping of patients). It is a hierarchical, sequential classification structure that recursively partitions the set of observations. Prognostic groups are important in assessing disease heterogeneity and for the design and stratification of future clinical trials. Because patterns of medical treatment are changing so rapidly, it is important that the results of the present analysis be applicable to contemporary patients. Due to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazards regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazards regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not lead to faithful data descriptions when the underlying assumptions are not satisfied. Sometimes, model interpretation can be problematic in the presence of high-order interactions among predictors. DT has evolved to relax or remove these restrictive assumptions. In many cases, DT is used to explore data structures and to derive parsimonious models. DT is selected to analyze the data rather than traditional regression analysis for several reasons. Discovery of interactions is difficult using traditional regression, because the interactions must be specified a priori.
In contrast, DT automatically detects important interactions. Furthermore, unlike traditional regression analysis, DT is useful in uncovering variables that may be largely operative within a specific patient subgroup but may have minimal effect or none in other patient subgroups. Also, DT provides a superior means for prognostic classification. Rather than fitting a model to the data, DT sequentially divides the patient group into two subgroups based on prognostic factor values (e.g., tumor size above or below a cutoff). The landmark work of DT in the statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984). A different approach was C4.5, proposed by Quinlan (1993). The original DT method was used in classification and regression for categorical and continuous response variables, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to event, or some other incomplete (that is, censored) outcome. Therefore, several authors have developed extensions of the original DT in the setting of censored survival data (Banerjee & Noone, 2008). In science and technology, interest often lies in studying processes which generate events repeatedly over time. Such processes are referred to as recurrent event processes, and the data they provide are called recurrent event data, which fall under multivariate survival data. Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include the occurrence of asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles, or insurance claims for policy holders.
Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events, further extensions of DT have been developed for such kinds of data. In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe a myocardial infarction (MI). If by the end of the study each patient was either observed to have MI or was alive and well, then the usual survival techniques can be applied. In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation because death from other causes prohibits the occurrence of MI. MI is considered the event of interest, while death from other causes is considered a competing risk. The group of patients dead of other causes cannot be considered censored, since their observations are not incomplete. The extension of DT can also be employed for competing risks survival time data. These extensions make it possible to apply the technique to clinical trial data to aid in the development of prognostic classifications for chronic diseases. This chapter will cover DT for multivariate and competing risks survival time data as well as their application in the development of medical prognosis. Two kinds of multivariate survival time regression models, i.e. the marginal and the frailty regression model, have their own DT extensions. The extension of DT for competing risks has two types of tree. First, the "single event" DT is developed based on a splitting function using one event only. Second, the "composite events" tree uses all the events jointly. 2. Decision Tree A DT is a tree-like structure used for classification, decision theory, clustering, and prediction functions.
It depicts rules for dividing data into groups based on the regularities in the data. A DT can be used for categorical and continuous response variables. When the response variable is continuous, the DT is often referred to as a regression tree. If the response variable is categorical, it is called a classification tree. However, the same concepts apply to both types of trees. DTs are widely used in computer science for data structures, in medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire set of data, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the split size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure. A DT model consists of two parts: creating the tree and applying the tree to the data. To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984). This algorithm helped DTs gain credibility and acceptance in the statistics community. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The most widely used algorithms by computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first versions of C4.5 and C5.0 were limited to categorical predictors; however, the most recent versions are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical responses (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE and QUEST (Loh, 2008). These algorithms use different approaches for splitting variables.
CART, CRUISE, GUIDE and QUEST use the statistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node is equal to the number of possible categories. Another common approach, used by AID, CHAID, and TREEDISC, is one in which the number of nodes off an internal node varies from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables. Let us illustrate the DT method with a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model that can be used for evaluating potential candidates based on its historical customer data. The company's main concern is the default of payment by a cardholder. Therefore, the model should be able to help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields. A fragment of such a database is shown in Table 1. The input variables include income, age, education, occupation, and many others, determined by some quantitative or qualitative methods.

Table 1. Partial records and fields of a database table for credit evaluation

Name     Age  Income  Education    Occupation   Default
Andrew   42   45600   College      Manager      No
Allison  26   29000   High School  Self Owned   Yes
Sabrina  58   36800   High School  Clerk        No
Andy     35   37300   College      Engineer     No
...

The model building process is illustrated in the tree structure in Figure 1. The DT algorithm first selects a variable, income, to split the dataset into two subsets. This variable, and also the splitting value of $31,000, is selected by a splitting criterion of the algorithm. There exist many splitting criteria (Mingers, 1989). The basic principle of these criteria is that they all attempt to divide the data into clusters such that variations within each cluster are minimized and variations between the clusters are maximized. The follow-up splits are similar to the first one. The process continues until an appropriate tree size is reached. Figure 1 shows a segment of the DT. Based on this tree model, a candidate with income of at least $31,000 and at least a college degree is unlikely to default on the payment; but a self-employed candidate whose income is less than $31,000 and whose age is less than 28 is more likely to default. We begin with a discussion of the general structure of a popular DT algorithm in the statistical community, i.e. the CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables (X = (X1, X2, …, Xp)). This model has two main components: a tree T with b terminal nodes, and a parameter Q = (q1, q2, …, qb), qm ∈ Rk, which associates the parameter value qm with the mth terminal node. Thus a tree model is fully specified by the pair (T, Q). If X lies in the region corresponding to the mth terminal node, then y|X has the distribution f(y|qm), where we use f to represent a conditional distribution indexed by qm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively. 2.1 Splitting a tree The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule which uses a predictor to assign observations to either its left or right child node. The internal nodes are thus partitioned into two subsequent nodes using the splitting rule. For quantitative predictors, the splitting rule is based on a split point c, and assigns observations for which {xi ≤ c} to the left child node and those for which {xi > c} to the right. For a regression tree, the conventional algorithm models the response in each region Rm as a constant qm.
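The greedy split search in the credit example can be sketched in a few lines of Python. The records, the Gini criterion, and the function names below are invented toy stand-ins modeled on Table 1, not part of the original analysis:

```python
# Toy sketch of a greedy best-split search on income, using the Gini index.

def gini(labels):
    """Gini impurity of a list of binary class labels ("Yes"/"No")."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(1 for y in labels if y == "Yes") / n
    return 2 * p * (1 - p)

def best_income_split(records):
    """Scan candidate income thresholds; return (threshold, weighted impurity)."""
    best = None
    incomes = sorted({r["income"] for r in records})
    for c in incomes[:-1]:  # a split at the largest value would leave one side empty
        left = [r["default"] for r in records if r["income"] <= c]
        right = [r["default"] for r in records if r["income"] > c]
        n = len(records)
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if best is None or score < best[1]:
            best = (c, score)
    return best

# Invented records loosely following Table 1.
data = [
    {"income": 45600, "default": "No"},
    {"income": 29000, "default": "Yes"},
    {"income": 36800, "default": "No"},
    {"income": 37300, "default": "No"},
    {"income": 30500, "default": "Yes"},
]

threshold, impurity = best_income_split(data)
print(threshold, impurity)
```

A real algorithm would repeat this scan over every predictor and recurse into each child node, which is exactly the follow-up-splits loop the text describes.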
Thus the overall tree model can be expressed as (Hastie et al., 2001):

f(x) = Σ_{m=1}^{b} qm I(x ∈ Rm), (1)

where the Rm, m = 1, 2, …, b, form a partition of the predictor space into the regions of the b terminal nodes. If we adopt the method of minimizing the sum of squares Σi (yi − f(xi))² as our criterion to characterize the best split, it is easy to see that the best estimate q̂m is just the average of yi in region Rm:

q̂m = (1/Nm) Σ_{xi ∈ Rm} yi, (2)

where Nm is the number of observations falling in node m. The residual sum of squares is

Qm(T) = (1/Nm) Σ_{xi ∈ Rm} (yi − q̂m)², (3)

which will serve as an impurity measure for regression trees. If the response is a factor taking outcomes 1, 2, …, K, the impurity measure Qm(T) defined in (3) is not suitable. Instead, we represent a region Rm with Nm observations by

p̂mk = (1/Nm) Σ_{xi ∈ Rm} I(yi = k), (4)

which is the proportion of class k (k ∈ {1, 2, …, K}) observations in node m. We classify the observations in node m to the class k(m) = arg maxk p̂mk, the majority class in node m. Different measures Qm(T) of node impurity include the following (Hastie et al., 2001):

Misclassification error: 1 − p̂m,k(m)
Gini index: Σk p̂mk (1 − p̂mk)
Cross-entropy or deviance: −Σk p̂mk log p̂mk (5)

For binary outcomes, if p is the proportion of the second class, these three measures are 1 − max(p, 1 − p), 2p(1 − p) and −p log p − (1 − p) log(1 − p), respectively. All three definitions of impurity are concave, having minima at p = 0 and p = 1 and a maximum at p = 0.5. Entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories. 2.2 Pruning a tree To be consistent with conventional notation, let's define the impurity of a node h as I(h) ((3) for a regression tree, and any one of the measures in (5) for a classification tree). We then choose the split with maximal impurity reduction

ΔI(h) = p(h) I(h) − p(hL) I(hL) − p(hR) I(hR), (6)

where hL and hR are the left and right children nodes of h and p(h) is the proportion of the sample falling in node h. How large should we grow the tree then? Clearly a very large tree might overfit the data, while a small tree may not be able to capture the important structure.
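The three binary-outcome impurity measures listed in (5) can be checked numerically. The following small self-contained sketch is illustrative code, not from the source text:

```python
import math

def misclassification(p):
    # 1 - max(p, 1 - p): error rate of always predicting the majority class
    return 1 - max(p, 1 - p)

def gini(p):
    # 2p(1 - p) for two classes
    return 2 * p * (1 - p)

def entropy(p):
    # -p log p - (1 - p) log(1 - p), with 0 log 0 taken as 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

# All three vanish at p = 0 and p = 1 and peak at p = 0.5.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, misclassification(p), gini(p), round(entropy(p), 4))
```

Running this confirms the concavity claim in the text: each measure is zero at the pure nodes p = 0 and p = 1 and largest at p = 0.5.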
Tree size is a tuning parameter governing the model's complexity, and the optimal tree size should be adaptively chosen from the data. One approach would be to continue the splitting procedure only while the decrease in impurity due to a split exceeds some threshold. This strategy is too short-sighted, however, since a seemingly worthless split might lead to a very good split below it. The preferred strategy is to grow a large tree T0, stopping the splitting process when some minimum number of observations in a terminal node (say 10) is reached. Then this large tree is pruned using a pruning algorithm, such as the cost-complexity or split-complexity pruning algorithm. To prune the large tree T0 by using the cost-complexity algorithm, we define a subtree T ⊆ T0 to be any tree that can be obtained by pruning T0, that is, by collapsing any number of its internal nodes, and define T̃ to be the set of terminal nodes of T. As before, we index terminal nodes by m, with node m representing region Rm. Let |T̃| denote the number of terminal nodes in T (= b). We use |T̃| instead of b following the conventional notation and define the cost of a tree as

Regression tree: R(T) = Σ_{m=1}^{|T̃|} Nm Qm(T),
Classification tree: R(T) = Σ_{h ∈ T̃} p(h) r(h), (7)

where r(h) measures the impurity of node h in a classification tree (it can be any one of the measures in (5)). We define the cost-complexity criterion (Breiman et al., 1984)

Rα(T) = R(T) + α|T̃|, (8)

where α (> 0) is the complexity parameter. The idea is, for each α, to find the subtree Tα ⊆ T0 that minimizes Rα(T). The tuning parameter α > 0 governs the tradeoff between tree size and its goodness of fit to the data (Hastie et al., 2001). Large values of α result in a smaller tree Tα, and conversely for smaller values of α. As the notation suggests, with α = 0 the solution is the full tree T0. To find Tα we use weakest-link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we produce the single-node (root) tree.
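The weakest-link step can be illustrated on a tiny hand-built tree. The node structure and risk values below are invented for illustration; g(h) here is the per-node increase in risk incurred by collapsing the branch rooted at h:

```python
# Illustrative weakest-link search on a toy tree (all numbers invented).

class Node:
    def __init__(self, risk, left=None, right=None):
        self.risk, self.left, self.right = risk, left, right

def leaves(node):
    if node.left is None:  # terminal node
        return [node]
    return leaves(node.left) + leaves(node.right)

def internal_nodes(node):
    if node.left is None:
        return []
    return [node] + internal_nodes(node.left) + internal_nodes(node.right)

def g(node):
    # Per-node increase in total risk R(T) if the branch rooted at `node`
    # is collapsed into a single terminal node.
    branch_risk = sum(leaf.risk for leaf in leaves(node))
    return (node.risk - branch_risk) / (len(leaves(node)) - 1)

def weakest_link(root):
    # The internal node whose collapse costs least per removed split.
    return min(internal_nodes(root), key=g)

# Invented example: a root with two internal children.
tree = Node(10.0,
            Node(6.0, Node(1.0), Node(2.0)),
            Node(4.0, Node(1.5), Node(2.0)))
print(g(weakest_link(tree)))
```

Collapsing the cheapest branch, recomputing g, and repeating until only the root remains is exactly the successive-collapse procedure the text describes.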
This gives a (finite) sequence of subtrees, and one can show that this sequence must contain Tα. See Breiman et al. (1984) and Ripley (1996) for details. Estimation of α, say α̂, is achieved by five- or ten-fold cross-validation. Our final tree is then denoted Tα̂. It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds in a top-down fashion, the second stage, known as pruning, proceeds from the bottom up by successively removing nodes from the initial tree. Theorem 1 (Breiman et al., 1984, Section 3.3) For any value of the complexity parameter α, there is a unique smallest subtree of T0 that minimizes the cost-complexity. Theorem 2 (Zhang & Singer, 1999, Section 4.2) If α2 > α1, the optimal subtree corresponding to α2 is a subtree of the optimal subtree corresponding to α1. More generally, suppose we end up with m thresholds 0 < α1 < α2 < … < αm; then

Tα1 ⊇ Tα2 ⊇ … ⊇ Tαm, (9)

where Tαi ⊇ Tαi+1 means that Tαi+1 is a subtree of Tαi. These are called nested optimal subtrees. 3. Decision Tree for Censored Survival Data Survival analysis is the phrase used to describe the analysis of data that correspond to the time from a well-defined time origin until the occurrence of some particular event or end-point. It is important to state what the event is and when the period of observation starts and finishes. In medical research, the time origin will often correspond to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse event. Survival data are rarely normally distributed; rather, they are skewed and typically comprise many early events and relatively few late ones. It is these features of the data that necessitate the special methods of survival analysis.
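One standard such method (not discussed explicitly in the text) is the Kaplan-Meier estimator of the survival function, which lets censored observations contribute to the risk set without counting as events. A minimal self-contained sketch with invented data:

```python
# Minimal Kaplan-Meier estimator for right-censored data (toy illustration).

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time; event=1 means observed."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            surv *= (n_at_risk - deaths) / n_at_risk
            out.append((t, surv))
        # Censored subjects leave the risk set but trigger no factor above.
        n_at_risk -= ties
        i += ties
    return out

# Invented follow-up times (months); event=1 observed, event=0 censored.
times = [3, 5, 5, 8, 10, 12]
events = [1, 0, 1, 1, 0, 1]
print(kaplan_meier(times, events))
```

Note how the subjects censored at 5 and 10 months shrink the risk set without producing a drop of their own, which is precisely the asymmetry between observed and censored times that the next paragraph formalizes as right censoring.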
The specific difficulties relating to survival analysis arise largely from the fact that only some individuals have experienced the event and, consequently, survival times will be unknown for a subset of the study group. This phenomenon is called censoring and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times may vary from individual to individual. Such a censored survival time underestimates the true (but unknown) time to event. Visualising the survival process of an individual as a time-line, the event (assuming it is to occur) is beyond the end of the follow-up period. This situation is often called right censoring. Most survival data include right-censored observations. In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. The Cox proportional hazards model (Cox, 1972) has been established as the major framework for the analysis of such survival data over the past three decades. But often in practice, one primary goal of survival analysis is to extract meaningful subgroups of patients determined by prognostic factors, such as patient characteristics, that are related to the level of disease. Although the proportional hazards model and its extensions are powerful in studying the association between covariates and survival times, they are usually problematic in prognostic classification. One approach for classification is to compute a risk score based on the estimated coefficients from regression methods (Machin et al., 2006). This approach, however, may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on the correct specification of the model.
It is difficult to check whether the model is correct when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the result becomes difficult to interpret for the purpose of prognostic classification. Finally, a more serious problem is that an invalid prognostic group may be produced if no patient is included in a covariate profile. In contrast, DT methods do not suffer from these problems. Owing to the development of fast computers, computer-intensive methods such as DT methods have become popular. Since these investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts. Recently a large number of DT methods have been developed for the analysis of survival data, where the basic concepts for growing and pruning trees remain unchanged, but the choice of the splitting criterion has been modified to incorporate censored survival data. The application of DT methods to survival data is described by a number of authors (Gordon & Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis & Anderson, 1989; Therneau et al., 1990; LeBlanc & Crowley, 1992; LeBlanc & Crowley, 1993; Ahn & Loh, 1994; Bacchetti & Segal, 1995; Huang et al., 1998; Keleş & Segal, 2002; Jin et al., 2004; Cappelli & Zhang, 2007; Cho & Hong, 2008), including the text by Zhang & Singer (1999). 4. Decision Tree for Multivariate Censored Survival Data Multivariate survival data frequently arise when we face the complexity of studies involving multiple treatment centres, family members, and measurements made repeatedly on the same individual. For example, in multi-centre clinical trials, the outcomes for groups of patients at several centres are examined. In some instances, patients in a centre might exhibit similar responses due to uniformity of surroundings and procedures within a centre. This would result in correlated outcomes at the level of the treatment centre.
For the situation of studies of family members or litters, correlation in outcome is likely for genetic reasons. In this case, the outcomes would be correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will most definitely exist in those responses. Within the context of correlated data, the observations which are correlated for a group of individuals (within a treatment centre or a family) or for one individual (because of repeated sampling) are referred to as a cluster, so that from this point on, the responses within a cluster will be assumed to be correlated. Analysis of multivariate survival data is complex due to the presence of dependence among survival times and unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events. A successful treatment of correlated failure times was made by Clayton and Cuzick (1985), who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazards formulation of the marginal hazard function, which has been studied by Wei et al. (1989) and Liang et al. (1993). Noticeably, Prentice et al. (1981) and Andersen & Gill (1982) also suggested two alternative approaches to analyze multiple event times. The extension of tree techniques to multivariate censored data is motivated by the classification issue associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules. Credit risk analysts collect account information to build up credit scoring criteria. Frequently, in such studies the outcomes of ultimate interest are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools.
This section is concerned with the generalization of DT models to multivariate survival data. In attempting to facilitate an extension of DT methods to multivariate survival data, further difficulties need to be circumvented. 4.1 Decision tree for multivariate survival data based on marginal model DT methods for multivariate survival data are not many. Almost all the multivariate DT methods have been based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al. (2004, 2006) concentrated on between-node heterogeneity and used the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity. Su & Fan (2001) and Fan et al. (2006) used a robust log-rank statistic while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989). The generalization of DT for multivariate survival data is developed by using the goodness-of-split approach. A DT by goodness of split is grown by maximizing a measure of between-node difference. Therefore, only internal nodes have associated two-sample statistics. The tree structure is different from CART because, for trees grown by minimizing within-node error, each node, either terminal or internal, has an associated impurity measure. This is why the CART pruning procedure is not directly applicable to such types of trees. However, the split-complexity pruning algorithm of LeBlanc & Crowley (1993) has made trees by goodness of split well-developed tools. This modified tree technique not only provides a convenient way of handling survival data, but also enlarges the applied scope of DT methods in a more general sense.
Especially for those situations where defining prediction error terms is relatively difficult, growing trees by a two-sample statistic, together with split-complexity pruning, offers a feasible way of performing tree analysis. The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees, the between-node difference is measured by a robust Wald statistic, which is derived from a marginal approach to multivariate survival data developed by Wei et al. (1989). We use split-complexity pruning borrowed from LeBlanc & Crowley (1993) and use a test sample for determining the right tree size. 4.1.1 The splitting statistic We consider n independent subjects, but allow each subject to have K potential types or numbers of failures. If there is an unequal number of failures within the subjects, then K is the maximum. We let Tik = min(Yik, Cik), where Yik is the time of failure of the ith subject for the kth type of failure and Cik is the potential censoring time of the ith subject for the kth type of failure, with i = 1, …, n and k = 1, …, K. Then dik = I(Yik ≤ Cik) is the indicator for failure, and the vector of covariates is denoted Zik = (Z1ik, …, Zpik)T. To partition the data, we consider the hazard model for the ith unit for the kth type of failure, using the distinguishable baseline hazards described by Wei et al. (1989), namely

λik(t) = λ0k(t) exp{β I(Zik ≤ c)}, (10)

where the indicator function I(Zik ≤ c) equals 1 when the covariate falls at or below the candidate split point c and 0 otherwise. Parameter β is estimated by maximizing the partial likelihood. If the observations within the same unit were independent, the partial likelihood function for β under the distinguishable baseline model (10) would be

Lk(β) = Π_{i=1}^{n} [ exp{β I(Zik ≤ c)} / Σ_{j ∈ Rk(Tik)} exp{β I(Zjk ≤ c)} ]^{dik}, (11)

where Rk(t) denotes the risk set for the kth type of failure at time t. Since the observations within the same unit are not independent for multivariate failure times, we refer to the above functions as the pseudo-partial likelihood.
The estimator can be obtained by maximizing the likelihood by solving . Wei et al. (1989) showed that is normally distributed with mean 0. However the usual estimate, a-1(b), for the variance of , where (12) is not valid. We refer to a-1(b) as the naà ¯ve estimator. Wei et al. (1989) showed that the correct estimated (robust) variance estimator of is (13) where b(b) is weight and d(b) is often referred to as the robust or sandwich variance estimator. Hence, the robust Wald statistic corresponding to the null hypothesis H0 : b = 0 is (14) 4.1.2 Tree growing To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z. The split, s, could be of several forms: splits on a single covariate, splits on linear combinations of predictors, and boolean combination of splits. The simplest form of split relates to only one covariate, where the split depends on the type of covariate whether it is ordered or nominal covariate. The â€Å"best split† is defined to be the one corresponding to the maximum robust Wald statistic. Subsequently the data are divided into two groups according to the best split. Apply this splitting scheme recursively to the learning sample until the predictor space is partitioned into many regions. There will be no further partition to a node when any of the following occurs: The node contains less than, say 10 or 20, subjects, if the overall sample size is large enough to permit this. We suggest using a larger minimum node size than used in CART where the default value is 5; All the observed times in the subset are censored, which results in unavailability of the robust Wald statistic for any split; All the subjects have identical covariate vectors. Or the node has only complete observations with identical survival times. In these situations, the node is considered as pure. The whole procedure results in a large tree, which could be used for the purpose of data structure exploration. 
Decision Tree for Prognostic Classification of Multivariate Survival Data and Competing Risks

1. Introduction

A decision tree (DT) is one way to represent the rules underlying data. It is the most popular tool for exploring complex data structures, and it has become one of the most flexible, intuitive and powerful data-analytic tools for determining distinct prognostic subgroups with similar outcomes within each subgroup but different outcomes between subgroups (i.e., prognostic grouping of patients). A DT is a hierarchical, sequential classification structure that recursively partitions the set of observations. Prognostic groups are important in assessing disease heterogeneity and in the design and stratification of future clinical trials. Because patterns of medical treatment are changing so rapidly, it is important that the results of the present analysis be applicable to contemporary patients.
Due to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazards regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazards regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not lead to faithful data descriptions when the underlying assumptions are not satisfied, and model interpretation can be problematic in the presence of high-order interactions among predictors. DT has evolved to relax or remove these restrictive assumptions, and in many cases it is used to explore data structures and to derive parsimonious models. DT is chosen over traditional regression analysis for several reasons. Discovery of interactions is difficult using traditional regression, because interactions must be specified a priori; in contrast, DT detects important interactions automatically. Furthermore, unlike traditional regression analysis, DT is useful in uncovering variables that may be largely operative within a specific patient subgroup but may have minimal effect, or none, in other patient subgroups. Also, DT provides a superior means of prognostic classification: rather than fitting a model to the data, DT sequentially divides the patient group into two subgroups based on prognostic factor values (e.g., tumor size).

The landmark work on DT in the statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984). A different approach was C4.5, proposed by Quinlan (1992). The original DT methods were used in classification and regression for categorical and continuous response variables, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to event, or some other incomplete (that is, censored) outcome.
Therefore, several authors have developed extensions of the original DT in the setting of censored survival data (Banerjee & Noone, 2008).

In science and technology, interest often lies in studying processes which generate events repeatedly over time. Such processes are referred to as recurrent event processes, and the data they provide are called recurrent event data, a type of multivariate survival data. Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include the occurrence of asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles, or insurance claims for policy holders. Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events, further extensions of DT have been developed for such data.

In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe a myocardial infarction (MI). If by the end of the study each patient was either observed to have an MI or was alive and well, then the usual survival techniques can be applied. In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation, because death from other causes prohibits the occurrence of MI. MI is considered the event of interest, while death from other causes is considered a competing risk. The group of patients who died of other causes cannot be considered censored, since their observations are not incomplete.
The extension of DT can also be employed for competing risks survival time data. These extensions make it possible to apply the technique to clinical trial data to aid the development of prognostic classifications for chronic diseases. This chapter covers DT for multivariate and competing risks survival time data, as well as their application in the development of medical prognosis. The two kinds of multivariate survival time regression model, i.e. the marginal and the frailty regression model, each have their own DT extensions, while the extension of DT for competing risks has two types of tree: first, the "single event" DT, developed with a splitting function that uses one event only, and second, the "composite events" tree, which uses all the events jointly.

2. Decision Tree

A DT is a tree-like structure used for classification, decision theory, clustering, and prediction functions. It depicts rules for dividing data into groups based on the regularities in the data. A DT can be used for categorical and continuous response variables. When the response variable is continuous, the DT is often referred to as a regression tree; if the response variable is categorical, it is called a classification tree. However, the same concepts apply to both types of trees. DTs are widely used in computer science for data structures, in medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire set of data, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the split size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure.
A DT model consists of two parts: creating the tree and applying the tree to the data. To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984); this algorithm helped DTs gain credibility and acceptance in the statistics community. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The most widely used algorithms among computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first versions of C4.5 and C5.0 were limited to categorical predictors; however, the most recent versions are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical responses (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE and QUEST (Loh, 2008). These algorithms use different approaches for splitting variables: CART, CRUISE, GUIDE and QUEST use a statistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node equals the number of possible categories. Another common approach, used by AID, CHAID, and TREEDISC, is one in which the number of branches off an internal node varies from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables.

Let us illustrate the DT method with a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model that can be used for evaluating potential candidates based on its historical customer data. The company's main concern is default of payment by a cardholder; therefore, the model should help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields.
A fragment of such a database is shown in Table 1. The input variables include income, age, education, occupation, and many others, determined by quantitative or qualitative methods. The model-building process is illustrated by the tree structure in Figure 1. The DT algorithm first selects a variable, income, to split the dataset into two subsets. This variable, and also the splitting value of $31,000, is selected by a splitting criterion of the algorithm. There exist many splitting criteria (Mingers, 1989). The basic principle of these criteria is that they all attempt to divide the data into clusters such that variations within each cluster are minimized and variations between the clusters are maximized.

Name     Age   Income   Education     Occupation   Default
Andrew   42    45600    College       Manager      No
Allison  26    29000    High School   Self Owned   Yes
Sabrina  58    36800    High School   Clerk        No
Andy     35    37300    College       Engineer     No
…

Table 1. Partial records and fields of a database table for credit evaluation

The follow-up splits are similar to the first one, and the process continues until an appropriate tree size is reached. Figure 1 shows a segment of the DT. Based on this tree model, a candidate with income of at least $31,000 and at least a college degree is unlikely to default on the payment, but a self-employed candidate whose income is less than $31,000 and whose age is less than 28 is more likely to default.

We begin with a discussion of the general structure of a popular DT algorithm in the statistical community, i.e. the CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables (X = (X1, X2, …, Xp)). This model has two main components: a tree T with b terminal nodes, and a parameter Q = (q1, q2, …, qb) ⊂ Rk which associates the parameter value qm with the mth terminal node. Thus a tree model is fully specified by the pair (T, Q).
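The credit-evaluation tree described above can be sketched with scikit-learn's CART-style learner. The records below are invented to mimic Table 1 (they are not the chapter's actual database), and the fitted thresholds need not equal the $31,000 cutoff of the illustration:

```python
# Toy credit-evaluation tree (synthetic records, sklearn's CART-style learner).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: income, age, college_degree (1/0), self_employed (1/0)
X = np.array([
    [45600, 42, 1, 0],
    [29000, 26, 0, 1],
    [36800, 58, 0, 0],
    [37300, 35, 1, 0],
    [28000, 24, 0, 1],
    [52000, 31, 1, 0],
])
y = np.array(["No", "Yes", "No", "No", "Yes", "No"])  # default on payment?

clf = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
print(clf.score(X, y))                        # a fully grown tree fits all 6 records
print(clf.predict([[30000, 25, 0, 1]])[0])    # low-income, young, self-employed -> "Yes"
```

On these six distinct, consistently labelled records a single split (on income, age, or self-employment, all of which separate the classes perfectly here) already yields a pure tree.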
If X lies in the region corresponding to the mth terminal node, then y|X has the distribution f(y|qm), where we use f to represent a conditional distribution indexed by qm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively.

2.1 Splitting a tree

The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule which uses a predictor to assign observations to either its left or right child node. For a quantitative predictor, the splitting rule is based on a cutpoint c, and assigns observations for which {xi ≤ c} to the left child node; the internal node is thus partitioned into two child nodes by the splitting rule.

For a regression tree, the conventional algorithm models the response in each region Rm as a constant qm. Thus the overall tree model can be expressed as (Hastie et al., 2001):

f(x) = Σ_{m=1}^{b} qm I(x ∈ Rm), (1)

where Rm, m = 1, 2, …, b form a partition of the predictor space into the regions of the b terminal nodes. If we adopt minimization of the sum of squares as our criterion to characterize the best split, it is easy to see that the best estimate q̂m is just the average of the yi in region Rm:

q̂m = (1/Nm) Σ_{xi ∈ Rm} yi, (2)

where Nm is the number of observations falling in node m. The residual sum of squares,

Qm(T) = (1/Nm) Σ_{xi ∈ Rm} (yi − q̂m)², (3)

will serve as an impurity measure for regression trees. If the response is a factor taking values 1, 2, …, K, the impurity measure Qm(T) defined in (3) is not suitable. Instead, in a region Rm with Nm observations we compute

p̂mk = (1/Nm) Σ_{xi ∈ Rm} I(yi = k), (4)

the proportion of class k (k ∈ {1, 2, …, K}) observations in node m. We classify the observations in node m to the class k(m) = argmax_k p̂mk, the majority class in node m.
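The regression-tree split search implied by (2)-(3) can be sketched in a few lines: for each candidate cutpoint, fit the region means and keep the cutpoint minimizing the summed residual sum of squares. The data below are invented for illustration:

```python
# Exhaustive cutpoint scan for a single-predictor regression-tree split:
# pick the cutpoint c minimizing the total residual sum of squares.
import numpy as np

def best_split(x, y):
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(x)):
        c = (x[i - 1] + x[i]) / 2.0            # candidate cutpoint between neighbours
        left, right = y[:i], y[i:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best[1]:
            best = (c, rss)
    return best                                 # (cutpoint, total RSS)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 1.0, 1.0, 5.0, 5.0])
print(best_split(x, y))                         # -> (3.5, 0.0): a perfect split
```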
Different measures Qm(T) of node impurity include the following (Hastie et al., 2001):

Misclassification error: 1 − p̂mk(m);
Gini index: Σ_k p̂mk (1 − p̂mk);
Cross-entropy or deviance: −Σ_k p̂mk log p̂mk. (5)

For binary outcomes, if p is the proportion of the second class, these three measures are 1 − max(p, 1 − p), 2p(1 − p) and −p log p − (1 − p) log(1 − p), respectively. All three definitions of impurity are concave, with minima at p = 0 and p = 1 and a maximum at p = 0.5. Entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories.

2.2 Pruning a tree

To be consistent with conventional notation, let us define the impurity of a node h as I(h) ((3) for a regression tree, and any one of (5) for a classification tree). We then choose the split with maximal impurity reduction

ΔI(s, h) = I(h) − p(hL) I(hL) − p(hR) I(hR), (6)

where hL and hR are the left and right child nodes of h and p(·) is the proportion of the sample falling in a node.

How large should we grow the tree? Clearly a very large tree might overfit the data, while a small tree may not capture the important structure. Tree size is a tuning parameter governing the model's complexity, and the optimal tree size should be chosen adaptively from the data. One approach would be to split a node only if the decrease in impurity due to the split exceeds some threshold. This strategy is too short-sighted, however, since a seemingly worthless split might lead to a very good split below it. The preferred strategy is to grow a large tree T0, stopping the splitting process only when some minimum number of observations in a terminal node (say 10) is reached. This large tree is then pruned using a pruning algorithm, such as the cost-complexity or the split-complexity pruning algorithm.

To prune the large tree T0 by the cost-complexity algorithm, we define a subtree T ⊂ T0 to be any tree that can be obtained by pruning T0, that is, by collapsing any number of its internal nodes, and define T̃ to be the set of terminal nodes of T.
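The three impurity measures of (5) can be written out directly for a vector of class proportions (p̂m1, …, p̂mK); the two-class proportions below are chosen to show the worst-case node:

```python
# The node-impurity measures of equation (5), as functions of the
# class-proportion vector of a node.
import math

def misclassification(p):
    return 1.0 - max(p)

def gini(p):
    return sum(pk * (1.0 - pk) for pk in p)

def entropy(p):
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

p = [0.5, 0.5]               # a perfectly mixed two-class node
print(misclassification(p))  # 0.5
print(gini(p))               # 0.5   (= 2p(1 - p) at p = 0.5)
print(round(entropy(p), 4))  # 0.6931 (= log 2, the maximum)
```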
As before, we index terminal nodes by m, with node m representing region Rm. Let |T̃| denote the number of terminal nodes in T (= b); we use |T̃| instead of b following the conventional notation. We define the cost of a tree as

Regression tree: R(T) = Σ_{m=1}^{|T̃|} Nm Qm(T),
Classification tree: R(T) = Σ_{h ∈ T̃} p(h) r(h), (7)

where r(h) measures the impurity of node h in a classification tree (and can be any one of (5)). We define the cost-complexity criterion (Breiman et al., 1984)

Ra(T) = R(T) + a|T̃|, (8)

where a (> 0) is the complexity parameter. The idea is, for each a, to find the subtree Ta ⊂ T0 that minimizes Ra(T). The tuning parameter a > 0 governs the trade-off between tree size and goodness of fit to the data (Hastie et al., 2001). Large values of a result in a smaller tree Ta, and conversely for smaller values of a; as the notation suggests, with a = 0 the solution is the full tree T0. To find Ta we use weakest-link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we reach the single-node (root) tree. This gives a (finite) sequence of subtrees, and one can show that this sequence must contain Ta. See Breiman et al. (1984) and Ripley (1996) for details. Estimation of a is achieved by five- or ten-fold cross-validation, and the final tree is the one corresponding to the chosen a.

It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds in a top-down fashion, the second stage, known as pruning, proceeds from the bottom up by successively removing nodes from the initial tree.

Theorem 1 (Breiman et al., 1984, Section 3.3). For any value of the complexity parameter a, there is a unique smallest subtree of T0 that minimizes the cost-complexity.
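A single weakest-link step can be illustrated on a hand-encoded tree whose node costs R(h) are assumed already computed (the dictionary below is invented for illustration): for each internal node h, g(h) = (R(h) − R(T_h)) / (|leaves(T_h)| − 1) is the value of a at which collapsing the branch T_h becomes worthwhile, and the weakest link is the internal node with the smallest g(h).

```python
# Weakest-link search for cost-complexity pruning on a toy tree.
# Encoding: {node: (R(h), left_child, right_child)}; leaves have no children.
tree = {
    "root": (20.0, "A", "B"),
    "A":    (10.0, "A1", "A2"),
    "B":    (8.0,  None, None),
    "A1":   (4.0,  None, None),
    "A2":   (5.0,  None, None),
}

def leaves_and_cost(node):
    """Number of leaves and total leaf cost of the branch rooted at node."""
    cost, l, r = tree[node]
    if l is None:                       # terminal node
        return 1, cost
    nl, cl = leaves_and_cost(l)
    nr, cr = leaves_and_cost(r)
    return nl + nr, cl + cr

def weakest_link():
    best = (float("inf"), None)
    for h, (cost, l, r) in tree.items():
        if l is None:
            continue                    # only internal nodes can be collapsed
        n, branch_cost = leaves_and_cost(h)
        g = (cost - branch_cost) / (n - 1)
        if g < best[0]:
            best = (g, h)
    return best

print(weakest_link())                   # -> (1.0, 'A'): collapse branch A first
```

Here g("A") = (10 − 9)/1 = 1.0 and g("root") = (20 − 17)/2 = 1.5, so node "A" is collapsed first as a grows.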
Theorem 2 (Zhang & Singer, 1999, Section 4.2). If a2 > a1, the optimal subtree corresponding to a2 is a subtree of the optimal subtree corresponding to a1. More generally, suppose we end up with m thresholds 0 = a1 < a2 < … < am and corresponding optimal subtrees

T_{a1} ⊇ T_{a2} ⊇ … ⊇ T_{am}, (9)

where T′ ⊆ T means that T′ is a subtree of T. These are called nested optimal subtrees.

3. Decision Tree for Censored Survival Data

Survival analysis is the phrase used to describe the analysis of data corresponding to the time from a well-defined origin until the occurrence of some particular event or end-point. It is important to state what the event is and when the period of observation starts and finishes. In medical research, the time origin will often correspond to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse event. Survival data are rarely normally distributed; they are skewed, typically comprising many early events and relatively few late ones. It is these features of the data that necessitate the special methods of survival analysis.

The specific difficulties relating to survival analysis arise largely from the fact that only some individuals have experienced the event, so that survival times will be unknown for a subset of the study group. This phenomenon is called censoring, and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times may vary from individual to individual. A censored survival time underestimates the true (but unknown) time to event. Visualising the survival process of an individual as a time-line, the event (assuming it is to occur) lies beyond the end of the follow-up period. This situation is often called right censoring.
Most survival data include right-censored observations. In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. The Cox proportional hazards model (Cox, 1972) has been established as the major framework for the analysis of such survival data over the past three decades. Often in practice, however, one primary goal of survival analysis is to extract meaningful subgroups of patients determined by prognostic factors, such as patient characteristics that are related to the level of disease. Although the proportional hazards model and its extensions are powerful for studying the association between covariates and survival times, they are usually problematic for prognostic classification. One approach to classification is to compute a risk score based on the estimated coefficients from a regression method (Machin et al., 2006). This approach, however, may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on the correct specification of the model, and it is difficult to check whether the model is correct when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the results become difficult to interpret for the purpose of prognostic classification. Finally, and more seriously, an invalid prognostic group may be produced if no patient falls within a given covariate profile. In contrast, DT methods do not suffer from these problems. Owing to the development of fast computers, computer-intensive methods such as DT have become popular. Since these methods investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts.
Recently, a large number of DT methods have been developed for the analysis of survival data, in which the basic concepts for growing and pruning trees remain unchanged but the splitting criterion has been modified to accommodate censored survival data. Applications of DT methods to survival data are described by a number of authors (Gordon & Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis & Anderson, 1989; Therneau et al., 1990; LeBlanc & Crowley, 1992; LeBlanc & Crowley, 1993; Ahn & Loh, 1994; Bacchetti & Segal, 1995; Huang et al., 1998; Keleş & Segal, 2002; Jin et al., 2004; Cappelli & Zhang, 2007; Cho & Hong, 2008), including the text by Zhang & Singer (1999).

4. Decision Tree for Multivariate Censored Survival Data

Multivariate survival data frequently arise when we face the complexity of studies involving multiple treatment centres, family members, or measurements made repeatedly on the same individual. For example, in multi-centre clinical trials, the outcomes for groups of patients at several centres are examined. In some instances, patients in a centre might exhibit similar responses due to the uniformity of surroundings and procedures within a centre; this results in outcomes correlated at the level of the treatment centre. In studies of family members or litters, correlation in outcome is likely for genetic reasons; here, the outcomes are correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will most definitely exist in those responses. Within the context of correlated data, the observations which are correlated for a group of individuals (within a treatment centre or a family) or for one individual (because of repeated sampling) are referred to as a cluster, so that from this point on, the responses within a cluster will be assumed to be correlated.
Analysis of multivariate survival data is complex due to the presence of dependence among survival times and unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events. A successful treatment of correlated failure times was made by Clayton & Cuzick (1985), who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazards formulation of the marginal hazard function, which has been studied by Wei et al. (1989) and Liang et al. (1993). Notably, Prentice et al. (1981) and Andersen & Gill (1982) also suggested two alternative approaches to analyzing multiple event times.

Extension of tree techniques to multivariate censored data is motivated by the classification issues associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules. Credit risk analysts collect account information to build up credit scoring criteria. Frequently, in such studies the outcomes of ultimate interest are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools. This section is concerned with the generalization of DT models to multivariate survival data; to extend DT methods to such data, additional difficulties need to be circumvented.

4.1 Decision tree for multivariate survival data based on marginal models

DT methods for multivariate survival data are not numerous. Almost all of them have been based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al. (2004, 2006) concentrated on between-node heterogeneity and used the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity; Su & Fan (2001) and Fan et al. (2006) used a robust log-rank statistic, while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989).

The generalization of DT to multivariate survival data developed here uses the goodness-of-split approach. A DT grown by goodness of split maximizes a measure of between-node difference, so only internal nodes have associated two-sample statistics. The tree structure differs from CART because, for trees grown by minimizing within-node error, every node, whether terminal or internal, has an associated impurity measure. This is why the CART pruning procedure is not directly applicable to such trees. However, with the split-complexity pruning algorithm of LeBlanc & Crowley (1993), trees grown by goodness of split have become well-developed tools. This modified tree technique not only provides a convenient way of handling survival data, but also enlarges the applied scope of DT methods in a more general sense. Especially in situations where defining prediction error terms is difficult, growing trees by a two-sample statistic, together with split-complexity pruning, offers a feasible way of performing tree analysis.

The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees, the between-node difference is measured by a robust Wald statistic, which is derived from a marginal approach to multivariate survival data developed by Wei et al.
(1989). We use split-complexity pruning borrowed from LeBlanc & Crowley (1993) and use a test sample to determine the right tree size.

4.1.1 The splitting statistic

We consider n independent subjects, each of whom may experience up to K types of failure; if the numbers of failures differ across subjects, K is the maximum. We let Tik = min(Yik, Cik), where Yik is the failure time of the ith subject for the kth type of failure and Cik is the potential censoring time of the ith subject for the kth type of failure, with i = 1, …, n and k = 1, …, K. Then dik = I(Yik ≤ Cik) is the failure indicator, and the vector of covariates is denoted Zik = (Z1ik, …, Zpik)^T.

To partition the data, we consider the hazard model for the ith unit and the kth type of failure, using the distinguishable baseline hazards described by Wei et al. (1989), namely

λik(t) = λ0k(t) exp{b I(Zik ≤ c)}, (10)

where the indicator function I(Zik ≤ c) flags the candidate split at cutpoint c. The parameter b is estimated by maximizing the partial likelihood. If the observations within the same unit were independent, the partial likelihood function for b under the distinguishable baseline model (10) would be

Lk(b) = Π_{i=1}^{n} [exp{b I(Zik ≤ c)} / Σ_{j ∈ Rk(Tik)} exp{b I(Zjk ≤ c)}]^{dik}, (11)

where Rk(t) is the set of subjects still at risk at time t for the kth failure type. Since observations within the same unit are not independent for multivariate failure times, we refer to the above function as a pseudo-partial likelihood. The estimator b̂ is obtained by maximizing this likelihood, i.e. by solving ∂ log L(b)/∂b = 0. Wei et al. (1989) showed that b̂ − b is asymptotically normally distributed with mean 0. However, the usual estimate a^{-1}(b) of the variance of b̂, where

a(b) = −∂² log L(b)/∂b², (12)

is not valid; we refer to a^{-1}(b) as the naïve estimator. Wei et al. (1989) showed that a correct (robust) estimator of the variance of b̂ is the sandwich form

d(b) = a^{-1}(b) b(b) a^{-1}(b), (13)

where b(b) is a weight term; d(b) is often referred to as the robust or sandwich variance estimator. Hence, the robust Wald statistic corresponding to the null hypothesis H0: b = 0 is

W = b̂² / d(b̂). (14)

4.1.2 Tree growing

To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z.
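The Wald-based splitting statistic of Section 4.1.1 can be sketched for the special case of a single failure type (K = 1), where the pseudo-partial likelihood reduces to the ordinary Cox partial likelihood. Note that this sketch uses the naive variance a(b)^{-1}; the robust statistic (14) would substitute the sandwich estimator d(b), which is omitted here for brevity. The data are invented, and ties in failure times are not handled:

```python
# Naive Wald statistic for a candidate split: fit the one-parameter Cox
# model lambda(t) = lambda0(t) * exp(b * z) for a binary split indicator z
# by Newton's method on the partial likelihood, then form b^2 / var(b).
import numpy as np

def cox_wald(time, event, z, iters=25):
    order = np.argsort(time)                 # process failures in time order
    time, event, z = time[order], event[order], z[order]
    b = 0.0
    for _ in range(iters):
        U, info = 0.0, 0.0                   # score and observed information
        for i in range(len(time)):
            if not event[i]:
                continue                     # censored observations contribute via risk sets only
            risk = slice(i, len(time))       # subjects still at risk at time[i]
            w = np.exp(b * z[risk])
            zbar = np.sum(w * z[risk]) / np.sum(w)
            U += z[i] - zbar
            info += np.sum(w * z[risk] ** 2) / np.sum(w) - zbar ** 2
        b += U / info                        # Newton step
    return b, b * b * info                   # (b_hat, naive Wald statistic)

time = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
event = np.array([1, 1, 1, 1, 1, 1])
z = np.array([1, 0, 1, 0, 1, 0])             # candidate split indicator I(Z <= c)
b_hat, wald = cox_wald(time, event, z)
print(b_hat > 0, wald > 0)                   # z = 1 subjects fail slightly earlier
```

In the tree-growing step, a statistic of this kind would be computed for every candidate cutpoint c and the split with the largest value retained.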
The split s could take several forms: splits on a single covariate, splits on linear combinations of predictors, and Boolean combinations of splits. The simplest form of split involves only one covariate, and its form depends on whether the covariate is ordered or nominal. The "best split" is defined as the one corresponding to the maximum robust Wald statistic. The data are then divided into two groups according to the best split, and this splitting scheme is applied recursively to the learning sample until the predictor space is partitioned into many regions. A node is not partitioned further when any of the following occurs:

- The node contains fewer than, say, 10 or 20 subjects, if the overall sample size is large enough to permit this; we suggest using a larger minimum node size than in CART, where the default value is 5;
- All the observed times in the subset are censored, which makes the robust Wald statistic unavailable for any split;
- All the subjects have identical covariate vectors, or the node contains only complete observations with identical survival times; in these situations, the node is considered pure.

The whole procedure results in a large tree, which can be used for the purpose of exploring the data structure.

4.1.3 Tree pruning

Let T denote either a particular tree or the set of all its nodes, and let S and T̃ denote the sets of internal and terminal nodes of T, respectively; therefore T = S ∪ T̃. Also let |·| denote the number of nodes in a set. Let G(h) represent the maximum robust Wald statistic of a particular (internal) node h. To measure the performance of a tree, a split-complexity measure Ga(T) is introduced, as in LeBlanc & Crowley (1993):

Ga(T) = G(T) − a|S|, with G(T) = Σ_{h ∈ S} G(h), (15)

where the number of internal nodes |S| measures complexity, G(T) measures the goodness of split in T, and the complexity parameter a acts as a penalty for each additional split.
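The split-complexity measure (15), and the branch-averaged statistic used below to locate the weakest link, can be illustrated on a small hand-coded tree (the G(h) values are invented):

```python
# Split-complexity G_a(T) = sum of G(h) over internal nodes minus a * |S|,
# and the branch ratio g(h) = G(T_h) / |S_h| used in split-complexity pruning.
# Encoding: {internal node: (G(h), left_child, right_child)}; leaves are absent.
tree = {
    "root": (9.0, "A", "leaf1"),
    "A":    (2.0, "leaf2", "leaf3"),
}

def branch_internal(h):
    """All internal nodes of the branch T_h rooted at h."""
    if h not in tree:
        return []                      # h is a terminal node
    _, l, r = tree[h]
    return [h] + branch_internal(l) + branch_internal(r)

def G_alpha(a):
    S = branch_internal("root")
    return sum(tree[h][0] for h in S) - a * len(S)

def g(h):
    S_h = branch_internal(h)
    return sum(tree[x][0] for x in S_h) / len(S_h)

print(G_alpha(3.0))        # (9 + 2) - 3 * 2 = 5.0
print(g("root"), g("A"))   # 5.5 and 2.0: "A" has the smaller ratio
```

Since g("A") < g("root"), the branch rooted at "A" would be the first candidate for removal as the penalty a grows.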
Start with the large tree T0 obtained from the splitting procedure. For any internal node h of T0, i.e. h ∈ S0, a function g(h) is defined as

g(h) = G(Th) / |Sh|, (16)

where Th denotes the branch with h as its root and Sh is the set of all internal nodes of Th. Then the weakest link in T0 is the node whose value of g(·) is the smallest.