Thursday, October 31, 2019

Employee Relations Essay Example | Topics and Well Written Essays - 3000 words - 5

Employee Relations - Essay Example The way in which industrial relations take place within a particular organization is determined by the frame of reference through which its top managers recognize the formal relationship with individuals and/or their representatives. The unitary frame of reference is widespread among managers. According to this unitary perspective, all individuals in the organization are working towards one goal, where there is one sense of power and where conflict is abhorred. Managers frequently view themselves and other managers in the organization as part of a ‘managerial team’, and they expect their employees to pledge to the same point of view. Managers who take this approach consider themselves the only rightful source of power and authority, which they value and protect. They view their role as one of controlling the activities of the workforce and presume that all employees share the common goals of the organization, are faithful to the ‘management team’ and are entirely dedicated to the objectives of the organization.

On the other hand, a different way of looking at organizations is to view them as pluralities of interest groups, each with differing and occasionally opposing interests, which might come together in alliances, although these alliances shift and modify according to the situation. Whatever the long-term interrelatedness of interest groups, in their daily struggle for resources and in their work-related activities, they emphasize sectional interests. Here, managers might tolerate and keenly promote freedom of expression and the development of groups, which create their own norms and choose their own informal leaders. In this way, authority and control take place in a number of areas of the organization, and loyalty is commanded by the leaders of the groups, which are frequently in competition with each other for resources. The managers accomplish results by combining the groups, promoting participation, motivating employees and managing their work efforts (Gennard &

Monday, October 28, 2019

Quality Improvement Essay Example for Free

Quality Improvement Essay America had long lost to Japan its dominance in the world marketplace, even in America itself, because of the Japanese quality management philosophy. To illustrate that worker involvement in quality improvement is the core of this management philosophy, the article compared three kinds of firms operating in the U.S.: A companies, or strictly American firms; AJ companies, or American firms employing Japanese quality control methodologies; and J companies, or Japanese firms operating in the U.S. It focused on two areas: (1) production workers' knowledge and use of Statistical Quality Control tools (SQC tools), and (2) production workers' quality responsibilities. Of the 17 recognized SQC tools, 7 were noted to be highly employed in the U.S. The findings indicate that (1) SQC tool utilization of U.S. firms classified as AJ and J is higher than that of U.S. firms classified as A, and (2) delegation of responsibility for quality to production workers by U.S. firms classified as AJ and J is higher than that found in U.S. firms classified as A.

Assessment of the Article: From the selection of the sample group to the development of the questionnaire and all the way to the analysis of the data collected, the research methodology cannot be faulted. I am especially impressed with how exacting the authors were with their definitions. They showed great care in determining exactly what is meant by organizational size as it relates to other organizational functioning; likewise, they saw fit to define exactly what is meant by a production worker as opposed to pure quality control personnel classified as production workers. However, upon perusal of their two findings, one is led to ask whether or not the first area was sufficiently addressed. The first area was actually composed of two parts: knowledge and utilization of SQC tools. While their data validly showed that utilization was either high or low, it says nothing of knowledge; specifically, how knowledgeable these production workers were with the SQC tools, or are we to assume that usage is the same as knowledge?

Conclusions: With the exception of that little confusion as to the distinction between knowledge and usage, the article has shown that the Japanese quality management philosophy entails worker involvement in quality improvement to the extent that the worker is supposed to use the statistical quality control tools and be delegated responsibility over the quality of the products themselves.

Saturday, October 26, 2019

Globalization Impacted on Indian Economy

Globalization Impacted on Indian Economy

Introduction: The Indian economy experienced major policy changes in the early 1990s. The new economic reform, popularly known as Liberalization, Privatization and Globalization (the LPG model), aimed at making the Indian economy the fastest-growing economy and globally competitive. The series of reforms undertaken with respect to the industrial sector, trade as well as the financial sector aimed at making the economy more efficient. Globalization has many meanings depending on the context and on the person who is talking about it. Though a precise definition of globalization is still unavailable, a few definitions are worth viewing. Guy Brainbant says that the process of globalization not only includes opening up of world trade, development of advanced means of communication, internationalization of financial markets, growing importance of MNCs, population migrations and, more generally, increased mobility of persons, goods, capital, data and ideas, but also infections, diseases and pollution. The term globalization refers to the integration of the economies of the world through uninhibited trade and financial flows, as also through mutual exchange of technology and knowledge. Ideally, it also contains free inter-country movement of labor. In the context of India, this implies opening up the economy to foreign direct investment by providing facilities to foreign companies to invest in different fields of economic activity in India, removing constraints and obstacles to the entry of MNCs in India, allowing Indian companies to enter into foreign collaborations and also encouraging them to set up joint ventures abroad, and carrying out massive import liberalization programs by switching over from quantitative restrictions to tariffs and import duties. Globalization has therefore been identified with the policy reforms of 1991 in India. (http://www.fibre2fashion.com/industry-article/8/738/impact-of-globalization1.asp)

Aims and Objectives:- The details mentioned below will help to identify the research issue, the reason for the issue, the cause of it being a present issue, and explain how this research can be helpful in the future.

The research issue:- The main issue is the impact of globalization on the Indian economy. Globalization has come to dominate the world since the 19th century. Globalization has many meanings depending on the frame of mind of the person who thinks about it. However, globalization means the integration of economies and societies through the exchange of ideas, technology, services, finance and people. The reasons that led to globalization in India were the significant decline in GDP of some East Asian economies, lack of growth in developing countries and the foreign exchange markets of the developed nations. Due to the above reasons, inflation in India rose sharply during 1998-99, reaching a height of 8.8% in September 1998 and dropping down in January 1999. Manufacturing growth in terms of GDP fell to 7.7% in 1996-97 from the previous year's height of 15%, whereas in 1997-98 it fell to 6.8%. The above-mentioned impacts show the connection of India with the global economy, production decisions and government policies.

Why is it an issue? There is a concern that relates to the loss of autonomy following the economic policies. It is certain that in the progressing world all countries can't implement the same techniques; there are bound to be some differences depending on various circumstances.
Why is it an issue now:- Though foreign investment will help to promote economic development in the short run, there is also a possibility that at times of recession investors may withdraw their funds, causing further problems. Domestic producers are being affected by overseas giants that have competitive advantages over the domestic producers and huge funds to invest. This has started to result in the closure of many domestically owned firms. Globalization has resulted in the outsourcing of jobs to developing countries, resulting in the loss of jobs in developed countries; in the near future there is a chance that multinational corporations, with their immense power, may rule the world.

What could this research shed light on:- The research mainly concentrates on finding the positive and negative impacts of globalization on the Indian economy, talking about reforms, mentioning some important stats and figures, causal analysis of various factors influencing the country, etc.

Background section:- In the early 1990s the Indian economy experienced major policy changes. The economic reform known as Liberalization, Privatization and Globalization (the LPG model) aimed to make the Indian economy the fastest-growing economy and also to make it globally competitive. The series of reforms implemented with respect to the industrial sector, trade as well as the financial sector aimed at making the economy more efficient. July 1991 marked a new start for India. This period of economic change has had a tremendous impact on the overall development of almost all major sectors of the economy. Globalization has changed the mindset of Indian people. It has changed traditional values such as self-reliance and socialistic policies of economic development, which had mainly led to economic backwardness, inefficiency of the economy and also some other problems that had persisted since independence in 1947. Despite such obstacles, India has always had the potential to be on the fast track to prosperity.

Literature Review:- There have been many authors who have commented on the topic of the impact of globalization on the economy. Let us take a close look at the ideas of such authors. In the book by Jeffrey A. Frankel (1998), named The Regionalization of the World Economy, he gave a brief idea about the free trade areas, customs unions and customs blocs that are prevalent in the entire world. He further said that regionalization is the base which makes more economists hopeful about the opportunities that it may create in the near future, whereas it creates fear in the minds of others, making them think about the efforts it may take to encourage global free trade. The book provides answers to questions such as the extent to which regional arrangements have affected the patterns of trade; it examines the safety effects of the arrangements; and it explains the economic effects on patterns of trade, via price differentiations or gravity models. In the book by Robert Boyer and Daniel Drache (1996), named States Against Markets: The Limits of Globalization, they commented that, as countries are making efforts to increase their exports, this has indirectly led to the crossing of national borders and dependence on other countries to satisfy their wants. Some important points discussed in the book are: clarification of whether globalization is a development or not; an assessment of the success of globalization as a medium of convergence and uniformity across nations; and an update on the Hayek vs.
Keynes debate, which also helps to provide the best benefits to the entire world. In the book by K R Gupta (1996), named Liberalisation and Globalisation of Indian Economy (Volume 1), he comments that it has been a long time since the process of liberalisation and globalisation was started in India in 1991. In the book he has examined the achievements and failures of economic reforms throughout this period, and has also made some suggestions to improve them. The book also presents the roles to be played by all states in accelerating the development of the country as a whole. It describes the economic reforms of other countries from which India can learn a lot, it analyses the impact of reforms on the agricultural, cottage and small-scale sectors, and it suggests greater attention towards these sectors. In the book by Satyendra S. Nayak (2009), named Globalization and the Indian Economy: Roadmap to Convertible Rupee, he examines the impact of globalization on the Indian economy in respect of trade, investment and financial aspects; he has also considered the balance of payments and the exchange rate. In the first part of the book he mentions the role played by the US in the globalization process and provides a detailed analysis of the monetary system. In the second part of the book the author explains the Indian economic system and its process of dealing with globalization; he gives a brief idea regarding the economic reforms and the state of liberalisation in India. Finally, the author examines whether the Indian currency, the rupee, can be made fully convertible or not.

Research Questions:- Based on the purpose of this research, the primary question will be: Will Liberalisation, Privatization and Globalization help India to achieve faster growth and progress in the future as well? What impact will the MNCs have on the growth and development of under-developed and developing countries? What were the important reforms undertaken by India in the early nineties as a part of the liberalisation and globalization strategy?

Research Design and Methodology:- Collis and Hussey (2003, p. 113) define a research design as the science of planning procedures for conducting studies to get the most valid findings. A research design is an important step for a research proposal. The research process can have different designs, and different methods can be used depending on the chosen subject that is being analysed. The research process is used to define the research strategy of the study in detail. Figure 1 describes a generic research process onion that supports the researcher in depicting the issues underlying the choice of data collection methods (Saunders et al 2000: 84).

Figure 1. Research process onion. http://www.thesisexpress.com/images/fig3.jpg [Accessed 15th May 2009].

The layers of the research onion represent the following aspects: research philosophy; research approach; research strategy and methodology; time horizons; and data collection methods. The research onion gives an overview of how one can achieve one's objectives by using the techniques in each layer of the onion. This research proposal aims to take a closer look at market segmentation, package design, brand development and assessment, and understanding various processes, including consumers' decision-making processes. The research design and philosophy of this proposal will be framed more within the qualitative (phenomenological paradigm) methodology.
But in order to better understand the study respondents, to optimize the data collection process, and to increase both the breadth and depth of data collection, the use of mixed methods is required. The main differences between them and what they are focussed on can be seen in Table 1.

Table 1. Quantitative/Positivist paradigm versus Qualitative/Phenomenological paradigm:
- Older tradition derived from scientific enquiry | Developed from research into human experience
- Data take the form of numbers | Data take the form of non-numbers
- Reality is assumed to be a fixed concept | Reality is assumed to alter according to perspective
- Researcher maintains objectivity, remains aloof and distant from the researched | There is interaction between researcher and researched, possibly to the extent of inter-subjectivity where both collaborate on the work as a whole
- Ensuring reliability means that the work may be repeated with the same findings | Reliability may not be possible with human experiences; it is less important
- Large representative samples | Small samples, not necessarily representative
- Validity may be low | Great importance placed on validity, the truth or trustworthiness of the research
- Findings to be generalised to the whole population studied | Findings not generalisable; may be transferable in certain circumstances
- Deductive or hypothetico-deductive stance: tests pre-set theories and hypotheses | Inductive stance: develops theory from observation
- Artificial research setting, controlled by the researcher | Natural setting for the researched
Source: Lecture notes by Jonathan Knowles

There are two main research approaches: deduction and induction. With deduction, a theory and hypothesis (or hypotheses) are developed and a research strategy is designed to test the hypothesis. With induction, theory would follow data, rather than vice versa as with deduction. The major differences between deductive and inductive approaches to research are shown in Table 2.

Table 2. Deduction versus Induction:
Deduction emphasises: scientific principles; moving from theory to data; the need to explain causal relationships between variables; the collection of quantitative data; the application of controls to ensure validity of data; the operationalisation of concepts to ensure clarity of definition; a highly structured approach; researcher independence of what is being researched; and the necessity to select samples of sufficient size in order to generalize conclusions.
Induction emphasises: gaining an understanding of the meanings humans attach to events; a close understanding of the research context; the collection of qualitative data; a realization that the researcher is part of the research process; a more flexible structure to permit changes of research emphasis as the research progresses; and less concern with the need to generalize.
Source: Saunders et al., 2007, p. 120

This proposal follows the inductive approach, where data is collected and the theory is developed as a result of the data analysis. Through the interviews, access will be gained to the understanding of the meaning that humans attach to events. The objective of using the inductive approach is to ensure that all angles are covered in terms of understanding the deeper structure of the research problem. The next step is to choose the strategy and methodology which is going to be used. According to Saunders et al. (2007, p. 135), any of these strategies can be used: experiment; survey; case study; action research; grounded theory; ethnography; archival research. For the purpose of this research proposal the grounded theory methodology will be used.
Grounded theory (Glaser and Strauss, 1967) is often thought of as the best example of the inductive approach. It helps in theory building through a combination of induction and deduction. A grounded theory strategy, according to Goulding (2002), is helpful for research that seeks to predict and explain behaviour, the emphasis being upon developing and building theory. Constant reference to the data to develop and test theory leads Collis and Hussey (2003) to call grounded theory an inductive/deductive approach, theory being grounded in such continual references to the data.

Data collection methods are an integral part of research design. There are several data collection methods, each with its own advantages and disadvantages. Problems researched with the use of appropriate methods greatly enhance the value of the research. Data can be collected in a variety of ways and from different sources. Data collection methods include interviews (face-to-face interviews, telephone interviews, computer-assisted interviews, and interviews through the electronic media), surveys, questionnaires that are either personally administered, sent through the mail, or electronically administered, observation of individuals and events with or without videotaping or audio recording, and a variety of other motivational techniques such as projective tests. Interviewing, administering questionnaires, and surveys are the three main data collection methods followed in this research.

Timescale:- It is important to develop a time plan for the research to lead to a successful dissertation. For this reason the Gantt chart (developed by Henry Gantt, 1917) can be used. A Gantt chart is a graphical representation of the duration of tasks against the progression of time. It is a useful tool for planning and scheduling projects as well as monitoring a project's progress. A Gantt chart lets us see how remedial action may bring the project back on course.

Table 3. Target dates:
- January-February: Start thinking about research topic
- End February: Identify research problem, finalize objectives
- March: Devise research approach
- March-end July: Collect data, read literature
- June-September: Analyse and interpret data
- By mid-September: Draft findings chapters
- 13th August-5th November: Appointments with supervisor
- By 12th November: Revise draft, write up in the format for submission
- By 16th November: Print, bind
- Before 23rd November: Submit
Adapted from Saunders et al., 2007, p. 41

Resources:- The resources required for this research may be categorized as finance, data access and equipment. The financial expenses for this research will not be too high. However, because the research is mainly focused on India, it will be necessary to cover travel expenses which may occur in the case of personal interviews, but thanks to low-cost airlines it would still be affordable. The Internet has provided most of the information about this subject. Internet access is available at the university campus. Other minor expenses are expected for photocopying or printing and posting questionnaires. The main equipment used will be a PC, a printer and a recorder.

Access to study population:- In this research, data will be collected from primary sources due to their validity, as well as secondary data to supplement the primary data. The primary data will be collected by conducting a survey using the questionnaire technique among various income and age groups. The questionnaire will be checked for completion and interviewing quality.
Editing is the review of the questionnaire with the objective of increasing accuracy and precision. There are several sources of secondary data, including books and periodicals, government publications of economic indicators, census data, and statistical abstracts.

Ethical Issues:- When doing research it is always important that all parties in the research should exhibit ethical behaviour. Ethics are norms or standards of behaviour that guide moral choices about our behaviour and our relationships with others. The goal of ethics in research is to ensure that no one is harmed or suffers adverse consequences from research activities. There are six key principles of ethical research that will be addressed, whenever applicable:
- Research should be designed, reviewed and undertaken to ensure integrity and quality
- Research staff and subjects must be informed fully about the purpose, methods and intended possible uses of the research, what their participation in the research entails and what risks, if any, are involved
- The confidentiality of information supplied by research subjects and the anonymity of respondents must be respected
- Research participants must participate in a voluntary way, free from any coercion
- Harm to research participants must be avoided
- The independence of research must be clear, and any conflicts of interest or partiality must be explicit
http://www.esrc.ac.uk/ESRCInfoCentre/Images/ESRC_Re_Ethics_Frame_tcm6-11291.pdf [Accessed 16th May 2009].

Analysis/Interpretation of the Data:- For this research, a probability sampling technique will be used to answer the research questions and achieve the objectives. The possible sampling techniques used will be stratified random and cluster sampling. According to Saunders et al (2007, p. 221), stratified random sampling involves division of the population into two or more relevant and significant strata based on one or more attributes. Further division of the population into a series of relevant strata will ensure that the samples are more likely to be representative of the different customers in India. The data collected from the questionnaire will be neatly presented, analysed, and interpreted using pie charts and bar graphs in the most efficient way to gain a better understanding of the results. (A brief illustrative sketch of how such a stratified sample might be drawn appears after the reference list below.)

Conclusion: The main purpose of this research proposal was to identify and analyze the impact of globalization on the Indian economy. It also helped to determine the positive and negative impacts of globalization. This proposal helped me to identify the main objectives, questions and problems which this research may concentrate on; the literature review gave me an idea regarding the literature sources available, which will be enlarged by further research for the dissertation. The design and methodology stage helped to create the framework of possibilities and methods useful to achieve the specified objectives. It helped me to make a proper plan to undertake the research within the time available and to make sure that the results are in relation to knowledge and understanding.

References:
Malik T, 2004. Impact of globalization on Indian economy; accessed on April 25, 2010 (Source: http://www.fibre2fashion.com/industry-article/8/738/impact-of-globalization1.asp)
Irving Fisher Group, 2003. Indian economy and globalization; accessed on April 25, 2010 (Source: http://www.slideshare.net/fathima_sy/globalization-and-indian-economy-1095107)
Balakrishnan C, 2004.
Impact of globalization on developing countries and India; accessed on April 29, 2010 (Source: http://economics.about.com/od/globalizationtrade/l/aaglobalization.htm)
Trade Chakra; accessed on April 29, 2010 (Source: http://www.tradechakra.com/indian-economy/globalization.html)
Goyal K, 2003. Impact of globalization on developing countries (with special reference to India); accessed on April 29, 2010 (Source: http://www.eurojournals.com/IRJFE%206%20goyal.pdf)
Pavcnik N, October 26, 2006. Distributional effects of globalization in developing countries; accessed on May 3, 2010 (Source: http://www.princeton.edu/~pennykg/JEL_Globalization.pdf)
Kaitila V. Economic globalization in developing countries; accessed on May 5, 2010 (Source: http://www.etla.fi/files/918_FES_02_3_developing_countries.pdf0)
Research papers. Globalization can have a negative impact on developing economy (Source: http://www.oppapers.com/essays/Globalization-Can-Have-Negative-Impact-Developing/145452)
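To make the stratified random sampling step described in the Analysis/Interpretation section above concrete, the short sketch below shows one way such a sample might be drawn in Python with pandas. The strata (income group crossed with age band), the population size and the 10% sampling fraction are illustrative assumptions only; the proposal does not specify them.

```python
# Illustrative sketch of the stratified random sampling described in the
# Analysis/Interpretation section. The strata, population size and sampling
# fraction are hypothetical assumptions, not values taken from the proposal.
import pandas as pd

# Hypothetical sampling frame of 1,000 respondents with the two stratification attributes
population = pd.DataFrame({
    "respondent_id": range(1, 1001),
    "income_group": (["low", "middle", "high"] * 334)[:1000],
    "age_band": ["18-30", "31-45", "46-60", "60+"] * 250,
})

SAMPLE_FRACTION = 0.10  # assumed sampling fraction

# Divide the population into strata and draw a proportional random sample from
# each, so every income/age combination appears in roughly its population share.
sample = population.groupby(["income_group", "age_band"]).sample(
    frac=SAMPLE_FRACTION, random_state=42
)

# Show how many respondents were drawn from each stratum
print(sample.groupby(["income_group", "age_band"]).size())
```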

Thursday, October 24, 2019

History of Home Schooling Essay -- essays papers

History of Home Schooling Before public schools emerged, children were educated in the home by their parents. They were taught arithmetic, practical skills, and to read and write. Some wealthy families preferred hiring a tutor for their children (Koetzsch, 1997). In the 1840s, prominent leaders such as Horace Mann led a movement to institute public schools in the United States (Thattai, 2001). These reformers argued that public schools would create good citizens, unite society, and prevent crime and poverty. As a result of their efforts, public elementary schools evolved in American society by the end of the 19th century. Massachusetts was the first state to pass a compulsory attendance law, ruling that all children had to attend at least elementary school (Thattai, 2001). By 1918, all states had compulsory attendance laws. Catholics weren’t happy with the public education system, so they instituted private schools (Thattai, 2001). When public institutions emerged, home schooling nearly vanished in the United States (Koetzsch, 1997). Education critics began voicing concerns about public schools in the 1960’s (Hess, 2002). Some of the complaints against public schools included preaching alien values, failing to adequately educate, and adopting unhealthy approaches to child development (Hess, 2002). Many parents joined the de-schooling movement based on a popular book by John Holt called How Children Learn. John Holt was a professor in Boston who believed that children are born with the desire to learn and educate themselves. The book states that all children need the following for a successful education: materials, access to the “real world”, emotional support, freedom to explore, and time to assess idea... ...ws of home schooling in the state of Maryland. The article included a summary of the Maryland education code.

Home school facts. (2005). Home Education Magazine, 25. Home Education Magazine is a popular magazine among home school parents and children. It explores all aspects of home schooling. The article that I used for my research paper did not specify an author.

Number of home schooled students rises. (2004, August 4). The New York Times, A14. This newspaper article describes the increase of home schooling in the United States through a recent survey. The article does not specify an author.

Interview with Brigitte Mankiewicz, home school parent. March 21, 2005. I interviewed Brigitte Mankiewicz about what it is like to be a home school parent. She answered questions regarding the positive and negative aspects of home schooling.

Wednesday, October 23, 2019

Academic Skills Plus Essay

Atwood writes: “What I mean by ‘science fiction’ is those books that descend from H. G. Wells’s The War of the Worlds, which treats of an invasion by tentacled, blood-sucking Martians shot to Earth in metal canisters – things that could not possibly happen – whereas, for me, ‘speculative fiction’ means plots that descend from Jules Verne’s books about submarines and balloon travel and such – things that really could happen but just hadn’t completely happened when the authors wrote the books. I would place my own books in this second category: no Martians.” (From In Other Worlds, p. 6)

With these remarks in mind, is it useful to distinguish between science fiction and speculative fiction? In answering this question you might consider Le Guin’s suggestion that people who refer to their works as ‘speculative fiction’ rather than ‘science fiction’ are simply trying to protect themselves from some of the negative connotations associated with science fiction (see In Other Worlds). Discuss in relation to at least two works.

‘Science fiction’ is often defined as a wide literary genre related to fictional stories. It contains many subgenres, such as space opera, cyberpunk, utopia, dystopia, alternative histories and speculative fiction. Although there is an extensive number of subgenres, some writers, such as Margaret Atwood, have been trying to differentiate ‘speculative fiction’ from ‘science fiction’. Maybe this wideness of subgenres existing under the genre ‘science fiction’ is exactly the reason why Atwood found it interesting to present this differentiation. When we consider science fiction stories, many different things can come to mind, such as aliens, intergalactic travel, artificial intelligence and utopian (or dystopian) societies. Considering that, as we can notice in these examples, these topics can differ a lot from each other, it might be understandable that Atwood wanted to differentiate (more than just defining different subgenres) the kind of fiction related to more ‘plausible’ things (things that could really happen, as she says). Definitely, ‘speculative fiction’ books have a completely different scenario from cyberpunk, alien or space opera works, and this could awaken a desire to disconnect them in a more significant manner.

However, it is possible to affirm that this distinction between ‘science’ and ‘speculative’ fiction is not useful and that there is no reason for making it, especially considering that speculative fiction is just one more subgenre of science fiction. This thesis will be supported by a number of points presented throughout this essay. Firstly, it will be argued that the subgenre ‘speculative fiction’ fits perfectly into the definitions and requisites related to ‘science fiction’. Secondly, it will be discussed that Atwood’s definition of ‘speculative fiction’ is vague and can change according to interpretation, and also that it can be used to classify as speculative fiction other books that she has clearly classified as belonging to ‘science fiction’. Thereby, her definition can be seen as unclear, which makes it not useful at all. Finally, it will be presented that Atwood seems to reinforce this division especially because distinguishing ‘speculative fiction’ from ‘science fiction’ is convenient for her. There is some evidence for that; for example, Le Guin once said Atwood was trying to protect herself from negative connotations associated with ‘science fiction’.
This is even noticeable considering that many of her attempts to define the genre contain irony and clichés.

Firstly, it will be discussed that ‘speculative fiction’ fits perfectly into the definitions and requisites related to the ‘science fiction’ subgenres, which makes the distinction between them unnecessary and not useful. It was stated before that ‘science fiction’ has a big number of subgenres and it is clear that they differ considerably from each other. However, despite their singularities, all of them have one kind of cohesive element in common, which brings each subgenre to be defined as part of the genre ‘science fiction’. To define this common element noticed in all the science fiction subgenres, it is useful to consider two of Suvin’s definitions of science fiction: “SF is, then, a literary genre whose necessary and sufficient conditions are the presence and interaction of estrangement and cognition, and whose main formal device is an imaginative framework alternative to the author’s empirical environment” (Suvin 1979, p. 7) and “Science Fiction is distinguished by the narrative dominance or hegemony of a fictional ‘novum’ (novelty, innovation) validated by cognitive logic” (Suvin 1979, p. 63). Considering these two definitions, it is possible to affirm that the necessary and sufficient conditions to identify a science fiction work are the presence of a ‘novum’ and the presence of a ‘cognitive logic’, the logical consistency which makes the ‘novum’ become part of our knowledge about real things. With this in mind, we can analyse the book The Handmaid’s Tale by Atwood. She has clearly classified this book as not being ‘science fiction’; however, it is easy to identify the ‘novum’ and also the ‘cognitive logic’ in her book. The ‘novum’ is represented by the whole system of political organization in the Republic of Gilead described in the book, and the ‘cognitive logic’ is given by some similarities that can be noticed between our society and the society described in the book. In the same way, for the book by H. G. Wells, The War of the Worlds, we can also identify the ‘novum’, which is given by the Martians and their technology, and the ‘cognitive logic’, given by the similarities existing between both societies. Thus, it can be affirmed that both books, The Handmaid’s Tale and The War of the Worlds, belong to the genre ‘science fiction’, contradicting Atwood’s previous proposition. This proves that although Atwood’s book can be classified as ‘speculative fiction’, it truly belongs to ‘science fiction’, leading us to verify again that ‘speculative fiction’ is just one more subgenre of ‘science fiction’. It makes clear then that the division between ‘science’ and ‘speculative’ fiction is not useful and not justifiable.

Secondly, it will be presented that Atwood’s definition of ‘speculative fiction’ is imprecise and can also be used to classify as speculative fiction other books that were categorized as ‘science fiction’ by her. In order to illustrate these points, we will analyse Atwood’s (2011) definition of ‘speculative fiction’ as “things that really could happen but just hadn’t completely happened when the authors wrote the books.” This is a vague and inaccurate idea. It could encompass different definitions because the range of things that could really happen is highly dependent on each person’s beliefs and ideas, which makes this definition extremely subjective.
Also, with just a few exceptions, it is not possible to say for sure what is and what is not going to happen. Besides, Atwood even gives us another definition: “Oryx and Crake is not science fiction. Science fiction is when you have chemicals and rockets.” (Watts 2003, p. 3). Considering both definitions given by her, it could be understood that she considers rockets and chemicals as things that really could not happen, as they belong to science fiction. However, it is known that rockets and chemicals are not things impossible to happen, especially because nowadays we can see examples of them. Both definitions then become contradictory. Considering her first definition, books about these themes would be classified as speculative fiction; however, she decided to use these two themes to exemplify ‘science fiction’. Atwood’s definitions of ‘speculative fiction’ are imprecise; therefore, what is the purpose of using an imprecise and cloudy definition? It is simply not useful, then, to distinguish ‘science’ from ‘speculative fiction’.

Thirdly, it will be presented that Atwood seems to reinforce this division especially because distinguishing ‘speculative fiction’ from ‘science fiction’ is convenient for her. Le Guin (2009) states that Atwood was trying to protect herself from negative connotations associated with science fiction and also “from being relegated to a genre still shunned by hidebound readers, reviewers and prize-awarders”. Considering Le Guin’s remarks, it is possible to observe that ‘science fiction’ was not a literary genre with considerable prestige among the intellectual audience. This could reduce her reputation in high literary society. One possible reason for ‘science fiction’ being underestimated is that science fiction could be related to some works produced for a mass audience, like Star Trek and Dr Who, and intellectuals would associate her books with these works. It would then be interesting for her to dissociate the connection between her books and the genre ‘science fiction’, since it was not so appreciated by the intellectual audience. This is also noticeable considering that some of her remarks about ‘science fiction’ contain irony, as she frequently uses clichés to refer to it, such as ‘rockets’, ‘chemicals’, ‘blood-sucking Martians’, ‘talking squids in outer space’, and ‘skin-tight clothing’. Thus, it is possible to see why Atwood reinforces the division between ‘speculative’ and ‘science’ fiction. And considering her reasons, we can see that they are not justifiable or strong enough to make the distinction between ‘speculative’ and ‘science fiction’ useful.

Finally, this essay discussed a number of points in order to support the thesis that the distinction between ‘speculative’ and ‘science’ fiction is not useful. Firstly, it was stated that although it may be hard to define some literary genres, it is noticeable that ‘speculative fiction’ fits perfectly into most definitions of science fiction, making it a subgenre only. Secondly, it was presented that Atwood’s definition of ‘speculative fiction’ is vague and could classify as ‘speculative fiction’ some books that she clearly classified as ‘science fiction’. Thirdly, it was discussed that it is convenient for her to separate ‘speculative fiction’ from ‘science fiction’, since the genre of ‘science fiction’ was not so appreciated by reviewers and prize awarders and was associated with some mass-audience works.
She does not want to be linked to this image, so she tries to put her works under a different literary classification. This point shows us clearly that there is no consistent and general reason for her to make the distinction. In conclusion, this essay illustrated that it is not useful to distinguish between ‘science fiction’ and ‘speculative fiction’, and the reason for this was explained by all of the arguments stated previously.

References
Atwood, M 1985, The Handmaid’s Tale, Anchor Books, New York.
Atwood, M 2011, In Other Worlds: SF and the Human Imagination, Doubleday.
Le Guin, U 2009, ‘The Year of the Flood by Margaret Atwood’, The Guardian, 29 August. Available at http://www.theguardian.com/books/2009/aug/29/margaret-atwood-year-of-flood
Suvin, D 1979, Metamorphoses of Science Fiction, Yale University Press, New Haven.
Watts, P 2003, ‘Margaret Atwood and the Hierarchy of Contempt’, On Spec, vol. 15, no. 2, summer, pp. 3-5.
Wells, H 1898, The War of the Worlds, New York Review Books, New York.

Tuesday, October 22, 2019

BTEC L2 IT Assignment 1 Essay

BTEC L2 IT Assignment 1 Essay

Assignment front sheet
Learner name: Uzair Majid
Assessor name: Umar Faruk
Date issued: W/C 22.09.2014
Completion date: 30.10.2014
Submitted on:
Qualification: BTEC Level 2 Diploma in IT
Unit number and title: Unit 1 – Communicating in the IT Industry
Assignment title: Assignment 1.1 – Communicating with different audiences

In this assessment you will have opportunities to provide evidence against the following criteria. Indicate the page numbers where the evidence can be found.

Criteria reference | To achieve the criteria the evidence must show that the student is able to: | Task no. | Evidence
P2 | Communicate IT-related information to a technical audience | 1 | Page
P3 | Communicate IT-related information to a non-technical audience | 2 | Page
P4 | Use IT tools safely to effectively communicate and exchange information | 3 | Page

Learner declaration: I certify that the work submitted for this assignment is my own and research sources are fully acknowledged.
Learner signature: Uzair Majid   Date: 30.10.2014

Assignment brief
Qualification: BTEC Level 2 Diploma in IT
Unit number and title: Unit 1 – Communicating in the IT Industry
Start date: W/C 22.09.14
Deadline: Last lesson W/C 29.09.14
Assessor name: Umar Faruk
Assignment title: Assignment 1.1 - Communicating with different audiences

The purpose of this assignment is to: When working in the IT industry it is important to be able to communicate IT-related information to a technical audience (like fellow IT developers) and to a non-technical audience (for example low-level IT users). In addition, any exchange of information must always be done safely, which is another important aspect of working with IT.

Scenario: You have recently gained employment as a junior IT technician at a local graphic design company called iGraphix, who specialise in designing programmes and flyers for sporting events. The company takes on design work for all sorts of clients for a range of different sports, including football, hockey, netball, rugby and cricket. The company has a number of powerful personal computers to do the design work, which also have Internet access to allow clients to email details about the events to the iGraphix design team. Business has been going very well for the company, but there are two problems emerging with the use of email:
- Due to recent high-profile media stories, the company is concerned about the threat of email-borne viruses
- Some clients have limited experience of using IT and email and are put off by having to email details about their design work requirements to the company

Task 1: Create an information sheet to be distributed to the design team, explaining the dangers of email-borne viruses. Ensure that the information sheet is written for a technical audience. [P2]

Task 2: Create a leaflet to send out to new clients, explaining how to use email, in

Monday, October 21, 2019

The Deficit for Those Economics Classes Essays - Fiscal Policy

The Deficit for Those Economics Classes

here's one on the deficit for those economics classes Subject: the deficit good or bad

Deficit Spending: spending financed not by current tax receipts, but by borrowing or drawing upon past tax reserves. Is it a good idea? Why does the U.S. run a deficit? Since 1980 the deficit has grown enormously. Some say it's a bad thing and predict impending doom; others say it is a safe and stable necessity to maintain a healthy economy. When the U.S. government came into existence, and for about 150 years thereafter, the government managed to keep a balanced budget. The only times a budget deficit existed during these first 150 years were in times of war or other catastrophic events. The Government, for instance, generated deficits during the War of 1812, the recession of 1837, the Civil War, the depression of the 1890s, and World War I. However, as soon as the war ended the deficit would be eliminated, and the economy, which was much larger than the accumulated debt, would quickly absorb it. The last time the budget ran a surplus was in 1969 during Nixon's presidency. Budget deficits have grown larger and more frequent in the last half-century. In the 1980s they soared to record levels. The Government cut income tax rates, greatly increased defense spending, and didn't cut domestic spending enough to make up the difference. Also, the deep recession of the early 1980s reduced revenues, raising the deficit and forcing the Government to spend much more on paying interest for the national debt at a time when interest rates were high. As a result, the national debt grew in size after 1980. It grew from $709 billion to $3.6 trillion in 1990, only one decade later.

Increase of National Debt Since 1980
Month        Amount
12/31/1980   $930,210,000,000.00 *
12/31/1981   $1,028,729,000,000.00 *
12/31/1982   $1,197,073,000,000.00 *
12/31/1983   $1,410,702,000,000.00 *
12/31/1984   $1,662,966,000,000.00 *
12/31/1985   $1,945,941,616,459.88
12/31/1986   $2,214,834,532,586.43
12/31/1987   $2,431,715,264,976.86
12/30/1988   $2,684,391,916,571.41
12/29/1989   $2,952,994,244,624.71
12/31/1990   $3,364,820,230,276.86
12/31/1991   $3,801,698,272,862.02
12/31/1992   $4,177,009,244,468.77
12/31/1993   $4,535,687,054,406.14
12/30/1994   $4,800,149,946,143.75
10/31/1995   $4,985,262,110,021.06
11/30/1995   $4,989,329,926,644.31
12/29/1995   $4,988,664,979,014.54
01/31/1996   $4,987,436,358,165.20
02/29/1996   $5,017,040,703,255.02
03/29/1996   $5,117,786,366,014.56
04/30/1996   $5,102,048,827,234.22
05/31/1996   $5,128,508,504,892.80
06/28/1996   $5,161,075,688,140.93
07/31/1996   $5,188,888,625,925.87
08/30/1996   $5,208,303,439,417.93
09/30/1996   $5,224,810,939,135.73
10/01/1996   $5,234,730,786,626.50
10/02/1996   $5,235,509,457,452.56
10/03/1996   $5,222,192,137,251.62
10/04/1996   $5,222,049,625,819.53
* Rounded to Millions

Federal spending has grown over the years, especially starting in the 1930s, in actual dollars and in proportion to the economy (Gross Domestic Product, or GDP). Beginning with the "New Deal" in the 1930s, the Federal Government came to play a much larger role in American life. President Franklin D. Roosevelt sought to use the full powers of his office to end the Great Depression. He and Congress greatly expanded Federal programs. Federal spending, which totaled less than $4 billion in 1931, went up to nearly $7 billion in 1934 and to over $8 billion in 1936. Then, U.S. entry into World War II sent annual Federal spending soaring to over $91 billion by 1944.
Thus began the ever-increasing debt of the United States. What if the debt is not increasing as fast as we think it is? The dollar amount of the debt may increase, but oftentimes so does the amount of money, or GDP, available to pay for the debt. This brings up the idea that the deficit could be run without cost. How could a deficit increase productivity without any cost? The idea of having a balanced budget is challenged by the ideas of Keynesian economics. Keynesian economics is an economic model that predicts that in times of low demand and high unemployment a deficit will not cost anything. Instead, a deficit would allow more people to work, increasing productivity. A deficit does this because it is invested into the economy by government. For example, if the government spends deficit money on new highways, trucking will benefit and more jobs will be produced. When an economic system is in recession, not all of its resources are being used. For example, if the government did not build highways, we could not ship goods and there would be less demand for them. The supply remains low even though we have the ability to produce more, because we cannot ship the goods. This non-productivity comes at a cost to the whole economic system. If deficit spending eliminates non-productivity, then its direct monetary cost will be offset, if not surpassed, by increased productivity. For example, in the 1980s, when the huge deficits were adding up, the actual additions to public capital or increased productivity were often as big as, or bigger than, the deficit.
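As a small worked check on the nominal growth documented in the table above, the sketch below computes the total and compound annual growth of the debt between the 12/31/1980 and 09/30/1996 entries. It deliberately says nothing about GDP, since the essay gives no GDP series, and the Keynesian argument above turns on the size of the debt relative to the economy rather than its dollar amount.

```python
# Worked example using two figures from the "Increase of National Debt Since 1980"
# table above (the 12/31/1980 and 09/30/1996 entries). It computes total and
# compound annual growth of the nominal debt. No GDP figures appear in the essay,
# so this says nothing about the debt relative to the size of the economy.
debt_1980 = 930_210_000_000.00       # 12/31/1980 entry (rounded to millions)
debt_1996 = 5_224_810_939_135.73     # 09/30/1996 entry
years = 1996 - 1980                  # roughly 16 years between the two entries

total_growth = debt_1996 / debt_1980 - 1            # overall nominal growth
cagr = (debt_1996 / debt_1980) ** (1 / years) - 1   # compound annual growth rate

print(f"Total nominal growth 1980-1996: {total_growth:.0%}")  # about 462%
print(f"Compound annual growth rate:    {cagr:.1%}")          # about 11.4%
```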

Sunday, October 20, 2019

Jeffersonian Federalism essays

Jeffersonian Federalism essays In a time when America was but a wee nation, its economy struggling for stability, its people divided by lifestyle and political viewpoint, Thomas Jefferson ascended to the presidency in what was said to be a revolution of politics and democracy. Creator and leader of his own political party, Jefferson sat his Democratic-Republican buttocks upon a Federalist presidential throne. Tom proceeded to convert the tariff-oriented, pro-upper-class government into one more for the common man, yet early into his presidency the revolution seemed to be happening more within his political thinking than his government. The self-proclaimed anti-federalist made a steady descent into the dark side. Jefferson's Jeffersonian priorities seemed to take on a gradual shade of Federalist, tainting his supposed identity as a passionate anti-Federalist. As the new president reshaped the government, Alexander Hamilton's framework was left practically untouched, with the exception of the excise tax. In fact, Jefferson later became a great supporter of the industry-aiding tariffs, defying his Democratic-Republican ideal of a laissez-faire, pro-agricultural economy. He also wound up supporting previously set plans for a central national bank, a completely Federalist idea. The Louisiana Purchase was possibly the greatest real estate deal ever made at 3 cents an acre, but Jefferson succumbed to the deal biting his lip. Attempting strict Jeffersonian frugality and adherence to the constitution, he allowed himself $10 million to spend on this deal. When the price tag showed itself at $15 million and pressure was laid on the man to swiftly finish the deal, Jefferson agreed, quietly condemning its unconstitutionality. Possibly Jefferson's greatest example of Federalist thinking was the Embargo Act of 1807, where his interesting interpretation of the constitution brought him to believe control over commerce allowed the president to stop all foreign trade. Th...

Saturday, October 19, 2019

State Intervention in Private Spheres of Activity for their Intended Essay

State Intervention in Private Spheres of Activity for their Intended Public Interest - Essay Example It is an important feature of contemporary society and one which is set to grow as network technologies, such as the Internet, enable us to communicate almost instantaneously with organizations and individuals regardless of geographical location. For example, it is because of the emergence of a ‘borderless’ society that law enforcement agencies increasingly seek to be exempted from the full rigors of the privacy laws. That this kind of exemption can lead, in turn, to misuse and abuse of these powers is perhaps one of the ‘costs’ we have to bear if law enforcement agencies generally are to be effective in combating crime in the information age. However, before evaluating how ethically right the State’s intervention in the privacy of the members of society for its proposed public interests is, the very terminology of ‘Privacy’ needs to be understood. The extensive material in the literature on the definition of privacy reveals that the term’s meaning differs under the various approaches to privacy offered by different scholars. Privacy’s most widely spread definition has been coined by Warren & Brandeis (1890, p. 205), who define privacy, as an intrinsic value, as the ‘right to be let alone’ (Stahl, 2007). Another approach to defining privacy (Stalder, 2002) is that of informational self-determination, which sees privacy as the right to determine who accesses person-related data. This interpretation is widely spread in continental Europe, whereby privacy may be taken in terms of property, which includes the protection of an individual’s financial records, health records, ex-directory telephone numbers, criminal records, etc. If person-related information can be treated as property, then privacy issues can be reduced to the more established (intellectual) property law, as Spinello (2000) puts it.

Friday, October 18, 2019

European Union and Ukraine. Eastern Partnership Essay

European Union and Ukraine. Eastern Partnership - Essay Example According to the paper, the main factor driving the EU’s interest in Ukraine is energy. The EU Commissioner for external relations and ENP, Benita Ferrero-Waldner, stated that: “Energy and energy security have been at the heart of the European integration, energy is a perfect example of common sense driving ENP”. To sum up this perspective, it is clear that rationalism emphasizes that Ukraine’s democratic transformation is not of primary interest for the EU, while the EU’s economic interest in energy resources is more important. The paper describes why Ukraine needs the EU and why the EU needs Ukraine, and it shows all the pluses and minuses accordingly. Basically, the discourse used in the Eastern Partnership presents the promotion of common values and democratic transformation as a first priority. As a framework, the EaP is based on a reciprocal commitment between the EU and its neighbors that share common values. Under the Eastern dimension initiative, the EaP aims to diminish the barriers between a partner country and the EU by building civil society and closer co-operation based on common values. Ukraine is expected to gain some privileges out of maintaining bilateral relations as defined under the New Enhanced Agreement. However, Ukraine would remain outside of the enlargement policy. The paper describes a very interesting subject for research.

The Effects of Fiscal Policy on Private Business Investment Essay

The Effects of Fiscal Policy on Private Business Investment - Essay Example As a point of departure, private business investments are considered fundamental channels through which fiscal policy influences economic growth. For instance, the endogenous economic growth model explains the dynamism in the capital stock, which is believed to influence the long-term per capita growth rate. This can happen in two ways, namely more investment and more-efficient investment. It follows that fiscal policy can be said to influence investment by varying domestic demand, which influences the Gross Domestic Product (GDP) of a nation, thus influencing the economic growth of a country. Considering a model of tight fiscal policy where expenditure is reduced and taxation increased, as in the case of the US, immense negative expectations are imminent. This reduces the viable incentives for investment. In equal measure, fiscal policy, particularly in the short term, can directly affect investment through the cost of capital attributed to the tax system (Razin, Assaf, and Jacob, 2006). It follows that long-term fiscal policy with a well-designed tax system and liberalized and privatised programmes, such as in the case of the US and UK, helps private sector investment because of reduced direct government involvement. In the case of a government interest rate increase, foreign capital is attracted from foreign investors and this increases the demand for the country’s currency. This implies that the value of the country’s currency is increased. It is imperative to note that the increase in the currency value makes exports from the country in question more expensive. In equal measure, when the government funds a deficit with the issuance of government bonds, interest rates increase across the market due to the government borrowing, which creates a higher demand for credit in the financial markets. It is imperative to note that, theoretically,

TABLE 1 References to Time Management Concept in EBSCO database by Essay

TABLE 1 References to Time Management Concept in EBSCO database by Decade - Essay Example As per the given tabulations, the factored systemic variance within the cycles of the outputs and, alternatively, these classifications are deeply organized into behavioral models which work collectively with the predictable market and search models. In their analysis, Neville et al. (2005) illustrated that IT concepts were technically built from information obtained from business models. The valuable strength management of a system is proportionally matched with the most ideal technological interpretations of the data warehouses and the information portal (Rainer and Thomas, 2004). Increasingly secure data is protectively commissioned through the required security features, but the evaluation of a search provides an important platform. Ideally, the databases which contain these items are repeatedly factored and allocated specially designed keywords called Keys. The technology itself works by classifying inputs, such as cost of the products, cumulative costs for different products, credit worth and the latest changes in prices. According to Neville et al. (2005), the database search aspect also provides the clients with adequate information regarding customer behaviors, responses and the basic build-up models of future customer trends (Neville, 2005). The data mining systems are effective communication strategies which build a bulk emailing component into the system. With the inclusion of discounts and general reductions in prices, these updates are automatically relayed to clients’ inboxes (Carson, 1990). The key beneficial trends of time management are determined by customer participation. These basically involve the relational implementation of the strategic time management phases. The specification of customer-based information satisfaction is gained periodically by engaging the business trends with the people (Boulding et al., 2005). The core objective, as per Carson (1990), is to enhance the views of customers in

Thursday, October 17, 2019

European Free Trade Association (EFTA) Essay Example | Topics and Well Written Essays - 500 words

European Free Trade Association (EFTA) - Essay Example Three out of the four EFTA members – all excluding Switzerland – are now members of the EEA (European Economic Area) Accord, linking the EC and EFTA countries for the purposes of multilateral trade. The trend towards globalization of the world economy is promoted through the scheme of generalized tariff preferences (GSP) of the four members of the European Free Trade Association. This scheme allows for preferential tariff arrangements among the trading countries of the EFTA and the EC, which resulted in savings of $1.5 billion in preferential imports in 1980 (Brown 1989). The EFTA has fewer such preferential arrangements with other countries as compared to the EC and is therefore less restrictive in choosing the beneficiaries of preferential trading arrangements. Brown (1989) has provided the preferential tariff margins for 22 countries, listing all the major beneficiaries of this system, under which the payment of import duty is suspended on industrial products and small reductions in tariffs are available for some agricultural products. Manufactured and semi-manufactured goods enjoy higher reductions in tariffs. This provides a significant boost to free trade in the international context. The salient difference between the countries of the European Community and the EFTA countries lies in the degree to which they are willing to share sovereignty. The countries of the European Free Trade Association wanted to restrict the limits of their cooperation with other European countries to that of economic cooperation, while members of the EC were willing to hand over some of their sovereignty and autonomy over their own affairs in order to receive some concessions in influencing the policies of other countries in exchange (Henning et al., p. 86). For the four countries that are still members of the EFTA however, maintaining their autonomy and sovereignty is of supreme importance and they

WAN intranet Research Paper Example | Topics and Well Written Essays - 1000 words

WAN intranet - Research Paper Example Companies with remote locations have started preferring an intranet over conventional network configurations because it is easier to navigate an intranet than a Local Area Network (LAN). Thus, conventional LANs are being replaced by intranets in progressive organizations. Users log in to an internal site to carry out organizational chores, much like using a website. An intranet is just like an organization's isolated internet. For a company that has numerous remote locations that need to be connected to the private network, remote access, also called a Virtual Private Network (VPN), is implemented, which is actually a user-to-LAN connection. When a company has to implement an enormous remote-access setup, such as one spanning 100 locations, the VPN has to be contracted out to an Enterprise Service Provider (ESP). This ESP will set up a network access server and will deliver desktop client software to the users at all one hundred remote locations, who then install it on their computers. This is called the VPN client software. The users then dial up to connect to a local Point of Presence (POP) of the ESP to eventually reach the network access server, and they use their VPN client software to connect to the company's private network after authenticating themselves to the VPN server, where they can share the centralized database, carry out their web business and web meetings, and transfer information. The users are also able to access the centralized database because the installed intranet server enables fast and reliable access to database records by removing the need to replicate databases for separate clients at remote locations (Knight et al., 2005). The VPN client is the calling router and the VPN server is the answering router. The VPN client software makes possible a safe and encrypted link between the company's network and all one hundred remote locations via an intermediary third party. This connection is
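To make the client-side sequence concrete, the sketch below illustrates, in very simplified form, the general idea the paper describes: open an encrypted channel to the access server, authenticate, and then exchange data. It is only an illustration under assumed details; the host name, port and AUTH exchange are hypothetical, and real VPN clients rely on dedicated tunnelling protocols (for example IPsec or TLS-based VPNs) rather than a hand-rolled socket.

import socket
import ssl

# Hypothetical endpoint of the ESP's network access server (placeholder values).
NAS_HOST = "nas.example-esp.net"
NAS_PORT = 443

def connect_to_access_server(username: str, password: str) -> ssl.SSLSocket:
    """Open an encrypted channel to the access server and authenticate over it.

    A sketch of the general idea only; production VPN clients use dedicated
    tunnelling protocols rather than an application-level socket like this.
    """
    context = ssl.create_default_context()            # verifies the server certificate
    raw_sock = socket.create_connection((NAS_HOST, NAS_PORT))
    tls_sock = context.wrap_socket(raw_sock, server_hostname=NAS_HOST)

    # Credentials travel only inside the encrypted channel.
    tls_sock.sendall(f"AUTH {username} {password}\n".encode())
    reply = tls_sock.recv(1024)
    if not reply.startswith(b"OK"):
        tls_sock.close()
        raise PermissionError("authentication with the access server failed")
    return tls_sock

# Once authenticated, the socket could carry requests to the intranet,
# for example queries against the centralized database described above.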

Tuesday, October 15, 2019

Properties of Gases Essay Example for Free

Properties of Gases Essay Purpose: The purpose of this experiment is to examine different properties of gases in order to be able to identify them in a laboratory setting. I will be using Hydrochloric acid, mossy zinc, Bromothymol blue, limewater, Manganese, and Alka Seltzer, to test different properties of gases formed by chemical reactions. In order to extract the gas I will use several household products such as vinegar, baking soda, water, and hydrogen peroxide in order to create the gases to be tested. Materials: Student Provided 1 Match 1 Toothpicks (or wooden splints) 1 Pie tin or similar 1 Marker pen 1 Household white vinegar 1 3% Hydrogen peroxide (H2O2) 1 Measuring spoons 1 Drinking straw 1 Tissue paper 1 Baking soda (sodium bicarbonate) From LabPaq 1 Goggles-Safety 1 Test Tube (5), 13 x 100 mm in Bubble Bag 1 Well-Plate-24 Auxiliary Supplies Bag Auxiliary Supplies Bag- CK1 1 Gas Assembly; copper/plastic tubing in #00 Stopper 1 Pipet, Empty Short Stem 1 Rubber stopper, #00, 1 hole, Pipet tip w/plastic Gas delivery tube Experiment Bag Properties of Gases 2 Alka Seltzer ® (1/4 tablet) in Bag 2 x 3 1 Bromothymol Blue, 0.04% 4 mL in Pipet 1 Hydrochloric Acid, 2 M 20 mL in Dropper Bottle 1 Limewater (Calcium Hydroxide, Saturated) 6 mL in Pipet 1 Manganese Metal Pieces 4-6 Pieces in Bag 2 x 3 3 Pipet Bulbs, Wide-Neck with 1/4 Stem 1 Zinc, Mossy 4-6 Pieces in Bag 2 x 3 Procedure: 1) Hydrogen a) I placed a small amount of Zn (mossy zinc) in the test tube containing HCl b) I then capped the test tube with the small cap that allowed gas release from the top and placed it in one of the 24 well plates. I wedged the test tube in with a bit of toilet paper. c) I then filled the large stem pipet with water and placed it atop the rubber stopper. d) I placed the well plate in a pie tin to prevent the overflow from getting everywhere. e) After the water was replaced entirely with hydrogen I removed the wide mouthed pipet and placed my finger over the opening to prevent any gas from escaping. f) I lit a match and while holding the pipet about 1cm away from the flame blew the gas onto the flame, and recorded my observations in the table. g) With a marker I marked the wide-neck pipet on the outside into three parts h) I filled the bulb with water and set it on the test tube as before. i) Once the bulb was 2/3 full of gas I removed it and placed it aside, still inverted, for later use j) I then disassembled and rinsed the tools and threw away the Zn. 2) Oxygen A) I placed a few pieces of Mn (Manganese) into the second test tube. B) I filled the test tube to within 1cm of the top with hydrogen peroxide. Afterwards I placed the rubber stopper on the test tube and the test tube in the well plate. C) I then filled another wide-neck pipet completely with water and placed it on the top of the stopper in the test tube. D) After the water was displaced I removed the pipet and placed my finger over the opening to prevent any gas from leaking. E) I lit a match and extinguished it. While the match was still glowing I placed it inside the pipet and recorded the reaction. 3) Hydrogen and Oxygen Mixture A) I took the pipet from part 1 that is partially filled with hydrogen and placed it on the generation test tube from part 2. B) I let the bulb fill until the water was completely displaced.
The mixture was about 2/3 hydrogen and 1/3 oxygen. C) I removed the bulb and placed my finger over the open end to prevent any gas from escaping D) I lit a match, held the pipet horizontally about 1cm away from the flame and squeezed the gas onto the flame. E) I recorded my observations in the table. F) I disassembled the test tube, washed the contents down the drain and rinsed the tools. 4) Carbon Dioxide A) Part I a) I placed approximately .5 ml of limewater in one well of the 24 well plate. b) I placed ½ a teaspoon of baking soda in the generation test tube. c) I filled a pipet halfway with vinegar and added it to the baking soda. Immediately after the two stopped reacting I placed the rubber stopper with the copper and plastic gas delivery tube on the top of the test tube. d) I placed the open end of the tube into the well with limewater and recorded the reaction. B) Part II a) I placed approximately .5ml of Bromothymol blue in one of the wells. b) I thoroughly rinsed the gas generation test tube with water and set up another test with baking soda and vinegar as in part 1. c) After putting the stopper in place I inserted the open tube into the Bromothymol blue. d) I removed the stopper from the generation tube. e) I lit a match and inserted the flame into the upper part of the test tube. f) I recorded my observations. C) Part III a) I poured the chemicals down the sink and flushed with water. b) I put approximately .5 ml of limewater into another well. c) I crumbled the small piece of Alka Seltzer into a test tube d) I added one pipet full of water to the test tube and immediately inserted the stopper with the gas delivery tube. e) I placed the open end of the tube into the limewater well. f) I washed the test tube and well thoroughly with water. D) Part IV a) I put approximately .5 ml of limewater into a test tube. b) I inserted a straw and blew for a few seconds. c) I recorded my observations d) I washed the tools and flushed the limewater down the sink.
Results:
Gas | Flame reaction | Glowing splint | Limewater reaction | Bromothymol blue reaction
Hydrogen | Loud sound, flame went out | n/a | n/a | n/a
Oxygen | n/a | Lit up, quickly dissipated and glowed for around 10 seconds | n/a | n/a
Hydrogen/oxygen mixture | Popping sound and flame went out | n/a | n/a | n/a
Carbon dioxide | n/a | n/a | Changed the color of the limewater from clear to nearly milky, bubbled rapidly | Bubbled rapidly, changed color slightly to green
Alka Seltzer | n/a | n/a | Lots of pressure in the test tube, color change in limewater from clear to nearly milky, rapid small bubbles |
Breath | n/a | n/a | Very large bubbles, color change from clear to nearly milky | n/a
A) Give two reasons why we fill gas generator test tubes almost to the top with chemicals. First, we do this because maximizing the pressure in the tube will maximize the gas produced. Secondly, because this leaves a shorter distance for the gases to travel. B) What happens to the zinc in the hydrogen generation experiment? The mossy zinc causes a reaction with the HCl to form hydrogen gas. C) What happens in the oxygen generation experiment? The manganese reacts with the hydrogen peroxide to form oxygen. D) Write a balanced equation for the reaction between O2 and H2. 2H2 + O2 → 2H2O E) What is the function/purpose of the Bromothymol blue in the CO2 experiment? The Bromothymol blue in the CO2 experiment is used to indicate how much CO2 is in the solution. It indicates this with a color change. F) Bromothymol blue is blue in the presence of basic solutions and yellow in the presence of acidic solutions.
If your solution is a murky green, what might you assume about the solution? I would assume the solution was neutral. Conclusion: In conclusion, I learned that the properties of gases vary vastly in their reactions with flame and other substances. I also learned how to create such gases in a controlled environment for future experimentation. The effect of these reactions put into perspective how dangerous gases can be and how their proper storage and transportation are essential in our daily lives. Seeing the different reactions will make me more cautious of the way I handle materials.
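For reference, the gas-generation and detection steps above correspond to the following standard reactions, summarized here for clarity; the lab text itself gives only the hydrogen-oxygen equation, and the manganese pieces are taken to act as the catalyst for the peroxide decomposition, as the procedure suggests.

\begin{align*}
\text{Hydrogen generation:} \quad & \mathrm{Zn + 2\,HCl \rightarrow ZnCl_2 + H_2} \\
\text{Oxygen generation:} \quad & \mathrm{2\,H_2O_2 \xrightarrow{\text{Mn (catalyst)}} 2\,H_2O + O_2} \\
\text{Hydrogen combustion:} \quad & \mathrm{2\,H_2 + O_2 \rightarrow 2\,H_2O} \\
\text{Carbon dioxide generation:} \quad & \mathrm{NaHCO_3 + CH_3COOH \rightarrow CH_3COONa + H_2O + CO_2} \\
\text{Limewater test:} \quad & \mathrm{Ca(OH)_2 + CO_2 \rightarrow CaCO_3 + H_2O}
\end{align*}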

Monday, October 14, 2019

Research and Review into Crisis Management: Mitigating Disaster

Research and Review into Crisis Management: Mitigating Disaster A crisis is a major occurrence with a potentially negative outcome. However, almost every crisis contains within itself the seeds of success as well as the roots of failure. At a conference in Japan on June 21, 2006, a Dell laptop suddenly exploded into flames, and luckily for its owner the fiery blast occurred while the PC was sitting on a table and not in his lap. An onlooker reported that the notebook continued to burn, producing several more explosions over the course of about five minutes. On August 15th, members of Dell Inc.'s Global Corporate Communications/Investor Relations organization were part of a team facing an unprecedented challenge. The team had been working with regulatory agencies in various countries for an announcement of the largest recall in the history of consumer electronics, 4.2 million Dell branded lithium-ion batteries, with cells manufactured by Sony. The announcement was leaked to the press and Dell accelerated its plans by 12 hours, including launching the recall Web site early. The focus of the Corporate Communication/Investor Relations team and key business leaders remained clear: to effectively and efficiently inform customers, employees and shareholders about the recall. The recall, one of the largest in the history of the U.S. Consumer Product Safety Commission, couldn't have come at a worse time. According to a closely watched annual study by the University of Michigan, Dell's efforts to improve service, which it only recently acknowledged publicly was inadequate, appear to be paying off. That progress is a key part of a long-awaited turnaround at the world's largest PC maker, which is struggling with a host of problems, including sluggish sales growth in its core businesses. The news of the notebook computer recall hit just as Michigan released its American Customer Satisfaction Index (ACSI) showing that Dell's customer-satisfaction score jumped 5.4% from a year ago, to 78, a point above the industry average. That puts Dell in second place, behind Apple Computer, whose score rose 2.5%, to 83. Dell's recent improvements follow many quarters of poorer service that became fodder for countless customer complaints, sometimes publicized on high-profile chat rooms and blogs. Consumers, who account for about 14% of Dell's total revenue, have complained of hold times stretching for 30 minutes or more, numerous call transfers, dropped calls, and, perhaps most important, an inability of the call-center representatives to communicate clearly and answer their questions. Indeed, many consumers posting on blogs and chat sites continue to complain bitterly about Dell. Literature Review In contrast to the disciplines of emergency and risk management, which deal primarily with natural disasters, the field of crisis management deals mainly with man-made or human-caused crises, such as computer hacking, environmental contamination, executive kidnapping, fraud, product tampering, sexual harassment, and workplace violence. Unlike natural disasters, human-caused crises are not inevitable. They do not need to happen. For this reason, the public is extremely critical of those organizations that are responsible for their occurrence. Nonetheless, even with the best of frameworks and the best of preparations, it is unfortunately still the case that not all crises can be prevented. This even holds true for those crises that we know with almost complete certainty will occur.
But the impacts of all crises can be lessened if one has a thorough understanding of the essential basics of crisis management. While not all crises can be foreseen, let alone prevented, all of them can be managed far more effectively if we understand and practice the best of what is humanly possible (Mitroff and Anagnos, 2001). Effective management of information is vital to the operations of most organizations. Some years ago Wells said, "Without adequate communication an organization will soon grind to a halt" (Wells, 1978). More recently Bakewell has pointed out, "Communication is the hallmark of good management" (Bakewell, 1997). Good communication goes hand in hand with effective management of information. Effective management of information at a time of crisis is even more vital, when damage to an organization's reputation or damage to established goodwill can result in severe damage to operations. An organization's reputation is as important as any other corporate asset, and many organizations have some kind of crisis plan intended to protect that reputation should something go wrong. This is when effective management of information (controlling communications) is so vital and always difficult. A few basic rules have clearly emerged from some recent crises. First of all, the importance of telling the truth. Second, rather than let the media network speculate, use the media network as an opportunity to disseminate your information. Leave no room for speculation: if you can't tell them something, tell them why you can't tell them (PR Journal, 1995). Perhaps another basic rule to be mentioned at this point is the need to apologize promptly when appropriate. Sir Jeremy Morse, banker and past chairman of the Institute of Bankers, maintains that it almost always pays to issue an early apology. Although this could lead to an organization being blamed for something which is not its fault, he comments: "Nevertheless, there are two central reasons why this is usually the right course. First, externally, the public respect an apology freely given rather than one that comes after a considerable period of stonewalling. Secondly, internally, an early apology frees managers to sort out the problems far more effectively than if they are still maintaining an outward front that nothing is wrong" (Haywood, 1994, p. 177). However, Black (1993) points out that if a lawyer is present there may be pressure not to express sympathy in case this is taken to imply liability: lawyers must be told that the consequences to the company of not communicating and showing sympathy are, in practical terms, sure to be much worse than if an open policy of full information and generosity is adopted. It is vital to realize the speed of media coverage because of new technological developments. Not only can stories be relayed by mobile phones and faxed from cars, but they can also be sent from helicopters and bounced off satellites. Pictures too can be taken by digital computerized cameras and sent down telecommunications lines: when Greenpeace staged its high-profile stand against the sinking of the Brent Spar oil platform, it not only posted information on the Web, but was reported to have airlifted sophisticated filming equipment and a satellite down station on to the rig, so that they could provide their own VNRs direct to news outlets (Nicholas, 1996a). This means that it is unlikely that there will be a time delay between an incident or crisis erupting and the resulting media coverage.
This emphasizes the need to react quickly at a time of crisis, and to let all parties know immediately about the action you have taken. Communicating effectively was now more often seen as of the same importance as putting the problems right (IPR Journal, 1995, p. 14). Furthermore, the media are in competition and hungry for the most newsworthy stories. Generally bad news is very newsworthy and more sensational (Ashcroft, 1994). If immediate information is not available, this leaves room for speculative stories. Methodology This paper reviews how Dell itself has responded to the challenges raised by bloggers, how it has enhanced its customer services and how it has itself undertaken social media initiatives. We use a proven customer loyalty metric, the Net Promoters index, to assess whether these initiatives have been successful. We conclude that Dell has made some limited progress in reducing negative consumer commentary about its customer services. Where Dell has been most successful is in creating a conversation about its own social media initiatives: Dell has absorbed in its own product planning and its communications the hard lessons of the Dell Hell experience, and opened a dialogue with its customers, which is directly benefitting its word-of-mouth approval rating. Findings A crisis can be divided into six stages. First, the detection of prodromes is a way for the company involved to predict any potential occurrence. For example, if an organization in the same line as yours is faced with a crisis, then you may start looking out. The second stage is prevention. It refers to how a company can prevent a crisis. That can be done by maintaining public relations programs, or by establishing a corporate culture, or even by including a crisis management plan in the strategic planning process. The third stage is containment. It is a process of limiting the amplitude of the crisis, the impact of the crisis and the spread of the crisis. Then, the next stage is recovery. It consists of two major aspects: getting the organization back to normal, and restoring stakeholder confidence in the organization. The following stage, learning, is a post-crisis process consisting of examining the crisis, looking at what was lost and what was gained, and how the organization functioned during the crisis. Finally, the last stage is the adjustment of the crisis management plan and crisis communication team. In view of what was achieved in the learning stage, the crisis management plan and the crisis communication team must be updated and any new risks uncovered by the crisis should be incorporated in them. Detection of prodromes It turned out that this occurrence was not an isolated case. In December last year, Dell launched a massive recall of about 35,000 notebook batteries contained in laptops that were deemed to pose a potential fire risk. The company said at the time it had received three reports of batteries overheating, and while no injuries were sustained, damage to a tabletop, a desktop and minor damage to personal effects had been recorded. The problem is not limited to Dell laptops. According to the US Consumer Product Safety Commission, as many as 43 laptop fires have been reported in the US alone since 2001. It has been almost ten years now since we were first warned about the dangers of lithium-ion accumulators/storage batteries, the only ones that include a flammable liquid in a pressurized container. In case of short-circuit, they can go up in flames and explode.
This is therefore why this kind of battery is rarely used in do-it-yourself tools and hybrid cars. However, they are very popular in IT as they offer an energy density two to four times higher than that of traditional batteries (nickel-cadmium, metal hydride or lead). Also being much lighter, they facilitate the manufacture of miniature devices able to last a whole day on one single charge. Several cases of explosion have occurred in the past few years, but they were rarely given publicity in the media. At best, these explosive batteries were considered to be isolated incidents. At worst, they were seen as fabrications. In the summer of 2006, the context changed, after several explosions in Singapore in June, and in Utah in July. After an enquiry, we learnt that the problem had been diagnosed more than one year earlier. Between 2004 and 2005, Dell analyzed a dozen batteries that had overheated. It detected a fault in the lithium-ion cells of its supplier Sony. Small particles could contaminate the cells, provoke a short-circuit and overheat the battery. The fault would have been repaired in February 2006. Sony reviewed its manufacturing process as well as its quality control in order to limit the presence of these particles, and eventually everything was back to normal. But nothing had been done for the batteries already on the market. "At the time, we had no serious confirmation of disaster, fire or explosion. There was therefore no reason to launch a substantial operation," added a member of the design team for the Latitude laptops. We have to wonder: was Dell waiting for a drama to start before making a move? Today, Dell admits having known about these problems for more than a year, but declares that it had trouble evaluating the seriousness of the situation. It also needed time to find the source of the problem before launching a modest recall. However, it's quite possible that Dell was simply trying to protect itself by sending some information to the Consumer Product Safety Commission, which would have allowed it to negotiate from a better position in case of litigation. The US Consumer Product Safety Commission actually doesn't blame Dell for anything, indicating that the company did its job by acknowledging the problem. It is not the first time that Dell acted this way; it happened three times in five years. Already in 2001, 284,000 computers had been recalled for the same kind of symptoms, and 35,000 others in December 2005. However, during its press conference, the American giant renewed its confidence in Sony, which would keep its status as battery supplier for the laptops of the worldwide number one. Prevention During 2005 and 2006 Dell experienced a series of financial shocks. On November 10th 2005 Dell announced quarterly profits had dropped 28%. On May 9th 2006, and again just a few weeks later on 21st July 2006, Dell announced that its earnings would not meet previous guidance. These profit warnings arose from a combination of continued price pressure on margins in the PC business and also the fall-out from its attempts to strip costs out of the business by a) off-shoring customer support functions and b) ending unprofitable aspects of warranty repair. Dell's actions created an outburst of anger from customers on the receiving end of this cost cutting. Jeff Jarvis's blog was symptomatic of this criticism. Dell responded to its critics by making two major changes: it began by investing an additional $150m in its customer service operations.
The result (according to Dell) is that the average waiting time for support calls has come down from nine minutes to three minutes. It then launched an official Dell customer services blog (summer 2006) along with two further social media sites, Dell Studio and IdeaStorm. Containment On July 31st, Engadget posted photos of a Dell notebook that had caught fire in Singapore. Its comment: "We'll keep posting these until we see a recall or a solution, so please, Dell, treat them right." By then, Dell was working closely with the government to figure out the scope of the problem. It turned out that the glitch was the same as it had been the previous year: metal particles inside the battery were causing the problems. Apple's problems with overheating batteries had been cropping up in the online media during the spring and summer as well. The CPSC's Stern says Sony connected the dots and figured out which of its batteries and which of its customers were affected. After The Inquirer, a European site for computer hardware news, expressed serious concerns about the batteries, Dell and Sony proposed a second recall to the CPSC. On August 13th, writer Theo Valich reported on The Inquirer site that another recall was on the way. Magee said the leak came from a Dell insider, whom he refused to identify. "I attribute being on top of the story to old-fashioned print journalism standards: cultivating, and, if you'll excuse the pun, not burning such contacts," he says. The formal recall was announced a day later, on August 14th. Once Dell announced the recall, it, too, harnessed the Web to reach out to the disgruntled computing masses. On August 14th, the company set up a Web site (www.dellbatteryprogram.com) telling customers how to get a replacement battery. On its customer-service blog (www.direct2dell.com), Dell also published some postings from executives and staffers about the recall (Appendix 1). These included blow-by-blow descriptions of Dell's response from Alex Gruzen, senior vice-president of the company's Mobility Product Group, and a detailed explanation of how lithium-ion batteries work from Forrest Norrod, vice-president of engineering. The company also elicited dozens of comments from customers, some of whom were plenty irked. On August 15th, George Johnson demanded to know why Chairman Michael Dell hadn't responded to questions about the battery problems at a press conference the previous day in Sydney, Australia. "When he was asked about the recent problems and if there were any developments, he did not volunteer the information that a new battery recall was in the works. If he was so concerned about customer safety, why was the announcement held over until after the press conference was over?" asked Johnson. But most people who commented praised Dell for its response. "I commend Dell for looking out for the consumer on this issue," wrote Jim Jones. "I have been fearful of leaving my system on while unattended. It's nice that I can leave my system on overnight and not have to worry about my house catching fire." Dell credits the blogosphere for helping it get through the crisis. "Information travels around quickly," says spokeswoman Gretchen Miller. "Also, it's another channel to get the message to our customers so they can be safe." On August 15th, Dell received more than 50 million hits on https://www.dellbatteryprogram.com, responded to more than 135,000 phone calls and received more than 150,000 battery replacement orders. Dell shipped the first replacement units the day it announced the recall.
Dell's Corporate Communication/Investor Relations team played a critical role in the implementation of the recall by developing and executing a strategy based on a key central message: Dell had taken aggressive, proactive action to retrieve and replace all suspect batteries with a clear focus on customer safety. The team worked to help convey the key stakeholder message to customers that the safety of Dell's customers was of utmost importance. This message was supported by articulating the benefits of the company's direct business model, including: 1) Dell's detailed information on units sold to customers, including the unit's configuration when it shipped to the customer. 2) Dell's records of customer contact information, which enabled Dell team members to reach out to customers immediately. 3) Dell's close relationship with its suppliers, such as Sony, which enabled the company to identify the problem, diagnose it and find a remedy. By working so closely with suppliers, Dell was able to respond in a way unlike any other company in the industry. Recovery and Turnaround In February 2007 Dell went further and launched IdeaStorm and StudioDell. IdeaStorm allows Dell users to feed back valuable insights about the company and its products and vote for those they find most relevant. StudioDell is a place where Dell users could share videos about Dell-related topics. IdeaStorm has already been the site of an extraordinary exercise in stakeholder democracy: the reprieve of Microsoft's Windows XP operating system. Has Dell turned the corner? To answer this question Market Sentinel analyzed stakeholders' perceptions of Dell customer service. The analysis compares the sentiment of online commentary before and after Dell's commercial slump and its new online customer initiatives. We believe that anyone wishing to track the financial prospects of Dell over the next few quarters could do worse than to watch the key metrics on word of mouth. Using the Net Promoters index, we identified five key topics of commentary about Dell customer service and placed each post into one of these categories, according to the most central concern expressed (Appendix 2): • Speed (the length of time it takes to get through to someone at the call centre, to get through to the right person to address callers' concerns, to get issues resolved, to get problems fixed, to get delivery of items etc.) • Off-shoring (customers' feelings towards technical support's relocation from the USA/UK to India and other countries, especially in relation to language problems) • Errors (inaccuracies in dealing with Dell customer service, e.g. wrong items sent, orders lost, incorrect delivery details etc.) • Technical Competence (of Dell technical support staff) • New Initiatives (Direct2Dell, Dell IdeaStorm, StudioDell, engaging directly with bloggers). The distribution of comments about Dell customer service between the categories remained roughly constant in the two years, with the largest share accounted for by general comments, followed by comments about the speed or promptness of service, and then comments about off-shoring. A significant change year-on-year was the number of comments, predominantly positive in tone, which were gathered in the second wave about Dell's new customer initiatives. Although this was encouraging, the most immediate conclusion to be drawn from the Dell Net Promoters analysis is that negative commentary outweighs positive commentary across almost all categories.
This is not at all unusual for a study of customer service attitudes, as people come to message boards or blogs in search of answers to problems they have failed to solve with the customer support services of the company in question. The tone is therefore somewhat negative. Have Dell's actions had any noticeable effect upon online feeling on customer service? The good news for Dell is that opinion has improved overall, but there are still areas for concern. There is a slight improvement in customers' feelings about Dell's speed of service (up +4) and technical competence (up +1). This improvement is offset by increasing dissatisfaction with the policy of off-shoring technical support (down -12) and with the ongoing problem of order, service and delivery inaccuracies (down -8). However, there are two significant positive shifts in opinion about Dell. The first finding is the positive reception given to Dell's new customer initiatives. However, the recent deterioration of Dell's customer service had eroded much of the goodwill of the online community. Commentators are wary of show without substance: "What will be definitely interesting to see is if Dell does anything with these comments or is this yet another example of a company putting all their efforts into trying to make themselves look better instead of actually being better … as long as they follow through and put some of the ideas in action. If they don't do this, people will realize that they don't actually have the power to influence the company and Dell is just trying to give that illusion … if nothing comes out of this you'll bring the wrath of Khan down on your head." The guarded welcome is spelled out clearly by B.L. Ochman on her whatsnextonline.com blog: "I don't know if that will make Dell's lousy service any better, but it shows they want to listen, and that's where recovery can begin." The second shift in opinion which we found in our analysis is the reduction in the negative comments about Dell's customer service. Although the overall mood still appears quite gloomy across all comments, with a Net Promoters Index of -20, this is a considerable improvement on the position 12 months earlier when the Net Promoters Index stood at -38. Changes in the sentiment of commentary seem to lag service delivery. Many of the positive comments which were collected in the first wave of this study traded on the long-term legacy of Dell as a provider of best-in-class customer service; the negative comments were more likely to be customers' reports of unsatisfactory customer service experiences in the recent past. At the beginning of 2007 negative stories about customer service continued to circulate in message boards and in blogs, but these are now being counterbalanced by those who have more positive stories to tell, having benefitted from the effects of Dell's re-investment in customer service. It took many years for Dell to establish the reputation for exemplary customer service which it had built up in the years up to 2001, before technical support was off-shored to India and Dell cut back on engineer visits to customers' homes. This reputation, as a long-term legacy, is still present in some loyal customers' minds. But the fallout from Dell Hell means that Dell now has a different and conflicting reputation to deal with: poor service from a company which doesn't care about its customers. And it is this reputation which is freshest in the mind.
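As a simple illustration of how such an index is built (the comment counts below are hypothetical, chosen only to reproduce figures of the same order as the -38 and -20 quoted above), a Net Promoters-style score is simply the share of positive comments minus the share of negative ones:

def net_promoters_index(positive: int, negative: int, neutral: int = 0) -> float:
    """Share of positive comments minus share of negative comments, in points."""
    total = positive + negative + neutral
    if total == 0:
        raise ValueError("no comments to score")
    return 100.0 * (positive - negative) / total

# Hypothetical tallies of categorized customer-service comments.
wave_2006 = {"positive": 155, "negative": 345}
wave_2007 = {"positive": 200, "negative": 300}

for label, wave in (("first wave", wave_2006), ("second wave", wave_2007)):
    print(label, round(net_promoters_index(**wave)))   # prints -38 and -20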
Learning On August 14th, the Associated Press chronicled incidents stretching back to 1999, including, among others, a Lufthansa fire in Chicago, a UPS plane in flames in Philadelphia, and an emergency landing by a plane carrying the then-Vice Presidential candidate John Edwards, all apparently the result of computer batteries' spontaneous combustion. It did not require great prescience to anticipate that air flight safety would dominate inevitable coverage of the burning batteries and their subsequent recall. The issue should indeed have been a main component of crisis war games at Dell. Once the recall was inevitable, Dell could reasonably have anticipated the necessity to talk loud and clear about air flight safety, and it should have been prepared to do so at the earliest possible moment. By delaying any announcement, Dell harmed its position on multiple fronts. It allowed the Consumer Product Safety Commission to define the story and cloak itself in the garb of public protector. Nor was the agency shy about describing the recall as the largest computer-related recall in history. The magnitude of the disaster became an integral part of the story reported in the first few paragraphs of both the Austin American-Statesman and Wall Street Journal Online on August 15th. Words like "largest" or "first" or "worst" become the story itself. At the strategic level, there's a best practice called Bad News All At Once, predicated on the time-tested wisdom that full and fast disclosure shortens the life of most stories. In fact, the art of both Investor and Consumer Relations supports this best practice almost every time. Investors want nothing more than closure, a sense that a crisis, no matter how multifaceted, will be resolved in the immediate future. Consumers, meanwhile, can be wooed back, but not so easily if the story drags on indefinitely, with a new twist in each front page edition. Bad News All At Once contains bad news in the exact meaning of the word "contain". By stanching the flow of revelations, the story is separated from events that may still lie ahead. There are times when major news, like a terrorist plot or a hurricane, can indeed minimize attention to your story. It's a factor to weigh but not simply assume. In Dell's case, the terrorist revelation magnified its crisis to an extent that must have been unimaginable when the company first decided to delay. Now there's the Securities and Exchange Commission accounting practices probe to further elongate the Dell litany. Unlike the terrorist story, this time bomb has been ticking since last year. There may be good practical and legal reasons why Dell did not reveal this material event. On the positive side, Dell seems to have done a better job working with Sony to coordinate a response to the crisis by avoiding the no-win scenario we've seen in the past when major brands blame each other in the national media. Customers do not care who is at fault. They only care that the problem gets fixed. Even here, Dell's performance was, unfortunately, less than perfect. In the opening paragraphs of those August 15th stories, we read that Dell blamed Sony for the problem. Only further down in the Austin story, and nowhere in the Journal story, does a Dell spokesperson express confidence in Sony. The fact that many other computer manufacturers may face the same product liability represents an opportunity for Dell to offer some sort of industry-wide support to safeguard products.
Such an initiative would underscore Dell's public safety leadership even as it reminds the world that it is not the only computer company with a problem. It is a company that is resolved to correct the problem and it deserves the recognition for doing so. Here are some basic lessons learned from the Dell laptop battery crisis: Predict the future. Play war games. Had Dell done so, it might have anticipated that its exploding batteries were an airline disaster story waiting to happen, even without the terrorist plot that ultimately magnified the story. In determining when to disclose, watch for material events and early warning triggering mechanisms that compel public disclosure as soon as possible. Disclosing Bad News All At Once shortens the life of a negative story and contains it by preempting substantive links to other stories. An industry-wide public safety leadership role generalizes the problem beyond your own company. Adjustment of CMP/CCP Dell is reaching out into the world of blogs and user-generated media. Perhaps the most potent and valuable business lesson Dell has absorbed from its experiences lies in the way the company has taken into its business methods the idea of dialogue with its consumers. In the following blog post by Lionel Menchaca, Dell digital commerce manager, he outlines in turn each of the advantages of opening a dialogue with customers online. It is worth reprinting the post in full and highlighting the lessons Lionel identifies: 1. Brands can quickly learn about and address product bugs and issues; 2. Brands can open an additional communication channel for customer service; 3. Brands have to listen to their consumers and that means monitoring the web; 4. Brands can use blogs to help manage crisis communication; 5. Brands can enhance off-line conversations with consumers based on what they have learnt online; 6. Brands have to be honest and admit it when they get it wrong. Dell said: "Our policy [towards blogs] in the past may have been look, don't touch. Today, it's more like listen, and join the conversation the right way" (Appendix 3). Conclusion The Dell Corporate Communications/Investor Relations team's focus was to engage key media, such as the New York Times, CNBC, and leading regional media, to ensure a wide distribution of the key messages. Within the first 12 hours of the recall, a Dell executive participated in interviews with the Today Show and Bloomberg TV and later in the process worked with global outlets such as BBC World News. Members of the Corporate Communications/Investor Relations team from around the world briefed industry analysts and responded to a number of inquiries from TV, radio, newspaper and wire services. The team faced challenges in responding to the volume of the media requests and with the expected quick turnaround of information. Team members across communications disciplines responded to help. The story shifted when Apple followed Dell's recall 10 days later. At that time, none of the other PC manufacturers had made any statement that they could have been impacted by the same contaminated battery packs. Ultimately, Lenovo, Toshiba and Fujitsu also announced recalls, and eight weeks after the Dell announcement, Sony announced the recall of batteries used in its VAIO notebook line. Within 60 days after the recall launch, the story evolved from the initial but inaccurate perception that the battery issue was solely a Dell issue to the accurate story that the Sony battery cells were the sole cause of the issue.
In the process, Dell became a model for

Performance of Guggenheim Investments: An Analysis

Performance of Guggenheim Investments: An Analysis Research Purpose This research aims to analyze and evaluate six different funds, their benchmarks and risk-free rates in order to compare the overall performance of the funds from Guggenheim Investments and the selected funds from three different asset managers. Research Design The research involved 60 monthly observations (from January 2012 to December 2016) to analyze the performance of Guggenheim Investments and the selected asset managers, Aberdeen, Wells Fargo, and Pimco, which were chosen from the top 500 global asset managers (Towers Watson, 2016). Three funds from Guggenheim Investments and three funds from the other asset managers, six funds in total, were selected with similar investment strategies (Small-Cap Blend, Mid-Cap Value and Large-Cap Blend) for consistency of the analysis. Performance is measured in terms of return, risk and risk-adjusted ratios, based on statistical and financial theory. Findings All of the funds from Guggenheim Investments, based on the results in this research, perform worse than the competitors' funds in many ways. The ratio analysis shows a lower level of risk-adjusted reward stemming from poorer risk management, and the funds also generated a lower level of return throughout the observation period. Furthermore, the regression analysis shows that all of the funds in this research utilize the Fama French model effectively, but the competitors' funds also utilize the Carhart 4 Factors Model, incorporating the momentum factor, which makes them perform better than the funds from Guggenheim Investments. Research Limitations Due to the small sample size, daily and weekly volatility were ignored in this research. In terms of data quality, using OLS regression may not be efficient for analyzing the data, as the data contain a unit root caused by price drift. Research Implication The research should provide a good example of how an investor could conduct a statistical analysis of fund performance using EViews 9 and Microsoft Excel. The results of the research could support investors, in terms of analysis and decision making, in including funds as an asset class in their portfolios. Mutual funds have been around for many decades and are becoming more popular in both developed and emerging markets. As a mutual fund is managed by professional fund managers, it unlocks many desirable features that non-professional investors seek, such as a well diversified portfolio, access to highly priced securities like blue-chip stocks, infrastructure and real estate, or tracking a particular index, which would otherwise require a lot of money. Many mutual funds are provided by asset management companies, some performing well and some poorly. Performance evaluation can be difficult for individual investors who are not specialized in this area of work. This research aims to provide basic knowledge and an example of assessing the performance of mutual funds, both in terms of risk and return, to simplify this process for every investor in order for them to effectively and efficiently invest in mutual funds. Data Collection All of the numbers in this research, including the total return index of each fund, their benchmarks and the risk-free rate, are collected from the Bloomberg Terminal. The funds' fact sheets are downloaded directly from the asset managers' websites.
The benchmarks that are used in this research are selected by taking the benchmark stated in the Guggenheim Investments fact sheets as the main benchmark. Three more supporting benchmarks are selected from the same category from MSCI, Russell, and S&P. The Fama French and Momentum factors are collected from the data library of the Tuck Business School at Dartmouth. Data Preparation After collection, the data were processed in an Excel spreadsheet to apply the formulas used to evaluate the performance of the funds, as well as several statistics. In addition, the processed data from Excel were exported to EViews 9 to estimate the regression models for the CAPM, Fama French and Carhart models. (Table 1 Funds Overview) According to the funds' fact sheets, all of the funds are passively managed, and the funds' objective is to seek long-term capital appreciation. The funds also invest only in the United States of America. In this research, I additionally use the Bloomberg category to match the characteristics of the funds in order to make a more distinct and more consistent comparison of similar funds. The main asset manager in this research is Guggenheim Investments and the comparable funds are from Aberdeen, Wells Fargo, and Pimco. The tables below show the statistical data of the funds in terms of simple and log excess returns. The returns can be calculated as follows: The Average Logged Return: the monthly returns of the fund and the risk-free rate were converted into logged returns in order to obtain time-consistent returns. The Arithmetic Mean Excess Return: this can be calculated by deducting the country risk-free rate, in this case the US 3-month T-bill rate, from the return of the portfolio. (Table 2: Simple Return Statistic) (Table 3 Log Return Statistic) From the tables, we can observe that the funds from Guggenheim Investments generated lower returns than the competitors and also have higher downside risk, measured by the semi-standard deviation. We will further observe the trend of each fund and its competitor and how it performs against the benchmark in these line graphs. (Figure 1 Performance Comparison: SSUAX) The line graph above and the following two graphs are calculated by setting the starting point at 100 and then compounding the excess return in each period. The illustration shows that both SSUAX and GSXAX effectively track the benchmark from the beginning of 2012 to the end of 2014. After that point, SSUAX started to underperform the benchmark and GSXAX started to outperform the benchmark. The trend tends to persist to the present. (Figure 2 Performance Comparison: SEVAX) The graph above shows the strong underperformance of SEVAX, while CBMAX effectively replicates the benchmark from the start of the observations. However, both funds still follow the same pattern as the benchmark. This should be due to differences in the weights of the asset allocation. (Figure 3 Performance Comparison: SECEX) This graph illustrates that PSPAX slightly outperforms the benchmark for almost the whole observation period, while SECEX is still able to replicate the pattern of the benchmark but slightly underperforms it.
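To make the return construction concrete, a minimal sketch of these calculations is given below. It assumes the monthly total return index levels and the monthly risk-free rate are available in a CSV file; the file and column names are hypothetical, and the same arithmetic can equally be reproduced in Excel or EViews.

import numpy as np
import pandas as pd

def monthly_log_returns(total_return_index: pd.Series) -> pd.Series:
    """Time-consistent (logged) monthly returns from a total return index."""
    return np.log(total_return_index / total_return_index.shift(1)).dropna()

def excess_returns(fund_returns: pd.Series, risk_free: pd.Series) -> pd.Series:
    """Fund return minus the monthly risk-free rate (e.g. the US 3-month T-bill)."""
    return (fund_returns - risk_free).dropna()

# Hypothetical usage: 'prices.csv' holds monthly index levels for a fund
# (column 'SSUAX') and the monthly risk-free rate (column 'RF').
data = pd.read_csv("prices.csv", index_col=0, parse_dates=True)
fund_r = monthly_log_returns(data["SSUAX"])
fund_excess = excess_returns(fund_r, data["RF"])
print(fund_excess.mean(), fund_excess.std())  # arithmetic mean excess return and its volatility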
Standard Deviation and Variance: as widely used measures of mean dispersion or volatility (Lhabitant, 2006), the variance and standard deviation capture the risk of an investment over time. The larger the dispersion, the greater the value of both the variance and the standard deviation. With Microsoft Excel, the function STDEV.S() was used to calculate the sample standard deviation from the logged returns. Semi-Variance and Semi-Standard Deviation: focusing only on the downside, the semi-variance and semi-standard deviation present the downside risk of the distribution in terms of dispersion below the mean. Drawdown Risk: the Maximum Drawdown simply measures the largest percentage change between the peak price and the low price during a specific period of an investment. Shortfall Risk: Shortfall Risk measures the downside risk in terms of the probability that a shortfall will occur. Value at Risk: the Value at Risk measures the maximum loss that a portfolio can be expected to suffer within an observation interval at a specified confidence level, for example 95% or 99%. Within this research, two types of method are introduced, the historical VaR and the normal-distribution VaR, where the loss quantile is specified according to the historical distribution and the normal distribution respectively. (Table 4 Risk Measurements) For the small-cap and mid-cap funds, the Guggenheim Investments funds tend to suffer from higher risk than their competitors, as they have higher values in most of the risk measurements used in this research. On the other hand, the large-cap fund of Guggenheim Investments has better risk management than the competitor in all respects. These risk measurements will affect the risk-rewarding ratios in the next section. The ratios shown in this analysis can be calculated as follows. Information Ratio = (average excess return of the portfolio over the market benchmark return) / (standard deviation of that excess return). The ratio is used for comparison with the Sharpe Ratio of the asset, as the Information Ratio can be read as the benchmark's Sharpe Ratio. By comparing the asset's Sharpe Ratio with the Information Ratio, a higher Sharpe Ratio relative to the Information Ratio indicates that the asset is outperforming the benchmark (Deborah, 2011). Sterling Ratio = (average excess return of the portfolio over the risk-free rate) / (average minimum drawdown over the observation period). The ratio has a similar meaning to a reward-to-risk ratio. It is used to measure the ability of the fund manager to control risk so as to lower the portfolio drawdown. The Sterling Ratio, as well as the following Burke Ratio, is widely advertised by commodity trading advisors, who wish to highlight their perceived skill in letting profits run and cutting losses. Burke Ratio = (average excess return of the portfolio over the risk-free rate) / (square root of the sum of the squared drawdowns over the observation period). The Burke Ratio assumes that an investor experiences an increasing, rather than linear, incremental disutility in reaction to a one-unit increase in drawdown. Sortino Ratio = (average excess return of the portfolio over the risk-free rate) / (semi-standard deviation of the portfolio's excess returns). This ratio uses the lower partial moment of the second order, the semi-standard deviation, to measure the risk premium of the asset in comparison to the downside volatility of that asset.
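As an illustration only (not the spreadsheet used in the study), the risk measures and reward-to-risk ratios described above could be computed from a series of monthly excess returns along the following lines; the function and variable names are hypothetical.

import numpy as np
import pandas as pd

def semi_std(excess: pd.Series) -> float:
    """Semi-standard deviation: dispersion of the below-mean observations only."""
    downside = excess[excess < excess.mean()]
    return float(downside.std())

def max_drawdown(excess: pd.Series) -> float:
    """Largest peak-to-trough fall of the cumulative excess-return index."""
    index = (1 + excess).cumprod()
    return float((index / index.cummax() - 1).min())

def historical_var(excess: pd.Series, confidence: float = 0.95) -> float:
    """Historical Value at Risk: the loss not exceeded at the given confidence level."""
    return float(-np.percentile(excess, 100 * (1 - confidence)))

def sortino_ratio(excess: pd.Series) -> float:
    """Mean excess return over the risk-free rate divided by the semi-standard deviation."""
    return float(excess.mean() / semi_std(excess))

def information_ratio(fund: pd.Series, benchmark: pd.Series) -> float:
    """Mean active return over the benchmark divided by its standard deviation."""
    active = fund - benchmark
    return float(active.mean() / active.std())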
(Table 5: Risk Rewarding Ratios)

From the table above, it is clear that the funds from Guggenheim Investments are inferior to their competitors in every risk-reward respect, especially the Information Ratio, which measures how far a fund outperforms its benchmark. All Guggenheim Investments funds have strongly negative Information Ratios, showing that all of them underperformed the benchmark, which further confirms the graphs from the previous section. We can conclude that the competitors' small-cap and mid-cap funds outperform the comparable Guggenheim Investments funds mainly because of better risk management, while their higher returns boost the ratios even further. For the large-cap fund, although the Guggenheim Investments fund shows better risk management, it still suffers from the lower return it generates, which leaves its risk-reward ratios below those of its competitor.

In this research, I run regressions based on the following models using EViews.

The Capital Asset Pricing Model (CAPM)

The CAPM and the SML are included in the research to predict the expected relationship between risk and return. Using EViews, if the estimate of α_i (Jensen's alpha) from the model (see appendix) is positive and significant, the asset is said to outperform the market benchmark, and vice versa.

Fama-French 3-Factor Model

By incorporating documented market anomalies, the Fama-French model reflects the empirical evidence that small stocks can outperform large stocks and that high book-to-market (value) stocks can outperform low book-to-market stocks. In this research, the two factors included in addition to the CAPM are SMB (Small Minus Big) and HML (High Minus Low).

Carhart 4-Factor Model

By incorporating a further market anomaly, the Carhart model reflects the empirical evidence that winner securities tend to keep winning and losers tend to keep losing; the one factor added to the Fama-French model is MOM (momentum).
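As a sketch of how these regressions could be reproduced outside EViews, the Python code below estimates the CAPM, Fama-French 3-factor, and Carhart 4-factor regressions with statsmodels on a fund's monthly excess returns. The factor names follow the convention of the Dartmouth (Kenneth French) data library, and the return numbers are placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder monthly data: fund excess return and the Fama-French/momentum
# factors (Mkt-RF, SMB, HML, MOM), all in percent.
data = pd.DataFrame({
    "ExcRet": [1.2, -0.8, 2.1, -1.5, 0.6, 1.0, -2.2, 1.7],
    "Mkt_RF": [1.0, -0.5, 1.8, -1.2, 0.4, 0.9, -1.9, 1.5],
    "SMB":    [0.3, -0.2, 0.5, -0.4, 0.1, 0.2, -0.6, 0.4],
    "HML":    [0.1,  0.2, -0.3, 0.2, -0.1, 0.0, 0.3, -0.2],
    "MOM":    [0.5, -0.4, 0.7, -0.6, 0.2, 0.3, -0.8, 0.6],
})

def run_model(factors):
    """OLS of the fund's excess return on the chosen factors; the intercept is Jensen's alpha."""
    X = sm.add_constant(data[factors])
    return sm.OLS(data["ExcRet"], X).fit()

capm    = run_model(["Mkt_RF"])                       # CAPM
ff3     = run_model(["Mkt_RF", "SMB", "HML"])         # Fama-French 3-factor
carhart = run_model(["Mkt_RF", "SMB", "HML", "MOM"])  # Carhart 4-factor

# A significant positive alpha (const) suggests outperformance of the benchmark;
# the factor loadings indicate size, value, and momentum tilts.
print(carhart.summary())
```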
(Table 6: CAPM Regressions)
(Table 7: Fama-French 3-Factor Regressions)
(Table 8: Carhart 4-Factor Regressions)
(Table 9: Correlation Matrix)

From the regression tables above, we can see that most of the intercepts are insignificant, indicating that the funds do not outperform their benchmarks. A few intercepts are statistically significant, but their values are so small that it is difficult to decide whether those funds outperform or underperform the market. The correlation matrix adds to this picture: each fund, whether from Guggenheim Investments or from a competitor, has a strong positive correlation (above 0.95) with its own benchmark. From this we can infer that it is very difficult for these funds to either outperform or underperform their benchmarks, given how highly correlated they are with them; in fact, we can say that these funds track their benchmarks well.

For the Fama-French regressions, the coefficients are mostly significant when a fund is regressed against its own factors and insignificant when regressed against the other benchmarks. Furthermore, the regression results confirm each fund's objective and strategy. For SSUAX, the SMB coefficients are positive and significant, which means that small stocks dominate the portfolio, and the HML coefficients are slightly positive and significant, meaning that value stocks also feature in the fund's asset allocation. The Carhart 4-factor regressions for this fund do not show any significant value for the MOM factor.

As for GSXAX, the regression results are less clear, as SMB and HML are significantly positive in some regressions and significantly negative in others; I therefore conclude that the Fama-French 3-factor model cannot effectively explain this fund. Using the Carhart 4-factor model, the results show positive and significant coefficients on the market risk premium, SMB, HML, and MOM. This suggests that the fund follows the same strategy as SSUAX, but that following stock momentum allows it to outperform SSUAX, which does not exploit that factor.

SEVAX and CBMAX, the mid-cap funds, show characteristics similar to the small-cap funds SSUAX and GSXAX, perhaps because their investment choices are closely related. The regression results still confirm that these two funds mainly invest in small, value stocks, as SMB and HML are positive and significant. However, CBMAX shows some evidence of incorporating a momentum factor, as its MOM coefficients are negative and significant, indicating that the fund short-sells loser stocks. This may also help explain why CBMAX outperforms SEVAX, which does not effectively exploit momentum.

SECEX and PSPAX can also be explained by the Fama-French 3-factor model, as their SMB and HML coefficients are all significant but negative. This likewise confirms the investment strategies of these funds, which mainly invest in large-cap stocks that lean toward a growth style. We can also see why PSPAX performs better than SECEX by looking at the Carhart 4-factor model: PSPAX has a positive and significant MOM coefficient, indicating that it incorporates momentum in its strategy, while SECEX does not. Note that all models have very high adjusted R-squared values, which means that the estimates fit well and are reliable.

None of the funds from Guggenheim Investments outperforms the competitors' funds. The Guggenheim funds show poorer risk measurements, which result in worse risk-reward ratios, and the regression analysis indicates that they also make poorer asset-allocation choices than the competitors. The Carhart 4-factor regressions provide clear evidence for the higher returns seen in the competitors' funds. In fund selection, investors should consider not only the return but also the level of risk they are willing to take, in order to match their individual risk profile and maximize their utility.
Appendix 1: Fund Fact Sheets

SSUAX: http://fulfillment.marketpowerweb.com/showpdf-sku.cfg?clientcode=rdxsku=SGIFS-SCVA
SEVAX: http://fulfillment.marketpowerweb.com/showpdf-sku.cfg?clientcode=rdxsku=SGIFS-SMCV
SECEX: http://fulfillment.marketpowerweb.com/showpdf-sku.cfg?clientcode=rdxsku=GIFS-QTR-SLCC
GSXAX: http://www.aberdeen-asset.us/doc.nsf/Lit/FactsheetUSOpenSmallCap
CBMAX: https://www.wellsfargofunds.com/assets/edocs/fact-sheet/equity-fund/cb-mid-cap-value-retail.pdf
PSPAX: https://www.pimco.com/handlers/displaydocument.ashx?c=693390403wd=Fund%20Fact%20Sheetfn=StocksPLUS%20Fund%20Institutional.pdfid=JJRUIU9YBGygdBZkoxEM7%2b9RTHXIxyZIw0T%2bDne2n4UiurFgvuWSI8U3wKrDgiR8kjwOaIhElyjPQMcsZ%2bacURlLGpyqDSkrerDNZSiUec1YccO167PpDiuxswUDimVQPGA3zF19hjqoyfUcbclAy6QGDvzW7jER5g0rHppMRCXw703Hec%2bRG7KS%2fxoNdq5X%2bSjJwmdqQmUxuiAz3vlHMWzvm6AuGcBMvm21xM%2byPTeKc0imjl19hPI6kgDYi4pvkIWF4XaSXGC0Freoikh1YeOJlv6DRnEAICWDdyOS1bGFTMAt9JLXeE1YpNtVmWJlatcNbvkEsFiINtBzcupchii02oWEi0VYYMm6kkgLZr%2fAWYpymqhkFshcCdH5SgFvTOY9sv5cj6nt9YakDxDe6lTPMwDnUGIKX3H8b39X0JMtY6B3Y6f8HSGl5ylRsRIh

Appendix 2: Assets Allocation

References

Bodie, Z., Kane, A. & Marcus, A. (2014). Investments. 10th Edition. New York: McGraw-Hill.
Kidd, D. (2011). The Sharpe Ratio and the Information Ratio. Online at https://www.cfainstitute.org/learning/products/publications/ipmn/Pages/ipmn.v2011.n1.7.aspx (accessed on 16-03-2016).
Lhabitant, F.-S. (2006). Handbook of Hedge Funds. Chichester: John Wiley & Sons, Chapters 19-20.
Spaulding, W. C. (n.d.). Portfolio Performance. Online at http://thismatter.com/money/investments/portfolio-performance.htm (accessed on 16-03-2016).