Monday, September 30, 2019

Money and Morality Essay

MONEY AND MORALITY: Gifts of eternal truth in moments of the mundane

By Cheryl Leis, PhD, Management Consultant/Practical Philosopher

As inhabitants of this 21st-century Western world, we all have to deal with money. We participate in the world of commerce as a means to obtain those things considered necessities of life. Money plays the role of the most commonly accepted means in this giving and getting from others. And the more money one has, the greater one's power to regulate the particulars of survival – one's own and that of others. We use money to participate in the exchange of products or services, individually and corporately – whether employed by or leading an organization. In some cases these organizations are publicly funded non-profits, and in other cases they are private, for-profit ventures.

Money and morality is a topic that has surfaced on many occasions in my line of work. One such instance was during a contract with CBC TV to work on the development of a six-part national series titled "Beautiful, Filthy Money and the Search for Soul." The title itself speaks to the ambivalent nature of our responses to money and its presence in our lives. As part of the contract, I appeared as a guest on the panel, where I was asked to complete the following sentence: "Money is…" Yes, what is money? My response was: Money is a tool for finding out who we really are. What you do with money, and how you live with money's presence in your life, tells a lot about your values. Or, as Ralph Waldo Emerson puts it: "A dollar is not value, but representative of value, and, at last, of moral values." This is apparently pretty close to what Buddhists believe about money. There are times when many of us are faced with an imbalance between money and morality and find ourselves asking in some form or another: How can we put "Money" and "Morality" in the same sentence and not end up with an ethical contradiction?
The incompatibility of these M-words is an inherent, yet complex part of being human. And it is only when we face the truth of their incompatibility that we can come to understand the utter necessity of their coexistence. The challenge stems from the fact that there is both a spiritual side and a material side to our situation. When we don't bring the spiritual side into dialogue with the material side, problems result. This is true for individuals as well as organizations. Think about Enron – what do you think their way of dealing with money says about the moral values that guided senior management there? Each of us could turn the question on our own lives.

Money, in and of itself, is neutral. It has no intrinsic value, but is a mere yardstick of value, a means of measuring or comparing in the exchange of one thing for another. Money "belongs to the class of great mental inventions, known as measures… Measures of distance – the meter or mile – span the gulf between two things or places yet are not themselves things or places. Similarly, money brings things of different value together without becoming one or the other."[1] Because money is merely a way of measuring, it is in itself, therefore, not real. Thus, money is both neutral and unreal. Nevertheless, we often seem oblivious to this unreal nature of money and equate it with things that are very real, like our own values. But if, as Aristotle says, "[a]ll things that are exchanged must be somehow comparable,"[2] what are we saying about our perception of reality when we measure our sense of self-worth by our net worth? While money is a measure of value, that value can change depending on what the market is willing to bear. It's rather similar to the story of the emperor's new clothes. As soon as we agree something no longer has value, our whole perception of it changes. This change in the perception of the value of something affects humans psychologically and emotionally.
So when the value of stocks falls through the floor, people react in fear or paranoia. Conversely, when stocks rise like crazy, there is frenzy fuelled by hope and even greed. What, then, motivates our relationship with money? With what intention do we strive to accumulate wealth? Do we recognize what our relationship with money says about our values?

Money Obsessing

For some the question of ethics and money leads down another path. In "Is Lucre Really That Filthy?"[3] Craig Cox, executive editor of Utne magazine, reflects on his own journey from disdain for the almighty dollar as a child of the 60s to becoming – of all things – "bourgeois," earning money and learning to manage it. There was the example by a leading voice of the counter-culture of the day, Allen Ginsberg, who wrote in Howl! of burning all his money in a wastebasket. Times have changed – even for Ginsberg, who "…of course, sold his papers to Stanford University for nearly a million bucks."[4] The irony, points out Cox, is that social justice activists who want to eschew wealth in order to bring about social justice and help the poor are in fact helping people to attain the very thing they, the activists, abhor: a comfortable life. He sets up an interesting dilemma when he insists: "If you insist on embracing poverty in your own life, how do you become a credible advocate for folks who would do almost anything to escape it?"[5] True enough, there are those who become enslaved to money in their attachment to mere accumulation of more and more capital. However, there are also those who are enslaved to money in their ascetic avoidance of it.

1. David Appelbaum, "Money and the City," Parabola, Volume XVI, No. 1 (Spring 1991), 40.
2. Aristotle, Nicomachean Ethics 1133a 18.
3. Craig Cox, "Is Lucre Really That Filthy," Utne Reader (July-August, 2003), 63.
Both are obsessive behaviours: obsessed with having money or obsessed with avoiding it – like the alcoholic's family that is obsessed with avoiding alcohol. In neither case is money at the service of the individual as a means of providing for the necessities of life; rather, the individual is at the service of money. Our emotional responses to this neutral thing called money often lead to an automatic attachment of value-statements. We grab on to labels such as "evil," "bewitching," "awe-inspiring," or "filthy lucre." Respect for money is replaced with either worship or condemnation of it. Emotional and value-laden responses are also evident when conversation turns towards money and self-righteous posturing rises very quickly to the surface with comments like: "Well, I don't soil my hands with money." Or: "I certainly don't work for money." A lot of judging of others happens: "He's just in it for the money." Or: "She'd do anything for money." This judgmental posturing also leads to ideological positioning. Anyone who focuses on making money is immediately dubbed a capitalist and, conversely, anyone who speaks of communal sharing is dubbed a socialist. Subtleties are lost and conversation ends right there. No dialogue is possible. We move from love of money to love of ideology, where anyone who thinks differently than I do about money is immediately evil.

Spiritual Moments of Mundane Existence

To judge from one side or the other is to forget that we inherently have one foot in heaven and one foot in the mud of the earth below. The challenge is to live in both simultaneously. Living as a human being means learning to deal with money – whether one has a lot or a little matters not. It will do us no good to merely pursue a spiritual life unless we are living equally and simultaneously in the material world. Christians are reminded that even bishops, or spiritual leaders, must balance both.

4. Ibid.
5. Ibid.
â€Å"For if someone does not know how to manage his own household, how can he take care of God’s church? † (1 Timothy 3:5) A life of wholeness, or one in which the spiritual and the material are in balance, guarantees freedom from distortion. Yet the need for wholeness is also at the heart of the contradiction. The spiritual and the material are of entirely different natures. Not only must they live in the same world, both the spiritual and the human sides of our existence must also have 2 their own identity and remain in full relationship with each other. We have to work at accepting this incompatibility for what it is. These are separate parts of who we are and of our daily existence. These separate parts are in a dynamic relationship one to the other, like notes in a beautiful song: you might have harmony, but you still have separate notes. If they are all the same note, there is not harmony, there is unison. Harmony has tension. It is beautiful because of the tension. Unison is nice, but harmony is richer. Morality And Business Just as it will not help us on an individual level to focus only on the one side of our nature at the expense of the other, likewise it will not help to divide our culture into the spirit-lead and others. It reminds me of a story I recently heard: Two men met for the first time, in of all places, a church on a Sunday morning. The one asked the other: â€Å"So what do you do? † To which the second responded: â€Å"I work as a director of XYZ division of a business. † â€Å"You’re in business? † quipped the first, who was a teacher, â€Å"Oh that’s too bad. † The work of the businessman was seen as inherently less worthy. How far could the conversation go after that? It is a difficult chasm. One finds a classic case of a religious-affiliated venture that refused to acknowledge that it must run itself like a business. After decades of mismanagement, the publishing house cried out to its constituency to get it out of a multi-million debt. 
One former board member was even quoted in a church publication as saying that this was seen as "a church venture, not a business venture." The mistake lay in this either-or posture. There was no acknowledgment that gifts and talents and skills of different sorts were needed. The disdain goes the other way too. One has only to think of the now infamous corporations like Enron or Livent, where the situation is merely the reverse: a business enterprise that lacks spiritual sense, and results in moral bankruptcy. If our moral principles give us the framework within which we operate, and the ability to continue operating depends upon financial viability, then integrity is automatically lost for any organization when either half of the morality and money equation is lost.

Balancing the Equation

Only when we pay attention, and only when we come to recognize the true place and role we have allowed money in our lives, can we possibly hope to reach a deeper understanding of how important a balance between the material and the spiritual is. This deeper understanding may only come in flashes, only fleetingly. Yet the truth that is understood in an instant opens us up to the truth of our everyday actions and existence. In other words, we must become conscious, we must become aware of our human condition – this life lived in a dynamic balance between the spiritual and the material – and be attentive to both. But instead of giving the right amount of attention to those mundane and material aspects of life like taxes and monetary demands put upon us, we often get caught in a bias against money. We would rather point fingers and condemn in broad strokes than engage in dialogue about particular money matters. We would rather alienate than seek to understand. Instead of casting judgment or pretending we, personally, are above being affected by money, we need to face our human situation and recognize we live in two worlds simultaneously.
Maybe then we would do a better job of living in both. "If great truth does not enter into our relation to money, it cannot enter our lives."[6] And if we do not allow ourselves to face that truth, the negative aspects of our relationship to money will sneak up on us unawares. Bad debts, overdue bills, or an empty fridge will suddenly demand so much of our human attention that we will have no energy left to focus on matters of the spirit. Undeniably, it can be a challenge to live out our moral principles in the marketplace; it is inherent in the challenge of being spiritual and human at the same time. Not giving enough attention to either the spiritual or the material, on an individual or an organizational level, leads to bankruptcy, whether moral or financial.

In his book, Business and the Buddha, Dr. Lloyd Field states, "greed is a choice." We can choose to allow our insatiable desires to form our intentions, or we can choose to recognize where our intentions are ultimately leading us. It is not money or wealth or even the capitalist system that is the problem, he argues. Buddhists regard wealth as neither bad nor negative. Rather, the problem sits plainly with us, human beings, and the intentions which we allow to motivate our thoughts, our emotions and our actions. It cannot be stated any more clearly than in this book: we are exhorted to "continually make the connection between money and human values." And then the question that really gets to the heart of the matter: "What price do we put on our ethics?" We will need to move past our biases and disdain for those whom we consider to be on the other side of the money and morality equation, and allow moments of eternal truth and even grace to infiltrate our discussions and our questions. When all gifts and skills are welcome and when integrity is our priority, then there will be the possibility of a true and dynamic relationship between money matters and morality.

6. Needleman, 265.

Sunday, September 29, 2019

Pasadena Foursquare Church Kitchen Renovation Project Essay

1. INTRODUCTION

1.1 Purpose of Risk Management

1.1.1 Knowing and Controlling Risks to Project Assets

The process of Risk Management provides the Project with knowledge of, and control over, the risk position of the project. Not all identified risks can be removed. The likelihood of surpassing requirements can be traded off against the risk of exceeding the budget constraints. Risk Management is a process used to balance the project risk position across all project resource areas, controlling the distribution and magnitude of the identified risks against the cost constraints while obtaining the best possible confidence in achieving high project performance return.

1.2 Risk Management is a Project Team Effort

1.2.1 Integral Part of Project Implementation

It is intended that Risk Management be an essential element in the Project Manager's tool kit. This involves considering risk at the very beginning of project conceptualization. The key features of risk management (RM) activity within the project are:

1. Managed risks are essential elements of the project management control process
2. Cognizant personnel accept the time imposed to develop and maintain the risk list
3. The Project Management Team plans the effort and the Project Manager takes ownership of the plan
4. Risk status reports are integral to the project review process
5. Effective metrics are identified and delivered per the plan to all stakeholders

These activities require commitment from the project manager and the Risk Representative.

1.2.2 A Team Effort

Risk Management is a team effort. The project Risk Representative is the coordinator of the risk management activity. All members of the project team have important roles in identifying, assessing, and tracking risk, and in identifying the possible approaches to dealing with risks that are necessary for the project to make good risk decisions.
Risk decisions are supported by analyses and recommendations from the project team, but are ultimately made by the Project Manager in the same manner as all cost, schedule and performance impact decisions are made.

1.2.3 Integrated Risk View

The Risk List developed and managed through the RM Process is a composite of the risks being managed by all elements of the project. It includes in one place the management view of risks from independent assessments, reviews, QA inspections, principles and policy, risk reviews, and residual risks from all project actions. Only in this way can it be managed as a comprehensive assessment of the liens on all project resource reserves, which allows optimized decisions to use these reserves to mitigate risks.

2. OBJECTIVES

2.1 Objectives of Risk Management

The overall objective of Risk Management is to identify and assess the risks to achieving project success, and to balance the mitigation of these risks (and hence the additional cost) against the acceptance and control of these risks (and hence a possibly higher degree of project performance objectives). To further these objectives, the Project Management process involves identifying risks to the success of the project, understanding the nature of these risks, individually and in total, and acting to control their impact on the success of the project.

3. RISK MANAGEMENT OVERVIEW

3.1 Definitions

3.1.1 Risk

Risk is defined as: The combination of the likelihood of occurrence of an undesirable event and the consequence of the occurrence. This combination results in a risk severity, which is: A measure of the risk magnitude. The higher risks dictate greater attention and urgency for action to mitigate. Risk severity is also influenced by the urgency of applying effective mitigations.
Primary Risk is: A risk which rates high on the severity scale – generally high levels of likelihood and consequence.

A specific risk to a project, identified in this process as a risk item, has four components, namely: the undesirable event; the likelihood of occurrence; the severity of the consequences of the occurrence; and the timeframe in which mitigation decisions are required.

Residual Risk is: An accepted performance or safety risk which remains after all possible or practical measures have been taken to reduce the severity. The term is especially used in identifying the risk remaining from all discrepancies dispositioned as "Use As Is" or "Repair" and accepted single failure points, or other decisions made which leave less than complete closure.

The project risk position is defined as: The aggregate of the assessments of the individual risk items for the project, including the decisions made to mitigate, accept and control, or take additional risk. It is a goal that this risk position be measurable relative to project reserves.

3.1.2 Risk Management

In this context, risk management is defined as: An organized means of planning the risk management activity (Planning), identifying and measuring risks relevant to the Project (Identification and Assessment), identifying, selecting and implementing measures for controlling these risks so as to control the project risk position (Decision-Making), and tracking the decisions made and the evolving risk status (Tracking). Project reserves can be identified in different ways and are managed by a number of effective tools and methods. The Risk Management methodology looks at two aspects of the Project risk position – the risk to resource reserves and the measure of the project success criteria.
The Risk Management methodology is based on the project risk position, which is the understanding of the "knowable" risk, while acknowledging that there are inherent "unknown" risk possibilities in any project that must be acknowledged when judging the adequacy of the reserves.

3.1.3 Significant Risk

A significant risk is a risk considered by the Project Manager to require focused attention by the Project Management Team on a regular basis. This group is largely, but not necessarily identical to, the group of yellow and red risks in the 5X5 risk matrix, although some green risks may be included if their mitigation time frame is near-term. These are also generally the risks which are reported at the regular monthly status reviews. The Significant Risk List (SRL) is the subset of all the project risks which are significant risks. Not all risks in the project Risk List are significant risks, but all risks should be rated according to the 5X5 matrix.

3.2 Consequence Categories of Risk

Risk consequences are assessed against three fundamental categories – called consequence categories:

1. The threat to achieving schedule
2. The threat to achieving Scope or Project Performance Success Criteria
3. The threat to the project budget

These categories may be expanded or added to – for instance, impact on facilities, church activities, etc.

3.3 The 5X5 Matrix

This project has adopted the 5X5 Risk Matrix, which defines the criteria for assessing risk likelihood and consequence for both project and implementation risk. Primary risks are generally considered to be those in the red zone of the 5X5 matrix.
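The red/yellow/green zoning of the 5X5 matrix can be sketched in code. This is only an illustration: the plan does not state the project's actual zone boundaries, so the thresholds below are assumptions.

```python
# Illustrative sketch of a 5x5 risk matrix lookup. The zone thresholds
# are assumed for this example, not taken from the project's RM Plan.

def risk_zone(likelihood: int, consequence: int) -> str:
    """Map a (likelihood, consequence) pair, each rated 1-5,
    to a red/yellow/green zone of a 5x5 risk matrix."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("ratings must be between 1 and 5")
    score = likelihood * consequence   # simple severity score
    if score >= 15:                    # assumed red threshold
        return "red"                   # primary risk: urgent attention
    if score >= 6:                     # assumed yellow threshold
        return "yellow"                # significant risk: track closely
    return "green"                     # low risk: periodic review

print(risk_zone(5, 4))  # red
print(risk_zone(2, 2))  # green
```

A real matrix would usually be defined cell by cell from the project's own criteria rather than by a product score; the multiplication here is just a compact stand-in.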
3.4 The Elements of the Risk Management Process

The activities of the Risk Management Process for this project are:

1. Identifying and characterizing risks
2. Prioritizing or ranking risks
3. Developing potential project responses to risks
4. Making decisions utilizing existing resources to restructure the program to reduce the potential effect of the risks
5. Tracking the evolving risk exposure and iterating the above actions as needed
6. Developing a plan for the above activities throughout the project life-cycle

Each element of the Risk Management process requires interaction among the project team, and the process provides methodology and tools to enable effective communication and documentation. The figure below shows a process flow for the activity of risk management in the process.

3.5 Risk Management in the Project Life-Cycle

The figure above shows the periods of activity, and generally the times of inputs/outputs, of the Risk Management Process within the project life-cycle. Each Risk Management element extends through the entire life-cycle, and the majority of effort shifts from planning through identification and assessment, decision making, to tracking as the project risk position changes and evolves. While the risk management process is serial, there is significant iteration and updating as the project progresses and matures, and thus the identified risks change, are realized or retired, and new risks arise. As a risk matures, its probability of occurrence and/or impact will change. Risks can reduce to the level of insignificance, where they are retired, or can increase to the point of occurrence, or realization. Also, new risks can and will be identified throughout the project life-cycle. The Risk Management process considers and responds to all of these outcomes by returning to earlier activities for reconsideration and update.
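The "evolving risk exposure" tracked in the process above can be quantified as a likelihood-weighted sum of cost impacts over the open risks. The sketch below illustrates the idea; the risk entries and dollar figures are invented examples, not project data.

```python
# Illustrative sketch of tracking aggregate risk exposure as risks are
# opened, retired, or realized. All entries are made-up examples.

def total_exposure(risks) -> float:
    """Sum likelihood-weighted cost impacts ($) over risks still open."""
    return sum(p * cost for status, p, cost in risks if status == "open")

risk_list = [
    # (status, likelihood 0-1, cost impact in $ if the risk is realized)
    ("open",    0.30, 12_000),  # contractor bid exceeds estimate
    ("open",    0.20,  4_000),  # permit approval delayed
    ("retired", 0.50,  3_000),  # appliance price increase (no longer possible)
]

print(f"Current exposure: ${total_exposure(risk_list):,.0f}")  # $4,400
```

Re-running this as likelihoods are reassessed gives a simple trend line of the project's risk position against its reserves.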
The project's Risk Management process can change significantly for operations, since more of the risk attention will be associated with human factors, and an update to the RM Plan may be needed.

4. THE RISK MANAGEMENT PROCESS IN PLANNING

4.1 Understanding Risk in the Planning/Proposal Phase

The Project Management Team works to define the requirements and time frames of the project. The pre-project activity involves concept definition and exploration, with decisions made in the Project Team as to the scope and budget with which the "Plan & Design" stage begins. In the Plan & Design stage, each project task is analyzed by the Project Team and "make or buy" decisions are made based on the availability of qualified volunteers in the church community. Buy decisions are made for each outstanding activity. Rough designs are formulated by the project team, calling on subject matter experts as required. Rough order of magnitude estimates are made of the cost of each task. RFPs are released to potential contractor or volunteer candidates. Proposals are received and reviewed by the Project Team. The designs are finalized, schedules and budgets are made, the Work Breakdown Structure and Gantt charts are updated, and the project baseline is established. It is at this point that the Risk List is established by the Project Team, pulling information from contractors, subject matter experts, and the experience and judgement of the Project Team. The Risk List can then be used to identify the most attractive of a number of options in contractor selection, design changes, scope, schedule, and budget needs. In addition to performance and needed resources, risk should be a major consideration in justifying the chosen options.
This requires specific identification of the apparent risks in each option – mitigating them if possible in the process of maturing each option. The relative weights of the risks combine with the weights of the performance and resource assessments in selecting the option to go forward.

Fig 4-1 Accounting for Risk in Project Formulation

4.2 Using Risk in Establishing Reserves

In establishing the budget reserves for the project, to be confirmed at the "Permitting" phase, risks are used to define the risk exposure of the budget. Risks that are identified in the Planning Phase can be assessed for their potential cost, should they occur. This requires quantification of the risk consequence (in $) and the risk likelihood.

4.5 The Preliminary Risk Management Plan

At the end of the Planning Phase, a preliminary RM Plan is drafted. The plan will consist of all of the risks associated with the project and a specific plan for controlling each SRL risk identified.

5. RISK MANAGEMENT IN IMPLEMENTATION

5.1 Risk Management Planning

The SRL will be reviewed and updated weekly by the Project Management Team. Monthly status will be reported to all of the stakeholders. New risks will probably emerge as the project progresses. Opportunity will be provided for the Project Management Team, including the project QA representative, to add new risks to the project Risk List.

5.1.1 Risk Mitigations

The following subjects are considered when documenting risk mitigations.
a) Map the project success criteria and project objectives into an overall approach to risk, reflecting the prioritization of performance of project objectives and constraints, and weighting the emphasis among the following:

- avoiding risk, by minimizing risk through redesign, alternative developments, parallel developments, large margins, additional equipment to buffer constrained schedules, etc.;
- accepting risk, by developing contingency plans and margin management criteria for exercising those plans, and/or allowing descope/reduction in Project Performance return to trade against cost, schedule, and other resources; or
- taking risk, by finding and incorporating high potential performance/cost/schedule benefits with acceptable additional risk to reserves or margins.

5.2 Risk Identification and Assessment

5.2.1 Identification and Assessment Requirements

The requirements in identification and assessment are to identify the risk items, to describe them sufficiently to allow assessment and decision-making, to identify practical mitigation approaches, and to develop the Significant Risk List (SRL), which is a list of the identified risks to the project and their decision-enabling data.

5.2.2 Risk Description

Effective risk descriptions identify the root source of the risk concern (the Risk Condition), the event that is feared (the Risk Event), and the risk Consequence to the project. The format is generally: "Because of (the condition giving rise to the risk), it is possible that (such and such event) will occur, with the consequence that (describe the impact – e.g.
delivery "n" weeks late, loss of "xyz" performance capability, need to build another component, etc.)." Another, less favored, descriptive format sometimes used is: "If (such and such an event) occurs due to (the condition that …), then (describe the impact …)." Sometimes further words will be needed to describe the uncertainty, explain why the condition is present, and what other factors need to be considered and why.

5.2.3 Inputs

When the pre-formulation or early formulation phase feasibility demonstration and scope definition results have been approved, the required inputs for Risk Identification can be assembled. The information needed for identifying and assessing risk includes at least preliminary versions of:

- Requirements and Project Success Criteria
- Project Management Plan
- Project Requirements
- Risk Management Plan
- Staffing Plan / key personnel
- Schedule / schedule drivers
- Budget / budget drivers

5.2.4 Identifying Risk Items

5.2.4.1 Risk Identification Methodology

The first step in developing the risk list is generally a brain-storming activity where potential risk items are identified by the key project personnel. These risks are characterized by two parameters – the likelihood of an adverse event and the consequences of that event. Whenever a potential risk is submitted for consideration, it is accompanied by estimates of these two parameters. The risks are identified by the "experts" in the specific subject of the risk item – that is, the key personnel submit candidate risks in their project areas of expertise. Risks may be suggested in areas outside their expertise, but they should then be presented to the expert in that area for concurrence. As these risk items are characterized, other data are needed, which are described below. The mechanism for obtaining these submissions will vary. The brain-storming may occur as a group, by e-mail, or separately in one-to-one discussions.
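The preferred condition/event/consequence statement format described above lends itself to a small template helper. This is a hypothetical sketch; the helper name and the sample risk are invented for illustration.

```python
# Minimal helper rendering the "Because of..., it is possible that...,
# with the consequence that..." risk statement format. The example risk
# below is made up for illustration.

def risk_statement(condition: str, event: str, consequence: str) -> str:
    """Render a risk in the condition/event/consequence format."""
    return (f"Because of {condition}, it is possible that {event} "
            f"will occur, with the consequence that {consequence}.")

print(risk_statement(
    "reliance on a single qualified electrician volunteer",
    "that volunteer becomes unavailable",
    "rewiring work is delayed several weeks",
))
```

Forcing every submission through one template is a cheap way to get the "standardized" wording the process asks for before the first group review.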
The submissions should be "standardized" to remove very disparate interpretations of the rules before the first group consideration takes place. The following characteristics should be observed in the process. The candidate risks submitted by the team should be inclusive – if the item might be a risk, input it. The Project Manager will work with the submitter to delete inappropriate risks or modify the assessment as needed. They should have a common basis for interpretation; this is accomplished by the Project Manager iterating with the specific group members. The Project Manager may use team discussion to assess the risk list and remove differences of understanding.

5.2.4.2 Resources for Identifying Potential Risks

a) External Resources

Risks to the project may be identified through the experiences of other projects, or the construction industry in general.

b) Internal Resources

Sources and resources available within the Church community or within the project management team which are used to develop inputs to the Risk Identification and Assessment element include:

Expert Judgment – The RM risk identification and assessment process relies heavily on the expert judgment of the project implementers and their peers.

Schedule, WBS, Work Agreement Assessments – One can systematically examine the planned work and identify uncertainties to which the project has high sensitivity, which can result in risk items to be assessed.

Technical and Design Organization Assessments – Functional block diagrams, requirements flow-down, fault trees, etc. are all systematic organizations of the planned product which can be examined comprehensively for risk items.

Review Board Reports – Review Board reports include recommendations and issues, as well as RFAs. A Review Board can also include members from contracting companies.
Residual Risk – Residual risks, which are identified in many activities within the project as unavoidable risk remaining after all reasonable actions have been taken, should be carried in the risk data base. They should be considered for inclusion in the SRL if applicable, such that they would be reported at monthly and quarterly management reviews as accepted risk. Early in the project design activity, decisions such as allowing selected single failure points or marginal design against worst-case possibilities may be made with due consideration of the risk taken. These considerations should be retained in the residual risk descriptions and rationales.

5.2.4.3 Categories of Risk

Categorization can be used to allow the aggregation of subsets of risks, and so provide insight into major risk areas in the project.

Risk Source Categories – A useful set of risk source categories identifies areas of the project where potential risk might reside – for example, the performance, cost, or schedule constraints within which the project must work. Other risk source categorizations which might provide insight include: the project system or subsystem area in which the risk is manifested; the WBS element primarily involved; and technology areas (if new technology is used with appliances, etc.). Risk Source Categorization is optional.

5.2.4.4 Risk Status

Risk status is the process for configuration management of the risks, and also an indicator to external reviewers of the project's plans to deal with each risk. For risks that have been dispositioned, the status classification definitions are shown below.

RESEARCH – A research category is assigned when more knowledge is required about the risk or the mitigation options. The objective is to move to mitigate, watch, or accept as soon as possible.

ACCEPT – A risk is accepted if there are no practical mitigations identified.
Depending on the severity of the risk, it may be necessary to justify acceptance to the CMC as a Primary Risk. The risk is tracked for changes as the project matures.
- MITIGATE – A risk is in the mitigate category if there are funded actions under way to reduce the risk. This may involve future decision milestones, or milestones where the mitigation risk reductions may be claimed.
- WATCH – A risk in the watch category has known future points of change, and requires tracking and possible future reassessment. Candidate mitigation options may be carried, and the risk may be re-categorized as the project evolves.

5.2.5 Risk Item Descriptors

The draft SRL should list each identified risk item, and for each item should include as a minimum:
- Description of the adverse event (condition, event, consequence)
- Context of the risk (if warranted)
- Categorization in the categories chosen
- Implementation risk assessment: consequence; likelihood of occurrence
- Project risk assessment: consequence; likelihood of occurrence (if quantified assessment is used)
- Level of impact on resources (technical, cost, and schedule)
- WBS elements primarily affected
- Task/schedule elements primarily affected
- Mitigation options: description of potential mitigations for consideration; costs of identified mitigation options
- Timeframe: urgency of decisions for mitigation effectiveness; time window of potential occurrence, if applicable
- Resulting reductions in risk likelihood and impact if a mitigation option is implemented

Project personnel who are identifying risk items will record as much of this as is available at the time a risk is input to the project. Recording the likelihood and consequence descriptors requires that the thought processes of risk assessment (described below) be gone through, and in general a first cut at each can be entered with the other data.

The sample data sheet (Figure 5) organizes these under the column groups Risk Description Data, Timeframe, Implementation (Schedule or Cost) Risk, Project Scope Risk, and Mitigation Data, with the following fields: Risk No.,
Title, Description, Impact, Timeframe (near-term, mid-term, or far-term), Implementation Consequence (cost to recover), Likelihood (implementation), Risk Cost, Project Consequence (loss of performance), Likelihood (project), Mitigation Options, Mitigation Cost, and Risk Reduction.

Figure 5 – Sample Risk Identification and Assessment Data Sheet

- Risk Number: An ID number which can be used to find data in a database. The number can be indexed to indicate updates.
- Title: A short reference for reports, etc.
- Description: Text describing the condition or root cause, the feared event, and the consequence. (Additional columns can be added here to denote classification schemes to be used. Some risk managers add a time-frame classification to distinguish near-term risks from long-term risks.)
- Impact: Text that describes the change to the project due to the event described above. For implementation impact, the description might indicate what would be necessary to recover. For a project risk, the description might indicate the reduction in the project's capability to return results.
- Implementation Consequence: A measure against the 5x5 assessment criteria (qualitative) or in resource expenditures (e.g., $) required, as a result of the impact described, to get back on track.
- Implementation Likelihood: A measure against the 5x5 assessment criteria (qualitative) or in percent (quantitative) of the described consequence being realized.
- Risk Cost: For quantitative assessment, the product of the consequence in resource measure and the probability (e.g., $).
- Project Consequence: A measure against the 5x5 assessment criteria of the degradation of project return due to the event occurring.
- Project Likelihood: A measure against the 5x5 assessment criteria (qualitative) or in percent (quantitative) of the described consequence being realized.
- Mitigation Options: A description of one (or more) possible approaches to mitigating the risk.
- Mitigation Cost: An estimate of the cost in project resources to implement the mitigation.
- Risk Reduction: A description of the effect of the mitigation on the original risk assessments.

5.2.6 Risk Item Assessment

5.2.6.1 Qualitative Assessment

Qualitative risk assessment is the assignment of adjective ratings to the degree of significance of either likelihood or consequence of occurrence. Criteria such as "High," "Medium," and "Low" are generally used. Scales can have fewer gradations (e.g., high and low) or more (e.g., very high, high, significant, moderate, and low). Definition of these levels is essential, and some iteration and discussion will be needed before the team reaches a common understanding of the distinction between assessed levels.

Project Risk Level Definitions (consequence of occurrence):
- Very High: Project failure
- High: Significant reduction in project return
- Moderate: Moderate reduction in project return
- Low: Small reduction in project return
- Very Low: Minimal (or no) impact to project

Implementation Risk Level Definitions:
- Very High: Overrun budget and contingency; cannot meet schedule with current resources
- High: Consume all contingency, budget, or schedule
- Moderate: Significant reduction in contingency or schedule slack
- Low: Small reduction in contingency or schedule slack
- Very Low: Minimal reduction in contingency or schedule slack

The advantage of this qualitative approach is that, while subjective, the project team can quickly get in tune with the distinction between levels by working through a number of risks together, and can then assess their own risks fairly consistently. The disadvantage is that the system does not straightforwardly allow "adding up" or otherwise aggregating the total risk. Rather, a risk distribution is used to display the project's risk position, as will be seen below.
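As a minimal illustration of the descriptors above, the sketch below models a risk record with both a qualitative 5x5 assessment and the quantitative Risk Cost field (consequence in dollars times probability). The field names, the example data, and the numeric mapping of the adjective levels are assumptions for illustration only; the document defines the levels qualitatively and does not prescribe an implementation.

```python
from dataclasses import dataclass

# Qualitative 5x5 levels mapped to ordinal scores (this mapping is
# illustrative, not taken from the document).
LEVELS = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

@dataclass
class RiskItem:
    risk_no: str
    title: str
    description: str
    likelihood: str                 # qualitative level, e.g. "high"
    consequence: str                # qualitative level, e.g. "moderate"
    prob: float = 0.0               # quantitative likelihood (0..1), if used
    consequence_cost: float = 0.0   # quantitative consequence in $, if used

    def matrix_score(self) -> int:
        """Position on the 5x5 matrix: likelihood score times consequence score."""
        return LEVELS[self.likelihood] * LEVELS[self.consequence]

    def risk_cost(self) -> float:
        """Quantitative 'Risk Cost' field: consequence ($) times probability."""
        return self.prob * self.consequence_cost

# Hypothetical example record (not from the document):
r = RiskItem("R-001", "Foundation delay",
             "Late concrete delivery slips the schedule",
             likelihood="high", consequence="moderate",
             prob=0.4, consequence_cost=25_000.0)
print(r.matrix_score())  # 12 (4 x 3)
print(r.risk_cost())     # 10000.0
```

A team could sort such records by `matrix_score()` to display the risk distribution mentioned above, while `risk_cost()` supports the quantitative aggregation that the qualitative scheme alone does not allow.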

Saturday, September 28, 2019

Boeing Corporation Crisis Essay

Attached is a report on the biggest crisis that the Boeing Corporation has ever faced in its existence. First it will describe the events leading up to the problem before it became a public issue. Then we will discuss in extensive detail exactly what the problem is that Boeing is facing and how the company can overcome it. The team of xxx completed the research and the written report of the crisis. Boeing is an international supplier of commercial airline planes, military defense aircraft, and surveillance systems. Partially due to the September 11th attacks on the United States, the Boeing Corporation will be laying off 30,000 employees from its nationwide facilities. The layoffs will affect cities such as Los Angeles, Seattle, Houston, St. Louis, and Philadelphia, and will affect employees from entry level to the executive offices. The announcements of these issues have caused Boeing's stock to fall to a surprising low and production levels to drop dramatically. XXX would like to thank XXX for giving us the opportunity to complete this research assignment. The research helped us learn how to utilize the different databases available to us more efficiently and to put our findings into a format that can be presented to a public organization or the media. The skills learned in completing this report can be applied when presenting a detailed issue, and solutions to a specific problem, to upper management.

Boeing Corporation Crisis
Cal State Fullerton
Jean Fuller
May 28, 2002

Executive Summary
Today the Boeing Corporation is facing one of the largest crises in the history of the company. It is in the process of laying off a total of 30,000 employees from its facilities nationwide. The layoffs will take place in cities such as Los Angeles, Seattle, St. Louis, Philadelphia, and Atlanta. Most of the layoffs affect the commercial airline division, but the military defense and aerospace divisions will also be affected.
The plan for the reduction in workforce began in July 2001, but the attacks on the United States on September 11th left the company having to lay off even more employees. At the present time, Boeing is mainly focusing on reducing the number of mandatory layoffs. This is going to be hard to accomplish because of the reduced demand for the company's goods and services. In the future, Boeing's focus will be on returning to a high level of production and profitability. The company will focus on competing by increasing product innovation and reducing the expenses it incurs during production in an effort to keep prices low. Due to the economic slowdown and reduced spending by consumers, the Boeing Corporation was beginning to experience a loss in revenues and a decline in production. Not more than three months later, the attacks on the World Trade Center in New York reduced the demand for commercial aircraft because of fear of traveling by airplane. Also, owing to heavy competition with Lockheed Martin and Northrop Grumman, Boeing is not being awarded as many contracts with the United States military, which is causing a decline in revenues for the aerospace and military defense divisions. There are not many ways to overcome the entire problem, but there are some alternatives that the company can consider in order to reduce its effects. The alternatives are to distribute hours equally among the employees, reallocate employees into different divisions, offer severance pay, or continue to lay off employees. Boeing has to be careful in the way that this particular situation is handled. If employees feel as though they are being treated unfairly, they will not have job satisfaction and production may decrease. The best possible solution to Boeing's problem is to distribute the hours equally among the employees. By doing this, employees will maintain their jobs.
This will result in higher job satisfaction than the other alternatives, and Boeing will not have to go through an extensive rehiring process when it returns to a stage of profitability.

Boeing Problem Statement
As Boeing faces one of the greatest financial crises in the history of the airline industry, it plans to cut production workers, engineers, and support staff by mid-2002 (Nyhan, September 2001). Because of a declining economy as well as the terrorist attacks that occurred on September 11th, Boeing is laying off a total of 30,000 employees in all divisions of the corporation: aerospace, commercial aviation, and military defense. The layoffs will happen in Los Angeles, St. Louis, Seattle, Oklahoma, and the Puget Sound area, and will affect everyone from salaried executives to hourly paid maintenance employees. Layoffs are a sign of company turmoil and should be avoided in order to retain the company's stockholders.

Short- and Long-Term Goals
Boeing's primary short-term objective is to maintain a reasonable level of profitability given the recent occurrences. It will attempt to accomplish this by reducing the amount paid to current employees, by either reducing their hours or completely terminating their employment with the company. Given the current supply of and demand for the company's products, profits will be reduced if the current level of employment is maintained. Boeing's long-term objective is to be the number one supplier of commercial, aerospace, and military aircraft and technology. It aims to accomplish this by maintaining a level of profitability that satisfies the stockholders and corporate executives. It also wants to remain highly competitive with its current competition: Northrop Grumman and Lockheed Martin. If Boeing loses government aerospace and military defense contracts to the competition, there is a high probability that the company will become insolvent and declare bankruptcy.
Details of the Problem
Prior to September 11th, Boeing was already going through trying times. Its satellite manufacturing operations were in a recession, due to the bursting of the internet and telecom bubbles (Laing, 2002). The commercial airline industry was also facing a slowdown, a result of high fuel prices, labor cost increases, a softening of the national economy, and low passenger traffic (Smith, 2001). Also, improvements in production efficiency at Boeing led to a plan to cut up to 15% of its employees in the commercial-aircraft business. This production efficiency was due to the industry's first-ever moving assembly line for the final phase of the production process, which cuts unneeded steps (Holmes, 2001). Likewise, by the end of 2001, Boeing lost out on the largest military contract ever when the Pentagon picked rival Lockheed Martin to build the Joint Strike Fighter for shared use by the Air Force, Navy, and Marines. This next-generation manned fighter is expected to generate more than $200 billion in revenues over the next 20 years (Laing, 2002). But most traumatic for Boeing were the terrorist attacks of September 11th. They transformed what had been shaping up as a mild downturn in commercial jet orders into a veritable collapse in demand (Laing, 2002). After the attacks, the desire to fly drastically declined due to fear, and security issues made flying a nuisance. This left the US airline industry in a serious crisis. Companies such as Continental, US Airways, American, and Delta cut up to 20% of their capacity (Smith, 2001).

[Graph: Boeing stock price. Source: www.bloomber.com]

The terrorist attacks caused Boeing's stock to plummet. Prior to September 11th, Boeing's stock was already falling because of the downturn in the economy. From the graph above, we can see that the attacks caused the stock price to fall from $50 a share to $30. This was a sign that investors understood the impact the terrorist attacks had on Boeing's industry.
After September 11th, Boeing planned to respond to these problems by cutting production rates by 50 percent (Nyhan, November 2001). On September 18th, one week after the attacks, Boeing announced at a press conference that it would lay off up to 30,000 employees by the middle of 2002 (Smith, 2001). On that day, Boeing reduced its workforce by 12,000: 3,000 through retirement and attrition, and 9,000 through layoffs (Farley, 2001). Boeing also stated that its jetliner orders would decrease drastically: in the next three years, 80% of its 2001 orders would be delivered (Smith, 2002). It also planned to cut its monthly production of aircraft by half, from 48 to 24. The director of people at Boeing's commercial airplane unit said, "In order to match our reduced production rate, we will need to accomplish the majority of the 20,000 to 30,000 reductions in 2002 employment by midyear". Members of the Associated Press and Kiro 7 Eyewitness News stated, "Last week Boeing officials announced plans to layoff as many as 30,000 employees, mostly in the Puget Sound area, by the end of next year because of plummeting demand for new planes and postponed deliveries since the terrorist attacks." Boeing's commercial airplane division is not the only division that the layoffs will affect. Surprisingly, 5,000 of the 30,000 layoffs are predicted to come from the military division. The military division cutbacks are also due to the September 11th attacks, but they are mainly due to the global economic slowdown (Klein, 2001). This comes as a surprise because the military division would be expected to grow in a time of war or terrorist attacks. Stockholders may assume that the government will request an increased level of production of fighter jets and military bombers so that the United States can dominate in the war against terrorism. In addition, the layoffs will not only affect Boeing employees, but also people outside the company.
As many as 20,000 of the Boeing layoffs may occur in the Seattle area alone, resulting in an additional 34,000 jobs lost by Boeing suppliers, subcontractors, and others (Klein, 2001).

Alternatives
Before Boeing implements any solutions, it must maintain a good level of communication with its employees. The employees must know the reasons for a particular action taken by Boeing in order to avoid any mistrust and confusion (Hoffman, 2001). For example, an employee will wonder why layoffs are taking place when Phil Condit, Boeing's CEO, is making an annual bonus of $1.13 million (Webber, 2002). Boeing must carefully explain its plans and what it hopes to accomplish through its actions. Boeing can reduce the number of layoffs by implementing any of the following solutions:

Distribute Hours Among Employees
The first solution for Boeing is to spread the hours among the employees of each department. Every department is given a set number of hours it can use each week at the beginning of the quarter, depending on the amount of business Boeing has. If those hours are taken and spread among the employees of each department, not as many layoffs will occur. The hours will be spread out by reducing the workweek from five days to four. By cutting one day out of an employee's schedule, Boeing is able to give those hours to another employee who would otherwise be laid off: once four employees each give up one day of their workweek, one employee is able to keep a job. The advantage of this solution is that fewer employees will have to be laid off. Employees will have their hours cut according to seniority, so some employees who have been with the company for a number of years will not be affected by the action. By holding onto the employees and not laying them off, Boeing will be prepared to handle new contracts as they arise.
Boeing is predicting that the recent decline in contracts is only short-term and that business will soon return to previous levels. The disadvantage of the solution is that some employees will not be able to afford a reduction in hours. In this scenario, employees will not be satisfied and will hold each other responsible for the reduced hours. If employees are not satisfied, their production will decrease due to their dissatisfaction.

Re-Allocate Employees
The second solution for Boeing is to train employees in other departments within the company. This will allow Boeing to reallocate employees to different departments rather than laying them off. With the commercial airline department being hit the hardest by the recent terrorist events, employees in that department could transfer to other departments if they possessed the knowledge. The advantage of training employees outside their departments is the value it adds to the employee. If employees have the knowledge and know-how to be productive and efficient in other departments, not just their own, they become an instant asset to the company. Due to this flexibility, Boeing can move employees around in accordance with demand. A disadvantage of this solution is that Boeing will incur high costs for training employees to do other jobs. A slowdown in production will also result from the time spent on training. The transition for an employee moving from one department to another is difficult because the employee will not be as efficient at first.

Severance Pay
Early retirement packages will be available to qualified employees. The retirement packages to be offered will vary depending on the number of years an employee has been with the company. For each full year of service an employee has with the company, up to twenty-six years, the employee will receive one week of pay (Hoffman, 2001). The employee can take the severance pay either as a lump sum or as an income continuation.
The single lump sum plan pays the severance to the person in one check within one month of leaving the company. The income continuation plan pays the severance on the regular paydays every two weeks (Boeing, 2000). The advantage of this solution is that each individual makes their own decision and has total control over what they want to do. High salaries will also be eliminated as management personnel take the package. Once the managers who find early retirement appealing leave, Boeing will be able to promote employees into those positions without having to pay the large salaries. The disadvantage of this solution is that not many jobs will be saved, because not many employees will opt for the early retirement package. Boeing will also lose experienced managers if they decide to take early retirement. If this solution is implemented, Boeing will have to continue to lay off employees because not enough jobs will be cut.

Continue Layoffs
The last solution is to continue to lay off employees as necessary. This will allow Boeing to keep revenues high because the layoffs will occur according to the market: if Boeing does not get as many contracts as expected for a particular quarter, the layoffs will help the company's finances. The disadvantage of this alternative is the potential for business picking back up. The market for commercial jetliners is expected to boom in two years, and Boeing needs to be able to handle the new contracts. If Boeing has to constantly train new employees as business increases, in an effort to compensate for the ones that were laid off, it will not be operating at full efficiency.

Solution
Boeing realizes that layoffs cannot be completely eliminated; however, it wants to reduce layoffs to the lowest possible number. Boeing will accomplish that by distributing the hours in each department among the employees.
This solution will allow Boeing to save jobs by reducing the employees' workweek from forty hours to thirty-two hours. The management of each department will determine the hours to be cut and the number of employees affected. This will be implemented on June 1, 2002 throughout all departments. Most employees will be affected by the reduction in hours, and management must be prepared to cope with the initial negative reaction. In order to measure the results of the solution, Boeing must evaluate the impact on its bottom line along with the toll it is taking on employees. An evaluation will occur every six months and will be led by top executives and the department managers. Once the evaluation is complete, a decision will be made by the board of directors on whether to continue with the reduction of hours or to adopt a different course of action. The thirty-two-hour workweek is expected to be temporary, as analysts are predicting a turnaround in demand for planes (Holmes, 2001). As production returns to capacity, hours will be returned to employees according to seniority.

Reference List
Airlines slash workforces. (n.d.). Retrieved April 10, 2002, from www.proquest.com
Airwise News. (2001, September 22). Majority of Boeing layoffs in aircraft sector. Retrieved April 10, 2002, from www.dowjonesinteractive.com
Associated Press Newswires. (2002, March). More Boeing layoff notices. Retrieved April 10, 2002, from www.dowjonesinteractive.com. Article No. A71327300
Associated Press Newswires. (2001, September). First Boeing layoffs set to take effect Dec. 14. Retrieved May 7, 2002, from www.seattleinsider.com/news/boeing.html
Boeing Company. (2002). A brief history. Retrieved April 8, 2002, from www.boeing.com/companyoffices/history/boeing/html
Boeing Company. (2002). Layoffs benefits plan. Retrieved May 7, 2002, from www.boeing.com/companyoffices/benefits/boeing/html
Carlton, D.R. (2002, January). Boeing bleak outlook. The Economist, 362(8257), 58.
Corliss, B.
(2002, April). Boeing deliveries drop 10%. Retrieved May 7, 2002, from www.msnbc.com
Farley, G. (2001, December). Union leaders file grievances. The Associated Press. Retrieved April 15, 2002, from www.king5.com/cgi-bin/gold.cgi
Genna, C.A. (2002, April 19). More layoff notices to be issued at Boeing. Retrieved May 8, 2002, from www.latimes.com
Gillie, J.F. (2001, November). Lost jobs in Puget Sound area. The News Tribune, Tacoma. Retrieved April 10, 2002, from www.dowjonesinteractive.com
Gillie, J.F. (2001, December). 1,700 new layoff notices today. The News Tribune, Tacoma. Retrieved April 10, 2002, from www.dowjonesinteractive.com. Article No. TCMA0135500
Global general aviation industry delivery breakdowns for jets. (n.d.). Retrieved April 10, 2002, from http://rdswebl.rdsinc.com/texis/rds/suite.html
Hoffman, R. (2001, June 29). The dynamics of downsizing. Retrieved May 18, 2002, from www.hradvice.com
Holmes, S.C. (2001, November 26). Aerospace industry downsizing. Business Week, (3759), 108-109.
Klein, A. (2001, October 13). Boeing faces massive layoffs. The Washington Post. Retrieved April 15, 2002, from http://detnews.com/2001/business.html
Laing, J.R. (2002, April). Gaining altitude: Corporate profiles. Barron's, 82(17), 21-25.
Lloyd, M.K. (2001, December). Losing altitude; Aviation. The Economist, 361(8253), 81-83.
More Boeing layoff notices going out. (n.d.). Retrieved April 26, 2002, from www.seattleinsider.com
Nyhan, P.J. (2001, September). Boeing expects to layoff up to 10 percent in commercial division. Seattle Post-Intelligencer. Retrieved April 10, 2002, from www.dowjonesinteractive.com. Article No. SEPI012700
Nyhan, P.J. (2002, February). Boeing lays off 1,000 local workers. Seattle Post-Intelligencer. Retrieved April 10, 2002, from www.dowjonesinteractive.com
Nyhan, P.J. (2001, November). Majority of Boeing layoffs to hit by June. Seattle Post-Intelligencer. Retrieved April 10, 2002, from http://seattlepi.nwsource.com
Schneider, R.
(2001, December). Losing altitude: Aftershocks from September 11th. The Economist. Retrieved April 10, 2002, from www.infotrac.com. Article No. A81118376
Smith, B.A. (2002, January 21). Boeing continues its production cost focus. Aviation Week & Space Technology, 156(3), 43-44.
Smith, B.A. (2001, September 24). Boeing cuts delivery estimates, prepares for major layoffs. Aviation Week & Space Technology, 155(13), 29-32.
Song, K.M. (2001, December). Boeing layoff face challenge. The Seattle Times. Retrieved April 10, 2002, from www.dowjonesinteractive.com. Article No. SETL0135600
Song, K.M. (2002, April). Effects from Boeing cutbacks felt. The Seattle Times. Retrieved May 18, 2002, from www.dowjonesinteractive.com. Article No. SETL0211100
Standaert, J. (2002, January). Boeing trims 2,300 more jobs. The News Tribune, Tacoma. Retrieved April 10, 2002, from www.dowjonesinteractive.com. Article No. TCMA0201900
Thomas, G.D. (2002, April). Tough times ahead. Air Transport World, 39(4), 31-33.
Webber, J.P. (2002, April 19). Boeing hurt by slowdown. Los Angeles Times. Retrieved May 8, 2002, from www.latimes.com

Friday, September 27, 2019

IS PROPAGANDA A TECHNIQUE OR A PHENOMENON Essay Example | Topics and Well Written Essays - 1000 words

IS PROPAGANDA A TECHNIQUE OR A PHENOMENON - Essay Example on, the analysis will seek to determine whether the presence of propaganda throughout the modern world is merely something that exists naturally, or whether it is a purposeful and authored process. Firstly, it should be understood that the nature and definition of propaganda itself leads the reader to assume that this process of information distribution and purposeful deception is not something that merely "happens".1 Of course, there are many instances throughout the world in which incomplete information is transmitted to the media participant; however, these inadvertent instances do not fit the conventional definition of "propaganda". Ultimately, the use of propaganda, by its very definition and nature, is intended to deceive or mislead the media participant into understanding the world, or a particular situation, within a given construct or manner. As such, it is painfully obvious that the majority of propaganda that exists is most certainly a technique by which entities, individuals, or governments attempt to sway the opinions of societal stakeholders. Therefore, the reader can adequately assume that the types of "propaganda" under discussion are more likely than not an authored process that is intended to be misleading, untrue, or inaccurate.2 As with a legal discussion of motive, the question that has thus far been represented ultimately reduces to the intention behind the way the information is represented.
In the event that a particular entity, government, or individual represents information in a willfully deceitful manner as a means of swaying individual opinions, then it is clear and apparent that the process is a technique which is engaged as a means of effecting a particular goal.3 Yet, in the event that incomplete, untrue, or inaccurate information is represented to a group or an audience with no intention to deceive or mislead, then it cannot be said that such a process is propaganda; rather, it is an inadvertent process that

Thursday, September 26, 2019

The Origins of Organizational Research (Case Study) Coursework

The Origins of Organizational Research (Case Study) - Coursework Example While Rousseau (1997) covers the same ground, particular attention in this case is paid to the Barley and Kunda article, which provides a thorough and scholarly review of the management literature dating back to the industrial revolution. The approach used in this research article is mainly theoretical in nature, as it seeks to analyse the development of the different models and theories that explore how research in management has developed over the period dating back to the industrial revolution. Various theories have been formulated by other scholars, so this subject is not new to the field of research; it is therefore imperative to comment on the existing theories and to attempt to identify any research gap in the available literature, should there be a need to formulate another theoretical framework that may be significant in conducting future research. This case cites the contribution of Chandler (1977), who posits that modern American industrial history is marked not only by the rise of large corporations and the professionalization of management but by the formulation of theories that minister to one of management's central problems: the control of complex organisations. While this is not a direct quotation, the researchers acknowledge that industrial history is a complex issue and that some scholars have previously attempted to examine the same topic. An indirect quote is significant in that it generalises the theories or any other information that exists on the topic under review. A close analysis of the Barley and Kunda article and the Campion article shows that the use of references is widespread and extensive throughout. This is a review article, and Campion posits that review articles should be more comprehensive in the references included. It can also be noted that multiple referencing is a common feature of the review article.
In attempting to emphasise a point, in some instances, the authors used

Regulatory Interventions in the 2008 US Post-Economic Crisis Assignment

Regulatory Interventions in the 2008 US Post-Economic Crisis - Assignment Example However, there is a need to generate productivity following the series of stimulus funds in order to multiply the trillions of dollars of capital infused. Otherwise, the economic recovery will be transient and may give way to another recession right after the funds are consumed. Regulations spearheaded by the Dodd-Frank Act are meant to make financial institutions and big corporations more careful in their risk management. Such regulations were found to be critical after deregulation was given a chance to work for over 30 years and yet failed, culminating in a dramatic recession. The question remaining is how funds can be effectively channelled to entrepreneurs, given past experience in which a greater part of the stimulus funds never reached the small business entrepreneurs (SBEs) who could use capital to generate more productivity, hire people, and earn profits. Most of the stimulus funds went to social welfare and large corporate bailouts. Further study is required to evaluate the possibility of reinstating the Glass-Steagall Act for the purpose of further regulating the banks to focus on diligently supplying funds to SBEs and supporting those SBEs with sufficient guidance in order to earn successfully. This could logically stop the banks' vested interest in investment portfolios, since they would not be allowed to engage in other investment activities except lending entrepreneurs what they need in order to progress.

I. Introduction
Right after the economic recession, declared by the National Bureau of Economic Research (NBER) to have lasted from December 2007 to June 2009, the phenomenon was described as not only "the longest and deepest recession of the post-World War II era" but also the "largest decline in output, consumption, and investment, and the largest increase in unemployment, of any post-war recession" (Labonte, 2010, p. 2).
Stimulus funds from the Federal Reserve worth more than a trillion dollars, along with the monetary policy of maintaining an almost zero interest rate, facilitated the recovery. An infusion of $700 billion, later reduced to $470 billion, was made into the financial system via a program called the Troubled Assets Relief Program (TARP) in October 2008. The US Government purchased real estate properties that had lost their value as a result of the recession, for the purpose of adding some liquidity to the banks. As of mid-2012, most programs under the TARP were reported closed. Major beneficiaries rescued were Fannie Mae and Freddie Mac, AIG, Citigroup, and Lehman Brothers of the financing sector, later joined by General Motors and Chrysler of the automobile sector. Saving the giant enterprises reduced the need to retrench and lay off employees. However, there were economic

Wednesday, September 25, 2019

Response Paper Essay Example | Topics and Well Written Essays - 1000 words

Response Paper - Essay Example According to Yallop, although the official cause of the sudden death was disclosed by the Vatican as a heart attack, he allegedly uncovered information that revealed otherwise. Apparently, it was theorized that the pope was poisoned for reasons that range from his uncovering of anomalies in the running and operation of the Vatican Bank to his bold plan to end the strict prohibition on the use of artificial birth control, one of the firm Catholic dogmas conservatively retained through the years. The book comprises seven chapters, of which the first six provide an effective overview of Albino Luciani's background, the kind of man he truly was, and the alleged facts surrounding the operation of the Vatican Bank (Yallop, 2007). Chapter 5 is devoted to the pope's 33 days as Pontiff, leading to the fateful night when he was allegedly poisoned. Yallop specifically inferred that six men (Marcincus, Villot, Calvi, Sindona, and Gelli) connived to orchestrate the pope's death, to wit: "I am equally convinced that one of these six men had, by the early evening of September 28th, 1978, already initiated a course of action to resolve the problems that Albino Luciani's Papacy was posing. One of these men was at the very heart of a conspiracy that applied a uniquely Italian solution" (Yallop, 2007, p. xxiv). ... The contents were highly sensitive, leaning towards suggesting the possibility of murder, connivance, and cover-ups within the strict confines of the Pope's chamber – surely one of the most guarded global institutions, given the authority and power attached to the position. Likewise, the risks included tarnishing the writer's reputation should his allegations be proven wrong, as well as the reputation of the Vatican, the Catholic organization which Yallop alleged was filled with anomalous transactions and shielded from public scrutiny.
Apparently, the risk-taking endeavor yielded beneficial and rewarding results for Yallop, with as many as 6,000,000 copies of the book sold (Yallop, 2007). Despite the optimistic side of his risk-taking pursuit, he was criticized over the veracity, credibility, and reliability of the contents of his writing. According to an article published in The Telegraph, Damian Thompson's review of Yallop's writing indicated that "Rome dismissed his book as trash… Then along came John Cornwell, an 'independent' author unsympathetic to the Vatican, who checked out Yallop's case. It crumbled into dust like an ancient parchment exposed to sunlight. The 'murder' of John Paul turned out to be just another conspiracy theory, glued together with innuendo and non sequiturs. Cornwell's book A Thief in the Night, which demonstrated that John Paul I had died of natural causes, left Yallop's theory looking jolly silly" (Thompson, 2007, pars. 1 & 2). In the most current article referring to the official statement on the cause of Pope

Tuesday, September 24, 2019

Veterinary school personal statement Example | Topics and Well Written Essays - 500 words

Veterinary school - Personal Statement Example Being an intern at the Vancouver Aquarium, I had the opportunity to work with a variety of animals such as sloths and tortoises – but one of my favorite parts of working there was sharing my passion for and knowledge of animals with guests of the aquarium. Another inspiring experience at the Vancouver Aquarium was when I shadowed the vet at the aquarium and observed him working with the animals both on exhibit and at the Marine Mammal Rescue Center, behind the scenes of the aquarium. Getting to observe Dr. Martin Haulena aid in rehabilitating injured or orphaned marine mammals was very inspiring. Another challenging part of this job is that it is not just about learning one species, but dealing with a wide array of animals and understanding them. When I think about being a vet, one of the most difficult tasks that comes to my mind is the fact that one cannot simply ask the animals what is hurting them, and that only adds emphasis to the importance of professionalism and knowledge. The important lesson I have learned is that there will be hardships in this life, but there is still incredible beauty to be seen in our broken world. In one of my favorite books, Finding Beauty in a Broken World, the author, T. T. Williams, uses the metaphor of a mosaic, writing, "mosaic celebrates brokenness and the beauty of being brought together." I want to make a change in the lives of both people and animals, creating a beautiful mosaic in this world, one made from the incredible healing connection between the lives of humans and animals. To my thinking, each living being on this planet is fascinating, and all animals possess their own specific beauty that adds diversity and color to the world. I took great pleasure in working with Laura Vello and taking care of horses, as well as working at Westbury Vet. At the same time, having had some work experience at the zoo, I have discovered a strong aptitude for working with wild animals

Monday, September 23, 2019

Vehicle Car service models for DMS Essay Example | Topics and Well Written Essays - 2500 words

Vehicle Car service models for DMS - Essay Example This use case presents the main business operations performed throughout the business. It outlines how the car-servicing business performs its different operations, which include taking appointments, servicing, and payment. In the overall working and handling of the system, different decisions and assessments need to be made when giving appointments and dealing with payments. This use case offers better insight into the different business operations during development. The diagram shows the main procedures and the flow of business operations, elaborating the main sequence of operations and procedures performed through the overall business processes regarding car servicing. This is a decision- and action-based approach that demonstrates the overall behaviour of the business dealings and operational handling. 1- In the BPMN diagrams for the business use cases I have presented the same scenario of operations as in the use case diagram. Here I have outlined the main decision points and the areas where a decision can be taken before moving to the next level. In the vehicle-servicing process, the initial decision is based on whether the customer is new or returning. If the customer is a returning one, we assign a log number and issue the appropriate appointment. In the case of a new customer, we need to take their complete information, feed it into the main database, and then issue the log and registration numbers. We then take the next decision regarding the available working slot, that is, the time slot for servicing the vehicle. If a slot is available we issue the appointment; otherwise we offer the next available date or time. 2- The scenario is about the detailed analysis of vehicle servicing and estimation of the cost. When a car is serviced and some new components are added in this scenario a
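The appointment decision flow described in point 1 (returning vs. new customer, then slot availability) can be sketched in code. This is a minimal illustrative sketch; all names (`book_appointment`, the registration/log numbering scheme) are hypothetical and not part of the actual DMS design.

```python
# Hypothetical sketch of the appointment decision flow described above.
customers = {}                       # registration number -> customer record
available_slots = ["09:00", "10:30"]  # assumed working slots

def book_appointment(name, reg_no=None):
    # Decision 1: new or returning customer?
    if reg_no is None or reg_no not in customers:
        reg_no = f"REG-{len(customers) + 1}"              # issue registration number
        customers[reg_no] = {"name": name,
                             "log_no": f"LOG-{len(customers) + 1}"}
    # Decision 2: is a working slot available?
    if available_slots:
        slot = available_slots.pop(0)
        return reg_no, f"appointment at {slot}"
    return reg_no, "no slot free; next available date offered"
```

A returning customer would pass their existing `reg_no`, skipping the registration step, exactly as the BPMN decision point suggests.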

Sunday, September 22, 2019

The Critical Period (1781 - 1789) Essay Example for Free

The Critical Period (1781–1789) Essay The time period between 1781 and 1789 is often referred to as the Critical Period, and with good reason. As a newly formed country, America had a lot to lose if it did not survive and prove itself to the world as well as to its citizens. Going into the Critical Period, the United States was run under the Articles of Confederation, but the lack of a centralized government soon proved the Articles to be inept. The problems with the Articles appeared almost upon completion. The fact that full state approval was needed to pass any official proclamation meant that Congress never had any real power. Such was the case when, in 1783, the Rhode Island Assembly refused to place any taxes on imported goods. Because Congress wasn't given any power to enforce the laws, only to suggest that states enforce them, the economy as well as national unity suffered. The power to tax was a crucial power needed by the government. Under the Articles of Confederation the US economy was extremely fragile, having just emerged from depression. The market value would jump thousands of dollars one year and fall the next. The power to tax was needed to help stabilize the volatile market. The government also needed to be centralized in order to prove to other countries that the states were united. Proving to be unified would allow them more leverage when dealing with foreign policies. In a speech made to Congress, John Jay told of negotiations with Spain's minister, Diego de Gardoqui, in which Spain denied the US navigation of the Mississippi River because he did not see the US as unified and knew there was nothing the US could do about it. The government also needed the power to create treaties and alliances; this was extremely important to the survival of the country. The United States was weakened by the war and needed alliances for protection in case of an invasion.
When evaluating these documents it becomes obvious that, while not entirely without effect, the Articles of Confederation were ultimately ineffective. Had the United States continued to operate under the Articles, it would most assuredly have fallen to economic and political problems.

Saturday, September 21, 2019

Improving the Performance of Overbooking

Improving the Performance of Overbooking Improving the Performance of Overbooking by Application Collocation Using an Affinity Function ABSTRACT: One of the main features provided by clouds is elasticity, which allows users to dynamically adjust resource allocations depending on their current needs. Overbooking describes resource management in any manner where the total available capacity is less than the theoretical maximal requested capacity. This is a well-known technique to manage scarce and valuable resources that has long been applied in various fields. The main challenge is how to decide the appropriate level of overbooking that can be achieved without impacting the performance of the cloud services. This paper focuses on utilizing an overbooking framework that performs admission-control decisions based on fuzzy-logic risk assessments of each incoming service request. It utilizes a collocation (affinity) function to define the similarity between applications; similar applications are then collocated for better resource scheduling. I. INTRODUCTION Scheduling, or placement, of services is the process of deciding where services should be hosted. Scheduling is a part of the service deployment process and can take place both externally to the cloud, i.e., deciding by which cloud provider the service should be hosted, and internally, i.e., deciding which PM in a datacenter a VM should run on. For external placement, the decision on where to host a service can be taken either by the owner of the service or by a third-party brokering service. In the first case, the service owner maintains a catalog of cloud providers and negotiates with them the terms and costs of hosting the service. In the latter case, the brokering service takes responsibility for both the discovery of cloud providers and the negotiation process.
Regarding internal placement, the decision of which PMs in the datacenter should host a service is taken when the service is admitted into the infrastructure. Depending on criteria such as the current load of the PMs, the size of the service, and any affinity or anti-affinity constraints [23], i.e., rules for co-location of service components, one or more PMs are selected to run the VMs that constitute the service. Figure 1 illustrates a scenario with new services of different sizes (small, medium, and large) arriving into a datacenter where a number of services are already running. Figure 1: Scheduling in VMs Overload can happen in an oversubscribed cloud. Conceptually, there are two steps for handling overload, namely detection and mitigation, as shown in Figure 2. Figure 2: Oversubscription view A physical machine has CPU, memory, disk, and network resources. Overload on an oversubscribed host can manifest for each of these resources. When there is memory overload, the hypervisor swaps pages from its physical memory to disk to make room for new memory allocations requested by VMs (Virtual Machines). The swapping process increases disk read and write traffic and latency, causing the programs to thrash. Similarly, when there is CPU overload, VMs and the monitoring agents running with VMs may not get a chance to run, thereby increasing the number of processes waiting in the VMs' CPU run queues. Consequently, any monitoring agents running inside the VMs also may not get a chance to run, rendering the cloud provider's view of the VMs inaccurate. Disk overload in a shared SAN storage environment can increase the network traffic, whereas in local storage it can degrade the performance of applications running in VMs. Lastly, network overload may result in an underutilization of CPU, disk, and memory resources, rendering ineffective any gains from oversubscription. Overload can be detected by applications running on top of VMs, or by the physical host running the VMs.
Each approach has its pros and cons. The applications know their performance best, so when they cannot obtain the provisioned resources of a VM, it is an indication of overload. The applications running on VMs can then funnel this information to the management infrastructure of the cloud. However, this approach requires modification of the applications. In overload detection within the physical host, the host can infer overload by monitoring the CPU, disk, memory, and network utilization of each VM process, and by monitoring the usage of each of its resources. The benefit of this approach is that no modification to the applications running on VMs is required. However, overload detection may not be fully accurate. II. RELATED WORK The scheduling of services in a datacenter is often performed with respect to some high-level goal [36], like reducing energy consumption, increasing utilization [37] and performance [27], or maximizing revenue [17, 38]. However, during operation of the datacenter, the initial placement of a service might no longer be suitable, due to variations in application and PM load. Events like the arrival of new services, existing services being shut down, or services being migrated out of the datacenter can also affect the quality of the initial placement. To avoid drifting too far from an optimal placement, which would reduce the efficiency and utilization of the datacenter, scheduling should be performed repeatedly during operation. Information from monitoring probes [23], and events such as timers, the arrival of new services, or the startup and shutdown of PMs, can be used to determine when to update the mapping between VMs and PMs. Scheduling of VMs can be considered a multi-dimensional variant of the Bin Packing problem [10], where VMs with varying CPU, I/O, and memory requirements are placed on PMs in such a way that resource utilization and/or other objectives are maximized.
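A greedy heuristic for this bin-packing view of VM placement can be sketched as follows. This is a simplified, single-dimension version for illustration; real schedulers must consider CPU, I/O, and memory together, and the function name and shapes here are assumptions.

```python
def first_fit(vms, pm_capacity, num_pms):
    """Place each VM on the first PM with enough remaining capacity
    (a one-dimensional sketch of greedy bin packing)."""
    remaining = [pm_capacity] * num_pms   # free capacity per PM
    placement = {}
    for vm_id, demand in vms:
        for pm in range(num_pms):
            if remaining[pm] >= demand:
                remaining[pm] -= demand
                placement[vm_id] = pm     # VM assigned to this PM
                break
        else:
            placement[vm_id] = None       # no PM can accommodate the VM
    return placement
```

As the surrounding text notes, such approximation algorithms are fast but do not normally generate optimal packings.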
The problem can be addressed, e.g., by using integer linear programming [52] or by performing an exhaustive search of all possible solutions. However, as the problem is complex and the number of possible solutions grows rapidly with the number of PMs and VMs, such approaches can be both time and resource consuming. A more resource-efficient, and faster, way is the use of greedy approaches like the First-Fit algorithm, which places a VM on the first available PM that can accommodate it. However, such approximation algorithms do not normally generate optimal solutions. All in all, approaches to solving the scheduling problem often lead to a trade-off between the time to find a solution and the quality of the solution found. Hosting a service in the cloud comes at a cost, as most cloud providers are driven by economic incentives. However, the service workload and the available capacity in a datacenter can vary heavily over time, e.g., cyclically during the week but also more randomly [5]. It is therefore beneficial for providers to be able to dynamically adjust prices over time to match the variation in supply and demand. Cloud providers typically offer a wide variety of compute instances, differing in the speed and number of CPUs available to the virtual machine, the type of local storage system used (e.g., single hard disk, disk array, SSD storage), whether the virtual machine may be sharing physical resources with other virtual machines (possibly belonging to different users), the amount of RAM, network bandwidth, etc. In addition, the user must decide how many instances of each type to provision. In the ideal case, more nodes means faster execution, but issues of heterogeneity, performance unpredictability, network overhead, and data skew mean that the actual benefit of utilizing more instances can be less than expected, leading to a higher cost per work unit.
These issues also mean that not all the provisioned resources may be optimally used for the duration of the application. Workload skew may mean that some of the provisioned resources are (partially) idle and therefore do not contribute to performance during those periods, but still contribute to cost. Provisioning larger or higher-performance instances is similarly not always able to yield a proportional benefit. Because of these factors, it can be very difficult for a user to translate their performance requirements or objectives into concrete resource specifications for the cloud. There have been several works that attempt to bridge this gap, which mostly focus on VM allocation [HDB11, VCC11a, FBK+12, WBPR12] and determining good configuration parameters [KPP09, JCR11, HDB11]. Some more recent work also considers shared resources such as network or data storage [JBC+12], which is especially relevant in multi-tenant scenarios. Other approaches consider the provider side of things, because it can be equally difficult for a provider to determine how to optimally service resource requests [RBG12]. Resource provisioning is complicated further because performance in the cloud is not always predictable, and is known to vary even among seemingly identical instances [SDQR10, LYKZ10]. There have been attempts to address this by extending resource provisioning to include requirement specifications for things such as network performance, rather than just the number and type of VMs, in an attempt to make performance more predictable [GAW09, GLW+10, BCKR11, SSGW11]. Others try to explicitly exploit this variance to improve application performance [FJV+12]. Accurate provisioning based on application requirements also requires the ability to understand and predict application performance.
There are a number of approaches to estimating performance: some are based on simulation [Apad, WBPG09], while others use information based on workload statistics derived from debug execution [GCF+10, MBG10] or from profiling sample data [TC11, HDB11]. Most of these approaches still have limited accuracy, especially when it comes to I/O performance. Cloud platforms run a wide array of heterogeneous workloads, which further complicates this issue [RTG+12]. Related to provisioning is elasticity, which means that it is not always necessary to determine the optimal resource allocation beforehand, since it is possible to dynamically acquire or release resources during execution based on observed performance. This suffers from many of the same problems as provisioning, as it can be difficult to accurately estimate the impact of changing the resources at runtime, and therefore to decide when to acquire or release resources, and which ones. Exploiting elasticity is also further complicated when workloads are statically divided into tasks, as it is not always possible to preempt those tasks [ADR+12]. Some approaches for improving workload elasticity depend on the characteristics of certain workloads [ZBSS+10, AAK+11, CZB11], but these characteristics may not apply in general. It is therefore clear that it can be very difficult to decide, for either the user or the provider, how to optimally provision resources and to ensure that the provisioned resources are fully utilized. There is very active interest in improving this situation, and the approaches proposed in this thesis similarly aim to improve provisioning and elasticity by mitigating common causes of inefficient resource utilization. III. PROPOSED OVERBOOKING METHOD The proposed model utilizes the concept of overbooking introduced in [1] and schedules the services using the collocation function. 3.1 Overbooking: The idea of overbooking is to exploit overestimation of required job execution time.
The main notion of overbooking is to schedule a number of additional jobs beyond the nominal capacity. An overbooking strategy used in an economic model can improve the system utilization rate and occupancy. In the overbooking strategy every job is associated with a release time and a finishing deadline, as shown in Fig 3. Here, successful execution earns a fee, while violating the deadline incurs a penalty. Figure 3: Strategy of Overbooking Data centers can also take advantage of those characteristics to accept more VMs than the physical resources of the data center nominally allow. This is known as resource overbooking or resource overcommitment. More formally, overbooking describes resource management in any manner where the total available capacity is less than the theoretical maximal requested capacity. This is a well-known technique to manage scarce and valuable resources that has long been applied in various fields. Figure 4: Overview of Overbooking The above figure shows a conceptual overview of cloud overbooking, depicting how two virtual machines (gray boxes) running one application each (red boxes) can be collocated inside the same physical resource (Server 1) without (noticeable) performance degradation. The overall components of the proposed system are depicted in figure 5. Figure 5: Components of the proposed model The complete process of the proposed model is explained below: The user requests services from the scheduler. The scheduler first verifies the AC and then calculates the risk of that service. If a service is already being scheduled, the new request is stored in a queue. FIFO order is used to schedule the tasks. To complete the scheduling, the collocation function keeps the intermediate data nodes side by side, and the node is selected based on its resource-provisioning capacity. If the first node does not have the capacity to complete the task, the collocation function searches the next node until a node with sufficient capacity is found.
The Admission Control (AC) module is the cornerstone of the overbooking framework. It decides whether a new cloud application should be accepted or not, by taking into account the current and predicted status of the system and by assessing the long-term impact, weighing improved utilization against the risk of performance degradation. To make this assessment, the AC needs the information provided by the Knowledge DB regarding predicted data center status and, if available, predicted application behavior. The Knowledge DB (KOB) module measures and profiles the different applications' behavior, as well as the resources' status over time. This module gathers information regarding the CPU, memory, and I/O utilization of both virtual and physical resources. The KOB module has a plug-in architectural model that can use existing infrastructure monitoring tools, as well as shell scripts. These are interfaced with a wrapper that stores information in the KOB. The Smart Overbooking Scheduler (SOS) allocates both the new services accepted by the AC and the extra VMs added to deployed services by scale-up, also de-allocating the ones that are no longer needed. Basically, the SOS module selects the best node and core(s) to allocate the new VMs based on the established policies. These decisions have to be carefully planned, especially when performing resource overbooking, as physical servers have limited CPU, memory, and I/O capabilities. The risk assessment module provides the Admission Control with the information needed to take the final decision of accepting or rejecting the service request, as a new request is only admitted if the final risk is below a pre-defined level (risk threshold). The inputs for this risk assessment module are: Req – the CPU, memory, and I/O capacity required by the new incoming service. UnReq – the difference between the total data center capacity and the capacity requested by all running services.
Free – the difference between the total data center capacity and the capacity used by all running services. Calculating the risk of admitting a new service involves many uncertainties. Furthermore, choosing an acceptable risk threshold has an impact on data center utilization and performance. High thresholds result in higher utilization, but at the expense of exposing the system to performance degradation, whilst lower values lead to lower but safer resource utilization. The main aim of this system is to use the affinity function, which aids the scheduling system in deciding which applications are to be placed side by side (collocated). The affinity function utilizes threshold properties to define the similarity between applications. Similar applications are then collocated for better resource scheduling. IV. ANALYSIS: The proposed system is tested for the time taken to search for and schedule the resources using collocation, and is compared with the system developed in [1]. The system in [1] does not contain a collocation function, so its scheduling process takes more time than that of the proposed system. The comparison results are depicted in figure 6. Figure 6: Time taken to Complete Scheduling The graphs clearly depict that the modified (proposed) overbooking system takes equal time to complete the scheduling irrespective of the number of requests.
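The affinity idea above, grouping applications whose pairwise similarity exceeds a threshold so they can be collocated, might be sketched like this. The similarity measure here (comparing CPU profiles) is an illustrative assumption; the paper does not specify the exact metric, and all names are hypothetical.

```python
def similarity(app_a, app_b):
    """Toy affinity measure: 1 minus the normalized difference
    of the two applications' CPU profiles (an assumption)."""
    return 1.0 - abs(app_a["cpu"] - app_b["cpu"]) / max(app_a["cpu"], app_b["cpu"])

def collocate(apps, threshold=0.7):
    """Group applications whose similarity to every existing group
    member exceeds the threshold; each group is collocated together."""
    groups = []
    for app in apps:
        for group in groups:
            if all(similarity(app, member) >= threshold for member in group):
                group.append(app)
                break
        else:
            groups.append([app])   # no sufficiently similar group found
    return groups
```

With a threshold of 0.7, two applications with CPU profiles 10 and 9 would be collocated, while one with profile 2 would be placed in its own group.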

Friday, September 20, 2019

Nested transactions

Nested transactions Q1. Executing nested transactions requires some form of coordination. Explain what a coordinator should actually do? In order to make the answer to this question more solid and clear, let me start with a brief explanation of what a nested transaction actually is. A nested transaction is a new transaction begun within the scope of another transaction. Several transactions can begin from the scope of one transaction; the transaction that starts the nested transaction is called the parent of the nested transaction. The features of nested transactions, and the reasons they exist, are listed below. Nested transactions enable an application to isolate errors in certain operations. Nested transactions allow an application to treat several related operations as a single operation. Nested transactions can function concurrently. Now coming to the exact question: the function of a coordinator is to take the primary requests automatically in the order in which it receives them. It should check the unique identifier in case it has already received and executed the request, and if it has, it should resend the response back. Servers which perform requests in a distributed transaction need to communicate with each other to coordinate their actions; therefore, a few processes are involved when the coordinator is in play. In order to keep track of the participants and their information, the coordinator keeps a list of references to them whenever they are involved, as this will be helpful at the time of aborting the transaction. When the client sends a request it first reaches the coordinator, which then sends back a unique ID to the client, ensuring that the coordinator is now responsible for the exchange of transactions.
If at some instant a new participant joins the transaction, the coordinator should be informed; the coordinator then updates its list of participants, and this is where the join method in the coordinator interface is used. We argued that distribution transparency may not be in place for pervasive systems. This statement is not true for all types of transparencies. Explain what you understand by pervasive system. Give an example? In general, pervasive systems, also well known as ubiquitous computing, can easily be understood from the term ubiquitous, which means being everywhere at the same time. When applying this logic to technology, the term implies that technology is everywhere and we can use it irrespective of location and time. It is important to note that pervasive systems are built from a number of different distributed components integrated and tagged together that can be invisible, and also visible at times, which in general terms is known as transparency. The following points make clear why pervasive systems are important in the current context. Pervasive systems are changing our day-to-day activities in various ways. When it comes to using today's digital equipment, users tend to communicate in different ways, be more active, and conceive and use geographical spaces differently. In addition, pervasive systems are global and local, practically everywhere, social and personal, public and private, invisible and visible. From my understanding, reading and gathering, it is true that distribution transparency may not be in place for pervasive systems, but arguably there are rare instances in which it can be, because the backend of a pervasive system can be made invisible, as the actual user need not know how the process takes place behind the scenes. Here is a typical example of how a pervasive system can be involved in a human's day-to-day life. Assume a lecturer is preparing himself for a lecture presentation.
The lecture room is on a different campus, a 15-minute walk from his own. It is time to leave and he is not quite ready. He takes his HTC palmtop, a Wi-Fi enabled handheld device, and walks out. The pervasive system transfers his unfinished work from his laptop to his HTC palmtop, so that he can make his edits during his walk through voice commands. The system knows where the lecturer is heading by means of the campus location tracker. It downloads the presentation to the projection computer on which he is going to present and keeps it prepared for the lecture to begin. By the time the lecturer reaches his class he has made the final changes. As the presentation proceeds, he is about to display a slide with a diagram containing numerical information regarding forecasts and budgets. The system immediately realises that there might be a mistake in it and warns the lecturer, who, realizing this at the right time, skips the slide and moves on to other topics to keep the presentation smooth, leaving the students impressed by his quality presentation. Q2. Consider a chain of processes P1, P2 ... Pn implementing a multitiered client-server architecture. Process Pi is a client of process Pi+1, and Pi will return a reply to Pi-1 only after receiving a reply from Pi+1. What are the main problems with this organization when taking a look at the request-reply performance at process P1? From my understanding, a multitiered client-server architecture basically refers to one where more components, in terms of hardware and, more importantly, software, are added and tied together to construct a complete architecture in which presentation, application processing, and data management are logically separated.
The main problem with this organization is latency: because each Pi only replies to Pi-1 after hearing back from Pi+1, the request-reply time observed at P1 is the sum of the processing and communication delays of every tier in the chain. If the chain is long (n is large), or if any single tier is overloaded, a bottleneck arises and the whole request slows down, with a chain of processes left waiting. A multitier architecture also does not run on its own; other hardware and software components are involved, and if any of these components drops in performance the whole architecture sees a drop in performance. Another problem is that it is more difficult to program and test than simpler architectures, because more devices have to communicate in order to complete a client's request.

Q3. Strong mobility in UNIX systems could be supported by allowing a process to fork a child on a remote machine. Explain how this would work.

It is easier to get an initial understanding once the logic behind the term "forking a child" is made clear. Forking in UNIX refers to the process by which the parent's image is completely copied to the child; this is how UNIX starts a new process. Basically, it works as follows: the parent process, which already exists, forks a child process, which is the newly created process. The newly created child process then gets a duplicate copy of the parent's data; at that point there are two processes with the same data, and the child process can be activated. Creating a child process involves two basic steps:
- The system creates an exact copy of the parent process by forking.
- Since processes are built with different code, the code of the parent process is then substituted with the code the child is to run.
The system must also reserve ample resources to create the child process and a memory map for it. As a result, the child process inherits all the system variables of the parent process. To support strong mobility, the same mechanism would be carried out across machines: the parent's memory image and execution state would be copied over the network to the remote machine, where the child process is created and resumes execution from the point of the fork.
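These fork semantics can be demonstrated directly; a minimal sketch, assuming a POSIX platform (Python's os.fork wraps the underlying UNIX call, and the variable name is illustrative):

```python
import os

counter = 41   # data the child will receive its own copy of

pid = os.fork()            # the parent's image is duplicated into the child
if pid == 0:
    # Child: works on its own copy of the parent's data.
    counter += 1
    print("child sees counter =", counter, flush=True)
    os._exit(0)            # child terminates without returning here
else:
    # Parent: waits for the child, then observes its own unchanged copy.
    os.waitpid(pid, 0)
    print("parent still sees counter =", counter)
```

In practice the duplication is usually implemented copy-on-write, which mitigates the time and memory cost of copying the parent's environment.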
The only issue with this is that forking consumes time and memory to duplicate the parent's environment and to create the unique structures for the child.

Q4. Describe how connectionless communication between a client and a server proceeds when using sockets?

The following explains how connectionless communication takes place between a client and a server with the help of programmed sockets. The communication uses UDP: the server receives connectionless datagrams from many clients and prints them. Initially, a socket is constructed in the unconnected state, which means the socket stands on its own and is not associated with any destination beyond its boundary. The connect subroutine binds a permanent destination to the socket, i.e. the IP address of the server and the port number on which it listens for requests, and puts the socket in the connected state. Once this is done behind the scenes, an application program calls the subroutine to establish the association before it prepares itself to transfer data through the socket. More importantly, sockets used with connectionless datagram (i.e. UDP) services do not need to be connected before they are used, but connecting them provides a more efficient and effective way to transfer data between the client and the server without specifying the destination each and every time. Note: processes cannot share ports at any time, as a port is assigned permanently to the desired connection itself; that said, UDP multicast does have the ability to share port numbers, using a slightly different concept that is not discussed in this answer. The diagram below illustrates the example clearly.
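The connectionless exchange described in Q4 can also be sketched in code; a minimal sketch assuming Python's standard socket module (host, port, and message contents are illustrative):

```python
import socket
import threading

HOST = "127.0.0.1"

# Server side: an unconnected datagram socket bound to a known address.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind((HOST, 0))              # bind to any free UDP port
PORT = srv.getsockname()[1]      # the port clients will send to

def server():
    data, addr = srv.recvfrom(1024)    # receive one datagram from any client
    srv.sendto(b"echo:" + data, addr)  # reply to whoever sent it
    srv.close()

t = threading.Thread(target=server)
t.start()

# Client side: connect() on a UDP socket performs no handshake; it only
# fixes the default destination, so the exchange remains connectionless.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.connect((HOST, PORT))        # now send()/recv() need no explicit address
cli.send(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)
```

Note that the server never connects its socket at all, which is why it can serve datagrams from many clients on the same socket.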
Q5. The Request-Reply Protocol underlies most implementations of remote procedure calls and remote method invocations. In the Request-Reply Protocol, the request messages carry a request ID so that the sender can match answer messages to the requests it sent out. Task: Describe a scenario in which a client could receive a reply from an earlier request.

Before answering the question directly, let me briefly explain what the Request-Reply Protocol is and what it is used for. The request-reply protocol (RRP) is an efficient special-purpose protocol for distributed systems, based on UDP datagrams. Its main characteristics are listed below:
- When RRP is in play, the reply message from the server forms an acknowledgement of the message requested by the client, avoiding separate acknowledgement overhead.
- There is no guarantee that a request message that is sent will result in the method being executed.
- Retransmission and identification of messages can increase reliability.
- RRP can keep a history of reply messages, to avoid re-executing a method when a request is retransmitted.

Now to the question. Assume a client sends a request to the server and waits for a reply. The client should get the requested reply within a certain period of time; if it does not, it times out and sends the request again. For idempotent operations, i.e. operations that can be performed repeatedly with the same effect as if they had been performed exactly once, the server can simply re-execute the operation. A client receives a reply to an earlier request precisely in this situation: the original reply was delayed rather than lost, so after the client has timed out and retransmitted, the late reply to the first transmission still arrives, and the request ID is what allows the client to recognise it as belonging to the earlier request. If the server receives the second request, it can also provide a conditional acknowledgement, indicating that it guarantees a reply so that the client need not make any more requests for the same message. The diagram below illustrates the same scenario.
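How the request ID lets the client recognise a late reply from an earlier request can also be sketched (a hypothetical sketch; the message format and function names are invented for illustration):

```python
# The client tags each request with a fresh ID and only delivers replies
# whose ID matches the request it is currently waiting for.
next_request_id = 0
pending_id = None

def send_request(payload):
    global next_request_id, pending_id
    next_request_id += 1
    pending_id = next_request_id
    return {"id": pending_id, "payload": payload}

def accept_reply(reply):
    # A reply to an earlier, abandoned request carries a stale ID
    # and is discarded rather than delivered as the current answer.
    return reply["id"] == pending_id

req1 = send_request("op-A")        # times out; client gives up and retransmits
req2 = send_request("op-A")        # a new request with a new ID
late_reply = {"id": req1["id"], "result": "done"}   # delayed reply arrives now
fresh_reply = {"id": req2["id"], "result": "done"}

print(accept_reply(late_reply))    # the stale reply is rejected
print(accept_reply(fresh_reply))   # the current reply is accepted
```

Because the late reply carries the ID of the abandoned request, the client can discard it instead of mistaking it for the answer to the current request.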
The Request-Reply-Acknowledge (RRA) protocol is a variant of the Request-Reply (RR) protocol in which the client has to acknowledge the server's reply. Assume that the operations requested by the client are not idempotent, that is, their outcome is different if they are executed a second time. Task: For each of the two protocols, RR and RRA, describe which information the server has to store in order to reliably execute the requests of the client and return information about the outcome. Discuss as well when the server can delete which piece of information under the two protocols.

The main difference between Request-Reply (RR) and Request-Reply-Acknowledge (RRA) is that in RR the request messages carry a request ID so that the sender can match answer messages to the requests it sent out, whereas in RRA the client additionally acknowledges the server's reply messages, and the acknowledgement message contains the ID of the reply being acknowledged. For transmitting requests at the transport layer, the request-reply protocol is effective because:
- No acknowledgements are necessary at the transport layer.
- Since it is often built on UDP datagrams, connection-establishment overhead is avoided.
- There is no necessity for flow control, as only small amounts of data are transferred.
In order to reliably execute the requests made by clients, the server has to store the information carried in the request ID, so that it can identify the client and respond to its request immediately. The request ID contains the following information, which the server has to store:
- the identifier of the sending process
- the IP address of the client
- the port number through which the request arrived
- an integer sequence number incremented by the sender with every request

Under RR with non-idempotent operations, the server must additionally maintain a history of replies, so that when a duplicate request arrives it can re-send the stored result from the previous execution instead of executing the operation a second time. The limitation is that under RR the server cannot tell when the client has actually received a reply, so history entries can only be deleted heuristically, for example after a timeout or when a newer request from the same client implies the earlier reply arrived, and the history can grow large. To limit the size of the history and make the exchange more reliable and efficient, the Request-Reply-Acknowledge protocol is used: the client's acknowledgement tells the server exactly when a given reply has been received, so the server can delete the corresponding history entry as soon as the acknowledgement arrives.
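The server-side bookkeeping under the two protocols can be sketched as follows (a hypothetical sketch; the data structures and names are invented for illustration):

```python
# Server-side reply history: under RR, entries can only be evicted
# heuristically; under RRA, an entry is deleted once the client's
# acknowledgement for that reply arrives.
history = {}   # request_id -> stored reply

def handle_request(request_id, execute):
    # Duplicate request: re-send the stored reply instead of re-executing
    # the (non-idempotent) operation a second time.
    if request_id in history:
        return history[request_id]
    reply = execute()
    history[request_id] = reply
    return reply

def handle_ack(request_id):
    # RRA only: the acknowledgement tells the server that the client has
    # the reply, so the history entry can be discarded immediately.
    history.pop(request_id, None)

calls = []
r1 = handle_request(("client-1", 7), lambda: calls.append(1) or "balance=100")
r2 = handle_request(("client-1", 7), lambda: calls.append(1) or "balance=100")
print(r1 == r2, len(calls))          # same reply, executed only once
handle_ack(("client-1", 7))
print(("client-1", 7) in history)    # entry deleted after the acknowledgement
```

The request ID key here combines the client identity with the sequence number, matching the fields listed above.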