Thursday, August 27, 2020

Fitness Club System Essay Example | Topics and Well Written Essays - 2750 words

Fitness Club System - Essay Example The premise of this system is to manage effectively the available resources involved in The Fitness Center, namely the members, the fitness consultants and the higher management of the company involved in strategic decisions. The first section details the primary purpose of developing an IT solution for the company and the advantages to the company's stakeholders. It follows with the roles of the people involved in this project. The information system deployed here will help the company and its members identify their individual goals and contribute to the overall objective of the company to create a competitive edge over others in the same business. Adopting an information system, which admittedly demands a good deal of time, effort and money, would place the company in the digital world to handle all its business processes, however small or large they may be, effectively keeping a record of activities and covering all the deficiencies of the manual system. Members: Current and prospective members would use the system to feed in their own data and the purpose for which they have joined the fitness centre. Members may have various objectives when joining the centre. Some come for simple workout routines while others have different goals. The system would handle all of these and keep the latest details regarding performance and other measures, such as future interests. Solution: This system takes user inputs on a large variety of questions to analyse their needs and future goals and give them the best of what they want. Constant monitoring is an important activity. They form an important part of the system and handle a lot of tasks relating to the members' activities and enrolment in several other programmes and future interests. They interact with the management, providing important information regarding members and their

Saturday, August 22, 2020

Corporate Governance and Finance Essay Example | Topics and Well Written Essays - 3000 words

Corporate Governance and Finance - Essay Example Company Background: Apple Inc. was incorporated in 1977 and is headquartered in California, United States of America. The company, along with its subsidiaries, designs, manufactures, and sells mobile communication devices, personal computers, media devices, and portable digital music players, among others. Apple Inc. also sells a range of related services, software, networking solutions, peripherals, digital content and various kinds of applications. Apple Inc. caters to a wide array of customers, ranging from individual consumers to small and mid-sized enterprises and education, corporate and government customers (Apple Inc. (a), 2012). The products as well as services offered by Apple Inc. comprise the iPhone, Mac, iPod, iPad and Apple TV, in addition to a variety of professional and consumer software applications. Apple Inc. also provides the iOS, iCloud, and Mac OS X operating systems, along with a range of accessory, service and support offerings. Apple Inc. sells and distributes digital content as well as applications by means of the App Store, iTunes Store, Mac App Store and iBookstore. The Company markets its products throughout the globe via its stores, both online and retail, in addition to a direct sales force. Apple Inc. also sells through wholesalers, intermediary cellular network carriers, retailers, as well as value-added resellers. Moreover, Apple Inc. also markets a range of third-party iPhone, Mac, iPad, and iPod compatible products, such as application software, printers, speakers, headphones, storage devices, as well as numerous other accessories and peripherals, through its retail and online stores (Apple Inc. (a), 2012). Company History: Apple was founded by Steve Jobs and Steve Wozniak in 1976. Initially, the... This essay stresses that the subject of corporate governance is associated with the roles and accountabilities of a business organization's Board of Directors in managing the business and their relationship with the organization's shareholders as well as other stakeholders. Distinctively, in any corporate organization the full-time executive directors have extensive powers concerning the dealings and affairs of the organization they are paid to manage on behalf of the shareholders. Even so, the executive directors may not always bear the interests of the shareholders in mind while carrying out their official duties. Consequently, this has brought about efforts to make executives increasingly accountable for their strategies and actions. This paper concludes that Apple Inc. practices strong corporate governance principles and consequently the company has not faced any major incidents of conflict of interest. The comprehensive assessment of the corporate governance as well as the code of conduct of Apple Inc. revealed that the Company maintains strict guidelines and consistently strives to protect the interests of its stakeholders. This strict adherence to the required market practices has brought about positive fortunes for the Company. The appraisal of the financial position of the company showed that the position of the company has also strengthened over the years, and the stock price movements revealed that Apple has given good value for shareholders' money.

Friday, August 21, 2020

The Popularity of Books About Saying Yes to Life... and Why We Still Say No

The Popularity of Books About Saying Yes to Life... and Why We Still Say No This past week, I started re-reading Yes Man, by Danny Wallace. A comic memoir about a man who decides to say yes to everything for the rest of the year, it's a fun read. Plus it always makes me think: Gosh darn it, Steph. You really need to say yes to life more. Meanwhile, as I've been re-reading, I've said no to: a mommy and me music class at a local nursing home, a pumpkin parade party for toddlers, a Halloween Dance Party at the dance studio where my husband and I used to take salsa lessons, and a deep restorative experience at a yoga studio at which I used to teach. And I love deep restorative experiences. Clearly, I learn nothing from the books I read. Yet I continue to hoover them up like Pixy Stix, and I know I'm not the only one. Just this past year, Shonda Rhimes blew us all away with Year of Yes, a memoir about how yes changed her life. Even more recently, Mindy Kaling followed up Is Everyone Hanging Out Without Me? with Why Not Me?, a collection of essays on her ongoing journey to build a happy life. And though it's not as well known, I was completely charmed by Noelle Hancock's My Year with Eleanor, a work of stunt journalism in which the author determined to do one thing every day that scared her. On the flip side are those books that show us an alternate reality in which we might say no to our boring, humdrum lives (which, by extension, means saying yes to something more exciting). Jon Krakauer's Into the Wild, for example, captured imaginations with its account of a young man who walked away from all of his worldly possessions and walked (ba-DUM-bum) into the wild. Frances Mayes's Under the Tuscan Sun allowed us to imagine that it might actually be possible to run away to Italy, buy a farmhouse, gorge ourselves on pasta, and find love. And on the fiction side, I just fell madly in love with Gayle Forman's Leave Me, in which an overworked, underappreciated mother with a full-time job has a heart attack. After her return home from the hospital, the narrator gets the sense that her family resents the time she's taking to recuperate. Overwhelmed and angry, she decides to run away. (Let me just leave this book right here on my husband's pillow as a warning.) What is it we love about inspirational books that challenge us to embrace life, and why is it that, when we turn the last page, we usually just go back to business as usual? Forman's Leave Me pokes and prods at a possible answer. But when I explore that question for myself, I imagine several possible explanations. For one, as a work-at-home mom on a freelancer's salary, I don't encounter many opportunities to say yes to anything life-changing or exciting, nor do I have the money or opportunity to peace out on my obligations and eat-pray-love my way straight outta Jersey. In fact, if I spent an entire day saying yes to every request or invitation, Danny Wallace-style, I'd likely just end up with a toddler jacked up on yogurt, and a list of social gatherings I couldn't attend because, again: toddler. I assume many other readers are similarly hamstrung by reality. For another, as much as I love the idea of living under the Tuscan sun, I am lazy and also comfortable and also set in my ways. My life might be boring, but I'm sorta happy with my boring life. Saying yes to a less boring life sounds exhausting. And following naturally from my previous point is the fact that, in certain cases, books are read as escapism, and nothing more.
They're a way to live vicariously through the experiences, or fictional realities, of others, giving us that sweet contact high before we finish the book and dive back into our day-to-day. What are your favorite books that let you temporarily imagine a life of yes?

Monday, May 25, 2020

Atlantic Telegraph Cable Timeline

The first telegraph cable to cross the Atlantic Ocean failed after working for a few weeks in 1858. The businessman behind the audacious project, Cyrus Field, was determined to make another attempt, but the Civil War, and numerous financial problems, interceded. Another failed attempt was made in the summer of 1865. And finally, in 1866, a fully functional cable was placed that connected Europe to North America. The two continents have been in constant communication since. The cable stretching thousands of miles under the waves changed the world profoundly, as news no longer took weeks to cross the ocean. The nearly instant movement of news was a huge leap forward for business, and it changed the way Americans and Europeans viewed the news. The following timeline details major events in the long struggle to transmit telegraphic messages between continents.

1842: During the experimental phase of the telegraph, Samuel Morse placed an underwater cable in New York Harbor and succeeded in sending messages across it. A few years later, Ezra Cornell placed a telegraph cable across the Hudson River from New York City to New Jersey.

1851: A telegraph cable was laid under the English Channel, connecting England and France.

January 1854: A British entrepreneur, Frederic Gisborne, who had run into financial problems while trying to place an undersea telegraph cable from Newfoundland to Nova Scotia, happened to meet Cyrus Field, a wealthy businessman and investor in New York City. Gisborne's original idea was to transmit information faster than ever between North America and Europe by employing ships and telegraph cables. The town of St. John's, on the eastern tip of the island of Newfoundland, is the closest point to Europe in North America. Gisborne envisioned fast boats delivering news from Europe to St. John's, and the information quickly being relayed, via his underwater cable, from the island to the Canadian mainland and then onward to New York City. While considering whether to invest in Gisborne's Canadian cable, Field looked closely at a globe in his study. He was struck with a far more ambitious thought: a cable should continue eastward from St. John's, across the Atlantic Ocean, to a peninsula jutting into the ocean from the west coast of Ireland. As connections were already in place between Ireland and England, news from London could then be relayed to New York City very quickly.

May 6, 1854: Cyrus Field, with his neighbor Peter Cooper, a wealthy New York businessman, and other investors, formed a company to create a telegraphic link between North America and Europe.

The Canadian Link

1856: After overcoming many obstacles, a working telegraph line finally reached from St. John's, on the edge of the Atlantic, to the Canadian mainland. Messages from St. John's, on the edge of North America, could be relayed to New York City.

Summer 1856: An ocean expedition took soundings and determined that a plateau on the ocean floor would provide a suitable surface on which to place a telegraph cable. Cyrus Field, visiting England, organized the Atlantic Telegraph Company and was able to interest British investors to join the American businessmen backing the effort to lay the cable.

December 1856: Back in America, Field visited Washington, D.C., and convinced the U.S. government to assist in the laying of the cable. Senator William Seward of New York introduced a bill to provide funding for the cable.
It narrowly passed through Congress and was signed into law by President Franklin Pierce on March 3, 1857, on Pierce's last day in office.

The 1857 Expedition: A Fast Failure

Spring 1857: The U.S. Navy's largest steam-powered ship, U.S.S. Niagara, sailed to England and rendezvoused with a British ship, H.M.S. Agamemnon. Each ship took on 1,300 miles of coiled cable, and a plan was devised for them to lay the cable across the bottom of the sea. The ships would sail together westward from Valentia, on the west coast of Ireland, with the Niagara dropping its length of cable as it sailed. At mid-ocean, the cable dropped from the Niagara would be spliced to the cable carried on the Agamemnon, which would then play out its cable all the way to Canada.

August 6, 1857: The ships left Ireland and began dropping the cable into the ocean.

August 10, 1857: The cable aboard the Niagara, which had been transmitting messages back and forth to Ireland as a test, suddenly stopped working. While engineers tried to determine the cause of the problem, a malfunction with the cable-laying machinery on the Niagara snapped the cable. The ships had to return to Ireland, having lost 300 miles of cable at sea. It was decided to try again the following year.

The First 1858 Expedition: A New Plan Met New Problems

March 9, 1858: The Niagara sailed from New York to England, where it again stowed cable on board and met up with the Agamemnon. A new plan was for the ships to go to a point mid-ocean, splice together the portions of cable they each carried, and then sail apart as they lowered cable down to the ocean floor.

June 10, 1858: The two cable-carrying ships, and a small fleet of escorts, sailed out from England. They encountered ferocious storms, which caused very difficult sailing for ships carrying the enormous weight of cable, but all survived intact.

June 26, 1858: The cables on Niagara and Agamemnon were spliced together, and the operation of placing the cable began. Problems were encountered almost immediately.

June 29, 1858: After three days of continuous difficulties, a break in the cable made the expedition halt and head back to England.

The Second 1858 Expedition: Success Followed By Failure

July 17, 1858: The ships left Cork, Ireland, to make another attempt, utilizing essentially the same plan.

July 29, 1858: At mid-ocean, the cables were spliced and Niagara and Agamemnon began steaming in opposite directions, dropping the cable between them. The two ships were able to communicate back and forth via the cable, which served as a test that all was functioning well.

August 2, 1858: The Agamemnon reached Valentia harbor on the west coast of Ireland and the cable was brought ashore.

August 5, 1858: The Niagara reached St. John's, Newfoundland, and the cable was connected to the land station. A message was telegraphed to newspapers in New York alerting them of the news. The message stated that the cable crossing the ocean was 1,950 statute miles long. Celebrations broke out in New York City, Boston, and other American cities. A New York Times headline declared the new cable The Great Event of The Age. A congratulatory message was sent across the cable from Queen Victoria to President James Buchanan. When the message was relayed to Washington, American officials at first believed the message from the British monarch to be a hoax.

September 1, 1858: The cable, which had been operating for four weeks, began failing.
A problem with the electrical mechanism that powered the cable proved fatal, and the cable stopped working entirely. Many in the public believed it had all been a hoax.

The 1865 Expedition: New Technology, New Problems

Continued attempts to lay a working cable were suspended due to a lack of funds. And the outbreak of the Civil War made the entire project impractical. The telegraph played an important role in the war, and President Lincoln used the telegraph extensively to communicate with commanders. But extending cables to another continent was far from a wartime priority. As the war was coming to an end, and Cyrus Field was able to get financial problems under control, preparations began for another expedition, this time using one enormous ship, the Great Eastern. The ship, which had been designed and built by the great Victorian engineer Isambard Brunel, had become unprofitable to operate. But its vast size made it perfect for storing and laying telegraph cable. The cable to be laid in 1865 was made with higher specifications than the 1857-58 cable. And the process of putting the cable aboard ship was greatly improved, as it was suspected that rough handling on the ships had weakened the earlier cable. The painstaking work of spooling the cable on the Great Eastern was a source of fascination for the public, and illustrations of it appeared in popular periodicals.

July 15, 1865: The Great Eastern sailed from England on its mission to place the new cable.

July 23, 1865: After one end of the cable was fastened to a land station on the west coast of Ireland, the Great Eastern began to sail westward while dropping the cable.

August 2, 1865: A problem with the cable necessitated repairs, and the cable broke and was lost on the sea floor. Several attempts to retrieve the cable with a grappling hook failed.

August 11, 1865: Frustrated by all attempts to raise the sunken and severed cable, the Great Eastern began to steam back to England. Attempts to place the cable that year were suspended.

The Successful 1866 Expedition

June 30, 1866: The Great Eastern steamed from England with new cable aboard.

July 13, 1866: Defying superstition, on a Friday the 13th the fifth attempt since 1857 to lay the cable began. And this time the attempt to connect the continents encountered very few problems.

July 18, 1866: In the only serious problem encountered on the expedition, a tangle in the cable had to be sorted out. The process took about two hours and was successful.

July 27, 1866: The Great Eastern reached the shore of Canada, and the cable was brought ashore.

July 28, 1866: The cable was proven successful and congratulatory messages began to travel across it. This time the connection between Europe and North America remained steady, and the two continents have been in contact, via undersea cables, to the present day. After successfully laying the 1866 cable, the expedition then located, and repaired, the cable lost in 1865. The two working cables began to change the world, and over the following decades more cables crossed the Atlantic as well as other vast bodies of water. After a decade of frustration the era of instant communication had arrived.

Thursday, May 14, 2020

Stanley Milgram's Behavioral Study of Obedience Essay

"The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum...." ― Noam Chomsky, The Common Good "Disobedience is the true foundation of liberty. The obedient must be slaves." ― Henry David Thoreau In the early 1960s Stanley Milgram (1963) performed an experiment titled Behavioral Study of Obedience to measure compliance levels of test subjects prompted to administer punishment to learners. The experiment had surprising results. Purpose of the research: Stanley Milgram's (1963) Behavioral Study of Obedience measured how far an ordinary subject will go beyond their fundamental moral character to comply with direction from... The subjects were informed that the punishment would not cause permanent tissue damage; however, it could be extremely painful. The subjects observed the learner/accomplice being prepared with electrodes strapped in a chair. The teacher/subjects read a series of word-pairs to the learner then read the first word of the pair along with four terms. The learner's role was to pair the first word with the correct term (Milgram, 1963). The learner would then press one of four switches attached to an electrical shock generator indicating his response. Unknown to the teacher, "in all conditions the learner gives a predetermined set of responses to the word pair test, based on the schedule of approximately three wrong answers to one correct answer" (Milgram, 1963). To authenticate the potential electrical intensity to the learner, the teacher is sampled with a 45-volt shock to the wrist. The teacher is then instructed to administer an incrementally increasing punishing electrical shock for each incorrect answer. This follows several methods to inform the teacher of the potential impact of the electrical shock that they will administer. These included warnings listing the voltage range of 15 to 450 volts labeled Slight Shock, Moderate Shock, Strong Shock, Extreme Intensity Shock, Danger Severe Shock, and XXX, bright red...

Wednesday, May 6, 2020

The Automobile and the Economy Essay - 1021 Words

The Automobile and the Economy The effects the automobile has had on the economy of the world are tremendous. The major effects have come in many ways and include sales of the automobile, jobs provided to sell and manufacture the automobile, gas/oil sales to run the automobile, and the start of the auto racing sport. The revolution of the automobile was the start of the most popular and successful industry in the world.

The Effect of Gas/Oil

There is a great effect on the economy due to the sale of gas. The major effect of how much gas is sold is how efficiently the particular automobile uses gas and what automobile the people choose to buy. Since the start of production of the automobile fuel efficiency has... The current Fuel Economy Standards are as follows:

Model Year - Passenger Cars M.P.G.
1978 - 18.0
1979 - 19.0
1980 - 20.0
1981 - 22.0
1982 - 24.0
1983 - 26.0
1984 - 27.0
1985 and future - 27.5
(http://www.cnie.org/nle/air-10.html#summ 4)

The only problem with this chart is the lack of increase after 1985. This is due to several reasons, especially the '90s new kick with sport utility vehicles, which usually have a lower fuel efficiency. Americans are also behind the rest of the world in fuel efficiency, as the following chart shows new car fleet fuel economy comparing federal standards, domestic fleet, import fleet, and the total fleet. (http://www.cnie.org/nle/air-10.html#summ 5) Some disagree that government regulations increase fuel efficiency. For example, Michael Sykuta's report concluded that federal fuel regulations do not have a significant effect on miles per gallon in automobiles.

Tuesday, May 5, 2020

Channeling My Energy free essay sample

At nine years old, I wouldn't walk into supermarkets; I would fly. I would grip the cool metal handles of the towering shopping carts with my childishly hot hands and push off with one foot, propelling myself into infinity. The only thing that could bring me back to earth were my mother's disapproving looks and barely successful attempts to make me "Slow down!" or "Come back here" since I might "plow into someone." At school, the poster-plastered walls seemed to close in after long days, edging closer and closer until I felt energy-induced claustrophobia creeping up my spine. The blue and green and yellow of the carpet and walls and finger paintings tumbled and blurred as I turned myself upside down and shifted my weight onto my surprisingly steady palms. "No handstands in the classroom!" my teacher would admonish, kneeling beside me and gently lowering me to the floor, afraid my precarious center of gravity would soon destabilize. "You have to learn to stay seated." To little me, this seemed just too much to ask; sitting down for such long periods seemed a feat only someone as grown up as she could accomplish. It wasn't long before my teachers started making other comments. Soon it wasn't just "You need to learn to stay seated," but things like "Paige is slightly immature and behind the other children socially" and "Maybe you should consider keeping Paige back a grade so she has time to mature and settle down." My mother knew she had to do something. Suddenly I was sitting in an over-air-conditioned room with a smiling lady who showed me flashcards of dogs and fire trucks and houses, and prompted me to repeat as many as I could remember. She gave me different samples of sounds, testing how long I could remain focused on the voice crackling through the recorder. I was too young to know that she was testing my attention span and mannerisms for ADHD. After I was positively diagnosed, my mother enrolled me in gymnastics to address my overabundant energy. I was mesmerized by the many ways I could contort my body and the countless flips I could execute in midair. The possibility of moving into the advanced group with the older girls motivated me to spend my boundless energy tumbling and balancing in the gym, instead of sprinting and rolling in the supermarket. I soon realized that this solution could be applied to other areas of my life – even those that weren't physical. After all, I didn't merely have an excess of physical energy, but mental energy as well. The world seemed to me an incredibly complicated tapestry, and I wanted to unravel its mysteries thread by thread. When I was 10 years old, my brother introduced me to the wonderful world of the fiction novel. From that day on, I was hooked. Stories of vampires and werewolves and witches and warlocks from other worlds swirled in my mind; I constantly had my head in a book. To this day, I continue burning my mental energy on novels, although my tastes have transitioned from teen fiction to classics like Charlotte Bronte's Jane Eyre, Leo Tolstoy's Anna Karenina, and Bram Stoker's Dracula. But merely reading words on a page wasn't enough. Somewhere inside me, I had created my own worlds, unbeknownst to my conscious mind. The day that my hand picked up a pen and put it to paper remains blurry in my memory; it is almost as though it happened of its own volition.
I soon became addicted to the beauty of the English language, to the way hard consonants could be combined to elicit a sense of urgency and anger in a reader, and the way liquid consonants could be melded to coax out a sense of calm and happiness. High school came speeding toward me like a freight train, and instead of fully embracing the four years to come, I felt my excess energy – whether it be physical, creative, or inquisitive – made me different from everyone else. I was that teenager who pored over classic literature and wrote poetry for fun. The summer of eleventh grade, fate brought me to the moment when I discovered I was not alone in these pursuits. It was the first hot summer night of the Iowa Young Writer's Workshop, and listening and observing the other teens around me, I felt the sense that I'd arrived at my intellectual home. Here were peers whose minds were always buzzing and whose hearts were always open. They were propelled by the same abounding energy that I was. They too understood the law of physics stating that energy could neither be created nor destroyed, only changed. And they, like me, had chosen to channel it into something positive.

Monday, April 6, 2020

The Buying Decisions of 'Consumers' on the Use of Microsoft or Apple Products Essay Example

The Buying Decisions of 'Consumers' on the Use of Microsoft or Apple Products Essay

"The Buying Decisions of 'Consumers' On the Use of Microsoft or Apple Products" Submitted By: SANUSI SANI BUHARI Student No: 200922R7018 The Dissertation has been submitted to the Skyline University College In partial Fulfillment of the Degree: Bachelor of Business Administration (International Business) December-2012

Acknowledgement: The writing of this dissertation has been one of the most significant academic challenges I have ever had to face. Without the support, patience and guidance of the following people, this study would not have been completed. It is to them that I owe my deepest gratitude.
* Dr. Rashad Mohammed Al Saed, who undertook to act as my supervisor despite his many other academic and professional commitments. His wisdom, knowledge and commitment to the highest standards inspired and motivated me.
* My friends and whoever directly or indirectly helped me during the course of the dissertation.
* The authors of the various books and web sites as well as the facilities and university library that helped me gather information for this dissertation.

Abstract: This research paper describes the buying decisions of 'consumers', as to whether they prefer Microsoft or Apple products. People have different choices according to their needs. Business organizations and telecommunication sectors judge a product on usability. Data analysis suggests many important elements impact the buying decision of an individual or any specific company. Data analysis also suggests Microsoft and Apple continue to push the envelope when it comes to developing software and hardware. The main objective of this research is: which product do people prefer? To find answers to the research questions of this research, data analysis and a descriptive study have been used, because this involves observing and describing the behavior of a subject without influencing it in any way. This method is used to obtain a general overview of the subject, and answers the who, what, and why of the study. The two giants pride themselves on producing cutting edge consumer and business products, and are leading the developments in software and hardware. But what about their websites: how do they both compare, and more important, which one is better and more usable? This study will help the reader find all of these answers.

Contents:
Chapter 1 - Introduction (Aims of Independent Study; Objectives)
Chapter 2 - Literature Review
Chapter 3 - Research Methodology (Research Design; Research Approach; Research Instrument; Sampling Design)
Chapter 4 - Data Analysis
Chapter 5 - Conclusion
Bibliography
Appendixes

Aims of the Study

Technology has slowly started to rule our lives. No matter where we are, we have access to some sort of technological appliance such as cell phones, computers or televisions.
Anything one could think of that might, in even the slightest way, make our lives easier is now available. So many different types of devices have been conceived and developed that it has become a complicated and confusing decision when trying to choose the right product. Computers and other forms of technology impact our lives daily. We encounter computers in stores, restaurants, and other retail establishments. We use computers and the Internet regularly to obtain information, experience online entertainment, buy products and services, as well as communicate with others. Most of us carry a computer or a mobile phone with us at all times so we can remain in touch with others on a continual basis and can access internet information at the touch of a button. Businesses use computers to keep track of bank transactions, inventories, sales, and credit card purchases, and also to provide business executives up-to-date information to make important decisions. Governments use computers to support our nation's defense system, for space exploration, for storing and organizing vital information of citizens, and other important tasks. Computers are used everywhere, and are a vital tool in one's life. When you turn to your computer, it's nice to think you're in control. There's the trusty computer mouse, which you can move anywhere on the screen, summoning up your music library or the internet browser with one click. Although it's easy to feel like the director in front of your own desktop or laptop, there's a lot going on inside, and the real man behind all these operations is the 'Operating System'. The purpose of an operating system is to organize and control hardware and software so that the device it lives in behaves in a flexible but predictable way. Most desktop or laptop PCs come pre-loaded with Microsoft Windows. Macintosh computers come pre-loaded with Mac OS X. The operating system (OS) is the first thing loaded onto the computer. Without the operating system, a computer is useless. When it comes to computer technology, the two biggest giants are 'Microsoft' and 'Apple'. Microsoft and Apple are by and large the biggest producers of cutting edge consumer and business products. Between the two companies, they continue to push the envelope when it comes to developing software and hardware. The question is, which do people prefer? In light of this, I've decided to base my study on 'The buying decisions of consumers, as to whether they prefer Microsoft or Apple products'.

Research Objectives

Research objectives ascertain specific points that may aid in gathering information related to the main objective. The purpose of this research will be:
* To know the factors that affect the buying decisions of customers
* To determine the products and services provided to customers

Literature Review

As stated by (Allan 2001), Microsoft Corporation is an American public multinational corporation headquartered in Redmond, Washington, USA that develops, manufactures, licenses, and supports a wide range of products and services predominantly related to computing through its various product divisions. While jointly developing a new Operating System (OS), working alongside IBM, Microsoft released Microsoft Windows. On February 26th 1986, Microsoft moved its headquarters to Redmond, and decided to take the company public. Microsoft worked closely with Apple during the development of Apple's Macintosh computer, which was introduced in 1984.
Revolutionary in its design, the Mac featured a graphical user interface based on icons rather than the typed commands used by the IBM PC, making its programs simple to use and easy to learn, even by computer novices. (Ichbiah, Daniel, and Susan L. Knepper 1991, p. 304) Apple Inc. was established on April 1, 1976 in Cupertino, California, and incorporated January 3, 1977; the company was previously named Apple Computer, Inc. for its first 30 years, but removed the word Computer on January 9, 2007, to reflect the company's ongoing expansion into the consumer electronics market in addition to its traditional focus on personal computers. (Price 1987) On August 15, 1998, Apple introduced a new all-in-one computer reminiscent of the Macintosh 128K: the iMac. The iMac design team was led by Jonathan Ive, who would later design the iPod and the iPhone. The iMac featured modern technology and a unique design. It sold close to 800,000 units in its first five months. Through this period, Apple purchased several companies to create a portfolio of professional and consumer-oriented digital production software. On May 19, 2001, Apple opened the first official Apple Retail Stores in Virginia and California. Later, on July 9, they bought Spruce Technologies, a DVD authoring company. The same year, Apple introduced the iPod portable digital audio player. The product was phenomenally successful: over 100 million units were sold within six years. In 2003, Apple's iTunes Store was introduced, offering online music downloads for $0.99 a song and integration with the iPod. The service quickly became the market leader in online music services, with over 5 billion downloads by June 19, 2008. It can understandably be said that when it comes to computer technology, the two biggest names are 'Microsoft' and 'Apple'. (Suhail 2009) Their success has become so immense that a customer's choice is usually one of the two. Other competitors are too far off, but what do these two giants give in return to society? To most people, Microsoft represents computing. Those with a dynamic interest in technology usually believe that Microsoft Windows is the computer. This kind of brand association you won't see in other companies, which makes it a very powerful force in the world of technology. Apple computers have grown very popular in the last few years, and their simplicity and user-friendly attributes are what keep customers captivated. Historically, Microsoft began the personal computer revolution with their Windows Operating System, which offered people a different platform for their computer needs. Apple also followed with their own line of Macintosh computers and devices, though it did have its ups and downs. Microsoft followed a general policy of marketing their computers with inexpensive hardware and software parts, which allowed every house to have a PC (personal computer). People can afford their computers with no restrictions on the type of hardware. Apple had a different mindset whereby they went for exclusivity and decided to sell their products at a much higher price than Microsoft PCs. Its elite hardware and software is what makes it more expensive. (Admin 2010) Where Apple has the upper hand over Microsoft, though, is security. Apple Mac computers are generally more secure because of the OS X operating system
Customers want to feel comfortable and safe, knowing that the product they are purchasing is protected against threat at all times. Windows (operating system of Microsoft) is more prone to malware and viruses, and requires expensive protection software to make the PC more secure . Another advantage of Apple computers is they are more efficient when it comes to graphics acceleration and games. Microsoft has problems with that. If one would buy a Microsoft based PC, they would need to spend an additional amount to handle graphics of that scale. On the other hand, Microsoft is committed in making its products and services easier for everyone to use. The Windows operating system has many in-built accessibility features that are useful for individuals who have difficulty in typing, using the mouse, seeing or hearing difficulties. Microsoft also produces other computer hardware’s such as Xbox, Zune, Xbox 360 and MSN TV. (MSJ 1986) Apple also offers a wide variety of products such as the iPhone, iPod, Apple TV and the newly introduced iPad. Factors affecting buying decision Homepage The homepage is one of the most important pages of the whole site because it’s the first, and in many cases the only chance you get to impress the visitor enough to keep them browsing. You’ve got a few seconds to convince them that the site has enough value for them to keep using it, because if it doesn’t, the visitors will leave. Apple’s approach to the homepage has been consistent throughout all the years that the site has been running. They use this page as a kind of advertising board that always shows a big ad of their latest product, followed by 3 other ads to another 3 products or news that is important at the moment. If you’re not interested in any of the 4 suggested items, you can use the large navigation bar at the top, which is split into their core businesses: Mac, iPod and iPhone, followed by a couple of other important links, such as the online store and support pages. The navigation bar also incorporates a search field. (Dmitry Fadeyev) One other thing to note is the lack of content. You’re not distracted by sidebars, notices or extra navigation items — there are only a few items on the page, focusing your attention and making the decision of where to go next easier. Microsoft has a different approach to their homepage. Firstly, they feature a similar style of ad at the top, designed to be attention grabbing. These are large images, but only one out of 3 ads is shown at a time — you have to hover over the other two to expand them. This focuses attention, but may potentially weaken the effectiveness of the two hidden ads since the visitor has to work to see them right at the top of the page is the navigation, together with search. Flow All of the content of Microsoft is extremely monotonous, especially the â€Å"Learn More† box with a list of 8 links. The dry presentation gives the user less incentive to click around. Some Microsoft sites use better layout to direct the flow of attention, but they generally all suffer from the same illness: too much content. When you present the user with too many choices, you make them work — they have to think about what they want and they have to process more information. By reducing choice, Apple directs the users through a more carefully designed funnel, which generally delivers a better experience. (Dmitry Fadeyev) Navigation Apple’s website has a large navigation bar at the top, which remains there consistently whichever section of the site you go to. 
The options available show the main sections split by its lines of business as well as a couple of essentials, such as support and the store. The bar also integrates search and branding, as the home button displays the Apple logo instead of a label. Any extra sub-navigation is located on individual site pages and is placed within the context of that page, whether on a sidebar or as a horizontal bar at the top. Microsoft has a similar navigation bar on the homepage, but that navigation bar is not consistent across the site. Actually, all of the sub-pages tend to use their own navigation bar, in style and in content. The homepage navigation thus acts as a site map to the rest of the Microsoft website sections. In a lot of the navigation bars, including the one on the homepage, Microsoft uses drop-down menus — unlike Apple. They don't just use drop-down menus — they use huge drop-down menus. In some cases, the menu even has a scrollbar (in Firefox). Is this good or bad? In a recent Alertbox entry, Jakob Nielsen, a well known usability guru, has written that mega drop-down menus can work. They work because they present a lot of choices in groups, so they allow for easier scanning as you can jump to the group that you want and scan the items inside them. You have to get certain things right though, like the order of the groups and only mentioning each element once, for them to work well. In this case, it makes sense for Microsoft to go the route of the drop-down menus, but I feel that they may have gone a little too far. For example, some options point to the same thing, like the 'Office' drop down and the 'Office' option in the 'All Products' drop down. The drop-down also blocks the content below, so if you accidentally moused over the menu, you have to mouse off from it again to get to the content below — all the while being careful not to hover over other items. There are also a lot of options under each group — sometimes showing about 13 items, which makes processing the options much more difficult. Also, the inconsistency of navigation across the different sections makes it much harder to jump from one area of the site to another, e.g. from the Office site to the Xbox site. (Dmitry Fadeyev)

Readability

Because most of the content on the sites is text, it's vital to ensure that everything is readable and legible. Here are the main things to consider when working on the readability of your site's content:
* Make the text large enough so that it's easy to see and read.
* Ensure that there is enough contrast between the text and background.
* Provide enough white space around the text to keep other content and graphics from distracting the reader.
* Provide plenty of headings or highlighted/bold text to allow users to quickly scan the content for key information.
* Add images and icons to make it easier to focus on individual sections of the text, i.e. product or feature descriptions.
* Keep the text short and to the point.
Apple does a great job of keeping everything easy to read. The text is generally small, but never too small so as to be a problem. Headings are set in heavier type and stand out, allowing you to quickly get the gist of each section. Apple also makes heavy use of white space to separate everything apart and adds images to make each text blurb more interesting. It follows the general usability guidelines by breaking things down into small bite size pieces of text that are easy to digest.
Microsoft's site looks a lot busier than the Apple site because there is more content on one page and there are many different treatments for headings and highlighted words. (Dmitry Fadeyev) Too much variety causes visual chaos on the page, with each different colored or bold item competing for your attention. In this case, the page really needs to be simplified to make it easier for the viewer to process.

Search
Apple's search is integrated into the navigation bar. When you type something in the search box you actually get live search results with AJAX, by way of a little box which pops up, showing you the results as you type. It's very well done; there is no lag when typing, the results are grouped in categories and are fetched very quickly, usually before you finish typing your full query. If you want to see more results you can just hit Enter when you've finished typing and you'll be taken to the standard search results page. It's very clean and organized by categories. You can drill the results further down by category, selectable from the menu on the right. It's functional and clean, and works well when you're trying to find any products that they sell.

Aesthetics
Apple's website aesthetics closely mirror those of its product line. The navigation bar looks like it's crafted out of aluminium and features gentle gradients and indented text. There are also plenty of reflections and minimalist design elements. Apple has always worked on unifying the look and feel of its interface across its entire product line, from the hardware to software, and their website is no exception. Do aesthetics have anything to do with usability? Actually, they do. Research shows that people perceive better looking interfaces as more usable. Microsoft's site follows a faint Windows theme with the light blue clouds, but there is little else to say that this is a page for Internet Explorer or Windows. The look and feel is very generic and doesn't do enough to differentiate itself or build a coherent brand. The designs are overall pretty good, but pretty good just isn't enough. There are plenty of inconsistencies and a lack of polish, which puts Apple ahead in this area.

Consistency
Consistency is important because it allows you to develop usage patterns. This basically means that if your site has a consistent interface throughout, your visitors will quickly learn how it works and will be able to use this knowledge in any of the new pages that they visit, since they'll all be using the same, or very similar, interface. Apple does a great job of keeping the interface consistent. All of the product pages feature very similar aesthetics and are structured in the same way. The whole site looks and feels the same throughout and the global navigation bar at the top is always there, on every page. This means that the entire experience is very unified and coherent; you know you're on the same website wherever you go. Could you tell that this is a Microsoft page if you took away their logo? Custom graphics, styles and colour palettes across all the Microsoft sections help little to maintain a coherent brand image on the web. Microsoft really struggles here. There are many different sections across Microsoft.com and they all feature their own look and feel, including their own navigation. So once you go to a section on their site, be it the Microsoft store, the Office site, or the Security pages, they will all look and feel like separate websites.
What's worse, the global navigation bar is also gone, meaning that you have to go back to the homepage, or the site map, to see an overview of all of their sites. It's really an ecosystem of websites hosted under the same domain and therefore it doesn't get the benefit of consistency that Apple has. The brand image is also terribly fragmented, making it impossible to define what a Microsoft site looks like. (Dmitry Fadeyev)

Marketing
Nobody will argue that Apple is the king of viral marketing. You might find some PC ads/commercials in local magazines and newspapers, but you will not find great Mac vs PC / Get a Mac / Buy a Mac / "Hello, I am a Mac, and I am a PC" commercials like Apple produces.

Security
When it comes to Mac vs. PC security concerns, many experts think that Windows has caught up with Apple. Before Microsoft Windows 7, we had all heard that the Windows operating system is a targeted platform for malicious attacks. Of course it is true, but the question is: does the operating system have the right tools to defend itself? With Mac computers, you won't need to worry about viruses as you do with the Windows operating system. The real problem is not only about viruses, but about security breaches that allow hackers to penetrate your computer and steal your valuable art works, photographs and important (and sometimes secret) media assets. It was proven that both Windows 7 and Mac OS X Snow Leopard have their own glitches. DailyTech.com posted an article saying that one prominent Mac hacker has pointed out that Mac OS X Snow Leopard is less secure than the Microsoft Windows 7 OS. Well, if a celebrity hacker says so, we should probably take it pretty seriously. So it seems, at least when making a business decision, that you shouldn't pick a Macintosh over a PC just because people are telling you that the Macintosh doesn't have viruses, or that Macs are more secure. Apple OS X Snow Leopard is based on the UNIX core. That fact alone doesn't make it more secure than Windows 7, as you can see. We just couldn't ignore the fact that viruses are a real pain in the axe. As a power PC user, with all those pop-out windows saying "You have been infected with a Virus", I would probably be very happy knowing that I can work quietly, without worrying about viruses and all the other horrors that the Internet brings with it. You just can't blame the Windows operating system for being more popular.

Financial Analysis
It's important to step back and examine just how close each of these companies really is in terms of revenue and earnings. Analysts polled by Thomson Reuters expect Apple to report approximately $2.85 billion in net income ($3.07 in EPS) on about $14.62 billion in revenue when it releases its results. Yet, it's well known that Apple regularly beats consensus estimates by quite a large margin and that actual results will come in well above the consensus. Just last quarter, Apple not only beat revenue estimates by over $1 billion, but it annihilated EPS estimates by reporting $0.88 above the $2.45 consensus – a 36% beat. In fact, Apple has regularly beaten consensus estimates by well over 35% each quarter over the past year. (Andy M. Zaky)

Financial Alchemists
Turley Muller, who is currently the most accurate analyst on Apple, offers a more realistic view of the company. Muller believes that Apple will report about $3.1 billion in net income ($3.35 in EPS) on $15.15 billion in revenue.
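As a quick sanity check on the 36% "beat" quoted above, the arithmetic can be reproduced in a couple of lines. This is only a minimal sketch in Python, using the consensus and surprise figures cited in the text rather than any live market data, and the variable names are illustrative:

# EPS figures quoted above for Apple's prior quarter
consensus_eps = 2.45                    # analyst consensus EPS, in dollars
reported_eps = consensus_eps + 0.88     # Apple reported $0.88 above consensus

surprise = (reported_eps - consensus_eps) / consensus_eps * 100
print(f"Reported EPS: ${reported_eps:.2f}")     # $3.33
print(f"Earnings surprise: {surprise:.0f}%")    # about 36%, matching the figure cited above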
And while I think Muller has left some room for upside surprise, it's clearly best to use his numbers rather than the consensus as a measure of comparison. Microsoft, on the other hand, is expected to earn $4. billion in net income ($0.46 in EPS) on $15.26 billion in revenue when it releases its results – just a hair above Muller's revenue estimates for Apple. And while Microsoft regularly reports upside surprises itself, the gap between consensus estimates and Microsoft's actual results is nowhere near as wide as it is with Apple's results. Thus, if Apple reports at the higher end of Muller's estimates, and if Microsoft reports closer to the consensus, it's quite possible that Apple might have a shot to beat Microsoft in revenue for the first time in its history this quarter. The chart (Appendix 2(A)) details a quarterly revenue comparison of Apple and Microsoft over the past few years. As one can see from the chart, Apple is within striking distance of surpassing Microsoft's quarterly revenue. Since Microsoft and Apple are on a different fiscal year, the chart realigns their results based on the calendar year. (Andy M. Zaky) So the big story in tech earnings is whether history will be made in the decades-long battle between Apple and Microsoft, or whether Microsoft will postpone the inevitable and maintain its dominance over Apple for at least one more quarter. Even if Apple doesn't beat Microsoft in sales this quarter, it will almost certainly do so next quarter and by quite a large margin. For the September quarter, analysts expect Apple to generate approximately $16.81 billion in revenue compared to a projected $15.16 billion in revenue for Microsoft. So even conservative estimates, which have yet to be adjusted to account for iPad sales, already put Apple ahead of Microsoft by nearly $1.2 billion next quarter. My estimates put Apple ahead by $3.2 billion, as I expect Apple to record nearly $18.9 billion in revenue. What's even more surprising is that Apple will likely far surpass Microsoft in revenue for the entire 2012 fiscal year (Appendix 2(B)). I'm looking for Apple to record $81.6 billion in revenue, well above the $70 billion I'm expecting out of Microsoft for the year. You can view my track record on Apple at Philip Elmer-DeWitt's column Apple 2.0. Even the analyst consensus puts Apple well ahead of Microsoft next year, with revenue estimates of $72.6 billion (AAPL) versus $67 billion (MSFT). The chart below compares Apple and Microsoft's annual fiscal revenue for the past several years. While quarterly data must be compared on the calendar year to show a side by side comparison over a particular 3-month period, yearly data can be analyzed on the fiscal year. And what about other metrics? Net income growth, total net income, total net cash, cash flow, book value, total assets and the economic sensitivity of each company's primary operations are just a few of the other key factors to consider when comparing the two companies. While Apple will surpass Microsoft in revenue in the near future, that doesn't necessarily mean that Apple automatically deserves a larger market capitalization. But it does appear that Apple will not only record more revenue than Microsoft, it will also eventually (within the next few years) earn more in net income, generate a larger amount of cash, and outpace Microsoft in terms of growth in net income and revenue. The earnings beat won't come easy for Apple.
Due to Microsoft's extraordinarily high operating margin, the only way Apple will beat Microsoft in earnings is by simply outpacing it in sales. Since Microsoft pushes more of its revenue to the bottom line, Apple will have to significantly outpace Microsoft in revenue to win on the net income front. The chart below compares Apple and Microsoft's net income for the last several fiscal years (Appendix 2(C)). Though these two companies no longer really operate in the same space as they once did, with Apple turning its focus on the consumer and Microsoft on enterprise spending, both companies are dominating their respective industries. Update 7/20: As expected, Apple has once again crushed the consensus estimates on the top line, beating analyst revenue expectations by well over $1 billion when it reported $15.7 billion in revenue Tuesday afternoon. In fact, Apple even surpassed my lofty expectations of $15.6 billion by $100 million in sales. Unless Microsoft far surpasses analyst expectations of $15.24 billion in revenue, it appears that Apple has already won the race. Microsoft primarily makes its profits from business to business, which mainly consists of selling licenses to its operating system to computer manufacturers and office suites for enterprises. That's not to say that they don't sell to consumers; they do, and they have consumer-only product lines as well, such as the Xbox gaming console, and of course home users also buy Windows and Office. This means that their business targets pretty much everyone, from home computer owners to developers and enterprises, which in turn stretches the purpose of their website to try and serve everyone. On the other hand, Apple is primarily a consumer company, and makes most of its profit selling hardware, like its iPod music players and Mac computers. This makes the target of Apple's site much clearer: marketing, selling and providing support for its products to consumers. They don't have to worry about selling licenses to manufacturers because they're the only manufacturer, so the key purpose of the website would be to advertise and promote their multiple product lines, as well as selling them through their online store. (Andy M. Zaky)

Cost Analysis
Another factor that affects the buying decision is cost. People have been arguing online for more than a decade (and in print for years before that) about whether, and by how much, Macs are more expensive than PCs. These discussions usually involve some hard facts but also some persistent myths. As a longtime Windows guy who has recently migrated to the Mac, I think I'm in a pretty good position to try and sort out reality from fiction. Let's take a look at what you can really get for your money these days.

Hardware
For those of you who are left, what I have found in my research is that neither side has a lock on good value. If you start with Apple's relatively short list of SKUs (three or four model variations for each of its lines, such as MacBook Pro, MacBook, and iMac) and then look for comparable Windows machines, you'll find that Apple bests the competition in some ways and not in others, but the pricing, overall, is surprisingly on par. Only a few years ago, it seemed like a no-brainer that Windows hardware was much cheaper. But if you're talking name-brand hardware, that's just no longer the case. On the other hand, if you search the Windows side first, you'll quickly discover machines that in features and price fit in between the Mac SKUs. And in those niches, they represent very good values.
So there's one answer to the question of whether Macs or Windows PCs represent a better value: if one of those in-between PCs suits your needs best, you'd be paying an unnecessary premium to get a Mac instead. Let's look at some hard numbers. I started my research with top-of-the-line notebooks. I spent an hour on Dell's site trying to find the cheapest notebook that offered everything Apple's $2,799 MacBook Pro 17 provides. That

Sunday, March 8, 2020

The Old Man and the Sea essays

The Old Man and the Sea essays The main theme of Hemingway's The Old Man and the Sea is not an easy one to pick out. At first glance the book seems to simply be a story about a guy who goes out and battles with a fish. However, there has to be some underlying theme. It could be the relationship between a boy and a man, and how both are treated by nature. This is illustrated by the boy's parents not allowing him to continue with the unlucky old man. It is also shown by the success the boy had and the failure the old man experienced after their parting. Still, through all of this the boy remembers how well the old man treated him and does everything he can for the old man. On the whole, I liked this book. It was written in relatively easy-to-follow language, yet Hemingway was still able to convey unbelievable images of picturesque settings in the reader's mind. There is also an interesting use of dialog, not only between the boy and the old man, but especially with the old man talking to himself. This is something I really haven't seen used that extensively. I think Hemingway used this to fill in the parts of the story where the old man is simply at a stalemate with the fish, when he is just sitting there being pulled around the ocean. The one thing I didn't understand about this one-sided conversation was the constant reference to Joe DiMaggio. I don't know if this was simply a tribute to a great ball player, or some kind of historical reference that I just didn't get. The pace and general flow of the story was good. There were a few times during the struggle where the action all but disappeared, but on the whole there was almost always something happening. The plot was also pretty simple and easy to follow. Another quality of this book which I have seen in others I have previously read was the complete lack of a male-female love subplot. As I have said before, this ofte ...

Friday, February 21, 2020

Something relating to the history of the Holocaust Research Paper

Something relating to the history of the Holocaust - Research Paper Example The contrary will be shown. It will be shown that they had a class system. They had classified the types of citizens as early as 1936. The infrastructure had been created and the facilities were built before the Germans even entered Dutch soil, enabling the Germans to come in and murder over 100,000 people in less than 3 years. Three stages will be examined in this essay: from 1936 to 1939, when the national decree dictated who was a Dutch citizen and refugee centers were created; from 1939 to 1940, when Westerbork was voted into Parliament as a center for the "legal refugees"; and, to conclude, the capitulation of the Netherlands government within 5 days in 1940 and the consequences it had on the Shoah. Please note that in the sources there is much conflicting information due to the age of the survivors and the differences in translations and in countries' methods of notation.

1936-1939
The Jewish population of Amsterdam represented approximately 10% of the population. The attitude was rather avant-garde, agnostic, assimilated, and had benefited greatly from the WWI attitude of being a neutral state. (Hillesum 1999) There was a sense of safety in being Dutch before being Jewish. The general consensus was accepting the census as a natural governmental process. Upon registering in 1936, Jews were told that as citizens they would be protected. (Vanderwerff 2010) The atmosphere, as explained by Etty Hillesum in her Letters of Westerbork, was that she had no desire for organised religion. Life was absurd. God was helpless (12/07/1942). She was born into an agnostic family. Before 1941, she was lost in the different intellectual circles of Amsterdam. She had failed her exam to get into law school. She studied Slavic studies and then went on to tutor. This is an insight into the Jewish population of Amsterdam. The intellectual assimilation would eventually be the demise of the Jews of Amsterdam. They felt themselves more protected than, and superior to, the German Jews, who were often poorer and less educated than the Dutch Jews. They had jobs and lived in proper housing. They were not touched by the refugee housing or economic situation. As in other European nations, they considered themselves citizens of the nation of their birth. In 1936, by Royal Decree it was voted that a national census would require new identity cards in order to define who were Dutch citizens. Religion was required on the last line of the card. (Vanderwerff 2010) In 1939, refugees were forced to register. Legal Refugee Jews (Stateless) were defined by having been born in a country that no longer existed because of World War I or having been born in Poland. Illegal Refugee Jews were those who came into the Netherlands without any visas. Illegal refugees were sent back to Germany. (Vanderwerff 2010) In World War I, the Netherlands had remained a neutral state. It was common knowledge that the Netherlands was a state that had had an open door policy. Because of the depression, lack of jobs and overall anti-semitism, German Jews and Stateless Jews were considered secondary citizens to Dutch citizens. The geo-political economic situation of Europe had changed the map. Dutch citizens were given precedence over refugees in employment and housing. What had been refugee homes all over the country since 1936 had become internment camps in

Wednesday, February 5, 2020

Credit scoring model Coursework Example | Topics and Well Written Essays - 5000 words

Credit scoring model - Coursework Example As a way of solving classification issues and also decreasing Type I errors, typical of many credit scoring models, this piece attempts to describe, or rather come up with, an appropriate credit scoring model via two stages. The classification stage involves development and construction of an ANN-based credit scoring model, which basically classifies applicants into two categories: those who have acceptable credit (good) and those who have unacceptable credit (bad). In the second stage, which will also be referred to as the re-assigning stage, an attempt is made to lower the Type I error through reassignment of the unaccepted applicants with good credit to a conditionally accepted category, making use of a CBR-based classification approach. In a bid to demonstrate the effectiveness of the model proposed in this paper, an analysis is run on a German dataset with the assistance of SAS Enterprise Miner. The results will be expected to not only prove that the model is a more effective credit scoring model but also that it will enhance business revenues through its ability to lower both Type I and Type II scoring errors.

Introduction
Data mining is a process that involves search and analysis of data so as to find implicit, although substantially vital, information. It covers selection, exploration and modeling of large data volumes with the aim of uncovering previously unrecognized patterns, and in the end generating understandable information from huge databases. It generally employs an extensive range of computational techniques which include approaches such as statistical analysis, decision tree analysis, neural networks, rule induction and refinement, as well as graphic visualization. Of the various mentioned methods, the classification aspect has an important role in decision making within businesses, mainly as a result of its extensive applications when it comes to financial forecasting, detection of fraud, development of a marketing strategy, credit scoring, to mention just but a few. The aim of developing credit scoring models is to assist financial institutions to detect good credit applicants who are more likely to honor their debt obligation. Often such systems are based on multiple variables including the applicant's age, their credit limit, income levels, as well as marital status, among others. Conventionally, there are many distinct credit scoring models which have been developed by financial institutions as well as researchers in a bid to unravel the mysteries behind the classification problem. Such include linear discriminant analysis, logistic regression, multivariate adaptive regression splines, classification and regression trees, case-based reasoning, and of course artificial neural networks. Normally, linear discriminant analysis, logistic regression, and artificial neural networks are utilized in construction of credit scoring models. LDA is amongst the earliest forms of credit scoring model and enjoys widespread usage across the globe. Nonetheless, its use has often been subjected to criticism based on its assumption of the existence of a linear relationship between the input variables and the output variables. Sadly, this is an assumption that seldom holds, and it is rather sensitive to deviations arising from the assumption of multivariate normality (West, 2000). Like LDA, LR is also a rather common alternative employed in the performance of credit scoring assessments. In essence, the LR model has stood out as the best
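As an aside, the classification stage and the Type I / Type II error distinction described above can be illustrated with a short, self-contained sketch. This is only an illustration in Python with scikit-learn on synthetic data, not the SAS Enterprise Miner workflow or the German dataset the coursework refers to, and the CBR re-assigning stage is omitted; all names and parameters below are illustrative:

# Minimal illustration: an ANN classifier labelling applicants as good (1) or bad (0)
# credit, then counting Type I errors (good applicants wrongly rejected) and
# Type II errors (bad applicants wrongly accepted).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for a credit dataset (30% bad, 70% good applicants)
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.3, 0.7], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
ann.fit(X_train, y_train)
pred = ann.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("Type I errors (good classified as bad):", fn)
print("Type II errors (bad classified as good):", fp)

In the two-stage idea described above, the applicants counted under Type I here would then be passed to a second, case-based-reasoning stage for possible conditional acceptance.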

Tuesday, January 28, 2020

Modulation Techniques | An Overview

Modulation Techniques | An Overview The evolution of wireless cellular technology from 1G to 4G has had a common aim: to deliver a high data rate signal so that high bit rate multimedia content can be transmitted in cellular mobile communication. Thus, it has driven much research into the application of higher order modulations. One of the focuses of this project is to study and compare the different types of digital modulation techniques that are widely used in LTE systems. Hence, before being able to design and evaluate these in computer simulation, a study is carried out on digital modulation, drilling down further into QPSK modulation schemes, followed by the QAM modulation schemes.

What is modulation? There are several definitions of modulation taken from several references, as follows: Modulation is defined as the process by which a carrier wave is able to carry the message or digital signal (a series of ones and zeroes). Modulation is the process of facilitating the transfer of information over a medium. Voice cannot be sent very far by screaming. To extend the range of sound, we need to transmit it through a medium other than air, such as a phone line or radio. The process of converting information (voice in this case) so that it can be successfully sent through a medium (wire or radio waves) is called modulation. Modulation is the process of varying a carrier signal, typically a sinusoidal signal, in order to use that signal to convey information. One of the three key characteristics of a signal is usually modulated: its phase, frequency or amplitude. There are 2 types of modulation: analog modulation and digital modulation. In analog modulation, an information-bearing analog waveform is impressed on the carrier signal for transmission, whereas in digital modulation, an information-bearing discrete-time symbol sequence (digital signal) is converted or impressed onto a continuous-time carrier waveform for transmission. 2G wireless systems are realized using digital modulation schemes.

Why Digital Modulation? The move to digital modulation provides more information capacity, compatibility with digital data services, higher data security, better quality communications, and quicker system availability. Developers of communications systems face these constraints: available bandwidth, permissible power, and the inherent noise level of the system. The RF spectrum must be shared, yet every day there are more users for that spectrum as demand for communications services increases. Digital modulation schemes have greater capacity to convey large amounts of information than analog modulation schemes.

Different types of Digital Modulation As mentioned in the previous chapter, there are three major classes of digital modulation techniques used for transmission of digitally represented data: Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK) and Phase Shift Keying (PSK). All convey data by changing some aspect of a base signal, the carrier wave (usually a sinusoid), in response to a data signal. For ASK, FSK, and PSK the amplitude, frequency and phase are changed respectively.

Bit rate and symbol rate To understand and compare different PSK and QAM modulation format efficiencies, it is important to first understand the difference between bit rate and symbol rate. The signal bandwidth for the communications channel needed depends on the symbol rate, not on the bit rate. Bit rate is the frequency of a system bit stream.
Take, for example, a radio with an 8-bit sampler, sampling at 10 kHz for voice. The bit rate, the basic bit stream rate in the radio, would be eight bits multiplied by 10K samples per second, or 80 Kbits per second. (For the moment we will ignore the extra bits required for synchronization, error correction, etc.)

[Figure: A Quadrature Phase Shift Keying (QPSK) signal. The states can be mapped to zeros and ones. This is a common mapping, but it is not the only one; any mapping can be used.]

The symbol rate is the bit rate divided by the number of bits that can be transmitted with each symbol. If one bit is transmitted per symbol, as with BPSK, then the symbol rate would be the same as the bit rate of 80 Kbits per second. If two bits are transmitted per symbol, as in QPSK, then the symbol rate would be half of the bit rate, or 40 K symbols per second. Symbol rate is sometimes called baud rate. Note that baud rate is not the same as bit rate. These terms are often confused. If more bits can be sent with each symbol, then the same amount of data can be sent in a narrower spectrum. This is why modulation formats that are more complex and use a higher number of states can send the same information over a narrower piece of the RF spectrum.

Phase Shift Keying (PSK)
PSK is a modulation scheme that conveys data by changing, or modulating, the phase of a reference signal (i.e. the phase of the carrier wave is changed to represent the data signal). A finite number of phases are used to represent digital data. Each of these phases is assigned a unique pattern of binary bits; usually each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase. There are two fundamental ways of utilizing the phase of a signal in this way: by viewing the phase itself as conveying the information, in which case the demodulator must have a reference signal to compare the received signal's phase against (PSK); or by viewing the change in the phase as conveying information, as in differential schemes, some of which do not need a reference carrier to a certain extent (DPSK). A convenient way to represent PSK schemes is on a constellation diagram. This shows the points in the Argand plane where, in this context, the real and imaginary axes are termed the in-phase and quadrature axes respectively due to their 90° separation. Such a representation on perpendicular axes lends itself to straightforward implementation. The amplitude of each point along the in-phase axis is used to modulate a cosine (or sine) wave and the amplitude along the quadrature axis to modulate a sine (or cosine) wave. In PSK, the constellation points chosen are usually positioned with uniform angular spacing around a circle. This gives maximum phase-separation between adjacent points and thus the best immunity to corruption. They are positioned on a circle so that they can all be transmitted with the same energy. In this way, the moduli of the complex numbers they represent will be the same and thus so will the amplitudes needed for the cosine and sine waves. Two common examples are binary phase-shift keying (BPSK), which uses two phases, and quadrature phase-shift keying (QPSK), which uses four phases, although any number of phases may be used. Since the data to be conveyed are usually binary, the PSK scheme is usually designed with the number of constellation points being a power of 2.
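The bit-rate / symbol-rate relationship described above is easy to check numerically. A minimal sketch in Python, using the figures from the radio example (the function name is illustrative):

# Symbol (baud) rate = bit rate divided by the number of bits carried per symbol
def symbol_rate(bit_rate_bps, bits_per_symbol):
    return bit_rate_bps / bits_per_symbol

bit_rate = 8 * 10_000               # 8-bit samples at 10 kHz -> 80 kbit/s
print(symbol_rate(bit_rate, 1))     # BPSK:   1 bit/symbol  -> 80,000 symbols/s
print(symbol_rate(bit_rate, 2))     # QPSK:   2 bits/symbol -> 40,000 symbols/s
print(symbol_rate(bit_rate, 8))     # 256QAM: 8 bits/symbol -> 10,000 symbols/s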
Applications of PSK and QAM
Owing to PSK's simplicity, particularly when compared with its competitor quadrature amplitude modulation (QAM), it is widely used in existing technologies. The most popular wireless LAN standard, IEEE 802.11b, uses a variety of different PSKs depending on the data rate required. At the basic rate of 1 Mbit/s, it uses DBPSK. To provide the extended rate of 2 Mbit/s, DQPSK is used. In reaching 5.5 Mbit/s and the full rate of 11 Mbit/s, QPSK is employed, but has to be coupled with complementary code keying. The higher-speed wireless LAN standard, IEEE 802.11g, has eight data rates: 6, 9, 12, 18, 24, 36, 48 and 54 Mbit/s. The 6 and 9 Mbit/s modes use BPSK. The 12 and 18 Mbit/s modes use QPSK. The fastest four modes use forms of quadrature amplitude modulation. The recently standardised Bluetooth will use π/4-DQPSK at its lower rate (2 Mbit/s) and 8-DPSK at its higher rate (3 Mbit/s) when the link between the two devices is sufficiently robust. Bluetooth 1 modulates with Gaussian minimum shift keying, a binary scheme, so either modulation choice in version 2 will yield a higher data rate. A similar technology, ZigBee (also known as IEEE 802.15.4), also relies on PSK. ZigBee operates in two frequency bands: 868-915 MHz, where it employs BPSK, and 2.4 GHz, where it uses OQPSK. Notably absent from these various schemes is 8-PSK. This is because its error-rate performance is close to that of 16-QAM (it is only about 0.5 dB better) but its data rate is only three-quarters that of 16-QAM. Thus 8-PSK is often omitted from standards and, as seen above, schemes tend to jump from QPSK to 16-QAM (8-QAM is possible but difficult to implement).

QPSK
QPSK is a multilevel modulation technique; it uses 2 bits per symbol to represent each phase. Compared to BPSK, it is more spectrally efficient but requires a more complex receiver.

Constellation Diagram for QPSK
The constellation diagram for QPSK with Gray coding: each adjacent symbol only differs by one bit. Sometimes known as quaternary or quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol, shown in the diagram with Gray coding to minimize the BER; this is twice the rate of BPSK. Figure 2.5 depicts the 4 symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared to a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

[Figure 2.5: Four symbols that represent the four phases in QPSK.]

Although QPSK can be viewed as a quaternary modulation, it is easier to see it as two independently modulated quadrature carriers. With this interpretation, the even (or odd) bits are used to modulate the in-phase component of the carrier, while the odd (or even) bits are used to modulate the quadrature-phase component of the carrier. BPSK is used on both carriers and they can be independently demodulated. As a result, the probability of bit error for QPSK is the same as for BPSK: P_b = Q(√(2E_b/N_0)). However, with two bits per symbol, the symbol error rate is increased: P_s = 1 − (1 − P_b)^2 = 2Q(√(2E_b/N_0)) − Q^2(√(2E_b/N_0)). If the signal-to-noise ratio is high (as is necessary for practical QPSK systems) the probability of symbol error may be approximated as P_s ≈ 2Q(√(2E_b/N_0)). As with BPSK, there are phase ambiguity problems at the receiver and differentially encoded QPSK is more normally used in practice. As written above, QPSK is often used in preference to BPSK when improved spectral efficiency is required.
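A minimal sketch of the Gray-coded mapping and the bit-error formula just quoted, written in Python with the usual Q-function expressed as Q(x) = 0.5·erfc(x/√2). The bit pattern and the 10 dB Eb/N0 value are arbitrary illustration inputs, not figures from the text:

import numpy as np
from scipy.special import erfc

# Gray-coded QPSK: adjacent constellation points differ in exactly one bit
gray_map = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to unit-energy QPSK symbols, 2 bits per symbol."""
    return np.array([gray_map[pair] for pair in zip(bits[0::2], bits[1::2])])

def qpsk_ber(eb_n0_db):
    """Theoretical bit-error rate of Gray-coded QPSK, identical to BPSK: Q(sqrt(2*Eb/N0))."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return 0.5 * erfc(np.sqrt(eb_n0))

print(qpsk_modulate([0, 0, 1, 1, 1, 0]))   # three symbols
print(qpsk_ber(10.0))                      # about 3.9e-6 at 10 dB Eb/N0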
QPSK utilizes four constellation points, each representing two bits of data. Again, as with BPSK, the use of trajectory shaping (raised cosine, root raised cosine etc.) will yield an improved spectral efficiency, although one of the principal disadvantages of QPSK, as with BPSK, is the potential to cross the origin, which will generate 100% AM. QPSK is also known as a method for transmitting digital information across an analog channel. Data bits are grouped into pairs, and each pair is represented by a particular waveform, called a symbol, to be sent across the channel after modulating the carrier. QPSK is also the most commonly used modulation scheme for wireless and cellular systems. This is because it does not suffer from BER degradation while the bandwidth efficiency is increased. The QPSK signals are mathematically defined as s_n(t) = √(2E_s/T_s) cos(2πf_c t + (2n − 1)π/4), for n = 1, 2, 3, 4, where E_s is the energy per symbol and T_s is the symbol duration.

Implementation of QPSK
A QPSK signal can be implemented by using the equation stated below. The symbols in the constellation diagram, in terms of the sine and cosine waves used to transmit them, can be written as s_n(t) = √(2E_s/T_s) cos((2n − 1)π/4) cos(2πf_c t) − √(2E_s/T_s) sin((2n − 1)π/4) sin(2πf_c t). This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed. As a result, a two-dimensional signal space with unit basis functions φ_1(t) = √(2/T_s) cos(2πf_c t) and φ_2(t) = √(2/T_s) sin(2πf_c t) can be used. The first basis function is used as the in-phase component of the signal and the second as the quadrature component of the signal. Therefore, the signal constellation consists of the four signal-space points (±√(E_s/2), ±√(E_s/2)). The factors of 1/2 show that the total power is divided evenly among the two carriers. QPSK systems can be implemented in a few ways. First, the data stream is divided into the in-phase and quadrature-phase components. These are then independently modulated onto two orthogonal basis functions. In this implementation, two sinusoids are used. Next, the two signals are superimposed, and the resulting signal is the QPSK signal. Polar non-return-to-zero encoding is also used. These encoders can be placed before the binary data source, but have been placed after it to illustrate the theoretical dissimilarity between digital and analog signals concerned with digital modulation. The matched filters can be substituted with correlators. Each detection device uses a reference threshold value to conclude whether a 1 or a 0 is detected.

Quadrature Amplitude Modulation (QAM)
Quadrature amplitude modulation (QAM) is both an analog and a digital modulation scheme. It is a modulation scheme in which two sinusoidal carriers, one exactly 90 degrees out of phase with respect to the other, are used to transmit data over a given physical channel. Because the orthogonal carriers occupy the same frequency band and differ by a 90-degree phase shift, each can be modulated independently, transmitted over the same frequency band, and separated by demodulation at the receiver. For a given available bandwidth, QAM enables data transmission at twice the rate of standard pulse amplitude modulation (PAM) without any degradation in the bit error rate (BER). QAM and its derivatives are used in both mobile radio and satellite communication systems. The modulated waves are summed, and the resulting waveform is a combination of both phase-shift keying (PSK) and amplitude-shift keying, or, in the analog case, of phase modulation (PM) and amplitude modulation. In the digital QAM case, a finite number of at least two phases and at least two amplitudes are used. PSK modulators are often designed using the QAM principle, but are not considered as QAM since the amplitude of the modulated carrier signal is constant.
In 16-QAM, 4 different phases and 4 different amplitudes are used for a total of 16 different symbols. This means such a coding is able to transmit 4 bits per symbol. 64-QAM yields 64 possible signal combinations, with each symbol representing six bits (2^6 = 64). The yield of this complex modulation scheme is that the transmission rate is six times the signaling rate. This modulation format produces a more spectrally efficient transmission. It is more efficient than BPSK, QPSK or 8PSK, while QPSK is the same as 4-QAM. Another variation is 32QAM. In this case there are six I values and six Q values, resulting in a total of 36 possible states (6 x 6 = 36). This is too many states for a power of two (the closest power of two is 32). So the four corner symbol states, which take the most power to transmit, are omitted. This reduces the amount of peak power the transmitter has to generate. Since 2^5 = 32, there are five bits per symbol and the symbol rate is one fifth of the bit rate. The current practical limits are approximately 256QAM, though work is underway to extend the limits to 512 or 1024 QAM. A 256QAM system uses 16 I-values and 16 Q-values, giving 256 possible states. Since 2^8 = 256, each symbol can represent eight bits. A 256QAM signal that can send eight bits per symbol is very spectrally efficient. However, there are some drawbacks: the symbols are very close together and are thus more subject to errors due to noise and distortion. Such a signal may have to be transmitted with extra power (to effectively spread the symbols out more) and this reduces power efficiency as compared to simpler schemes. BPSK uses 80 K symbols per second sending 1 bit per symbol. A system using 256QAM sends eight bits per symbol so the symbol rate would be 10 K symbols per second. A 256QAM system enables the same amount of information to be sent as BPSK using only one eighth of the bandwidth. It is eight times more bandwidth efficient. However, there is a drawback too. The radio becomes more complex and is more susceptible to errors caused by noise and distortion. Error rates of higher-order QAM systems such as this degrade more rapidly than QPSK as noise or interference is introduced. A measure of this degradation would be a higher Bit Error Rate (BER). In any digital modulation system, if the input signal is distorted or severely attenuated the receiver will eventually lose symbol clock completely. If the receiver can no longer recover the symbol clock, it cannot demodulate the signal or recover any information. With less degradation, the symbol clock can be recovered, but it is noisy, and the symbol locations themselves are noisy. In some cases, a symbol will fall far enough away from its intended position that it will cross over to an adjacent position. The I and Q level detectors used in the demodulator would misinterpret such a symbol as being in the wrong location, causing bit errors. In the case of QPSK, it is not as efficient, but the states are much farther apart and the system can tolerate a lot more noise before suffering symbol errors. QPSK has no intermediate states between the four corner-symbol locations so there is less opportunity for the demodulator to misinterpret symbols. As a result, QPSK requires less transmitter power than QAM to achieve the same bit error rate.

Implementation of QAM
First, the incoming bits are encoded into complex-valued symbols. Then, the sequence of symbols is mapped into a complex baseband waveform.
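A minimal sketch of that first step for 16-QAM follows. The per-axis Gray coding and the conventional amplitude levels {-3, -1, +1, +3} are illustrative choices; the text does not prescribe a specific mapping:

import numpy as np

# Per-axis Gray coding: adjacent amplitude levels differ in exactly one bit
levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def qam16_modulate(bits):
    """Group bits in fours and map each group to a complex 16-QAM symbol (4 bits/symbol)."""
    symbols = []
    for k in range(0, len(bits), 4):
        i = levels[(bits[k], bits[k + 1])]        # in-phase amplitude
        q = levels[(bits[k + 2], bits[k + 3])]    # quadrature amplitude
        symbols.append(complex(i, q))
    return np.array(symbols)

print(qam16_modulate([0, 0, 1, 0, 1, 1, 0, 1]))   # 8 bits -> 2 symbols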
For implementation purposes, each complex multiplication above corresponds to 4 real multiplications. Let u′_k and u″_k be the real and imaginary parts of the symbol u_k = u′_k + i·u″_k, and assume that the symbols are generated as real and imaginary parts (as opposed to magnitude and phase, for example). From (1), x(t) can then be written as the sum of an in-phase term carrying the u′_k and a quadrature term carrying the u″_k. This can be understood as two parallel PAM systems, followed by double-sideband modulation by the quadrature carriers cos(2πf_c t) and sin(2πf_c t). This realization of QAM is called double-sideband quadrature-carrier (DSB-QC) modulation. A QAM receiver must first demodulate the received waveform y(t). Assuming the scaling and receiver time reference discussed before, this received waveform is assumed to be simply y(t) = x(t) + n(t). Here, it is being understood that there is no noise, so that y(t) is simply the transmitted waveform x(t). The first task of the receiver is to demodulate x(t) back to baseband. This is done by multiplying the received waveform by both cos(2πf_c t) and sin(2πf_c t). The two resulting waveforms are each filtered by a filter with impulse response q(t) and then sampled at T-spaced intervals. The multiplication by cos(2πf_c t) at the receiver moves the positive frequency part of x(t) both up and down in frequency by f_c, and does the same with the negative frequency part. It is assumed throughout that both the transmit pulse p(t) and the receive pulse q(t) are in fact baseband waveforms relative to the carrier frequency (specifically, that their Fourier transforms vanish for |f| ≥ f_c). Thus the result of multiplying the modulated waveform x(t) by cos(2πf_c t) yields a response at baseband and also yields responses around 2f_c and −2f_c. The receive filter q(t) then eliminates the double frequency terms. The effect of the multiplication by cos(2πf_c t) at both transmitter and receiver can be seen from the trigonometric identity cos(2πf_c t)·cos(2πf_c t) = 1/2 + (1/2)cos(4πf_c t). Thus the receive filter q(t) in the upper (cosine) part of the demodulator filters the real part of the original baseband waveform, resulting in the baseband output. Assuming that the cascade g(t) of the filters p(t) and q(t) is ideal Nyquist, the sampled output retrieves the real part of the original symbols without intersymbol interference. The filter q(t) also rejects the double frequency terms. The multiplication by sin(2πf_c t) similarly moves the received waveform to a baseband component plus double carrier frequency terms. The effect of multiplying by sin(2πf_c t) at both transmitter and receiver is given by the identity sin(2πf_c t)·sin(2πf_c t) = 1/2 − (1/2)cos(4πf_c t). Again, (assuming that p(t) * q(t) is ideal Nyquist) the filter q(t) in the lower (sine) part of the receiver retrieves the imaginary components of the original symbols without intersymbol interference. Finally, from the identity cos(2πf_c t)·sin(2πf_c t) = (1/2)sin(4πf_c t), there is no crosstalk at baseband between the real and imaginary parts of the original symbols. It is important to go through the above argument to realize that the earlier approach of multiplying u(t) by a complex exponential at the carrier frequency for modulation and then by its conjugate for demodulation is just a notationally more convenient way of doing the same thing. Working with sines and cosines is much more concrete, but is messier and makes it harder to see the whole picture.

Modulation and transmission of QAM
In general, the modulated signal can be represented by s(t) = a(t)cos(ω_c t + φ(t)), where the carrier cos(ω_c t) is said to be amplitude modulated if its amplitude is adjusted in accordance with the modulating signal, and is said to be phase modulated if φ(t) is varied in accordance with the modulating signal. In QAM the amplitude of the baseband modulating signal is determined by a(t) and the phase by φ(t). The in-phase component I is then given by I(t) = a(t)cos(φ(t)). This signal is then corrupted by the channel.
In this case it is the AWGN channel. The received signal is then given by r(t) = s(t) + n(t), where n(t) represents the AWGN, which has both an in-phase and a quadrature component. It is this received signal which the receiver will attempt to demodulate.

COPD: a Clinical Case Study
Jerry Corners

Introduction
Chronic Obstructive Pulmonary Disease (COPD) is the fifth leading cause of morbidity and mortality in the UK and fourth in the world (Hurd 2000; Soriano 2000). Though other causes exist, like genetics and environmental pollution, tobacco smoke is by far the leading etiology of this disease (Pride 2002). It may seem axiomatic that if cigarette smoking is the cause of COPD, cessation (or avoidance) of smoking is the prevention. However, despite extensive public education, smoking is still common among men and women in the UK and even when people do quit, relapse within the first year is common (Lancaster et al. 2006). Therefore our attention as caregivers needs to be focused upon methods of cessation that produce lasting results. To illustrate the diagnosis, management, both short- and long-term, and what Mike can expect from treatment as reflected in the medical literature, we present the following case.

Pathophysiology of COPD
COPD is a chronic disease in which decreased airflow is related to airway smooth muscle hyperreactivity due to an abnormal inflammatory reaction. Inhalation of tobacco products causes airway remodeling, resulting ultimately in emphysema and chronic bronchitis (Srivastava, Dastidar, Ray 2007). COPD is a complex inflammatory disease that affects both lung airways and lung parenchyma. The modern focus of the pathophysiology of COPD is centered around this inflammation, and it is now recognized that systemic inflammation is responsible for many of the extrapulmonary effects of cigarette smoke inhalation (Heaney, Lindsay, McGarvey 2007).

The Clinical Case Study
Diagnosis
Mike is a 54-year-old, self-employed grandfather who smokes 40 cigarettes daily. He was recently diagnosed with COPD based on an FEV1 of 66% of predicted (Halpin 2004). According to Halpin (2004), "There are still no validated severity assessment tools that encompass the multidimensional nature of the disease, and we therefore continue to recommend using FEV1 as a percentage of the predicted as a marker of the severity of airflow obstruction, but acknowledge that this may not reflect the impact of the disease in that individual. We have changed the FEV1 cut off points and these now match those in the updated GOLD and new ATS/ERS guidelines, although the terminology is slightly different: an FEV1 of 50–80% predicted constitutes mild airflow obstruction, 30–49% moderate airflow obstruction, and ...". According to these criteria, Mike has mild airflow obstruction and will be treated accordingly.
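Purely as an illustration, the cut-off points quoted above can be expressed as a small helper. This is only a sketch in Python; the quotation is truncated before the lowest band, so the below-30% label is an assumption based on the usual convention rather than something stated in the text, and the function name is illustrative:

def airflow_obstruction_band(fev1_percent_predicted):
    # Bands quoted above: 50-80% mild, 30-49% moderate
    if fev1_percent_predicted > 80:
        return "above the quoted bands (not classified here)"
    if fev1_percent_predicted >= 50:
        return "mild airflow obstruction"
    if fev1_percent_predicted >= 30:
        return "moderate airflow obstruction"
    return "severe airflow obstruction"   # assumed completion of the truncated quotation

print(airflow_obstruction_band(66))       # Mike's FEV1 of 66% predicted -> mild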
But no matter what stage he is at or what pharmacologic interventions are prescribed, we are nevertheless obliged to offer this patient access to an effective nicotine cessation program while in hospital.

Treatment
Acutely, the mainstays of treatment for Mike's level of disease are inhalation and possibly oral therapy, along with pulmonary rehabilitation (Cote & Celli 2005; Paz-Diaz et al. 2007). Of course, underlying bronchopulmonary infection is treated with appropriate antimicrobial therapy.

Inhalation and Oral Therapy
Bronchodilators: Of the three classes of bronchodilator therapy, β-agonists, anticholinergic drugs and methylxanthines, all appear to work by relaxing the airway smooth muscle, which allows better emptying of the lung and an increased tidal volume; FEV1 and total lung volume increase, and dyspnea (subjective air-hunger) is significantly improved, especially during exercise (Celli & Macnee 2004c). Combining short- and long-acting bronchodilators appears to improve lung function better than either alone, and so Mike will be treated with a combination of salbutamol (albuterol) and ipratropium. There are many other agents that could be used that have been shown to be effective in mild disease, such as Mike's (Celli & Macnee 2004b).

Corticosteroids: Inflammation is often part of the acute phase of COPD exacerbations and therefore part of Mike's therapy will be inhaled corticosteroids. Many studies have shown that inhaled corticosteroids produce at least some improvement in FEV1 and ventilatory capacity. It is often necessary for a trial of medication to confirm that a given patient will respond to inhaled corticosteroid treatment (Celli & Macnee 2004a). Ries (2007) claims that inhaled corticosteroids have become the standard of care for patients with COPD, in all phases of severity (Salman et al. 2003). Mike will be offered inhaled corticosteroids.

Pulmonary Rehabilitation
According to a statement of the American Thoracic Society, "[Pulmonary rehabilitation is] a multidisciplinary programme of care for patients with chronic respiratory impairment that is individually tailored and designed to optimise physical and social performance and autonomy".

The Pulmonary Rehabilitation Program
Exercise: Garrod (2007) has shown convincing evidence that exercise significantly modifies systemic inflammation, as measured by CRP and IL-6 levels, which plays such an important role in the pathogenesis of COPD. But rather than target just the pulmonary musculature, Sin et al. (2007) have suggested that the skeletal muscle dysfunction and reduced exercise tolerance, which are important extrapulmonary manifestations of COPD, could in fact be due to the systemic inflammation that is important in COPD. Therefore, Mike will be placed on a regimen of weight training designed to improve his overall muscle strength. In addition he will be offered aerobic exercise treadmill sessions to improve his exercise tolerance, similar to cardiac rehabilitation (Leon et al. 2005).

Nutritional Support: General nutritional status is related to COPD severity (Budweiser et al. 2007; Ischaki et al. 2007) and mortality (Felbinger & Suchner 2003). The cachexia of COPD is a common sign of end-stage pulmonary disease. Mike has mild disease and would not be expected to be suffering from malnutrition. However, an evaluation by a nutritionist and possible early correction of any deficits are part of his pulmonary rehabilitation.
Psychological Support
Depression, anxiety, and somatic symptoms are valid indicators of psychological distress in COPD (Hynninen et al. 2005) and quality of life (Arnold et al. 2006), two very important nursing issues. Much of the psychological distress is related to a sense of personal control, because the illness, especially in its late stages, is so often accompanied by a feeling of loss of control in one's life. Mike is still self-employed and with his mild impairment he is not likely to be feeling these issues, yet. However, caregivers need to be acutely aware that his quality of life may depend upon recognition and early intervention in the future (Gudmundsson et al. 2006; Oga et al. 2007). To that end he will have a psychological evaluation while in hospital to screen for depression or anxiety symptoms.

Educational Support
There are many areas that are very important to Mike as he goes through his pulmonary rehabilitation. In an initial interview, he needs to know what he can and cannot expect from treatment. He needs a person to explain that the damage done so far is not reversible but that there are many treatments available that will allow him to live a good life, if he stops further cigarette use. Issues of promoting a healthy lifestyle, muscle wasting and psychological adjustment are all treatable with information, when it is presented in a sympathetic, firm, supportive atmosphere. Mike needs to know what to expect in the future, if he is able to quit smoking, and if he does not quit smoking. He may not like to hear the truth, but his quality of life will benefit in the years to come from a clear, honest educational program. In addition, Mike needs to understand that he may have exacerbations from time to time and that early intervention by his generalist or pulmonologist is mandatory to avoid more serious consequences. Education that stresses the value of a healthy lifestyle, including regular exercise according to the regimen established in hospital, is very important. Also, education can help considerably in preventing the wasting that, though probably not present now, may become important in the future.

Smoking Cessation
No subject in the COPD literature is clearer than the need for immediate cessation of exposure to all cigarette smoke; and no subject is more frustrating to caregiver and patient alike, at least in those instances where there is poor compliance with the cigarette smoke proscription. We will explore with Mike some of the recommended strategies to accomplish this sometimes elusive, but vitally necessary, goal.

Nicotine Replacement Therapy (NRT)
A recent article by West et al. (2007) reported a prospective study of NRT that was large (2009 smokers), multicultural, involving smokers from the US, UK, Canada, France, and Spain, and of sufficient duration to render generalizable ("real world") results. They concluded that NRT helps smokers' cessation attempts and long-term abstinence rates. However, the 6% improvement rate was not large and this form of cessation therapy should be reserved for those who have tried and failed other methods or programmes. There are many forms of NRT, including nasal and oral nicotine sprays, gum, and patches of varying dosages, currently on the market, but whether they have significant one-year success rates over counselling is an arguable point in the literature. Since Mike now smokes 40 cigarettes daily, he will be offered the 15 mg nicotine patch to help for the initial 20 weeks of cessation.
Bupropion Therapy
Bupropion is a dopamine agonist that has antidepressant effects but is also marketed as a smoking cessation agent. In a study comparing the nicotine patch with bupropion and controls (counselling only), Uyar et al. (2007) reported success rates of 26% for the nicotine patch, 26% for bupropion, and 16% for counselling only at the end of 24 weeks. As an interesting aside, they reported that those who had a Beck depression inventory above 13, i.e. were depressed at the onset of the study, were unsuccessful regardless of treatment or control group. However, because of the small number of smokers involved, there was no statistically significant difference between these groups. The authors conclude that counselling is as effective for cessation attempts as these pharmacologic treatments, and there are no known side effects of being in a control group. However, other studies (Tonnesen et al. 2003) have shown a significant effect of bupropion over placebo.

Internet-Based Assistance
Various groups have tried using an interactive website to help smokers stop smoking. Unfortunately they have yet to show significant positive findings. All that can be said about them is that the more often the smoker logs on to the site, the better his chances are that he will be successful (Japuntich et al. 2006; Mermelstein & Turner 2006; Pike et al. 2007).

Nurse-Conducted Behavioral Intervention
In the UK, Tonnesen et al. (Tonnesen, Mikkelsen, Bremann 2006) found that a combination of nurse-based counselling in conjunction with NRT in patients with COPD was more effective than placebo at 6 and 12 months. As one can readily imagine, there is a plethora of cessation strategies available to assist people in smoking cessation. However, there is no "silver bullet", i.e. one method that fits everybody. It comes down to proper motivation, which we believe is related to education and perhaps other factors. All we can really be sure of is that, of those who try, many will be successful, and "try, try again" seems to be the best advice we can offer. But the most important lesson we can learn is to prevent use of this harmful and addictive substance in the first place. Teenage smoking prevalence is around 15% in developing countries and around 26% in the UK and US. Studies have shown that those who make it past 20 years of age are much less likely to succumb to this addiction (Grimshaw & Stanton 2006).

Conclusion
Assuming Mike ceases to smoke cigarettes, and given a regimen of exercise appropriate to his physical functioning, and with a detailed and robust COPD rehabilitation programme, his prognosis is excellent. By far the most challenging days are yet to come as Mike begins to feel better and the educational material fades from his mind. Many smokers return to their fatal habit within a year. Many, though perhaps not all, could benefit from periodic follow-up sessions with a motivational nurse-counselor.

1902 words not counting references

References
Arnold, R., Ranchor, A. V., Koeter, G. H., de Jongste, M. J., Wempe, J. B., ten Hacken, N. H., Otten, V., Sanderman, R. 2006, Changes in personal control as a predictor of quality of life after pulmonary rehabilitation, Patient.Educ.Couns., vol. 61, no. 1, pp. 99-108. Budweiser, S., Meyer, K., Jorres, R. A., Heinemann, F., Wild, P. J., Pfeifer, M. 2007, Nutritional depletion and its relationship to respiratory impairment in patients with chronic respiratory failure due to COPD or restrictive thoracic diseases, Eur.J.Clin.Nutr. Celli, B. R. Macnee, W.
Celli, B. R. & Macnee, W. 2004a, Standards for the diagnosis and treatment of patients with COPD: a summary of the ATS/ERS position paper, Eur.Respir.J., vol. 23, no. 6, pp. 932-946.
Celli, B. R. & Macnee, W. 2004b, Standards for the diagnosis and treatment of patients with COPD: a summary of the ATS/ERS position paper, Eur.Respir.J., vol. 23, no. 6, pp. 932-946.
Celli, B. R. & Macnee, W. 2004c, Standards for the diagnosis and treatment of patients with COPD: a summary of the ATS/ERS position paper, Eur.Respir.J., vol. 23, no. 6, pp. 932-946.
Cote, C. G. & Celli, B. R. 2005, Pulmonary rehabilitation and the BODE index in COPD, Eur.Respir.J., vol. 26, no. 4, pp. 630-636.
Felbinger, T. W. & Suchner, U. 2003, Nutrition for the malnourished patient with chronic obstructive pulmonary disease: more is better!, Nutrition, vol. 19, no. 5, pp. 471-472.
Garrod, R., Ansley, P., Canavan, J. & Jewell, A. 2007, Exercise and the inflammatory response in chronic obstructive pulmonary disease (COPD): does training confer anti-inflammatory properties in COPD?, Med.Hypotheses, vol. 68, no. 2, pp. 291-298.
Grimshaw, G. M. & Stanton, A. 2006, Tobacco cessation interventions for young people, Cochrane.Database.Syst.Rev., no. 4, p. CD003289.
Gudmundsson, G., Gislason, T., Janson, C., Lindberg, E., Suppli, U. C., Brondum, E., Nieminen, M. M., Aine, T., Hallin, R. & Bakke, P. 2006, Depression, anxiety and health status after hospitalisation for COPD: a multicentre study in the Nordic countries, Respir.Med., vol. 100, no. 1, pp. 87-93.
Halpin, D. 2004, NICE guidance for COPD, Thorax, vol. 59, no. 3, pp. 181-182.
Heaney, L. G., Lindsay, J. T. & McGarvey, L. P. 2007, Inflammation in chronic obstructive pulmonary disease: implications for new treatment strategies, Curr.Med.Chem., vol. 14, no. 7, pp. 787-796.
Hynninen, K. M., Breitve, M. H., Wiborg, A. B., Pallesen, S. & Nordhus, I. H. 2005, Psychological characteristics of patients with chronic obstructive pulmonary disease: a review, J.Psychosom.Res., vol. 59, no. 6, pp. 429-443.
Ischaki, E., Papatheodorou, G., Gaki, E., Papa, I., Koulouris, N. & Loukides, S. 2007, Body mass and fat free mass indices in COPD: relation with variables expressing disease severity, Chest.
Japuntich, S. J., Zehner, M. E., Smith, S. S., Jorenby, D. E., Valdez, J. A., Fiore, M. C., Baker, T. B. & Gustafson, D. H. 2006, Smoking cessation via the internet: a randomized clinical trial of an internet intervention as adjuvant treatment in a smoking cessation intervention, Nicotine.Tob.Res., vol. 8, Suppl 1, pp. S59-S67.
Lancaster, T., Hajek, P., Stead, L. F., West, R. & Jarvis, M. J. 2006, Prevention of relapse after quitting smoking: a systematic review of trials, Arch.Intern.Med., vol. 166, no. 8, pp. 828-835.
Leon, A. S., Franklin, B. A., Costa, F., Balady, G. J., Berra, K. A., Stewart, K. J., Thompson, P. D., Williams, M. A. & Lauer, M. S. 2005, Cardiac rehabilitation and secondary prevention of coronary heart disease: an American Heart Association scientific statement from the Council on Clinical Cardiology (Subcommittee on Exercise, Cardiac Rehabilitation, and Prevention) and the Council on Nutrition, Physical Activity, and Metabolism (Subcommittee on Physical Activity), in collaboration with the American Association of Cardiovascular and Pulmonary Rehabilitation, Circulation, vol. 111, no. 3, pp. 369-376.
Mermelstein, R. & Turner, L. 2006, Web-based support as an adjunct to group-based smoking cessation for adolescents, Nicotine.Tob.Res., vol. 8, Suppl 1, pp. S69-S76.
Oga, T., Nishimura, K., Tsukino, M., Sato, S., Hajiro, T. & Mishima, M. 2007, Longitudinal deteriorations in patient reported outcomes in patients with COPD, Respir.Med., vol. 101, no. 1, pp. 146-153.
Paz-Diaz, H., Montes de Oca, M., Lopez, J. M. & Celli, B. R. 2007, Pulmonary rehabilitation improves depression, anxiety, dyspnea and health status in patients with COPD, Am.J.Phys.Med.Rehabil., vol. 86, no. 1, pp. 30-36.
Pike, K. J., Rabius, V., McAlister, A. & Geiger, A. 2007, American Cancer Society's QuitLink: randomized trial of Internet assistance, Nicotine.Tob.Res., vol. 9, no. 3, pp. 415-420.
Ries, A. L., Bauldoff, G. S., Carlin, B. W., Casaburi, R., Emery, C. F., Mahler, D. A., Make, B., Rochester, C. L., Zuwallack, R. & Herrerias, C. 2007, Pulmonary rehabilitation: joint ACCP/AACVPR evidence-based clinical practice guidelines, Chest, vol. 131, no. 5 Suppl, pp. 4S-42S.
Salman, G. F., Mosier, M. C., Beasley, B. W. & Calkins, D. R. 2003, Rehabilitation for patients with chronic obstructive pulmonary disease: meta-analysis of randomized controlled trials, J.Gen.Intern.Med., vol. 18, no. 3, pp. 213-221.
Sin, D. D. & Man, S. F. 2007, Systemic inflammation and mortality in chronic obstructive pulmonary disease, Can.J.Physiol Pharmacol., vol. 85, no. 1, pp. 141-147.
Srivastava, P. K., Dastidar, S. G. & Ray, A. 2007, Chronic obstructive pulmonary disease: role of matrix metalloproteases and future challenges of drug therapy, Expert.Opin.Investig.Drugs, vol. 16, no. 7, pp. 1069-1078.
Tonnesen, P., Mikkelsen, K. & Bremann, L. 2006, Nurse-conducted smoking cessation in patients with COPD using nicotine sublingual tablets and behavioral support, Chest, vol. 130, no. 2, pp. 334-342.
Tonnesen, P., Tonstad, S., Hjalmarson, A., Lebargy, F., Van Spiegel, P. I., Hider, A., Sweet, R. & Townsend, J. 2003, A multicentre, randomized, double-blind, placebo-controlled, 1-year study of bupropion SR for smoking cessation, J.Intern.Med., vol. 254, no. 2, pp. 184-192.
Uyar, M., Filiz, A., Bayram, N., Elbek, O., Herken, H., Topcu, A., Dikensoy, O. & Ekinci, E. 2007, A randomized trial of smoking cessation: medication versus motivation, Saudi.Med.J., vol. 28, no. 6, pp. 922-926.
West, R. & Zhou, X. 2007, Is nicotine replacement therapy for smoking cessation effective in the real world? Findings from a prospective multinational cohort study, Thorax.

Is Power the Same as Violence?

Huang Li

Introduction

For a long time in history, the coercive side of power and the destructive results that power rivalry brings have depicted power as something fearsome and threatening. It has been viewed as closely related to force and violence, or as very similar to them. Only in modern democratic societies has the meaning of power gradually been enriched by the increasing role of rational recognition in power relations. This essay intends to show that power is not the same as violence; it is more than that because of the most fundamental difference: rational recognition. Power is not only composed of coercive force that resembles violence; more importantly, it involves the force of social recognition, which violence lacks. Power is a mutually regulated communicative process rather than something simply exercised by the powerful over the powerless. After identifying some basic differences between power and violence, this essay will focus on power and power relations, to explore the major difference between power and violence, namely rational recognition, and why this is so.
On the one hand, it will show that power can create violence and contains coercive elements by demonstrating why power is not a one-way event; on the other hand, it will show that power is more a matter of mutual constraint, in that rational recognition and others' willingness to accept it distinguish power from violence. Scholars like Weber view power as a means rather than an end, backed by violence, threat or inducement; Mann illustrates power as resources that can be occupied; Parsons and Foucault both intend to reconstruct power but still proceed within the realm of violence theory. This essay mostly follows the ideas of Honneth, Arendt and Habermas, but attempts to avoid the other extreme of equating power purely with the power of rationality or of consensus reached through communicative processes. It sees power as a combination of coercive and rational forces and avoids placing power in simple opposition to violence, since historically power has been devastating too and violence can be "an attempt to achieve justice" (Gilligan, 2000, 11).

Basic Differences: Power Dependent on Numbers, Violence on Implements

Arendt defines power in the context of groups of individuals as "the human ability not just to act but to act in concert" (1972, 143). One individual alone does not generate power; power is the aggregate strength of all the individuals in a group. So the exercise of power is preconditioned on numbers. Unlike power, violence does not require numbers or groups in order to be violence. Rather, it depends on implements to "multiply strength, to a point at which they can replace it" (Arendt, 1972, 145), instead of becoming power. Violence is designed and applied to expand one's physical strength; it is wholly instrumental and always a means to some purpose, whereas power in itself can serve as an end. There is a categorical distinction in this sense.

Is Power a One-way Event?

If violence is not an end in itself, it is a "blinding rage that speaks through the body" (Gilligan, 2000, 55) and the hope of those who do not possess power. So violence can be initiated by the powerless against the powerful, such as slaves against slave owners, or the ruled against the ruling. Such power relations cast those in power as subjects and those under power as objects, to be controlled and manipulated. Power in such a one-way model rests on certain conditions that are understood as its sources. Mann identifies four sources of power: ideological, economic, military and political (1970, 35); those who occupy these resources own power. A society is thus divided into two kinds of people in a one-way power structure. If the will of those in power is not carried out, the ruled will be punished, possibly with violence, and they in turn stand up and resist, with violence, for power. It is not difficult to conclude that, in a binary opposition, power and violence can be cause and effect of each other and are in fact two sides of the same coin. Following the Hobbesian proposition, it should be admitted that power does contain certain aspects of violence, historically and theoretically, when it is understood as something that can be possessed like a resource. However, what can the ruled rely upon in their struggle if they have no resources at all? If, in the case of ideology, any interpretation by the powerless were meaningless and invalid, why would those in power need to oppress and control them? And would there be any struggle within the powerful and within the powerless?
Power is Mutually Agreed: Rational Recognition of Imbalance

Clearly, such a violence-centred picture of power is not the whole picture. Power is more than something that can be owned and preserved; it exists only when "exercised by some on others" (Foucault, 2003, 126) and is "dispersed once the group ceases to exist" (Arendt, 1972, 143). Power is a "structural feature of human relations" (Elias, 1998, 188). Slaves have power over the slave owner too, as long as they are valuable to him; their power depends on the degree to which their owner relies on them; the same holds between parents and children, and between teachers and students. In reality, if an individual or group acquires the power to implement its own will, that power is not fully realized unless the ruled acknowledge it; they do not simply accept power, they respond to it according to their own will. So power is not necessarily a unilateral process in which one party is dominated and controlled by the other; it exists in interdependence and mutual constraint among people with different levels of resources; it is both "pervasive and negotiated" (Gosling, 2007, 3). Power is regulated and negotiated not only between the ruling and the ruled but also within each group. The former power relations are coercive because the power is legitimized by laws, regimes or organizations. The latter may lack these elements, yet power relations and interactions still take place, because some individuals will still seek to persuade and influence others in exchange for recognition of authoritative positions, through knowledge, money and personal networks, in order to implement their own will and respond better to power relations at the "most micro levels" (michel-foucault.com). In fact, power relations at the micro level are where power relations between hierarchies originate. At this level it is, to a larger extent, the power of rational recognition rather than the power of force that gives rise to particular power relations. Since interdependence always exists among people regardless of their positions of power, a power relation is a dynamic equilibrium, and mutual regulation of power is always present, even in the extreme case of slaves and slave owners. However, if the power relations regulated by rational recognition are neglected, those built upon them at the macro level will be shaken. Although power relations are mutually regulated and communicatively rational, degrees of interdependence differ, which leads to unbalanced relationships among the players. In fact, power is to some extent demonstrated precisely by such imbalance; violence, too, manifests itself in a kind of imbalance; but power goes further, for what distinguishes it is others' recognition of that imbalance. When the imbalance is maintained by pure coercive force, it is violence; when rational force is included, it begins to turn into power. In any circumstance, power is a combination of both.

Bifacial Nature of Power

When examined in Habermas's terms of "facts and norms", power likewise includes two dimensions, described as "facticity" and "validity". The facticity dimension reveals the coercive nature of power: power, in any form, potentially contains coercive force for realizing goals and excluding impediments. This aspect of power is underpinned by violence or the threat of violence, which exist as real and concrete facts.
The other dimension, validity, refers to power's tendency to gain rational recognition from others. Though the two dimensions coexist in power, together with the tension between them, they are not always equally visible. In a tyrannical society power shows more of its coercive side, whereas in a democratic society the power of rational recognition is more compelling.

Violence Does Not Create Power but Destroys It

As discussed so far, power involves elements of coercion and can generate violence. But does it work the other way around: can violence also produce power? In many scholars' understanding, violence is viewed as a resource that "can be mobilized to enforce the compliance of others" (Ray, 2011, 13). Usually exercised by those in power, it creates the ability of an individual or group to achieve their own goals or aims even when others are trying to prevent them from realizing them. Thus violence is naturally seen as a source of power. However, is what one gains by using violence, or what violence creates, truly power? When a government turns to violence against its own people or a foreign country, or an individual uses violence to acquire what is wanted, it is generally because the power in their hands is running out and violence is the last resort. While such a government or individual does not lack the means of violence, they are in fact short of power; to be more accurate, they lack recognition of their will by others. When violence as a resource is used against another, it not only consumes the resource itself but also diminishes what little power is left. Violence is always the choice of the impotent, not the powerful. Viewed in this sense, violence amounts only to coercive means, regardless of others' recognition. It emerges when "social ensembles are incoherent, fragmented and decadent" (Wieviorka, 2009, 165). Therefore, as violence "inevitably destroys power, it can never generate power" (Arendt, 1972, 152). There is no "continuity between obedience to command (the enactment of power) and obedience to law (as legitimate authority)" (Ray, 2011, 13). A government that relies solely on violence has no power, and "tyranny is both the least powerful and the most violent form of government" (Arendt, 1972, 140).

Reproduction of Power and Violence

In the past, power was largely associated with gains of interest or the occupation of social resources such as those identified by Michael Mann. In The Struggle for Recognition, Honneth reveals the "force of recognition" behind power. Once this point is taken into consideration, the reproduction of power is no longer just about violent competition or rivalry for social resources, but rather about the willingness of others to acknowledge and accept. Arendt insists that violence does not give rise to power because she believes that social recognition is missing in violence. When power is taken as a combination of coercive and rational forces, it may be understood as a relationship of mutual recognition among a group of people, backed by the potential threat each holds for the others. The reproduction of power therefore naturally includes attempts to occupy as many resources as possible for greater coercive capability; but it is indispensable, and more important, to gain recognition from others. If authoritative coercion is a source of power, it is not the only source. Rational recognition also generates power.
So political power is not merely the potential capability to implement one's own goals or realize one's own interests; it relies on those over whom the power is exercised to define what power truly is. The power of a government is conferred through the people's recognition; in other words, the coercive force of the government is agreed to by the people. Applied at the micro level, the power between individuals does not arise only from the lure of interests or the constraint of violence; it rests on one's recognition of another's will and authority over oneself. Only when such recognition exists can the will be implemented without enforcement, and power becomes power rather than violence. By contrast, violence is concerned only with how one's own goals are reached through forceful means. Violence is always destructive, never constructive. Terrorist attacks do not increase the power of the terrorists; they breed intimidation and control, while giving the government the power to do what it could not do before and to expand its sphere of influence. Violence reinforces state power and makes more violence necessary in order to maintain and reproduce itself.

Conclusion

When power is perceived through violence theory, man is to be controlled and manipulated, instrumentalized in a subject-object relationship in which one party tries to dominate the other in struggles over power resources, in order to preserve power and prevent others from seizing it. Power in that sense equals violence, as has been observed throughout history. While power will fail if it is not supported by forceful and compulsory means, these alone are not sufficient. What cannot be overlooked is an "infinitely complex network of 'micropowers', of power relations that permeate every aspect of social life" (Sheridan 1980: 139). Where rational recognition also creates power, power can be compelling without being violent. Thus, viewed in a rational context, man becomes a dialogue partner, with competition, compromise and cooperation coexisting. Mutual regulation and interdependence are among the features of such a power relationship, and mutual understanding and respect are part of the foundation of the reproduction of power. Recognition of imbalance between people, particularly by those over whom power is exercised, legitimizes power and differentiates it from violence. Power and violence are not the same; the former is more than the latter. Power "cannot be overthrown and acquired once and for all by the destruction of institutions and the seizure of state apparatuses" (Sheridan 1980: 139). Unlike violence, power is not unitary, nor is its exercise binary; it is interactive, and a very important part of the power struggle is the rivalry for recognition. In modern democratic societies, the violent aspect of power is decreasing and increasingly giving way to rational recognition in shaping power. The major resources of power are no longer just one's own military or economic capability; power depends more on how convincing it is for others to accept and, in the end, on how well it is recognized and received by others.

Bibliography:

Arendt, Hannah, (1972), "On Violence" in Crises of the Republic, New York: Harcourt Brace Company, pp. 103-184.
Elias, Norbert, (1998), "On Civilization, Power, and Knowledge", Chicago: University of Chicago Press, chapter 7.
Foucault, Michel, (2003), "The Subject and Power" in The Essential Foucault, P. Rabinow, ed., New York: The New Press, pp. 126-144.
Gilligan, James, (2000), "Violence: Reflections on Our Deadliest Epidemic", London: Jessica Kingsley, pp. 1-60.
Gosling, David, (2007), "Micro-Power Relations Between Teachers and Students Using Five Perspectives on Teaching in Higher Education", available at: http://www.davidgosling.net/userfiles/micro power relations isl 2007.pdf, last accessed on 7 Dec. 2014.
Habermas, J., (1996), "Between Facts and Norms", Massachusetts: The MIT Press.
Honneth, Axel, (1996), "The Struggle for Recognition: The Moral Grammar of Social Conflicts", Massachusetts: The MIT Press.
Mann, Michael, (1970), "The Source of Social Power", Cambridge University Press, chapter 2, pp. 34-72.
Michel-foucault.com, (2007), Key concepts, available at: http://www.michel-foucault.com/concepts/index.html, last accessed on 6 Dec. 2014.
Ray, Larry, (2011), "Violence and Society", London: Sage, pp. 6-23.
Shabani, A. Payrow, (2004), "Habermas' Between Facts and Norms: Legitimizing Power?", available at: https://www.bu.edu/wcp/Papers/Poli/PoliShab.htm, last accessed on 6 Dec. 2014.
Wieviorka, Michel, (2009), "Violence: A New Approach", London: Sage, p. 165.