If some of these IT infrastructure components, like storage and telecommunications, have gotten so cheap, why does it seem like companies are spending more and more money on information technology? It's because users are demanding better, faster, and easier ways to use computers and more ways to communicate with others. Let's discuss some of the hardware technologies that are helping companies meet the growing technology demands of employees, customers, suppliers, and business partners.

The Emerging Mobile Digital Platform

Anytime, anywhere, 24/7, 365. That's what computer users now expect. Technology manufacturers are meeting the demand with a host of new communication devices such as cell phones and smartphones. The newest gadgets on the market are tablets and e-book readers like the Kindle from Amazon.com or Barnes & Noble's Nook reader. Smartphones are getting, well, smarter, and giving users more reasons to migrate away from traditional desktop PC computing. Tablets are miniaturized subnotebooks built specifically for wireless communications and Internet access. Even though they may be small in size, they still pack a lot of computing power.

Grid Computing

Take a moment and think about how much time you don't use your personal computer. It's actually quite a lot; in fact, most computers are idle more often than not. What if you could combine all the idle time of hundreds or thousands of computers into a continuous, connected computing capacity to capture, process, manage, store, and retrieve data? You wouldn't have to purchase mammoth supercomputers to realize this capability and capacity.


Grid computing is the technique of utilizing the idle computational resources of separate, geographically remote computers to create a single virtual supercomputer. In this process, a server computer breaks data and applications into discrete chunks that are parceled out to the grid's machines. Grid computing appeals to companies for three main reasons:

  • Cost savings
  • Computational speed
  • Computational agility
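The split-and-combine pattern behind grid computing can be illustrated with a small sketch. This is not real grid middleware; it simply simulates the coordinator and the grid's machines with local worker processes, and the function names are invented for illustration:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    """Work done independently on one machine of the grid
    (here simulated by a local worker process)."""
    return sum(x * x for x in chunk)

def grid_sum_of_squares(data, n_workers=4):
    # The coordinator breaks the data into discrete chunks...
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...parcels them out to the grid's machines...
    with Pool(n_workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # ...and combines the partial results into one answer.
    return sum(partials)

if __name__ == "__main__":
    print(grid_sum_of_squares(list(range(1000))))  # 332833500
```

A real grid (for example, one built on volunteer-computing middleware) replaces the local worker pool with remote machines, but the parcel-out-and-aggregate logic is the same.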


Virtualization and Multicore Processors

Virtualization is the process of presenting a set of computing resources (such as computing power or data storage) so that they can all be accessed in ways that are not restricted by physical configuration or geographic location. Server virtualization enables companies to run more than one operating system at the same time on a single machine. Most servers run at just 10 to 15 percent of capacity, and virtualization can boost server utilization rates to 70 percent or higher. Here's a list of the benefits businesses enjoy from using virtualization:

  • Increase equipment utilization rates
  • Conserve data center space and energy usage
  • Require fewer computers and servers
  • Combine legacy applications with newer applications
  • Facilitate centralization and consolidation of hardware administration
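The utilization figures above imply a concrete consolidation payoff. A rough back-of-the-envelope sketch, with illustrative numbers and the simplifying assumption that workloads consolidate additively:

```python
import math

def hosts_needed(n_servers, avg_utilization, target_utilization):
    """Estimate how many physical hosts remain after virtualization,
    assuming the combined load simply adds up across servers."""
    total_load = n_servers * avg_utilization   # in whole-server units
    return math.ceil(total_load / target_utilization)

# 20 physical servers, each only 12% busy, consolidated onto
# virtualized hosts run at a 70% utilization target:
print(hosts_needed(20, 0.12, 0.70))  # 4
```

Real capacity planning must also account for peak (not average) load and failover headroom, but the arithmetic shows why utilization rates drive the hardware savings.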


A multicore processor is an integrated circuit that contains two or more processor cores. This technology enables two or more processing engines with reduced power requirements and heat dissipation to perform tasks faster than a resource-hungry chip with a single processing core.
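A quick way to see the benefit of multiple cores is to split one CPU-bound job into independent sub-ranges, one per core. The sketch below uses only Python's standard library; the prime-counting task is just a stand-in for any divisible workload:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """CPU-bound work that one core can run independently."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    cores = os.cpu_count() or 2
    # Split one big range into one sub-range per core.
    step = 100_000 // cores
    ranges = [(i * step, (i + 1) * step) for i in range(cores)]
    ranges[-1] = (ranges[-1][0], 100_000)  # cover any remainder
    with ProcessPoolExecutor(max_workers=cores) as ex:
        total = sum(ex.map(count_primes, ranges))
    print(total)
```

Each sub-range runs on its own core simultaneously, so wall-clock time drops roughly in proportion to the number of cores for work like this.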

Cloud Computing and the Computing Utility

Cloud computing was already defined in this chapter. Basically, cloud computing is defined by five characteristics:

  • On-demand self-service: Users can access computing capabilities whenever and wherever they need them.
  • Ubiquitous network access: No special devices are necessary for accessing data or services.
  • Location-independent resource pooling: Users don't need to be concerned about where the data are stored.
  • Rapid elasticity: Computing resources expand and contract as necessary to serve users.
  • Measured service: Users pay only for the computing capabilities actually used.
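The "measured service" characteristic amounts to metered billing. A minimal sketch, with made-up rates that do not reflect any real provider's prices:

```python
def monthly_bill(cpu_hours, gb_stored, gb_transferred,
                 cpu_rate=0.05, storage_rate=0.10, transfer_rate=0.09):
    """Measured service: the bill reflects only what was consumed.
    All rates here are illustrative, not any real provider's prices."""
    return (cpu_hours * cpu_rate          # compute time actually used
            + gb_stored * storage_rate    # storage actually occupied
            + gb_transferred * transfer_rate)  # bandwidth actually moved

# One small server running all month, modest storage and traffic:
print(round(monthly_bill(cpu_hours=720, gb_stored=50, gb_transferred=100), 2))  # 50.0
```

Contrast this with owning the hardware: an idle owned server still costs its full purchase and maintenance price, while a metered bill falls to near zero when usage does.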

Figure 5-10 Cloud Computing Platform


Almost any type of computing device can access data and applications from these clouds through three types of services:

  • Cloud infrastructure as a service: Allows customers to process and store data and use networking and other resources available from the cloud.
  • Cloud platform as a service: The service provider offers infrastructure and programming tools to customers so they can develop and test applications.
  • Cloud software as a service: The vendor provides software programs on a subscription-fee basis.


Cloud computing is becoming popular because customers pay only for the computing infrastructure that they actually use. In many cases users experience lower IT costs than if they had to buy all the equipment, hire the technical staff to run and maintain it, and purchase software applications. This type of on-demand computing is especially beneficial to small and medium-size companies, since they can easily scale their IT requirements up and down as the pace of their business demands. Larger organizations, however, may not want their most sensitive data stored on servers they don't control. System reliability is also a special concern for all businesses; the unavailability of business data and applications for even a few hours may be unacceptable. Three kinds of clouds are available:

  • Public cloud: Service providers use the Internet to make resources, such as applications and storage, available to the general public. Examples of public clouds include Amazon Elastic Compute Cloud (EC2), IBM's Blue Cloud, Sun Cloud, Google AppEngine, and Windows Azure Services Platform.
  • Private cloud: These clouds are data center architectures owned by a single company that provide flexibility, scalability, provisioning, automation, and monitoring. The goal of a private cloud is not to sell "as-a-service" offerings to external customers but to gain the benefits of cloud architecture without giving up control of the company's own data center.
  • Hybrid cloud: By using a hybrid cloud, companies can maintain control of an internally managed private cloud while relying on the public cloud as needed. For instance, during peak periods, individual applications, or portions of applications, can be migrated to the public cloud.


Autonomic Computing

Autonomic computing is a step toward creating an IT infrastructure that is able to diagnose and fix problems with very little human intervention. It is an industry-wide effort to develop systems that can configure, optimize, repair, and protect themselves against intruders and viruses, freeing system administrators from routine system management and reducing costly system crashes. Today's antivirus software with automatic virus updates is one example of autonomic computing. Thus autonomic computing features systems that can:

  • Configure themselves
  • Optimize and tune themselves
  • Heal themselves when broken
  • Protect themselves from intruders and self-destruction
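The "heal themselves when broken" property can be sketched as a supervisor that restarts a failed task automatically instead of paging an operator. This is only an illustration of the idea, not a production autonomic system; the function and the flaky task are invented for the example:

```python
import time

def self_heal(task, max_restarts=3, backoff=0.1):
    """Autonomic 'self-healing' sketch: run a task, and when it
    fails, restart it automatically with no human intervention."""
    attempts = 0
    while True:
        try:
            return task()                # normal, healthy run
        except Exception:
            attempts += 1                # the system noticed a fault...
            if attempts > max_restarts:
                raise                    # escalate only as a last resort
            time.sleep(backoff)          # ...and heals itself by retrying

# A simulated task that crashes twice before succeeding:
state = {"failures_left": 2}
def flaky():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        raise RuntimeError("simulated crash")
    return "ok"

print(self_heal(flaky))  # ok
```

Real autonomic systems add the other three properties on top of this loop: detecting the fault's cause, re-tuning configuration before restarting, and quarantining anything that looks like an attack.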


Although this type of computing is still rather new, it promises to relieve the burden many companies experience in trying to maintain massive, complex IT infrastructures.