Maulik R. Kamdar
Department of Biotechnology, Indian Institute of Technology, Kharagpur
While preparing research documentation for the ITC ‘Networking for Green Earth’ competition, I came across several interesting articles on Green Computing, which formed the basis of this review.
Abstract – This document gives a brief introduction to the new and budding concept of Green Computing and Green IT. With the rapid advent of new software architectures and technology derivatives, it has become imperative to find methods and designs that minimize, if not eliminate, energy consumption. The thrust of computing was initially on faster analysis and speedier solution of complex problems, but in the recent past another focus has gained immense importance: energy efficiency and the minimization of the power consumed by electronic equipment. Equal attention is being paid to minimizing e-waste and to the use of non-toxic materials in the manufacture of electronic equipment. This document briefly describes power-efficient application design, considering tools that help the architect achieve scalability without producing energy waste or toxic e-waste. Economic prosperity in the coming years will depend on the growth of IT infrastructure, and environmental sustainability needs must be addressed in parallel with this growth.
Keywords – Green Computing, Green Technology, Green IT, Software architectures, Economic sustainability.
I. Introduction
Green computing, or green IT, refers to environmentally sustainable computing or IT. It is “the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—such as monitors, printers, storage devices, and networking and communications systems—efficiently and effectively with minimal or no impact on the environment. Green IT also strives to achieve economic viability and improved system performance and use, while abiding by our social and ethical responsibilities. Thus, green IT includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling. It is the study and practice of using computing resources efficiently” [1].
Rather than viewing the environmentally sustainable data centre as a product feature checklist for a one-time win, serious IT architects are adopting a more comprehensive sustainability plan in their data centre system design. While new technology from the industry continues to drive efficiency into the IT infrastructure, environmental systemic quality metrics need to be built into every part of the IT architectural process, and an ongoing architectural commitment is required at all levels of the infrastructure, beyond a typical product procurement strategy.
In 1992, the U.S. Environmental Protection Agency launched Energy Star, a voluntary labeling program designed to promote and recognize energy efficiency in monitors, climate control equipment, and other technologies. This resulted in the widespread adoption of sleep mode among consumer electronics. The term “green computing” was probably coined shortly after the Energy Star program began; there are several USENET posts dating back to 1992 which use the term in this manner.
The need to reduce power consumption is obvious. Gone are the days of measuring data centers by square foot of space. Now, data centers are increasingly sized by the watt. More efficient technologies with new capabilities are being promoted as magic cures. Yet, saving energy is a much more complex architectural problem, requiring a coordinated array of tactics, from architectural power management capacity planning techniques to optimizing operational processes and facilities design.
II. Assessing the Current Problems
A typical desktop PC system comprises the computer itself (the CPU or the “box”), a monitor, and a printer. A typical CPU may require approximately 200 watts of electrical power; add 70-150 watts for a 17-19 inch monitor (proportionately more for larger monitors). Conventional laser printers can draw 100 watts or more when printing, though much less when idling in “sleep mode,” while ink jet printers use as little as 12 watts while printing and 5 watts while idling [2]. How a user operates the computer also factors into energy costs. Take first the worst case of continuous operation: a 200-watt PC system running day and night, every day, incurs direct annual electricity costs of over $130 (at $0.075/kWh). In contrast, operating the same system only during normal business hours, say 40 hours per week, costs about $30 per year. Neither figure accounts for the cost of additional ambient room cooling. Given the tremendous benefits of computer use, neither figure may seem like much, but multiplied across the many hundreds or thousands of computers in use, the energy cost and waste become enormous. Conventional computer manufacturing also uses lead, cadmium, mercury, and other toxic materials; a computer can contain 4 to 8 pounds of lead alone, according to green experts. It is no wonder that computers and other electronics make up two-fifths of all lead in landfills [3].
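The arithmetic behind these two figures is straightforward. The minimal sketch below simply reproduces the calculation using the 200-watt system and $0.075/kWh tariff quoted above; the two usage scenarios are the ones from the text.

```python
# Reproduces the annual cost estimates quoted above for a 200 W PC system
# at a tariff of $0.075/kWh, for the two usage scenarios in the text.
SYSTEM_WATTS = 200
PRICE_PER_KWH = 0.075

def annual_cost(hours_per_year: float) -> float:
    """Direct electricity cost in dollars for the given yearly usage."""
    kwh = SYSTEM_WATTS * hours_per_year / 1000
    return kwh * PRICE_PER_KWH

print(f"Continuous operation:   ${annual_cost(24 * 365):.2f} per year")  # ~ $131
print(f"Business hours (40/wk): ${annual_cost(40 * 52):.2f} per year")   # ~ $31
```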
Data centers consume 10 to 100 times more energy per square foot than typical office/classroom space, due to their servers, network gear, and storage [5]. A typical data centre consumes energy in four basic areas:
- Critical computational systems (servers, networks, storage)
- Cooling systems
- Power conversion such as power distribution units (PDU)
- Hosting (everything else: lighting, and so on).
The most common environmental impact measurement is labelled carbon footprint, usually measured in tCO2eq (metric tons of CO2 equivalent) based on the source of energy and amount consumed, manufacturing and logistics impact (often labelled embodied cost), as well as end-of-life impact (e-waste, environmental externalities, and so on). Redemtech’s “Sustainable Computing Assessment” benchmarks organizations from a range of industries, including insurance, against sustainable best practices in five key areas: productivity, reuse, accountability, energy, and environmental social responsibility. A score of 75% or better in each category indicates a mature green IT program. However, the assessment’s scores averaged from 32% to 37%, with the highest scores coming in the area of energy efficiency. The study reveals that most companies lack holistic policies for promoting all four cornerstones of sustainable computing: extended lifecycles, energy efficiency, utilization and reuse, and responsible recycling. Even companies with coherent policies lack the governance needed to ensure that operations are aligned with the sustainability priorities of the business, the assessment finds [8].
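As a simple illustration of the tCO2eq measure described above, the sketch below converts an annual energy consumption figure into a carbon footprint. The consumption value and the grid emission factor are illustrative assumptions only; real factors depend on the local energy mix.

```python
# Illustrative sketch: converting a data centre's annual energy use into a
# carbon footprint in tCO2eq. Both input numbers are made-up placeholders.
def carbon_footprint_tco2eq(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Return metric tons of CO2-equivalent for the energy consumed."""
    return energy_kwh * grid_kg_co2_per_kwh / 1000.0  # kg -> metric tons

annual_kwh = 1_500_000   # hypothetical yearly consumption of a small data centre
emission_factor = 0.5    # hypothetical kg CO2eq per kWh of grid electricity
print(f"{carbon_footprint_tco2eq(annual_kwh, emission_factor):.1f} tCO2eq per year")
```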
III. A Few Implementation Techniques
The following are a few methods of applying Green Computing commercially and bringing the benefits of Green IT into practice. Some of these methods are idealized, and realizing them may require considerable adaptation and modification.
A. Virtualization
Virtualization is touted as a key solution for optimizing data-centre efficiency. It allows clients to consolidate work onto fewer computers, increasing utilization, which can significantly reduce energy and maintenance bills and simplify their infrastructure.
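A back-of-the-envelope sketch of this consolidation argument is shown below. The server counts and wattages are illustrative assumptions, not measurements; the point is only that fewer, better-utilized hosts draw less total power than many lightly loaded ones.

```python
# Rough sketch of the consolidation arithmetic behind virtualization.
# All wattages and counts below are assumptions chosen for illustration.
PHYSICAL_SERVERS = 20   # lightly utilized standalone servers
WATTS_PER_SERVER = 300  # assumed average draw per physical server
VIRTUAL_HOSTS = 4       # consolidated hosts running the same workloads
WATTS_PER_HOST = 450    # assumed draw of a more heavily utilized host
HOURS_PER_YEAR = 24 * 365

before_kwh = PHYSICAL_SERVERS * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000
after_kwh = VIRTUAL_HOSTS * WATTS_PER_HOST * HOURS_PER_YEAR / 1000
print(f"Estimated energy before: {before_kwh:,.0f} kWh/year")
print(f"Estimated energy after:  {after_kwh:,.0f} kWh/year")
print(f"Estimated saving:        {(1 - after_kwh / before_kwh):.0%}")
```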
- Level-0: Level 0 (“Local”) in the virtualization maturity model means no virtualization at all. Even with no virtualization, there is plenty of scope for energy savings.
Enable power saving mode: Most PCs are idle for the majority of the time and can, in theory, be turned off when idle. This can generate enormous energy savings. It has some implications for application design, as application designers need to consider the platform (client and/or server) going to sleep and waking up.
Minimize the amount of data stored: Stored data consumes power because application databases and file systems need disks to be operating in order to hold it. Reducing the amount of data stored therefore reduces the data storage infrastructure an application requires, and with it the power consumed. Efficient data archiving approaches can assist here.
Design and code efficient applications: In the past, developers were forced to code carefully because computers were so slow that inefficient code made applications unusable. Moore’s Law has given us more powerful computer hardware at a faster rate than we can consume it, resulting in applications that can appear to perform well even though their internal architecture and coding may be wasteful and inefficient. Inefficient code consumes more CPU cycles, and therefore more energy.
- Level-1: Level 1 (“Logical Virtualization”) in the virtualization maturity model introduces the idea of sharing applications, for example through departmental servers running applications that are accessed by many client PCs. This first appeared in the mainstream as “client/server” technology, and later with more sophisticated N-tier structures. Moving to Level 1 is all about rationalizing the number of applications and application platforms where there is overlapping or redundant functionality, and increasing the use of common application services so that shared application components can be run once rather than duplicated multiple times. For large organizations, this will produce much bigger payoffs than any subsequent hardware virtualization.
Rationalize your applications: Have a complete Enterprise Architecture, encompassing an Application Architecture that identifies the functional footprints and overlaps of the application portfolio, so that a plan can be established for rationalizing unnecessary or duplicated functions. This may be accomplished by simply identifying and decommissioning unnecessarily duplicated functionality, or by factoring out common components into shared services. As well as solving data integrity and process consistency issues, this will generally mean there are fewer and smaller applications overall, and therefore fewer resources required to run them, lowering the energy/emissions footprint while at the same time reducing operational costs.
Consolidation of applications: Single-application servers are not efficient; servers should ideally be used as shared resources so that they operate at higher utilization levels. It is important to ensure that the available CPU can be successfully divided between the applications on the server, with sufficient capacity for growth. Performance testing should be executed to ensure that each application will not stop the others on the server from running efficiently while under load.
- Level-2: The basic premise here is that individual server deployments do not need to consume the resources of dedicated hardware, and these resources can therefore be shared across multiple logical servers. The difference from Level 1 is that the hardware and software infrastructure upon which application servers run is itself shared (virtualized). For server infrastructure, this is accomplished with platforms such as Microsoft Virtual Server and VMware, among others, where a single physical server can run many virtual servers. For storage, this level is accomplished with Storage Area Network (SAN) related technologies, where physical storage devices can be aggregated and partitioned into logical storage that appears to servers as dedicated storage but can be managed much more efficiently.
- Level-3: Level 3 (“Cloud Virtualization”) in the virtualization maturity model extends Level 2 by virtualizing not just resources but also the location and ownership of the infrastructure through the use of cloud computing. This means the virtual infrastructure is no longer tied to a physical location, and can potentially be moved or reconfigured to any location, either within or outside the consumer’s network or administrative domain. The implication of cloud computing is that data centre capabilities can be aggregated at a scale not possible for a single organization, and located at sites more advantageous (from an energy point of view, for example) than may be available to a single organization. Servers and storage virtualized to this level are generally referred to as Cloud Platform and Cloud Storage, with examples being Google App Engine, Amazon Elastic Compute Cloud, and Microsoft’s Windows Azure. Fundamentally, cloud computing involves three different models:
- Software as a Service (SaaS) [7] refers to a browser or client application accessing an application that runs on servers hosted somewhere on the Internet.
- Attached Services refers to an application that is running locally accessing services to do part of its processing from a server hosted somewhere on the Internet.
- Cloud Platforms allow an application that is created by an organization’s developers to be hosted on a shared runtime platform somewhere on the Internet.
Table 1: Differentiation between Different Virtualization Schemes
Virtualization Maturity | Name | Applications | Infrastructure | Location | Ownership | Server | Storage | Network
Level 0 | Local | Dedicated | Fixed | Distributed | Internal | Standalone PC | Local Disks | None
Level 1 | Logical | Shared | Fixed | Centralized | Internal | Client/Server, N-tier | File Server / DB Server | LAN, Shared Services
Level 2 | Data Center | Shared | Virtual | Centralized | Internal | Server Virtualization | SAN | WAN/VPN
Level 3 | Cloud | Software as a Service | Virtual | Virtual | Virtual | Cloud Platform | Cloud Storage | Internet
B. Scaling Unit-Based Architecture and Design
A scale unit is a defined block of IT resources that represents a unit of growth. It includes everything needed to grow another step: computing capacity, network capacity, power distribution units, floor space, cooling, and so on. A scale unit must support the initial resource needs while accommodating the expected delta in key growth factors, be they transactions, number of users, ports, or anything else. The size of the scale unit is chosen to optimize the trade-off between the resources needed immediately and those that can grow over time (a small sizing sketch is given after the design steps below).
The design process is actually quite simple and straightforward:
- Design the deployment of the solution based on available capacity planning information.
- Identify growth drivers (storage, speed, etc.) for the solution. Ideally, the software vendor or development team will provide them. If they don’t, you can dissect the capacity planning guidance and see what changes in the solution architecture are triggered by scale (typically user traffic, request processing, “static” elements such as Web sites, content within Web sites, and so forth).
- Identify and design appropriate scale-units for each identified growth driver. If this is your first scale-unit deployment, you might want to start with only one or two growth drivers.
- “Partition” the design based on the scale-units that you defined.
- Verify that the initial deployment and subsequent scale-units add up to fully functional deployments.
- Deploy the solution with only the initial scale-unit.
Scale-units enable us to group and align well-factored solutions with minimized deployments without compromising the overall deployment architecture. Scale-units combined with instrumentation will enable control software like System Centre to manage a portfolio of applications based on the established optimization goal and approach.
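To make the scale-unit sizing concrete, the sketch below walks through a hypothetical deployment driven by a single growth driver (user count). The per-unit capacity and growth figures are invented for the example; in practice they come from capacity planning data or the vendor’s guidance.

```python
# Illustrative sketch of scale-unit sizing. All capacities and growth figures
# are assumptions for the example, not data from any real deployment.
import math

USERS_PER_SCALE_UNIT = 5_000  # assumed capacity added by one scale unit
INITIAL_USERS = 8_000         # assumed load at initial deployment
MONTHLY_GROWTH = 1_500        # assumed growth driver: new users per month

def scale_units_needed(users: int) -> int:
    """Number of scale units required to serve a given user count."""
    return math.ceil(users / USERS_PER_SCALE_UNIT)

deployed = scale_units_needed(INITIAL_USERS)
print(f"Initial deployment: {deployed} scale unit(s)")

users = INITIAL_USERS
for month in range(1, 13):
    users += MONTHLY_GROWTH
    required = scale_units_needed(users)
    if required > deployed:
        # Add capacity only when the growth driver crosses a unit boundary.
        print(f"Month {month}: add {required - deployed} scale unit(s) ({users} users)")
        deployed = required
```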
C. Profiling the Energy Usage
- Energy Usage Profile for the Hardware: The server’s energy consumption is relatively easy to measure, and several low-cost devices are available to monitor it. The important thing is to build an overall picture of how much energy the server uses at idle and under stress. The energy usage of individual components does not need to be exact, but it is important because it provides a ranking of the energy consumers within the server. CPU usage is the most expensive resource in terms of actual cost and environmental impact, while memory and hard disk usage have minimal cost. The gist is that if we are going to optimize our infrastructure and applications to minimize energy usage, the CPU should be the primary target.
- Energy Usage Profile for the Application: Tools can be used to determine precise CPU usage for specific components of the application. These tracking statistics can then be used to attack the high cost portions of the application. In other words, find the most expensive operations and optimize them to reduce CPU usage. The ultimate goal is to lower resource usage to the point where multiple applications can be hosted on the same set of servers. Server sharing can reduce the energy footprint of all the applications involved.
- Energy Usage Profile for the Operating System: Looking at the data collected for physical servers when idle and at peak loads, we can see that a significant amount of energy is wasted on the system idle process. This wasted energy can be reclaimed through operating system virtualization. Virtualization allows the host machine to run at approximately 80 percent of peak processor utilization with fractionally increased power requirements. The first step in tuning the virtual guest is to disable or deactivate unneeded services. Depending on the type of guest operating system and its purpose, there may be a number of unnecessary services running by default. Eliminate screen savers and evaluate your event logging requirements for each guest operating system. This will avoid wasted processor cycles and disk activity. Minimizing disk activity is essential to both performance and energy efficiency. Look closely at the network utilization of your applications with a network monitoring tool to locate and eliminate chatty protocols and unnecessary network communication. Wireshark is an effective and freely available tool for network analysis.
- Extra Energy Usage Profile: Printing: Are users likely to print information? If so, how much? Are there ways to optimize screen layout to minimize the likelihood of printing? Information retrieval: Are statements, invoices, or other materials mailed to users? Is there a way to present that information within the system? Updates: Do users need to leave their systems on to receive updates? If so, is there a way for the application to update itself while being used? Emailing: Purging bad email addresses eliminates energy wasted attempting to send messages.
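The sketch below illustrates the kind of hardware energy profile described above: idle and peak figures per component, ranked to show which consumer dominates. The wattages are illustrative assumptions, not measurements from a particular machine.

```python
# Minimal sketch of building an energy usage profile for a server and ranking
# its energy consumers. The per-component wattages below are assumptions.
idle_watts = {"CPU": 25, "Memory": 8, "Disks": 10, "Other": 40}
peak_watts = {"CPU": 120, "Memory": 12, "Disks": 15, "Other": 45}

profile = {
    part: {"idle": idle_watts[part],
           "peak": peak_watts[part],
           "delta": peak_watts[part] - idle_watts[part]}
    for part in idle_watts
}

# Rank components by how much extra power load places on them; with these
# numbers the CPU dominates, which is why it is the primary optimization target.
for part, usage in sorted(profile.items(), key=lambda kv: kv[1]["delta"], reverse=True):
    print(f"{part:7s} idle={usage['idle']:3d} W  peak={usage['peak']:3d} W  delta={usage['delta']:3d} W")
```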
D. Monitoring Systems: Wireless Sensor Networks
Given the data centers’ complex airflow and thermodynamics, dense and real-time environmental monitoring systems are necessary to improve their energy efficiency. The data these systems collect can help data centre operators troubleshoot thermo-alarms, make intelligent decisions on rack layout and server deployments, and innovate on facility management. The data can be particularly useful as data centers start to require more sophisticated cooling control to accommodate environmental and workload changes.
Wireless sensor network (WSN) technology is an ideal candidate for this monitoring task as it is low-cost, nonintrusive, can provide wide coverage, and can be easily repurposed. Wireless sensors require no additional network and facility infrastructure in an already complicated data centre IT environment. Compared to sensors on motherboards, external sensors are less sensitive to CPU or disk activities, thus the collected data is less noisy and is easier to understand.
Collecting cooling data is a first step toward understanding the energy usage patterns of data centres. To reduce the total data centre energy consumption without sacrificing user performance or device life, we need an understanding of key operation and performance parameters — power consumption, device utilizations, network traffic, application behaviours, and so forth — that is both holistic and fine-grained.
Some data centres use Computational Fluid Dynamics (CFD) simulations to estimate heat distribution and guide their cooling management strategies. Such CFD simulations are useful, particularly during a data centre’s design phase. They provide guidelines about room size, ceiling height, and equipment density. However, there are limitations to their usefulness: Accurate thermal models for computing devices are difficult to obtain; as soon as the rack layout or server types change, the current CFD model becomes obsolete; and updating CFD models is a time-consuming and expensive process.
Data collected from various sources can be used to build models that correlate the physical and performance parameters. Deriving and applying these models relies on building algorithms and tools for analysis, classification, prediction, optimization, and scheduling. The data and tools can be used by data center operators, facility managers, and decision makers to perform various tasks, such as:
- Real-time monitoring and control: Examples include resolving thermal alarms, discovering and mitigating hot spots, and adaptive cooling control.
- Change management: Given a small number of servers to be deployed, a data centre operator can make informed decisions about their placement depending on the available space, extra power, and sufficient cooling.
- Capacity planning: Given a load growth model and an understanding of resource dependencies, one can analyse the capacity utilization over various dimensions to decide whether to install more servers into existing data centres, to upgrade server hardware, or to build new data centres to meet future business need.
- Dynamic server provisioning and load distribution: Server load can vary significantly over time. Controlling air cooling precisely to meet dynamic critical power variations is difficult, but the inverse strategy of distributing load according to cooling efficiency is promising.
- Fault diagnostics and fault tolerance: Many hardware faults in data centres are caused by either long term stressing or abrupt changes in operating conditions. On the other hand, modern software architecture can tolerate significant hardware failures without sacrificing software reliability or user experience. One should consider the total cost of ownership including both acquiring hardware and maintaining their operating conditions.
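As a small illustration of the real-time monitoring and control task listed above, the sketch below flags hot spots from a set of wireless sensor readings. Sensor names, the alarm threshold, and the readings themselves are invented for the example.

```python
# Hedged sketch of a simple hot-spot check over wireless sensor readings.
# Threshold and readings are illustrative assumptions only.
ALARM_THRESHOLD_C = 32.0  # assumed inlet temperature limit for a rack

readings = {
    "rack-01/top": 27.5,
    "rack-01/bottom": 24.1,
    "rack-07/top": 33.2,
    "rack-07/bottom": 30.8,
}

hot_spots = {sensor: t for sensor, t in readings.items() if t >= ALARM_THRESHOLD_C}

if hot_spots:
    for sensor, temp in sorted(hot_spots.items(), key=lambda kv: kv[1], reverse=True):
        print(f"ALERT {sensor}: {temp:.1f} C exceeds {ALARM_THRESHOLD_C:.1f} C")
else:
    print("No hot spots detected")
```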
E. E-Waste Minimization
- Replacing petroleum-based plastics with bioplastics (plant-based polymers), which require less oil and energy to produce than traditional plastics, while developing solutions to the challenge of keeping bioplastic computers cool enough that the electronics do not melt them.
- Reducing landfill by making the best use of each device through timely upgrades and repairs, along with making such processes (i.e., upgrading and repairing) easier and cheaper.
- Avoiding premature disposal, which not only keeps e-waste out of dumps but also saves the energy and materials needed for a whole new computer.
- Replacing power-hungry displays with greener displays made of OLEDs (organic light-emitting diodes).
- Replacing toxic materials such as lead with silver and copper, and making recycling of computers (which is expensive and time-consuming at present) more effective by recycling computer parts separately, with the option of reuse or resale.
F. Developing Thin Client Devices
This solution exploits the technology behind a server-client system. The client devices would be extremely thin, with no local storage and minimal local computing. A large central server would communicate with many such thin client devices; all storage and computing tasks would be carried out on the server itself. The thin clients would possess only the technology required to communicate with the central server, sending information to it and retrieving information from it. This would drastically reduce energy usage, which would largely be limited to the energy consumed by the central server. The solution can be very effective in office networks: where today a large room might contain, for example, 100 cabins each with a desktop PC, those 100 PCs could be replaced by 100 thin client devices and one central server interacting with them. In addition to saving companies ongoing power consumption costs, thin client devices have a number of additional energy-saving benefits compared to traditional PCs. Because they have no moving parts, such as disc drives or fans, and emit very little heat, organizations also save on cooling costs (actual savings vary by facility). Producing thin clients also requires significantly less energy and fewer resources, as they contain fewer parts, are cheaper to transport, and have a longer lifecycle than a traditional desktop PC, greatly reducing computer disposal costs.
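A rough sketch of the office scenario above is given below: 100 desktop PCs versus 100 thin clients plus one central server, over normal business hours. The thin client and server wattages are assumptions chosen purely for illustration; only the 200 W desktop figure and the $0.075/kWh tariff come from earlier in the paper.

```python
# Back-of-the-envelope comparison of desktops versus thin clients for one office.
# THIN_CLIENT_WATTS and SERVER_WATTS are illustrative assumptions.
DESKTOP_WATTS = 200
THIN_CLIENT_WATTS = 15
SERVER_WATTS = 800
SEATS = 100
HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 52
PRICE_PER_KWH = 0.075  # same tariff used earlier in the paper

hours = HOURS_PER_WEEK * WEEKS_PER_YEAR
desktops_kwh = SEATS * DESKTOP_WATTS * hours / 1000
thin_kwh = (SEATS * THIN_CLIENT_WATTS + SERVER_WATTS) * hours / 1000

print(f"Desktops:     {desktops_kwh:,.0f} kWh/year (~${desktops_kwh * PRICE_PER_KWH:,.0f})")
print(f"Thin clients: {thin_kwh:,.0f} kWh/year (~${thin_kwh * PRICE_PER_KWH:,.0f})")
```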
IV. The Path Ahead
Many new electronics sold in the United States already meet the European Restriction of Hazardous Substances Directive (RoHS), a standard banning the general use of six hazardous substances including lead and mercury, and many manufacturers are committed to further reducing their use of toxic substances [1]. The European Union’s directive on waste electrical and electronic equipment (WEEE) required the substitution of heavy metals and flame retardants such as PBBs and PBDEs in all electronic equipment put on the market, and placed responsibility on manufacturers for the collection and recycling of old equipment (the Producer Responsibility model). The Green Electronics Council offers the Electronic Products Environmental Assessment Tool (EPEAT) to assist in the purchase of “green” computing systems; the Council evaluates computing equipment on 28 criteria that measure a product’s efficiency and sustainability attributes. President George W. Bush issued an Executive Order requiring all United States Federal agencies to use EPEAT when purchasing computer systems.

Efforts made by the Green Grid to improve the energy efficiency of advanced data centres and business computing ecosystems are also noteworthy. Sun created a Sun Eco office to oversee all of the company’s green programs, including telecommuting as well as core products such as low-power servers. Dell in February launched “Plant a Tree for Me,” where consumers pay an extra $2 for a laptop or $6 for a desktop to plant trees aimed at offsetting the equivalent computer emissions, and it launched www.dell.com/earth to tout its green policies [6]. Wipro Limited, a leading player in global IT and R&D services, is committed to environmental sustainability by minimizing the use of hazardous substances and chemicals that have a potential impact on the ecology. It has joined hands with WWF India, one of the largest conservation organizations in the country, to directly address issues of climate change, water and waste management, and biodiversity conservation [4].
V. Conclusion
There is a compelling need for applications to take environmental factors into account in their design, driven by the need to align with organizational environmental policies, to reduce power and infrastructure costs, and to reduce current or future carbon costs. The potential reduction in energy and emissions footprint through good architectural design is significant. The move to more environmentally sustainable applications impacts both software and infrastructure architecture; the link between the two is strong, driving a need for joint management of this area of concern by infrastructure and software architects within organizations. These issues should be considered at the outset and throughout a project, not left to the end.
References
[1] Wikipedia, “Green Computing”, http://en.wikipedia.org/wiki/Green_computing
[2] “What is Green Computing?”, http://www.sncllc.com/docs/whitepaper/greencomputing.pdf
[3] Sanghita Roy and Manigrib Bag, “Green Computing – New Horizon of Energy Efficiency and E-Waste Minimization – World Perspective vis-à-vis Indian Scenario”
[4] Wipro schemes for Green Computing, http://www.wipro.co.in/products/greenpc/html/0007clip.htm
[5] Rich Kaestner, “Green Computing – a CoSN Leadership Initiative”, www.cybernetic-technologies.com/GreenComputingPresentation.pdf
[6] Dell schemes for Green Computing, http://www.dell.com/content/topics/global.aspx/about_dell/values/enviro
[7] Mydhili K. Nair and V. Gopalakrishna, “Generic Web Services: A Step Towards Green Computing”, International Journal on Computer Science and Engineering, Vol. 1(3), 2009, pp. 248-253
[8] Redemtech, “Sustainable Computing Assessment”, management.com/news/green_information_technology-10015345-1.html