The energy efficiency of computing has long been a hot topic for both research and industry. Since 1992, the Energy Star logo has become familiar to the general public, and among the first results achieved by the program was the mass-scale adoption of energy-saving sleep modes in computer monitors and a plethora of other consumer electronic devices.
With the advent of cloud computing, the industry has faced an unprecedented sustainability challenge: colossal datacenters consuming energy with a gargantuan appetite, not only for computation itself but also for auxiliary purposes such as cooling. In 2009, Alex Wissner-Gross, a Harvard University physicist, estimated the carbon footprint of an average Google search at about 7 grams. Fortunately, thanks to progress in heat recycling, waste heat can be directed to warm residential houses or used for on-site electrical power generation.
The Age of the Edge
Meanwhile, new trends have been developing in the cloud paradigm, since for emerging technologies it has become crucial to have computational capacity near the end-users. For example, augmented reality applications need very fast response times or, technically speaking, low latency. The Internet of Things, on the other hand, generates amounts of data so large that sending all of it to the cloud would eventually overwhelm network bandwidth. This shift requires computing power to be located at the edges of the network, near the end-users, hence the name for this novel technology: edge computing.
Given the above considerations, the tendency might be seen as a dissolution of the cloud into a fog, which is in fact another term for edge computing. Consequently, the presence of in-network computing capacity opens new perspectives for sustainability: excess renewable energy can be utilized for performing computational tasks. Edge computing facilities, compared to large data centers, are rather small and geographically widespread, which improves the chances that some of them will be located in the proximity of renewable energy sources. The production of renewables is known to have substantial variance, and storing the excess energy has its technical and economic challenges. An attractive option would be to direct the excess power to nearby edge servers, which would then take up additional computational load. The remaining question is: how will tasks and services locate those facilities? Fortunately, we might have an answer: intelligent containers, which we envision to be essential building blocks for self-organizing service infrastructures.
The Intelligent Container
Intelligent containers are atomic units encapsulating services. However, contrary to their precursors, they are able to discover edge computing facilities autonomously, building their own map of the world. They can negotiate with the discovered facilities and pick only those satisfying particular energy requirements. With a microservice application deployed using intelligent containers, one can have it running solely on excess renewable energy, given that tradeoffs can be made in performance and availability. An additional benefit is that no administrative overhead is needed, since the containers automatically select appropriate places for deployment and migrate there.
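The selection step described above can be illustrated with a minimal sketch. The facility attributes, thresholds, and function names below are hypothetical, not part of the ICON design; the sketch merely shows the idea of a container filtering discovered facilities by an energy requirement and a latency bound, then preferring the greenest candidate:

```python
from dataclasses import dataclass

@dataclass
class EdgeFacility:
    # Hypothetical attributes a container might learn during discovery/negotiation
    name: str
    renewable_share: float  # fraction of power currently from renewables (0.0-1.0)
    latency_ms: float       # measured round-trip time to the container's users

def pick_green_facility(facilities, min_renewable=0.8, max_latency_ms=50.0):
    """Return the greenest facility meeting both requirements, or None."""
    candidates = [f for f in facilities
                  if f.renewable_share >= min_renewable
                  and f.latency_ms <= max_latency_ms]
    return max(candidates, key=lambda f: f.renewable_share, default=None)

# Example: three discovered facilities with different energy profiles
facilities = [
    EdgeFacility("urban-dc", renewable_share=0.3, latency_ms=10.0),
    EdgeFacility("wind-farm-edge", renewable_share=0.95, latency_ms=35.0),
    EdgeFacility("solar-edge", renewable_share=0.9, latency_ms=120.0),
]
best = pick_green_facility(facilities)
print(best.name)  # wind-farm-edge is the only facility meeting both thresholds
```

In a real deployment the tradeoff mentioned above would appear as the choice of thresholds: loosening the latency bound admits more green facilities at the cost of responsiveness.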
The technical concepts behind intelligent containers are explained in our paper "ICON: Intelligent Container Overlays". It is worth noting that they can optimize not only for green energy but also for high performance and low latency for end-users, depending on the current application needs.
Aleksandr Zavodovski is a PhD student at the University of Helsinki, Department of Computer Science (CS). Together with other CS researchers, Zavodovski contributes to the interdisciplinary BCDC Energy research project as part of the BCDC Cloud Team.
This blog post was published on 26 November 2018 on the BCDC Energy research project website.
Last updated: 27.11.2018