A variety of published and working papers covering smart networks for anycast services, statistical multiplexing effects in cloud computing, cloud/edge tradeoffs, axiomatic models of distributed computing networks, and related topics.
"Abstract—Three approaches to load balancing in a distributed computing network are evaluated both analytically and via Monte Carlo simulation: 1) random selection; 2) selection based solely on identifying the server with the lowest response time; 3) selection based on identification of the combination of path and server with the lowest total response time. Analytical and simulation results show that the lowest expected response times occur via joint optimization. The exact improvement depends on the underlying distributions of path response times and server response times. An exemplary case where each path and server is an independent, identically distributed random variable with a continuous uniform distribution on [0,1] is assessed; there the improvement may be approximated by (1/2)+(1/(n+1))-√(π/(2n)), where n is the number of alternative combinations of path and server. Generally, when P is a random variable representing path response times, the value of a smart network in reducing response time is μ(P)-min(P) as n→∞. Such improvements support a philosophy of “smart” networks vs. “dumb pipes.”..."
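The three strategies compared in this abstract can be sketched with a short Monte Carlo simulation; this is an illustrative reconstruction (not the paper's code) with i.i.d. Uniform(0,1) path and server times, so the joint optimum should track the √(π/(2n)) approximation:

```python
# Monte Carlo sketch of the three selection strategies, with n i.i.d.
# Uniform(0,1) path and server response times (illustrative only).
import math
import random

random.seed(42)

def trial(n):
    paths = [random.random() for _ in range(n)]
    servers = [random.random() for _ in range(n)]
    rand_pick = random.choice(paths) + random.choice(servers)  # 1) random selection
    best_server = random.choice(paths) + min(servers)          # 2) best server, random path
    joint = min(p + s for p, s in zip(paths, servers))         # 3) joint path+server optimum
    return rand_pick, best_server, joint

def estimate(n, trials=20000):
    sums = [0.0, 0.0, 0.0]
    for _ in range(trials):
        for i, v in enumerate(trial(n)):
            sums[i] += v
    return [s / trials for s in sums]

n = 16
rand, best, joint = estimate(n)
# improvement of joint optimization over server-only selection, per the abstract
predicted_gain = 0.5 + 1 / (n + 1) - math.sqrt(math.pi / (2 * n))
print(rand, best, joint, best - joint, predicted_gain)
```

With n = 16, random selection averages near 1.0, server-only selection near 0.5 + 1/(n+1), and the joint optimum near √(π/(2n)), so the simulated gap lands close to the closed-form approximation.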
"Today, cloud computing has become of great interest technically and as it relates to business strategy and competitiveness. However, while there are numerous informal (verbal) definitions of cloud computing, rigorous axiomatic, formal (mathematical) models of clouds appear to be in short supply: this paper is the first use of the term “Axiomatic Cloud Theory.”
We define a cloud as a structure (S,T,G,Q,δ,q0) satisfying five formal axioms: it must be 1) Common, 2) Location-independent, 3) Online, 4) Utility, and 5) on-Demand. S is space, T is time; G=(V,E) is a directed graph; Q is a set of states, where each state combines assignments of resource capacity and demand, resource allocations, node location, and pricing; q0 is an initial state; and δ is a transition function that determines state trajectories over time: mapping resources, allocations, locations, and pricing to a next state of resources, allocations, locations, and pricing. This captures the interrelationships in a real cloud: capacity relative to demand can drive pricing, pricing and resource location drive allocation, allocation patterns can drive new resource levels..."
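A toy encoding of the (S,T,G,Q,δ,q0) structure as a state machine might look as follows; the field names and the particular transition rule here are illustrative assumptions, not the paper's formal definitions:

```python
# Illustrative state-machine encoding of (S, T, G, Q, delta, q0); the
# numeric transition rule is an assumption chosen to show the feedback
# loop (scarcity -> price -> demand -> allocation), not the paper's model.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:  # one q in Q: capacity/demand, allocation, and pricing
    capacity: float
    demand: float
    allocation: float
    price: float

def delta(q: State) -> State:
    """Toy transition: scarcity raises price, higher price tempers demand,
    allocation tracks the lesser of capacity and demand."""
    utilization = min(q.demand / q.capacity, 1.0)
    new_price = q.price * (1.0 + 0.5 * (utilization - 0.5))
    new_demand = q.demand * (q.price / new_price)
    return State(q.capacity, new_demand, min(q.capacity, new_demand), new_price)

q0 = State(capacity=100.0, demand=120.0, allocation=100.0, price=1.0)
trajectory = [q0]
for _ in range(5):                    # iterate delta to trace a state trajectory
    trajectory.append(delta(trajectory[-1]))
print(trajectory[-1])
```

Starting over-subscribed (demand 120 against capacity 100), repeated application of δ raises price and draws demand back under capacity, which is the kind of interrelationship the formal model is meant to capture.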
"We propose a Law of Cloud Response Time that combines network latency and parallel processing speed-up in a distributed, elastic, cloud computing environment. As the first supercomputing and parallel processing systems came into existence in the 1960s, Gene Amdahl proposed Amdahl's Law: the maximum possible speedup due to parallelization is 1/S, where S is the sequential percentage of the application. Thus, at least for somewhat parallelizable applications, more processors mean less elapsed time, but there is a limit to the gains as no acceleration can occur in the serial portion of the application.
However, today's geographically dispersed cloud environments comprising networked nodes of elastic resources are very different from the local, monolithic, centralized environments of a half century ago, so we propose a new law for interactive transactions over a network with parallelization..."
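For context, Amdahl's bound and one plausible way to fold network latency into response time can be sketched as follows; the combined form is an assumption for illustration, not the proposed law itself:

```python
# Amdahl's Law, plus an illustrative response-time model that adds a network
# latency term to the serial and parallelized compute time (the combined
# form is a sketch, not the paper's exact law).
def amdahl_speedup(s, n):
    """Maximum speedup with serial fraction s on n processors; -> 1/s as n grows."""
    return 1.0 / (s + (1.0 - s) / n)

def response_time(latency, compute, s, n):
    """Network latency plus serial and parallelized compute (assumed model)."""
    return latency + compute * (s + (1.0 - s) / n)

s = 0.1                                # 10% of the application is sequential
print(amdahl_speedup(s, 10**6))        # approaches the 1/s = 10x ceiling
print(response_time(0.05, 1.0, s, 64)) # latency floor persists at any n
```

Even with a million processors the speedup saturates near 1/s, and in the latency-augmented form no amount of parallelism removes the network term, which is why a law for networked, elastic environments must treat latency and parallelism jointly.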
In industries such as cloud computing, lodging, and car rental services, demand from multiple customers is aggregated and served out of a common pool of resources managed by an operator. This approach can drive economies of scale and learning curve effects, but such benefits are offset by providers' needs to recover SG&A and achieve a return on invested capital. Does aggregation create value or are customers' costs just swept under a provider's rug and then charged back?
Under many circumstances, service providers—which one might call "smooth" operators—can take advantage of statistical effects that reduce variability in aggregate demand, creating true value vs. fixed, partitioned resources serving that demand.
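The smoothing effect can be illustrated numerically: aggregating n independent demand streams cuts the coefficient of variation of total demand by roughly 1/√n (the toy demand distribution below is an assumption):

```python
# Sketch of statistical multiplexing: aggregating n independent demands
# reduces relative variability by about 1/sqrt(n), so less peak headroom
# is needed per unit of mean demand (illustrative distribution and sizes).
import math
import random

random.seed(7)

def coefficient_of_variation(samples):
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return math.sqrt(var) / mean

def aggregate_demand(n_customers, periods=5000):
    # each customer's demand per period: Uniform(0, 2), mean 1 (assumed)
    return [sum(random.uniform(0, 2) for _ in range(n_customers))
            for _ in range(periods)]

cv1 = coefficient_of_variation(aggregate_demand(1))
cv100 = coefficient_of_variation(aggregate_demand(100))
print(cv1, cv100)   # cv100 sits near cv1 / sqrt(100)
```

A single customer's demand has a coefficient of variation near 0.58 here; pooled across 100 independent customers it drops roughly tenfold, which is the "smooth operator" effect of serving aggregate rather than partitioned demand.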
"We show that an abstract formulation of resource assignment in a distributed cloud computing environment, which we term the CLOUD COMPUTING demand satisfiability problem, is NP-complete, using transformations from the PARTITION problem and 3-SATISFIABILITY, two of the “core” NP-complete problems. Specifically, let there be a set of customers, each with a given level of demand for resources, and a set of servers, each with a given level of capacity, where each customer may be served by two or more of the servers. The general problem of determining whether there is an assignment of customers to servers such that each customer’s demand may be satisfied by available resources is NP-complete..."
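A brute-force checker for small instances makes the decision question concrete; its exponential enumeration over assignments is consistent with the problem's NP-completeness (the encoding below is an assumed illustration, not the paper's reduction):

```python
# Brute-force checker for the demand-satisfiability question: is there an
# assignment of each customer to one of its eligible servers such that no
# server's capacity is exceeded? Exponential in the number of customers.
from itertools import product

def satisfiable(demands, capacities, eligible):
    """demands[i]: customer i's demand; capacities[j]: server j's capacity;
    eligible[i]: indices of servers allowed to serve customer i."""
    for assignment in product(*eligible):        # one eligible server per customer
        load = [0.0] * len(capacities)
        for customer, server in enumerate(assignment):
            load[server] += demands[customer]
        if all(load[j] <= capacities[j] for j in range(len(capacities))):
            return True
    return False

feasible = satisfiable([3, 3, 4], [6, 4], [(0, 1), (0, 1), (0, 1)])
infeasible = satisfiable([5, 5], [6, 4], [(0, 1), (0, 1)])
print(feasible, infeasible)   # True False
```

The first instance succeeds (demands 3 and 3 fit the capacity-6 server, the 4 fits the capacity-4 server), while the second fails under every assignment, and the PARTITION-like flavor of packing demands into capacities is exactly what drives the hardness result.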
"Cloud computing and related services offer resources and services “on demand.” Examples include access to “video on demand” via IPTV or over-the-top streaming; servers and storage allocated on demand in “infrastructure as a service;” or “software as a service” such as customer relationship management or sales force automation. Services delivered “on demand” certainly sound better than ones provided “after an interminable wait,” but how can we quantify the value of on-demand, and the scenarios in which it creates compelling value?
We show that the benefits of on-demand provisioning depend on the interplay of demand with forecasting, monitoring, and resource provisioning and de-provisioning processes and intervals, as well as likely asymmetries between excess capacity and unserved demand..."
"Using agent-based simulation and analysis of an idealized model of a duopoly with one flat-rate and one usage-based provider, we demonstrate that flat-rate plans are unsustainable in a perfectly competitive market with independent, decentralized decision-making by active, self-selecting, rational utility maximizers engaged in a stochastic, multi-step decision process driving iterative price adjustment. In distinction to Nobel Laureate Dr. George Akerlof’s quality uncertainty in “The Market for ‘Lemons’”, where the seller is advantaged by asymmetric information regarding the quality of the product or service being sold, in what we’ll call “The Market for ‘Melons’” it is the buyer that may be advantaged by asymmetric information regarding the ex-ante quantity of planned consumption. Moreover, adverse selection and moral hazard have less to do with quality uncertainty, information asymmetry, or morality, than with rational choice by consumers with dispersed consumption under flat-rate pricing..."
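The self-selection dynamic can be caricatured in a few lines: consumers with dispersed usage choose the cheaper of a flat rate and a per-unit price, and the flat-rate provider re-prices to break even on whoever remains; all parameters here are illustrative, not the paper's model:

```python
# Caricature of adverse selection under flat-rate pricing: light users defect
# to usage-based pricing, the flat rate rises to cover the heavier remainder,
# and the spiral repeats. Illustrative only, not the paper's agent model.
def flat_rate_spiral(usages, p=1.0, rounds=10):
    """Iteratively re-price a flat rate F to break even on its subscribers."""
    F = p * sum(usages) / len(usages)                    # break-even for everyone
    for _ in range(rounds):
        subscribers = [u for u in usages if p * u >= F]  # only heavy users stay
        if not subscribers:
            return None                                  # plan has emptied out
        F = p * sum(subscribers) / len(subscribers)      # re-price to new average
    return F

usages = list(range(1, 11))        # dispersed usage levels 1..10
print(flat_rate_spiral(usages))    # spiral ends with only the heaviest user
```

Each re-pricing round sheds the lighter half of the remaining subscribers, so the flat rate climbs until it equals the heaviest user's usage cost, which is the unsustainability result in miniature.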
"Cloud computing represents a new model and underlying technology for IT. However, the value of cloud computing may be abstracted as the value of any on-demand utility: rental cars, taxi cabs, hotel rooms, or the like.
In a companion paper, the value of “on-demand” resource provisioning is quantified.
Here, the value of “utility”—i.e., pay-per-use with a linear tariff—is quantified, and three major conclusions are presented based on the nature of the offered demand and the relative cost—or “utility premium”—of the utility vs. fixed resources on a unit cost basis..."
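The tradeoff can be illustrated with a toy cost comparison between dedicated capacity sized to peak demand and pay-per-use at a marked-up unit rate; the numbers and the 2x premium below are illustrative assumptions, not the paper's results:

```python
# Sketch of the "utility premium" comparison: dedicated capacity must be
# sized to peak demand, while pay-per-use bills actual usage at a premium.
def fixed_cost(demand, unit_cost=1.0):
    """Own capacity sized to peak demand for the whole period."""
    return max(demand) * unit_cost * len(demand)

def utility_cost(demand, unit_cost=1.0, premium=2.0):
    """Pay per unit actually used, at a marked-up ("utility premium") rate."""
    return sum(demand) * unit_cost * premium

spiky = [10, 10, 10, 10, 100]      # peak-to-average ratio ~3.6
flat = [100] * 5                   # peak-to-average ratio = 1
print(fixed_cost(spiky), utility_cost(spiky))   # 500 vs 280: utility wins
print(fixed_cost(flat), utility_cost(flat))     # 500 vs 1000: fixed wins
```

Spiky demand favors the utility even at a 2x premium, while flat demand makes the premium pure overhead; how the break-even depends on the demand profile is exactly what the paper quantifies.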
Additional Articles
| ARTICLE | PUBLICATION |
|---|---|
| "Network Implications of Cloud Computing" | United Nations International Telecommunication Union World Technical Symposium |
| "Cloudonomics: A Rigorous Approach to Cloud Benefit Quantification" | The Journal of Software Technology, October 2011, Vol. 14, No. 4, pp. 10-18 |
| "The Future of Cloud Computing" | IEEE Technology Time Machine Symposium on Technologies Beyond 2020 (TTM), June 2011, ISBN 1-4577-0415-3 |
| "Easy Computer-Based Simulation to Solve Risk, Reliability, and Recoverability Problems" | Proceedings of the 2000 Contingency Planning and Management Conference, Witter Publishing Corporation, 2000 |
| "Dependability in the Cloud: Challenges and Opportunities," by Kaustubh Joshi, Guy Bunker, Farnam Jahanian, Aad van Moorsel, and Joe Weinman | IEEE/IFIP International Conference on Dependable Systems and Networks, 2009, DOI 10.1109/DSN.2009.5270350 |
| "Analysis, Modeling, Simulation, and Metrics Tools for Business Process Reengineering" | Enterprise BPR Management Forum, Federal Open Systems Exhibition Conference Workbook, 1995 |
| "Will Reengineering Replace TQM?" by M. H. Fallah and J. Weinman | Proceedings, IEEE International Conference on Engineering Management, Singapore, 1995 |
| "Future Prospects for Reengineering," by E. Fuchs and J. Weinman | Second European Organization for Quality Forum on TQM Development: Process Reengineering as Part of the TQM Strategy, Munich, 1995 |
| "Enabling Technologies for World-class Business Operations," by J. Hsu, R. Kent, and J. Weinman | AT&T Technical Journal, January/February 1994 |
| "A Distributed Applications Architecture for AT&T Manufacturing," by G. W. Arnold and J. Weinman | Computer Communication Technologies for the 90's, Proceedings of the Ninth International Conference on Computer Communication, J. Raviv, ed., Elsevier Science Publishers, ISBN 0-444-70539-2, 1988 |
| "Nestable bracketed comments" | ACM SIGPLAN (Special Interest Group on Programming Languages) Notices, Vol. 18, No. 10, October 1 |