Mainstream computing is a collaborative methodology that leverages the rarity of unanticipated system state in order to protect users. At a high level, our approach allows a user to say, "ensure that my program's behavior conforms with at least 99.9% (or some other user-defined percentage) of the usage patterns for this program." Put another way, we ask users to specify a tolerance for failure, p_fail, which bounds the rate at which the system will flag anomalies (which can be due to system liabilities, or can simply be benign false positives on legitimate executions). Statistically, then, the more mainstream, or "normal," a user's usage is, the less likely it is for the user to encounter an anomaly for a given setting of p_fail.

Mainstream computing tracks program-level runtime statistics for an application across a community of users. Similar to other invariant tracking systems, mainstream computing constantly profiles applications in an effort to determine likely invariants for a program's operands and control flow. Unlike prior art, our system provides statistical bounds on false positive rates, and we ask the user to set the bounds appropriately. This approach is analogous to the "privacy" slider bar present in some web browsers that allows users to easily trade functionality of the browser for potential loss of privacy.

It is the mainstream computing server's responsibility to generate, with statistical guarantees, the set of constraints that satisfy a user's requests. As with prior art on collaborative infrastructures, the server collects data from multiple clients, creating a large corpus of data from which it can create constraints. Unlike previous work, however, we show that mainstream computing can create valuable models by consulting only a small portion of the corpus. We argue that this property of mainstream computing is crucial because it limits the influence rogue users may have on constraint creation.
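The server-side constraint generation described above can be viewed as a selection problem: from a corpus of profiled executions, keep only those candidate invariants whose combined empirical violation rate stays under the user's p_fail. The sketch below is illustrative only; `select_constraints` and the predicate-based encoding of invariants are hypothetical names of ours, not the paper's implementation.

```python
def select_constraints(executions, candidates, p_fail):
    """Greedily pick invariants whose combined empirical failure rate
    over the profiled corpus stays within the user's p_fail budget.

    executions: list of execution profiles (here, dicts of observed values)
    candidates: dict mapping invariant name -> predicate over one profile
    """
    # For each candidate invariant, record which executions violate it.
    violations = {name: {i for i, e in enumerate(executions) if not pred(e)}
                  for name, pred in candidates.items()}
    # Consider the rarely violated invariants first.
    order = sorted(candidates, key=lambda n: len(violations[n]))
    chosen, flagged = [], set()
    budget = p_fail * len(executions)  # max executions allowed to be flagged
    for name in order:
        if len(flagged | violations[name]) <= budget:
            chosen.append(name)
            flagged |= violations[name]
    return chosen
```

A lower p_fail simply shrinks the budget, so fewer (only the most widely satisfied) invariants survive selection; sampling a small portion of the corpus, as the text suggests, amounts to running the same selection over a random subset of `executions`.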
We introduce mainstream computing, a collaborative system that dynamically checks a program (via runtime assertion checks) to ensure that it is running according to expectation. Rather than enforcing strict, statically defined assertions, our system allows users to run with a set of assertions that are statistically guaranteed to fail at a rate bounded by a user-defined probability, p_fail. For example, a user can request a set of assertions that will fail at most 0.5% of the times the application is invoked. Users who believe their usage of an application is mainstream can use relatively large settings for p_fail. Higher values of p_fail provide stricter regulation of the application, which likely enhances security but will also inhibit some legitimate program behaviors; in contrast, program behavior is unregulated when p_fail = 0, leaving the user vulnerable to attack. We show that our prototype is able to detect denial of service attacks, integer overflows, frees of uninitialized memory, boundary violations, and an injection attack. In addition, we perform experiments with a mainstream computing system designed to protect against soft errors.

Categories and Subject Descriptors D.2.4 [Software Engineering]: Program Verification

General Terms Reliability, Security

Eric Van Hensbergen∗
∗ Contributed to this paper while employed at IBM Austin Research Lab.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CGO'10, April 24–28, 2010, Toronto, Ontario, Canada.
Copyright © 2010 ACM 978-1-60558-635-9/10/04...$10.00

1. Introduction

A variety of issues threaten the stability of today's systems: code vulnerabilities, soft errors, insider threats, race conditions, hardware aging, etc. While there is no doubt that these threats are dangerous, we are fortunate that they rarely present themselves. The vast majority of the time, code running on modern systems executes in a manner consistent with user and programmer expectations. Current protection mechanisms tend to be designed for a specific vulnerability (e.g., buffer overruns or illegal control-flow transfers). In this paper we introduce mainstream computing, which, by simply detecting and enforcing likely program properties, naturally provides some level of protection against a wide variety of system liabilities.

The novel contributions of this paper are as follows:

• We introduce mainstream computing.
• We show that mainstream computing will likely generate untainted constraints, even when malicious users are part of the collaborative community.
• We show that mainstream computing systems can protect against buffer overruns, integer overflows, memory free bugs, denial of service attacks, and injection attacks.
• We show that mainstream computing systems can be used to recover from many soft errors.
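The bounded-failure assertion checking described above can be illustrated with a minimal client-side sketch, assuming per-site value ranges learned from community profiles. `AssertionSet` and its `run`/`failure_rate` interface are hypothetical names for illustration, not the actual prototype.

```python
class AssertionSet:
    """Flags an invocation as anomalous when any checked value falls
    outside its learned range, instead of letting the program misbehave."""

    def __init__(self, ranges):
        self.ranges = ranges       # check site -> (low, high), learned from peers
        self.invocations = 0
        self.flagged = 0

    def run(self, program, inputs):
        """Execute one invocation; program(inputs) reports (site, value) pairs."""
        self.invocations += 1
        for site, value in program(inputs):
            low, high = self.ranges[site]
            if not (low <= value <= high):
                self.flagged += 1  # anomaly: log, reject, sandbox, or repair here
                return False
        return True

    def failure_rate(self):
        """Observed fraction of flagged invocations; the server's guarantee
        is that this stays at or below the requested p_fail in expectation."""
        return self.flagged / max(self.invocations, 1)
```

For instance, with p_fail = 0.5%, at most about 1 in 200 invocations should trip an assertion on legitimate, mainstream usage.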
2. Mainstream Computing

At the conceptual level, mainstream computing attempts to automatically whitelist common behavior, and to log, reject, sandbox, or repair abnormal behavior. This section describes the many components of a mainstream computing system. We begin with a high-
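The log/reject/sandbox/repair responses mentioned above can be sketched as a simple policy dispatch on an out-of-range value. The policy names and the `handle` helper are assumptions of ours for illustration; true sandboxing would additionally require OS-level support, so it is left unimplemented here.

```python
import logging

def handle(value, low, high, policy="log"):
    """Respond to a checked value according to the configured policy."""
    if low <= value <= high:
        return value                        # mainstream behavior: pass through
    if policy == "log":
        logging.warning("anomaly: %r outside [%r, %r]", value, low, high)
        return value                        # record, but do not intervene
    if policy == "reject":
        raise ValueError("anomalous value rejected")
    if policy == "repair":
        return min(max(value, low), high)   # clamp back into the learned range
    raise NotImplementedError(policy)       # e.g., sandboxing needs OS support
```

The "repair" branch is what makes recovery from soft errors plausible: a bit flip that drives a value far out of its learned range is clamped back to a mainstream value rather than propagating.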