Recent Publications of Scott Smolka






Recent Journal Publications

  1. D. Phan, J. Yang, R. Grosu, S. A. Smolka, and S. D. Stoller, Collision Avoidance for Mobile Robots with Limited Sensing and Limited Information about Moving Obstacles. Formal Methods in System Design, Springer (Jan. 2017).

    This paper addresses the problem of safely navigating a mobile robot with limited sensing capability and limited information about moving obstacles. We consider two sensing limitations: blind spots between sensors and limited sensing range. We identify a set of constraints on the sensors' readings whose satisfaction at time t guarantees collision-freedom during the time interval [t, t + Δt]. Here, Δt is a parameter whose value is bounded by a function of the maximum velocity of the robot and the range of the sensors. The constraints are obtained under assumptions about minimum internal angle and minimum edge length of polyhedral obstacles. We apply these constraints in the switching logic of the Simplex architecture to obtain a controller that ensures collision-freedom. Experiments we have conducted are consistent with these claims. To the best of our knowledge, our study is the first to provide runtime assurance that an autonomous mobile robot with limited sensing can navigate without collisions with only limited information about obstacles.
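
    The switching scheme can be paraphrased in a few lines of code. The sketch below is purely illustrative: constraints_satisfied stands in for the paper's sensor-reading constraints, and the controller names are hypothetical placeholders.

        # Illustrative Simplex-style switching loop (not the paper's exact logic).
        # constraints_satisfied() abstracts the sensor-reading constraints whose
        # satisfaction at time t guarantees collision-freedom over [t, t + dt].
        def control_step(sensor_readings, state, advanced, baseline,
                         constraints_satisfied):
            if constraints_satisfied(sensor_readings):
                return advanced(state)   # unverified controller, used while provably safe
            return baseline(state)       # certified fallback ensuring collision-freedom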


  2. A. Murthy, Md. A. Islam, S. A. Smolka, and R. Grosu, Computing Compositional Proofs of Input-to-Output Stability Using SOS Optimization and δ-Decidability. Nonlinear Analysis: Hybrid Systems, Elsevier. Available online 14 May 2016.

    We present BFComp, an automated framework based on Sum-Of-Squares (SOS) optimization and δ-decidability over the reals, to compute Bisimulation Functions (BFs) that characterize Input-to-Output Stability (IOS) of dynamical systems. BFs are Lyapunov-like functions that decay along the trajectories of a given pair of systems, and can be used to establish the stability of the outputs with respect to bounded input deviations.
    In addition to establishing IOS, BFComp is designed to provide tight bounds on the squared output errors between systems whenever possible. For this purpose, two SOS optimization formulations are employed: SOSP 1, which enforces the decay requirements on a discretized grid over the input space, and SOSP 2, which covers the input space exhaustively. SOSP 2 is attempted first, and if the resulting error bounds are not satisfactory, SOSP 1 is used to compute a Candidate BF (CBF). The decay requirement for the BFs is then encoded as a δ-decidable formula and validated over a level set of the CBF using the dReal tool. If dReal produces a counterexample containing the states and inputs where the decay requirement is violated, this pair of vectors is used to refine the input-space grid and SOSP 1 is iterated.
    By computing BFs that appeal to a small-gain theorem, the BFComp framework can be used to show that a subsystem of a feedback-composed system can be replaced, with bounded error, by an approximately equivalent abstraction, thereby enabling approximate model-order reduction of dynamical systems. The BFs can then be used to obtain bounds on the error between the outputs of the original system and its reduced approximation. To this end, we illustrate the utility of BFComp on a canonical cardiac-cell model, showing that the four-variable Markovian model for the slowly activating potassium current IKs can be safely replaced by a one-variable Hodgkin–Huxley-type approximation. In addition to a detailed performance evaluation of BFComp, our case study also presents workarounds for systems with non-polynomial vector fields, which are not amenable to standard SOS optimizers.
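
    For orientation, a commonly used form of the bisimulation-function conditions (generic, and not necessarily BFComp's exact formulation): for a pair of systems dx_i/dt = f_i(x_i, u) with outputs y_i = g_i(x_i), a BF V must bound the squared output error and must not increase along paired trajectories:

        \[
        V(x_1, x_2) \ge \|g_1(x_1) - g_2(x_2)\|^2,
        \qquad
        \frac{\partial V}{\partial x_1} f_1(x_1, u) +
        \frac{\partial V}{\partial x_2} f_2(x_2, u) \le 0.
        \]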


  3. G. Chatzieleftheriou, B. Bonakdarpour, P. Katsaros, and S. A. Smolka, Abstract Model Repair. Logical Methods in Computer Science, Vol. 11, Issue 3 (2015).

    Given a Kripke structure M and CTL formula f, where M does not satisfy f, the problem of Model Repair is to obtain a new model M' such that M' satisfies f. Moreover, the changes made to M to derive M' should be minimal with respect to all such M'. As in model checking, state explosion can make it virtually impossible to carry out model repair on models with infinite or even large state spaces. In this paper, we present a framework for model repair that uses abstraction refinement to tackle state explosion. Our framework aims to repair Kripke structure models based on a Kripke Modal Transition System abstraction and a 3-valued semantics for CTL. We introduce an abstract-model-repair algorithm for which we prove soundness and semi-completeness, and we study its complexity class. Moreover, a prototype implementation is presented to illustrate the practical utility of abstract model repair on an Automatic Door Opener system model and a model of the Andrew File System 1 protocol.


  4. Md. A. Islam, A. Murthy, E. Bartocci, E. M. Cherry, F. H. Fenton, J. Glimm, S. A. Smolka, and R. Grosu, Model-Order Reduction of Ion Channel Dynamics Using Approximate Bisimulation. Theoretical Computer Science (Sept. 2015).

    We show that in the context of the Iyer et al. (IMW) 67-variable cardiac myocyte model, it is possible to replace the detailed 13-state probabilistic subsystem of the sodium channel dynamics with a much simpler Hodgkin-Huxley (HH)-like two-state abstraction, while only incurring a bounded approximation error. We then extend our technique to the 10-state subsystem of the fast-recovering calcium-independent potassium channel. The basis of our results is the construction of an approximate bisimulation between the HH-type abstraction and the corresponding detailed ion channel model, both of which are input-controlled (voltage in this case) CTMCs. The construction of the appropriate approximate bisimulation, as well as the overall result regarding the behavior of this modified IMW model, involves: (1) Identification of the voltage-dependent parameters of the m and h gates in the HH-type models via a two-step fitting process, carried out over more than 20,000 representative observational traces of the detailed sodium and potassium ion channel models. (2) Proving that the distance between observations of the detailed models and their respective abstraction is bounded. (3) Exploring the sensitivity of the overall IMW model to the HH-type approximations. Our extensive simulation results experimentally validate our findings, for varying IMW-type input stimuli.
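
    The HH-type abstractions referred to above have the classical two-gate form, shown here generically; the channel-specific gate exponent p, maximal conductance, and the voltage-dependent rates α and β are what the paper's two-step fitting process identifies:

        \[
        \frac{dm}{dt} = \alpha_m(V)(1 - m) - \beta_m(V)\,m,
        \qquad
        \frac{dh}{dt} = \alpha_h(V)(1 - h) - \beta_h(V)\,h,
        \qquad
        I = \bar{g}\, m^{p} h\,(V - E).
        \]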


  5. A. Murthy, E. Bartocci, F. Fenton, J. Glimm, R. Gray, E. M. Cherry, S. A. Smolka, and R. Grosu, Curvature Analysis of Cardiac Excitation Wavefronts. IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 10, No. 2, pp. 323-336 (March-April 2013).

    We present the Spiral Classification Algorithm (SCA), a fast and accurate algorithm for classifying electrical spiral waves and their associated breakup in cardiac tissues. The classification performed by SCA is an essential component of the detection and analysis of various cardiac arrhythmic disorders, including ventricular tachycardia and fibrillation. Given a digitized frame of a propagating wave, SCA constructs a highly accurate representation of the front and the back of the wave, piecewise interpolates this representation with cubic splines, and subjects the result to an accurate curvature analysis. This analysis is more comprehensive than methods based on spiral-tip tracking, as it considers the entire wave front and back. To increase the smoothness of the resulting symbolic representation, SCA uses weighted overlapping of adjacent segments at the join points. SCA has been applied to a number of representative types of spiral waves, and, for each type, a distinct curvature evolution in time (signature) has been identified. Distinct signatures have also been identified for spiral breakup. These results represent a significant first step in automatically determining parameter ranges for which a computational cardiac-cell network accurately reproduces a particular kind of cardiac arrhythmia, such as ventricular fibrillation.
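
    The curvature analysis rests on the standard formula for a planar parametric curve (x(s), y(s)), applied here to the cubic-spline representation of the wave front and back:

        \[
        \kappa(s) = \frac{x'(s)\,y''(s) - y'(s)\,x''(s)}
                         {\left( x'(s)^{2} + y'(s)^{2} \right)^{3/2}}.
        \]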


  6. A. Gorlin, C. R. Ramakrishnan, and S. A. Smolka, Model Checking with Probabilistic Tabled Logic Programming. Theory and Practice of Logic Programming, Vol. 12, No. 4-5, pp. 681-700, Cambridge University Press (Sept. 2012).

    We present a formulation of the problem of probabilistic model checking as one of query evaluation over probabilistic logic programs. To the best of our knowledge, our formulation is the first of its kind, and it covers a rich class of probabilistic models and probabilistic temporal logics. The inference algorithms of existing probabilistic logic-programming systems are well defined only for queries with a finite number of explanations. This restriction prohibits the encoding of probabilistic model checkers, where explanations correspond to executions of the system being model checked. To overcome this restriction, we propose a more general inference algorithm that uses finite generative structures (similar to automata) to represent families of explanations. The inference algorithm computes the probability of a possibly infinite set of explanations directly from the finite generative structure. We have implemented our inference algorithm in XSB Prolog, and use this implementation to encode probabilistic model checkers for a variety of temporal logics, including PCTL and GPL (which subsumes PCTL*). Our experimental results show that, despite the highly declarative nature of their encodings, the model checkers constructed in this manner are competitive with their native implementations.


  7. X. Huang, J. Seyster, S. Callanan, K. Dixit, R. Grosu, S. A. Smolka, S. D. Stoller, and E. Zadok, Software Monitoring with Controllable Overhead. International Journal on Software Tools for Technology Transfer, Vol. 14, Issue 3, pp. 327-347, Springer-Verlag (June 2012).

    We introduce the technique of software monitoring with controllable overhead (SMCO), which is based on a novel combination of supervisory control theory of discrete event systems and PID-control theory of discrete time systems. SMCO controls monitoring overhead by temporarily disabling monitoring of selected events for as short a time as possible under the constraint of a user-supplied target overhead o_t. This strategy is optimal in the sense that it allows SMCO to monitor as many events as possible, within the confines of o_t. SMCO is a general monitoring technique that can be applied to any system interface or API. We have applied SMCO to a variety of monitoring problems, including two highlighted in this paper: integer range analysis, which determines upper and lower bounds on integer variable values; and non-accessed period detection, which detects stale or underutilized memory allocations. We benchmarked SMCO extensively, using both CPU- and I/O-intensive workloads, which often exhibited highly bursty behavior. We demonstrate that SMCO successfully controls overhead across a wide range of target overhead levels; its accuracy monotonically increases with the target overhead; and it can be configured to distribute monitoring overhead fairly across multiple instrumentation points.
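
    The PID-control side of SMCO can be sketched as follows. This is a minimal illustration of steering a monitoring duty cycle toward a target overhead, not SMCO's actual controller, which additionally applies supervisory control of discrete events; all names and gains are hypothetical.

        # Hedged sketch: PID-style adjustment of a monitoring duty cycle so that
        # observed overhead tracks the user-supplied target o_t.
        def pid_step(o_target, o_observed, state, kp=0.5, ki=0.1, kd=0.05):
            error = o_target - o_observed          # positive: we can monitor more
            state["integral"] += error
            derivative = error - state["prev_error"]
            state["prev_error"] = error
            adjustment = kp * error + ki * state["integral"] + kd * derivative
            state["duty_cycle"] = min(1.0, max(0.0, state["duty_cycle"] + adjustment))
            return state["duty_cycle"]             # fraction of events to monitor

        controller_state = {"integral": 0.0, "prev_error": 0.0, "duty_cycle": 0.5}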


  8. J. Seyster, K. Dixit, X. Huang, R. Grosu, K. Havelund, S. A. Smolka, S. D. Stoller, and E. Zadok, InterAspect: Aspect-Oriented Instrumentation with GCC. Formal Methods in System Design, Vol. 41, No. 3, pp. 295-320, Springer (Dec. 2012).

    We present the INTERASPECT instrumentation framework for GCC, a widely used compiler infrastructure. The addition of plug-in support in the latest release of GCC makes it an attractive platform for runtime instrumentation, as GCC plug-ins can directly add instrumentation by transforming the compiler's intermediate representation. Such transformations, however, require expert knowledge of GCC internals. INTERASPECT addresses this situation by allowing instrumentation plug-ins to be developed using the familiar vocabulary of Aspect-Oriented Programming: pointcuts, join points, and advice functions. Moreover, INTERASPECT uses specific information about each join point in a pointcut, possibly including results of static analysis, to support powerful customized instrumentation. We describe the INTERASPECT API and present several examples that illustrate its practical utility as a runtime-verification platform. We also introduce a tracecut system that uses INTERASPECT to construct program monitors that are formally specified as regular expressions.
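
    The aspect-oriented vocabulary used above is easy to illustrate. The following is generic AOP in Python, not the INTERASPECT C API: a pointcut selects join points (here, calls to functions whose names match a pattern) and attaches an advice function that runs at each one.

        import fnmatch
        import functools

        def advice_before(pattern, advice):
            """Pointcut: wrap functions whose names match `pattern` so that
            `advice` runs before each call (the join point)."""
            def pointcut(fn):
                if not fnmatch.fnmatch(fn.__name__, pattern):
                    return fn                      # join point not matched
                @functools.wraps(fn)
                def wrapped(*args, **kwargs):
                    advice(fn.__name__, args)      # advice runs before the call
                    return fn(*args, **kwargs)
                return wrapped
            return pointcut

        @advice_before("alloc_*", lambda name, args: print("calling", name, args))
        def alloc_buffer(n):
            return bytearray(n)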


  9. X. Huang, A. Singh, and S. A. Smolka, Using Integer Clocks to Verify Clock-Synchronization Protocols. Innovations in Systems and Software Engineering, Special Issue on NASA Formal Methods Symposium 2010, Vol. 7, Issue 2, pp. 119-130 (June 2011).

    We use the Uppaal model checker for timed automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN), the clock-synchronization algorithm of Lenzen, Locher and Wattenhofer (LLW) for general distributed systems, and the clock-thread technique of the software monitoring with controllable overhead algorithm (SMCO). Clock-synchronization algorithms such as TPSN, LLW, and SMCO must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must also be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of timed automata. To overcome this formal-modeling obstacle, we augment the Uppaal specification language with the integer clock-derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN, LLW, and SMCO protocols. With these integer-clock-based models in hand, we use Uppaal to verify a number of key correctness properties, including network-wide time synchronization, bounded clock skew, bounded overhead skew, and absence of deadlock. We also use the Uppaal Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution.
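
    The integer-clock idea can be paraphrased in ordinary code; the sketch below is illustrative and is not Uppaal syntax. Integer clocks are just integer variables advanced by a global pulse generator, so they support the clock arithmetic and clock assignment that the real-valued clocks of timed automata lack.

        class IntegerClock:
            """An integer variable periodically incremented by a pulse generator."""
            def __init__(self, drift=0):
                self.value = 0
                self.drift = drift                 # extra ticks per pulse (models drift)
            def pulse(self):
                self.value += 1 + self.drift

        local, reference = IntegerClock(drift=1), IntegerClock()
        for _ in range(10):                        # the global pulse generator
            local.pulse(); reference.pulse()
        skew = local.value - reference.value       # clock arithmetic: measure drift
        local.value = reference.value              # clock assignment: resynchronize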


  10. E. Bartocci, R. Singh, F. von Stein, A. Amedome, A.-J. Caceres, J. Castillo, E. Closser, G. Deards, A. Goltsev, R. Sta. Ines, C. Isbilir, J. Marc, D. Moore, D. Pardi, S. Sadhu, S. Sanchez, P. Sharma, A. Singh, J. Rogers, A. Wolinetz, T. Grosso-Applewhite, K. Zhao, A. Filipski, R. F. Gilmour Jr, R. Grosu, J. Glimm, S. A. Smolka, E. M. Cherry, E. M. Clarke, N. Griffeth, and F. H. Fenton, Teaching Cardiac Electrophysiology Modeling to Undergraduate Students: Using Java Applets and GPU Programming to Study Arrhythmias and Spiral-Wave Dynamics. Advances in Physiology Education, Vol. 35, No. 4, pp. 427-437, American Physiological Society (Dec. 2011).

    As part of a 3-wk intersession workshop funded by a National Science Foundation Expeditions in Computing award, 15 undergraduate students from the City University of New York collaborated on a study aimed at characterizing the voltage dynamics and arrhythmogenic behavior of cardiac cells for a broad range of physiologically relevant conditions using an in silico model. The primary goal of the workshop was to cultivate student interest in computational modeling and analysis of complex systems by introducing them through lectures and laboratory activities to current research in cardiac modeling and by engaging them in a hands-on research experience. The success of the workshop lay in the exposure of the students to active researchers and experts in their fields, the use of hands-on activities to communicate important concepts, active engagement of the students in research, and explanations of the significance of results as the students generated them. The workshop content addressed how spiral waves of electrical activity are initiated in the heart and how different parameter values affect the dynamics of these reentrant waves. Spiral waves are clinically associated with tachycardia, when the waves remain stable, and with fibrillation, when the waves exhibit breakup. All in silico experiments were conducted by simulating a mathematical model of cardiac cells on graphics processing units instead of the standard central processing units of desktop computers. This approach decreased the run time for each simulation to almost real time, thereby allowing the students to quickly analyze and characterize the simulated arrhythmias. Results from these simulations, as well as some of the background and methodology taught during the workshop, are presented in this article along with the programming code and explanations of simulation results, in an effort to allow other teachers and students to perform their own demonstrations, simulations, and studies.


  11. A. Singh, C. R. Ramakrishnan and S. A. Smolka. A Process Calculus for Mobile Ad Hoc Networks. Science of Computer Programming, Vol. 75, Issue 6, pp. 440-469 (June 2010).

    We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. Finally, we illustrate the practical utility of the calculus by developing and analyzing a formal model of a leader-election protocol for MANETs.


  12. E. Bartocci, F. Corradini, M. R. Di Berardini, E. Entcheva, S. A. Smolka and R. Grosu. Modeling and Simulation of Cardiac Tissue using Hybrid I/O Automata. Theoretical Computer Science. Vol. 410, No. 33, pp. 3149-3165 (August 2009).

    We propose a new biological framework based on the Lynch et al. theory of Hybrid I/O Automata (HIOAs) for modeling and simulating excitable tissue. Within this framework, we view an excitable tissue as a composition of two main kinds of components: a diffusion medium and a collection of cells, both modeled as HIOAs. This approach yields a notion of decomposition that allows us to describe a tissue as the parallel composition of several interacting tissues, a property that could be exploited to parallelize, and hence improve, the efficiency of the simulation process. We also demonstrate the feasibility of our HIOA-based framework to capture and mimic different kinds of wave-propagation behavior in 2D isotropic cardiac tissue, including normal wave propagation along the tissue; the creation of spiral waves; the break-up of spiral waves into more complex patterns such as fibrillation; and the recovery of the tissue to rest via electrical defibrillation.


  13. R. Grosu, S.A. Smolka, F. Corradini, A. Wasilewska, E. Entcheva, and E. Bartocci, Learning and Detecting Emergent Behavior in Networks of Cardiac Myocytes. Communications of the ACM, Vol. 52, No. 3, pp. 97-105 (March 2009).

    We address the problem of specifying and detecting emergent behavior in networks of cardiac myocytes, spiral electric waves in particular, a precursor to atrial and ventricular fibrillation. To solve this problem we: (1) Apply discrete mode-abstraction to the cycle-linear hybrid automata (CLHA) we have recently developed for modeling the behavior of myocyte networks; (2) Introduce the new concept of spatial superposition of CLHA modes; (3) Develop a new spatial logic, based on spatial superposition, for specifying emergent behavior; (4) Devise a new method for learning the formulae of this logic from the spatial patterns under investigation; and (5) Apply bounded model checking to detect (within milliseconds) the onset of spiral waves. We have implemented our methodology as the Emerald tool suite, a component of our EHA framework for specification, simulation, analysis, and control of excitable hybrid automata. We illustrate the effectiveness of our approach by applying Emerald to the scalar electrical fields produced by our CellExcite simulator.


  14. P. Ye, E. Entcheva, S.A. Smolka and R. Grosu. Modeling Excitable Cells Using Cycle-Linear Hybrid Automata. IET Systems Biology, Vol. 2, Issue 1, pp. 24-32 (January 2008).

    We introduce cycle-linear hybrid automata (CLHAs), a new model of excitable cells that efficiently and accurately captures action-potential morphology and other typical excitable-cell characteristics such as refractoriness and restitution. Hybrid automata combine discrete transition graphs with continuous dynamics and emerge in a natural way during the (piecewise) approximation process of any nonlinear system. CLHAs are a new form of hybrid automata that exhibit linear behaviour on a per-cycle basis but whose overall behaviour is appropriately nonlinear. To motivate the need for this modelling formalism, we first show how to recast two recently proposed models of excitable cells as hybrid automata: the piecewise-linear model of Biktashev and the nonlinear model of Fenton–Karma. Both of these models were designed to efficiently approximate excitable-cell behaviour. We then show that CLHAs closely mimic the behaviour of several classical highly nonlinear models of excitable cells, thereby retaining the simplicity of Biktashev's model without sacrificing the expressiveness of Fenton–Karma. CLHAs are not restricted to excitable cells; they can be used to capture the behaviour of a wide class of dynamic systems that exhibit some level of periodicity plus adaptation.


  15. E. Bartocci, F. Corradini, E. Entcheva, R. Grosu and S.A. Smolka. CellExcite: An Efficient Simulation Environment for Excitable Cells. BMC Bioinformatics, 9(Suppl 2):S3 (March 2008).

    Background: Brain, heart and skeletal muscle share similar properties of excitable tissue, featuring both discrete behavior (all-or-nothing response to electrical activation) and continuous behavior (recovery to rest follows a temporal path, determined by multiple competing ion flows). Classical mathematical models of excitable cells involve complex systems of nonlinear differential equations. Such models not only impair formal analysis but also impose high computational demands on simulations, especially in large-scale 2-D and 3-D cell networks. In this paper, we show that by choosing Hybrid Automata as the modeling formalism, it is possible to construct a more abstract model of excitable cells that preserves the properties of interest while reducing the computational effort, thereby admitting the possibility of formal analysis and efficient simulation. Results: We have developed CellExcite, a sophisticated simulation environment for excitable-cell networks. CellExcite allows the user to sketch a tissue of excitable cells, plan the stimuli to be applied during simulation, and customize the diffusion model. CellExcite adopts Hybrid Automata (HA) as the computational model in order to efficiently capture both discrete and continuous excitable-cell behavior. Conclusions: The CellExcite simulation framework for multicellular HA arrays exhibits significantly improved computational efficiency in large-scale simulations, thus opening the possibility for formal analysis based on HA theory. A demo of CellExcite is available at http://www.cs.sunysb.edu/~eha/.


  16. S. Basu and S.A. Smolka, “Model Checking the Java Meta-Locking Algorithm.” ACM Transactions on Software Engineering and Methodology. Vol. 16, Issue 3 (July 2007).

    We report on our efforts to use the XMC model checker to model and verify the Java metalocking algorithm. XMC [Ramakrishna et al. 1997] is a versatile and efficient model checker for systems specified in XL, a highly expressive value-passing language. Metalocking [Agesen et al. 1999] is a highly optimized technique for ensuring mutually exclusive access by threads to object monitor queues and, therefore, plays an essential role in allowing Java to offer concurrent access to objects. Metalocking can be viewed as a two-tiered scheme. At the upper level, the metalock level, a thread waits until it can enqueue itself on an object's monitor queue in a mutually exclusive manner. At the lower level, the monitor-lock level, enqueued threads race to obtain exclusive access to the object. Our abstract XL specification of the metalocking algorithm is fully parameterized, both on the number of threads M and the number of objects N. It also captures a sophisticated optimization of the basic metalocking algorithm known as extra-fast locking and unlocking of uncontended objects. Using XMC, we show that for a variety of values of M and N, the algorithm indeed provides mutual exclusion and freedom from deadlock and lockout at the metalock level. We also show that, while the monitor-lock level of the protocol preserves mutual exclusion and deadlock-freedom, it is not lockout-free because the protocol's designers chose to give equal preference to awaiting threads and newly arrived threads.


  17. F. Moller, S.A. Smolka, and J. Srba, “On the Computational Complexity of Bisimulation, Redux.” Information and Computation, Vol. 194, Issue 2, pp. 129-143 (November 2004).

    Paris Kanellakis and the second author (Smolka) were among the first to investigate the computational complexity of bisimulation, and the first and third authors (Moller and Srba) have long-established track records in the field. Smolka and Moller have also written a brief survey about the computational complexity of bisimulation [ACM Comput. Surv. 27(2) (1995) 287]. The authors believe that the special issue of Information and Computation devoted to PCK50: Principles of Computing and Knowledge: Paris C. Kanellakis Memorial Workshop represents an ideal opportunity for an up-to-date look at the subject.


  18. D.Q. Goldin, S.A. Smolka, P.C. Attie, and E.L. Sonderegger, “Turing Machines, Transition Systems, and Interaction.” Information and Computation, Vol. 194, Issue 2, pp. 101-128 (November 2004).

    This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of “normal” Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence.
    A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems; and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation.
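
    The persistent-stream idea is easy to render as a toy program (a hypothetical example, not from the paper): each macrostep is an ordinary terminating computation from one input to one output, while the worktape survives between macrosteps.

        def ptm_run(inputs, step):
            worktape = []                           # persists across computations
            for x in inputs:
                y, worktape = step(x, worktape)     # one "normal" TM computation
                yield y

        def remember_last(x, tape):
            """Echo the previous input -- impossible for an amnesic machine."""
            prev = tape[-1] if tape else None
            return prev, tape + [x]

        print(list(ptm_run(["a", "b", "c"], remember_last)))   # [None, 'a', 'b']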


  19. P. Yang, C.R. Ramakrishnan, and S.A. Smolka, “A Logical Encoding of the π-Calculus: Model Checking Mobile Processes Using Tabled Resolution.” International Journal on Software Tools for Technology Transfer (STTT), Vol. 6, No. 1, pp. 38-66, Springer-Verlag (July 2004).

    We present MMC, a model checker for mobile systems specified in the style of the π-calculus. MMC’s development builds on our experience gained in developing XMC, a model checker for an extension of Milner’s value-passing calculus implemented using the XSB tabled logic-programming system. MMC, however, is not simply an extension of XMC; rather, it is virtually a complete re-implementation that addresses the salient issues that arise in the π-calculus, including scope extrusion and intrusion, and dynamic generation of new names to avoid name capture. We show that tabled logic programming is especially suitable as an efficient implementation platform for model checking π-calculus specifications, and can be used to obtain an exact encoding of the π-calculus’s transitional semantics. Moreover, MMC is easily extended to handle process expressions in the spi-calculus. Our experimental data shows that MMC outperforms other known tools for model checking the π-calculus.


  20. D. Hansel, R. Cleaveland, and S.A. Smolka, Distributed Prototyping from Validated Specifications. Journal of Systems and Software, 2003.

    We present vpl2cxx, a translator that automatically generates efficient, fully distributed C++ code from high-level system models specified in the mathematically well-founded VPL design language. As the Concurrency Workbench of the New Century (CWB-NC) verification tool includes a front-end for VPL, designers may use the full range of automatic verification and simulation checks provided by this tool on their VPL system designs before invoking the translator, thereby generating distributed prototypes from validated specifications. Besides being fully distributed, the code generated by vpl2cxx is highly readable and portable to a host of execution environments and real-time operating systems. This is achieved by encapsulating all generated code dealing with low-level interprocess communication issues in a library for synchronous communication, which in turn is built upon the Adaptive Communication Environment (ACE) client-server network programming interface. Finally, example applications show that the performance of the generated code is very good, especially for prototyping purposes. We discuss two such examples, including the RETHER real-time Ethernet protocol for voice and video applications.


  21. Y. Dong, X. Du, G. J. Holzmann and S.A. Smolka, Fighting Livelock in the GNU i-Protocol: A Case Study in Explicit-State Model Checking. Software Tools for Technology Transfer, Vol. 4, Issue 2, 2003.

    The i-protocol, an optimized sliding-window protocol for GNU uucp, first came to our attention in 1995 when we used the Concurrency Factory's local model checker to detect, locate, and correct a non-trivial livelock in version 1.04 of the protocol. Since then, we have conducted a systematic case study on the protocol using four verification tools, viz. Cospan, Murphi, Spin, and XMC, each of which supports some form of explicit-state model checking. Our results show that although the i-protocol is inherently complex--the size of its state space grows exponentially in the window size and it deploys several sophisticated optimizations aimed at minimizing control-message and retransmission overhead--it is nonetheless amenable to a number of general-purpose abstraction techniques whose application can significantly reduce the size of the protocol's state space.


  22. A. Philippou, O. Sokolsky, R. Cleaveland, I. Lee, and S.A. Smolka, Hiding Resources that Can Fail: An Axiomatic Perspective. Information Processing Letters, Special Issue on Process Algebra, Vol. 80, Issue 1, pp. 3-13 (October 2001).

    In earlier work, we presented a process algebra, PACSR, that uses a notion of resource failure to capture probabilistic behavior in reactive systems. PACSR also supports an operator for resource hiding. In this paper, we carefully consider the interaction between these two features from an axiomatic perspective. For this purpose, we introduce a subset of PACSR, called "PACSR-lite", that allows us to isolate the semantic issues surrounding resource hiding in a probabilistic setting, and provide a sound and complete axiomatization of strong bisimulation for this fragment.


  23. X. Du, S.A. Smolka, and R. Cleaveland, Local Model Checking and Protocol Analysis. Software Tools for Technology Transfer, Vol. 2, No. 3, pp. 219-241 (Nov. 1999).

    This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol. The protocol considered is Rether, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking results show that (for a particular network configuration) Rether makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller than the total amount that would result from a global analysis of the protocol.

    In the course of specifying and verifying Rether, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time overhead. Again, using local model checking, we showed that this alternative design also possesses the properties of interest. This observation points up one of the often-overlooked benefits of formal verification: by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to uncover interesting design alternatives.


  24. R. Cleaveland, Z. Dayar, S.A. Smolka, and S. Yuen, Testing Preorders for Probabilistic Processes. Information and Computation (1999).

    We present a testing preorder for probabilistic processes based on a quantification of the probability with which processes pass tests. The theory enjoys close connections with the classical testing theory of De Nicola and Hennessy in that whenever a process passes a test with probability 1 (respectively some non-zero probability) in our setting, then the process must (respectively may) pass the test in the classical theory. We also develop an alternative characterization of the probabilistic testing preorders that takes the form of a mapping from probabilistic traces to the interval [0,1], where a probabilistic trace is an alternating sequence of actions and probability distributions over actions.

    Finally, we give proof techniques, derived from the alternative characterizations, for establishing preorder relationships between probabilistic processes. The utility of these techniques is demonstrated by means of some simple examples.


  25. Y.-J. Joung and S.A. Smolka, Strong Interaction Fairness via Randomization. IEEE Trans. on Parallel and Distributed Systems, Vol. 9, No. 2 (February 1998).

    We present MULTI, a symmetric, distributed, randomized algorithm that, with probability 1, schedules multiparty interactions in a strongly fair manner. To our knowledge, MULTI is the first algorithm for strong interaction fairness to appear in the literature. Moreover, the expected time taken by MULTI to establish an interaction is a constant not depending on the total number of processes in the system. In this sense, MULTI guarantees real-time response.

    MULTI makes no assumptions (other than boundedness) about the time it takes processes to communicate. It thus offers an appealing tonic to the impossibility results of Tsay and Bagrodia and of Joung concerning strong interaction fairness in an environment, shared-memory or message-passing, in which processes are deterministic and the communication time is nonnegligible.

    Because strong interaction fairness is as strong a fairness condition as one might actually want to impose in practice, our results indicate that randomization may also prove fruitful for other notions of fairness lacking deterministic realizations and requiring real-time response.


  26. S.-H. Wu, S.A. Smolka and E. Stark, Composition and Behaviors of Probabilistic I/O Automata. Theoretical Computer Science, Vol. 176, No. 1-2, pp. 1-38 (1997).

    We augment the I/O automaton model of Lynch and Tuttle with probability, as a step toward the ultimate goal of obtaining a useful tool for specifying and reasoning about asynchronous probabilistic systems. Our new model, called probabilistic I/O automata, preserves the fundamental properties of the I/O automaton model, such as the asymmetric treatment of input and output and the pleasant notion of asynchronous composition. For certain classes of probabilistic I/O automata, we show that probabilistic behavior maps, which are an abstract representation of I/O automaton behavior in terms of a certain expectation operator, are compositional and fully abstract with respect to a natural notion of probabilistic testing.


  27. Y.-J. Joung and S.A. Smolka, A Comprehensive Study of the Complexity of Multiparty Interaction. Journal of the ACM, Vol. 43, No. 1, pp. 75-115 (January 1996).

    A multiparty interaction is a set of I/O actions executed jointly by a number of processes, each of which must be ready to execute its own action for any of the actions in the set to occur. An attempt to participate in an interaction delays a process until all other participants are available. Although a relatively new concept, the multiparty interaction has found its way into a number of distributed programming languages and algebraic models of concurrency. In this paper, we present a taxonomy of languages for multiparty interaction that covers all proposals of which we are aware. Based on this taxonomy, we then present a comprehensive analysis of the computational complexity of the multiparty interaction implementation problem, the problem of scheduling multiparty interactions in a given execution environment.


  28. S.A. Smolka and B. Steffen, Priority as Extremal Probability. Formal Aspects of Computing, Vol. 8, pp. 585-606 (1996).

    We extend the stratified model of probabilistic processes to obtain a very general notion of process priority. The main idea is to allow probability guards of value 0 to be associated with alternatives of a probabilistic summation expression. Such alternatives can be chosen only if the non-zero alternatives are precluded by contextual constraints. We refer to this model as one of "extremal probability" and to its signature as PCCSz. We provide PCCSz with a structural operational semantics and a notion of probabilistic bisimulation, which is shown to be a congruence. Of particular interest is the abstraction PCCSp of PCCSz in which all non-zero probability guards are identified. PCCSp represents a customized framework for reasoning about priority, and covers all features of process algebras proposed for reasoning about priority that we know of.


  29. R. Cleaveland and S.A. Smolka, Strategic Directions in Concurrency Research. ACM Computing Surveys, Vol. 28, No. 4, pp. 607-625 (December 1996).


  30. R. De Nicola and S.A. Smolka, Concurrency: Theory and Practice. ACM Computing Surveys, Vol. 28, No. 4es, Article 52 (December 1996). http://www.acm.org/pubs/articles/journals/surveys/1996-28-4es/a52-de_nicola/a52-de_nicola.html


  31. R. van Glabbeek, S.A. Smolka, and B.U. Steffen, Reactive, Generative, and Stratified Models of Probabilistic Processes. Information and Computation, Vol. 121, No. 1, pp. 59-80 (August 1995).

    We introduce three models of probabilistic processes, namely, reactive, generative and stratified. These models are investigated within the context of PCCS, an extension of Milner's SCCS in which each summand of a process summation expression is guarded by a probability and the sum of these probabilities is 1. For each model we present a structural operational semantics of PCCS and a notion of bisimulation equivalence which we prove to be a congruence. We also show that the models form a hierarchy: the reactive model is derivable from the generative model by abstraction from the relative probabilities of different actions, and the generative model is derivable from the stratified model by abstraction from the purely probabilistic branching structure. Moreover, the classical nonprobabilistic model is derivable from each of these models by abstraction from all probabilities.


  32. J.C.M. Baeten, J.A. Bergstra, and S.A. Smolka, Axiomatizing Probabilistic Processes: ACP with Generative Probabilities. Information and Computation, Vol. 121, No. 2, pp. 234-255 (September 1995).

    This paper is concerned with finding complete axiomatizations of probabilistic processes. We examine this problem within the context of the process algebra ACP and obtain as our end-result the axiom system prACP-I, a version of ACP whose main innovation is a probabilistic asynchronous interleaving operator. Our goal was to introduce probability into ACP in as simple a fashion as possible. Ideally, ACP should be the homomorphic image of the probabilistic version in which the probabilities are forgotten.

    We begin by slightly weakening ACP to obtain the axiom system ACP-I. The main difference between ACP and ACP-I is that the axiom x + δ = x, which does not yield a plausible interpretation in the generative model of probabilistic computation, is rejected in ACP-I. We argue that this does not affect the usefulness of ACP-I in practice, and show how ACP can be reconstructed from ACP-I with a minimal amount of technical machinery.

    prACP-I is obtained from ACP-I through the introduction of probabilistic alternative and parallel composition operators, and a process graph model for ACP-I based on probabilistic bisimulation is developed. We show that prACP-I is a sound and complete axiomatization of probabilistic bisimulation for finite processes, and that prACP-I can be homomorphically embedded in ACP-I as desired.

    Our results for ACP-I and prACP-I are presented in a modular fashion by first considering several subsets of the signatures. We conclude with a discussion about adding an iteration operator to ACP-I.


  33. F. Moller and S.A. Smolka, On the Complexity of Bisimulation. ACM Computing Surveys, Vol. 27, No. 2, pp. 287-289 (June 1995).


  34. R. Gupta, S. Bhaskar, and S.A. Smolka, On Randomization in Sequential and Distributed Algorithms, ACM Computing Surveys, Vol. 26, No. 1, pp. 7-86 (March 1994).

    Probabilistic, or randomized, algorithms are fast becoming as commonplace as conventional deterministic algorithms. This survey presents five techniques that have been widely used in the design of randomized algorithms. These techniques are illustrated using 12 randomized algorithms--both sequential and distributed--that span a wide range of applications, including: primality testing (a classical problem in number theory), universal hashing (choosing the hash function dynamically and at random), interactive probabilistic proof systems (a new method of program testing), dining philosophers (a classical problem in distributed computing), and Byzantine agreement (reaching agreement in the presence of malicious processors). Included with each algorithm is a discussion of its correctness and its computational complexity. Several related topics of interest are also addressed, including the theory of probabilistic automata, probabilistic analysis of conventional algorithms, deterministic amplification, and derandomization of randomized algorithms. Finally, a comprehensive annotated bibliography is given.
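
    Of the surveyed applications, randomized primality testing is the easiest to illustrate. Below is a compact Miller-Rabin test in its standard textbook form (not the survey's own presentation); a composite n is erroneously declared prime with probability at most 4**(-rounds).

        import random

        def is_probably_prime(n, rounds=20):
            if n < 4:
                return n in (2, 3)
            if n % 2 == 0:
                return False
            d, s = n - 1, 0
            while d % 2 == 0:                  # write n - 1 = d * 2**s, d odd
                d //= 2; s += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1) # random base
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False               # a witnesses that n is composite
            return True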


  35. Y.-J. Joung and S.A. Smolka, Coordinating First-Order Multiparty Interactions. ACM TOPLAS, Vol. 16, No. 3, pp. 954-985 (May 1994).

    A first-order multiparty interaction is an abstraction mechanism that defines communication among a set of formal process roles. Actual processes participate in a first-order interaction by enrolling into roles, and execution of the interaction can proceed when all roles are filled by distinct processes. As in CSP, enrollment statements can serve as guards in alternative commands. The enrollment guard scheduling problem then is to enable the execution of first-order interactions through the judicious scheduling of roles to processes currently ready to execute enrollment guards.

    We present a fully distributed and message-efficient algorithm for the enrollment guard scheduling problem, the first such solution of which we are aware. We also describe several extensions of the algorithm, including generic roles, dynamically changing environments where processes can be created and destroyed at run time, and nested enrollment, which allows interactions to be nested.


  36. P.C. Kanellakis and S.A. Smolka, “CCS Expressions, Finite State Processes, and Three Problems of Equivalence,” Information and Computation, Vol. 86, No. 1, pp. 43-68 (May 1990).

    We examine the computational complexity of testing finite state processes for equivalence in Milner's Calculus of Communicating Systems (CCS). The equivalence problems in CCS are presented as refinements of the familiar problem of testing whether two nondeterministic finite automata (NFA) are equivalent, i.e., accept the same language. Three notions of equivalence proposed for CCS are investigated, namely, observational equivalence, strong observational equivalence, and failure equivalence. We show that observational equivalence can be tested in polynomial time. As defined in CCS, observational equivalence is the limit of a sequence of successively finer equivalence relations, ≈k, where ≈1 is nondeterministic finite automaton equivalence. We prove that, for each fixed k, deciding ≈k is PSPACE-complete. We show that strong observational equivalence can be decided in polynomial time by reducing it to generalized partitioning, a new combinatorial problem of independent interest. Finally, we demonstrate that testing for failure equivalence is PSPACE-complete, even for a very restricted type of process.
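
    The polynomial-time results are in the spirit of partition refinement over a finite labeled transition system. The naive sketch below computes strong-bisimulation classes by repeatedly splitting blocks; it is illustrative only and is not the paper's generalized-partitioning reduction.

        def bisim_classes(states, actions, trans):
            """states: list; trans: dict (state, action) -> iterable of successors."""
            part = {s: 0 for s in states}          # start with a single block
            while True:
                sig = {s: tuple(frozenset(part[t] for t in trans.get((s, a), ()))
                                for a in actions)
                       for s in states}
                blocks, newpart = {}, {}
                for s in states:                   # split blocks by signature
                    key = (part[s], sig[s])
                    newpart[s] = blocks.setdefault(key, len(blocks))
                if newpart == part:
                    return part                    # fixpoint: bisimulation classes
                part = newpart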


  37. J. Reif and S.A. Smolka, “Data Flow Analysis of Distributed Communicating Processes,” International Journal of Parallel Programming (Gary Lindstrom, Editor), Vol. 19, No. 1 (Feb. 1990).

    Data flow analysis is a technique essential to the compile-time optimization of computer programs, wherein facts relevant to program optimizations are discovered by the global propagation of facts obvious locally. This paper extends several known techniques for data flow analysis of sequential programs to the static analysis of distributed communicating processes. In particular, we present iterative algorithms for detecting unreachable program statements, and for determining the values of program expressions. The latter information can be used to place bounds on the size of variables and messages. Our main innovation is the event spanning graph, which serves as a heuristic for ordering the nodes through which data flow information is propagated. We consider both static communication, where all channel arguments are constants, and the more difficult dynamic communication, where channel arguments may be variables and channels may be passed as messages.
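
    Iterative data flow analyses of this kind are built around a worklist fixpoint loop; the generic solver below shows the standard scheme (the paper's contribution, the event spanning graph, would supply the node-ordering heuristic). All parameter names are placeholders.

        def worklist_solve(nodes, succ, transfer, join, init):
            facts = {n: init for n in nodes}
            work = list(nodes)                     # ordering heuristics plug in here
            while work:
                n = work.pop(0)
                out = transfer(n, facts[n])        # propagate facts through n
                for m in succ.get(n, []):
                    merged = join(facts[m], out)
                    if merged != facts[m]:         # fact changed: re-propagate
                        facts[m] = merged
                        work.append(m)
            return facts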


  38. J. Reif and S.A. Smolka, “The Complexity of Reachability in Distributed Communicating Processes,” Acta Informatica, Vol. 25, pp. 333-354 (1988).

    A crucial problem in the analysis of communicating processes is the detection of program statements that are unreachable due to communication deadlocks. In this paper, we consider the computational complexity of the reachability problem for various models of communicating processes. We obtain these models by making simplifying assumptions about the behavior of message queues and program control, with the hope that reachability may become easier to decide. Depending on the assumptions made, we show that reachability is undecidable, requires nearly exponential space infinitely often, or is NP-complete. In obtaining these results, we demonstrate a very close relationship between the decidable models and Petri nets and Habermann's path expressions, respectively.

  39. A. Giacalone and S.A. Smolka, “Integrated Environments for Formally Based Design and Simulation of Concurrent Systems: A Non-Procedural Approach,” Special Issue on Integrated Software Engineering Environments, IEEE Transactions on Software Engineering, Vol. 14, No. 6, pp. 787-802 (June 1988).

    This project will initially build an environment that supports Milner's Calculus of Communicating Systems. CCS provides a simple and general model of concurrency based on processes communicating via ports, and is also a formal system in which semantics is represented nonprocedurally, i.e., via inference and equational rules. The experiment with the CCS environment will be performed by using it in several ongoing research projects involving concurrent systems. These experiments will provide critical feedback on the environment's effectiveness. The CCS environment will also serve as a laboratory to investigate how the principles on which it is based extend to other languages. This is an attempt to develop an understanding of environments for the specification and design of concurrent systems. Improved understanding of such environments will be important in future designs.


  40. P.C. Kanellakis and S.A. Smolka, “On the Analysis of Cooperation and Antagonism in Networks of Communicating Processes,” Special Issue on Parallel and Distributed Computing, Algorithmica, Vol. 3, pp. 421-450 (1988).

    We propose a new method for the analysis of cooperative and antagonistic properties of communicating finite state processes (FSPs). This algebraic technique is based on a composition operator and on the notion of possibility equivalence among FSPs. We demonstrate its utility by showing that potential blocking, termination, and lockout can be decided in polynomial time for loosely connected networks of tree FSPs. Potential blocking and termination are examples of cooperative properties, while lockout is an antagonistic one. For loosely connected networks of (the more general) acyclic FSPs, the cooperative properties become NP-complete and the antagonistic ones PSPACE-complete. For tightly coupled networks of tree FSPs, we also have NP-completeness for the cooperative properties. For the harder case of FSPs with cycles, we provide a natural extension of the method.


  41. S.A. Smolka and R.E. Strom, “A CCS Semantics for NIL,” Special Issue on Formal Definition/Design of Computer Systems, IBM Journal of Research and Development, Vol. 31, No. 5, pp. 556-571 (Sept. 1987).

    We present a syntax-directed translation of NIL, a high-level language for distributed systems programming, into CCS, Milner's Calculus of Communicating Systems. This translation presents unique problems because of NIL's highly dynamic nature, and makes full use of CCS's descriptive facilities. In particular, we consider NIL constructs for dynamic creation and deletion of processes and communication channels, queued synchronous and asynchronous message passing, nondeterministic message selection, and exception handling. A NIL implementation of a simple command shell is used to illustrate the translation procedure. We discuss various issues and open problems concerning the suitability of CCS as an abstract semantics for NIL.


  42. P. Wegner and S.A. Smolka, “Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives,” IEEE Transactions on Software Engineering, Vol. 9, No. 4, pp. 446-462 (July 1983).

    Three notations for concurrent programming are compared, namely CSP, Ada, and monitors. CSP is an experimental language for exploring structuring concepts in concurrent programming. Ada is a general-purpose language with concurrent programming facilities. Monitors are a construct for managing access by concurrent processes to shared resources. We start by comparing "lower-level" communication, synchronization, and nondeterminism in CSP and Ada and then examine "higher-level" module interface properties of Ada tasks and monitors.


Recent Conference Publications

  1. A. Tiwari, S. A. Smolka, L. Esterle, A. Lukina, J. Yang, and R. Grosu, “Attacking the V: On the Resiliency of Adaptive-Horizon MPC.” Proceedings of ATVA 2017, 15th International Symposium on Automated Technology for Verification and Analysis, Pune, India (Oct. 2017).


  2. H. Abbas, A. Rodionova, E. Bartocci, S. A. Smolka, and R. Grosu, “Quantitative Regular Expressions for Arrhythmia Detection Algorithms.” Proceedings of CMSB 2017, 15th International Conference on Computational Methods in Systems Biology, Darmstadt, Germany (Sept. 2017).

    Motivated by the desire to verify the correctness of algorithms for arrhythmia discrimination used in cardiac medical devices, we present a general wavelet-based characterization of peaks (local maxima and minima) that occur in cardiac electrograms, along with two peak-detection algorithms based on this characterization. Peak detection (PD) is a common signal-processing task, as peaks indicate events of interest, such as heartbeats (in the cardiac setting). The performance of PD thereby directly influences the correctness of the algorithms that depend on its output. We show that our wavelet-based PD algorithms (peakWPM and peakWPB) and a commercial PD algorithm from Medtronic Inc. (peakMDT) are easily expressible in Quantitative Regular Expressions (QREs), a formal language based on regular expressions for specifying complex numerical queries over data streams. We then study the accuracy and sensitivity of the resulting QRE-based PD algorithms on real patient data, and show that the wavelet-based peakWPM algorithm outperforms the other two PD algorithms, yielding results that are on par with those provided by a cardiologist.


  3. N. Paoletti, K. S. Liu, S. A. Smolka, and S. Lin, “Data-Driven Robust Control for Type 1 Diabetes Under Meal and Exercise Uncertainties.” Proceedings of CMSB 2017, 15th International Conference on Computational Methods in Systems Biology, Darmstadt, Germany (Sept. 2017).

    We present a fully closed-loop design for an artificial pancreas (AP) which regulates the delivery of insulin for the control of Type 1 diabetes. Our AP controller operates in a fully automated fashion, without requiring any manual interaction (e.g. in the form of meal announcements) with the patient. A major obstacle to achieving closed-loop insulin control is the uncertainty in those aspects of a patient's daily behavior that significantly affect blood glucose, especially in relation to meals and physical activity. To handle such uncertainties, we develop a data-driven robust model-predictive control framework, where we capture a wide range of individual meal and exercise patterns using uncertainty sets learned from historical data. These sets are then used in the controller and state estimator to achieve automated, precise, and personalized insulin therapy. We provide an extensive in silico evaluation of our robust AP design, demonstrating the potential of this approach, without explicit meal announcements, to support high carbohydrate disturbances and to regulate glucose levels in large clusters of virtual patients learned from population-wide survey data.


  4. J. Cyranka, Md. A. Islam, G. Byrne, P. L. Jones, S. A. Smolka, and R. Grosu, “Lagrangian Reachability.” Proceedings of CAV 2017, 29th International Conference on Computer-Aided Verification, Heidelberg, Germany (July 2017).

    We introduce LRT, a new Lagrangian-based ReachTube computation algorithm that conservatively approximates the set of reachable states of a nonlinear dynamical system. LRT makes use of the Cauchy-Green stretching factor (SF), which is derived from an over-approximation of the gradient of the solution flows. The SF measures the discrepancy between two states propagated by the system solution from two initial states lying in a well-defined region, thereby allowing LRT to compute a reachtube with a ball-overestimate in a metric where the computed enclosure is as tight as possible. To evaluate its performance, we implemented a prototype of LRT in C++/Matlab, and ran it on a set of well-established benchmarks. Our results show that LRT compares very favorably with respect to the CAPD and Flow* tools.
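
    The stretching factor can be illustrated numerically. The sketch below estimates the deformation gradient of the flow by finite differences on a stand-in system (the Van der Pol oscillator); note that LRT itself computes a validated over-approximation of this gradient, which the naive estimate below is not.

        # Finite-difference estimate of the Cauchy-Green stretching factor for a
        # stand-in system. Not conservative, unlike LRT's validated enclosure.
        import numpy as np
        from scipy.integrate import solve_ivp

        def vdp(t, x):
            return [x[1], (1 - x[0] ** 2) * x[1] - x[0]]

        def flow(x0, T):
            return solve_ivp(vdp, (0.0, T), x0, rtol=1e-9, atol=1e-9).y[:, -1]

        def stretching_factor(x0, T, eps=1e-6):
            """2-norm of the deformation gradient dPhi_T/dx0, i.e., the square
            root of the largest eigenvalue of the Cauchy-Green tensor F^T F."""
            x0 = np.array(x0, dtype=float)
            base, n = flow(x0, T), len(x0)
            F = np.empty((n, n))
            for j in range(n):
                pert = x0.copy()
                pert[j] += eps
                F[:, j] = (flow(pert, T) - base) / eps
            return np.linalg.norm(F, 2)

        print(stretching_factor([2.0, 0.0], 1.0))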


  5. J. Yang, Md. A. Islam, A. Murthy, S. A. Smolka, and S. D. Stoller, “Simplex Architecture for Hybrid Systems using Barrier Certificates.” Proceedings of SAFECOMP 2017, 36th International Conference on Computer Safety, Reliability and Security, Trento, Italy (Sept. 2017).

    This paper shows how to use Barrier Certificates (BaCs) to design Simplex Architectures for hybrid systems. The Simplex architecture entails switching control of a plant over to a provably safe Baseline Controller when a safety violation is imminent under the control of an unverified Advanced Controller. A key step of determining the switching condition is identifying a recoverable region, where the Baseline Controller guarantees recovery and keeps the plant invariably safe. BaCs, which are Lyapunov-like proofs of safety, are used to identify a recoverable region. At each time step, the switching logic samples the state of the plant and uses bounded-time reachability analysis to conservatively check whether any states outside the zero-level set of the BaCs, which therefore might be non-recoverable, are reachable in one decision period under control of the Advanced Controller. If so, failover is initiated.
    Our approach of using BaCs to identify recoverable states is computationally cheaper and potentially more accurate (less conservative) than existing approaches based on state-space exploration. We apply our technique to two hybrid systems: a water tank pump and a stop-sign-obeying controller for a car.
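
    The following toy sketch conveys the shape of such a switching logic. The barrier certificate, plant, and box-shaped one-period reachability over-approximation are all invented for illustration; checking box corners is sound here only because the example certificate is concave.

        # Toy switching logic: B is an invented barrier certificate (a state is
        # recoverable iff B(x) >= 0); reach_box stands in for real reachability.
        import itertools

        def B(x):
            return 1.0 - x[0] ** 2 - x[1] ** 2

        def reach_box(x, radius=0.1):
            """Box of states reachable within one decision period (assumed)."""
            return [(xi - radius, xi + radius) for xi in x]

        def must_failover(x):
            # B is concave, so its minimum over a box is attained at a corner;
            # fail over if any corner might lie outside the zero-level set.
            return any(B(c) < 0.0 for c in itertools.product(*reach_box(x)))

        print(must_failover((0.5, 0.5)))    # False: advanced controller may run
        print(must_failover((0.65, 0.65)))  # True: switch to baseline controller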


  6. A. Lukina, L. Esterle, C. Hirsch, E. Bartocci, J. Yang, A. Tiwari, S. A. Smolka, and R. Grosu, “ARES: Adaptive Receding-Horizon Synthesis of Optimal Plans.” Proceedings of TACAS 2017: 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Uppsala, Sweden (Apr. 2017).

    We introduce ARES, an efficient approximation algorithm for generating optimal plans (action sequences) that take an initial state of a Markov Decision Process (MDP) to a state whose cost is below a specified (convergence) threshold. ARES uses Particle Swarm Optimization, with adaptive sizing for both the receding horizon and the particle swarm. Inspired by Importance Splitting, the length of the horizon and the number of particles are chosen such that at least one particle reaches a next-level state, that is, a state where the cost decreases by a required delta from the previous-level state. The level relation on states and the plans constructed by ARES implicitly define a Lyapunov function and an optimal policy, respectively, both of which could be explicitly generated by applying ARES to all states of the MDP, up to some topological equivalence relation. We also assess the effectiveness of ARES by statistically evaluating its rate of success in generating optimal plans. The ARES algorithm resulted from our desire to clarify whether flying in V-formation is a flocking policy that optimizes energy conservation, clear view, and velocity alignment; that is, we wanted to see whether one could find optimal plans that bring a flock from an arbitrary initial state to a state exhibiting a single connected V-formation. For flocks with 7 birds, ARES is able to generate a plan that leads to a V-formation in 95% of the 8,000 random initial configurations within 63 seconds, on average. ARES can also be easily customized into a model-predictive controller (MPC) with an adaptive receding horizon and statistical guarantees of convergence. To the best of our knowledge, our adaptive-sizing approach is the first to provide convergence guarantees in receding-horizon techniques.
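
    A toy rendition of the adaptive-sizing idea follows, under strong simplifications: the MDP is an invented 1-D system, and plain random sampling stands in for Particle Swarm Optimization.

        # Invented 1-D stand-in for the MDP; ARES proper uses PSO, while this
        # sketch just samples random plans of growing horizon.
        import random

        def step(state, action):
            return state + action

        def cost(state):
            return abs(state)

        def level_step(state, delta, h=1, particles=8, h_max=6, p_max=256):
            """Search for a plan whose endpoint lowers the cost by delta,
            adaptively enlarging horizon and particle set on failure."""
            target = cost(state) - delta
            while h <= h_max and particles <= p_max:
                for _ in range(particles):
                    plan, s = [], state
                    for _ in range(h):
                        a = random.uniform(-0.5, 0.5)
                        plan.append(a)
                        s = step(s, a)
                    if cost(s) <= target:
                        return plan, s
                h, particles = h + 1, particles * 2   # adapt both sizes
            return None, state

        random.seed(0)
        s, full_plan = 3.0, []
        while cost(s) > 0.2:                 # convergence threshold
            plan, s = level_step(s, delta=0.4)
            if plan is None:
                break                        # adaptation budget exhausted
            full_plan += plan
        print(round(cost(s), 3), len(full_plan))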


  7. D. Phan, J. Yang, M. Clark, R. Grosu, J. D. Schierman, S. A. Smolka, and S. D. Stoller, “A Component-Based Simplex Architecture for High-Assurance Cyber-Physical Systems.” Proceedings of ACSD 2017: 17th International Conference on Application of Concurrency to System Design, Zaragoza, Spain (June 2017).

    We present Component-Based Simplex Architecture (CBSA), a new framework for assuring the runtime safety of component-based cyber-physical systems (CPSs). CBSA integrates Assume-Guarantee (A-G) reasoning with the core principles of the Simplex control architecture to allow component-based CPSs to run advanced, uncertified controllers while still providing runtime assurance that A-G contracts and global properties are satisfied. In CBSA, multiple Simplex instances, which can be composed in a nested, serial, or parallel manner, coordinate to assure system-wide properties. Combining A-G reasoning and the Simplex architecture is a challenging problem that yields significant benefits. By utilizing A-G contracts, we are able to compositionally determine the switching logic for CBSAs, thereby alleviating the state explosion encountered by other approaches. Another benefit is that we can use A-G proof rules to decompose the proof of system-wide safety assurance into sub-proofs corresponding to the component-based structure of the system architecture. We also introduce the notion of coordinated switching between Simplex instances, a key component of our compositional approach to reasoning about CBSA switching logic. We illustrate our framework with a component-based control system for a ground rover. We formally prove that the CBSA for this system guarantees energy safety (the rover never runs out of power) and collision freedom (the rover never collides with a stationary obstacle). We also consider a CBSA for the rover that guarantees mission completion: all target destinations are visited within a prescribed amount of time.


  8. Md. A. Islam, Q. Wang, E. M. Clarke, S. A. Smolka, R. Hasani, R. Grosu, and O. Balun, “Probabilistic Reachability Analysis of the Tap Withdrawal Circuit in Caenorhabditis elegans.” Proceedings of HLDVT 2016: 18th IEEE International High-Level Design Validation and Test Workshop, Santa Cruz, CA, IEEE Press (Oct. 2016).

    We present a probabilistic reachability analysis of a (nonlinear ODE) model of a neural circuit in Caenorhabditis elegans (C. elegans), the common roundworm. In particular, we consider Tap Withdrawal (TW), a reflexive behavior exhibited by a C. elegans worm in response to vibrating the surface on which it is moving. The neural circuit underlying this response is the subject of this investigation. Specifically, we perform bounded-time reachability analysis on the TW circuit model of Wicks et al. (1996) to estimate the probability of various TW responses. The Wicks et al. model has a number of parameters, and we demonstrate that the various TW responses and their probability of occurrence in a population of worms can be viewed as a problem of parameter uncertainty. Our approach to this problem rests on encoding each TW response as a hybrid automaton with parametric uncertainty. We then perform probabilistic reachability analysis on these automata using a technique that combines a d-decision procedure with statistical tests. The results we obtain are a significant extension of those of Wicks et al. (1996), who equip their model with fixed parameter values that reproduce a single TW response. In contrast, our technique allows us to more thoroughly explore the model's parameter space using statistical sampling theory, identifying in the process the distribution of TW responses. Wicks et al. conducted a number of ablation experiments on a population of worms in which one or more of the neurons in the TW circuit are surgically ablated (removed). We show that our technique can be used to correctly estimate TW response probabilities for four of these ablation groups. We also use our technique to predict TW response behavior for two ablation groups not previously considered by Wicks et al.


  9. Md. A. Islam, H. Lim, N. Paoletti, H. Abbas, Z. Jiang, J. Cyranka, W. R. Cleaveland, S. Gao, E. M. Clarke, R. Grosu, R. Mangharam, E. Cherry, F. H. Fenton, R. A. Gray, J. Glimm, S. Lin, Q. Wang, and S. A. Smolka, “The CyberCardia Project: Modeling, Verification and Validation of Implantable Cardiac Devices.” Proceedings of FMBBS 2016, Workshop on Formal Methods for Biological and Biomedical Systems, in conjunction with 2016 IEEE Conference on Bioinformatics and Biomedicine, Shenzhen, China (Dec. 2016).

    In this paper, we survey recent progress in the CyberCardia project, a CPS Frontier project funded by the National Science Foundation. The CyberCardia project aims to significantly advance the state of the art in system verification and in cardiac therapies through the use of formal methods and closed-loop control. The animating vision for the work is to enable a true in silico design methodology for medical devices: one that speeds the development of new devices, provides greater assurance that their behavior matches designer intentions, and helps them pass regulatory muster more quickly so that they reach the patients who need them sooner. The acceleration in medical-device innovation achievable as a result of the CyberCardia research will also have long-term and sustained societal benefits, as better diagnostic and therapeutic technologies enter the practice of medicine more quickly.


  10. C. Jegourel, A. Lukina, A. Legay, S. A. Smolka, R. Grosu, and E. Bartocci, “Feedback Control for Statistical Model Checking of Cyber-Physical Systems.” Proceedings of ISoLA 2016, Eighth International Symposium on Leveraging Applications, Corfu, Greece, Lecture Notes in Computer Science, Springer (Oct. 2016).

    We introduce feedback-control statistical system checking (FC-SSC), a new approach to statistical model checking that exploits principles of feedback control for the analysis of cyber-physical systems (CPS). FC-SSC uses stochastic system identification to learn a CPS model, importance sampling to estimate the CPS state, and importance splitting to control the CPS so that the probability that the CPS satisfies a given property can be efficiently inferred. We illustrate the utility of FC-SSC on two example applications, each of which is simple enough to be easily understood, yet complex enough to exhibit all of FC-SSC's features. To the best of our knowledge, FC-SSC is the first statistical system checker to efficiently estimate the probability of rare events in realistic CPS applications or in any complex probabilistic program whose model is either not available, or is infeasible to derive through static-analysis techniques.
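
    The importance-splitting ingredient can be seen on a self-contained rare-event example (our sketch; FC-SSC applies the idea to a learned CPS model rather than to the random walk below, and all numbers are invented):

        # Invented rare event: a random walk with drift -0.2 reaches level 8
        # before falling below -2. Splitting estimates this as a product of
        # conditional probabilities of crossing intermediate levels.
        import random

        FLOOR, LEVELS, N = -2.0, [2.0, 4.0, 6.0, 8.0], 400

        def run_until(x, level):
            while FLOOR < x < level:
                x += random.gauss(-0.2, 1.0)
            return x if x >= level else None

        def split_estimate():
            prob, starts = 1.0, [0.0] * N
            for level in LEVELS:
                hits = [y for y in (run_until(x, level) for x in starts) if y is not None]
                if not hits:
                    return 0.0
                prob *= len(hits) / len(starts)
                starts = [random.choice(hits) for _ in range(N)]  # split survivors
            return prob

        random.seed(1)
        print(split_estimate())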


  11. Md. A. Islam, G. Byrne, S. Kong, E. M. Clarke, R. Cleaveland, F. H. Fenton, R. Grosu, and S. A. Smolka, “Bifurcation Analysis of Cardiac Alternans using δ-Decidability.” Proceedings of CMSB 2016, 14th Conference on Computational Methods in Systems Biology, Cambridge, UK (Sept. 2016).

    We present a bifurcation analysis of electrical alternans in the two-current Mitchell-Schaeffer (MS) cardiac-cell model using the theory of δ-decidability over the reals. Electrical alternans is a phenomenon characterized by a variation in the successive Action Potential Durations (APDs) generated by a single cardiac cell or tissue. Alternans are known to initiate re-entrant waves and are an important physiological indicator of an impending life-threatening arrhythmia such as ventricular fibrillation. The bifurcation analysis we perform determines, for each control parameter τ of the MS model, the bifurcation point in the range of τ such that a small perturbation to this value results in a transition from alternans to non-alternans behavior. To the best of our knowledge, our analysis represents the first formal verification of non-trivial dynamics in a numerical cardiac-cell model.
    Our approach to this problem rests on encoding alternans-like behavior in the MS model as an 11-mode, multinomial hybrid automaton (HA). For each model parameter, we then apply a sophisticated, guided-search-based reachability analysis to this HA to estimate parameter ranges for both alternans and non-alternans behavior. The bifurcation point separates these two ranges, but with an uncertainty region due to the underlying δ-decision procedure. This uncertainty region, however, can be reduced by decreasing δ at the expense of increasing the model exploration time. Experimental results are provided that highlight the effectiveness of this method.
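
    Schematically, locating the bifurcation point is a bisection over the parameter range. In the sketch below, `alternans` is an invented stand-in for the reachability-based check, and the stopping width plays the role of the uncertainty region left by the decision procedure.

        # `alternans` is a stand-in oracle; the threshold 0.3718 is invented.
        def alternans(tau):
            return tau < 0.3718

        def bifurcation_point(lo, hi, width=1e-4):
            """Assumes alternans(lo) and not alternans(hi); returns an interval
            of width <= `width` containing the transition."""
            assert alternans(lo) and not alternans(hi)
            while hi - lo > width:
                mid = 0.5 * (lo + hi)
                if alternans(mid):
                    lo = mid
                else:
                    hi = mid
            return lo, hi

        print(bifurcation_point(0.1, 1.0))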


  12. J. Yang, R. Grosu, S. A. Smolka, and A. Tiwari, “Love Thy Neighbor: V-Formation as a Problem of Model Predictive Control (Extended Abstract).” Proceedings of CONCUR 2016, 27th International Conference on Concurrency Theory, Québec City, Canada (Aug. 2016).

    We present a new formulation of the V-formation problem for migrating birds in terms of model predictive control (MPC). In our approach, to drive a collection of birds towards a desired formation, an optimal velocity adjustment (acceleration) is performed at each time-step on each bird's current velocity using a model-based prediction window of \(T\) time-steps. We present both centralized and distributed versions of this approach. The optimization criteria we consider are based on fitness metrics of candidate accelerations that birds in a V-formation are known to benefit from, including velocity matching, clear view, and upwash benefit. We validate our MPC-based approach by showing that for a significant majority of simulation runs, the flock succeeds in forming the desired formation. Our results help to better understand the emergent behavior of formation flight, and provide a control strategy for flocks of autonomous aerial vehicles.


  13. D. Phan, J. Yang, D. Ratasich, R. Grosu, S. A. Smolka, and S. D. Stoller, “Collision Avoidance for Mobile Robots with Limited Sensing and Limited Information about the Environment.” Proceedings of RV 2015, 6th International Conference on Runtime Verification, Vienna, Austria (Sept. 2015).

    This paper addresses the problem of safely navigating a mobile robot with limited sensing capability and limited information about stationary obstacles. We consider two sensing limitations: blind spots between sensors and limited sensing range. We identify a set of constraints on the sensors’ readings whose satisfaction at time t guarantees collision-freedom during the time interval \([t, t + \varDelta t]\). Here, \(\varDelta t\) is a parameter whose value is bounded by a function of the maximum velocity of the robot and the range of the sensors. The constraints are obtained under assumptions about minimum internal angle and minimum edge length of polyhedral obstacles. We apply these constraints in the switching logic of the Simplex architecture to obtain a controller that ensures collision-freedom. Experiments we have conducted are consistent with these claims. To the best of our knowledge, our study is the first to provide runtime assurance that an autonomous mobile robot with limited sensing can navigate without collisions with only limited information about obstacles.
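
    A drastically simplified form of such a switching condition is sketched below; the paper's constraints additionally handle blind spots between sensors and polyhedral obstacle geometry, and all numeric bounds here are invented.

        # All numeric bounds are invented illustrations.
        V_MAX = 0.5     # m/s, assumed maximum robot speed
        RANGE = 4.0     # m, assumed sensor range
        MARGIN = 0.1    # m, assumed safety margin

        def safe_for(readings, dt):
            """True if every reading clears the worst-case travel v_max * dt,
            so the robot cannot reach an obstacle during [t, t + dt]."""
            assert dt <= RANGE / V_MAX   # dt bounded by range and max speed
            return all(r >= V_MAX * dt + MARGIN for r in readings)

        print(safe_for([2.5, 3.1, 1.9, 2.2], dt=1.0))  # True: keep advanced controller
        print(safe_for([0.3, 3.1, 1.9, 2.2], dt=1.0))  # False: switch to safe mode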


  14. K. Selyunin, D. Ratasich, E. Bartocci, Md. A. Islam, S. A. Smolka, and R. Grosu, “Neural Programming: Towards Adaptive Control in Cyber-Physical Systems.” Proceedings of CDC 2015, 54th IEEE Conference on Decision and Control, Osaka, Japan (Dec. 2015).

    We introduce Neural Programming (NP), a novel paradigm for writing adaptive controllers for Cyber-Physical Systems (CPSs). In NP, if and while statements, whose discontinuity is responsible for fragility in CPS design and implementation, are replaced with their smooth (probabilistic) neural nif and nwhile counterparts. This allows one to write robust and adaptive CPS controllers as dynamic neural networks (DNN). Moreover, with NP, one can relate the thresholds occurring in soft decisions with a Gaussian Bayesian network (GBN). We provide a technique for learning these GBNs using available domain knowledge. We demonstrate the utility of NP on three case studies: an adaptive controller for the parallel parking of a Pioneer rover; the neural circuit for tap withdrawal in C. elegans; and a neural-circuit encoding of parallel parking which corresponds to a proportional controller. To the best of our knowledge, NP is the first programming paradigm linking neural networks (artificial or biological) to programs in a way that explicitly highlights a program's neural-network structure.
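
    One minimal reading of the nif construct (our sketch, not the authors' semantics): the hard branch test is replaced by a sigmoid whose value is the probability of taking the then-branch.

        # A guess at the flavor of `nif`: the hard test `value > 0` becomes a
        # sigmoid giving the probability of taking the then-branch.
        import math, random

        def nif(value, then_fn, else_fn, sharpness=10.0):
            p = 1.0 / (1.0 + math.exp(-sharpness * value))
            return then_fn() if random.random() < p else else_fn()

        # brake "softly" as the distance to an obstacle nears a threshold
        random.seed(3)
        distance, threshold = 0.9, 1.0
        print(nif(threshold - distance, lambda: "brake", lambda: "cruise"))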


  15. Md. A. Islam, R. DeFrancisco, C. Fan, R. Grosu, S. Mitra, and S. A. Smolka, Model Checking Tap Withdrawal in C. elegans. Submitted to CAV 2015.

    We present what we believe to be the first formal verification of a biologically realistic (nonlinear ODE) model of a neural circuit in a multicellular organism: Tap Withdrawal (TW) in C. elegans, the common roundworm. TW is a reflexive behavior exhibited by C. elegans in response to vibrating the surface on which it is moving; the neural circuit underlying this response is the subject of this investigation. Specifically, we perform reachability analysis on the TW circuit model of Wicks et al. (1996), which enables us to estimate key circuit parameters. Underlying our approach is the use of Fan and Mitra's recently developed technique for automatically computing local discrepancy (convergence and divergence rates) of general nonlinear systems. We show that the results we obtain are in agreement with the experimental results of Wicks et al. (1995). As opposed to the fixed parameters found in most biological models, which can only produce the predominant behavior, our techniques characterize ranges of parameters that produce (and do not produce) all three observed behaviors: reversal of movement, acceleration, and lack of response.


  16. A. Murthy, Md. A. Islam, S. A. Smolka, and R. Grosu, Computing Bisimulation Functions using SOS Optimization and δ-Decidability over the Reals. Proceedings of HSCC 2015, 18th International Conference on Hybrid Systems: Computation and Control, Seattle, Washington, USA, ACM Press (April 2015).

    We show that the 13-state sodium-channel component of the Iyer et al. cardiac-cell model can be replaced with a previously identified d-bisimilar 2-state Hodgkin-Huxley-type abstraction by appealing to a small-gain theorem. To prove this feedback compositionality result, we construct quadratic-polynomial, exponentially decaying bisimulation functions between the two sodium-channel models, and also for the rest of a simplified version of the Iyer et al. model, using the SOSTOOLS toolbox. Our experimental results validate the analytical ones. To the best of our knowledge, this is the first application of d-bisimilar, feedback-assisting, compositional reasoning in biological systems.
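
    For readers unfamiliar with bisimulation functions, the two defining conditions typically take the following form (a paraphrase of standard definitions, not a quotation from the paper): the function \(V\) dominates the squared output error, and it decays along joint trajectories up to a gain on the input mismatch:

    \[ V(x_1, x_2) \;\ge\; \|y_1 - y_2\|^2, \qquad \dot{V}(x_1, x_2) \;\le\; -\lambda\, V(x_1, x_2) + \gamma\, \|u_1 - u_2\|^2, \quad \lambda, \gamma > 0. \]

    The quadratic-polynomial, exponentially decaying functions mentioned in the abstract are instances of this shape, with Sum-of-Squares programming discharging the nonnegativity conditions.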


  17. Md. A. Islam, A. Murthy, A. Girard, S. A. Smolka, and R. Grosu, Compositionality Results for Cardiac Cell Dynamics. Proceedings of HSCC 2014, 17th International Conference on Hybrid Systems: Computation and Control, Berlin, Germany, ACM Press (April 2014).

    By appealing to the small-gain theorem of one of the authors (Girard), we show that the 13-variable sodium-channel component of the 67-variable IMW cardiac-cell model (Iyer-Mazhari-Winslow) can be replaced by an approximately bi-similar, 2-variable HH-type (Hodgkin-Huxley) abstraction. We show that this substitution of (approximately) equals for equals is safe in the sense that the approximation error between sodium-channel models is not amplified by the feedback-loop context in which it is placed. To prove this feedback-compositionality result, we exhibit quadratic-polynomial, exponentially decaying bisimulation functions between the IMW and HH-type sodium channels, and also for the IMW-based context in which these sodium-channel models are placed. These functions allow us to quantify the overall error introduced by the sodium-channel abstraction and subsequent substitution in the IMW model. To automate computation of the bisimulation functions, we employ the SOSTOOLS optimization toolbox. Our experimental results validate our analytical findings. To the best of our knowledge, this is the first application of d-bisimilar, feedback-assisting, compositional reasoning in biological systems.


  18. E. Bartocci, R. DeFrancisco, and S. A. Smolka, Towards a GPGPU-Parallel SPIN Model Checker. Proceedings of SPIN 2014, Third International SPIN Symposium on Model Checking of Software, San Jose, CA, ACM Press (July 2014).

    As General-Purpose Graphics Processing Units (GPGPUs) become more powerful, they are being used increasingly often in high-performance computing applications. State space exploration, as employed in model-checking and other verification techniques, is a large, complex problem that has successfully been ported to a variety of parallel architectures. Use of the GPU for this purpose, however, has only recently begun to be studied. We show how the 2012 multicore CPU-parallel state-space exploration algorithm of the SPIN model checker can be re-engineered to take advantage of the unique parallel-processing capabilities of the GPGPU architecture, and demonstrate how to overcome the non-trivial design obstacles presented by this task. Our preliminary results demonstrate significant performance improvements over the traditional sequential model checker for state spaces of appreciable size (>10 million unique states).


  19. S. Bhattacharyya, S. P. Miller, J. Yang, S. A. Smolka, B. Meng, C. Sticksel, and C. Tinelli, Verification of Quasi-Synchronous Systems with Uppaal. Proceedings of DASC 2014, IEEE/AIAA 33rd Digital Avionics Systems Conference, Colorado Springs, CO (Oct. 2014).

    Modern defense systems are complex distributed software systems implemented over heterogeneous and constantly evolving hardware and software platforms. Distributed agreement protocols are often developed exploiting the fact that their systems are quasi-synchronous: even though the clocks of the different nodes are not synchronized, they all run at the same rate, or multiples of the same rate, modulo their drift and jitter. This paper describes an effort to provide systems designers and engineers with an intuitive modeling environment that allows them to specify the high-level architecture and synchronization logic of quasi-synchronous systems using widely available systems-engineering notations and tools. To this end, a translator was developed that translates system architectural models specified in a subset of SysML into the Architecture Analysis and Design Language (AADL). Translators were also developed that translate the AADL models into the input language of the Uppaal and Kind model checkers. The Uppaal model checker supports the modeling, verification, and validation of real-time systems modeled as a network of timed automata. This paper focuses on the challenges encountered in translating from AADL to Uppaal, and illustrates the overall approach with a common avionics example: the Pilot Flying System.


  20. D. Peled, R. Grosu, C. R. Ramakrishnan, S. A. Smolka, S. D. Stoller, and J. Yang, Using Statistical Model Checking for Measuring Systems. Proceedings of ISoLA 2014, Sixth International Symposium on Leveraging Applications, Corfu, Greece, Lecture Notes in Computer Science, Springer (Oct. 2014).

    State spaces represent the way a system evolves through its different possible executions. Automatic verification techniques are used to check whether the system satisfies certain properties, expressed using automata or logic-based formalisms. This provides a Boolean indication of the system's fitness. It is sometimes desirable to obtain other indications, measuring, e.g., duration, energy, or probability. Certain measurements are inherently harder than others. This can be explained by appealing to the difference in complexity of checking CTL and LTL properties. While the former can be done in time linear in the size of the property, the latter is PSPACE in the size of the property; hence practical algorithms take exponential time. While the CTL-type of properties measure specifications that are based on adjacency of states (up to a fixpoint calculation), LTL properties have the flavor of expecting multiple complicated requirements from each execution sequence. In order to quickly measure LTL-style properties from a structure, we use a form of statistical model checking; we exploit the fact that LTL-style properties on a path behave like CTL-style properties on a structure. We then use CTL-based measuring on paths, and generalize the measurement results to the full structure using optimal Monte Carlo estimation techniques. To experimentally validate our framework, we present measurements for a flocking model of bird-like agents.


  21. T. Deshpande, P. Katsaros, S. A. Smolka, and S. D. Stoller, Stochastic Game-Based Analysis of the DNS Bandwidth Amplification Attack Using Probabilistic Model Checking. Proceedings of EDCC 2014, 10th European Dependable Computing Conference, Newcastle upon Tyne, UK, IEEE Press (May 2014).

    The Domain Name System (DNS) is an Internet-wide, hierarchical naming system used to translate domain names into numeric IP addresses. Any disruption of DNS service can have serious consequences. We present a formal game-theoretic analysis of a notable threat to DNS, namely the bandwidth amplification attack (BAA), and the countermeasures designed to defend against it. We model the DNS BAA as a two-player, turn-based, zero-sum stochastic game between an attacker and a defender. The attacker attempts to flood a victim DNS server with malicious traffic by choosing an appropriate number of zombie machines with which to attack. In response, the defender chooses among five BAA countermeasures, each of which seeks to increase the amount of legitimate traffic the victim server processes. To simplify the model and optimize the analysis, our model does not explicitly track the handling of each packet. Instead, our model is based on calculations of the rates at which the relevant kinds of events occur in each state. We use our game-based model of DNS BAA to generate optimal attack strategies, which vary the number of zombies, and optimal defense strategies, which aim to enhance the utility of the BAA countermeasures by combining them in advantageous ways. The goal of these strategies is to optimize the attacker's and defender's payoffs, which are defined using probabilistic reward-based properties, and are measured in terms of the attacker's ability to minimize the volume of legitimate traffic that is processed, and the defender's ability to maximize the volume of legitimate traffic that is processed.


  22. Md. A. Islam, T. Deshpande, A. Murthy, E. Bartocci, S. D. Stoller, S. A. Smolka, and R. Grosu, Tracking Action Potentials of Nonlinear Excitable Cells using Model Predictive Control. Proceedings of BIOTECHNO 2014, Sixth International Conference on Bioinformatics, Biocomputational Systems and Biotechnologies, Chamonix, France, IARIA XPS Press (April 2014).

    Explicit and online Model Predictive Controllers (MPCs) are presented for an excitable cell simulator. The dynamics of the excitable cell are captured using the nonlinear FitzHugh-Nagumo model. Despite the plant's nonlinearity, the model predictive control problem is formulated as an instance of quadratic programming, using a piecewise affine abstraction of the plant. The speed-versus-accuracy tradeoff is analyzed for the explicit and online versions by testing them on the same reference trajectory. The MPCs provide a framework for in silico testing of biomedical control strategies, specifically in the context of excitable cells like neurons and cardiac myocytes.


  23. K. Kalajdzic, E. Bartocci, S. A. Smolka, S. D. Stoller and R. Grosu, Runtime Verification with Particle Filtering. Proceedings of RV 2013, Fourth International Conference on Runtime Verification, INRIA Rennes, France, Springer-Verlag, Lecture Notes in Computer Science (Sept. 2013).

    We introduce Runtime Verification with Particle Filtering (RVPF), a powerful and versatile method for controlling the tradeoff between uncertainty and overhead in runtime verification. Overhead and accuracy are controlled by adjusting the frequency and duration of observation gaps, during which program events are not monitored, and by adjusting the number of particles used in the RVPF algorithm. We succinctly represent the program model, the program monitor, their interaction, and their observations as a dynamic Bayesian network (DBN). Our formulation of RVPF in terms of DBNs is essential for a proper formalization of peek events: low-cost observations of parts of the program state, which are performed probabilistically at the end of observation gaps. Peek events provide information that our algorithm uses to reduce the uncertainty in the monitor state after gaps. We estimate the internal state of the DBN using particle filtering (PF) with sequential importance resampling (SIR). PF uses a collection of conceptual particles (random samples) to estimate the probability distribution for the system's current state: the probability of a state is given by the sum of the importance weights of the particles in that state. After an observed event, each particle chooses a state transition to execute by sampling the DBN's joint transition probability distribution; particles are then redistributed among the states that best predicted the current observation. SIR exploits the DBN structure and the current observation to reduce the variance of the PF and increase its performance. We experimentally compare the overhead and accuracy of our RVPF algorithm with two previous approaches to runtime verification with state estimation: an exact algorithm based on the forward algorithm for HMMs, and an approximate version of that algorithm, which uses precomputation to reduce runtime overhead. Our results confirm RVPF's versatility, showing how it can be used to control the tradeoff between execution time and memory usage while, at the same time, being the most accurate of the three algorithms.
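
    The estimation core can be demonstrated with a compact SIR filter on a toy two-state hidden model (our sketch; RVPF couples this machinery with the program/monitor DBN, observation gaps, and peek events, and the numbers below are invented):

        # Toy two-state hidden model; TRANS and EMIT are invented numbers.
        import random

        TRANS = {0: ([0, 1], [0.9, 0.1]), 1: ([0, 1], [0.2, 0.8])}
        EMIT = {0: [0.8, 0.2], 1: [0.3, 0.7]}   # EMIT[state][obs]

        def sir_step(particles, obs):
            # propagate each particle through the transition distribution
            moved = [random.choices(*TRANS[p])[0] for p in particles]
            # weight by observation likelihood, then resample (SIR)
            weights = [EMIT[p][obs] for p in moved]
            return random.choices(moved, weights, k=len(moved))

        random.seed(7)
        particles = [0] * 500
        for obs in [0, 0, 1, 1, 1]:
            particles = sir_step(particles, obs)
        print(sum(particles) / len(particles))   # estimated P(state = 1)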


  24. A. Islam, A. Murthy, E. Bartocci, A. Girard, S. A. Smolka, and R. Grosu, Compositionality Results for Cardiac Cell Dynamics. Proceedings of CMSB'13, 11th Conference on Computational Methods in Systems Biology, Klosterneuburg, Austria, Springer-Verlag, Lecture Notes in Computer Science, Sept. 2013 (Short paper).

    By appealing to the small-gain theorem of one of the authors (Girard), we show that the 13-variable sodium-channel component of the 67-variable IMW cardiac-cell model (Iyer-Mazhari-Winslow) can be replaced by an approximately bi-similar, 2-variable HH-type (Hodgkin-Huxley) abstraction. We show that this substitution of (approximately) equals for equals is safe in the sense that the approximation error between sodium-channel models is not amplified by the feedback-loop context in which it is placed. To prove this feedback-compositionality result, we exhibit quadratic-polynomial, exponentially decaying bisimulation functions between the IMW and HH-type sodium channels, and also for the IMW-based context in which these sodium-channel models are placed. These functions allow us to quantify the overall error introduced by the sodium-channel abstraction and subsequent substitution in the IMW model. To automate computation of the bisimulation functions, we employ the SOSTOOLS optimization toolbox. Our experimental results validate our analytical findings. To the best of our knowledge, this is the first application of d-bisimilar, feedback-assisting, compositional reasoning in biological systems.


  25. E. Bartocci, R. Grosu, A. Karmarkar, S. A. Smolka, S. D. Stoller, E. Zadok and J. Seyster, Adaptive Runtime Verification. Proceedings of RV 2012, Third International Conference on Runtime Verification, Istanbul, Turkey, Springer-Verlag, Lecture Notes in Computer Science (Sept. 2012).

    We present Adaptive Runtime Verification (ARV), a new approach to runtime verification in which overhead control, runtime verification with state estimation, and predictive analysis are synergistically combined. Overhead control maintains the overhead of runtime verification at a specified target level, by enabling and disabling monitoring of events for each monitor instance as needed. In ARV, predictive analysis based on a probabilistic model of the monitored system is used to estimate how likely each monitor instance is to violate a given temporal property in the near future, and these criticality levels are fed to the overhead controllers, which allocate a larger fraction of the target overhead to monitor instances with higher criticality, thereby increasing the probability of violation detection. Since overhead control causes the monitor to miss events, we use Runtime Verification with State Estimation (RVSE) to estimate the probability that a property is satisfied by an incompletely monitored run. A key aspect of the ARV framework is a new algorithm for RVSE that performs the calculations in advance, dramatically reducing the runtime overhead of RVSE, at the cost of introducing some approximation error. We demonstrate the utility of ARV on a significant case study involving runtime monitoring of concurrency errors in the Linux kernel.


  26. A. Gorlin, C. R. Ramakrishnan, and S. A. Smolka, Model Checking with Probabilistic Tabled Logic Programming. Proceedings of ICLP 2012, 28th International Conference on Logic Programming, Budapest, Hungary (Sep. 2012).

    We present a formulation of the problem of probabilistic model checking as one of query evaluation over probabilistic logic programs. To the best of our knowledge, our formulation is the first of its kind, and it covers a rich class of probabilistic models and probabilistic temporal logics. The inference algorithms of existing probabilistic logic-programming systems are well defined only for queries with a finite number of explanations. This restriction prohibits the encoding of probabilistic model checkers, where explanations correspond to executions of the system being model checked. To overcome this restriction, we propose a more general inference algorithm that uses finite generative structures (similar to automata) to represent families of explanations. The inference algorithm computes the probability of a possibly infinite set of explanations directly from the finite generative structure. We have implemented our inference algorithm in XSB Prolog, and use this implementation to encode probabilistic model checkers for a variety of temporal logics, including PCTL and GPL (which subsumes PCTL*). Our experimental results show that, despite the highly declarative nature of their encodings, the model checkers constructed in this manner are competitive with their native implementations.


  27. A. Murthy, A. Islam, E. Bartocci, E. Cherry, F. H. Fenton, J. Glimm, S. A. Smolka, and R. Grosu, Approximate Bisimulations for Sodium Channel Dynamics. Proceedings of CMSB 2012, 10th Conference on Computational Methods in Systems Biology, London, UK, Springer, Lecture Notes in Computer Science (Oct. 2012).

    This paper shows that, in the context of the Iyer et al. 67-variable cardiac myocyte model (IMW), it is possible to replace the detailed 13-state continuous-time MDP model of the sodium-channel dynamics with a much simpler Hodgkin-Huxley (HH)-like two-state sodium-channel model, while only incurring a bounded approximation error. The technical basis for this result is the construction of an approximate bisimulation between the HH and IMW channel models, both of which are input-controlled (voltage in this case) continuous-time Markov chains. The construction of the appropriate approximate bisimulation, as well as the overall result regarding the behavior of this modified IMW model, involves: (1) the identification of the voltage-dependent parameters of the m and h gates in the HH-type channel, based on the observations of the IMW channel; (2) proving that the distance between observations of the two channels never exceeds a given error; (3) exploring the sensitivity of the overall IMW model to the HH-type sodium-channel approximation. Our extensive simulation results experimentally validate our findings, for varying IMW-type input stimuli.


  28. A. Donzé, O. Maler, E. Bartocci, D. Nickovic, R. Grosu, and S. A. Smolka, On Temporal Logic and Signal Processing. Proceedings of ATVA 2012, 10th International Symposium on Automated Technology for Verification and Analysis, Trivandrum, Kerala, India, Springer, Lecture Notes in Computer Science (Oct. 2012).

    We present Time-Frequency Logic (TFL), a new specification formalism for real-valued signals that combines temporal logic properties in the time domain with frequency-domain properties. We provide a property checking framework for this formalism and illustrate its expressive power in defining and recognizing properties of musical pieces. Like hybrid automata and their analysis techniques, the TFL formalism is a contribution to a unified systems theory for hybrid systems.


  29. G. Chatzieleftheriou, B. Bonakdarpour, S. A. Smolka and P. Katsaros, Abstract Model Repair. Proceedings of NFM 2012, Fourth NASA Formal Methods Symposium, pp. 341-355, Springer-Verlag, Lecture Notes in Computer Science, Vol. 7226, Norfolk, VA (April 2012).


  30. T. Deshpande, S. Basagiannis, P. Katsaros, and S. A. Smolka, “Formal Analysis of the DNS Bandwidth Amplification Attack and its Countermeasures Using Probabilistic Model Checking.” Proceedings of HASE 2011, 13th IEEE International High Assurance Systems Engineering Symposium, Boca Raton, FL, IEEE Computer Society (Nov. 2011).

    The Domain Name System (DNS) is an internet-wide, hierarchical naming system used to translate domain names into physical IP addresses. Any disruption of the service DNS provides can have serious consequences. We present a formal analysis of two notable threats to DNS, namely cache poisoning and bandwidth amplification, and the countermeasures designed to prevent their occurrence. Our analysis of these attacks and their countermeasures is given in the form of a cost-benefit analysis, and is based on probabilistic model checking of Continuous-Time Markov Chains. We use CTMCs to model the race between legitimate and malicious traffic in a DNS server under attack. Countermeasure costs and benefits are quantified in terms of probabilistic reachability and reward properties, which are evaluated over all possible execution paths. The results of our analysis support substantive conclusions about the relative effectiveness of the different countermeasures under varying operating conditions. We also validate the criticism that the DNS security extensions devised to eliminate cache poisoning render DNS more vulnerable to bandwidth amplification attacks.


  31. S. D. Stoller, E. Bartocci, J. Seyster, R. Grosu, K. Havelund, S. A. Smolka and E. Zadok, Runtime Verification with State Estimation. Proceedings of RV 2011, Second International Conference on Runtime Verification, San Francisco, CA, Springer-Verlag, Lecture Notes in Computer Science (Sept. 2011).

    We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, thus making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to “fill in” sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.
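
    The gap-aware forward computation can be sketched as follows. The model numbers are invented, and the sketch estimates states only, whereas the paper's algorithm simultaneously tracks satisfaction of the temporal property.

        # A, B, pi are invented; `None` marks a sampling-induced gap, during
        # which we propagate through the transition matrix without conditioning.
        import numpy as np

        A = np.array([[0.9, 0.1],    # A[i, j] = P(next state j | state i)
                      [0.2, 0.8]])
        B = np.array([[0.8, 0.2],    # B[i, o] = P(observation o | state i)
                      [0.3, 0.7]])
        pi = np.array([1.0, 0.0])

        def forward(observations):
            alpha = pi.copy()
            for o in observations:
                alpha = alpha @ A            # one program step
                if o is not None:            # observed event: condition on it
                    alpha = alpha * B[:, o]
                    alpha /= alpha.sum()
            return alpha

        print(forward([0, None, None, 1]))   # state estimate after the gaps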


  32. E. Bartocci, E. M. Cherry, J. Glimm, R. Grosu, S. A. Smolka, and F. H. Fenton, Toward Real-time Simulation of Cardiac Dynamics. Proceedings of CMSB 2011, 9th International Conference on Computational Methods in Systems Biology, Paris, France, ACM Press (Sept. 2011).

    We show that through careful and model-specific optimizations of their GPU implementations, simulations of realistic, detailed cardiac-cell models now can be performed in 2D and 3D in times that are close to real time using a desktop computer. Previously, large-scale simulations of detailed mathematical models of cardiac cells were possible only using supercomputers. In our study, we consider five different models of cardiac electrophysiology that span a broad range of computational complexity: the two-variable Karma model, the four-variable Bueno-Orovio-Cherry-Fenton model, the eight-variable Beeler-Reuter model, the 19-variable Ten Tusscher-Panfilov model, and the 67-variable Iyer-Mazhari-Winslow model. For each of these models, we treat both their single- and double-precision versions and demonstrate linear or even sub-linear growth in simulation times with an increase in the size of the grid used to model cardiac tissue. We also show that our GPU implementations of these models can increase simulation speeds to near real-time for simulations of complex spatial patterns indicative of cardiac arrhythmic disorders, including spiral waves and spiral wave breakup. The achievement of real-time applications without the need for supercomputers may, in the near term, facilitate the adoption of modeling-based clinical diagnostics and treatment planning, including patient-specific electrophysiological studies.


  33. A. Murthy, E. Bartocci, F. Fenton, J. Glimm, R. Gray, S. A. Smolka, and R. Grosu, Curvature Analysis of Cardiac Excitation Wavefronts. Proceedings of CMSB 2011, 9th International Conference on Computational Methods in Systems Biology, Paris, France, ACM Press (Sept. 2011).

    We present the Spiral Classification Algorithm (SCA), a fast and accurate algorithm for classifying electrical spiral waves and their associated breakup in cardiac tissues. The classification performed by SCA is an essential component of the detection and analysis of various cardiac arrhythmic disorders, including ventricular tachycardia and fibrillation. Given a digitized frame of a propagating wave, SCA constructs a highly accurate representation of the front and the back of the wave, piecewise interpolates this representation with cubic splines, and subjects the result to an accurate curvature analysis. This analysis is more comprehensive than methods based on spiral-tip tracking, as it considers the entire wave front and back. To increase the smoothness of the resulting symbolic representation, SCA uses weighted overlapping of adjacent segments at their join points. SCA has been applied to a number of representative types of spiral waves, and, for each type, a distinct curvature evolution in time (signature) has been identified. Distinct signatures have also been identified for spiral breakup. These results represent a significant first step in automatically determining parameter ranges for which a computational cardiac-cell network accurately reproduces a particular kind of cardiac arrhythmia, such as ventricular fibrillation.
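
    The spline-and-curvature step can be sketched as follows (our illustration on a synthetic wavefront; SCA's front/back extraction and overlap-weighted smoothing are omitted):

        # Synthetic "wavefront" (a noisy quarter circle); for a unit circle the
        # mean absolute curvature should come out near 1.
        import numpy as np
        from scipy.interpolate import splprep, splev

        rng = np.random.default_rng(0)
        t = np.linspace(0, np.pi / 2, 40)
        x = np.cos(t) + rng.normal(0, 0.003, t.size)
        y = np.sin(t) + rng.normal(0, 0.003, t.size)

        tck, u = splprep([x, y], s=1e-3, k=3)        # cubic smoothing spline
        dx, dy = splev(u, tck, der=1)
        ddx, ddy = splev(u, tck, der=2)
        kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

        print(round(float(np.mean(np.abs(kappa))), 2))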


  34. R. Grosu, G. Batt, F. H. Fenton, J. Glimm, C. Le Guernic, S. A. Smolka, and E. Bartocci, From Cardiac Cells to Genetic Regulatory Networks. Proceedings of CAV 2011, 23rd International Conference on Computer Aided Verification, Snowbird, Utah, Springer-Verlag, Lecture Notes in Computer Science (July 2011).

    A fundamental question in the treatment of cardiac disorders, such as tachycardia and fibrillation, is under what circumstances does such a disorder arise? To answer this question, we develop a multiaffine hybrid automaton (MHA) cardiac-cell model, and restate the original question as one of identifying the parameter ranges under which the MHA model accurately reproduces the disorder. The MHA model is obtained from the minimal cardiac model of one of the authors (Fenton) by first bringing it into the form of a canonical, genetic regulatory network, and then linearizing its sigmoidal switches, in an optimal way. By leveraging the Rovergene tool for genetic regulatory networks, we are then able to successfully identify the parameter ranges of interest.


  35. Z. Li, R. Grosu, K. Muppalla, S. A. Smolka, S. D. Stoller, and E. Zadok, Model Discovery for Energy-Aware Computing Systems. Proceedings of ERSS 2011, First International Workshop on Energy Consumption and Reliability of Storage Systems, Orlando, FL, IEEE Press (July 2011).

    We present a model-discovery methodology for energy-aware computing systems that achieves high prediction accuracy. Model discovery, or system identification, is a critical first step in designing advanced controllers that can dynamically manage the energy-performance trade-off in an optimal manner. Our methodology favors Multiple-Input-Multiple-Output (MIMO) models over a collection of Single-Input-Single-Output (SISO) models when the inputs and outputs of the system are coupled in a nontrivial way. In such cases, MIMO is generally more accurate than SISO over a wide range of inputs in predicting system behavior. Our experimental evaluation, carried out on a representative server workload, validates our approach. We obtained an average prediction accuracy of 77% and 76% for MIMO power and performance, respectively. We also show that MIMO models are consistently more accurate than SISO ones.


  36. Z. Li, R. Grosu, P. Sehgal, S. A. Smolka, S. D. Stoller, and E. Zadok, On the Energy Consumption and Performance of Systems Software. Proceedings of SYSTOR 2011, Fourth Annual International Systems and Storage Conference, Haifa, Israel, ACM Press (May 2011).

    Models of energy consumption and performance are necessary to understand and identify system behavior, prior to designing advanced controls that can balance out performance and energy use. This paper considers the energy consumption and performance of servers running a relatively simple file-compression workload. We found that standard techniques for system identification do not produce acceptable models of energy consumption and performance, due to the intricate interplay between the discrete nature of software and the continuous nature of energy and performance. This motivated us to perform a detailed empirical study of the energy consumption and performance of this system with varying compression algorithms and compression levels, file types, persistent storage media, CPU DVFS levels, and disk I/O schedulers. Our results identify and illustrate factors that complicate the system's energy consumption and performance, including nonlinearity, instability, and multi-dimensionality. Our results provide a basis for future work on modeling energy consumption and performance to support principled design of controllable energy-aware systems.


  37. E. Bartocci, R. Grosu, P. Katsaros, C. R. Ramakrishnan, and S. A. Smolka, Model Repair for Probabilistic Systems. Proceedings of TACAS 2011, 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Saarbrücken, Germany, Springer-Verlag, Lecture Notes in Computer Science, Vol. 6605, pp. 326-340 (March-April 2011).


  38. N. Alexiou, T. Deshpande, S. Basagiannis, S. A. Smolka and P. Katsaros, Formal Analysis of the Kaminsky DNS Cache-Poisoning Attack Using Probabilistic Model Checking. Proceedings of HASE 2010, 12th IEEE International High Assurance Systems Engineering Symposium, San Jose, CA, IEEE Computer Society (Nov. 2010).

    We use the probabilistic model checker PRISM to formally model and analyze the highly publicized Kaminsky DNS cache-poisoning attack. DNS (Domain Name System) is an internet-wide, hierarchical naming system used to translate domain names such as google.com into physical IP addresses such as 208.77.188.166. The Kaminsky DNS attack is a recently discovered vulnerability in DNS that allows an intruder to hijack a domain, i.e. corrupt a DNS server so that it replies with the IP address of a malicious web server when asked to resolve URLs within a non-malicious domain such as google.com. A proposed fix for the attack is based on the idea of randomizing the source port a DNS server uses when issuing a query to another server in the DNS hierarchy. We use PRISM to introduce a Continuous Time Markov Chain representation of the Kaminsky attack and the proposed fix, and to perform the required probabilistic model checking. Our results, gleaned from more than 240 PRISM runs, formally validate the existence of the Kaminsky cache-poisoning attack even in the presence of an intruder with virtually no knowledge of the victim DNS server's actions. They also serve to quantify the effectiveness of the proposed fix: using nonlinear least-squares curve fitting, we show that the probability of a successful attack obeys a 1/N distribution, where N is the upper limit on the range of source-port ids. We also demonstrate an increasing attack probability with an increasing number of attempted attacks or increasing rate at which the intruder guesses the source-port id.


  39. J. Seyster, K. Dixit, X. Huang, R. Grosu, K. Havelund, S. A. Smolka, S. D. Stoller and E. Zadok, Aspect-Oriented Instrumentation with GCC. Proceedings of RV 2010, First International Conference on Runtime Verification, Sliema, Malta, Springer-Verlag, Lecture Notes in Computer Science, Vol. 6418, pp. 405-420 (Nov. 2010).

    We present the INTERASPECT instrumentation framework for GCC, a widely used compiler infrastructure. The addition of plug-in support in the latest release of GCC makes it an attractive platform for runtime instrumentation, as GCC plug-ins can directly add instrumentation by transforming the compiler's intermediate representation. Such transformations, however, require expert knowledge of GCC internals. INTERASPECT addresses this situation by allowing instrumentation plug-ins to be developed using the familiar vocabulary of Aspect-Oriented Programming: pointcuts, join points, and advice functions. Moreover, INTERASPECT uses specific information about each join point in a pointcut, possibly including results of static analysis, to support powerful customized instrumentation. We describe the INTERASPECT API and present several examples that illustrate its practical utility as a runtime-verification platform. We also introduce a tracecut system that uses INTERASPECT to construct program monitors that are formally specified as regular expressions.


  40. X. Huang, A. Singh and S. A. Smolka, Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol. Proceedings of NFM 2010, Second NASA Formal Methods Symposium, Washington D.C., NASA Conference Publication (April 2010).

    We use the UPPAAL model checker for Timed Automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN). The TPSN protocol seeks to provide network-wide synchronization of the distributed clocks in a sensor network. Clock-synchronization algorithms for sensor networks such as TPSN must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of Timed Automata. To overcome this formal-modeling obstacle, we augment the UPPAAL specification language with the integer clock derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN protocol. With this integer-clock-based model of TPSN in hand, we use UPPAAL to verify that the protocol achieves network-wide time synchronization and is devoid of deadlock. We also use the UPPAAL Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution.


  41. A. Singh, C. R. Ramakrishnan and S. A. Smolka, Query-Based Model Checking of Ad Hoc Network Protocols. Proceedings of CONCUR 2009, the 20th International Conference on Concurrency Theory, Pisa, Italy, Springer-Verlag, Lecture Notes in Computer Science, Vol. 5710, pp. 603-619 (Sep. 2009).


  42. Z. Yang, B. Al-Rawi, K. Sakallah, X. Huang, S. A. Smolka and R. Grosu, Dynamic Path Reduction for Software Model Checking. Proceedings of iFM'09, the Seventh International Conference on Integrated Formal Methods, Düsseldorf, Germany, Springer-Verlag, Lecture Notes in Computer Science, Vol. 5423, pp. 322-336 (Feb. 2009).

    We present the new technique of dynamic path reduction (DPR), which allows one to prune redundant paths from the state space of a program under verification. DPR is a very general technique, which we consider here in the context of the bounded model checking of sequential programs with nondeterministic conditionals. The DPR approach is based on the symbolic analysis of concrete executions. For each explored execution path p that does not reach an abort statement, we repeatedly apply a weakest-precondition computation to accumulate the constraints associated with an infeasible sub-path derived from p by taking the alternative branch to an if-then-else statement. We then use an SMT solver to learn the minimally unsatisfiable core of these constraints. By further learning the statements in p that are critical to the sub-path's infeasibility, as well as the control-flow decisions that must be taken to execute these statements, unexplored paths containing the same unsatisfiable core can be efficiently and dynamically pruned. Our preliminary experimental results show that DPR can prune a significant percentage of execution paths, a percentage that grows with the size of the instance of the problem being considered.
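
    The unsat-core step can be illustrated in miniature with Z3's Python API (our sketch; in DPR the constraints come from weakest-precondition computations along concrete execution paths):

        # Three tracked path constraints; c1 and c2 conflict, c3 is irrelevant,
        # so the reported core should name only c1 and c2.
        from z3 import Bool, Int, Solver, sat

        x = Int('x')
        s = Solver()
        s.set(unsat_core=True)
        c1, c2, c3 = Bool('c1'), Bool('c2'), Bool('c3')
        s.assert_and_track(x > 10, c1)   # from branch `x > 10`
        s.assert_and_track(x < 5, c2)    # from branch `x < 5`
        s.assert_and_track(x != 7, c3)   # from an unrelated branch
        if s.check() != sat:
            print(s.unsat_core())        # typically [c1, c2]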


  43. O. Riganelli, R. Grosu, S. R. Das, C. R. Ramakrishnan and S. A. Smolka, Power Optimization in Fault-Tolerant Mobile Ad Hoc Networks. Proceedings of the 11th IEEE High Assurance Systems Engineering Symposium (HASE'08), Nanjing, China, IEEE Press (Dec. 2008).

    In this paper, we investigate the transmission-power assignment problem for k-connected mobile ad hoc networks (MANETs), the problem of optimizing the lifetime of a MANET at a given degree k of connectivity by minimizing power consumption. Our proposed solution is fully distributed and uses a model-based transmission power adaptation strategy based on model-predictive control. Specifically, a stochastic model of the network is used by a state estimator to predict the network's future degree of connectivity and remaining energy. The predicted states are used by an optimizer to derive an optimal transmission power assignment sequence which tracks the desired connectivity level k while minimizing energy consumption. Our experimental results on a simulated wireless sensor network comprising 100 mobile nodes reveals that our localized topology control algorithm provides an almost identical control policy to that of a globalized scheme which is solving a fully observable problem. The difference, of course, is in the scalability of our localized solution, which requires much less communication bandwidth and energy than the globalized approach.


  45. P. Ye, R. Grosu, S.A. Smolka and E. Entcheva. Formal Analysis of Abnormal Excitation in Cardiac Tissue. Proceedings of CMSBÂ’08, the 6th International Conference on Computational Methods in Systems Biology, Rostock, Germany, Springer, Lecture Notes in Bioinformatics (Oct. 2008).

    We present the Piecewise Linear Approximation Model of Ion Channel contribution (PLAMIC) to cardiac excitation. We use the PLAMIC model to conduct formal analysis of cardiac arrhythmic events, namely Early Afterdepolarizations (EADs). The goal is to quantify (for the first time) the contribution of the overall sodium (Na?+?), potassium (K?+?) and calcium (Ca2?+?) currents to the occurrence of EADs during the plateau phase of the cardiac action potential (AP). Our analysis yields exact mathematical criteria for the separation of the parameter space for normal and EAD-producing APs, which is validated by simulations with classical AP models based on complex systems of nonlinear differential equations. Our approach offers a simple formal technique for the prediction of conditions leading to arrhythmias (EADs) from a limited set of experimental measurements, and can be invaluable for devising new anti-arrhythmic strategies.


  46. R. Cleaveland, S.A. Smolka and S.T. Sims, An Instrumentation-Based Approach to Controller Model Validation. Model-Driven Development of Reliable Automotive Services, Springer, LNCS 4922, pp. 84-97, (July 2008).

    This paper discusses the concept of Instrumentation-Based Validation (IBV): the use of model instrumentation and coverage-based testing to validate models of embedded control software. IBV proceeds as follows. An engineer first formalizes requirements as assertions, or small models, which may be thought of as monitors that observe the behavior of the controller model as it executes. The engineer then instruments the model with these assertions and develops test suites with the aim of highlighting where assertion violations occur. To make our discussion of IBV more concrete, we also consider its implementation within the Reactis tool suite for the automated testing and validation of controller models given in /Stateflow.


  47. A. Singh, C. R. Ramakrishnan and S. A. Smolka, A Process Algebra for Mobile Ad Hoc Networks. Proceedings of Coordination 2008, the 10th International Conference on Coordination Models and Languages, Oslo, Norway, Springer, LNCS Vol. 5052, pp. 296-314 (June 2008).


  48. E. Bartocci, F. Corradini, R. Grosu, E. Merelli, O. Riganelli and S. A. Smolka. StonyCam: A Formal Framework for Modeling, Analyzing and Regulating Cardiac Myocytes. Concurrency, Graphs and Models, Lecture Notes in Computer Science, Vol. 5065, Springer-Verlag, pp. 493-502 (June 2008).

    This paper presents a formal framework, experimental infrastructure, and computational environment for modeling, analyzing and regulating the behavior of cardiac tissues. Based on the theory of hybrid automata, we aim at providing suitable tools to be used in devising strategies for the pharmacological or other forms of treatment of cardiac electrical disturbances.


  49. P. Ye, E. Entcheva, S.A. Smolka and R. Grosu. Symbolic Analysis of the Neuron. Proceedings of ICBBEÂ’08, the 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, China, IEEE Press (May 2008).

    We present a novel approach to investigating key behavioral properties of complex biological systems by first using automated techniques to learn a simplified Linear Hybrid Automaton model of the system under investigation, and then carrying out automatic reachability analysis on the resulting model. The specific biological system we consider is the neuronal Action Potential and the specific question of interest is bifurcation: the graded response of the neuron to stimulation of varying amplitude and duration. Reachability analysis in this case is performed using the d/dt analysis tool for hybrid systems. The results we so obtain reveal the precise conditions under which bifurcation manifests, when taking into consideration an infinite class of input stimuli of arbitrary shape, amplitude, and duration within given respective intervals. To the best of our knowledge, this represents the first time that formal (reachability) analysis has been applied to a computational model of excitable cells. The obvious advantage of symbolic reachability analysis over simulation—perhaps the only available analysis method when complex systems of coupled ODEs are used to model excitable-cell behavior, as has traditionally been the case—is that through the so-called reachable set computation, the system’s reaction to an infinite set of possible inputs can be observed. Our results further demonstrate that Linear Hybrid Automata, as a formal language, is both expressive enough to capture interesting excitable-cell behavior, and abstract enough to render formal analysis possible.


  50. R. Grosu, E. Bartocci, F. Corradini, E. Entcheva, S.A. Smolka and A. Wasilewska, Learning and Detecting Emergent Behavior in Networks of Cardiac Myocytes. Proceedings of HSCCÂ’08, the 11th International Conference on Hybrid Systems: Computation and Control, St. Louis, MO, Springer, LNCS Vol. 4981, pp. 229-243 (April 2008).

    We address the problem of specifying and detecting emergent behavior in networks of cardiac myocytes, spiral electric waves in particular, a precursor to atrial and ventricular fibrillation. To solve this problem we: (1) Apply discrete mode-abstraction to the cycle-linear hybrid automata (CLHA) we have recently developed for modeling the behavior of myocyte networks; (2) Introduce the new concept of spatialsuperposition of CLHA modes; (3) Develop a new spatial logic, based on spatial-superposition, for specifying emergent behavior; (4) Devise a new method for learning the formulae of this logic from the spatial patterns under investigation; and (5) Apply bounded model checking to detect (within milliseconds) the onset of spiral waves. We have implemented our methodology as the Emerald tool-suite, a component of our EHA framework for specification, simulation, analysis and control of excitable hybrid automata. We illustrate the effectiveness of our approach by applying Emerald to the scalar electrical fields produced by our CellExcite simulator.


  51. E. Bartocci, F. Corradini, M.R. Di Berardini, E. Entcheva, R. Grosu and S.A. Smolka. Spatial Networks of Hybrid I/O Automata for Modeling Excitable Tissue. Proceedings of FBTCÂ’07, the International Workshop From Biology to Concurrency and Back, Lisbon, Springer, ENTCS Vol. 194(3), pp. 51-67 (September, 2007).

    We propose a new biological framework, spatial networks of hybrid input/output automata (SNHIOA), for the efficient modeling and simulation of excitable-cell tissue. Within this framework, we view an excitable tissue as a network of interacting cells disposed according to a 2D spatial lattice, with the electrical behavior of a single cell modeled as a hybrid input/ouput automaton. To capture the phenomenon that the strength of communication between automata depends on their relative positions within the lattice, we introduce a new, weighted parallel composition operator to specify the influence of one automata over another. The purpose of the SNHIOA model is to efficiently capture the spatiotemporal behavior of wave propagation in 2D excitable media. To validate this claim, we show how SNHIOA can be used to model and capture different spatiotemporal behavior of wave propagation in 2D isotropic cardiac tissue, including normal planar wave propagation, spiral creation, the breakup of spirals into more complex (potentially lethal) spatiotemporal patterns, and the recovery of the tissue to the rest via defibrillation.


  52. R. Grosu, S. Mitra, P. Ye, E. Entcheva, IV Ramakrishnan and S.A. Smolka, Learning Cycle-Linear Hybrid Automata for Excitable Cells. Proceedings of HSCCÂ’07, the 10th International Conference on Hybrid Systems: Computation and Control, Pisa, Italy, pp. 245-258, Springer, LNCS Vol. 4416 (April 2007).

    We show how to automatically learn the class of Hybrid Automata called Cycle-Linear Hybrid Automata (CLHA) in order to model the behavior of excitable cells. Such cells, whose main purpose is to amplify and propagate an electrical signal known as the action potential (AP), serve as the “biologic transistors” of living organisms. The learning algorithm we propose comprises the following three phases: (1) Geometric analysis of the APs in the training set is used to identify, for each AP, the modes and switching logic of the corresponding Linear Hybrid Automata. (2) For each mode, the modified Prony’s method is used to learn the coefficients of the associated linear flows. (3) The modified Prony’s method is used again to learn the functions that adjust, on a per-cycle basis, the mode dynamics and switching logic of the Linear Hybrid Automata obtained in the first two phases. Our results show that the learned CLHA is able to successfully capture AP morphology and other important excitable-cell properties, such as refractoriness and restitution, up to a prescribed approximation error. Our approach is fully implemented in MATLAB and, to the best of our knowledge, provides the most accurate approximation model for ECs to date.


  53. S. Callanan, R. Grosu, J. Seyster, S.A. Smolka and E. Zadok. Model Predictive Control for Memory Profiling. Proceedings of NSF NGSÂ’07, Next Generation Software Workshop at IPDPS, Long Beach, California, pp. 1-7, IEEE Press (March 2007).

    We make two contributions in the area of memory pro- filing. The first is a real-time, memory-profiling toolkit we call Memcov that provides both allocation/deallocation and access profiles of a running program. Memcov requires no recompilation or relinking and significantly reduces the barrier to entry for new applications of memory profiling by providing a clean, non-invasive way to perform two major functions: processing of the stream of memory-allocation events in real time and monitoring of regions in order to receive notification the next time they are hit. Our second contribution is an adaptive memory pro- filer and leak detector called MemcovMPC. Built on top of Memcov, MemcovMPC uses Model Predictive Control to derive an optimal control strategy for leak detection that maximizes the number of areas monitored for leaks, while minimizing the associated runtime overhead. When it observes that an area has not been accessed for a userdefinable period of time, it reports it as a potential leak. Our approach requires neither mark-and-sweep leak detection nor static analysis, and reports a superset of the memory leaks actually occurring as the program runs. The set of leaks reported by MemcovMPC can be made to approximate the actual set more closely by lengthening the threshold period.


  54. R. Grosu, X. Huang, S.A. Smolka, W. Tan and S. Tripakis. Deep Random Search for Efficient Model Checking of Timed Automata. Proceedings of MWÂ’06, 7th Monterey Workshop on Composition of Embedded Systems, Paris, France, pp. 37-48 (Oct. 2006).

    We present DRS (Deep Random Search), a new Las Vegas algorithm for model checking safety properties of timed automata. DRS explores the state space of the simulation graph of a timed automaton by performing random walks up to a prescribed depth. Nodes along these walks are then used to construct a random fringe, which is the starting point of additional deep random walks. The DRS algorithm is complete, and optimal to within a specified depth increment. Experimental results show that it is able to find extremely deep counter-examples for a number of benchmarks, outperforming Open-Kronos and Uppaal in the process.


  55. E.W. Stark, R. Cleaveland, and S.A. Smolka. Probabilistic I/O Automata: Theories of Two Equivalences. Proceedings of 17th International Conference on Concurrency Theory (CONCUR 2006), Lecture Notes in Computer Science, Vol. 4137, pp. 343-357, Spring Verlag (Aug. 2006).

    Working in the context of a process-algebraic language for Probabilistic I/O Automata (PIOA), we study the notion of PIOA behavior equivalence by obtaining a complete axiomatization of its equational theory and comparing the results with a complete axiomatization of a more standard equivalence, weighted bisimulation. The axiomatization of behavior equivalence is achieved by adding to the language an operator for forming convex combinations of terms.


  56. M.R. True, E. Entcheva, S.A. Smolka, P. Ye and R. Grosu. Efficient Event-Driven Simulation of Excitable Hybrid Automata. Proceedings of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2006), IEEE Press (Aug. 2006).

    We present an efficient, event-driven simulation framework for large-scale networks of excitable hybrid automata (EHA), a particular kind of hybrid automata that we use to model excitable cells. A key aspect of EHA is that they possess protected modes of operation in which they are non-responsive to external inputs. In such modes, our approach takes advantage of the analytical solution of the modes' linear differential equations to eliminate all integration steps, and therefore to dramatically reduce the amount of computation required. We first present a simple simulation framework for EHA based on a time-step integration method that follows naturally from our EHA models. We then present our event-driven simulation framework, where each cell has an associated event specifying both the type of processing next required for the cell and a time at which the processing must occur. A priority queue, specifically designed to reduce queueing overhead, maintains the correct ordering among events. This approach allows us to avoid handling certain cells for extended periods of time. Through a mode-by-mode case analysis, we demonstrate that our event-driven simulation procedure is at least as accurate as the time-step one. As experimental validation of the efficacy of the event-driven approach, we demonstrate a five-fold improvement in the simulation time required to produce spiral waves in a 400-times-400 cell array


  57. P. Ye, E. Entcheva, S.A. Smolka, M.R. True and R. Grosu. Hybrid Automata as a Unifying Framework for Modeling Cardiac Cells. Proceedings of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2006), IEEE Press (Aug. 2006).

    We propose hybrid automata (HA) as a unifying framework for computational models of excitable cells. HA, which combine discrete transition graphs with continuous dynamics, can be naturally used to obtain a piecewise, possibly linear, approximation of a nonlinear excitable-cell model. We first show how HA can be used to efficiently capture the action-potential morphology--as well as reproduce typical excitable-cell characteristics such as refractoriness and restitution--of the dynamic Luo-Rudy model of a guinea-pig ventricular myocyte. We then recast two well-known computational models, Biktashev's and Fenton-Karma, as HA without any loss of expressiveness. Given that HA possess an intuitive graphical representation and are supported by a rich mathematical theory and numerous analysis tools, we argue that they are well positioned as a computational model for biological processes.


  58. P. Ye, E. Entcheva, S.A. Smolka, M.R. True and R. Grosu. A Cycle-Linear Approach to Modeling Action Potentials. Proceedings of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2006), IEEE Press (Aug. 2006).

    We introduce cycle-linear hybrid automata (CLHA) and show how they can be used to efficiently model dynamical systems that exhibit nonlinear, pseudo-periodic behavior. CLHA are based on the observation that such systems cycle through a fixed set of operating modes, although the dynamics and duration of each cycle may depend on certain computational aspects of past cycles. CLHA are constructed around these modes such that the per-cycle, per-mode dynamics are given by a time-invariant linear system of equations; the parameters of the system are dependent on a deformation coefficient computed at the beginning of each cycle as a function of memory units. Viewed over time, CLHA generate a very intuitive, linear approximation of the entire phase space of the original, nonlinear system. We show how CLHA can be used to efficiently model the action potential of various types of excitable cells and their adaptation to pacing frequency


  59. S. Callanan, R. Grosu, A. Rai, S.A. Smolka, M. True and E. Zadok. Runtime Verification for High-Confidence Systems: A Monte Carlo Approach. Proceedings of Second International Workshop on Model Based Testing (MBT 2006), (March 2006).

    We present a new approach to runtime verification that utilizes classical statistical techniques such as Monte Carlo simulation, hypothesis testing, and confidence interval estimation. Our algorithm, MCM, uses sampling-policy automata to vary its sampling rate dynamically as a function of the current confidence it has in the correctness of the deployed system. We implemented MCM within the Aristotle tool environment, an extensible, GCC-based architecture for instrumenting C programs for the purpose of runtime monitoring. For a case study involving the dynamic allocation and deallocation of objects in the Linux kernel, our experimental results show that Aristotle reduces the runtime overhead due to monitoring, which is initially high when confidence is low, to levels low enough to be acceptable in the long term as confidence in the monitored system grows.


  60. S. Callanan, R. Grosu, X. Huang, S.A. Smolka and E. Zadok. Compiler-Assisted Software Verification Using Plug-Ins. Proceedings of NGSÂ’06, Next Generation Software Workshop at IPDPS, Rhodes Island, Greece, pp. 8-15, IEEE Press (April 2006).

    We present Protagoras, a new plug-in architecture for the GNU compiler collection that allows one to modify GCC's internal representation of the program under compilation. We illustrate the utility of Protagoras by presenting plug-ins for both compile-time and runtime software verification and monitoring. In the compile-time case, we have developed plug-ins that interpret the GIMPLE intermediate representation to verify properties statically. In the runtime case, we have developed plug-ins for GCC to perform memory leak detection, array bounds checking, and reference-count access monitoring.


  61. R. Grosu and S.A. Smolka. Monte Carlo Methods for Process Algebra. Proceedings of International Workshop on Algebraic Process Calculi: The First Twenty Five Years and Beyond, Bertinoro, Italy, ENTCS Vol. 162, pp. 203-207 (Sep. 2006).

    We review the recently developed technique of Monte Carlo model checking and show how it can be applied to the implementation problem for I/O Automata. We then consider some open problems in applying Monte Carlo techniques to other process-algebraic problems, such as simulation and bisimulation.


  62. R. Grosu, X. Huang, S. Jain, and S. A. Smolka. “Open-Source Model Checking.” Proceedings of SoftMC 2005: Workshop on Software Model Checking, Electronic Notes in Theoretical Computer Science (July 2005).

    We present GMC2, a software model checker for GCC, the open-source compiler from the Free Software Foundation (FSF). GMC2, which is part of the GMC static-analysis and model-checking tool suite for GCC under development at SUNY Stony Brook, can be seen as an extension of Monte Carlo model checking to the setting of concurrent, procedural programming languages. Monte Carlo model checking is a newly developed technique that utilizes the theory of geometric random variables, statistical hypothesis testing, and random sampling of lassos in Büchi automata to realize a one- sided error, randomized algorithm for LTL model checking. To handle the function call/return mechanisms inherent in procedural languages such as C/C++, the version of Monte Carlo model checking implemented in GMC2 is optimized for pushdown-automaton models. Our experimental results demonstrate that this approach yields an efficient and scalable software model checker for GCC.


  63. R. Grosu and S. A. Smolka. “Safety-Liveness Semantics for UML 2.0.” Proceedings of ACSD 2005: Fifth International Conference on Application of Concurrency to System Design, IEEE Computer Society Press, Los Alamitos, CA, USA (June 2005).

    We provide an automata-theoretic solution to one of the main open questions about the UML standard, namely how to assign a formal semantics to a set of sequence diagrams without compromising refinement? Our solution relies on a rather obvious idea, but to our knowledge has not been used before in this context: that bad and good sequence diagrams in the UML standard should be regarded as safety and liveness properties, respectively. Proceeding in this manner, we obtain a semantics that essentially complements the set of behaviors associated with the set of sequence diagrams, thereby allowing us to use the standard notion of refinement as language inclusion. We show that refinement in this setting is compositional with respect to sequential composition, alternative composition, parallel composition, and star+ composition.


  64. R. Grosu and S.A. Smolka, Efficient Modeling of Excitable Cells Using Hybrid Automata, Proceedings of Computational Methods in Systems Biology, Lecture Notes in Computer Science, Springer-Verlag (April 2005).

    We present an approach to modeling complex biological systems that is based on Hybrid automata (HA). HA combine discrete transition graphs with continuous dynamics. Our goal is to efficiently capture the behavior of excitable cells previously modeled by systems of nonlinear differential equations. In particular, we derive HA models from the Hodgkin-Huxley model of the giant squid axon, the Luo-Rudy dynamic model of a guinea pig ventricular cell, and a model of a neonatal rat ventricular myocyte. Our much simpler HA models are able to successfully capture the action-potential morphology of the different cells, as well as reproduce typical excitable-cell characteristics, such as refractoriness (period of non-responsiveness to external stimulation) and restitution (adaptation to pacing rates). To model electrical wave propagation in a cell network, the single-cell HA models are linked to a classical 2D spatial model. The resulting simulation framework exhibits significantly improved computational efficiency in modeling complex wave patterns, such as the spiral waves underlying pathological conditions in the heart.


  65. R. Grosu and S.A. Smolka, Monte Carlo Model Checking, Proceedings of TACAS 2005: Eleventh International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science, Springer-Verlag (April 2005).


  66. C.W. Keller, D. Saha, S. Basu, and S.A. Smolka. “FocusCheck: A Tool for Model Checking and Debugging Sequential C Programs.” Proceedings of TACAS 2005: Eleventh International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science, Springer-Verlag (April 2005).

    We present the FocusCheck model-checking tool for the verification and easy debugging of assertion violations in sequential C programs. The main functionalities of the tool are the ability to: (a) identify all minimum-recursion, loop-free counter-examples in a C program using on-the-fly abstraction techniques; (b) extract focus-statement sequences (FSSs) from counter-examples, where a focus statement is one whose execution directly or indirectly causes the violation underlying a counter-example; (c) detect and discard infeasible counter-examples via feasibility analysis of the corresponding FSSs; and (d) isolate program segments that are most likely to harbor the erroneous statements causing the counter-examples. FocusCheck is equipped with a smart graphical user interface that provides various views of counter-examples in terms of their FSSs, thereby enhancing usability and readability of model-checking results.


  67. P. Yang, Y. Dong, C.R. Ramakrishnan, and S.A. Smolka. “Compiling Mobile Processes for Efficient Model Checking” (Winner of Most Practical Paper Award). Proceedings of Seventh International Symposium on Practical Aspects of Declarative Languages (PADL 05), Lecture Notes in Computer Science, Springer-Verlag (Jan. 2005).

    We present the ?-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ?-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ?-calculus is the separation of a node’s communication and computational behavior, described by an ?-process, from the description of its physical transmission range, referred to as an ?-process interface.
    Our main technical results are as follows. We give a formal operational semantics of the ?-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ?-processes. We also prove that the ?-calculus is a conservative extension of the p-calculus, and that late bisimulation equivalence (appropriately lifted from the p-calculus to the ?-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation equivalence, which abstracts away from two types of internal actions: t-actions, as in the p-calculus, and µ-actions, signaling node movement. We additionally define a symbolic semantics for the ?-calculus extended with the mismatch operator, along with a corresponding notion of symbolic bisimulation equivalence, and establish congruence results for this extension as well. Finally, we illustrate the practical utility of the calculus by developing and analyzing formal models of a leader election protocol for MANETs and the AODV routing protocol.


  68. R. Grosu and S.A. Smolka. “Quantitative Model Checking.” Proceedings of First International Symposium on Leveraging Applications of Formal Methods (ISoLA 2004), Paphos, Cyprus (Nov. 2004).

    We present QMC, a one-sided error Monte Carlo decision procedure for the LTL model-checking problem S |= ?. Besides serving as a randomized algorithm for LTL model checking, QMC delivers quantitative information about the likelihood that S |= ?. In particular, given a specification S of a finite-state system, an LTL formula ?, and parameters and d, QMC performs random sampling to compute an estimate peZ of the expectation pZ that the language L(B) of the B¨uchi automaton B = BS × B¬? is empty; B is such that L(B) = Ø iff S |= ?. A random sample in our case is a lasso, i.e. an initialized random walk through B ending in a cycle. The estimate peZ output by QMC is an (, d)- approximation of pZ—one that is within a factor of 1± with probability at least 1-d—and is computed using a number of samples N that is optimal to within a constant factor, in expected time O(N ·D) and expected space O(D), where D is B’s recurrence diameter. Experimental results demonstrate that QMC is fast, memory-efficient, and scales extremely well.


  69. S. Basu, D. Saha, and S.A. Smolka. “Localizing Program Errors for Cimple Debugging.” Proceedings of 24th IFIP International Conference on Formal Techniques for Networked and Distributed Systems (FORTE 2004), Lecture Notes in Computer Science, Springer-Verlag (Sep. 2004).

    We present automated techniques for the explanation of counter-examples, where a counter-example should be understood as a sequence of program statements. Our approach is based on variable dependency analysis and is applicable to programs written in Cimple, an expressive subset of the C programming language. Central to our approach is the derivation of a focus-statement sequence (FSS) from a given counter-example: a subsequence of the counter-example containing only those program statements that directly or indirectly affect the variable valuation leading to the program error behind the counter-example. We develop a ranking procedure for FSSs where FSSs of higher rank are conceptually easier to understand and correct than those of lower rank. We also analyze constraints over uninitialized variables in order to localize program errors to specific program segments; this often allows the user to subsequently take appropriate debugging measures. We have implemented our techniques in the FocusCheck model checker, which efficiently checks for assertion violations in Cimple programs on a per-procedure basis. The practical utility of our approach is illustrated by its successful application to a fast, linear-time median identification algorithm commonly used in statistical analysis and in the Resolution Advisory module of the Traffic Collision Avoidance System.


  70. E.W. Stark, R. Cleaveland, and S.A. Smolka, A Process-Algebraic Language for Probabilistic I/O Automata. Proceedings of 14th International Conference on Concurrency Theory (CONCUR 2003), Lecture Notes in Computer Science, Springer-Verlag (Aug. 2003).

    We present a process-algebraic language for Probabilistic I/O Automata (PIOA). To ensure that PIOA specifications given in our language satisfy the "input-enabled" property, which requires that all input actions be enabled in every state of a PIOA, we augment the language with a set of type inference rules. We also equip our language with a formal operational semantics defined by a set of transition rules. We present a number of results whose thrust is to establish that the typing and transition rules are sensible and interact properly. The central connection between types and transition systems is that if a term is well-typed, then in fact the associated transition system is input-enabled. We also consider two notions of equivalence for our language, weighted bisimulation equivalence and PIOA behavioral equivalence. We show that both equivalences are substitutive with respect to the operators of the language, and note that weighted bisimulation equivalence is a strict refinement of behavioral equivalence.


  71. S. Basu, D. Saha, Y.-J. Lin, and S.A. Smolka, Counter-Example Generation for Push-Down Systems. Proceedings of 23rd IFIP International Conference on Formal Techniques for Networked and Distributed Systems (FORTE 2003), Lecture Notes in Computer Science, Springer-Verlag (Sep. 2003).

    We present a new, on-the-fly algorithm that given a push-down model representing a sequential program with (recursive) procedure calls and an extended finite-state automaton representing (the negation of) a safety property, produces a succinct, symbolic representation of all counter-examples; i.e., traces of system behaviors that violate the property. The class of what we call minimum-recursion loop-free counter-examples can then be generated from this representation on an as-needed basis and presented to the user. Our algorithm is also applicable, without modification, to finite-state system models. Simultaneous consideration of multiple counter-examples can minimize the number of model-checking runs needed to recognize common root causes of property violations. We illustrate the use of our techniques via application to a Java-Tar utility and an FTP-server program, and discuss a prototype tool implementation which offers several abstraction techniques for easy-viewing of generated counter-examples.


  72. Y. Dong, C.R. Ramakrishnan, and S.A. Smolka. Evidence Explorer: A Tool for Exploring Model-Checking Proofs. Proceedings of 15th Conference on Computer-Aided Verification (CAV 2003), Lecture Notes in Computer Science, Springer-Verlag (July 2003).

    We present the Evidence Explorer, a new tool for assisting users in navigating the proof structure, or evidence, produced by a model checker when attempting to verify a system specification for a temporal-logic property. Due to the sheer size of such evidence, single-step traversal is prohibitive and smarter exploration methods are required. The Evidence Explorer enables users to explore evidence through a collection of orthogonal but coordinated views. These views allow one to quickly ascertain the overall perception of evidence through consistent visual cues, and easily locate interesting regions by simple drill-down operations. Views are definable in relational graph algebra, a natural extension of relational algebra to graph structures such as model-checking evidence.


  73. F. Moller and S.A. Smolka. On the Computational Complexity of Bisimulation, Redux. Proceedings of the ACM Paris Kanellakis Memorial Workshop (PCK50) in affiliation with the ACM 2003 Federated Computing Research Conference (FCRC 2003), ACM Press (June 2003).

    This is a revised and updated version of a 1995 ACM Computing Surveys paper by Moller and Smolka on the computational complexity of bisimulation. Paris Kanellakis and Smolka were among the first to investigate the computational complexity of bisimulation, and Moller has a long-established track record in the field. The authors therefore believe that the proceedings of Principles of Computing and Knowledge: Paris C. Kanellakis Memorial Workshop represent an ideal opportunity for another look at the subject.


  74. Y. Dong, C.R. Ramakrishnan, and S.A. Smolka. Model Checking and Evidence Exploration. Proceedings of Tenth IEEE International Conference on Engineering of Computer Based Systems (ECBS 2003), IEEE Computer Society Press (April 2003).

    We present an algebraic framework for evidence exploration: the process of interpreting, manipulating, and navigating the proof structure or evidence produced by a model checker when attempting to verify a system specification for a temporal-logic property. Due to the sheer size of such evidence, single-step traversal is prohibitive and smarter exploration methods are required. Evidence exploration allows users to explore evidence through smaller, manageable views, which are definable in relational graph algebra, a natural extension of relational algebra to graph structures such as model-checking evidence. We illustrate the utility of our approach by applying the Evidence Explorer, our tool implementation of the evidence-exploration framework, to the Java meta-locking algorithm, a highly optimized technique deployed by the Java Virtual Machine to ensure mutually exclusive access to object monitor queues by threads.


  75. P. Yang, C.R. Ramakrishnan, and S.A. Smolka, A Logical Encoding of the pi-Calculus: Model Checking Mobile Processes Using Tabled Resolution. Procedings of Fourth International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2003), Lecture Notes in Computer Science, Springer-Verlag (Jan. 2003).

    We present MMC, a model checker for mobile systems specified in the style of the pi-calculus. MMC's development builds on our experience gained in developing XMC, a model checker for an extension of Milner's value-passing calculus implemented using the XSB tabled logic-programming system. MMC, however, is not simply an extension of XMC; rather it is virtually a complete re-implementation that addresses the salient issues that arise in the pi-calculus, including scope extrusion and intrusion, and dynamic generation of new names to avoid name capture. We show that tabled logic programming is especially suitable as an efficient implementation platform for model checking pi-calculus specifications, and can be used to obtain an exact encoding of the pi-calculus's transitional semantics. Moreover, MMC is easily extended to handle process expressions in the spi-calculus. Our experimental data shows that MMC outperforms other knowntools for model checking the pi-calculus.


  76. Y. Dong, B. Sarna-Starosta, C.R. Ramakrishnan, and S.A.Smolka, Vacuity Checking in the Modal Mu-Calculus. Proceedings of Ninth International Conference on Algebraic Methodology and Software Technology (AMAST 2002), Lecture Notes in Computer Science, Springer-Verlag (Sept. 2002).

    Vacuity arises when a logical formula is trivially true in a given model due, for example, to antecedent failure. Beer et al. have recently introduced a logic-independent notion of vacuity and shown that certain logics, i.e., those with polarity, admit an efficient decision procedure for vacuity detection. We show that the modal mu-calculus, a very expressive temporal logic, is a logic with polarity and hence the results of Beer et al. are applicable. We also extend the definition of vacuity to achieve a new notion of redundancy in logical formulas. Redundancy captures several forms of antecedent failure that escape traditional vacuity analysis, including vacuous actions in temporal modalities and unnecessarily strong temporal operators. Furthermore, we have implemented an efficient redundancy checker for the modal mu-calculus in the context of the XMC model checker. Our checker generates diagnostic information in the form of all maximal subformulas that are redundant and exploits the fact that XMC can cache intermediate results in memo tables between model-checking runs. We have applied our redundancy checker to a number of previously published case studies, and found instances of redundancy that have gone unnoticed till now. These findings provide compelling evidence of the importance of redundancy detection in the design process.


  77. R. Grosu, E. Zadok, S. A. Smolka, R. Cleaveland, and Y. A. Liu. “High-Confidence Operating Systems.” Proceedings of the Tenth SIGOPS European Workshop: “Can we really depend on an OS?”, Saint-Emilion, France (Sept. 2002).

    Operating systems (OSs) are among the most sophisticated software systems in widespread use, and among the most expensive and time-consuming to develop and maintain. OS software must also be robust and dependable, since OS failures can result in system crashes that corrupt user data, endanger human lives (cf. embedded systems), or provide open avenues of attack for hackers or even cyber-terrorists.


  78. R. Grosu, Yanhong A. Liu, S.A. Smolka, S.D. Stoller, and J. Yan, Automated Software Engineering Using Concurrent Class Machines. Proceedings of 16th IEEE International Conference on Automated Software Engineering (ASE 2001), IEEE Press, San Diego, CA (Nov. 2001).

    Concurrent Class Machines are a novel state-machine model that directly captures a variety of object-oriented concepts, including classes and inheritance, objects and object creation, methods, method invocation and exceptions, multithreading and abstract collection types. The model can be understood as a precise definition of UML activity diagrams which, at the same time, offers an executable, object-oriented alternative to event-based statecharts. It can also be understood as a visual, combined control and data flow model for multithreaded object-oriented programs. We first introduce a visual notation and tool for Concurrent Class Machines and discuss their benefits in enhancing system design. We then equip this notation with a precise semantics that allows us to define refinement and modular refinement rules. Finally, we summarize our work on generation of optimized code, implementation and experiments, and compare with related work.


  79. D. Hansel, R. Cleaveland, S.A. Smolka, Distributed Prototyping from Validated Specifications. Proceedings of 12th IEEE International Workshop on Rapid System Prototyping (RSP 2001), IEEE Press, Monterey, CA (June 2001).

    We present vpl2cxx, a translator that automatically generates efficient, fully distributed C++ code from high-level system models specified in the mathematically well-founded VPL design language. As the Concurrency Workbench of the New Century (CWB-NC) verification tool includes a front-end for VPL, designers may use the full range of automatic verification and simulation checks provided by this tool on their VPL system designs before invoking the translator, thereby generating distributed prototypes from validated specifications. Besides being fully distributed, the code generated by vpl2cxx is highly readable and portable to a host of execution environments and real-time operating systems (RTOSes). This is achieved by encapsulating all generated code dealing with low-level interprocess communication issues in a library for synchronous communication, which in turn is built upon the Adaptive Communication Environment (ACE) client-server network programming interface. Finally, example applications show that the performance of the generated code is very good, especially for prototyping purposes. We discuss two such examples, including the RETHER real-time Ethernet protocol for voice and video applications.


  80. R. Sekar, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, Model-Carrying Code (MCC): A New Paradigm for Mobile-Code Security. Proceedings of the New Security Paradigms Workshop (NSPW 2001), ACM Press, Cloudcroft, New Mexico (Sept. 2001).

    A new approach to ensuring the security of mobile code is presented. Our approach enables a mobile-code consumer to understand and formally reason about what a piece of mobile code can do; check if the actions of the code are compatible with his/her security policies; and, if so, execute the code. The compatibility-checking process is automated, but if there are conflicts, consumers have the opportunity to refine their policies, taking into account the functionality provided by the mobile code. Finally, when the code is executed, our framework uses advanced runtime-monitoring techniques to ensure that the code does not violate the consumer's (refined) policies. At the heart of our method, which we call model-carrying code (MCC), is the idea that a piece of mobile code comes equipped with an expressive yet concise model of the code's behavior, the generation of which can be automated. MCC enjoys several advantages over current approaches to mobile-code security. Succinctly put, it protects consumers of mobile code from malicious or faulty code without unduly restricting the code's functionality. Moreover, the MCC approach is applicable to the vast majority of code that exists today, which is written in C or C++. This contrasts with previous approaches such as Java 2 security and proof-carrying code, which are either language-specific or are difficult to extend to these languages. Finally, MCC can be combined with existing techniques such as cryptographic signing and proof-carrying code to yield additional benefits.


  81. M. van Osch, S.A. Smolka, Finite-State Analysis of the CAN Bus Protocol. Proceedings of Sixth IEEE International Symposium on High Assurance Systems Engineering (HASE 2001), IEEE Press, Boca Raton, Florida (Oct. 2001).

    We formally specify the data link layer of the Controller Area Network (CAN), a high-speed serial bus system with real-time capabilities, widely used in embedded systems. CAN's primary application domain is automotive, and the physical and data link layers of the CAN architecture were the subject of the ISO 11898 international standard. We checked our specification against 12 important properties of CAN, eight of which are gleaned from the ISO standard; the other four are desirable properties not directly mentioned in the standard. Our results indicate that not all properties can be expected to hold of a CAN implementation and we discuss the implications of these findings. Moreover, we have conducted a number of experiments aimed at determining how the size of the protocol's state space is affected by the introduction of various features of the data link layer, the number of nodes in the network, the number of distinct message types, and other parameters.


  82. K. Narayan Kumar, C.R. Ramakrishnan, S.A. Smolka, “Alternating Fixed Points in Boolean Equation Systems as Preferred Stable Models.” Proceedings of 17th International Conference on Logic Programming (ICLP’01), Lecture Notes in Computer Science, Springer-Verlag (Nov. 2001).

    We formally characterize alternating fixed points of boolean equation systems as models of (propositional) normal logic programs. To this end, we introduce the notion of a preferred stable model of a logic program, and define a mapping that associates a normal logic program with a boolean equation system such that the solution to the equation system can be ``read off'' the preferred stable model of the logic program. We also show that the preferred model cannot be calculated a-posteriori (i.e. compute stable models and choose the preferred one) but rather must be computed in an intertwined fashion with the stable model itself. The mapping reveals a natural relationship between the evaluation of alternating fixed points in boolean equation systems and the Gelfond-Lifschitz transformation used in stable-model computation.
    For alternation-free boolean equation systems, we show that the logic programs we derive are stratified, while for formulas with alternation, the corresponding programs are non-stratified. Consequently, our mapping of boolean equation systems to logic programs preserves the computational complexity of evaluating the solutions of special classes of equation systems (e.g., linear-time for the alternation-free systems, exponential for systems with alternating fixed points).


  83. D.Q. Goldin, S.A. Smolka, P. Wegner, “Turing Machines, Transition Systems, and Interaction.” Proceedings of Eighth International Workshop on Expressiveness in Concurrency (EXPRESS 2001), Electronic Notes in Theoretical Computer Science, Springer-Verlag (Aug. 2001).

    We present Persistent Turing Machines (PTMs), a new way of interpreting Turing-machine computation, one that is both interactive and persistent. We show that the class of PTMs is isomorphic to a very general class of effective transition systems. One may therefore conclude that the extensions to the Turing-machine model embodied in PTMs are sufficient to make Turing machines expressively equivalent to transition systems. We also define the persistent stream language (PSL) of a PTM and a corresponding notion of PSL-equivalence, and consider the infinite hierarchy of successively finer equivalences for PTMs over finite interaction-stream prefixes. We show that the limit of this hierarchy is strictly coarser than PSL-equivalence, a “gap” whose presence can be attributed to the fact that the transition systems corresponding to PTM computations naturally exhibit unbounded nondeterminism.
    We also consider amnesic PTMs and a corresponding notion of equivalence based on amnesic stream languages (ASLs). It can be argued that amnesic stream languages are representative of the classical view of Turing-machine computation. We show that the class of ASLs is strictly contained in the class of PSLs. Furthermore, the hierarchy of PTM equivalence relations collapses for the subclass of amnesic PTMs. These results indicate that, in a stream-based setting, the extension of the Turing-machine model with persistence is a nontrivial one, and provide a formal foundation for reasoning about programming concepts such as objects with static attributes.


  84. X. Du, C.R. Ramakrishnan, S.A. Smolka, Tabled Resolution + Constraints: A Recipe for Model Checking Real-Time Systems. Proceedings of the 21st IEEE Real-Time Systems Symposium (RTSS 2000). Lecture Notes in Computer Science, Springer-Verlag (Nov. 2000).

    We present a computational framework based on tabled resolution and constraint processing for verifying real-time systems. We also discuss the implementation of this framework in the context of the XMC/RT verification tool. For systems specified using timed automata, XMC offers backward and forward reachability analysis, as well as timed modal mu-calculus model checking. It can also handle timed infinite-state systems, such as those with unbounded message buffers, provided the set of reachable states is finite. We illustrate this capability on a real-time version of the leader election protocol. Finally, XMC/RT can function as a model checker for untimed systems. Despite this versatility, preliminary benchmarking experiments indicate that XMC/RT's performance remains competitive with that of other real-time verification tools.


  85. A. Roychoudhury, K. Narayan Kumar, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, Verification of Parameterized Systems Using Logic Program Transformations. Proceedings of the Sixth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2000), Lecture Notes in Computer Science, Springer-Verlag (April 2000).

    The i-protocol, an optimized sliding-window protocol for GNU UUCP, came


  86. R. Cleaveland, X. Du, S.A. Smolka, GCCS: A Graphical Coordination Language for System Specification. Proceedings of Fourth International Conference on Coordination Models and Languages (COORDINATION 2000). Lecture Notes in Computer Science, Springer-Verlag (Sept. 2000).

    We present GCCS, a graphical coordination language for hierarchical concurrent systems. GCCS, which is implemented in the Concurrency Factory design environment, represents a coordination model based on process algebra. Its coordination laws, given as a structural operational semantics, allow one to infer atomic system transitions on the basis of transitions taken by system components. We illustrate the language's utility by exhibiting a GCCS-coordinated specification of the Rether real-time ethernet protocol. The specification contains both graphical and textual components.


  87. X. Du, C.R. Ramakrishnan, S.A. Smolka, “Real-Time Verification Techniques for Untimed Systems.” Proceedings of the 1st Workshop on Model Checking for Time-Critical Systems (MTCS 2000), Electronic Notes in Theoretical Computer Science, Springer-Verlag (Aug. 2000)

    We show that verification techniques for timed automata based on the Alur and Dill region-graph construction can be applied to much more general kinds of systems, including asynchronous untimed systems over unbounded integer variables. We follow this approach in proving that the model-checking problem for the n-process Bakery algorithm is decidable, for any fixed n. We believe this is the first decidability proof for this problem to appear in the literature.


  88. S. Basu, S.A. Smolka, and O.R. Ward, Model Checking the Java Meta-Locking Algorithm. Proceedings of 7th IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS 2000), IEEE Press. Edinburgh, Scotland (April 2000).

    We apply the XMC tabled-logic-programming-based model checker to the Java meta-locking algorithm. Meta-locking is a highly optimized technique for ensuring mutually exclusive access by threads to object monitor queues, and therefore plays an essential role in allowing Java to offer concurrent access to objects. Our abstract specification of the meta-locking algorithm is fully parameterized, both on M, the number of threads, and N, the number of objects. Using XMC, we show that for a variety of values of M and N, the algorithm indeed provides mutual exclusion and freedom from lockout. Collectively, our results provide strong testimony to the correctness of the meta-locking algorithm.


  89. C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, et al. “XMC: A Logic-Programming-Based Verification Toolset.” Proceedings of the 12th International Conference on Computer Aided Verification (CAV ’00), Springer-Verlag (June 2000).

    XMC supports the specification and verification of concurrent systems such as communication protocols and embedded systems. It is implemented atop XSB, a high-performance logic-programming system. System models are specified in XL, a typed value-passing language based on CCS, properties of interest are specified in the modal mu-calculus, and model checking is used to verify properties of systems. XMC incorporates a justifier which allows the user to navigate the proof tree underlying a model-checking computation; such proof trees are effective in debugging branching-time formulas. XMC has been successfully applied to the specification and verification of a variety of systems including the Rether real-time Ethernet protocol, the Java meta-locking algorithm, and the SET e-commerce protocol.


  90. S. Lu and S.A. Smolka, Model Checking the Secure Electronic Transactions (SET) Protocol. Proceedings of Seventh International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'99), IEEE Computer Society Press (Oct. 1999).

    We use model checking to establish five essential correctness properties of the Secure Electronic Transaction (SET) protocol. SET has been developed jointly by Visa and MasterCard as a method to secure payment card transactions over open networks, and industrial interest in the protocol is high. Our main contributions are to firstly create a formal model of the protocol capturing the purchase request, payment authorization, and payment capture transactions. Together, these transactions constitute the kernel of the protocol. We then encoded our model and the aforementioned correctness properties in the input language of the FDR model checker. Running FDR on this input established that our model of the SET protocol satisfies all five properties even though the cardholder and merchant, two of the participants in the protocol, may try to behave dishonestly in certain ways. To our knowledge, this is the first attempt to formalize the SET protocol for the purpose of model checking.


  91. M. Mukund, K N. Kumar, S.A. Smolka, “Faster Model Checking for Open Systems.” Proceedings of ASIAN’99 , Lecture Notes in Computer Science, Springer-Verlag (Dec. 1999).

    We investigate OrEx, a temporal logic for specifying open systems. Path properties in OrEx are expressed using e-regular expressions, while similar logics for open systems, such as ATL* of Alur et al., use LTL for this purpose. Our results indicate that this distinction is an important one. In particular, we show that Orex has a more efficient model-checking procedure than ATL*, even though it is strictly more expressive. To this end, we present a single-exponential model-checking algorithm for OrEx; the model-checking problem for ATL* in contrast is provably double-exponential.


  92. Y. Dong, S.A. Smolka, E.W. Stark, S. White, “Practical Considerations in Protocol Verification: The E-2C Case Study,” Proceedings of Fifth IEEE International Conference on Engineering of Complex Computer Systems (ICECCS ’99), IEEE Press (Oct. 1999).

    We report on our efforts to formally specify and verify a new protocol of the E-2C Hawkeye Early Warning Aircraft. The protocol, which is currently in test at Northrop Grumman, supports communication between a mission computer (MC) and three or more tactical workstations (TWSs), connected by a single-segment LAN. We modeled the protocol in the PROMELA specification language of the SPIN verification tool, and used SPIN to analyze a number of properties of the protocol. Our investigation revealed a race condition that can lead to a disconnect of an MC/TWS connection when there is one lost UDP datagram and significant timing delays. Such delays are virtually impossible under normal E-2C operating conditions, but could be due to noise on the MC/TWS LAN. A simple modification was proposed that avoids the disconnect in many situations. Practical considerations, however, mandated that the protocol be left as it is: shutting down a noisy connection and reinitializing the TWS, with minimal delay and loss of information to the operator was deemed preferable to operating in a degraded mode.


  93. Y. Dong, X. Du, Y.S. Ramakrishna, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, O. Sokolsky, E.W. Stark, and D.S. Warren, Fighting Livelock in the i-Protocol: A Comparative Study of Verification Tools. Proceedings of the Fifth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '99), Lecture Notes in Computer Science, Springer-Verlag (April 1999).

    The i-protocol, an optimized sliding-window protocol for GNU UUCP, came to our attention two years ago when we used the Concurrency Factory's local model checker to detect, locate, and correct a non-trivial livelock in version 1.04 of the protocol. Since then, we have repeated this verification effort with five widely used model checkers, namely, Cospan, Murphi, SMV, Spin, and XMC. It is our contention that the i-protocol makes for a particularly compelling case study in protocol verification and for a formidable benchmark of verification-tool performance, for the following reasons: 1) The i-protocol can be used to gauge a tool's ability to detect and diagnose livelock errors. 2) The size of the i-protocol's state space grows exponentially in the window size, and the entirety of this state space must be searched to verify that the protocol, with the livelock error eliminated, is deadlock- or livelock-free. 3) The i-protocol is an asynchronous, low-level software system equipped with a number of optimizations aimed at minimizing control-message and retransmission overhead. It lacks the regular structure that is often present in hardware designs. In this sense, it provides any verification tool with a vigorous test of its analysis capabilities.


  94. K. Narayan Kumar, R. Cleaveland, and S.A. Smolka, Infinite Probabilistic and Non-Probabilistic Testing. Proceedings of the 18th Conference on the Foundations of Software Technology and Theoretical Computer Science (FST&TCS '98), Lecture Notes in Computer Science, Springer-Verlag (Dec. 1998).

    We introduce several new notions of infinite testing, for both probabilistic and nonprobabilistic processes, and carefully examine their distinguishing power. We consider three kinds of infinite tests: simple, Büchi, and fair. We show that Büchi tests do not change the distinguishing power of nondeterministic testing. In the probabilistic setting, we show that all three have the same distinguishing power as finite tests. Finally, we show that finite probabilistic tests are stronger than nondeterministic fair tests.


  95. X. Liu and S.A. Smolka, Simple Linear-Time Algorithms for Minimal Fixed Points. Proceedings of the 25th International Colloquium on Automata, Languages, and Programming (ICALP '98), Lecture Notes in Computer Science, Springer-Verlag (July 1998).

    We present global and local algorithms for evaluating minimal fixed points of dependency graphs, a general problem in fixed-point computation and model checking. Our algorithms run in linear time, matching the complexity of the best existing algorithms for similar problems, and are simple to understand. The main novelty of our global algorithm is that it does not use the counter and "reverse list" data structures commonly found in existing linear-time global algorithms. This distinction plays an essential role in allowing us to easily derive our local algorithm from our global one. Our local algorithm is distinguished from existing linear-time local algorithms by a combination of its simplicity and suitability for direct implementation.

    We also provide linear-time reductions from the problems of computing minimal and maximal fixed points in Boolean graphs to the problem of minimal fixed-point evaluation in dependency graphs. This establishes dependency graphs as a suitable framework in which to express and compute alternation-free fixed points.

    Finally, we relate HORNSAT, the problem of Horn formula satisfiability, to the problem of minimal fixed-point evaluation in dependency graphs. In particular, we present straightforward, linear-time reductions between these problems in both directions. As a result, we derive a linear-time local algorithm for HORNSAT, the first of its kind as far as we are aware.
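
    The dependency-graph formulation can be made concrete with a small sketch. The Python below evaluates the minimal fixed point of a dependency graph using the conventional counter-and-watch-list bookkeeping that the paper's global algorithm deliberately avoids; it is included only to fix intuitions about the problem being solved, and the representation and names are mine, not the authors'.

        from collections import deque

        def min_fixed_point(vertices, hyperedges):
            """Minimal fixed point of a dependency graph.

            `hyperedges` maps each vertex v to a list of target sets T,
            with the semantics A(v) = 1 iff some hyperedge (v, T) has
            A(u) = 1 for every u in T.  This is the standard counter-based
            linear-time approach, not the paper's counter-free algorithm.
            """
            A = {v: 0 for v in vertices}        # assignment, initially all 0
            counter = {}                        # unsatisfied targets per hyperedge
            watch = {v: [] for v in vertices}   # hyperedges waiting on vertex v
            queue = deque()

            for v in vertices:
                for i, T in enumerate(hyperedges.get(v, [])):
                    counter[(v, i)] = len(T)
                    if not T:                   # empty target set: v holds outright
                        queue.append(v)
                    for u in T:
                        watch[u].append((v, i))

            while queue:                        # propagate newly established vertices
                u = queue.popleft()
                if A[u]:
                    continue
                A[u] = 1
                for (v, i) in watch[u]:
                    counter[(v, i)] -= 1
                    if counter[(v, i)] == 0 and not A[v]:
                        queue.append(v)
            return A

    A HORNSAT instance maps directly onto this representation: a Horn clause p :- q, r becomes the hyperedge (p, {q, r}), and a fact p becomes the hyperedge (p, {}), which is one way to see how the linear-time correspondence mentioned above arises.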


  96. X. Liu, C.R. Ramakrishnan, and S.A. Smolka, Fully Local and Efficient Evaluation of Alternating Fixed Points. Proceedings of the Fourth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '98), Lecture Notes in Computer Science, Springer-Verlag (April 1998).

    We introduce Partitioned Dependency Graphs (PDGs), an abstract framework for the specification and evaluation of arbitrarily nested alternating fixed points. The generality of PDGs subsumes that of similarly proposed models of nested fixed-point computation such as Boolean graphs, Boolean equation systems, and the propositional modal mu-calculus. Our main result is an efficient local algorithm for evaluating PDG fixed points. Our algorithm, which we call LAFP, combines the simplicity of previously proposed induction-based algorithms (such as Winskel's tableau method for nu-calculus model checking) with the efficiency of semantics-based algorithms (such as the bit-vector method of Cleaveland, Klein, and Steffen for the equational mu-calculus). In particular, LAFP is simply specified, we provide a completely rigorous proof of its correctness, and the number of fixed-point iterations required by the algorithm is asymptotically the same as that of the best existing global algorithms. Moreover, preliminary experimental results demonstrate that LAFP performs extremely well in practice. To our knowledge, this makes LAFP the first efficient local algorithm for computing fixed points of arbitrary alternation depth to appear in the literature.


  97. E.W. Stark and S.A. Smolka, Compositional Analysis of Expected Delays in Networks of Probabilistic I/O Automata, Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science (LICS '98), Indianapolis, IN, IEEE Computer Society Press (June 1998).

    Probabilistic I/O automata (PIOA) constitute a model for distributed or concurrent systems that incorporates a notion of probabilistic choice. The PIOA model provides a notion of composition for constructing a PIOA for a composite system from a collection of PIOAs representing the components. We present a method for computing completion probability and expected completion time for PIOAs. Our method is compositional, in the sense that it can be applied to a system of PIOAs one component at a time, without ever calculating the global state space of the system (i.e., the composite PIOA). The method is based on symbolic calculations with vectors and matrices of rational functions, and it draws upon a theory of observables, which are mappings from delayed traces to real numbers that generalize the classical "formal power series" of algebra and combinatorics. Central to the theory is a notion of representation for an observable, which generalizes the classical notion of "linear representation" for formal power series. As in the classical case, the representable observables coincide with an abstractly defined class of "rational" observables; this fact forms the foundation of our method.


  98. S.A. Smolka, B. Cui, Y. Dong, X. Du, K. Narayan Kumar, C.R. Ramakrishnan, I.V. Ramakrishnan, A. Roychoudhury, and D.S. Warren. Logic Programming and Model Checking (Invited Paper), Proceedings of PLILP/ALP'98, Pisa, Italy. Lecture Notes in Computer Science, Vol. 1490, Springer-Verlag (Sept. 1998).

    We report on the current status of the LMC project, which seeks to deploy the latest developments in logic-programming technology to advance the state of the art of system specification and verification. In particular, the XMC model checker for value-passing CCS and the modal mu-calculus is discussed, as well as the XSB tabled logic programming system, on which XMC is based. Additionally, several ongoing efforts aimed at extending the LMC approach beyond traditional finite-state model checking are considered, including compositional model checking, the use of explicit induction techniques to model check parameterized systems, and the model checking of real-time systems. Finally, after a brief conclusion, future research directions are identified.


  99. A. Roychoudhury, C.R. Ramakrishnan, I.V. Ramakrishnan, and S.A. Smolka, Tabulation-based Induction Proofs with Application to Automated Verification, Proceedings of the International Workshop on Tabulation in Parsing and Deduction (TAPD'98), Paris, France (April 1998).


  100. A. Philippou, O. Sokolsky, I. Lee, R. Cleaveland, and S.A. Smolka, Probabilistic Resource Failure in Real-Time Process Algebra, Proceedings of the Ninth International Conference on Concurrency Theory (CONCUR '98), Nice, France, Lecture Notes in Computer Science, Springer-Verlag (Aug. 1998).

    PACSR, a probabilistic extension of the real-time process algebra ACSR, is presented. The extension is built upon a novel treatment of the notion of a resource. In ACSR, resources are used to model contention in accessing physical devices. Here, resources are invested with the ability to fail and are associated with a probability of failure. The resulting formalism allows one to perform probabilistic analysis of real-time system specifications in the presence of resource failures. A probabilistic variant of Hennessy-Milner logic with until is also presented. The logic features an until operator that is parameterized by both a probabilistic constraint and a regular expression over observable actions. This style of parameterization allows the application of probabilistic constraints to complex execution fragments. A model-checking algorithm for the proposed logic is also given. Finally, PACSR and the logic are illustrated with a telecommunications example.


  101. X. Du, K.T. McDonnell, E. Nanos, Y.S. Ramakrishna, and S.A. Smolka, Software Design, Specification, and Verification: Lessons Learned from the Rether Case Study, Proceedings of the Sixth International Conference on Algebraic Methodology and Software Technology (AMAST'97), Sydney, Australia, Lecture Notes in Computer Science, Springer-Verlag (Dec. 1997).

    Rether is a software-based real-time Ethernet protocol developed at SUNY Stony Brook. The purpose of this protocol is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. It has been implemented in the FreeBSD 2.1.0 operating system, and is now being used to support the Stony Brook Video Server (SBVS), a low-cost, Ethernet LAN-based server providing real-time delivery of video to end users from the server's disk subsystem.

    Using local model checking, as provided by the Concurrency Factory specification and verification environment, we showed (for a particular network configuration) that Rether indeed makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. In the course of specifying and verifying Rether, we identified an alternative design of the protocol that warranted further study due to potential efficiency gains. Again using model checking, we showed that this alternative design also possesses the properties of interest.


  102. Y.S. Ramakrishna and S.A. Smolka, Partial-Order Reduction in the Weak Modal Mu-Calculus, Proceedings of the Eighth International Conference on Concurrency Theory (CONCUR '97), Warsaw, Poland, Lecture Notes in Computer Science, Springer-Verlag (July 1997).

    We present a partial-order reduction technique for local model checking of hierarchical networks of labeled transition systems in the weak modal mu-calculus. We have implemented our technique in the Concurrency Factory specification and verification environment; experimental results show that partial-order reduction can be highly effective in combatting state explosion in modal mu-calculus model checking.


  103. Y.S. Ramakrishna, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, T. Swift, and D.S. Warren, Efficient Model Checking Using Tabled Resolution, Proceedings of the Ninth International Conference on Computer Aided Verification (CAV '97), Haifa, Israel, Lecture Notes in Computer Science, Vol. 1243, Springer-Verlag (July 1997).

    XSB is a logic programming system developed at SUNY Stony Brook that extends Prolog-style SLD resolution with tabled resolution. The principal merits of this extension are that XSB terminates on programs having finite models, avoids redundant subcomputations, and computes the well-founded model of normal logic programs.

    In this paper, we demonstrate the feasibility of using the XSB tabled logic programming system as a programmable fixed-point engine for implementing efficient local model checkers. In particular, we present XMC, an XSB-based local model checker for a CCS-like value-passing language and the alternation-free fragment of the modal mu-calculus. XMC is written in under 200 lines of XSB code, which constitute a declarative specification of CCS and the modal mu-calculus at the level of semantic equations.

    In order to gauge the performance of XMC as an algorithmic model checker, we conducted a series of benchmarking experiments designed to compare the performance of XMC with the local model checkers implemented in C/C++ in the Concurrency Factory and SPIN specification and verification environments. After applying certain newly developed logic-programming-based optimizations (along with some standard ones), XMC's performance became extremely competitive with that of the Factory and showed promise in comparison with SPIN.
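
    To give a feel for what a specification "at the level of semantic equations" means, here is a toy global evaluator for the positive, alternation-free modal mu-calculus over a finite LTS. It is a didactic Python sketch only: XMC itself is roughly 200 lines of XSB Prolog and is a local, tabled checker, whereas this fragment naively iterates fixed points to convergence.

        def sat(formula, lts, env=None):
            """Set of states of `lts` satisfying `formula`.

            Formulas (positive, alternation-free fragment):
              ('tt',), ('ff',), ('var', X), ('and', f, g), ('or', f, g),
              ('diam', a, f), ('box', a, f), ('mu', X, f), ('nu', X, f).
            lts = (states, trans); trans: (state, action) -> set of successors.
            """
            states, trans = lts
            env = env or {}
            op = formula[0]
            if op == 'tt':
                return set(states)
            if op == 'ff':
                return set()
            if op == 'var':
                return env[formula[1]]
            if op == 'and':
                return sat(formula[1], lts, env) & sat(formula[2], lts, env)
            if op == 'or':
                return sat(formula[1], lts, env) | sat(formula[2], lts, env)
            if op == 'diam':              # <a>f: some a-successor satisfies f
                target = sat(formula[2], lts, env)
                return {s for s in states
                        if trans.get((s, formula[1]), set()) & target}
            if op == 'box':               # [a]f: all a-successors satisfy f
                target = sat(formula[2], lts, env)
                return {s for s in states
                        if trans.get((s, formula[1]), set()) <= target}
            if op in ('mu', 'nu'):        # least / greatest fixed point
                X, f = formula[1], formula[2]
                current = set() if op == 'mu' else set(states)
                while True:
                    new = sat(f, lts, dict(env, **{X: current}))
                    if new == current:
                        return current
                    current = new

    For example, the reachability property \(\mu X.\,\langle a \rangle tt \vee \langle b \rangle X\) is checked by calling sat(('mu', 'X', ('or', ('diam', 'a', ('tt',)), ('diam', 'b', ('var', 'X')))), lts). Tabled resolution, as provided by XSB, plays the role of the memoization that this naive evaluator lacks.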


  104. R. Cleaveland, I. Lee, P.M. Lewis, and S.A. Smolka, A Theory of Testing for Soft Real-Time Processes, Proceedings of the Eighth International Conference on Software Engineering and Knowledge Engineering, Lake Tahoe, Nevada, pp. 474-479. Published by Knowledge Systems Institute, Skokie, Illinois, ISBN 0-9641699-3-2 (May 1996).

    We present a semantic framework for soft real-time processes, i.e., processes that meet their deadlines most of the time. The key components of our framework are: (1) a specification language PDP (for Probabilistic and Discrete-time Processes) that incorporates both probabilistic and timing aspects of process behavior; (2) a formal operational semantics for PDP given as a recursively defined probability distribution function over process terms and atomic actions; and (3) a natural notion of a process passing a test with a certain probability, where a test is a process with the added capability of reporting success. By encoding deadlines as tests, the probability of a process passing a test may now be interpreted as the probability of the process meeting a deadline, thereby capturing the essence of soft real-time. A simple video frame transmission example illustrates our approach.


  105. Y.-J. Joung and S.A. Smolka, “Strong Interaction Fairness via Randomization,” 16th IEEE International Conference on Distributed Computing Systems, Hong Kong, pp. 475-483 (May 1996).

    We present Multi, a symmetric, fully distributed, randomized algorithm that, with probability 1, schedules multiparty interactions in a strongly fair manner. To our knowledge, Multi is the first algorithm for strong interaction fairness to appear in the literature. Moreover, the expected time taken by Multi to establish an interaction is a constant not depending on the total number of processes in the system. In this sense, Multi guarantees real-time response. Multi makes no assumptions (other than boundedness) about the time it takes processes to communicate. It thus offers an appealing tonic to the impossibility results of Tsay and Bagrodia and of Joung concerning strong interaction fairness in an environment, shared-memory or message-passing, in which processes are deterministic and communication time is non-negligible. Because strong interaction fairness is as strong a fairness condition as one might actually want to impose in practice, our results indicate that randomization may also prove fruitful for other notions of fairness lacking deterministic realizations and requiring real-time response.


  106. R. Cleaveland, P.M. Lewis, S.A. Smolka, and O. Sokolsky, The Concurrency Factory Software Development Environment. In T. Margaria and B. Steffen, editors, Tools and Algorithms for the Construction and Analysis of Systems (TACAS '96), volume 1055 of Lecture Notes in Computer Science, pages 391--395, Passau, Germany, March 1996. Springer-Verlag.

    The Concurrency Factory is an integrated toolset for specification, simulation, verification, and implementation of concurrent systems such as communication protocols and process control systems. Two themes central to the project are the following: the use of process algebra, e.g., CCS, ACP, CSP, as the underlying formal model of computation, and the provision of practical support for process algebra. By "practical" we mean that the Factory should be usable by protocol engineers and software developers who are not necessarily familiar with formal verification, and that it should be usable on problems of real-life scale, such as those found in the telecommunications industry.

    This demonstration highlights the following features of the Concurrency Factory: the graphical user interface tools VTView and VTSim, the local model checker for the alternation-free modal mu-calculus, and the graphical compiler that transforms VTView specifications into executable code.


  107. R. Cleaveland, P.M. Lewis, S.A. Smolka, and O. Sokolsky, The Concurrency Factory: A Development Environment for Concurrent Systems. In R. Alur and T. Henzinger, editors, Computer-Aided Verification (CAV '96), volume 1102 of Lecture Notes in Computer Science, pages 398--401, New Brunswick, NJ, July 1996. Springer-Verlag.

    The Concurrency Factory supports the specification, simulation, verification, and implementation of real-time concurrent systems such as communication protocols and process control systems. While the system uses process algebra as its underlying design formalism, the primary focus of the project is practical utility: the tools should be usable by engineers who are not familiar with formal models of concurrency, and they should be capable of handling large-scale systems such as those found in the telecommunications industry. This paper serves as a status report for the Factory project and briefly describes a case-study involving the GNU UUCP i-protocol.


  108. O. Sokolsky and S.A. Smolka, Local Model Checking for Real-Time Systems. Computer-Aided Verification '95, American Mathematical Society (July 1995).

    We present a local algorithm for model checking in a real-time extension of the modal mu-calculus. As such, the whole state space of the real-time system under investigation need not be explored, but rather only that portion necessary to determine the truth of the logical formula. To the best of our knowledge, this is the first local algorithm for the verification of real-time systems to appear in the literature.

    Like most algorithms dealing with real-time systems, we work with a finite quotient of the inherently infinite state space. For maximal efficiency, we obtain, on-the-fly, a quotient that is as coarse as possible in the following sense: refinements of the quotient are carried out only when necessary to satisfy clock constraints appearing in the logical formula or timed automaton used to represent the system under investigation. In this sense, our data structures are optimal with respect to the given formula and automaton.


  109. S. Zhang, S.A. Smolka, and O. Sokolsky, “On the Parallel Complexity of Bisimulation and Model Checking,” in Modal Logic and Process Algebra: A Bisimulation Perspective, CSLI Lecture Notes No. 53, pp. 257-288 (1995).


  110. S. Zhang, S.A. Smolka, and O. Sokolsky, On the Parallel Complexity of Model Checking in the Modal Mu-Calculus. Proceedings of Ninth Annual IEEE Symposium on Logic in Computer Science, pp. 154-163, London, England (July 1994).

    The modal mu-calculus is an expressive logic that can be used to specify safety and liveness properties of concurrent systems represented as labeled transition systems (LTSs). We show that Model Checking in the Alternation-Free Modal Mu-Calculus (MCAFMMC) -- the problem of checking whether an LTS is a model of a formula of the alternation-free fragment of the propositional modal mu-calculus -- is P-complete even for a very restrictive version of the problem. In particular, MCAFMMC is P-complete even if the formula is fixed and the LTS is deterministic, acyclic, and has fan-in and fan-out bounded by 2. The reduction used is from a restricted version of the circuit value problem (Does a circuit alpha output a 1 on inputs x_1, ..., x_n?) known as Synchronous Alternating Monotone Fanout 2 Circuit Value Problem. Interestingly, the reduction is not valid for the logic CTL, a logic subsumed in expressive power by the alternation-free modal mu-calculus, thereby leaving open the parallel complexity of model checking in CTL.

    Our P-completeness result is tight in the sense that placing any further non-trivial restrictions on either the formula or the LTS results in membership in NC for MCAFMMC. In particular, we exhibit efficient NC-algorithms for two potentially useful versions of the problem, both of which involve formulas containing a constant number of fixed point operators: 1) the LTS is a finite tree with bounded fan-out; and 2) the formula is additionally OR-free and the LTS is deterministic and over an action alphabet of bounded size.

    In the course of deriving the algorithm for 2), we give a parallel constant-time reduction from the alternation-free modal mu-calculus to Datalog with negation. We also provide a polynomial-time reduction in the other direction thereby establishing a kind of equivalence result regarding the expressive power of the two formalisms.


  111. R. Cleaveland, J.N. Gada, P.M. Lewis, S.A. Smolka, O. Sokolsky, and S. Zhang, The Concurrency Factory -- Practical Tools for Specification, Simulation, Verification, and Implementation of Concurrent Systems. Proceedings of DIMACS Workshop on Specification of Parallel Algorithms, American Mathematical Society, Princeton, NJ (May 1994).


  112. A. Uselton and S.A. Smolka, A Compositional Semantics for Statecharts using Labeled Transition Systems. Proceedings of CONCUR '94 -- Fifth International Conference on Concurrency Theory, Uppsala, Sweden, Springer-Verlag Lecture Notes in Computer Science (Aug. 1994).


  113. S.-H. Wu, S. A. Smolka and E. Stark, “Compositionality and Full Abstraction for Probabilistic I/O Automata,” Proceedings of CONCUR ’94 — Fifth International Conference on Concurrency Theory, Uppsala, Sweden (Aug. 1994).

    We augment the I/O automaton model of Lynch and Tuttle with probability, as a step toward the ultimate goal of obtaining a useful tool for specifying and reasoning about asynchronous probabilistic systems. Our new model, called probabilistic I/O automata, preserves the fundamental properties of the I/O automaton model, such as the asymmetric treatment of input and output and the pleasant notion of asynchronous composition. For certain classes of probabilistic I/O automata, we show that probabilistic behavior maps, which are an abstract representation of I/O automaton behavior in terms of a certain expectation operator, are compositional and fully abstract with respect to a natural notion of probabilistic testing.


  114. S. Yuen, R. Cleaveland, Z. Dayar, and S.A. Smolka, “Fully Abstract Characterizations of Testing Preorders for Probabilistic Processes,” Proceedings of CONCUR ’94 — Fifth International Conference on Concurrency Theory, Uppsala, Sweden (Aug. 1994).

    We present alternative characterizations of the testing preorders for probabilistic processes proposed in [CSZ92]. For a given probabilistic process, the characterization takes the form of a mapping from probabilistic traces to the interval [0, 1], where a probabilistic trace is an alternating sequence of actions and probability distributions over actions. Our results, like those of [CSZ92], pertain to divergence-free probabilistic processes, and are presented in two stages: probabilistic tests without internal τ-transitions are considered first, followed by probabilistic tests with τ-transitions. In each case, we show that our alternative characterization is fully abstract with respect to the corresponding testing preorder, thereby resolving an open problem in [CSZ92]. In the second case, we use the alternative characterization to show that the testing preorder is actually an equivalence relation.
    Finally, we give proof techniques, derived from the alternative characterizations, for establishing preorder relationships between probabilistic processes. The utility of these techniques is demonstrated by means of some simple examples.


  115. A. Uselton and S.A. Smolka, “A Process-Algebraic Semantics for Statecharts via State Refinement,” Proceedings of PROCOMET ’94, San Miniato, Italy. Proceedings published by Elsevier/North Holland (1994).

    In this paper, we put forth a process-algebraic semantics for statecharts agreeing with [19]. In particular, we provide a translation of statecharts into a process algebra with state refinement, a new operator introduced by the authors in [22]. The semantics of a statechart is then given by the labeled transition system (LTS) of its translation, as defined by the process algebra's structural operational semantics (SOS). The paper also discusses the benefits to be reaped by giving statecharts a process-algebraic semantics.


  116. O. Sokolsky and S.A. Smolka, “Incremental Model Checking in the Modal Mu-Calculus,” Computer-Aided Verification ’94, American Mathematical Society (June 1994).

    We present an incremental algorithm for model checking in the alternation-free fragment of the modal mu-calculus, the first incremental algorithm for model checking of which we are aware. The basis for our algorithm, which we call MCI (for Model Checking Incrementally), is a linear-time algorithm due to Cleaveland and Steffen that performs global (non-incremental) computation of fixed points. MCI takes as input a set Δ of changes to the labeled transition system under investigation, where a change constitutes an inserted or deleted transition; with virtually no additional cost, inserted and deleted states can also be accommodated. Like the Cleaveland-Steffen algorithm, MCI requires time linear in the size of the LTS in the worst case, but only time linear in the size of Δ in the best case. We give several examples to illustrate MCI in action, and discuss its implementation in the Concurrency Factory, an interactive design environment for concurrent systems.


  117. A. Uselton and S.A. Smolka, “State Refinement in Process Algebra,” Proceedings of NAPAW ’93 – Second North American Workshop on Process Algebra, Ithaca, NY (Aug. 1993).

    We introduce a state refinement operator into BPA with recursive specifications and present a comprehensive technical development of the resulting theory, BPA + SR. Our main technical results are that bisimulation is a congruence in BPA + SR and that guarded recursive specifications have unique solutions. We also show that bisimulation remains a congruence if the merge operator of ACP is added to BPA + SR. This is significant since action refinement, another approach to refinement in process algebra, does not in general preserve semantic equivalences based on interleavings of atomic actions. State refinement, to our knowledge, represents the first attempt to capture the essence of Harel's statecharts --- viz., hierarchically structured state transition behavior --- within a purely process algebraic setting.


  118. Y.-J. Joung and S.A. Smolka, “Multiparty Interaction: An Abstraction Mechanism for Distributed Programming,” Proceedings of SOFSEM ’92, Prague, Czechoslovakia (Nov. 1992).


  119. S. Zhang and S.A. Smolka, “Efficient Parallelization of Equivalence Checking Algorithms,” Proceedings of FORTE ’92 – Fifth International Conference on Formal Description Techniques, M. Diaz and R. Groz (eds.), pp. 133–146 (Oct. 1992).

    The paper presents the design and implementation of ARC, a tool for automated verification of concurrent systems. The tool is based on the untimed CSP language, its semantic models, and theory of refinement. We alleviate the combinatorial-explosion problem using ordered binary decision diagrams (OBDDs) for the internal representation of complex data structures: sets and labeled transition systems (LTSs). The semantically complex external choice operator is translated into the corresponding LTS using an optimized algorithm. This and some other implementation improvements allow verifying systems with up to \(10^{33}\) states, which is consistent with the capabilities of other OBDD-based approaches. Compared to two existing CSP tools, FDR and MRC, ARC has fewer language restrictions and is more memory efficient. A performance comparison based on the n-schedulers and dining-philosophers problems suggests that the checking algorithm of ARC is, in most cases, faster than those of the other tools.


  120. J.C.M. Baeten, J.A. Bergstra, and S.A. Smolka, “Axiomatizing Probabilistic Processes: ACP with Generative Probabilities,” Proceedings of CONCUR ’92 – Third International Conference on Concurrency Theory, Stony Brook, NY (Aug. 1992). Proceedings published as Lecture Notes in Computer Science, Vol. 630, W.R. Cleaveland (ed.), Springer-Verlag (1992).

    This paper is concerned with finding complete axiomatizations of probabilistic processes. We examine this problem within the context of the process algebra ACP and obtain as our end result the axiom system prACP-l, a version of ACP whose main innovation is a probabilistic asynchronous interleaving operator. Our goal was to introduce probability into ACP in as simple a fashion as possible. Optimally, ACP should be the homomorphic image of the probabilistic version in which the probabilities are forgotten. We begin by slightly weakening ACP to obtain the axiom system ACP-l. The main difference between ACP and ACP-l is that the axiom \(x + \delta = x\), which does not yield a plausible interpretation in the generative model of probabilistic computation, is rejected in ACP-l. We argue that this does not affect the usefulness of ACP-l in practice, and show how ACP can be reconstructed from ACP-l with a minimal amount of technical machinery. prACP-l is obtained from ACP-l through the introduction of probabilistic alternative and parallel composition operators, and a process graph model for prACP-l based on probabilistic bisimulation is developed. We show that prACP-l is a sound and complete axiomatization of probabilistic bisimulation for finite processes, and that prACP-l can be homomorphically embedded in ACP-l as desired. Our results for ACP-l and prACP-l are presented in a modular fashion by first considering several subsets of the signature. We conclude with a discussion of adding an iteration operator to prACP-l.


  121. R. Cleaveland, S.A. Smolka, and A. Zwarico, “Testing Preorders for Probabilistic Processes,” Proceedings of 19th International Colloquium on Automata, Languages and Programming, Vienna, Austria (July 1992). Proceedings published as Lecture Notes in Computer Science, Vol. 623, W. Kuich (ed.), Springer-Verlag (1992).

    We present a testing preorder for probabilistic processes based on a quantification of the probability with which processes pass tests. The theory enjoys close connections with the classical testing theory of De Nicola and Hennessy in that whenever a process passes a test with probability 1 (respectively some nonzero probability) in our setting, then the process must (respectively may) pass the test in the classical theory. We also develop an alternative characterization of the probabilistic testing preorders that takes the form of a mapping from probabilistic traces to the interval [0, 1], where a probabilistic trace is an alternating sequence of actions and probability distributions over actions. Finally, we give proof techniques, derived from the alternative characterizations, for establishing preorder relationships between probabilistic processes. The utility of these techniques is demonstrated by means of some simple examples.


  122. Y.-J. Joung and S.A. Smolka, “A Comprehensive Study of the Complexity of Multiparty Interaction,” Proceedings of 19th Annual ACM Symposium on Principles of Programming Languages, Albuquerque, New Mexico (Jan. 1992).

    We present a taxonomy of languages for multiparty interaction, which covers all proposals of which we are aware. Based on this taxonomy, we present a comprehensive analysis of the computational complexity of the multiparty interaction implementation problem, the problem of scheduling multiparty interactions in a given execution environment.


  123. Y.-J. Joung and S.A. Smolka, “Coordinating First-Order Multiparty Interactions,” Proceedings of 18th Annual ACM Symposium on Principles of Programming Languages, Orlando, Florida (Jan. 1991).

    A first-order multiparty interaction is an abstraction mechanism that defines communication among a set of formal process roles. Actual processes participate in a first-order interaction by enrolling in roles, and execution of the interaction can proceed when all roles are filled by distinct processes. As in CSP, enrollment statements can serve as guards in alternative commands. The enrollment-guard scheduling problem, then, is to enable the execution of first-order interactions through the judicious scheduling of roles to processes currently ready to execute enrollment guards.
    We present a fully distributed and message-efficient algorithm for the enrollment-guard scheduling problem, the first such solution of which we are aware. We also describe several extensions of the algorithm, including generic roles; dynamically changing environments, where processes can be created and destroyed at run time; and nested enrollment, which allows interactions to be nested.


  124. C.-C. Jou and S.A. Smolka, “Equivalences, Congruences, and Complete Axiomatizations for Probabilistic Processes,” Proceedings of CONCUR ’90 – First International Conference on Concurrency Theory, Amsterdam, the Netherlands (Aug. 1990). Proceedings published as Lecture Notes in Computer Science, Vol. 458, J.C.M. Baeten and J.W. Klop (eds), Springer-Verlag (1990).

    We study several notions of process equivalence—viz. trace, failure, ready, and bisimulation equivalence—in the context of probabilistic labeled transition systems. We show that, unlike nondeterministic transition systems, “maximality” of traces and failures does not increase the distinguishing power of trace and failure equivalence, respectively. Thus, in the probabilistic case, trace and maximal trace equivalence coincide, and failure and ready equivalence coincide.
    We then propose a language PCCS for communicating probabilistic processes, and present its operational semantics. We show that in PCCS, trace equivalence and failure equivalence are not congruences, whereas Larsen-Skou probabilistic bisimulation is. Furthermore, we prove that trace congruence, the largest congruence contained in trace equivalence, lies between failure equivalence and bisimulation equivalence in terms of its distinguishing strength.
    Finally, we stress the similarity between classical process algebra and probabilistic process algebra by exhibiting sound and complete axiomatizations of bisimulation equivalence for finite and finite state probabilistic processes, which are natural extensions of the classical ones (R. Milner, “A complete inference system for a class of regular behaviours,” Journal of Computer and System Science, Vol. 28, 1984). Of particular interest is the rule for eliminating unguarded recursion, which characterizes the possibility of infinite syntactic substitution as a zero-probability event.


  125. S.A. Smolka and B. Steffen, “Priority as Extremal Probability,” Proceedings of CONCUR ’90 – First International Conference on Concurrency Theory, Amsterdam, the Netherlands (Aug. 1990). Proceedings published as Lecture Notes in Computer Science, Vol. 458, J.C.M. Baeten and J.W. Klop (eds), Springer-Verlag (1990).

    We extend the stratified model of probabilistic processes to obtain a very general notion of process priority. The main idea is to allow probability guards of value 0 to be associated with alternatives of a probabilistic summation expression. Such alternatives can be chosen only if the non-zero alternatives are precluded by contextual constraints. We refer to this model as one of “extremal probability” and to its signature as PCCS?. We provide PCCS? with a structural operational semantics and a notion of probabilistic bisimulation, which is shown to be a congruence. Of particular interest is the abstraction PCCSp of PCCS?, in which all non-zero probability guards are identified. PCCSp represents a customized framework for reasoning about priority, and covers all features of process algebras proposed for reasoning about priority that we know of.


  126. J. Malhotra, S.A. Smolka, A. Giacalone, and R. Shapiro, “Winston: A Tool for the Hierarchical Design and Simulation of Concurrent Systems,” in Specification and Verification of Concurrent Systems, Springer-Verlag Series on Workshops in Computing, C.J. van Rijsbergen (ed.), ISBN 3-540-19625-0, Springer-Verlag, London (July 1990).

    Winston is an interactive environment that offers hierarchical editing, analysis, and simulation of concurrent systems. Winston is built using the Design/OA™ Development System and hence inherits all the graphics capabilities of Design/OA. Winston supports the development of hierarchically structured networks of processes, in which networks of communicating processes may be decomposed recursively into subnetworks. A system specification created using Winston can be studied through multi-level simulation, and by analysis tools which include efficient procedures for deciding Milner's strong and weak observational equivalence between Finite State Processes.


  127. R. van Glabbeek, S.A. Smolka, B.U. Steffen, and C.M.N. Tofts, “Reactive, Generative, and Stratified Models of Probabilistic Processes,” Proceedings of Fifth Annual IEEE Symposium on Logic in Computer Science, pp. 130-141, Philadelphia, PA (June 1990). Invited for submission to Special Issue of Information and Computation on the Fifth Annual IEEE Symposium on Logic in Computer Science.

    We introduce three models of probabilistic processes, namely, reactive, generative, and stratified. These models are investigated within the context of PCCS, an extension of Milner's SCCS in which each summand of a process summation expression is guarded by a probability and the sum of these probabilities is 1. For each model, we present a structural operational semantics of PCCS and a notion of bisimulation equivalence which we prove to be a congruence. We also show that the models form a hierarchy: the reactive model is derivable from the generative model by abstraction from the relative probabilities of different actions, and the generative model is derivable from the stratified model by abstraction from the purely probabilistic branching structure. Moreover, the classical nonprobabilistic model is derivable from each of these models by abstraction from all probabilities.
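
    The abstraction steps in this hierarchy are easy to state operationally: passing from the generative to the reactive model amounts to renormalizing, per action, the generative distribution, and passing to the classical model amounts to discarding probabilities altogether. The Python sketch below makes these two steps concrete for a single process state; the representation and names are mine, not the paper's.

        from collections import defaultdict

        def generative_to_reactive(dist):
            """dist: (action, target) -> probability, summing to 1 overall.
            Returns, per action, the conditional distribution over targets,
            i.e., abstracts from the relative probabilities of actions."""
            totals = defaultdict(float)
            for (a, _), p in dist.items():
                totals[a] += p
            return {(a, t): p / totals[a] for (a, t), p in dist.items()}

        def to_nonprobabilistic(dist):
            """Abstract from all probabilities: keep the transitions only."""
            return {(a, t) for (a, t), p in dist.items() if p > 0}

    For instance, the generative state {('a', s1): 0.25, ('a', s2): 0.25, ('b', s3): 0.5} becomes the reactive {('a', s1): 0.5, ('a', s2): 0.5, ('b', s3): 1.0}: within each action the relative probabilities are preserved, but the 1:1 weighting between a and b is forgotten.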


  128. Y.-J. Joung and S.A. Smolka, “Efficient, Dynamically Structured Multiprocess Communication,” Proceedings of 28th Allerton Conference on Communication, Control and Computing, Urbana, IL (Oct. 1990).


  129. Y.-J. Joung and S.A. Smolka, “A Completely Distributed and Message-Efficient Implementation of Synchronous Multiprocess Communication,” Proceedings of the 19th Annual International Conference on Parallel Processing, Chicago, IL (Aug. 1990).


  130. A. Giacalone, C.-J. Jou, and S.A. Smolka, “Algebraic Reasoning for Probabilistic Processes,” Proceedings of IFIP WG 2.2/2.3 Working Conference on Programming Concepts and Methods, Sea of Gallilee, Israel (April 1990). Proceedings published by North-Holland (1990).


  131. A. Giacalone, J. Heller, A. Kaufman, M. Kifer, P.M. Lewis, P. Mishra, I.V. Ramakrishnan, S.A. Smolka, and D.S. Warren, “VERITAS: Visualization Environment Research In The Applied Sciences,” Proceedings of the SPIE/SPSE Symposium on Electronic Imaging (1989).

    VERITAS (Visualization Environment Research In The Applied Sciences) involves the design and development of tools for the creation of user environments for scientific computing systems. The initial goal has been to provide a tool-based environment in which a scientist can easily and rapidly construct, with no programming effort, a powerful customized graphical user interface to an existing scientific application. The ultimate goal is to support in a similar interactive, high-level way the development of the entire scientific application system by the scientist, to allow a tighter integration of the application model and the graphical interface. The project involves research and development in visualization and interactive data presentation techniques, knowledge representation and management systems, and high-level specification and programming languages.


  132. A. Giacalone, A. Kaufman, S.A. Smolka, and D.S. Warren, “Visualization Tools for the Applied Sciences,” Proceedings of Third International Conference on Human-Computer Interaction, Boston, MA (Sept. 1989).


  133. G. Singh, S.A. Smolka, and I.V. Ramakrishnan, “Distributed Algorithms for Tree Pattern Matching,” Proceedings of Second International Workshop on Distributed Algorithms, Amsterdam (July 1987). Proceedings published as Lecture Notes in Computer Science, Vol. 312, J. van Leeuwen (ed.), Springer-Verlag (1988).

    We present efficient distributed algorithms for the Tree Pattern Matching problem. To the best of our knowledge, these are the first distributed algorithms of this nature. Tree pattern matching is a fundamental operation in many programming tasks such as code optimization and automated theorem proving, and has a number of applications in distributed systems. We present both a top-down and a bottom-up algorithm for linear tree pattern matching (where any variable occurs at most once in the pattern) for the case in which the subject tree is completely distributed, one node per processor. The pattern is assumed to reside within the local memory of each processor. Let the subject tree have \(n\) nodes and height \(h_s\), and the pattern \(m\) nodes and height \(h_p\). The top-down algorithm has bit complexity \(O(n \, h_p \log m)\) and time complexity \(O(h_s)\), while the bottom-up algorithm has bit complexity \(O(n \, h_p)\) and time complexity \(O(h_p)\). However, the bottom-up algorithm requires preprocessing of the subject and pattern trees. An efficient extension of these distributed algorithms to the case of multiple patterns is also given.


  134. T. Bolognesi and S.A. Smolka, “Fundamental Results for the Verification of Observational Equivalences: A Survey,” Proceedings of Seventh International Symposium on Protocol Specification, Testing and Verification, Zurich, Switzerland (May 1987). Proceedings published as Protocol Specification, Testing, and Verification, VII, H. Rudin and C. H. West (eds.), North-Holland (1988).


  135. S.A. Smolka and R.E. Strom, “A CCS Semantics for NIL,” Proceedings of the IFIP TC 2/WG 2.2 Working Conference on Formal Description of Programming Concepts, Ebberup, Denmark, pp. 347–368 (Aug. 1986). Proceedings published as Formal Description of Programming Concepts III (M. Wirsing, ed.), North-Holland (1987).

    We present a syntax-directed translation of NIL, a high-level language for distributed systems programming, into CCS, Milner's Calculus of Communicating Systems. This translation presents unique problems because of NIL's highly dynamic nature, and makes full use of CCS's descriptive facilities. In particular, we consider NIL constructs for dynamic creation and deletion of processes and communication channels, queued synchronous and asynchronous message passing, nondeterministic message selection, and exception handling. A NIL implementation of a simple command shell is used to illustrate the translation procedure. We discuss various issues and open problems concerning the suitability of CCS as an abstract semantics for NIL.


  136. S.A. Smolka, S.K. Debray, and A.J. Frank, “Testing Protocol Robustness the CCS Way,” Proceedings of Fourth International Symposium on Protocol Specification, Testing and Verification, Skytop Lodge, PA, pp. 93–108 (June 1984). Proceedings published as Protocol Specification, Testing, and Verification, IV, Y. Yemini, R. Strom, and S. Yemini (eds.), North-Holland (1985).


  137. P.C. Kanellakis and S.A. Smolka, “On the Analysis of Cooperation and Antagonism in Networks of Communicating Processes,” Proceedings of the Fourth ACM Symposium on the Principles of Distributed Computing, Minaki, Ontario, Canada (Aug. 1985).

    We propose a new method for the analysis of cooperative and antagonistic properties of communicating finite state processes (FSPs). This algebraic technique is based on a composition operator and on the notion of possibility equivalence among FSPs. We demonstrate its utility by showing that potential blocking, termination, and lockout can be decided in polynomial time for loosely connected networks of tree FSPs. Potential blocking and termination are examples of cooperative properties, while lockout is an antagonistic one. For loosely connected networks of (the more general) acyclic FSPs, the cooperative properties become NP-complete and the antagonistic ones PSPACE-complete. For tightly coupled networks of tree FSPs, we also have NP-completeness for the cooperative properties. For the harder case of FSPs with cycles, we provide a natural extension of the method.


  138. S.K. Debray, A.J. Frank, and S.A. Smolka, “On the Existence and Construction of Robust Communication Protocols for Unreliable Channels,” Proceedings of the Fourth Conference on Foundations of Software Technology and Theoretical Computer Science, Bangalore, India, pp. 136-151 (Dec. 1984). Proceedings published as Lecture Notes in Computer Science, Vol. 181, Springer-Verlag, M. Joseph and R. Shyamasundar (eds.) (1984).

    A simple necessary and sufficient condition for the existence of robust communication protocols for arbitrary alphabets and a large class of transmission errors is presented. This class of errors, called transformation errors, consists of those errors where symbols (messages) may be lost or corrupted to other symbols. The proof is used as the basis of a procedure for automatically constructing robust protocols for transformation errors. The protocols generated are small in size, despite not being custom-designed. The results presented generalize and expand upon those of Aho et al. [AUY79]. Two protocols are constructed to illustrate our technique, and are contrasted with those of Aho et al.


  139. S.A. Smolka, “Polynomial-Time Analysis for a Class of Communicating Processes,” Proceedings of the Sixth International Symposium on Programming, Toulouse, France (April 1984). Proceedings published as Lecture Notes in Computer Science Series, Vol. 167, Springer-Verlag (1984).

    "Can a process terminate prematurely?" is a computationally difficult question to answer for finite-state communicating processes. We present an algorithm for this problem that runs in polynomial (quadratic) time for a significant class of communicating processes. The underlying model is algebraic and represents a restriction of Milner's CCS to finite-state systems with one-to-one communication.
    In order to answer the question of premature termination for process \(P_i\), we express the problem as a two-player game: \(P_i\) versus the rest of the network. We then show that this problem can be restated in terms of the network's failures, in the sense of [HBR]. This leads to an algorithm based on an efficient procedure for computing the failures of a network. An on-board comparator and a rebound sorter are used as illustrative examples.


  140. P.C. Kanellakis and S.A. Smolka, “CCS Expressions, Finite State Processes, and Three Problems of Equivalence,” Proceedings of the 2nd ACM Symposium on the Principles of Distributed Computing, Montreal, Canada (Aug. 1983).

    We examine the computational complexity of testing finite state processes for equivalence in Milner's Calculus of Communicating Systems (CCS). The equivalence problems in CCS are presented as refinements of the familiar problem of testing whether two nondeterministic finite automata (NFA) are equivalent, i.e., accept the same language. Three notions of equivalence proposed for CCS are investigated, namely, observational equivalence, strong observational equivalence, and failure equivalence. We show that observational equivalence can be tested in polynomial time. As defined in CCS, observational equivalence is the limit of a sequence of successively finer equivalence relations \(\sim_k\), where \(\sim_1\) is nondeterministic finite automaton equivalence. We prove that, for each fixed \(k\), deciding \(\sim_k\) is PSPACE-complete. We show that strong observational equivalence can be decided in polynomial time by reducing it to generalized partitioning, a new combinatorial problem of independent interest. Finally, we demonstrate that testing for failure equivalence is PSPACE-complete, even for a very restricted type of process.
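
    The polynomial-time result for strong observational equivalence is, in modern terms, a partition-refinement computation. The Python sketch below is the textbook naive refinement loop for strong bisimulation on a finite LTS; it is my formulation, included only to make the flavor of the algorithmic problem concrete, not the paper's reduction to generalized partitioning.

        def bisimulation_partition(states, actions, trans):
            """Coarsest strong-bisimulation partition of a finite LTS.
            trans: (state, action) -> set of successor states.
            Naive refinement (polynomial, but far from the best known
            bounds): split blocks by which blocks each state can reach
            under each action, until the partition is stable."""
            partition = [set(states)]
            changed = True
            while changed:
                changed = False
                new_partition = []
                for block in partition:
                    def signature(s):   # blocks reachable from s, per action
                        return frozenset(
                            (a, i)
                            for a in actions
                            for i, b in enumerate(partition)
                            if trans.get((s, a), set()) & b
                        )
                    groups = {}
                    for s in block:
                        groups.setdefault(signature(s), set()).add(s)
                    new_partition.extend(groups.values())
                    if len(groups) > 1:
                        changed = True
                partition = new_partition
            return partition

    Two states are strongly equivalent iff they end up in the same block. The paper's own algorithm, and the later Paige-Tarjan refinement, achieve much better bounds than this naive loop.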


  141. A. Danesi, S.A. Smolka, U. Chinappi, “Modeling and Flight Simulation of an Active Configured Aircraft under M.L.S. Guidance,” Proceedings of the NATO AGARD Conference on Modeling and Simulation of Avionics Systems and Command, Control and Communications Systems, AGARD Conference Proceedings No. CP-268, Paris, France (Oct. 1979).


  142. A. Danesi, S.A. Smolka, F. Borrini, “Trajectory Behaviour of a Control Configured Aircraft Subjected to Random Disturbances,” Proceedings of the NATO AGARD Conference on Aerodynamic Characteristics of Controls, AGARD Conference Proceedings No. CP-262 (1979).


  143. K.B. MacNichol, S.A. Smolka, L. Morino, “Atmospheric Boundary Layer Flows Around Prismatic Shapes,” Proceedings of the 14th Annual Meeting of the Society of Engineering Science, Lehigh, PA (Nov. 1977).


  144. R.D. Preuss, S.A. Smolka, E.O. Suciu, L. Morino, “Nonpotential Aerodynamics for Windmills in Shear Winds,” Proceedings of the 3rd Biennial Conference on Wind Energy Conversion Systems, Washington, D.C. (Sept. 1977).


  145. J. Broderick, S.A. Smolka, L. Morino, “Recent Developments in the Green’s Function Method for Aerodynamics,” Proceedings of the 2nd IMACS (AICA) International Symposium on Computer Methods for Partial Differential Equations, Lehigh, PA (June 1977).

Articles in Books

  1. M. Kifer and S.A. Smolka, "OSP: An Environment for Operating Systems Projects." Appears as Appendix D in Operating Systems: Internals and Design Principles, W. Stallings, pp. 723-730, Prentice-Hall (2001).


  2. E.W. Stark and S.A. Smolka, "A Complete Axiom System for Finite-State Probabilistic Processes," in Proof, Language and Interaction: Essays in Honour of Robin Milner (G. Plotkin, C.P. Stirling, and M. Tofte, eds.), MIT Press (2000).

    A complete equational axiomatization of probabilistic bisimulation for finite-state probabilistic processes is presented. It extends Milner's complete axiomatization of regular behaviors, which appeared in Volume 28 of the Journal of Computer and System Sciences (1984).


  3. W.R. Cleaveland and S.A. Smolka, "Process Algebra," in Encyclopedia of Electrical Engineering (J.G. Webster, editor), John Wiley & Sons (1999).

    Process algebra represents a mathematically rigorous framework for modeling concurrent systems of interacting processes. The process-algebraic approach relies on equational and inequational reasoning as the basis for analyzing the behavior of such systems. This chapter surveys some of the key results obtained in the area within the setting of a particular process-algebraic notation, the Calculus of Communicating Systems (CCS) of Milner. In particular, the Structural Operational Semantics approach to defining operational behavior of languages is illustrated via CCS, and several operational equivalences and refinement orderings are discussed. Mechanisms are presented for deducing that systems are related by the equivalence relations and refinement orderings, and different process-algebraic modeling formalisms are briefly surveyed.


  4. A. Giacalone and S.A. Smolka, "Integrated Environments for Formally Based Design and Simulation of Concurrent Systems: A Non-Procedural Approach," in Visual Programming Environments, E.P. Glinert, IEEE Computer Society Press (1990).


  5. P. Wegner and S.A. Smolka, "Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives," in Tutorial: Distributed Software Engineering, S.M. Shatz and J.-P. Wang, eds., IEEE Computer Society Press, pp. 148-164 (1989).


  6. P. Wegner and S.A. Smolka, "Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives," in Concurrent Programming (N. Gehani and A. McGettrick, eds.), Addison-Wesley (1987).


  7. S.A. Smolka and P.C. Kanellakis, "Recent Results in the Static Analysis of Communicating Finite State Processes," in Current Advances in Distributed Computing and Communications, Computer Science Press, Rockville, Maryland, pp. 208-233 (1986).


  8. P. Wegner and S.A. Smolka, "Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives," in Programming Languages: A Grand Tour, 3rd edition, Ellis Horowitz, Computer Science Press (1986).


