November 15-16, 2010, Redmond, Washington

Workshop Description

A two-day workshop on Usable Verification held at Microsoft Research in Redmond, Washington, on Monday and Tuesday, November 15-16. The workshop has travel and logistics support from the United States National Science Foundation and Microsoft Research.

Formal verification has made rapid progress in recent years, and its tools and ideas are increasingly used in industry in a variety of ways. Many of the challenging problems of building secure software systems, programming multicore processors, cloud-based systems, and cyber-physical systems will require formal support for modeling and analysis. There is also a growing number of applications in computer science research and in other disciplines. A key challenge is to broaden the audience for this technology by making the tools more usable and by applying them to a wider range of problems.

The Workshop on Usable Verification provides a forum for discussing cutting-edge ideas in the integration, use, and usability of advanced verification tools. The goal is to stimulate cooperation and healthy competition among different verification projects and groups toward measurably advancing usability.

Workshop Program

Monday (Nov 15)
8:15-9:00: Gathering at MSR Building 99
9:00-10:30: Usability (Chair: Lenore Zuck):
Jeannette Wing (CMU): Usable Verification: Balancing Thinking and Automation
Kathi Fisler (WPI): Putting the User in Usable Verification
Pete Manolios (Northeastern): Computer-aided reasoning for the Masses
Mike Lowry (NASA Ames): Towards Information-rich User Interface Paradigms for Formal Verification
Willem Visser (Stellenbosch): Towards Checking the Usefulness of Verification Tools
Focus: What makes verification tools more or less usable? How can we measure usability? What is the role of automation versus the human-in-the-loop? How do we target tools to specific user communities: students, beginners, and experts?
11:00-12:30: Verification in the large (Chair: Jeannette Wing):
Lenore Zuck (NSF): Usable Verification for Useful Systems
John Rushby (SRI): What use is verified software?
Tom Ball (MSR): Towards Scalable Modular Checking of User-defined Properties
Saddek Bensalem (Verimag): Compositional verification
John O'Leary (Intel): Formal verification at Intel
Focus: What is the role of verification in designing and building reliable systems? How can verification help in decomposing problems and composing solutions? What new ideas are needed to scale up the technology and dial down our ambition for verifying large systems? How do we connect verification in the small and in the large?
2:00-3:30: Modeling (Chair: John Rushby):
Robert France (Colo. State): Bridging the Formal Techniques and Model-Driven Engineering Divide
Joanne Atlee (Waterloo): Easing Formal Verification through Domain-Specific Languages
Arie Gurfinkel (SEI): Raising the level of abstraction through partial models
Arvind (MIT): TBA
Wolfram Schulte (MSR): TBA
Focus: Can we raise the level of abstraction in design? Is model-based engineering the most effective way to gain an audience for verification? How does modeling impact usable verification? How do verification tools make modeling more effective? How can verification technology reshape the design lifecycle?
4:00-5:30: CPS (Chair: John Harrison):
David Monniaux (Verimag): Static analysis: Coming out of the niche?
Sanjit Seshia (UC Berkeley): The Challenge of Environment Modeling in Verifying Cyber-Physical Software Systems
Andre Platzer (CMU): Integrative challenges of Cyber-physical systems
Devesh Bhatt (Honeywell): Effective verification of Flight Critical Software Systems
Sumit Gulwani (MSR): Usable Synthesis
Focus: Can principles from software modeling and verification be applied to physical and cyber-physical systems? How can verification technology be made more useful to domain experts in the design and analysis of cyber-physical systems? How can verification help non-experts be effective at building cyber-physical systems?
6:00 PM: Dinner (Indian Buffet)
Tuesday (Nov 16)
9:00-10:30: Inference Tools (Chair: Tom Ball):
Leo Freitas (Newcastle): Learning from an expert's proof (AI4FM)
Chris Conway (NYU): Leveraging SMT
Cesare Tinelli (Iowa): Leveraging SMT (contd.)
Nikolaj Bjorner (MSR): Unleashing the verification genie in the cloud
Patrice Godefroid (MSR): Usable Verification of Existential Properties (aka Bug Finding)
Focus: In what ways can improved automation in inference tools help make verification technology more usable? How can the automation itself be improved to address usability? What are novel ways of combining and delivering inference capabilities?
11:00-12:30: Verification tools (Chair: Alan Hu):
Mattias Ulbrich (Karlsruhe): Software verification
Bob Brayton (UC Berkeley): Continued relevance of Bit-level Verification Research
Grigore Rosu (UIUC): Matching Logic: A New Program Verification Approach
Ken McMillan (MSR): Tools that explain their failures
Rustan Leino (MSR): Usable Auto-active Verification
Focus: What technologies are succeeding or failing on the usability front? Where are the bottlenecks to usability? Is push-button automation a boon or a bane? Can verification tools deliver better feedback? What experiments can we conduct to gauge usability? How can we embed verification tools into the design lifecycle?
2:00-3:30: Analysis tools (Chair: David Monniaux):
Aarti Gupta (NEC): Precise and scalable program analysis at NEC
Jens Palsberg (UCLA): Analysis of Parallel Languages
Koushik Sen (UC Berkeley): Separating Functional and Parallel Correctness
Mike Ernst (UW): User-specified type systems for domain-specific verification
Corina Pasareanu (NASA/CMU): Using Symbolic (Java) PathFinder at NASA
Focus: What will make analysis tools more usable? Should one give up precision to make analysis usable? Can less precise analysis be useful? Can push-button tools be usable? What are new challenge areas for analysis tools? How can analysis tools exploit inference and verification capabilities?
4:00-5:00: Closing Thoughts: Challenges and Next Steps (Chair: Shankar)
Focus: What are the lessons learned? Where has verification technology succeeded or failed? Can we make it more usable for verification experts, domain experts, and students? Are there missed opportunities for exploiting verification in nontraditional domains? Are we doing a poor job of getting the word out? How can we make it easier to develop usable verification technology? What are the big experiments to measure and improve usability?

Other Submissions

Matt Kaufmann and J Strother Moore: The Need for Humans in the Loop: Meeting the Grand Challenge of Full Functional Verification
Leslie Lamport: Blueprints
Sagar Chaki, Arie Gurfinkel, and Ofer Strichman: Using Regression Verification to Revalidate Real-Time Software on Multicore Computers
Natarajan Shankar: A Tool Bus for Anytime Verification

Workshop Location

The meeting will be held in Microsoft Redmond Building 99. Directions, maps, and a list of the closest hotels may be found at

Workshop Slides



Workshop Organizers

Tom Ball (MSR), Lenore Zuck (NSF), and N. Shankar