NEW: Earn an IEEE Reliability Society Certificate by attending any two of the tutorials. 

Students whose university grants credit for attending tutorials can also ask the tutorial chair for a detailed certificate indicating the titles and durations of the attended tutorials. 

Participants interested in receiving a certificate should have their tutorial attendance sheet signed by the instructor after each tutorial they attend. 

Monday, October 23, 9:00am-12:30pm:

T6: Automated, Cloud-based and Real Time Software Reliability Growth Modeling

Monday, October 23, 2:00pm-5:30pm:

T8: Testing Reliable Software in an Agile Context – Benefits, Challenges, Solutions
T5: ODC - Agile Root Cause Analysis

Tuesday, October 24, 2:00pm-5:30pm:

T2: Mining AndroZoo
T4: Data Science and Measurement in Software Reliability Engineering

Wednesday, October 25, 2:00pm-5:30pm:

T1: Applications of Survivability Modeling to Mission-Critical Systems Assessment

Thursday, October 26, 9:00am-12:30pm:

T3: Frama-C, a Collaborative Framework for C Code Verification
T7: Anomaly Detection in Networks

Tutorial 1: Applications of Survivability Modeling to Mission-Critical Systems Assessment

Presenters: Kishor Trivedi (Duke Univ.), Alberto Avritzer (Independent Consultant)  

The goal of this tutorial is to introduce the concept and definition of survivability and to demonstrate approaches to model and quantify survivability in systems and networks. We define survivability as the “ability to provide services in compliance with the requirement even in the presence of major and minor failures in network infrastructure and service platforms caused by undesired events that might be external or internal”. Network survivability is quantified as defined by the ANSI T1A1.2 committee: the transient performance from the instant an undesirable event occurs until a steady state with an acceptable performance level is attained.

We present approaches for survivability assessment in several mission-critical domains. Specifically, we cover the following:

  1. survivability modeling of smart grids,
  2. assessment of the impact of Hurricane Sandy on the NY metropolitan area,
  3. cyber-security applications,
  4. modeling of software engineering disasters due to global distance,
  5. applications to water, gas and power infrastructure,
  6. modeling of high-availability systems.
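As an illustration of the ANSI T1A1.2-style transient measure defined above, the sketch below derives a recovery time and a cumulative service deficit from a post-failure performance trace. This is not part of the tutorial material; the trace values and the acceptable performance level are hypothetical.

```python
def survivability_metrics(trace, acceptable, dt=1.0):
    """Compute transient-window metrics from performance samples taken at
    interval dt, starting at the instant of the undesirable event.

    Returns (recovery_time, service_deficit):
      recovery_time   - first time the performance reaches the acceptable level
      service_deficit - accumulated shortfall below the acceptable level
    """
    deficit = 0.0
    recovery = None
    for k, p in enumerate(trace):
        if p < acceptable:
            deficit += (acceptable - p) * dt
        elif recovery is None:
            recovery = k * dt
    return recovery, deficit

# Hypothetical throughput trace (fraction of nominal) sampled after a failure
trace = [0.2, 0.4, 0.6, 0.8, 0.95, 1.0, 1.0]
recovery, deficit = survivability_metrics(trace, acceptable=0.9)
print(recovery, deficit)
```

In this toy trace the system climbs back above the 0.9 threshold after four time units, and the deficit integrates how much service was lost during the transient window.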

Tutorial 2: Mining AndroZoo 

Presenters: Li Li, Tegawendé Bissyandé, Jacques Klein (Univ. of Luxembourg)   

Research on Android has boomed in recent years and now engages a growing number of established researchers worldwide. At the University of Luxembourg, we have investigated various research directions to produce approaches, tools and datasets that deepen knowledge of Android app development practices and improve app analyses for security purposes. This tutorial is built around the use of AndroZoo, a collection of over 5 million Android apps that we have built over the years and released to the community to encourage large-scale, reproducible in-the-wild experiments.

Overall, this tutorial covers four parts: (1) providing descriptive information on the wealth of data available in AndroZoo, and how such information can be retrieved via public APIs; (2) walking through the different steps of analysing Android apps, as well as the capabilities and shortcomings of state-of-the-art tools such as IccTA and DroidRA; (3) showcasing case studies around the analysis of app versions, such as app lineages and repackaged apps; (4) exploring the challenges of malware detection in the wild and discussing the investigations we have conducted towards addressing some of those challenges.

At the end of this tutorial, we expect the audience to:

  • realize the opportunities of app mining with a growing collection of market apps (i.e., AndroZoo).
  • be able to take advantage of our datasets to further their research (e.g., be familiar with the APIs we provide for accessing AndroZoo).
  • become familiar with the most basic steps of Android app analysis.
  • appreciate the added value of tools such as IccTA and DroidRA for Android app analysis in practice.
  • understand the current state of app evolution analysis.
  • comprehend the various challenges in large-scale Android malware analysis.

Tutorial 3: Frama-C, a Collaborative Framework for C Code Verification

Presenters: Nikolai Kosmatov, Julien Signoles (CEA List)

The Frama-C software analysis platform provides a collection of scalable, interoperable, and sound software analyses for the industrial analysis of C code. The platform is based on a kernel which hosts analyzers as collaborating plug-ins and uses the ACSL formal specification language as a lingua franca. Frama-C includes plug-ins based on abstract interpretation, deductive verification, monitoring and test case generation, as well as a series of derived plug-ins which build elaborate analyses upon the basic ones. This large variety of analysis techniques and its unique collaboration capabilities make Frama-C well suited both for developing new code analyzers and for applying code analysis techniques in many academic and industrial projects.

This tutorial will take participants on a journey through the Frama-C world via its main plug-ins: after a general introduction, we will present the abstract-interpretation-based plug-in Value and its recent redesign Eva, the deductive verification tool WP, the runtime verification tool E-ACSL and the test generation tool PathCrawler. The last part will present some of their possible collaborations.

Participants will learn how to use the different Frama-C analyzers and how to combine them. Several examples and use cases presented during the tutorial will give a clear practical vision of possible usages of the underlying static and dynamic analyses in their everyday work. This tutorial can be of interest for all researchers and practitioners in software verification and engineering.


Tutorial slides

Tutorial 4: Data Science and Measurement in Software Reliability Engineering

Presenters: Pete Rotella, Sunita Chulani (Cisco Systems)

High-performance models are needed to enable software practitioners to identify deficient (and superior) development and test practices. Even using standard practices and metrics, software development teams can, and do, vary substantially in practice adoption and effectiveness. One challenge for researchers and analysts in these organizations is to develop and implement mathematical models that adequately characterize the health of individual practices (such as code review, unit testing, static analysis, and function testing). These models can enable process and quality assurance groups to assist engineering teams in surgically repairing broken practices or replacing them with more effective and efficient ones.

In this tutorial, we will describe our experience with model building and implementation, and describe the boundaries within which certain types of models perform well. We will also address how to balance model generalizability and specificity in order to integrate computational methods into everyday engineering workflow.

Tutorial 5: ODC - Agile Root Cause Analysis

Presenter: Ram Chillarege (Chillarege Inc.)

Every project team struggles with gaining a deeper understanding of its development process. Typical retrospectives surface people, organization and practice issues, but rarely go deep enough to address the process, design, codebase, and technology issues that plague multiple Sprints. The classical methods of root cause analysis are too slow for the Agile world and require painful processes that have long since been abandoned. This tutorial introduces how Orthogonal Defect Classification (ODC) brings modern methods of semantic analysis to gain the much-needed insight and speed up the development process. The methods provide a 10X gain over the classical methods of root cause analysis, thereby creating new opportunities for the team.



Your Take Aways from the Tutorial:

  • ODC Concepts

  • ODC Classification and Information Extraction

  • How to gain 10x in Root Cause Analysis

  • How to tune up the Test Process using ODC

  • In-process Measurement and Prediction with ODC

  • Case Studies of ODC based Process Diagnosis

  • What is required to support ODC?

  • How does one plan an ODC Rollout?

Tutorial 6: Automated, Cloud-based and Real Time Software Reliability Growth Modeling

Presenters: Kazuhira Okumoto, Abhaya Asthana, Rashid Mijumbi (Bell Labs CTO, Nokia)

This tutorial will discuss practical aspects of software defect and reliability prediction. We will introduce a recently developed cloud-based software reliability growth modelling (SRGM) tool which, for a given project, automatically obtains defect data from logging databases, pre-processes it, and generates multiple piece-wise curves so as to more accurately capture changing defect trends. The last of these curves is used to predict residual defects at delivery as well as in-service software reliability and availability. Moreover, the tool also provides a summary of reliability metrics for a particular project. Being cloud-based, the tool can be provided as-a-service to software development teams spread across geography and time, with automated, real-time, reliable and actionable insights into the development process.
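The curve-fitting idea behind such a tool can be illustrated with a minimal sketch: fitting a single Goel-Okumoto-style exponential growth curve m(t) = a(1 - e^(-bt)) to cumulative defect counts by coarse least-squares search. The defect data and the grid resolution below are hypothetical, and the actual tool fits multiple piece-wise curves rather than this single one.

```python
import math

def fit_goel_okumoto(weeks, cum_defects):
    """Coarse least-squares fit of m(t) = a * (1 - exp(-b t)).
    a = total expected defects, b = defect detection rate (illustrative)."""
    best = None
    max_d = max(cum_defects)
    for k in range(0, 101, 2):              # a from max_d up to 2 * max_d
        a = max_d * (1 + k / 100.0)
        for j in range(1, 101):             # b from 0.01 to 1.00
            b = j / 100.0
            sse = sum((a * (1 - math.exp(-b * t)) - d) ** 2
                      for t, d in zip(weeks, cum_defects))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical cumulative defect counts per test week
weeks = list(range(1, 11))
cum = [12, 22, 30, 37, 42, 46, 49, 51, 53, 54]
a, b = fit_goel_okumoto(weeks, cum)
residual = a - cum[-1]   # predicted defects remaining at delivery
print(a, b, residual)
```

The gap between the fitted asymptote a and the defects found so far gives the residual-defect estimate used to judge delivery readiness.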

At a high level, the tutorial will begin by introducing the state-of-the-art and challenges (especially those due to recent trends towards agile development) in SRGM. We will then present the design and implementation aspects of a cloud-based, real-time SRGM. Finally, we will discuss and demonstrate a number of use cases where the current tool has been applied, and conclude with the challenges that still need to be overcome.

The tutorial will be appropriate for the general ISSRE 2017 attendees. In particular, it will benefit practitioners, researchers and students who are interested in applying SRGM to real projects. The main pre-requisite will be a good understanding of basic software development processes. Practitioners will be able to understand a software reliability prediction procedure that can be used for making a decision on whether the software product is ready for delivery, and if not, the necessary amount of testing that is needed to achieve the required software quality. Researchers and/or students will be able to understand industry needs to advance future research areas in software reliability and availability.

Tutorial 7: Anomaly Detection in Networks

Presenter: Veena Mendiratta (Nokia Bell Labs)

This tutorial provides a balanced mix of theory and hands-on practice in the area of network anomaly detection. The first part of the tutorial will focus on introducing analytics methods for network anomaly detection. Next, a real-world case study is presented applying non-parametric machine learning techniques to detect anomalies, and neural network based Kohonen Self Organizing Maps (SOMs) and visual analytics for exploring anomalous behavior in wireless networks. Data from a 4G network will be used for the analyses. The last section of the tutorial will provide a hands-on session where attendees will be guided in the analysis of real log data using the techniques described above, in particular the use of Kohonen SOMs. The hands-on session will focus on exploratory data analysis and modeling approaches using the provided datasets. The hands-on session will be conducted using: the R software environment, the RStudio user interface for R, and various R packages.
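The hands-on session itself uses R, but the core idea of SOM-based anomaly detection — train a small map on normal traffic and flag inputs whose distance to their best-matching unit is large — can be sketched in plain Python. The grid size, learning schedule and KPI vectors below are hypothetical, not taken from the tutorial's 4G dataset.

```python
import math, random

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny Kohonen SOM on low-dimensional input vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = {(i, j): [rng.random() for _ in range(dim)]
             for i in range(grid[0]) for j in range(grid[1])}
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in data:
            # best-matching unit = node with the closest weight vector
            bmu = min(nodes, key=lambda n: sum((w - v) ** 2
                                               for w, v in zip(nodes[n], x)))
            for n, w in nodes.items():
                d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))  # neighbourhood kernel
                nodes[n] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return nodes

def quantization_error(nodes, x):
    """Distance from x to its best-matching unit; large values flag anomalies."""
    return min(math.sqrt(sum((w - v) ** 2 for w, v in zip(nodes[n], x)))
               for n in nodes)

# Hypothetical 2-D KPI vectors for normal traffic, plus one obvious outlier
normal = [[0.5 + 0.01 * i, 0.5 - 0.01 * i] for i in range(50)]
som = train_som(normal)
print(quantization_error(som, [0.5, 0.5]), quantization_error(som, [5.0, 5.0]))
```

Samples whose quantization error exceeds a threshold learned from normal data are reported as anomalous; the R kohonen packages apply the same principle at scale.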

The target audience for this tutorial is novice as well as moderately skilled users who have an interest in software failures, anomaly detection, machine learning and/or visual analytics, and who are interested in learning to use R for these applications.

The tutorial will include three parts: Concepts and Survey of Anomaly Detection Techniques, a Case Study and a Hands-on session using real log data and R.


For the hands-on portion of the tutorial (which is the last part), attendees must install the following software on their laptops: the R software environment, the RStudio user interface, and the R packages used in the session.

The R script and data for the exercise will be provided during the tutorial.

Tutorial 8: Testing Reliable Software in an Agile Context – Benefits, Challenges, Solutions

Presenters: Sigrid Eldh, Kristoffer Ankarberg (Ericsson AB)

Many industries are using and adapting to Agile processes with continuous build and integration of software. Practices like test-driven development, refactoring and test automation are now more in focus than ever. Ericsson has a history of driving efficient processes and was an early adopter of both Agile and Lean concepts, which is not a simple task for large, complex systems with high demands on performance and reliability. This new way of working has made it possible to move into continuous deployment as DevOps comes more into focus.

This tutorial will discuss the hurdles, lessons learned, and positive and negative consequences of such a shift. We will focus on Agile practices from a quality and test/verification angle, attacking the question “How can we judge product reliability through one short 2-4 week sprint?”. This means it will be an “active”, participatory workshop, not only lecturing. Our goal in this tutorial is to share our experiences, but also to engage the audience in discussions, both online and by working on questions that the audience thinks are important in this context. Results collected will be shared at the workshop.

All the tutorial attendees are advised to bring their laptops to the tutorial session for the hands-on part of the tutorial.