Paper Published at FASE 2017

Our group has published a new article at the 20th International Conference on Fundamental Approaches to Software Engineering (FASE 2017) about our work on learning probabilistic models for model checking:

  • J. Wang, J. Sun, Q. Yuan, and J. Pang. Should we learn probabilistic models for model checking? A new approach and an empirical study. In Proc. International Conference on Fundamental Approaches to Software Engineering (FASE 2017), volume 10202 of LNCS, Springer, 2017, pp. 3–21.
    @inproceedings{WangSYP2017,
      author    = {Jingyi Wang and Jun Sun and Qixia Yuan and Jun Pang},
      title     = {Should We Learn Probabilistic Models for Model Checking? {A} New Approach and An Empirical Study},
      booktitle = {Proc. International Conference on Fundamental Approaches to Software Engineering (FASE 2017)},
      series    = {LNCS},
      volume    = {10202},
      publisher = {Springer},
      pages     = {3--21},
      year      = {2017},
      doi       = {10.1007/978-3-662-54494-5_1}
    }

Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based analysis and development techniques. To overcome this problem, researchers have proposed to automatically “learn” models from sample system executions, and have shown that the learned models can sometimes be useful. Many questions remain open, however. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would the analysis result based on the learned model be more accurate than the estimate we could have obtained by simply sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.
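To make the idea concrete, here is a minimal sketch (not the paper's algorithm) of the first step shared by such learning approaches: estimating the transition probabilities of a discrete-time Markov chain from sample executions by frequency counting. The function name and the example traces are illustrative; the algorithms studied in the paper additionally generalize beyond the observed samples, e.g., by merging states.

```python
from collections import defaultdict

def learn_markov_chain(traces):
    """Estimate a discrete-time Markov chain from sample executions by
    frequency counting: P(t | s) = #(s -> t) / #(s -> anything)."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1
    chain = {}
    for s, succs in counts.items():
        total = sum(succs.values())
        chain[s] = {t: n / total for t, n in succs.items()}
    return chain

# Hypothetical sample executions of a system with states a, b, c.
traces = [["a", "b", "a", "c"], ["a", "b", "c"], ["a", "a", "b"]]
chain = learn_markov_chain(traces)
```

Frequency counting alone yields the empirical chain, which never generalizes; the degree to which states are merged (and thus probabilities are pooled) is exactly the generalization knob the paper's evolution-based approach tries to control.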

Postdoc Positions Available

Our group has multiple postdoc positions available. The research projects are related to:

  • Source-code level program verification (against safety properties or security-related properties)
  • Run-time verification/enforcement of Java/C programs

The postdocs will work with existing researchers in the group as well as interact with researchers in the same research center. Once hired, the candidates will have the opportunity to travel overseas to collaborate with the projects' partners.

The working language is English. The general requirements on the candidate are:

  • A PhD in Computer Science or related areas.
  • Strong background in logic and reasoning.
  • An established research record.
  • Proficiency in Java or C++ programming.

The term is one to three years, starting as early as June 2017. The salary range is SGD 65K–80K per annum. Singapore's income tax is around 3%–5% of the annual salary.

Interested candidates are encouraged to contact Jun Sun for more information.