Workshops
Model-Driven Engineering, Verification, and Validation: Integrating Verification and Validation (MoDeVVa)
Organizers:
Description: Model-Driven Engineering (MDE) is a development process that makes extensive use of models and automatic model transformations to handle complex software development. Many software artefacts, tools, environments, and modelling languages have to be developed to make MDE a reality; consequently, there is a crucial need for effective V&V techniques in this new context. Furthermore, the novelty of this development paradigm raises questions about its impact on traditional V&V techniques and about how those techniques can leverage the new approach. The objective of this workshop on model design, verification, and validation (MoDeVVa) is to offer a forum for researchers and practitioners who are developing new approaches to V&V in the context of MDE. Major questions that cut across V&V and MDE include: Is the result of a transformation really what the user intended? Is the model correct with respect to the expected security, time, and structural constraints? Which models can be used for validation or for verification? Does the implementation, generated after several model transformations, conform to the initial requirements?
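To make these questions concrete, here is a minimal, purely illustrative Python sketch of an automated structural check on the output of a model transformation. The toy class-to-table transformation, the mini "metamodels" (plain dicts), and the constraint checked are hypothetical and are not drawn from any workshop contribution.

```python
# Illustrative only: a toy class-to-table transformation and a structural
# V&V check over its output.  The mini "metamodels" (dicts) are hypothetical.

def transform(class_model):
    """Map every class to a table with an id column plus one column per attribute."""
    return {
        "tables": [
            {"name": c["name"],
             "columns": ["id"] + c["attributes"],
             "primary_key": "id"}
            for c in class_model["classes"]
        ]
    }

def check_structural_constraints(class_model, relational_model):
    """Is the result of the transformation what was intended?  Every source
    class must have a corresponding table, and every table a primary key."""
    table_names = {t["name"] for t in relational_model["tables"]}
    errors = [f"class {c['name']} has no corresponding table"
              for c in class_model["classes"] if c["name"] not in table_names]
    errors += [f"table {t['name']} has no primary key"
               for t in relational_model["tables"] if not t.get("primary_key")]
    return errors

source = {"classes": [{"name": "Order", "attributes": ["date", "total"]}]}
assert check_structural_constraints(source, transform(source)) == []
```

In practice such constraints would more likely be expressed against a metamodel, for instance as OCL invariants, and discharged by a V&V tool rather than hand-coded.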
For details, please go to the workshop website.

Advances in Model-Based Testing (A-MOST)
Organizers:
Description: The increasing use of software and the growing complexity of systems, in terms of size, heterogeneity, autonomy, physical distribution, and dynamicity, make focused software system testing a challenging task. Recent years have seen increasing industrial and academic interest in the use of models for designing and testing software. Success has been reported with a range of model types expressed in a variety of specification formats, notations, and formal languages, such as UML, SDL, B, and Z. The goal of the A-MOST workshop is to bring together researchers and practitioners to discuss the current state of the art and practice, as well as future prospects, for model-based software testing (MBT). We invite submissions of full-length papers that describe new research, tools, technologies, and industry experience, as well as position papers.
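As a hedged sketch of the core MBT idea, rather than of any particular approach presented at the workshop, the following example derives input sequences from a small state-machine model and uses the same model as the oracle for the expected final state. The turnstile model, the transition-coverage criterion, and all identifiers are illustrative assumptions.

```python
# Illustrative only: deriving tests from a behavioural model.  The turnstile
# state machine and the coverage goal below are hypothetical examples.

SPEC = {  # state -> {input event: next state}
    "locked":   {"coin": "unlocked", "push": "locked"},
    "unlocked": {"coin": "unlocked", "push": "locked"},
}

def transition_coverage_tests(spec, start):
    """One input sequence per model transition (a basic MBT coverage goal)."""
    tests = []
    for state, moves in spec.items():
        for event in moves:
            # naive set-up: reach 'state' from the start state first
            prefix = [] if state == start else ["coin"]  # "coin" reaches "unlocked"
            tests.append(prefix + [event])
    return tests

def expected_state(spec, start, inputs):
    """Run the inputs through the model to predict the final state."""
    state = start
    for event in inputs:
        state = spec[state][event]
    return state

for seq in transition_coverage_tests(SPEC, "locked"):
    # in a real MBT setting, 'seq' would drive the implementation under test
    # and its observed state would be compared with the model's prediction
    print(seq, "-> expected final state:", expected_state(SPEC, "locked", seq))
```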
For details, please go to the workshop website.

Search-Based Software Testing (SBST)
Organizer:
Description: Search-based software testing is the use of random or directed search techniques (hill climbing, genetic algorithms, etc.) to address problems in the software testing domain. There has been an explosion of activity in the search-based software testing field of late, particularly in test data generation. Recent work has also focused on other aspects such as model-based testing, real-time testing, interaction testing, testing of service-oriented architectures, and test case prioritization. Papers should address a problem in the software testing domain and should approach its solution using a search strategy. Search-based techniques are taken to include (but are not limited to) random search, local search (e.g. hill climbing, simulated annealing), evolutionary algorithms (e.g. genetic algorithms, evolution strategies, genetic programming), ant colony optimization, and particle swarm optimization. Several prizes will be awarded at the workshop.
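As a hedged illustration of the basic idea, the sketch below uses a simple hill climb to search for an input that reaches a hard-to-hit branch, guided by a branch-distance fitness function. The function under test, the fitness measure, and the neighbourhood are hypothetical choices, not a reference implementation of any tool.

```python
import random

def under_test(x):
    """Hypothetical function under test with a hard-to-hit branch."""
    if x == 4242:          # target branch
        return "reached"
    return "missed"

def branch_distance(x):
    """Fitness: how far the input is from satisfying the branch predicate."""
    return abs(x - 4242)

def hill_climb(fitness, start, neighbours, max_steps=10_000):
    """Basic local search: move to the best neighbour with a lower fitness."""
    current = start
    for _ in range(max_steps):
        if fitness(current) == 0:
            break
        better = [n for n in neighbours(current) if fitness(n) < fitness(current)]
        if better:
            current = min(better, key=fitness)
        else:
            current = random.randint(-10_000, 10_000)  # restart on a local optimum
    return current

x = hill_climb(branch_distance,
               start=random.randint(-10_000, 10_000),
               neighbours=lambda v: [v - 1, v + 1, v - 100, v + 100])
print(x, under_test(x))
```

Genetic algorithms, simulated annealing, and the other techniques listed above differ in how they explore the search space, but they are typically guided by fitness functions of this general kind.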
For details, please go to the workshop website.

Software Test Evaluation (STEV)
This workshop has been cancelled.
Organizers:
Description: Test automation includes not only the automated execution of tests, but also automatic test data selection and automatic evaluation of the test results. Whereas there is a lot of research in the field of automatic test data generation, there is far less work on test oracles. Recently, there has been some activity in the field of test evaluation, and several approaches have been proposed to overcome the oracle problem, such as model-based test oracles, log file analysis, metamorphic testing, symmetric testing, and statistical hypothesis tests, among others. Although these solutions can be quite helpful, much research remains to be done. In this workshop, ideas will be exchanged and problems will be identified, moving us closer to viable solutions.
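One of the listed techniques, metamorphic testing, is easy to illustrate: when no oracle can supply the exact expected output, a known relation between the outputs of related runs can still be checked automatically. The function and the relation below are hypothetical placeholders.

```python
import math
import random

def system_under_test(x):
    """Stand-in for a computation with no easy exact oracle."""
    return math.sin(x)

def metamorphic_test(trials=1000, tol=1e-9):
    """Check the metamorphic relation sin(pi - x) == sin(x) on random inputs.
    No exact expected value is needed, only the relation between two runs."""
    for _ in range(trials):
        x = random.uniform(-100, 100)
        if abs(system_under_test(math.pi - x) - system_under_test(x)) > tol:
            return f"relation violated at x={x}"
    return "relation held on all sampled inputs"

print(metamorphic_test())
```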
For details, please go to the workshop website.

Modeling, Validation, and Heterogeneity (MoVaH)
Organizers:
Description: Industrial systems are increasingly heterogeneous, both at the realization level (software, hardware, IPs) and at the design level (formal methods, specific design methods, off-the-shelf components). The various semantics of the modeling and implementation languages must be connected in some way to define the behavior of the whole model of the system, and managing this heterogeneity is a major challenge for the design and the validation of heterogeneous systems. This workshop aims to be a forum for researchers and practitioners with varying backgrounds to discuss new ideas concerning the management of heterogeneous systems. Contributions may consist of:
For details, please go to the workshop website.

International Software Testing Standard Workshop
Organizers:
Description: In May 2007 ISO formed a working group to develop a new standard on software testing. The proposed standard, ISO 29119, comprises four parts. The first covers "concepts and terminology", the second "test process", the third "test documentation", and the fourth "test techniques". Of these four parts, it is the second on "test process" that is the least well-supported by current standards. The current vision for coverage of test processes is a three-tiered approach corresponding to the creation (and update) of the Test Strategy at the organizational level, a Master Test Plan at the individual project level, and a generic test process beneath that. This generic test process will define the execution of testing at each test level (e.g. component, integration, system, and acceptance) and for different test types (e.g. functional testing, performance testing). Submissions are invited for papers on either the theory or practice of:
For details, please go to the workshop website.

1st International Workshop on Security Testing
Organizers:
Description: Testing is an activity that aims both at demonstrating discrepancies between a system's actual and intended behaviors and at increasing confidence that there are no such discrepancies. The security of a system classically relates to the confidentiality and integrity of data, as well as to the availability of systems and the non-repudiation of transactions. Because confidentiality and integrity can be compromised in many different ways, because availability and non-repudiation guarantees are tremendously difficult to give, and because even testing the mere functionality of a system is a fundamentally difficult task in itself, testing security properties is a real challenge from both an academic and a practical point of view. The goal of this workshop on security testing is to provide a forum for practitioners and researchers to exchange ideas, perspectives on problems, and solutions.
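As a minimal, hypothetical illustration of how a security property (here, confidentiality) can be turned into a concrete test, consider the sketch below; the in-memory "service" and its access rule are stand-ins for a real system under test.

```python
# Illustrative only: a security-oriented test case.  The tiny in-memory
# "service" and its access-control rule are hypothetical stand-ins for a SUT.

RECORDS = {"alice": {"diagnosis": "confidential"},
           "bob":   {"diagnosis": "confidential"}}

def read_record(requesting_user, patient):
    """Toy access-control rule: users may only read their own record."""
    if requesting_user != patient:
        raise PermissionError("access denied")
    return RECORDS[patient]

def test_confidentiality():
    # negative test: a discrepancy between actual and intended behaviour here
    # is a confidentiality violation, not just a functional bug
    try:
        read_record("mallory", "alice")
    except PermissionError:
        return "pass: unauthorised read rejected"
    return "FAIL: confidential record disclosed"

print(test_confidentiality())
```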
For details, please go to the workshop website.

Workshop on A Benchmark for Software Testing (TestBench'08)
Organizers:
Description: A significant and fundamental problem faced by software testing researchers is the difficulty of evaluating and making meaningful comparisons between different testing tools and techniques. Empirical evaluation in the discipline is patchy and frequently inconsistent, and consequently fails to drive progress. Benchmarks have been used successfully in other domains such as databases, computer architecture, and information retrieval, and research in the speech recognition and natural language processing domains has been steered by international competitions run on common data sets. It is argued that these benchmarks have provided a significant impetus for research and have helped define the key challenges. There is evidence that the time is right for the development of a software testing benchmark: the recent trend in empirical evaluation shows convergence towards a smaller set of systems (arguably "proto-benchmarks"), although the infamous "triangle" program is still much in evidence. The aim of this workshop is to investigate and establish the development of a central repository of expertise and exemplar systems that can act as a benchmark. Such a benchmark would serve as a catalyst for the development of tools and techniques, provide a basis for their objective evaluation, and help focus research in the discipline. A longer-term vision is to use the benchmark to run an annual tools and techniques competition at ICST. The workshop has three objectives:
For details, please go to the workshop website.
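For concreteness, the "triangle" program mentioned above is small enough to sketch in full. The version below follows the classic formulation from the testing literature (classify three side lengths as equilateral, isosceles, scalene, or invalid); it is an illustrative rendering, not the benchmark's reference implementation.

```python
def classify_triangle(a, b, c):
    """Classic testing-benchmark program: classify three side lengths."""
    # invalid if any side is non-positive or the triangle inequality fails
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# a handful of the inputs a benchmark test suite would be judged against
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 3) == "invalid"   # degenerate: 1 + 2 == 3
assert classify_triangle(0, 4, 5) == "invalid"
```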
Last updated: February 5, 2008