Trustworthy Autonomous Systems Verifiability Node
Our Verifiability Node will develop novel, rigorous techniques that automate the systematic and holistic verification of autonomous systems. The increasing technological significance of these systems ensures that our advances will have real impact.
Our Project Partners
TAS Verifiability Node Vision
Autonomous systems are technologies that interact with an environment to gain knowledge and build awareness, learn, adapt and make decisions, with little or no human control. They include automated decision-making software and ‘smart’ devices, as well as self-driving cars, drones, and healthcare and surgical robots. These systems are already used in many sectors of society. Given their increasing use, it is important to ensure that they are designed, built and deployed in a way that can be fully relied upon.
TAS Verifiability Node Tools
Our state-of-the-art tools are freely available for download.
HyConf
Tool for conformance testing of Cyber-Physical Systems.
RoboTool
RoboTool supports graphical modelling, validation, and automatic generation...
MCAPL
The MCAPL Project provides a tool for prototyping BDI agent programming languages...
ROSMonitoring
ROSMonitoring provides a Runtime Verification Framework for ROS.
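The tool descriptions above are deliberately brief. As an illustration of the general idea behind runtime verification (the technique ROSMonitoring applies to ROS), here is a minimal, hypothetical monitor sketch in Python — the `Monitor` class and its event names are illustrative inventions, not ROSMonitoring's actual API. The monitor consumes a stream of events and maintains a verdict for the simple property "every request is eventually answered by a response":

```python
# Illustrative runtime-verification sketch (NOT ROSMonitoring's API):
# a monitor observes events one at a time and reports a verdict after each.

class Monitor:
    def __init__(self):
        self.pending = 0  # requests still awaiting a response

    def observe(self, event):
        """Feed one event to the monitor; return the verdict afterwards."""
        if event == "request":
            self.pending += 1
        elif event == "response" and self.pending > 0:
            self.pending -= 1
        return self.verdict()

    def verdict(self):
        # 'ok' while no request is outstanding, 'pending' otherwise
        return "ok" if self.pending == 0 else "pending"

trace = ["request", "response", "request"]
m = Monitor()
print([m.observe(e) for e in trace])  # ['pending', 'ok', 'pending']
```

A real framework such as ROSMonitoring attaches monitors to running ROS topics and raises verdicts at runtime; this sketch only shows the core observe-and-judge loop that such monitors implement.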
Team
Management Board
Prof Mohammad Mousavi
London | Principal Investigator (PI)
Prof Ana Cavalcanti
York | Deputy PI
Prof Michael Fisher
Manchester | Deputy PI
Prof Rob Richardson
Leeds | Site Leader
Prof Gavin Brown
Manchester | Site Leader
Prof James Woodcock
York | Site Leader
Prof Effie Lai-Chong Law
Durham | Site Leader
Prof Rob Hierons
Sheffield | Site Leader
Dr Bilal Kaddouh
Leeds | Case Studies Officer
Dr Genovefa Kefalidou
Leicester | Engagement and Node Liaison Officer
Dr José Miguel Rojas
Sheffield | Publicity Officer
Advisory Board
Dejanira Araiza Illan
Assistant Principal Engineer in Robotic Applications at Johnson & Johnson.
Raja Chatila
Professor Emeritus, Institute of Intelligent Systems and Robotics.
Sebastian Conran
CEO of Consequential Robotics.
Danit Gal
Associate Fellow at The Leverhulme Centre for the Future of Intelligence.
Koen Hindriks
Professor Artificial Intelligence, Social AI group, Vrije Universiteit Amsterdam.
Joost Noppen
Principal Researcher Software, BT Applied Research.
Colin O’Halloran
Technical Director of D-RisQ Ltd.
Kristin Yvonne Rozier
Assistant Professor, Department of Aerospace Engineering.
Rich Walker
Managing Director, The Shadow Robot Company.
Henry Tse
Connected Places Catapult, the UK’s innovation accelerator for cities, transport, and places.
Thierry Lecomte
R&D Project Director at CLEARSY, a French SME specialised in safety-critical systems.
Publications
63 entries
1 of 21
Mousavi, Mohammad Reza; Cavalcanti, Ana; Fisher, Michael; Dennis, Louise; Hierons, Rob; Kaddouh, Bilal; Law, Effie Lai-Chong; Richardson, Rob; Ringer, Jan Oliver; Tyukin, Ivan; Woodcock, Jim
Trustworthy Autonomous Systems Through Verifiability
Journal Article
In: Computer, 56(2), pp. 40–47, 2023.
BibTeX
@article{10042118,
title = {Trustworthy Autonomous Systems Through Verifiability},
author = {Mohammad Reza Mousavi and Ana Cavalcanti and Michael Fisher and Louise Dennis and Rob Hierons and Bilal Kaddouh and Effie Lai-Chong Law and Rob Richardson and Jan Oliver Ringer and Ivan Tyukin and Jim Woodcock},
doi = {10.1109/MC.2022.3192206},
year = {2023},
date = {2023-03-02},
journal = {Computer},
volume = {56},
number = {2},
pages = {40-47},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
doi:10.1109/MC.2022.3192206
Baxter, James; Cavalcanti, Ana; Gazda, Maciej; Hierons, Robert M.
Testing Using CSP Models: Time, Inputs, and Outputs
Journal Article
In: ACM Trans. Comput. Logic, 24(2), 2023. ISSN: 1529-3785.
BibTeX
@article{10.1145/3572837,
title = {Testing Using CSP Models: Time, Inputs, and Outputs},
author = {James Baxter and Ana Cavalcanti and Maciej Gazda and Robert M. Hierons},
url = {https://doi.org/10.1145/3572837},
doi = {10.1145/3572837},
issn = {1529-3785},
year = {2023},
date = {2023-03-01},
journal = {ACM Trans. Comput. Logic},
volume = {24},
number = {2},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {The existing testing theories for CSP cater for verification of interaction patterns (traces) and deadlocks, but not time. We address here refinement and testing based on a dialect of CSP, called tock-CSP, which can capture discrete time properties. This version of CSP has been of widespread interest for decades; recently, it has been given a denotational semantics, and model checking has become possible using a well established tool. Here, we first equip tock-CSP with a novel semantics for testing, which distinguishes input and output events: the standard models of (tock-)CSP do not differentiate them, but for testing this is essential. We then present a new testing theory for timewise refinement, based on novel definitions of test and test execution. Finally, we reconcile refinement and testing by relating timed ioco testing and refinement in tock-CSP with inputs and outputs. With these results, this paper provides, for the first time, a systematic theory that allows both timed testing and timed refinement to be expressed. An important practical consequence is that this ensures that the notion of correctness used by developers guarantees that tests pass when applied to a correct system and, in addition, faults identified during testing correspond to development mistakes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
doi:10.1145/3572837
Pontolillo, Gabriel; Mousavi, Mohammad Reza
A Multi-Lingual Benchmark for Property-Based Testing of Quantum Programs
Inproceedings
In: Proceedings of the 3rd International Workshop on Quantum Software Engineering (Q-SE '22), pp. 1–7, Association for Computing Machinery, Pittsburgh, Pennsylvania, 2023. ISBN: 9781450393355.
BibTeX
@inproceedings{10.1145/3528230.3528395,
title = {A Multi-Lingual Benchmark for Property-Based Testing of Quantum Programs},
author = {Gabriel Pontolillo and Mohammad Reza Mousavi},
url = {https://doi.org/10.1145/3528230.3528395},
doi = {10.1145/3528230.3528395},
isbn = {9781450393355},
year = {2023},
date = {2023-03-01},
booktitle = {Proceedings of the 3rd International Workshop on Quantum Software Engineering},
pages = {1–7},
publisher = {Association for Computing Machinery},
address = {Pittsburgh, Pennsylvania},
series = {Q-SE '22},
abstract = {We present a multi-lingual benchmark for (property-based) testing of quantum programs. We report on the methodology used to design our benchmark and the rationale behind its design decisions. Our benchmark covers three major quantum programming languages, namely Qiskit, Cirq, and Q#. We curate our benchmark from languages documentations, open source repositories, and academic papers. In order to demonstrate the common logic of the algorithms included in our benchmark, we start from an implementation in one language (often Qiskit) and produce comparable implementations in the other two languages. We produce several properties and mutants for each program as a benchmark to measure the effectiveness of property-based testing frameworks. We reflect on the high-level quantum programming concepts offered in the three languages of our benchmark and their possible impact on testability and quality assurance.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
doi:10.1145/3528230.3528395
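The benchmark above targets quantum programs; as a language-agnostic illustration of property-based testing itself, here is a hand-rolled sketch in Python over a classical toy function. The names `encode` and `check_involution` are hypothetical and not part of the paper's benchmark. The idea is to sample many random inputs and check that an invariant — here, that encoding twice is the identity — holds on all of them:

```python
# Hand-rolled property-based testing sketch (illustrative only).
import random

def encode(bits):
    """Toy function under test: bitwise NOT of a bit list."""
    return [1 - b for b in bits]

def check_involution(trials=100, seed=0):
    """Property: applying encode twice returns the original input."""
    rng = random.Random(seed)
    for _ in range(trials):
        bits = [rng.randint(0, 1) for _ in range(rng.randint(0, 16))]
        if encode(encode(bits)) != bits:
            return bits  # counterexample found
    return None  # property held on all sampled inputs

print(check_involution())  # None
```

Mature frameworks (e.g. Hypothesis for Python) add input shrinking and richer generators on top of this sample-and-check loop; the paper's contribution is a benchmark for evaluating such frameworks on quantum programs.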
See more Publications
Events
First International Symposium on Trustworthy Autonomous Systems 2023
Verifiability Talk 56: “Highlights of Model Checking and Runtime Verification of Aerospace Systems”
Verifiability Talk 55: “Robot learning from and for human interaction”
Verifiability Talk 54: “Taking Back Control: Formally Modelling a Compiler Intermediate Representation for GPU Computing”
Verifiability Talk 53: “Integrated Neural Network Verification with Vehicle”
Verifiability Talk 52: “Towards Formal Specification of Reinforcement Learning”
Verifiability Talk 51: Amanda Porek
Verifiability Talk 50: “Autonomous Firefighting Unmanned Aerial Vehicle: A case study”
See more Events
Twitter Feeds
Please follow us on Twitter to receive instant updates on events, activities and news related to the node.
Join us on Twitter