
Overview
Background
Associate Professor Mark Utting's research interests include software verification, model-based testing, theorem proving and automated reasoning, and programming language design and implementation. He received his PhD from UNSW for work on the semantics of object-oriented languages, and has since worked as an academic at several Queensland universities, as well as the University of Waikato in NZ and the University of Franche-Comté in France. He is passionate about designing and engineering good software that solves real-world problems, and has extensive experience managing software development projects and teams in both academia and industry, including developing next-generation genomics software and manufacturing software. He is the author of the book ‘Practical Model-Based Testing: A Tools Approach’, as well as more than 80 publications on model-based testing, software verification, and language design and implementation. His current research focuses on using software verification to give strong guarantees about the correctness of compilers, the correctness of blockchain smart contracts, freedom from information leaks in ARM64 binary programs, and the correctness of AI-generated code.
Availability
- Associate Professor Mark Utting is available for supervision
Qualifications
- Doctor of Philosophy, University of New South Wales
Research interests
- Software Verification: using automated and interactive theorem proving and static analysis tools to verify the correctness of software.
- Verification of Smart Contracts: formal verification of smart contracts for blockchain applications.
- AI for Testing: using model-based testing and other test-discovery algorithms to partially automate the design and execution of software test suites.
- Software Engineering and Language Engineering: the design, implementation, analysis, and usage of secure programming languages.
Works
Search Associate Professor Mark Utting’s works on UQ eSpace
2015
Journal Article
Dynamic agent composition for large-scale agent-based models
Boulaire, Fanny, Utting, Mark and Drogemuller, Robin (2015). Dynamic agent composition for large-scale agent-based models. Complex Adaptive Systems Modeling, 3 (1) 1. doi: 10.1186/s40294-015-0007-2
2014
Journal Article
The JStar language philosophy
Utting, Mark, Weng, Min-Hsien and Cleary, John G. (2014). The JStar language philosophy. Parallel Computing, 40 (SI2), 35-50. doi: 10.1016/j.parco.2013.11.004
2014
Conference Publication
Parallel ABM for electricity distribution grids: a case study
Boulaire, Fanny, Utting, Mark and Drogemuller, Robin (2014). Parallel ABM for electricity distribution grids: a case study. Euro-Par 2013: Parallel Processing Workshops, Aachen, Germany, 26-27 August 2013. Berlin, Heidelberg: Springer Berlin Heidelberg. doi: 10.1007/978-3-642-54420-0_55
2014
Conference Publication
Aggregating energy supply and demand
Drogemuller, R, Boulaire, F, Ledwich, G, Buys, L, Utting, M, Vine, D, Morris, P and Arefi, A (2014). Aggregating energy supply and demand. ECPPM 2014, 10th European Conference on Product and Process Modelling, Vienna, Austria, 17-19 September 2014. London, United Kingdom: CRC Press. doi: 10.1201/b17396-71
2013
Conference Publication
The JStar language philosophy
Utting, Mark, Weng, Min-Hsien and Cleary, John G. (2013). The JStar language philosophy. 2013 International Workshop on Programming Models and Applications for Multicores and Manycores, Shenzhen, China, 23 February 2013. New York, NY USA: The Association for Computing Machinery. doi: 10.1145/2442992.2442996
2013
Conference Publication
MODAM: a modular agent-based modelling framework
Boulaire, Fanny, Utting, Mark and Drogemuller, Robin (2013). MODAM: a modular agent-based modelling framework. 2nd International Workshop on Software Engineering Challenges for the Smart Grid (SE4SG), San Francisco, CA USA, 18 May 2013. Piscataway, NJ USA: Institute of Electrical and Electronics Engineers. doi: 10.1109/SE4SG.2013.6596109
2012
Journal Article
A taxonomy of model-based testing approaches
Utting, Mark, Pretschner, Alexander and Legeard, Bruno (2012). A taxonomy of model-based testing approaches. Software Testing Verification and Reliability, 22 (5), 297-312. doi: 10.1002/stvr.456
2012
Conference Publication
A hybrid simulation framework to assess the impact of renewable generators on a distribution network
Boulaire, Fanny, Utting, Mark, Drogemuller, Robin, Ledwich, Gerard and Ziari, Iman (2012). A hybrid simulation framework to assess the impact of renewable generators on a distribution network. 2012 Winter Simulation Conference, Berlin, Germany, 9-12 December 2012. doi: 10.1109/WSC.2012.6465000
2011
Journal Article
Evolving web-based test automation into agile business specifications
Mugridge, Rick, Utting, Mark and Streader, David (2011). Evolving web-based test automation into agile business specifications. Future Internet, 3 (2), 159-174. doi: 10.3390/fi3020159
2011
Book Chapter
How to design extended finite state machine test models in Java
Utting, Mark (2011). How to design extended finite state machine test models in Java. Model-based testing for embedded systems. (pp. 1-24) Boca Raton, FL, United States: CRC Press. doi: 10.1201/b11321-7
2009
Conference Publication
Transformation rules for Z
Utting, Mark, Malik, Petra and Toyn, Ian (2009). Transformation rules for Z.
2009
Conference Publication
Putting formal specifications under the magnifying glass: Model-based testing for validation
Aydal, Emine G., Paige, Richard F., Utting, Mark and Woodcock, Jim (2009). Putting formal specifications under the magnifying glass: Model-based testing for validation. doi: 10.1109/ICST.2009.20
2008
Conference Publication
The role of model-based testing
Utting, Mark (2008). The role of model-based testing. doi: 10.1007/978-3-540-69149-5_56
2008
Conference Publication
Unit testing of Z specifications
Utting, Mark and Malik, Petra (2008). Unit testing of Z specifications. doi: 10.1007/978-3-540-87603-8_24
2008
Conference Publication
A comparison of state-based modelling tools for model validation
Aydal, Emine G., Utting, Mark and Woodcock, Jim (2008). A comparison of state-based modelling tools for model validation. 46th International Conference, TOOLS EUROPE 2008: Objects, Components, Models and Patterns, Zurich, Switzerland, June/July 2008. Berlin, Heidelberg: Springer Berlin Heidelberg. doi: 10.1007/978-3-540-69824-1_16
2007
Conference Publication
A subset of precise UML for model-based testing
Bouquet, F., Grandpierre, C., Legeard, B., Peureux, F., Vacelet, N. and Utting, M. (2007). A subset of precise UML for model-based testing. New York, NY, USA: ACM. doi: 10.1145/1291535.1291545
2007
Conference Publication
Jumble java byte code to measure the effectiveness of unit tests
Irvine, Sean A., Pavlinic, Tin, Trigg, Leonard, Cleary, John G., Inglis, Stuart and Utting, Mark (2007). Jumble java byte code to measure the effectiveness of unit tests. Mutation 2007 Conference, Windsor, England, 10-14 September 2007. Los Alamitos, CA USA: IEEE. doi: 10.1109/TAICPART.2007.4344121
2007
Book
Practical model-based testing
Utting, Mark and Legeard, Bruno (2007). Practical model-based testing. Amsterdam, Netherlands: Elsevier. doi: 10.1016/b978-0-12-372501-1.x5000-5
2006
Conference Publication
Model-based testing from UML models
Bernard, Eddy, Bouquet, Fabrice, Charbonnier, Amandine, Legeard, Bruno, Peureux, Fabien, Utting, Mark and Torreborre, Eric (2006). Model-based testing from UML models.
2006
Book
Practical model-based testing: A tools approach
Utting, Mark and Legeard, Bruno (2006). Practical model-based testing: A tools approach. Elsevier. doi: 10.1016/B978-0-12-372501-1.X5000-5
Supervision
Available projects
- Verifying compiler optimization passes
  We have an ongoing project to model and verify sophisticated compiler optimizations in the Graal Java compiler. Graal is a high-performance polyglot virtual machine (VM) that supports not only JVM-based languages such as Java, Scala, Kotlin and Groovy, and LLVM-based languages such as C and C++, but also more dynamic languages such as Python and JavaScript. This research project focuses on verifying optimization passes of the Graal compiler, using the Isabelle/HOL interactive theorem prover.
- Smart contract tools for blockchains: correctness and bug-finding
  I am interested in supervising projects on the analysis and verification of smart contracts for blockchains such as Ethereum, Algorand, and Aptos (and other Move-powered blockchains). This could involve developing static analysis algorithms to prove simple correctness properties, automated verification (using SMT solvers such as Z3) of deeper properties, or full verification of more complex properties using interactive provers such as Isabelle/HOL. It would also be interesting to explore automated test generation (black-box or fuzzing) to find bugs in smart contracts, and counter-examples to properties they are expected to satisfy.
- Improved program development tools
  I am interested in supervising projects on better methods of programming: tools that help programmers develop secure, correct programs. There are many ways of working towards this goal, including improved IDEs, automated analysis tools, lightweight proof tools, automated assertion checking, wide-spectrum languages that include specification constructs as well as executable code, and gray-box fuzzing to find interesting counter-examples.
- Generative programming and correctness
  Recently there has been a big jump in the capabilities of AI-based program generators, such as GitHub Copilot (https://github.com/features/copilot), ChatGPT, and other large language models (LLMs), which can generate code from English descriptions. But how can a programmer know whether the suggested code is correct? Does it have the desired behaviour for the most common use case? What does it do for all the edge cases? This project will explore ways of increasing a programmer's confidence in the correctness of suggested code, for example through automated test generation, counter-example generation, runtime invariant checking, or lightweight automated software verification.
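The compiler-verification project above targets machine-checked proofs in Isabelle/HOL. As a much lighter-weight illustration of the property at stake (an optimization pass must preserve program semantics), here is a hypothetical Python sketch of a constant-folding pass over a toy expression language, checked by random differential testing rather than proof. The expression encoding and all function names are illustrative, not part of Graal.

```python
import random

# Toy expression language: ("const", n) | ("var",) | ("add", l, r) | ("mul", l, r)
def evaluate(e, x):
    """Evaluate expression e with the single variable bound to x."""
    tag = e[0]
    if tag == "const":
        return e[1]
    if tag == "var":
        return x
    l, r = evaluate(e[1], x), evaluate(e[2], x)
    return l + r if tag == "add" else l * r

def fold(e):
    """Optimization pass: constant-fold, and simplify e+0 and e*1."""
    if e[0] in ("const", "var"):
        return e
    tag, l, r = e[0], fold(e[1]), fold(e[2])
    if l[0] == "const" and r[0] == "const":
        return ("const", evaluate((tag, l, r), 0))
    if tag == "add" and r == ("const", 0):
        return l
    if tag == "mul" and r == ("const", 1):
        return l
    return (tag, l, r)

def random_expr(depth):
    """Generate a random expression tree for differential testing."""
    if depth == 0:
        return random.choice([("var",), ("const", random.randint(0, 3))])
    return (random.choice(["add", "mul"]),
            random_expr(depth - 1), random_expr(depth - 1))

# Semantics-preservation check: optimized code must agree with the original.
for _ in range(1000):
    e = random_expr(4)
    opt = fold(e)
    for x in range(-3, 4):
        assert evaluate(e, x) == evaluate(opt, x)
```

A theorem prover replaces the random loop with a proof quantified over all expressions and all inputs, which is exactly what makes the Isabelle/HOL approach stronger than testing.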
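The smart-contract project above mentions proving correctness properties and using test generation to hunt for counter-examples. The sketch below illustrates the flavour of such a property on a pure-Python model of a token contract (the class and its guard conditions are invented for illustration, not real chain code): a bounded exhaustive check that no transfer, valid or invalid, changes the total supply.

```python
import itertools

class Token:
    """Toy in-memory model of a token contract (illustrative only)."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        # Guard clauses mirror the require() checks a real contract would make.
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

def total_supply(t):
    return sum(t.balances.values())

# Bounded exhaustive check of the invariant: transfers preserve total supply.
accounts = ["alice", "bob"]
for a0, b0 in itertools.product(range(4), repeat=2):
    for sender, receiver in itertools.product(accounts, repeat=2):
        for amount in range(-1, 5):
            t = Token({"alice": a0, "bob": b0})
            before = total_supply(t)
            t.transfer(sender, receiver, amount)
            assert total_supply(t) == before
```

An SMT solver such as Z3 can discharge the same invariant symbolically for all balances and amounts at once, and a fuzzer would search the input space instead of enumerating it; this bounded check just makes the property concrete.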
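For the generative-programming project above, one of the simplest ways to gain confidence in AI-suggested code is differential random testing against a trusted reference. The sketch below assumes a hypothetical scenario: a slow but obviously-correct integer square root serves as the specification, and a Newton's-method version stands in for code an AI assistant might suggest.

```python
import random

# Reference implementation (the "specification"): slow but clearly correct.
def isqrt_spec(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Stand-in for an AI-suggested implementation (integer Newton's method).
def isqrt_suggested(n):
    if n < 2:
        return n
    x = n
    y = (x + n // x) // 2
    while y < x:
        x, y = y, (y + n // y) // 2
    return x

# Differential random testing: the suggestion must agree with the reference.
for _ in range(2000):
    n = random.randrange(0, 10**6)
    assert isqrt_suggested(n) == isqrt_spec(n), n
```

When no reference exists, the same loop can instead check a property the answer must satisfy (here, r*r <= n < (r+1)*(r+1)), which is the runtime-invariant-checking idea mentioned in the project description.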
Supervision history
Current supervision
- Doctor of Philosophy: A Trustworthy Compiler for Ethereum Smart Contracts (Principal Advisor; other advisors: Dr Naipeng Dong)
- Master of Philosophy: Optimizing Software Development: Hierarchical Formal Specification Integration for Enhanced Unit Testing and Agile Synergy (Principal Advisor; other advisors: Dr Guowei Yang)
- Doctor of Philosophy: Verifying GraalVM Compiler Optimizations (Principal Advisor; other advisors: Emeritus Professor Ian Hayes)
- Doctor of Philosophy: Verifying Compiler Optimisation Passes (Principal Advisor; other advisors: Emeritus Professor Ian Hayes)
- Doctor of Philosophy: Continuous Code Analysis for Rapidly Evolving Software (Associate Advisor; other advisors: Associate Professor Guangdong Bai, Dr Guowei Yang)
- Doctor of Philosophy: Verified Secure Compilation for C-like Programs (Associate Advisor; other advisors: Associate Professor Graeme Smith)
- Doctor of Philosophy: Evaluating and Improving Type Inference Models for Web Application Reverse Engineering (Associate Advisor; other advisors: Professor Michael Bruenig, Professor Ryan Ko)