Introduction

MUSE 2.0 is a tool for running simulations of energy systems, written in Rust. It is a slimmer and faster version of the older MUSE tool. To get started, please look at the user guide.

For an overview of the model, see the model description and the dispatch optimisation formulation. For a list of relevant terms, see the glossary.

If you are a developer, please see the developer guide.

User Guide

Setting the log level

MUSE uses the env_logger crate for logging. The default log level is info, though this can be configured either via the log_level option in settings.toml or by setting the MUSE2_LOG_LEVEL environment variable. (If both are used, the environment variable takes precedence.)

The possible options are:

  • error
  • warn
  • info
  • debug
  • trace
  • off
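For example, to enable debug-level logging for a single run (assuming a Unix-like shell and the built muse2 executable with the bundled "simple" example model), you can set the environment variable when invoking the program:

MUSE2_LOG_LEVEL=debug muse2 run examples/simple

Alternatively, set log_level = "debug" in settings.toml to make the change persistent.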

By default, MUSE will colourise the log output if this is available (i.e. it is outputting to a terminal rather than a file), but this can be overridden by modifying the MUSE2_LOG_STYLE environment variable.

For more information, please consult the env_logger documentation.

Model Description

Introduction

Model Purpose

This Software Requirements Specification (SRS) describes MUSE 2.0 (ModUlar energy systems Simulation Environment). The purpose of MUSE is to provide users with a framework to simulate pathways of energy system transition, usually in the context of climate change mitigation.

Model Scope

MUSE is an Integrated Assessment Modelling framework that is designed to enable users to create and apply an agent-based model that simulates a market equilibrium on a set of user-defined commodities, over a user-defined time period, for a user-specified region or set of regions. MUSE was developed to simulate approaches to climate change mitigation over a long time horizon (e.g. 5-year steps to 2050 or 2100), but the framework is generalised and can therefore simulate any market equilibrium.

Overall Description

Overview

MUSE 2.0 is the successor to MUSE. The original MUSE framework is open-source software available on GitHub, coded in Python. MUSE 2.0 is a re-design that addresses a range of legacy issues which would be difficult to resolve by upgrading the existing MUSE framework, and it is implemented in the high-performance Rust language.

MUSE is classified as a recursive dynamic modelling framework in the sense that it iterates on a single time period to find a market equilibrium, and then moves to the next time period. Agents in MUSE have limited foresight, reacting only to information available in the current time period.

This is distinct from intertemporal optimisation modelling frameworks (such as TIMES and MESSAGEix) which have perfect foresight over the whole modelled time horizon.

Model Concept

MUSE 2.0 is a bottom-up engineering-economic modelling framework that computes a price-induced supply-demand equilibrium on a set of user-defined commodities. It does this for each milestone time period within a user-defined time horizon. This is a "partial equilibrium" in the sense that the framework equilibrates only the user-defined commodities, as opposed to a whole economy.

MUSE 2.0 is data-driven in the sense that model processing and data are entirely independent, and user-defined data is at the heart of how the model behaves. It is also "bottom-up" in nature, which means that it requires users to characterise each individual process that produces or consumes each commodity, along with a range of other physical, economic and agent parameters.

At a high level, the user defines:

  1. The overall temporal arrangements, including the base time period, milestone time periods and time horizon, and within-period time slice lengths.

  2. The service demands for each end-use (e.g. residential heating, steel production), for each region, and how that demand is distributed between the user-defined time slices within the year. Service demands must be given a value for the base time period and all milestone time periods in each region.

  3. The existing capacity of each process (i.e. assets) in the base time period, and the year in which it was commissioned or will be decommissioned.

  4. The techno-economic attributes (e.g. capital cost, operating costs, efficiency, lifetime, input and output commodities, etc) of each process. This must include attributes of processes existing in the base time period (i.e. assets) and possible future processes that could be adopted in future milestone time periods.

  5. The agents that choose between technologies by applying search spaces, objectives and decision rules. Portions of demand for each commodity must be assigned to an agent, and the sum of these portions must be one.

The model takes this data, configures and self-checks, and then solves for a system change pathway:

  1. Initialisation
  2. Commodity Price Discovery
  3. Agent Investment
  4. Carbon Budget Solution (or CO2 Price Responsiveness)
  5. Find Prices for Next Milestone Year
  6. Recursively Solve Using Steps (3)-(5) for Each Milestone Year until End
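As an illustration only, this outer loop can be sketched as follows (hypothetical types and function names, not the MUSE 2.0 API):

struct Model;
struct Prices;

fn initialise() -> Model { Model }                               // step 1
fn discover_prices(_model: &Model) -> Prices { Prices }          // steps 2 and 5
fn agent_investment(_model: &mut Model, _prices: &Prices) {}     // step 3
fn apply_carbon_budget(_model: &mut Model) {}                    // step 4

fn simulate(milestone_years: &[u32]) {
    let mut model = initialise();
    // Price discovery for the calibrated base year.
    let mut prices = discover_prices(&model);
    for _year in milestone_years {
        agent_investment(&mut model, &prices);
        apply_carbon_budget(&mut model);
        // Prices found here are carried forward to the next milestone year (step 5).
        prices = discover_prices(&model);
    }
    // Step 6: the loop repeats until the end of the time horizon.
}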

Framework Processing Flow

At a high level, the MUSE 2.0 iterative solution concept is as follows:

1. Initialisation

Read the input data, including the basic temporal setup and the commodity and process/asset information, and perform consistency checks.

2. Commodity Price Discovery

Dispatch Optimisation (hereafter "Dispatch") is executed to determine commodity production and consumption, with fixed asset capacities. In the first time period - the calibrated base year (t0) - this step is performed before any agent investment step.

  1. Asset dispatch is merit order based but is subject to constraints that represent technical or other limits.

    • For assets/processes, dispatch limits are user-defined minimum, maximum and fixed availability factors (i.e. percentage of capacity) that can be defined per time slice, season or year.

    • For commodities, user-defined limits can be minimum, maximum or fixed total or regional output, input or net production by time slice, season or year.

  2. Dispatch can be solved for all assets and commodities in the system simultaneously, where existing assets (known from calibrated input data) are operated to meet demand, and to produce/consume any intermediate commodities required, and to meet environmental or other constraints if specified. Dispatch can also be solved for a subset of the whole system (e.g. where commodity demands are needed for end-use sectors in order to determine upstream capacity requirements).

  3. Price discovery is implemented via linear programming (cost minimisation via the Dispatch Optimisation). The objective function is the cost of operating the system over a year, which must be minimised. The decision variables are the commodity inputs and outputs of each asset, for each time slice. These are constrained by (a) the capacity of the asset and (b) the availability limits by time slice/season/year. Energy commodity supply/demand must balance for SED (supply equals demand) type commodities, and all service demands (SVD commodities) must be met. Commodity production or consumption may be subject to constraints (usually annual but could be time slice/season level).

  4. Based on the resulting dispatch, a time-sliced price is observed for each commodity in each region using marginal pricing (i.e. the operating cost of the most expensive process dispatched to serve a commodity demand). The result of this step is model-generated time-sliced commodity prices (a minimal sketch of this pricing step is given after this list).

  5. The model then also calculates the prices of commodities that are not present in the dispatch solution but could appear in the solution for the next period. These are calculated directly from input data, by taking the marginal price of the process with the best objective value that produces the commodity in question. Objective values are calculated using the utilisation of the next most expensive (marginal cost) asset in the dispatch stack, adjusted for availability differences, and the commodity prices from the price discovery at step 3 above.
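As an illustration of the marginal pricing in step 4, a minimal sketch (hypothetical data layout, not the MUSE 2.0 API) of extracting the price of one commodity in one region and time slice from the dispatched assets:

/// Returns the marginal price for a commodity in one region and time slice:
/// the marginal cost of the most expensive asset dispatched to produce it.
/// Each entry is (marginal cost, dispatched output) for one asset.
fn marginal_price(dispatched: &[(f64, f64)]) -> Option<f64> {
    dispatched
        .iter()
        .filter(|(_, output)| *output > 0.0) // only assets actually producing
        .map(|(cost, _)| *cost)
        .reduce(f64::max)
}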

3. Agent Investment

The capacity investments in new assets required for the next milestone year are calculated as follows:

  1. End-of-life capacity decommissioning: Decommission assets that have reached the end of their life in the milestone year.

  2. Agent investment (service demand): For each service demand, for each agent that is responsible for a portion of that demand:

    • For assets, calculate the objective value/s assuming that the utilisation observed from dispatch for that asset in step (2) will persist. For assets, this calculation does not include capital cost, as this is a sunk cost (the asset already exists).

    • For processes, calculate the objective value/s assuming the utilisation observed from dispatch in step (2) for the asset whose marginal cost is immediately above the marginal cost of this process (while respecting the process's availability constraints). If the process has a lower marginal cost than any asset, assume full dispatch (subject to its availability constraints). If the process has the same marginal cost as an asset, assume the same utilisation as that asset. If the process has a higher marginal cost than any asset, assume zero utilisation.

      Issue 1: It is possible to calculate utilisation using the time slice level utilisation of the asset with the marginal cost immediately above the process, also taking into account availability constraints. This would be more accurate in most cases (but there are some complications, e.g. where the asset/process has conflicting availability constraints/utilisation).

    • Add assets/processes to the capacity mix, starting with the one with the best objective value, and keep adding them until sufficient capacity exists to meet all demand in the milestone year; a sketch of this ranking loop is given after this list. This step must respect process capacity constraints (growth, addition and overall limits).

      Issue 2: There is a circularity here, e.g. asset choices influence the dispatch of other assets, which in turn can influence objective values, which in turn can influence asset choices. A heuristic solution is to run dispatch again, update the utilisations of assets and proposed new assets, and repeat step 3.2, to see whether any asset's objective has deteriorated to the point where it can be replaced, and keep going around this loop until nothing changes between loops - but there will certainly be cases where this does not converge.

      Issue 3: Also, commodity prices influence dispatch (and thus objective values, and thus asset choices), so upstream decisions also impact the outcome here. However, this is a deliberate feature of MUSE: investors assume that the prices observed in the previous period persist.

  3. Agent investment (commodities): For each commodity consumed, starting with those commodities consumed by the end-use assets (i.e. those assets that output a service demand), calculate the capacity investments required to serve these commodity demands:

    • Run dispatch of the partial system to determine the final commodity demands of all end-use technologies. Determine the production capacity required (the maximum output across time slices) to serve this demand, for each commodity.

    • Follow step 3.2 above to determine the capacity mix for each commodity.

    • Continue this process, moving further upstream, until there are no commodity demands left to serve.

      Issue 4: There are circularities here, e.g. power system capacity is required to produce H2, but H2 can also be consumed in the power sector, so H2 capacity is needed to produce it, which in turn requires more power system capacity. One possible approach is to check whether the peak demand for each commodity has changed at the end of a run through all commodities, and if it has, run the capacity investment algorithm again for that commodity. Again, this is a heuristic solution that may lead to mathematical instabilities or poor quality solutions.

      Issue 5: What about commodities that are consumed but not produced, or produced but not consumed? Should this capacity investment step be done only for SED commodities? We should also check for processes that consume or produce non-balance commodities, check whether they can make money, and invest in them if they do - this requires a specific NPV objective.

  4. Decision-rule-based capacity decommissioning: Decommission assets that have a utilisation of zero after steps 3.2-3.3. These assets have become stranded. This could happen when, for example, carbon prices are high and emitting assets become unfavourable as a result (e.g. operating them is too expensive to compete with new technology, even though the latter's capital cost is included).
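A minimal sketch of the ranking loop referred to in step 3.2 (hypothetical types and field names, not the MUSE 2.0 API), in which candidates are taken in order of objective value until the remaining required capacity is covered, respecting each candidate's capacity-addition limit:

struct Candidate {
    name: String,
    objective_value: f64, // lower is better, e.g. levelised cost
    max_addition: f64,    // capacity-addition limit for this milestone year
}

/// Pick capacity additions in order of best (lowest) objective value until the
/// required capacity is met; each candidate is capped by its addition limit.
fn choose_capacity(mut candidates: Vec<Candidate>, mut required: f64) -> Vec<(String, f64)> {
    candidates.sort_by(|a, b| a.objective_value.total_cmp(&b.objective_value));
    let mut chosen = Vec::new();
    for c in candidates {
        if required <= 0.0 {
            break;
        }
        let added = required.min(c.max_addition);
        if added > 0.0 {
            chosen.push((c.name, added));
            required -= added;
        }
    }
    chosen
}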

4. Carbon budget solution (or CO2 price responsiveness)

Where a CO2 budget or price is specified, steps (2)-(3) are initially run with the CO2 price from the previous milestone year. After completion, we run dispatch with a CO2 budget equal to the user-prescribed level for the new milestone year (if one exists), and record the resulting CO2 price (the dual solution of the CO2 constraint). If the CO2 price is less than zero, we re-run dispatch without the budget constraint and set the CO2 price to zero. Alternatively, a user might specify a CO2 price for all or part of the time horizon and no carbon budget, in which case the model runs dispatch with the specified carbon price for each milestone year in steps (2)-(3) and no further processing is needed here.

If there is no solution to the dispatch optimisation, then the CO2 budget cannot be met. In this case we re-run dispatch without the budget constraint but with the CO2 price from the previous milestone year. We warn the user that the budget set was not met for the milestone year.
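A minimal sketch of this carbon budget logic (hypothetical types and a stubbed dispatch call, not the MUSE 2.0 API):

enum DispatchOutcome {
    Feasible { co2_price: f64 }, // dual value of the CO2 budget constraint
    Infeasible,
}

/// Determine the CO2 price for the milestone year, as described above.
fn resolve_co2_price(budget: Option<f64>, previous_price: f64) -> f64 {
    let Some(budget) = budget else {
        // No budget specified: dispatch is simply run with the user-specified
        // (or previous) CO2 price, so there is nothing more to do here.
        return previous_price;
    };
    match run_dispatch(Some(budget), previous_price) {
        DispatchOutcome::Feasible { co2_price } if co2_price >= 0.0 => co2_price,
        DispatchOutcome::Feasible { .. } => {
            // Budget not binding: re-run without it and set the CO2 price to zero.
            run_dispatch(None, 0.0);
            0.0
        }
        DispatchOutcome::Infeasible => {
            // The budget cannot be met: warn and fall back to the previous price.
            eprintln!("warning: CO2 budget could not be met for this milestone year");
            run_dispatch(None, previous_price);
            previous_price
        }
    }
}

// Stand-in for the dispatch optimisation itself.
fn run_dispatch(_budget: Option<f64>, _co2_price: f64) -> DispatchOutcome {
    DispatchOutcome::Feasible { co2_price: 0.0 }
}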

5. Find prices for next milestone year

The dispatch solution from step (4) determines the prices and final commodity consumption and production for the present milestone year, and we record these results. We use these prices and perform steps (2) and (3) above for the next milestone year, alongside calculated prices for any commodities not present in the system (as per step 2.5).

6. Recursively solve using steps (3)-(5) for each milestone year until end

The model then moves to the next milestone time period and repeats the process, beginning with prices from the last-solved time period. This process continues until the end of the time horizon is reached.

Issue 6: At this point we have commodity prices for every time period in the simulation. The model could then perform a "super-loop" where the entire process above is repeated, but agents have some foresight of commodity prices. Super-loops will be considered for inclusion in a later release of MUSE.

Dispatch Optimisation Formulation

Decision variables

\( q_{r,a,c,ts} \), where q represents the flow of commodity c in region r, to/from asset a, in time slice ts. Negative values are flows into the asset and positive values are flows out of the asset; q must be ≤0 for input flows and ≥0 for output flows. Note that q is a quantity flow (e.g. energy) as opposed to an intensity (e.g. power).

where

r = region

a = asset

c = commodity

ts = time slice
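Purely as an illustration (hypothetical layout, not the MUSE 2.0 API), a single decision variable and its sign convention could be represented as:

/// One decision variable q_{r,a,c,ts}.
struct CommodityFlow {
    region: String,
    asset: String,
    commodity: String,
    time_slice: String,
    /// Quantity flow (e.g. energy): less than or equal to zero for flows into
    /// the asset, greater than or equal to zero for flows out of the asset.
    quantity: f64,
}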

Objective function

$$ \min \sum_{r} \sum_{a} \sum_{c} \sum_{ts} cost_{r,a,c,ts} * q_{r,a,c,ts} $$

Where cost is a vector of cost coefficients representing the cost of each commodity flow.

$$ cost_{r,a,c,ts} = var\_ opex_{r,a,pacs} + flow\_ cost_{r,a,c} + commodity\_ cost_{r,c,ts} $$

var_opex is the variable operating cost for a PAC. If the commodity is not a PAC, this value is zero.

flow_cost is the cost per unit flow.

commodity_cost is the exogenous (user-defined) cost for a commodity. If none is defined for this combination of parameters, this value is zero.

NOTE: If the commodity flow is an input (i.e. flow <0), then the value of cost should be multiplied by −1 so that the impact on the objective function is positive.
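A minimal sketch of assembling one cost coefficient, including the sign handling described in the note above (hypothetical function, not the MUSE 2.0 API):

/// Cost coefficient for one (region, asset, commodity, time slice) flow.
/// `var_opex` applies only if the commodity is a PAC; `commodity_cost` only if
/// an exogenous cost is defined. Pass zero otherwise.
fn cost_coefficient(var_opex: f64, flow_cost: f64, commodity_cost: f64, is_input: bool) -> f64 {
    let cost = var_opex + flow_cost + commodity_cost;
    // Input flows have q <= 0, so flip the sign to keep the contribution
    // to the minimised objective positive.
    if is_input { -cost } else { cost }
}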

Constraints

Issue 1: It would reduce the size of the optimisation problem if all assets of the same type in the same region were grouped together in constraints (to reduce the number of constraints). However, this approach would also complicate pre- and post-optimisation processing, which would need to unpick grouped assets and allocate them back to their agent owners.

Asset-level input-output commodity balances

Non-flexible assets

Assets where the ratios between output/s and input/s are fixed. Energy commodity inputs and outputs are proportional to the first-listed primary activity commodity, at the time slice level defined for each commodity. Each input/output ratio is a fixed value.

For each r, a, ts, c:

$$ \frac{q_{r,a,c,ts}}{flow_{r,a,c}} - \frac{q_{r,a,pac1,ts}}{flow_{r,a,pac1}} = 0 $$

for all commodity flows that the process has (except pac1), where pac1 is the first-listed primary activity commodity for the asset (i.e. all input and output flows are made proportional to the pac1 flow).

TBD - cases where time slice level of the commodity is seasonal or annual.

Commodity-flexible assets

Assets where the ratio of input/s to output/s can vary for selected commodities, subject to user-defined ratios between inputs and outputs.

Energy commodity inputs and outputs are constrained such that the ratio of total inputs to total outputs of the selected commodities is limited to user-defined values. Furthermore, each commodity input or output can be limited to a range relative to other commodities.

For each r, a, c, ts:

(TBD)

for all c that are flexible commodities. “in” refers to input flow commodities (i.e. with a negative sign), and “out” refers to output flow commodities (i.e. with a positive sign).

Asset-level capacity and availability constraints

Output of the primary activity commodity/ies must not exceed the asset's capacity or any other limit defined by the user's availability factor constraints.

For the capacity limits, for each r, a and ts, the sum over all PACs must be less than the asset's capacity:

$$ \sum_{pacs} \frac{q_{r,a,c,ts}}{capacity\_ a_{a} * time\_ slice\_ length_{ts}} \leq 1 $$

For the availability constraints, for each r, a and ts:

$$ \sum_{pacs} \frac{q_{r,a,c,ts}}{capacity\_ a_{a} * time\_ slice\_ length_{ts}} \leq process.availability.value(up)_{r,a,ts} $$

$$ \sum_{pacs} \frac{q_{r,a,c,ts}}{capacity\_ a_{a} * time\_ slice\_ length_{ts}} \geq process.availability.value(lo)_{r,a,ts} $$

$$ \sum_{pacs} \frac{q_{r,a,c,ts}}{capacity\_ a_{a} * time\_ slice\_ length_{ts}} = process.availability.value(fx)_{r,a,ts} $$

The sum over all PACs must be within the asset's availability bounds. Similar constraints also limit the output of PACs to respect availability constraints at time slice, seasonal or annual levels, with an appropriate selection of q on the LHS to match the temporal granularity of the RHS.

Note: Where availability is specified for a process at daynight time slice level, it supersedes the capacity limit constraint (i.e. you don't need both).

Commodity balance constraints

Commodity supply-demand balance for the whole system (or for a single region or set of regions). For each internal commodity that requires a strict balance (supply equals demand, SED), this is an equality constraint with a coefficient of 1 for each relevant commodity flow and an RHS of 0. Note there is also a special case where the commodity is a service demand (e.g. Mt of steel produced), where the net sum of outputs must equal the demand.

For supply-demand balance commodities, for each r and each c:

$$\sum_{a,ts} q_{r,a,c,ts} = 0$$

For a service demand, for each c, within a single region:

$$\sum_{a,ts} q_{r,a,c,ts} = cr\_ net\_ fx$$

Where c is a service demand commodity and cr_net_fx is the exogenous (user-defined) demand for the given time slice selection. Note that the ts to be summed over will differ depending on the specified time slice level for the commodity: if the time slice level is annual, the sum is over every time slice; if it is season, there are separate constraints for each season; and if it is time_slice, there are separate constraints for every individual time slice.
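For illustration, if the time slice level of a service demand commodity is seasonal, there is one such constraint per region r and season s, of the form (assuming the demand cr_net_fx is specified per region, commodity and season):

$$ \sum_{a} \sum_{ts \in s} q_{r,a,c,ts} = cr\_ net\_ fx_{r,c,s} $$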

TBD – commodities that are consumed (so sum of q can be a negative value). E.g. oil reserves.
TBD – trade between regions.

Asset-level commodity flow share constraints for flexible assets

Restricts the share of flow amongst a set of specified flexible commodities. Constraints can be constructed for the input side of processes, the output side, or both.

$$ q_{r,a,c,ts} \leq process.commodity.constraint.value(up)_{r,a,c,ts} * \left( \sum_{flexible\ c} q_{r,a,c,ts} \right) $$

$$ q_{r,a,c,ts} \geq process.commodity.constraint.value(lo)_{r,a,c,ts} * \left( \sum_{flexible\ c} q_{r,a,c,ts} \right) $$

$$ q_{r,a,c,ts} = process.commodity.constraint.value(fx)_{r,a,c,ts} * \left( \sum_{flexible\ c} q_{r,a,c,ts} \right) $$

These constraints could be used to define flow limits on specific commodities in a flexible process: for example, a refinery that can produce gasoline, diesel or jet fuel, but where, for a given crude oil input, only a limited amount of jet fuel can be produced and the remainder of production must be either diesel or gasoline.
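For instance, with an illustrative upper share value of 0.3 on jet fuel output, the first constraint above would read:

$$ q_{r,a,jet,ts} \leq 0.3 \left( q_{r,a,gasoline,ts} + q_{r,a,diesel,ts} + q_{r,a,jet,ts} \right) $$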

Other net and absolute commodity volume constraints

Net constraint: For example, there might be a net CO2 emissions limit of zero in 2050, or even a negative value. The constraint is applied to both outputs and inputs of the commodity; their sum must be less than (or equal to, or more than) a user-specified value. For a system-wide net commodity production constraint, for each c, sum over regions, assets and time slices:

$$\sum_{r,a,ts} q_{r,a,c,ts} \leq commodity.constraint.rhs\_ value(up)$$

$$\sum_{r,a,ts} q_{r,a,c,ts} \geq commodity.constraint.rhs\_ value(lo)$$

$$\sum_{r,a,ts} q_{r,a,c,ts} = commodity.constraint.rhs\_ value(fx)$$

Similar constraints can be constructed for net commodity volume over specific regions or sets of regions.

Production or consumption constraint: Likewise, similar constraints can be constructed to limit absolute production or absolute consumption. In these cases, a selective choice of q focused on process inputs (consumption) or process outputs (production) can be applied.

Model diagrams

This document contains diagrams showing the algorithm used by MUSE 2.0. It is likely to contain errors and omissions and will change as the code is developed. It is principally aimed at MUSE developers.

Functions are described with the following terms:

  • Inputs: immutable input arguments; values not modified by function
  • Outputs: values returned from function
  • Modifies: mutable input arguments; values modified by function

Overview of MUSE 2.0
Figure 1: Overview of MUSE 2.0 algorithm

Dispatch optimisation
Figure 2: Overview of dispatch optimisation

Glossary

Activity: The flow of input/s or output/s of a Process that are limited by its capacity. For example, a 500MW power station can output 500MWh per hour of electrical power, or a 50MW electrolyser consumes up to 50MWh per hour of electrical power to produce hydrogen. The Primary Activity Commodity specifies which output/s or input/s are linked to the Process capacity.

Agent: A decision-making entity in the system. An Agent is responsible for serving a user-specified portion of a Commodity demand or Service Demand. Agents invest in and operate Assets to serve demands and produce commodities.

Agent Objective/s: One or more objectives that an Agent considers when deciding which Process to invest in. Objectives can be economic, environmental, or others.

Asset: Once an Agent makes an investment, the related capacity of their chosen Process becomes an Asset that they own and operate. An Asset is an instance of a Process; it has a specific capacity and a decommissioning year. A set of Assets must exist in the base year sufficient to serve base year demands (i.e. a calibrated base year, based on user input data).

Availability: The maximum, minimum or fixed percentage of maximum output (or input) that a Process delivers over a period. The period could be a single time slice, a season, or a year.

Base Year: The starting year of a model run. The base year is typically calibrated to known data, including Process stock and commodity consumption/production.

Calibration: The act of ensuring that the model represents the system being modelled in a historical base year.

Capacity: The maximum output (or input) of an Asset, as measured by units of the Primary Activity Commodity.

Capital Cost: The overnight capital cost of a process, measured in units of the Primary Activity Commodity divided by CAP2ACT. CAP2ACT is a factor that converts 1 unit of capacity to maximum activity of the primary activity commodity/ies per year. For example, if capacity is measured in GW and activity is measured in PJ, CAP2ACT for the process is 31.536 because 1 GW of capacity can produce 31.536 PJ energy output in a year.
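For illustration, the value 31.536 quoted above follows from the number of seconds in a year:

$$ 1\,\mathrm{GW} \times 8760\,\mathrm{h} \times 3600\,\mathrm{s/h} = 31\,536\,000\,\mathrm{GJ} = 31.536\,\mathrm{PJ} $$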

Commodity: A substance (e.g. CO2) or form of energy (e.g. electricity) that can be produced and/or consumed by Processes in the model. A Service Demand is a type of commodity that is defined at the end point of the system.

Commodity Cost: Represents a tax, levy or other external cost on a commodity. Commodity costs can be applied to all commodity production (sum of output of all processes for that commodity), net production (sum of output and input for all processes), or all consumption (sum of input for all processes). It can also be negative, indicating an incentive on commodity production/consumption/net.

Decision Rule: The rule via which an Agent uses the Objective/s to decide between Process options to invest in. Examples include single objective, weighted sum between multiple objectives, or epsilon constraint where a secondary objective is considered if two options with similar primary objectives are identified.

Dispatch: The way in which Assets are operated to serve demand. MUSE 2.0 uses merit order dispatch, subject to Availability and other constraints that can be defined by the user.

End Year: The final year in the model time horizon.

Equivalent Annual Cost (EAC): An Agent objective, representing the annualised cost of serving all or part of an Agent's demand for a year, considering the Asset's entire lifetime.

Fixed Operating Cost: The Asset or Process annual operating cost charged per unit of capacity.

Input Commodity/ies: The commodities that flow into a Process.

Levelised Cost of X (LCOX): An Agent objective, representing the discounted cost of 1 unit of output commodity X from a process over its lifetime under a specified discount rate.

Lifetime: The lifetime of a Process, measured in years.

Milestone Years: A set of years in the model time horizon where model results are recorded. For example, with a 2025 Base Year and End Year 2100, a user might choose to record outputs in 5-year steps.

Merit Order: A method of operating Assets in which the cheapest is dispatched first, followed by the next cheapest, and so on, until demand is served. Also called "unit commitment."

Output Commodity/ies: The commodities that flow out of a Process.

Primary Activity Commodity (PAC): The PACs specify which output/s are linked to the Process capacity. The combined output of all PACs cannot exceed the Asset's capacity. A user can define which output/s are PACs. Most, but not all Processes will have only one PAC.

Process: A blueprint of an available Process that converts input commodities to output commodities. Processes have economic attributes of capital cost, fixed operating cost per unit capacity, non-fuel variable operating cost per unit activity, and risk discount rate. They have physical attributes of quantity and type of input and output commodities (which implicitly specify efficiency), Availability limits (by time slice, season and/or year), lifetime (years). When a Process is selected by an Agent for investment an instance of it called an Asset is created.

Region: A geographical area that is modelled. Regions primarily determine trade boundaries.

Season: A year is usually broken down into seasons in the model. For example, summer, winter, other.

Sector: Models are often broken down into sectors, each of which is associated with specific Service Demands or specific Commodity production. For example, the residential sector, the power sector, etc.

Service Demand: A Service Demand is a type of commodity that is consumed at the boundary of the modelled system. For example, tonne-kilometers of road freight, PJ of useful heat demand, etc.

Discount Rate: The discount rate used to calculate any process-specific agent economic objectives that require a discount rate. For example, Equivalent Annual Cost, Net Present Value, Levelised Cost of X, etc.

Time Horizon: The overall period modelled. For example, 2025–2100.

Time Period: Refers to a specific Milestone Year in the time horizon.

Time Slice: The finest time period in the model. The maximum time slice length is 1 year (where a model does not represent seasons or within-day (diurnal) variation). A typical model will have several diurnal time slices, and several seasonal time slices.

Utilisation: The percentage of an Asset's capacity that is actually used to produce Primary Activity Commodities. It must be between 0 and 1, and can be measured at the time slice, season, or year level.

Variable Operating Cost: The variable operating cost charged per unit of input or output of the Primary Activity Commodity of the Process.

Developer Guide

This is a guide for those who wish to contribute to the MUSE 2.0 project or make local changes to the code.

The API documentation is available here.

Installing the Rust toolchain

We recommend that developers use rustup to install the Rust toolchain. Follow the instructions on the rustup website.

Once you have done so, select the stable toolchain (used by this project) as your default with:

rustup default stable

As the project uses the latest stable toolchain, you may see build errors if your toolchain is out of date. You can update to the latest version with:

rustup update stable

Installing C++ tools for HiGHS

The highs-sys crate requires a C++ compiler and cmake to be installed on your system. These may be installed already, but if you encounter errors during the build process for highs-sys (e.g. "Unable to find libclang"), you should follow the instructions here under "Building HiGHS".

Working with the project

To build the project, run:

cargo build

Note that if you just want to build-test the project (i.e. check for errors and warnings) without building an executable, you can use the cargo check command, which is much faster.

To run MUSE 2.0 with the "simple" example, you can run:

cargo run run examples/simple

(Note the two runs. The first is for cargo and the second is passed as an argument to the built muse2 program.)

Tests can be run with:

cargo test

More information is available in the official cargo book.

Checking test coverage

We use Codecov to check whether pull requests introduce code without tests.

To check coverage locally (i.e. to make sure newly written code has tests), we recommend using cargo-llvm-cov.

It can be installed with:

cargo install cargo-llvm-cov

Once installed, you can use it like so:

cargo llvm-cov --open

This will generate a report in HTML format showing which lines are not currently covered by tests and open it in your default browser.

Developing the documentation

We use mdBook for generating technical documentation.

If you are developing the documentation locally, you may want to check that your changes render correctly (especially if you are working with equations).

To do this, you first need to install mdBook:

cargo install mdbook

You can then view the documentation in your browser like so:

mdbook serve -o

Pre-Commit hooks

Developers must install the pre-commit tool in order to automatically run this repository's hooks when making a new Git commit. Follow the instructions on the pre-commit website in order to get started.

Once you have installed pre-commit, you need to enable its use for this repository by installing the hooks, like so:

pre-commit install

Thereafter, a series of checks should be run every time you commit with Git. In addition, the pre-commit hooks are also run as part of the CI pipeline.

Note: you may get errors due to the clippy hook failing. In this case, you may be able to automatically fix them by running cargo clipfix (which we have defined as an alias in .cargo/config.toml).