Priority impact measurement

An approach for measuring the amount of impact generated by priorities that get addressed


Priority impact measurement focuses on determining how much impact has been generated once a priority has been addressed.

Moderate impact measurability (Score - 3)

  • Total impact measuring complexity - There are many potential areas of impact that a priority could help with. Addressing a priority can also create impact outside of the intended areas of focus, meaning it can be difficult to fully capture the total amount of impact that was generated.

  • Ease of measurement - Most of the quantifiable areas of potential impact should be easy enough to track. However, these numbers still need interpretation, as Web3 ecosystems can have many different priorities and ideas being executed at once, alongside other changing environmental factors. This makes it more difficult to attribute certain outcomes to a single priority that has recently been addressed.

  • Comparability - Comparing the impact between different priorities can be highly complex, as many different ideas could help to address a single priority. If each priority has numerous executed ideas, there can be a large range of data points and considerations involved in determining which priorities were the most impactful when they were addressed.

Moderate future impact opportunity (Score - 3)

  • Usefulness for future decisions - A priority that generated a large amount of impact once addressed does not necessarily mean an ecosystem can create a new priority that will generate a similar amount of impact. The environment and preferences of an ecosystem can change over time, and what has worked previously in one ecosystem will not definitely work again in another. Some priorities may prove consistently more effective than others at generating impact, while others may be far more wide ranging in the impact they generate.

  • Repeatability - There is at least some repeatability in situations where priorities address certain market segments and prove more effective and impactful than other priorities. These priorities could potentially be extended and updated to take advantage of the full opportunity available. However, in some cases this impact generation could be difficult to repeat. The challenge will be determining the next logical prioritisation steps for the different market areas where an ecosystem has seen success.

Moderate game theory risks (Score - 3)

  • Manipulated outcomes - Most of the quantifiable metrics could potentially be manipulated with bots or external actors. The larger the metric, the more complex or costly it becomes to hide any manipulated actions. For example, inflating the total number of transactions or chain utilisation could have far higher cost implications for anyone generating those extra transactions. Bad actors could also try to influence qualitative measurement approaches by purposefully gathering certain feedback and responses to sway others in the community.
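The way manipulation cost scales with metric size can be illustrated with a rough sketch. All gas and price figures below are hypothetical assumptions for illustration, not real chain data:

```python
# Rough estimate of the cost a bad actor would face when inflating a
# transaction-count metric. All figures are hypothetical assumptions.

def manipulation_cost(extra_txs: int, gas_per_tx: int, gas_price_gwei: float) -> float:
    """Return the cost, in the chain's native token, of submitting extra transactions."""
    gwei_per_token = 1e9  # Ethereum-style denomination: 1 token = 10^9 gwei
    return extra_txs * gas_per_tx * gas_price_gwei / gwei_per_token

# Inflating the metric by 1,000 simple transfers (21,000 gas each) at 20 gwei:
small = manipulation_cost(1_000, 21_000, 20)      # 0.42 tokens
# Inflating it by 1,000,000 transfers scales the cost by 1,000x:
large = manipulation_cost(1_000_000, 21_000, 20)  # 420.0 tokens
print(small, large)
```

The linear scaling is the point: a metric large enough to look organic becomes proportionally expensive to fake.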

  • Exaggerated outcomes - Bad actors could conduct research and generate analytical resources that are biased towards identifying certain outcomes that could benefit themselves in future disbursement decisions. As more data becomes available, these actors could cherry-pick information that exaggerates a narrative aligned with what they are trying to influence.

Moderate impact verification time required (Score - 3)

  • Automated verification - Many of the quantifiable metrics can be automatically verified, as the information is stored on immutable ledgers and blockchains. Tools can help reduce the time it takes to retrieve this data, format it and then present it in a digestible format.
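As a sketch of what automated verification of a quantifiable metric could look like, the snippet below groups per-block transaction counts into daily totals. In practice the block records would come from a node RPC endpoint or an indexer; here they are hypothetical mock data:

```python
from collections import defaultdict
from datetime import datetime, timezone

# Mock block records; real data would be fetched from an RPC node or indexer.
blocks = [
    {"timestamp": 1_700_000_000, "tx_count": 120},
    {"timestamp": 1_700_003_600, "tx_count": 95},
    {"timestamp": 1_700_090_000, "tx_count": 210},
]

def daily_tx_totals(blocks):
    """Group per-block transaction counts into per-day (UTC) totals."""
    totals = defaultdict(int)
    for block in blocks:
        day = datetime.fromtimestamp(block["timestamp"], tz=timezone.utc).date().isoformat()
        totals[day] += block["tx_count"]
    return dict(totals)

print(daily_tx_totals(blocks))
```

A pipeline like this turns raw ledger data into a digestible daily series, which is the part that can run without manual effort; interpreting what drove the numbers still requires human judgement.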

  • Manual verification - The qualitative areas of impact measurement will need a more organised and collective effort to record, analyse and verify any relevant information. Tools could eventually help with identifying biases, flaws and potential conflicts of interest that emerge among the contributors who participate in collecting this information.

Total score = 12 / 20
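The total above is simply the sum of the four criterion scores. A minimal sketch of the aggregation, assuming each criterion is scored out of a maximum of 5 (implied by the 20-point total):

```python
# Assembling the total score from the four criterion scores above.
# The per-criterion maximum of 5 is an assumption implied by the 20-point total.
criteria = {
    "Impact measurability": 3,
    "Future impact opportunity": 3,
    "Game theory risks": 3,
    "Impact verification time required": 3,
}
max_per_criterion = 5

total = sum(criteria.values())
maximum = max_per_criterion * len(criteria)
print(f"Total score = {total} / {maximum}")  # Total score = 12 / 20
```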
