Akash Distribution Observability Proposal

Hello everyone. The Akash ecosystem keeps growing as more services and tools are built on it, with more joining each month to increase the network's distribution channels. Currently there is no way to transparently measure the usage of the many projects/tools built for the Akash Network, or to quantitatively measure their share within this cloud computing marketplace. This proposal's objective is to start a discussion on a standard for providing observability into the usage of the several community projects. That data should be shared in a format that can be easily deserialised by monitoring systems and third-party tools. The goal is to easily answer questions like:

  • Based on all the transactions from x timeframe, which ones were made through Cloudmos Deploy?
  • How many transactions are being requested by the Terraform Provider?
  • Which project is getting the most failed transactions?


Usage data should be stored on-chain and be publicly available. To achieve this, the note field can be used as the container for the payload, in a way that would still allow it to be used for other purposes. Projects within the ecosystem would set the note field to enrich their transactions with information specific to each tool, such as an identifier of the tool, the version of the payload's schema, the version of the tool, and other attributes.

The payload for the observability data should be easy to separate from the remaining content of the transaction message. One way to achieve this is through a delimiter that encapsulates the payload. A delimiter can be any combination of characters, such as @@, so a message could look like: This is a transaction @@<payload>@@.
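As a sketch of how a consumer might separate the payload from the rest of the memo (the @@ delimiter comes from the proposal above; the function name and signature are illustrative assumptions, not part of it):

```python
from typing import Optional

def extract_payload(memo: str, delimiter: str = "@@") -> Optional[str]:
    """Return the text between the first pair of delimiters, or None if absent."""
    start = memo.find(delimiter)
    if start == -1:
        return None
    start += len(delimiter)
    end = memo.find(delimiter, start)
    if end == -1:
        return None  # unmatched delimiter, treat as no payload
    return memo[start:end]
```

For the example message above, `extract_payload("This is a transaction @@<payload>@@")` yields `"<payload>"`, while a memo with no delimiter pair yields `None`.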

Payload schema

As an initial proposal for the schema of the observability payload, I defined the following JSON object:

	{
		"schemaVersion": "number",
		"distributor": "string",
		"version": "string"
	}

A payload from the Akash Terraform Provider would look like this: {"schemaVersion": 1, "distributor": "akash_terraform_provider", "version": "0.0.5"}.
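A minimal sketch of producing and reading a payload that follows the proposed schema (the helper names and the validation logic are assumptions for illustration):

```python
import json

SCHEMA_VERSION = 1  # version of the payload schema proposed above

def build_payload(distributor: str, version: str) -> str:
    """Serialise an observability payload following the proposed schema."""
    return json.dumps(
        {
            "schemaVersion": SCHEMA_VERSION,
            "distributor": distributor,
            "version": version,
        },
        separators=(",", ":"),  # compact form to keep the memo small
    )

def parse_payload(raw: str) -> dict:
    """Deserialise a payload string and check the expected fields are present."""
    data = json.loads(raw)
    for field in ("schemaVersion", "distributor", "version"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data
```

With this sketch, `build_payload("akash_terraform_provider", "0.0.5")` produces the example payload above, and `parse_payload` round-trips it back into a dictionary that monitoring systems could aggregate.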

Projects of interest

As a standard, this data should initially be provided by the key distributors in the ecosystem behind these projects, to serve as an example and to define the baseline for adoption of this framework. These projects are:

  • Cloudmos Deploy
  • Praetor
  • Spheron
  • Akash Terraform Provider
  • Akash Console


Projects that adopt this standard must not submit any data that would be deemed private user data, such as IPs, hostnames and location.

Please comment with your thoughts and improvements on this proposal so it can be refined into a final version to be implemented if you find it valuable.


Thanks for writing this up Luna - really great initiative. I have just a couple comments for now:

  1. When I think “observability” I think more “metrics, logs, traces” and the things that fall under “monitoring the application/infrastructure for failures”, while what you described above falls more under the “analytics” category. Now, there inherently isn’t anything in the JSON object that indicates it is meant for analytics or observability (you are just calling it a “note”), so it isn’t a problem per se, but I would be curious whether we should make it more concrete and call it “analytics_metadata” or “observability_metadata”?
  2. I think there is a minor typo with note and node in the paragraph describing the payload container - would be good to edit/clarify which field is meant.

Your comment makes total sense. It is indeed a typo: I meant note, as in the note field of the transaction, but it makes sense for it to be a different field in the transaction.


I like the idea of analyzing the usage of different tools.
But I think adding an “analytics_metadata” field to deployment transactions could be used by providers to blacklist tools (or vice versa), or to require a specific metadata value.
This could destroy the idea of one large decentralized compute marketplace.

I think with this in mind it will be necessary to somehow encrypt the metadata on deployment/bid creation and reveal it only after closure. This will delay the data, so no real-time analytics will be possible, but we won’t have to worry about abuse of this data.
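One way the delayed-reveal idea above could work is a hash commitment: at creation time only a digest of the payload (plus a random nonce) goes on-chain, and after closure the payload and nonce are published so anyone can verify them against the digest. This is a sketch under that assumption; the thread does not specify a concrete mechanism, and all names here are illustrative.

```python
import hashlib
import os

def commit(payload: str) -> tuple:
    """At creation time, publish only the digest; keep payload and nonce private."""
    nonce = os.urandom(16)  # random nonce prevents brute-forcing small payload spaces
    digest = hashlib.sha256(nonce + payload.encode()).hexdigest()
    return digest, nonce

def verify_reveal(digest: str, nonce: bytes, payload: str) -> bool:
    """After closure, anyone can check the revealed payload matches the commitment."""
    return hashlib.sha256(nonce + payload.encode()).hexdigest() == digest
```

Providers would only ever see the opaque digest while the deployment is live, so they could not filter bids by tool, yet analytics systems could still verify and aggregate payloads once revealed.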


Few things to consider:

  • If the strings can be entered in any way, it can be gamed by trolls.
  • Although not a big deal, optimizing the data can be considered. (e.g. which bytes to use, what they mean, etc)
  • If this is intended for troubleshooting failed transactions, or bugs in the software that are not chain-related, another data field could describe how to replicate the issue without infringing on users’ privacy.
  • Regarding what @zJu is saying, everything can be encrypted one-way (hashed) so that only the tool maintainers know what the data means, regardless of time locks. However, it will still be possible to differentiate between transactions with and without extra data, even though that won’t tell anything specific.



Cool idea, here are my thoughts…

  1. If it’s to be entered in the memo, what’s stopping someone from fabricating the same data with another tool or CLI?
  2. We currently track failed transactions anonymously and have fixed some bugs through our internal error handling. What’s the ultimate purpose, other than simply comparing which tool has the most usage on the network?


  1. I haven’t thought about a way to stop someone from fabricating this data, but what would really be the purpose of faking it? As the owner of a distribution channel (a tool that makes transactions on-chain), this data is more valuable to me if I do not fake it; otherwise it won’t reflect the real usage.

  2. There are many use cases for this apart from comparing the tool usage.

  • For example, you could monitor the rollout of a new version of your tool and track the adoption it gets.
  • Verify which versions of a specific distribution channel are being used and act upon it, such as incentivize updates from older versions.
  • Monitor in real time if your application is producing invalid transactions.

And many more I can’t think of now. Do you have any use cases you might think about?

Your first point makes sense, but the note field already works this way, for example. The logic to publish this data would live in the binary/library making the transactions, so end-users wouldn’t necessarily know about this process, and it would likely not be a concern for them as long as it respects their privacy.
The idea is not to troubleshoot failing transactions but provide some analytical data at a higher level for any team/person interested in analyzing such data through third-party software.
@zJu makes a good point, but if we encrypt the data, how would third-party systems read it? I think it is in our interest to keep the principles of openness and transparency on the network.

Thank you for your feedback!

  1. I think monitoring the rollout of a version of a tool should be the responsibility of the tool owner and not something that needs to be open and public. Same thing for failed transactions.

Personally I don’t see it as something meaningful to implement as a standard on the network, especially at the current state of adoption. It could be a cool experiment to see how and who is driving more adoption, but that’s pretty much all I see it being used for.