An objective comparison of smart contract platforms
Removing opinion & ideology, can we rank blockchains based on quantifiable characteristics alone?
Disclaimer: Omnichain Capital has positions in many of the tokens mentioned in this report. This analysis is meant for informational purposes only and should not be considered investment advice.
One of the most important aspects of investing is intellectual honesty — removing all bias, using a consistent framework, and making fair conclusions based on the best available information. The goal isn’t to “be right” but to figure out what’s objectively true.
It's also important to understand the difference between a good asset and a good investment. Even the worst-quality assets can be good investments at the right price. And likewise, the highest-quality assets can be bad investments at the wrong price. Again, removing personal bias with a consistent framework can uncover these nuances.
The analysis below ranks smart contract platforms based on a scoring system derived from six categories (explained in more detail below). Admittedly, many of the decisions I made are subjective (choice of categories, subcategories, and scoring formula), but it creates a consistent and repeatable system to evaluate blockchains, and I hope to continuously improve the model and reduce bias over time.
The point of this post isn’t to shill our portfolio or publish a definitive ranking system for these tokens — this analysis is simply a starting point for further research. My motivation for publishing this is to hear feedback on the methodology, suggestions for different metrics, and push-back on any of my logic.
As I explain further below, there are many qualitative network characteristics that are hard to quantify, so again this system is never meant to be a “be-all, end-all” analysis, but rather a way to screen investment ideas, test assumptions, and keep investors honest and consistent.
With all that said, let's dive in:
In general, the model works as follows:
Each token receives a score based on six categories
Each category has several subcategories
Subcategories rank tokens from 0-100% based on various metrics (100% being the best)
Category scores are the average subcategory score per token
Total score is the weighted average of all six categories (a rough sketch of this math follows the list)
The total score is relative and means nothing on its own
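Here's a rough sketch of the scoring math in Python. The token names and numbers are made up, and I'm assuming a simple min-max normalization to turn raw metrics into 0-100% subcategory scores; the direction flag handles inputs where lower is better, like time to finality or fully-diluted market cap.

```python
# Rough sketch of the scoring math (all numbers illustrative)

CATEGORY_WEIGHTS = {
    "decentralization": 0.15,
    "security": 0.15,
    "scalability": 0.10,
    "developer_activity": 0.25,
    "user_adoption": 0.25,
    "relative_value": 0.10,
}

def normalize(raw, higher_is_better=True):
    """Turn a {token: raw metric} dict into 0-1 subcategory scores via min-max."""
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0  # avoid dividing by zero when every token ties
    scores = {t: (v - lo) / span for t, v in raw.items()}
    if not higher_is_better:  # e.g. insider allocation, time to finality, FDV
        scores = {t: 1.0 - s for t, s in scores.items()}
    return scores

def category_score(token, subcategories):
    """Category score = average of the token's subcategory scores."""
    return sum(sub[token] for sub in subcategories) / len(subcategories)

def total_score(token, categories):
    """Total score = weighted average of the six category scores."""
    return sum(CATEGORY_WEIGHTS[name] * category_score(token, subs)
               for name, subs in categories.items())

# Illustrative use with two made-up subcategories in one category:
validators = normalize({"TokenA": 10_000, "TokenB": 1_500, "TokenC": 400})
insider_pct = normalize({"TokenA": 0.15, "TokenB": 0.45, "TokenC": 0.30},
                        higher_is_better=False)
decentralization = {t: category_score(t, [validators, insider_pct])
                    for t in validators}
```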
Note: I consider the tokens shaded in blue to be “layer 1.5” — they are either dependent on another network’s security (L2s) or ecosystem (app-chains).
The first three categories attempt to quantify the three pillars of the blockchain trilemma:
[Diagram: the blockchain trilemma (source: Ledger)]
Category 1: Decentralization
Data inputs (source)
Number of validators (various blockchain explorers); higher number = higher score.
Initial percentage token distribution to insiders (Messari, token supply schedules); higher percentage = lower score.
Minimum validator hardware cost (hardware specs from project documentation; cost per CPU instance ($150), GB of memory ($4), and GB of storage ($0.02) from various online sources); higher cost = lower score.
Would specifically love feedback on the compute component pricing. Had trouble finding validator hardware cost directly, so specs × component cost seemed like the cleanest way to calculate it (a worked example follows below).
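To make the calc concrete, here's how specs × component cost works out. The example specs are made up; only the per-component prices come from the inputs above.

```python
# Per-component prices used in the model (from various online sources)
CPU_INSTANCE_COST = 150.00    # $ per CPU instance
MEMORY_COST_PER_GB = 4.00     # $ per GB of memory
STORAGE_COST_PER_GB = 0.02    # $ per GB of storage

def min_hardware_cost(cpu_instances, memory_gb, storage_gb):
    """Estimate minimum validator hardware cost as specs x component cost."""
    return (cpu_instances * CPU_INSTANCE_COST
            + memory_gb * MEMORY_COST_PER_GB
            + storage_gb * STORAGE_COST_PER_GB)

# Made-up example: 8 CPU instances, 16 GB memory, 1,024 GB storage
# 8 * $150 + 16 * $4 + 1,024 * $0.02 = $1,200 + $64 + $20.48 = $1,284.48
print(min_hardware_cost(8, 16, 1024))
```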
Weighting
15% — in line with equal weight (100% / 6 categories ≈ 16.7% each).
Additional inputs to add
Nakamoto coefficient, i.e. the minimum number of validators that together control enough stake to disrupt consensus (a sketch of the calculation follows this list)
Governance (still thinking through how to quantify the different qualitative nuances)
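The Nakamoto coefficient calculation would look roughly like this. I'm assuming the one-third-of-stake threshold relevant to BFT-style consensus (other thresholds work the same way), and the stake distribution is made up.

```python
def nakamoto_coefficient(stakes, threshold=1 / 3):
    """Smallest number of validators whose combined stake exceeds the threshold."""
    total = sum(stakes)
    running, count = 0.0, 0
    for stake in sorted(stakes, reverse=True):  # start with the largest validators
        running += stake
        count += 1
        if running > threshold * total:
            return count
    return count

# Made-up stake distribution out of 100 total units
print(nakamoto_coefficient([30, 25, 15, 10, 10, 10]))  # -> 2 (30 + 25 > 33.3)
```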
Category 2: Security
Data inputs (source)
Current dollar value of tokens staked on the network (stakingrewards.com, various network explorers); higher value = higher score.
Weighting
15% — equal weight.
Additional inputs to add
Thinking through how to quantify different consensus mechanisms as they pertain to security.
Category 3: Scalability
Data inputs (source)
Time to finality (various online sources); lower value = higher score.
Hardware cost (see decentralization); although for scalability, higher cost = higher score.
I purposely left out TPS from the analysis. This article by Kelvin Fichter (link) does a great job explaining why I’ve always been skeptical of the metric.
Weighting
10% — underweight given less certainty in the relevancy of the data.
Additional inputs to add
Compute/gas per block? I have yet to come across a good metric to consistently measure throughput capacity across blockchains.
The last three categories are more quantifiable by nature, so the inputs are more straightforward IMO:
Category 4: Developer Activity
Data inputs (source)
Stars (GitHub); higher number = higher score.
Watchers (GitHub); higher number = higher score (a sketch for pulling both GitHub metrics follows this list).
Weekly commits (GokuStats); higher number = higher score.
Weekly developers (GokuStats); higher number = higher score.
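For anyone who wants to pull the GitHub numbers themselves, something like this works against the public REST API (the repo slugs below are just examples, and unauthenticated requests are rate-limited):

```python
import requests

def github_stats(repo_slug):
    """Fetch stars and watchers for an "owner/repo" slug from GitHub's REST API."""
    resp = requests.get(f"https://api.github.com/repos/{repo_slug}",
                        headers={"Accept": "application/vnd.github+json"},
                        timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # "watchers_count" mirrors stars for historical reasons;
    # "subscribers_count" is the watcher figure shown in the GitHub UI.
    return {"stars": data["stargazers_count"], "watchers": data["subscribers_count"]}

for repo in ["ethereum/go-ethereum", "solana-labs/solana"]:
    print(repo, github_stats(repo))
```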
Weighting
25% — overweight because I have higher confidence in the relevancy of the input data.
Additional inputs to add
Open to suggestions.
Category 5: User Adoption
Data inputs (source)
Daily active addresses (GokuStats); higher number = higher score.
# of DeFi dApps (DefiLlama); higher number = higher score.
TVL (DefiLlama); higher number = higher score (a fetching sketch follows this list).
Stablecoin market cap (DefiLlama); higher number = higher score.
Trailing 90-day total revenue (Token Terminal); higher number = higher score.
Twitter followers (GokuStats, Twitter); higher number = higher score.
Reddit members (Messari, Reddit); higher number = higher score.
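The DefiLlama inputs can be scripted the same way; this sketch uses the public chains endpoint and assumes the response keeps its current shape (a name and a tvl field per chain):

```python
import requests

def chain_tvl(chain_names):
    """Fetch current TVL per chain from DefiLlama's public API."""
    resp = requests.get("https://api.llama.fi/v2/chains", timeout=10)
    resp.raise_for_status()
    tvl_by_name = {chain["name"]: chain["tvl"] for chain in resp.json()}
    return {name: tvl_by_name.get(name) for name in chain_names}

print(chain_tvl(["Ethereum", "Solana", "Avalanche"]))
```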
Weighting
25% — similarly overweight because I have higher confidence in the relevancy of the input data.
Additional inputs to add
Open to suggestions.
Category 6: Relative Value
Data inputs (source)
Fully-diluted market cap (CoinGecko); lower value = higher score.
Weighting
10% — underweight because “cheaper” doesn’t necessarily mean a “better” investment.
Additional inputs to add
Open to suggestions.
Future considerations
Analysts should continuously iterate on these models by sanity checking the output and improving the methodology & data quality.
For instance, ADA scored relatively high. Let's say you don't agree with that. Why not? Low scalability? Well, the model caught that, but maybe scalability should have a higher weight. Low adoption? It scored middle of the pack in that category, but maybe it's worth checking ADA's adoption data to make sure the inputs are accurate and relevant. And so on.
The more you can keep your investment framework consistent and repeatable, the more you can eliminate bias and find truth.