Sunday, July 2, 2023

XPower: Adaptive Reward Rates

Hasan Karahan, MSc ETH Zurich

Proof-of-Stake (PoS) Sybil protection has a serious problem: inequality and, hence, centralization. The problem with rewarding stake proportionally is that richer holders bear a relatively smaller expense than poorer ones, giving the former an unfair advantage regarding rewards. Over the course of time, this advantage translates into a creeping centralization.

Therefore, we propose the following scheme to ensure long-term decentralization:

  • Split a population of stakeholders into groups based on their stake size, i.e. discretization. Represent those groups by NFTs and name them with the corresponding SI prefixes: unit, kilo, mega, giga, tera, peta etc.

  • To each group, offer a flat rate of, let’s say, 1%, which would correspond to proportionality. Alternatively, use increasing reward rates so that the larger holders have an initial incentive to upgrade their holdings to the largest possible NFT levels.

  • Introduce adaptive reward rates with the goal of keeping the total income of each group equal: This simple rule prevents any group from dominating the others and curbs the worst excesses of proportionality, which lead to the dystopian Pareto distribution, where 20% of the population control 80% of the resources (or worse).

The rewards can be distributed as a token, which can then be used as stake on a PoS-based blockchain. Further, this centralization-resistant mechanism can be built right into the stake-and-reward scheme of a blockchain to offer long-term censorship resistance.

Initial Reward Rates ¶

Below you see (on a logarithmic scale) that we have opted – in the case of XPower tokens – for increasing initial reward rates per NFT level. This super-proportionality serves as a major incentive to attract capital to the protocol. No NFTs have been minted or staked yet:

XPower NFTs: Initial Reward Rates

Minting UNIT, KILO & MEGA NFTs ¶

Now, we are going to deposit 1’048’575 XPower ODIN tokens and mint 575 UNIT, 48 KILO and 1 MEGA NFTs:

XPower NFTs: Minting UNITs, KILOs & MEGAs
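
To see where these NFT counts come from, here is a minimal TypeScript sketch – not the protocol’s actual minting code, and the helper name is hypothetical – that decomposes a deposit into SI-prefixed denominations, largest first:

// decompose a token amount into SI-prefixed NFT denominations, largest first
const DENOMINATIONS: Array<[name: string, size: number]> = [
    ['MEGA', 1_000_000], ['KILO', 1_000], ['UNIT', 1],
];
function decompose(amount: number): Record<string, number> {
    const counts: Record<string, number> = {};
    let rest = amount;
    for (const [name, size] of DENOMINATIONS) {
        counts[name] = Math.floor(rest / size);
        rest %= size;
    }
    return counts;
}
console.log(decompose(1_048_575));
// => { MEGA: 1, KILO: 48, UNIT: 575 }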

Adaptive Reward Rates ¶

After minting and staking the NFTs, the reward rates self-adapt: Since the UNIT NFTs are in a tiny minority – in terms of the deposited 575 ODIN – their rate jumped to 607.87% (from 0%). Similarly, the KILO NFTs are also in a minority – with respect to their deposited 48K ODIN. Hence, their rate increased to 7.28% (from 1%). Finally, the MEGA NFTs absolutely dominate the distribution – with 1M ODIN deposited. Therefore, their rate decreased to only 0.35% (from the initial 2%):

XPower NFTs: Adapted Reward Rates
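
These adapted (target) rates follow from the equal-income rule above. The sketch below reproduces them under the assumption – inferred from the numbers in this post, not taken from the contract code – that an overall budget of 1% of the total stake is split evenly across the populated NFT levels and then proportionally within each level:

type Stakes = Record<string, number>; // ODIN deposited per NFT level
function targetRates(stakes: Stakes, budgetRate = 0.01): Record<string, number> {
    const levels = Object.keys(stakes).filter((level) => stakes[level] > 0);
    const total = levels.reduce((sum, level) => sum + stakes[level], 0);
    const incomePerLevel = (total * budgetRate) / levels.length; // equal income per level
    const rates: Record<string, number> = {};
    for (const level of levels) {
        rates[level] = incomePerLevel / stakes[level]; // proportional within the level
    }
    return rates;
}
console.log(targetRates({ UNIT: 575, KILO: 48_000, MEGA: 1_000_000 }));
// => { UNIT: ≈6.0787 (607.87%), KILO: ≈0.0728 (7.28%), MEGA: ≈0.0035 (0.35%) }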

Note: In the example above, the initial rates adapted immediately to the new stake distribution, since they had been active for only a very short time (just a few blocks). However, if the initial rates had been in effect for a longer duration, then the adaptation would also have taken longer!

Actual -vs- Target Rates ¶

Since the UNIT NFTs now have a very high reward rate of 607.87%, we – as rational actors – mint more of them, increasing their number from 575 to 1M. Now, the amount of deposited ODIN for the UNIT and MEGA groups is the same, namely 1M tokens.

Therefore, their target rates should also be the same, which, at 0.68%, is indeed the case. For the UNIT NFTs the target rate decreased from 607.87% down to 0.68%, while for the MEGA NFTs the target rate increased from 0.35% up to 0.68%. Further, the KILO NFTs are now in a relative minority. Hence, their target rate increased from 7.28% up to 14.22%.

XPower NFTs: Actual vs Target Rates

Note: The actual rates lag behind the target rates, with the lag duration being dependent on how long the actual rates have been active for: The longer an actual rate remains at a certain value, the longer it takes to approach a new target value. For a detailed discussion regarding the mathematical relation between the actual and target rates please see our blog post on variable reward rates.

Upgrading NFTs ¶

Since the KILO NFTs now have, at 14.22%, a higher target rate than the UNIT NFTs at 0.68%, it makes sense to upgrade the UNITs to KILOs – once the former’s actual rate falls below that of the latter:

XPower NFTs: Upgrading to Higher Levels

Above, all UNITs have been unstaked in order to upgrade them to KILO NFTs, which will convert the 1M UNITs into 1K KILO NFTs. Further in the future, if the actual rates were to change in favor of the MEGAs, the 1K KILOs could in turn be upgraded to 1 MEGA NFT.

Conclusion ¶

We have shown that it is indeed possible to design a reward scheme that, in the long term, results in a distribution that is flat enough, and where the “rich” (owners of the higher-level NFTs) collectively cannot dominate the “poor” (owners of the lower-level NFTs).

Further, thanks to short-term incentives, the “rich” individuals always have a reason to upgrade their NFTs and hence become distinguishable from the “poor” – without the need to resort to drastic privacy-infringing measures like KYC or digital IDs.

Having such a uniform-ish distribution and then maintaining it over time is absolutely critical to ensure censorship resistance on a PoS blockchain. Otherwise, the entire system would become dominated by a few “oligarchs”, who could then arbitrarily impose their will on the rest, or who could be forced by off-chain powers to do so.

Saturday, May 6, 2023

XPower: Decentralization by Design

Hasan Karahan, MSc ETH Zurich.

The Problem: Abject Inequality ¶

Most of the current block-chains fail to deliver on their core promise: decentralization. Without proper decentralization the main goal – the entire ethos of a globally censorship-resistant and anonymous liquidity market – becomes just an illusion.

But why? It boils down to stake concentration: Due to the lack of a safe and anonymous technological base for a one-man, one-vote on-chain democracy, even modern chains have to rely on proxy identities underwritten by proof-of-work or proof-of-stake.

While proof-of-work has the advantage of allowing anyone with access to energy to bootstrap a proxy identity, which can then be used to vote on the validity of transactions, it suffers from a potential centralization of access to superior hashing technology and cheap energy.

With proof-of-stake the issue is much more pronounced: You – by design – start from a centralized setting, and then try, with marketing, to distribute your tokens to hopefully enough stakeholders to end up with a viable network!

A few rich folks end up moving in on your block-chain and either become an oligopoly of stakeholders, which could then collectively censor any “undesirable” transaction or, worse, simply decide to dump their entire (possibly pre-minted) stake to destroy the security of the block-chain at any moment of their choosing – due to greed, intrigue or external political pressure.

The Solution: On-chain Democracy ¶

Hence, the goal becomes to design stake re-distribution into the heart of the token-economics that empower the block-chain, in order to approach the ideal case of a one-man, one-vote on-chain democracy. Any other model seems to degenerate into tyranny, where the many are at the mercy of the few.

XPower: Proof-of-Work on Proof-of-Stake ¶

Our XPower project delivers the best approximation possible to on-chain democracy: (1) We use proof-of-work to distribute tokens, (2) which can then be burned to mint stake-able NFTs of various levels. (3) If you burn more tokens, then you can mint higher-level NFTs with higher reward rates, (4) which however get reduced if a particular level gets too “crowded” – i.e. if the total rewards for that level become higher than those of the other levels.

So, the protocol equalizes the total share of the rewards across all levels, while preserving proportionality within them. Further, since XPower NFTs are upgrade-able, if you have minted (or bought) enough NFTs of a particular lower level, then you can “escape the crowd” to a higher level with higher reward rates, by re-minting many of your lower-level NFTs as a single higher-level one.

The result is democracy between the XPower NFT levels, but capitalism within them! By design, none of the levels can dominate any of the other ones.

Using these rewards, which are distributed as aged XPower tokens (APower), we will run our own Avalanche subnet. The APower tokens wrap XPower 1-to-1, as long as the project treasury has enough of the latter – where the treasury gets funded by a co-minting process on each newly mined and minted XPower. If, however, mining stops and the treasury runs empty, then APower will be backed by XPower only fractionally, to ensure a steady supply of APower for the stakers.

Since each subnet validator requires a minimum amount of AVAX, our XPower subnet will initially be protected by that stake until enough APower liquidity has accrued to ensure the safety of the block-chain.

Mine and mint proof-of-work tokens on Avalanche at xpowermine.com! We’ve got stake-able NFTs, too.

Sunday, February 26, 2023

XPower: Variable Reward Rates

Hasan Karahan, MSc ETH Zurich

Let’s say you offer stakeable NFTs with a fixed reward rate of 1%: So, if a user stakes an NFT with a nominal face value of 1’000 units over a year, then he should be rewarded with 10 units.

So far, so good. Now, let’s say you would like to introduce variable reward rates. Let’s discuss the most naive implementation of such variability, and the associated problems.

The Naive Approach to Increasing Rates ¶

The easiest way to realize variable reward rates would be to just increase the current value immediately to the desired target rate:

$$\textbf{rate}_t=\textbf{target[rate]}_t $$

Let’s investigate what such a sudden transition of the current rate would look like:

Reward Rates: Sudden Increase

Above, you see that the reward rate has been doubled (in the middle of the year) from 1% to 2%. While such an increase is easy to understand, it unfortunately has many downsides:

  • In the rate curve above we would like to reward NFT stakers with respect to only the purple area without the rectangle in the red upper-left corner: So, instead of 10 units the reward should now be 15.

  • However, that is not what is happening here: At the end of the year the reward will be 20 units including the red area!

The issue is that if we naively increase the reward rate, then the staking period of the 6 months before the increase is also rewarded at 2%, which is obviously wrong. If the past staking period was long, this could even result in a dangerous situation, where the pool distributing the rewards could immediately be drained due to a sudden spike in claims.

The Naive Approach to Decreasing Rates ¶

Reward Rates: Sudden Decrease

Alright, let’s investigate the reverse situation: What happens when an initial reward rate of 2% is dropped down to 1%? Nothing good:

  • Again, above we would like to reward with respect to the purple area (without the rectangle in the red upper-right corner): So, instead of 20 units the reward should be 15.

  • However, that is not what is happening here: At the end of the year the reward will only be 10 units excluding the area of the purple upper-left corner!

So, this time we have the reverse problem where the staking period before the decrease is not rewarded at 2% but instead only at 1%, which again is obviously wrong.

The Correct Approach ¶

Well, how can we solve this conundrum? Simply setting the reward rate to a new target does not work. So, we somehow need to defer a full update, and allow the current value to approach the target slowly over time.

But how slowly? We would like the approach to be as fast as possible, but it apparently needs to be slower than an immediate switch. The solution lies in recognizing that the new target rate needs to blend in, in proportion to the passage of time, relative to the duration the current rate has been active for:

$$\textbf{rate}_t=\sum_{\tau\leqslant t} \textbf{target[rate]}_\tau \times \Delta[\tau] \Bigg{/} \sum_{\tau\leqslant t}\Delta[\tau]$$

The formula states in a nutshell that the reward rate corresponds to the area under the targets divided by the total duration.

Reward Rates: Asymptotic Increase

Above, we see that the target rate switched after 6 months from 1% to 2%. If you look at the current value of the purple curve, you notice that it corresponds to the area under the red one at any point in time. For example, in the 12th month of the year the area totals $1\%\times{6} + 2\%\times{6} = 1.5\%\times{12}$, which corresponds to an average reward rate of $1.5\%$. Obviously, a similar relation holds if the target rate is decreased (instead of increased).
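
To make the relation concrete, here is a minimal TypeScript sketch – an illustration of the formula above, not the on-chain implementation – that computes the actual rate as the area under piecewise-constant targets divided by the total elapsed duration:

type Segment = { targetRate: number; months: number }; // piecewise-constant targets
function actualRate(segments: Segment[]): number {
    const area = segments.reduce((sum, s) => sum + s.targetRate * s.months, 0);
    const duration = segments.reduce((sum, s) => sum + s.months, 0);
    return area / duration; // area under the targets over total duration
}
// 6 months at 1% followed by 6 months at 2%:
console.log(actualRate([
    { targetRate: 0.01, months: 6 },
    { targetRate: 0.02, months: 6 },
])); // => 0.015, i.e. an average reward rate of 1.5%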

Conclusion ¶

As we have seen, to offer variable reward rates on the XPower NFTs we are forced to operate with targets, which are then asymptotically approached by the actual reward rates. A nice side effect of this natural constraint is that the stakers of the protocol can be assured that rates which have remained persistent over a long period of time are hard to change to a new value: We call this property the “Lethargic Principle”, which instills trust in the current rate configuration of the protocol.

Sunday, September 25, 2022

The Deep State

— by Hasan Karahan, MSc ETH Zurich, @notexeditor

Today, I’d like to present a React mixin that allows you to update a component’s state recursively. It’s primarily meant for software developers who still use class-based components and who would like to apply a more sophisticated method than the standard setState one, which only allows you to shallow-update a component’s state.

The mixin below provides the update method, which enables you to set a new state and which at its core uses the jQuery.extend function, which can deep-merge multiple objects in a single invocation. It’s written in TypeScript and offers full type safety while updating a component’s state:

import { DeepPartial } from 'redux';
import { Component } from 'react';
import $ from 'jquery'; // provides $.extend for the deep merge below
/**
 * returns a generic constructor type
 */
export type Constructor<U = unknown> = {
    new (...args: any[]): U;
}
/**
 * @returns the *inferred* property type from a component's constructor
 */
type PropsOf<Ctor>
    = Ctor extends Constructor<Component<infer P>> ? P : never;
/**
 * @returns the *inferred* state type from a component's constructor
 */
type StateOf<Ctor>
    = Ctor extends Constructor<Component<infer _, infer S>> ? S : never;
/**
 * @returns a mixin with the `update` method to *deep* set the state
 */
export function Updatable<
    TBase extends Constructor<Component<TProps, TState>>,
    TProps = PropsOf<TBase>, TState = StateOf<TBase>
>(
    Base: TBase
) {
    return class extends Base {
        protected update(
            next_state: DeepPartial<TState>,
            callback?: () => Promise<void>
        ) {
            return new Promise<void>((resolve) => {
                const state = $.extend(
                    true, {}, this.state, next_state
                );
                this.setState(state, async () => {
                    if (typeof callback === 'function') {
                        await callback();
                    }
                    resolve();
                });
            });
        }
    };
}

Its usage is rather simple — by just wrapping the React component with the Updatable mixin you gain access to the update method:

type Props = {}
type State = {
  my:{ deep:{ state:{ x:number; y:number; }}}
}
export class App extends Updatable(
  React.Component<Props, State>
) {
  constructor(props: Props) {
    super(props); this.state = {
      my:{ deep:{ state:{ x:0,y:0 }}}
    };
  }
  set X(x: number) {
    // old y is *not* overwritten but kept:
    this.update({ my:{ deep:{ state:{ x }}}});
  }
  set Y(y: number) {
    // old x is *not* overwritten but kept:
    this.update({ my:{ deep:{ state:{ y }}}});
  }
}

As you can see, the X and Y setters are only updating the respective x and y states while not explicitly copying the other one — which is handled by the jQuery.extend function in the update method!

Further, it’s also possible to combine the update method with async and await:

export class App extends Updatable(
  React.Component<Props, State>
) {
  constructor(props: Props) {
    super(props); this.state = {
      my:{ deep:{ state:{ x:0,y:0 }}}
    };
  }
  async setX(x: number) {
    await this.update({ my:{ deep:{ state:{ x }}}});
  }
  async setY(y: number) {
    await this.update({ my:{ deep:{ state:{ y }}}});
  }
}

The redux and jQuery dependencies are superficial, and software developers who would rather avoid them can replace the DeepPartial type (3 lines of code) and the jQuery.extend function with their own implementations.
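
For example, a minimal sketch of such replacements could look as follows (the type and the recursive merge below are illustrative, not drop-in tested substitutes; arrays are simply overwritten rather than merged):

export type DeepPartial<T> = {
    [K in keyof T]?: T[K] extends object ? DeepPartial<T[K]> : T[K];
};
// recursive plain-object merge, a stand-in for $.extend(true, {}, target, source)
export function deepMerge<T extends object>(target: T, source: DeepPartial<T>): T {
    const result: any = { ...target };
    for (const key of Object.keys(source) as Array<keyof T>) {
        const value = (source as any)[key];
        if (value && typeof value === 'object' && !Array.isArray(value)) {
            result[key] = deepMerge((target as any)[key] ?? {}, value);
        } else {
            result[key] = value;
        }
    }
    return result as T;
}

In the update method above, the $.extend(true, {}, this.state, next_state) call would then simply become deepMerge(this.state, next_state).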

Monday, May 10, 2021

Avalanche Decentralization Proposals

Mainnet AVAX Distribution with a GINI of 85.3%

The Avalanche mainnet has a very skewed AVAX distribution, enabling the Avalanche foundation to centrally control the entire network. Below are a few proposals to amend this situation:

(1) Redesign the wallet UI to encourage people to delegate to smaller validators.

(2) Set up empty validators, and ask the Avalanche foundation to provide stakes (@kevinsekniqi’s idea): While this would not truly diminish the central power of the foundation, it’s certainly a well-intentioned first step towards decentralization.

(3a) Progressively tax large validator rewards: A governable maximum tax rate should be set for the largest possible stakes, diminishing down to zero for the smallest stakes, with the tax rate following a sigmoid function.

(3b) If governance of the maximum tax rate should prove too cumbersome, a governable target GINI should be determined, which would then auto-set the now dependent maximum tax rate, such that the desired target GINI would be achieved within a fixed time horizon.

(3c) Redistribute the collected tax among the validators inversely proportional to their stake size. To emphasize smaller validators even more, an exponent larger than one could be introduced. (A sketch of proposals 3a and 3c follows after this list.)

(4) Force the foundation to burn their rewards, while allowing them to keep their stakes to ensure protection against malicious actors.

(5) Burn or re-distribute the foundation stakes, when the non-foundation-related total value locked (TVL) is judged to be high and stable enough to protect against malicious actors.

(6) Partially decouple voting power in the Avalanche protocol from the staked amount: At the moment voting weight is proportional to a validator’s stake. This could be changed to e.g. an inverse quadratic or even logarithmic relationship, where smaller validators would have relative to their stake size much more say. However, the impact on the security of such a modification should be investigated carefully.

(7) Create an Avalanche subnet named EQUALIZER, where EQUAL tokens would evenly be distributed among the currently active validators (per P-chain address): Its very existence would be a strong incentive for the foundation to take action towards more decentralization.

(8) If the foundation does not take appropriate action to decentralize properly, simply turn the EQUALIZER subnet into a standalone network, which would stop validating the mainnet and decouple from Avalanche. It would effectively be a fork.
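
The following TypeScript sketch illustrates how proposals (3a) and (3c) could fit together; all parameters (maximum tax rate, sigmoid midpoint and steepness, redistribution exponent) are hypothetical placeholders that would be subject to governance:

// progressive tax rate: near zero for small stakes, approaching maxRate for large ones
function sigmoidTaxRate(stake: number, maxRate: number, midpoint: number, steepness: number): number {
    return maxRate / (1 + Math.exp(-steepness * (stake - midpoint)));
}
// tax validator rewards progressively and redistribute the pool inversely to stake size
function redistribute(
    rewards: number[], stakes: number[],
    maxRate = 0.2, midpoint = 1_000_000, steepness = 1e-5, exponent = 1
): number[] {
    const taxes = rewards.map((r, i) => r * sigmoidTaxRate(stakes[i], maxRate, midpoint, steepness));
    const pool = taxes.reduce((sum, t) => sum + t, 0);
    const weights = stakes.map((s) => 1 / Math.pow(s, exponent)); // exponent > 1 favors smaller validators
    const totalWeight = weights.reduce((sum, w) => sum + w, 0);
    return rewards.map((r, i) => r - taxes[i] + pool * (weights[i] / totalWeight));
}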

Saturday, May 8, 2021

Avalanche Stake Imbalance

“Give me control of a nation’s money and I care not who makes the laws.” – M. A. Rothschild

— by Hasan Karahan, MSc ETH Zurich, @notexeditor

Dear Avalanche community, this is the first blog post of a multi-part analysis of the validator stake distribution imbalance present as of today, its potential effects on centralization, and multiple suggestions on how it could be fixed. While we will have an in-depth look at the imbalance, we will provide only a cursory elaboration on the centralization effects and potential solutions. In later blog posts, we will dive deeper into both.

Proof of Stake ¶

Avalanche is a multi-blockchain platform, relying on the proof of stake (PoS) Sybil-protection mechanism to prevent malicious actors from gaining control over the network of validator nodes that operate the platform. Each node has to provide a stake in the form of AVAX coins, and hence gains the right to judge whether a particular transaction on the network is valid or not: hence the validator designation for such a node.

The more stake a validator puts up, the higher its weight in terms of voting power within the network: This implies that the vote of a node $\mathcal{A}$ counts twice as much as that of a node $\mathcal{B}$ with half as much stake. Such a setup is the general case for most PoS blockchains.

In Avalanche, a node regards a transaction as valid if its peer nodes consider it valid: over 20 rounds, the node asks 20 stake-weighted random peers for their respective opinion. If 70% of those 20 peers (i.e. 14) in a particular round judge a transaction to be valid, then the given node also considers the transaction valid (for that round). After 20 rounds the entire set of validators quickly converges to the same conclusion, thanks to the probabilistic meta-stability properties of the Avalanche algorithm.
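
As an illustration, a simplified TypeScript sketch of such a stake-weighted query round might look as follows (sampling with replacement for brevity; the actual Avalanche sampler differs in its details):

type Validator = { id: string; stake: number };
// pick one peer with probability proportional to its stake
function sampleStakeWeightedPeer(validators: Validator[]): Validator {
    const total = validators.reduce((sum, v) => sum + v.stake, 0);
    let r = Math.random() * total;
    for (const v of validators) {
        if ((r -= v.stake) <= 0) return v;
    }
    return validators[validators.length - 1]; // guard against rounding errors
}
// one query round: ask 20 stake-weighted peers; accept if at least 14 (70%) agree
function queryRound(validators: Validator[], isValid: (v: Validator) => boolean): boolean {
    const peers = Array.from({ length: 20 }, () => sampleStakeWeightedPeer(validators));
    return peers.filter(isValid).length >= 14;
}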

Since peer nodes are not subsampled uniformly at random, but are given weight proportional to their stake, a reasonable distribution of these stakes among the validators is mandatory to ensure censorship resistance and proper decentralization. However, if the stake distribution is skewed, decentralization degrades and centralized control over the entire network reemerges, allowing censorship of transactions. Let’s have a look at various stake distributions.

Equal Distribution ¶

Equal Distribution with a GINI=0%

The diagram above shows the equal distribution, i.e. the distribution of AVAX the Avalanche validators would have had if all of them had acquired the same stake of about 0.30M AVAX.

The lower diagram displays the stake distribution per validator, and the upper one the cumulative one. The red curves (or in this case lines) plot the total stake, while the blue ones exclude delegations.

The upper diagram is useful because the cumulative view allows visualizing the so-called GINI inequality coefficient. If $s_i$ is the stake (or reward) of a validator $i$, and there are $n$ validators, then the GINI coefficient $G$ is given by:

$$\require{physics} G = \frac{\sum_i\sum_j\abs{s_i-s_j}}{2n^2\bar{s}} $$

where $\bar{s}$ is the arithmetic mean i.e. average stake. Hence, the GINI $G$ measures the total difference within a population (the numerator) and then normalizes it for scale (the denominator).

The lower the GINI is, the more equitably the stakes are distributed; the higher it is, the less equitable the network becomes. It ranges from 0% for total equality (and decentralization) to 100% for total inequality (and hence maximal centralization).
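
The definition translates directly into code; here is a minimal TypeScript sketch (an O(n²) illustration of the formula above, not optimized for large validator sets):

// GINI coefficient: sum of absolute pairwise differences, normalized by 2·n²·mean
function gini(stakes: number[]): number {
    const n = stakes.length;
    const mean = stakes.reduce((sum, s) => sum + s, 0) / n;
    let diff = 0;
    for (const a of stakes) {
        for (const b of stakes) {
            diff += Math.abs(a - b);
        }
    }
    return diff / (2 * n * n * mean);
}
console.log(gini([1, 1, 1, 1])); // 0    – perfect equality
console.log(gini([0, 0, 0, 4])); // 0.75 – highly concentrated
console.log(gini(Array.from({ length: 942 }, () => Math.random()))); // ≈ 0.333 for uniformly random stakes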

The dotted vertical line separates the left-hand side validators, which collectively control only 30% of the total stakes, from the right-hand side ones, which control 70%. Since this hypothetical distribution is so equitable, the number of validators is also split correspondingly: 30% left-hand side vs. 70% right-hand side validators, which is in line with proper decentralization.

Uniform Distribution ¶

Uniform Distribution with a GINI=33.3%

The uniform distribution above shows the result of randomly assigning stakes to validators – which in Avalanche has of course not been the case.

Perhaps contrary to expectation, the GINI inequality is 33.3% (instead of 0%), because due to randomness there are validators that got almost no stake, while others got double the average of 0.3M AVAX. Since there are poorer and richer validators, the GINI coefficient correctly indicates this by displaying a value larger than zero.

Despite the presence of inequality the uniformly random distribution still looks decentralized enough to ensure censorship resistance, because almost half the validators would need to get corrupted to gain control over 70% of the stakes: Given a large enough total value locked (TVL) in stakes, such a scenario is rather unlikely.

Alternative definition for the GINI coefficient ¶

To gain a better understanding of the GINI inequality $G$, please note that it can also be expressed via the following formula:

$$G = \frac{A}{A+B} $$

where $A$ is the white and empty area below the dotted diagonal straight line (at the GINI of 0%) – going from the lower-left origin to the upper-right corner – and above the red curve (at for example the GINI of 33.3%). Further, $B$ represents the entire area below the red curve (all the way down to the horizontal axis). Hence $A+B$ is the area of the triangle or half the rectangular area of the entire diagram.

If we run the numbers, then the triangular area $A+B$ is equal to ${942}\times{301.1M}/2$ or about $150G$, and $A$ is equal to about $50G$ as expected by a GINI of 33.3%.

Log-logistic Distribution ¶

Log-logistic Distribution with a GINI=66.6%

Let’s investigate the case where the GINI inequality is 66.6%: Due to the definition of the coefficient, multiple distributions lead to the same GINI. Hence, we need to choose a distribution ourselves. So, we decided to go for the so-called log-logistic function, which in economics is also known as the Fisk distribution:

$$f_\beta(x) = \beta \times x ^ {\beta-1} / \big(1+x^\beta\big)^2 $$

By plugging a uniformly distributed random variable $x$ into $f_{\beta=5}$, we managed to create a stake distribution with a GINI of 66.6%. Let’s have a closer look at the result:

  • The minimum stake is zero, and the maximum one is at about 1.5M AVAX: The original uniform distribution naturally creates haves and have-nots, and the log-logistic function then exaggerates the differences.

  • About 750 of the smaller validators control only 30% of the stakes;

  • While only about 200 of the larger validators control 70% of the stakes.

As is easily observable, inequality is up and decentralization has degraded significantly: About 20% of the validators control 70% of the stakes, and hence have the power to sway the Avalanche protocol to their liking! While this scenario is – under the assumption that the 200 larger validators do not collude – probably still censorship-resistant and hence secure enough, you do not get the same assurance levels as with the two networks from above with a GINI of 0% or even 33%.

Conclusion: The higher the GINI inequality the fewer validators are required to be able to collude against the network. Hence, low GINI is good and high GINI is bad for decentralization!

Mainnet Distribution ¶

Mainnet Distribution with a GINI=85.1%

Now, if we plot the distribution of AVAX in the Avalanche mainnet, we are confronted with a rather desolate landscape: On the left-hand side of the diagram, the stakes of the smaller validators are – even when cumulated – so tiny that they are barely visible, while on the right-hand side the larger validators tower over the rest of the network!

The GINI coefficient is a whopping 85.1%: the 860 smaller validators of the 942 total have combined access to only 30% of all staked AVAX, while the remaining 82 mega-validators own the lion’s share of 70% of the stakes. Therefore, this “cabal” of insiders is effectively in total control of the network, with the power to censor any transaction they wish.

Conclusion: The Avalanche platform with its current stake distribution of AVAX is centralized, and not censorship-resistant.

Improvement Proposals ¶

We have collected in our blog post of decentralization proposals a preliminary list of suggestions on how the Avalanche community can improve and hopefully also fix the stake imbalance as presented in this exposition. In later blog posts, we will elaborate on each, and investigate their potential impact on the current stake distribution, GINI inequality coefficient, and decentralization.

Saturday, May 30, 2020

Percim’s API Explorer

— by Hasan Karahan, CTO Percim.com

Providing fast and reliable digital services via a so-called Application Programming Interface (API) for integrator customers is nowadays standard. It allows a modern technology company to quickly enter a market, and start immediately adding value to its clients.

The basic idea is that the company sets up a server and offers a secure authentication mechanism, upon which a customer’s client software can start to consume the provided services. An example of authentication could be a simple login screen:

Login for Percim's API Explorer

However, for this to work, the API between the server and the client systems needs to be thoroughly documented, in such a way that a customer can quickly gain an understanding of the offered technical capabilities.

It used to be the case – and it is still often so – that such documentation was a static technical manual being provided in print or via a simple digital PDF.

But this also meant that expensive software developers integrating the client system had to laboriously study those manuals, and write their own code snippets to test an API to gain a better understanding of what is being specifically offered. This process of getting acquainted with the technicalities took time, and often still does!

In recent years, however, things have changed: software providers have started specifying their APIs formally and writing user interfaces that consume those formal descriptions, to offer an interactive experience for their integrators. If done well, this form of support can result in dramatically shorter project durations and hence lead to higher success rates in terms of adoption.

An Overview of Percim’s API Explorer ¶

Now, at Percim.com we offer our brand new API Explorer (free of charge and open-source), allowing our current and future customers to rapidly acquire a deep understanding of our services:

Percim's API Explorer

Let’s go quickly through the various sections of our API to get an overview:

  • OAS: offers an end-point to fetch the OpenAPI specification (a formal description of the API)
  • image: offers end-points to process images
  • extra: offers some miscellaneous end-points
  • debug: offers end-points for debugging
An Overview of Percim's API

As shown above, there are at the moment four different sections in the current 1.0.0 version of the API.

For each end-point, the user interface can (i) be toggled open, (ii) the required (or optional) parameters can be filled in, and (iii) a corresponding request can be sent off to the server, upon which (iv) the received response is displayed.

Using the /now End-Point ¶

As an example, let’s test whether the API is running by sending a request to the /now end-point (without any parameters), to verify that the server responds with the current time:

Request to the /now End-Point
Response from the /now End-Point

As shown above, the API indeed seems to be online and responds as expected with the current time of the server.

Terminal Interaction ¶

If we were to use a command line interface (CLI) in a terminal, we could achieve the same request/response interaction via:

$ http https://api.percim.com/now authorization:"$AUTHORIZATION"
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 36
Content-Type: application/json
Date: Fri, 29 May 2020 14:46:07 GMT
{
    "now": "2020-05-29T14:46:07.396702"
}

…where the $AUTHORIZATION token needs to be acquired from an authorization server (such that the API server can then authenticate our request). Above, the API Explorer fetches this token automatically behind the scenes.
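
For integrators working in TypeScript, the same call could look roughly as follows – a hedged sketch using the standard fetch API; obtaining the authorization token is out of scope here, and the response shape is taken from the example above:

// call the /now end-point with an authorization token and return the server time
async function fetchNow(authorization: string): Promise<string> {
    const response = await fetch('https://api.percim.com/now', {
        headers: { authorization },
    });
    const { now } = await response.json() as { now: string };
    return now; // e.g. "2020-05-29T14:46:07.396702"
}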

Download the API Explorer ¶

We will not delve further into the specifics of the other end-points. However, you are invited to explore them for yourself by downloading the API Explorer from our release page on GitHub.com:

github.com/percim/oas-explorer/releases