Calaganne — Science and Technology
My thoughts on science and technology
Wednesday, June 12, 2019
Thursday, June 6, 2019
Creating NPM packages with @dizmo/generator-module
NPM has become a hugely important part of the JavaScript ecosystem. It allows the easy packaging and publication of various projects and libraries, which can be based on JavaScript or on other languages such as TypeScript or CoffeeScript.
At dizmo, we have developed the @dizmo/generator-module NPM module generator, which allows us to quickly create an NPM-based library project with support for the following features:

Generation: the process of generating an NPM-based module (based upon a template);

Transpilation: the process of translating from e.g. (modern) JavaScript or TypeScript to (older) JavaScript;

Bundling: the process of bundling various separate JavaScript modules into a single one;

Minification: the process of compressing a large module into a smaller one;

Linting: the process of checking that the coding style meets certain standards;

Testing: the process of testing code for expected behavior;

Coverage analysis: the process of checking how much of the code is covered by tests;

Continuous integration: the process of automatically running tests (on a remote server).
Generation ¶
Our module generator is based on Yeoman. After installing Yeoman and the @dizmo/generator-module package, you can create an NPM project with:
yo @dizmo/module [--git] [--typescript|--coffeescript]
As you can see, we support pure JavaScript, TypeScript and CoffeeScript projects. Further – provided the git
command is available – it is possible to initialize the project as a Git repository. Let’s have a closer look at a JavaScript project:
JavaScript module ¶
yo @dizmo/module --git
     _-----_     ╭──────────────────────────╮
    |       |    │  Welcome to the awesome  │
    |--(o)--|    │ dizmo module generator!  │
   `---------´   ╰──────────────────────────╯
    ( _´U`_ )   /
    /___A___\  /
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `
? Name your module: @dizmo/mymodule
? Describe it: library module
? And your URL? https://github.com/hsk81
create package.json
create cli/run-build.js
create cli/run-lint.js
create cli/run-prepack.js
create cli/run-test.js
create cli/run-utils.js
create lib/index.js
create test/test.js
create LICENSE
create README.md
create .travis.yml
create .eslintrc.json
create .gitignore
create dist/.gitignore
create dist/.npmignore
Setting the project root at: /home/hsk81/mymodule.git
Directory structure ¶
tree mymodule.git/ -a
mymodule.git/
├── cli
│   ├── run-build.js
│   ├── run-lint.js
│   ├── run-prepack.js
│   ├── run-test.js
│   └── run-utils.js
├── dist
│   ├── .gitignore
│   └── .npmignore
├── .eslintrc.json
├── .gitignore
├── lib
│   └── index.js
├── LICENSE
├── package.json
├── README.md
├── test
│   └── test.js
├── .travis.yml
└── .yo-rc.json
4 directories, 16 files
where we have omitted the .git subdirectory above. Let’s have an even closer look at the various subdirectories:
./cli — the embedded build system ¶
tree mymodule.git/cli/ -a
mymodule.git/cli/
├── run-build.js
├── run-lint.js
├── run-prepack.js
├── run-test.js
└── run-utils.js
0 directories, 5 files
These scripts beneath the ./cli folder provide support for all the features listed above: building (transpilation), linting (code standard adherence), testing (code correctness with coverage analysis), and prepackaging (bundling with minification).
The totality of this embedded build system is less than 100 lines of code, and since it is directly embedded into the project, it allows for specific adjustments of the build process.
./dist — the distribution folder ¶
tree mymodule.git/dist/ -a
mymodule.git/dist/
├── .gitignore
└── .npmignore
0 directories, 2 files
The ./dist folder is the target location of the transpilation, bundling and minification processes. The .gitignore file here ensures that no bundle (which can be rather large) is committed to the Git repository, while the (empty) .npmignore file ensures that the bundle is published to the NPM registry.
Hence, these two files should never be removed: sometimes it is necessary to clean up the remaining files under ./dist, upon which great care should be taken not to accidentally remove the .gitignore and .npmignore files as well.
./lib — the source folder ¶
tree mymodule.git/lib/ -a
mymodule.git/lib/
└── index.js
0 directories, 1 file
The ./lib folder contains all the source files which collectively make up the library; it is also possible to create subfolders within it. By default a single index.js is given, which provides the API of the library via its export statements.
./test — the test folder ¶
tree mymodule.git/test/ -a
mymodule.git/test/
└── test.js
0 directories, 1 file
The ./test folder contains test scripts (beginning with a test prefix), which together test the code under ./lib for expected behavior. By default a single test.js is given, containing test cases for ./lib/index.js.
Development ¶
Build ¶
npm run build
without linting: ¶
npm run -- build --no-lint
with UMD support (incl. minimization): ¶
npm run -- build --prepack
with UMD support (excl. minimization): ¶
npm run -- build --prepack --no-minify
By invoking npm run build all scripts under ./lib (and ./test) are by default transpiled and linted. Optionally, they are bundled and then minified. The transpilation is performed with Babel, linting with ESLint, (UMD) bundling with Browserify and minification with Terser.js.
Lint ¶
npm run lint
with autofixing: ¶
npm run -- lint --fix
By invoking npm run lint the linting process can also be triggered separately (outside of a build process); it is performed with ESLint, where the .eslintrc.json file configures the set of linting rules.
Test ¶
npm run test
without (re)building: ¶
npm run -- test --no-build
By invoking npm run test by default a build step is performed, after which the (transpiled) test scripts are executed – using the Mocha framework.
Cover ¶
npm run cover
without (re)building: ¶
npm run -- cover --no-build
By invoking npm run cover by default a build step is performed, after which the (transpiled) test scripts are executed – using Mocha in combination with the Istanbul.js coverage framework (to provide corresponding statistics).
Publish ¶
npm publish
initially (if public): ¶
npm publish --access=public
By invoking npm publish the transpiled scripts under ./dist are prepackaged by bundling and minifying them, after which the bundle is published to the NPM registry. While Browserify is used for bundling, Terser.js is used for minification.
Continuous integration ¶
$ cat .travis.yml
language: node_js
node_js:
  - stable
install:
  - npm install
script:
  - npm run cover
after_script: "npx nyc report --reporter=text-lcov | npx coveralls"
When the source code of the NPM library is pushed to a Git repository (for example on GitHub.com), it can automatically be tested using the Travis CI service. For this to work, a .travis.yml configuration (as shown above) needs to be provided (which is generated by @dizmo/generator-module). Further, Travis CI needs access to the corresponding Git repository – to be able to pull and test the most recent commits.
Finally, the results of the tests are reported to the Coveralls service, which offers a user interface to interactively investigate the portions of the code base that are (or are not) covered by the provided test cases (under ./test).
Sunday, February 3, 2019
Mimblewimble
— Tom Elvis Jedusor, 19th July, 2016
Introduction ¶
Bitcoin is the first widely used financial system for which all the necessary data to validate the system status can be cryptographically verified by anyone. However, it accomplishes this feat by storing all transactions in a public database called “the blockchain”, and someone who genuinely wishes to check this state must download the whole thing and basically replay each transaction, checking each one as they go. Meanwhile, most of these transactions have not affected the actual final state (they create outputs that are destroyed a transaction later).
At the time of this writing, there were nearly $150$ million transactions committed in the blockchain, which must be replayed to produce a set of only $4$ million unspent outputs.
It would be better if an auditor needed only to check data on the outputs themselves, but this is impossible because they are valid if and only if each output is at the end of a chain of previous outputs, each of which signs the next. In other words, the whole blockchain must be validated to confirm the final state.
Add to this that these transactions are cryptographically atomic: it is clear what outputs go into every transaction and what emerges. The resulting “transaction graph” reveals a lot of information and is subjected to analysis by many companies whose business model is to monitor and control the lower classes. This makes it very non-private and even dangerous for people to use.
Some solutions to this have been proposed. Greg Maxwell discovered how to encrypt the amounts, so that the transaction graph is faceless while still allowing validation that the sums are correct [1]. Dr. Maxwell also produced CoinJoin, a system for Bitcoin users to interactively combine transactions, confusing the transaction graph [2]. Nicolas van Saberhagen has developed a system to blind the transaction entries, which goes much further in clouding the transaction graph (as well as not needing user interaction) [3]. Later, Shen Noether combined the two approaches, obtaining the “confidential transactions” of Maxwell together with the darkening of van Saberhagen [4].
These solutions are very good and would make Bitcoin very safe to use. But the problem of too much data is made even worse. Confidential transactions require multi-kilobyte proofs on every output, and van Saberhagen signatures require every output to be stored forever, since it is not possible to tell when they are truly spent.
Dr. Maxwell’s CoinJoin has the problem of needing interactivity. Dr. Yuan Horas Mouton fixed this by making transactions freely mergeable [5], but he needed to use pairing-based cryptography, which is potentially slower and more difficult to trust. He called this “one-way aggregate signatures” (OWAS).
OWAS had the good idea of combining the transactions in blocks. Imagine that we can combine across blocks (perhaps with some glue data) so that when outputs are created and destroyed, it is the same as if they never existed. Then, to validate the entire chain, users only need to know when money enters the system (new money in each block as in Bitcoin or Monero, or peg-ins for sidechains [6]) and the final unspent outputs; the rest can be removed and forgotten.
Then we can have Confidential Transactions to hide the amounts and OWAS to blur the transaction graph, and use LESS space than Bitcoin to allow users to fully verify the blockchain. And also imagine that we need no pairing-based cryptography or new hypotheses, just regular discrete logarithm signatures like Bitcoin. Here is what I propose.
I call my creation Mimblewimble because it is used to prevent the blockchain from talking about all the users’ information [7].
Confidential TXs and OWAS ¶
The first thing we need to do is remove Bitcoin Script. This is sad, but it is too powerful: it is impossible to merge transactions using general scripts. We will demonstrate that the confidential transactions of Dr. Maxwell are enough (after some small modification) to authorize spending of outputs and also to make combined transactions without interaction. This is in fact identical to OWAS, and allows relaying nodes to take some of the transaction fee, or the recipient to change the transaction fees. These additional things Bitcoin cannot do, we get for free.
We start by reminding the reader how confidential transactions work. First, the amounts are coded by the following equation:
$$C = r*G + v*H$$where $C$ is a Pedersen commitment, $G$ and $H$ are fixed nothing-up-my-sleeve elliptic curve group generators, $v$ is the amount, and $r$ is a secret random blinding key.
Attached to this output is a range proof which proves that $v$ is in $[0, 2^{64}]$, so that users cannot exploit the blinding to produce overflow attacks, etc.
To validate a transaction, the verifier will add commitments for all outputs, plus $f*H$ ($f$ here is the transaction fee which is given explicitly) and subtracts all input commitments. The result must be $0$, which proves that no amount was created or destroyed overall.
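To make the balance check concrete, here is a toy sketch in Python: commitments are modeled as integers modulo a prime, with two small arbitrary numbers standing in for the elliptic-curve generators $G$ and $H$ (purely didactic; it has none of the hiding or binding security of real curve points, and all concrete values are hypothetical):

```python
# Toy model of the confidential-transaction balance check.
# Commitments live in the additive group of integers mod Q;
# G and H are arbitrary stand-ins for the real EC generators.
Q = 2**61 - 1  # a Mersenne prime, for cheap modular arithmetic
G, H = 3, 7    # illustrative only, NOT nothing-up-my-sleeve

def commit(r: int, v: int) -> int:
    """Pedersen-style commitment C = r*G + v*H (toy version)."""
    return (r * G + v * H) % Q

# One input worth 10; two outputs worth 6 and 3; explicit fee 1.
# The blinding values are chosen here so that they cancel exactly:
inputs  = [commit(r=42, v=10)]
outputs = [commit(r=17, v=6), commit(r=25, v=3)]
fee = 1

# Verifier: add all outputs plus fee*H, subtract all inputs;
# the amounts cancel (6 + 3 + 1 == 10) and so do the blindings
# (17 + 25 == 42), hence the result must be 0:
excess = (sum(outputs) + fee * H - sum(inputs)) % Q
assert excess == 0
```

With mismatched amounts (or blindings) the sum would no longer be zero, which is exactly what the verifier detects.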
We note that to create such a transaction, the user must know the sum of all the values of $r$ for the input commitments. Therefore, the $r$-values (and their sums) act as secret keys. If we can make the output $r$-values known only to the recipient, then we have an authentication system! Unfortunately, if we keep the rule that commitments all add to $0$, this is impossible, because the sender knows the sum of all his $r$ values, and therefore knows that the recipient’s $r$ values sum to the negative of that. So instead, we allow the transaction to sum to a nonzero value $k*G$, and require a signature of an empty string with this as key, to prove its amount component is zero.
We let transactions have as many $k*G$ values as they want, each with a signature, and sum them during verification.
To create transactions, sender and recipient perform the following ritual:

Sender and recipient agree on the amount to be sent. Call this $b$.

Sender creates a transaction with all inputs and change output(s), and gives the recipient the total blinding factor ($r$-value of change minus $r$-values of inputs) along with this transaction. So the commitments sum to $r*G - b*H$.

Recipient chooses random $r$-values for his outputs, and values that sum to $b$ minus fee, and adds these to the transaction (including the range proof). Now the commitments sum to $k*G - {fee}*H$ for some $k$ that only the recipient knows.

Recipient attaches a signature with $k$ to the transaction, along with the explicit fee. It is done.
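The ritual can be traced numerically with toy commitment arithmetic (integers modulo a prime; the generators and all concrete numbers below are hypothetical stand-ins with no cryptographic security):

```python
Q = 2**61 - 1  # toy modulus; G and H stand in for the real
G, H = 3, 7    # nothing-up-my-sleeve elliptic curve generators

def commit(r, v):
    """Pedersen-style commitment C = r*G + v*H (toy version)."""
    return (r * G + v * H) % Q

b, fee = 9, 1  # step 1: agreed amount b, plus the explicit fee

# Step 2: sender spends an input of 15 and keeps change of 6,
# handing over the total blinding factor r (change minus input);
# the partial transaction sums to r*G - b*H:
r_in, r_change = 1000, 400
tx_in, change = commit(r_in, 15), commit(r_change, 6)
r = r_change - r_in
assert (change - tx_in) % Q == (r * G - b * H) % Q

# Step 3: recipient adds an output of b - fee = 8 with his own
# secret blinding r_out; now the commitments sum to
# k*G - fee*H, where only the recipient knows k:
r_out = 777
out = commit(r_out, b - fee)
k = r + r_out
total = (change + out - tx_in) % Q
assert total == (k * G - fee * H) % Q

# Step 4: the recipient would now sign an empty string with k,
# proving that the excess has no amount (H) component.
```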
Now, creating transactions in this manner already supports OWAS. To show this, suppose we have two transactions with surpluses $k_1*G$ and $k_2*G$, and the signatures attached to them. Then you can combine the lists of inputs and outputs of the two transactions, adding both $k_1*G$ and $k_2*G$ to the mix, and voilà! It is again a valid transaction. From the combination, it is impossible to say which outputs or inputs are from which original transaction.
Because of this, we change our block format from Bitcoin to this information:

Explicit amounts for new money (block subsidy or sidechain peg-ins) with whatever other data this needs. For a sidechain peg-in, maybe it references a Bitcoin transaction that commits to a specific excess $k*G$ value?

Inputs of all transactions

Outputs of all transactions

Excess $k*G$ values for all transactions
Each of these is grouped together because it does not matter what the transaction boundaries were originally. In addition, lists $2$, $3$ and $4$ should be required to be coded in alphabetical order, since it is quick to check and prevents the block creator from leaking any information about the original transactions.
Note that the outputs are now identified by their hash, and not by their position in a transaction, which could easily change. Therefore, having two equal unspent outputs at the same time should be banned, to avoid confusion.
Merging Transactions Across Blocks ¶
Now, we have used Dr. Maxwell’s Confidential Transactions to create a non-interactive version of Dr. Maxwell’s CoinJoin, but we have not seen the last of the marvelous Dr. Maxwell! We need another idea, transaction cut-through, which he described in [8]. Again, we create a non-interactive version of this, and show how it is used across several blocks.
We can imagine now each block as one large transaction. To validate it, we add all the output commitments together, then subtract all input commitments, $k*G$ values, and all explicit input amounts times $H$. We find that we could combine transactions from two blocks, as we combined transactions to form a single block, and the result is again a valid transaction. Except now, some output commitments have an input commitment exactly equal to them, where the first block’s output was spent in the second block. We can remove both commitments and still have a valid transaction. In fact, there is not even any need to check the range proof of the deleted output.
Extending this idea all the way from the genesis block to the latest block, we see that every non-explicit input is deleted along with its referenced output. What remains are only the unspent outputs, the explicit input amounts and every $k*G$ value. And this whole mess can be validated as if it were one transaction: add all unspent output commitments, subtract the $k*G$ values, validate the explicit input amounts (if there is anything to validate), then subtract them times $H$. If the sum is $0$, the entire chain is good.
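As a minimal sketch (with hypothetical commitment labels), cut-through amounts to cancelling every commitment that appears in both the merged input list and the merged output list:

```python
from collections import Counter

def cut_through(inputs, outputs):
    """Remove every commitment that appears both as an input and
    as an output; what survives are the truly unspent outputs
    (and the truly external inputs)."""
    i, o = Counter(inputs), Counter(outputs)
    spent = i & o  # multiset intersection: the matched pairs
    return list((i - spent).elements()), list((o - spent).elements())

# Block 1 creates C1 and C2; block 2 spends C1 and creates C3,
# so the merged lists contain C1 on both sides:
merged_inputs  = ['C0', 'C1']        # C0 is an external input
merged_outputs = ['C1', 'C2', 'C3']  # C1 was spent in block 2

ins, outs = cut_through(merged_inputs, merged_outputs)
assert ins == ['C0'] and sorted(outs) == ['C2', 'C3']
```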
What does this mean? When a user starts up and downloads the chain, he needs the following data from each block:

Explicit amounts for new money (block subsidy or sidechain peg-ins) with whatever other data this needs.

Unspent outputs of all transactions, along with a Merkle proof that each output appeared in the original block.

Excess $k*G$ values for all transactions.
In Bitcoin today there are about $423000$ blocks, totaling $80$ GB or so of data on the hard drive to validate everything. These data comprise about $150$ million transactions and $5$ million unspent non-confidential outputs. Let us estimate how much space the same number of transactions would take on a Mimblewimble chain. Each unspent output is around $3$ KB for the range proof and Merkle proof. Each transaction also adds about $100$ bytes: a $k*G$ value and a signature. The block headers and explicit amounts are negligible. Adding this together we get $30$ GB – with confidential transactions and an obscured transaction graph!
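The back-of-the-envelope estimate can be checked directly with the figures quoted above:

```python
# Figures quoted above: ~5 million unspent outputs at ~3 KB each
# (range proof + Merkle proof), plus ~150 million transactions at
# ~100 bytes each (one k*G value and a signature).
unspent_outputs = 5_000_000
bytes_per_output = 3_000          # ~3 KB
transactions = 150_000_000
bytes_per_transaction = 100       # k*G value + signature

total = (unspent_outputs * bytes_per_output
         + transactions * bytes_per_transaction)
assert total == 30_000_000_000    # ~30 GB, versus ~80 GB for Bitcoin
```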
Questions and Intuition ¶
Here are some questions that dreams have asked me these past weeks, from which I woke up sweating. But in fact it is OK.
Q. If you delete the transaction outputs, users cannot verify the range proof, and maybe a negative amount is created.
A. This is OK. For the entire transaction to validate, all negative amounts must have been destroyed. The user has SPV security only that no illegal inflation happened in the past, but the user knows that at this time no inflation has occurred.
Q. If you delete the inputs, double spending can happen.
A. In fact, this means: maybe someone claims that some unspent output was spent in the old days. But this is impossible, otherwise the sum of the combined transaction could not be zero. An exception is that if the outputs have amount zero, it is possible to make two that are negatives of each other, and the pair can be revived without anything breaking. So to prevent consensus problems, $0$-amount outputs should be banned. Just add $H$ to each output; now they all have an amount of at least $1$.
Future Research ¶
Here are some questions I can not answer at the time of this writing.

What script support is possible? We would need to translate script operations into some sort of discrete logarithm information.

We require users to check all $k*G$ values, when in fact all that is needed is that their sum is of the form $k*G$. Instead of using signatures, is there another proof of discrete logarithm that could be combined?

There is a denial-of-service option when a user downloads the chain: the peer can give gigabytes of data and list the wrong unspent outputs. The user will see that the result does not add up to $0$, but cannot tell where the problem is.
For now maybe the user should just download the blockchain from a Torrent or something, where the data is shared between many users and is reasonably likely to be correct.
Friday, November 30, 2018
Maximum Likelihood Estimation
There are many ways in statistics to define good estimators for a model, like the mean or the variance. However, instead of relying on authority and tradition to provide such estimators, it would be useful to have a principled approach.
Hence, we are going to have a look at maximum likelihood estimation, which is a very commonly used principle to derive the sought-after estimators.
Let’s start by examining $m$ independent samples $\mathbb{X} = \{\x^{(1)}, \dots, \x^{(m)}\}$ from a distribution $\pdata$, where the latter is actually not known. Then, we can write down the model distribution as:
$$\begin{equation} p_{\text{model}}: (\x;\THETA) \mapsto \hat{p}\in\mathbb{R} \;\hat{=}\; p_{data}(\mathbf{x}) \end{equation}$$where $\THETA$ is a parameter over the family of probability distributions $\pmodel{\x}$. Then the maximum likelihood estimator for $\THETA$ is defined as:
$$\begin{align} \THETA_{\text{ML}} &= \argmax_{\THETA} \pmodel{\mathbb{X}} \\ &= \argmax_{\THETA} \prod_{i=1}^{m} \pmodel{\x^{(i)}} \end{align}$$Please note, that working with the product $\prod$ is for a variety of reasons rather inconvenient. Hence, let’s transform it into a sum $\sum$ by taking the logarithm:
$$\begin{equation} \THETA_{\text{ML}} = \argmax_{\THETA} \sum_{i=1}^{m} \log\pmodel{\x^{(i)}} \end{equation}$$This works because applying the logarithm does not change the $\argmax$ for $\THETA$. We can continue transforming by dividing by $m$, which also has no effect on the $\argmax$ for $\THETA$:
$$\begin{equation} \THETA_{\text{ML}} = \argmax_{\THETA} \frac{1}{m} \sum_{i=1}^{m} \log\pmodel{\x^{(i)}} \end{equation}$$which we can then express as an expectation with respect to the empirical distribution $\hat{p}_\text{data}$:
$$\begin{equation} \THETA_{\text{ML}} = \argmax_{\THETA} \mathbb{E}_{{\x} \sim {\hat{p}_\text{data}}} \log\pmodel{\x^{(i)}} \end{equation}$$Summarized: So, by maximizing the expectation $\mathbb{E}$ over the logarithm of our model distribution $p_\text{model}$ for the given samples ${\x} \sim {\hat{p}_\text{data}}$ and with respect to the parameter $\THETA$, we tend to end up with good estimators.
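As a concrete (hypothetical) instance, consider a Gaussian model with unknown mean and unit variance: maximizing the average log-likelihood over the parameter recovers the sample mean. A small numerical sketch, with a grid search standing in for the analytic argmax:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=1.0, size=1000)  # samples ~ p_data

def avg_log_likelihood(mu: float) -> float:
    # (1/m) * sum_i log N(x_i; mu, 1), dropping the constant term
    return -np.mean((x - mu) ** 2) / 2.0

# argmax over a grid of candidate parameters (here THETA = mu):
grid = np.linspace(0.0, 4.0, 4001)
mu_ml = grid[np.argmax([avg_log_likelihood(m) for m in grid])]

# The ML estimate coincides with the sample mean (up to the grid):
assert abs(mu_ml - x.mean()) < 1e-3
```

The grid is only for illustration; setting the derivative of the average log-likelihood to zero yields the sample mean analytically.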
Friday, November 2, 2018
Consensus by Avalanche
— by Hasan Karahan #Blockchain, #Avalanche, #AVA
Introduction ¶
Let’s assume you are watching a soccer game in a very large stadium built for such an event. Further, let’s assume that the capacity of the stadium is about one hundred thousand people. So the population $\mathcal{P}$ has a size of:
$$\begin{equation} \mathcal{P} = 100'000\tag{$\star$} \end{equation}$$Now, let’s give the game a twist and ascribe to the spectators a dislike for centralized decision making: They’ll reflect such distaste equally in the soccer game and prefer to not have a referee. Or rather, they have restricted the power of the referee to not being able to issue a yellow or a red card upon a foul.
So, what happens when there is a situation where a player engages another player unfairly, but where it is not so clear if the situation deserves a yellow or a red punishment? The referee cannot issue a decision because otherwise the spectators would rip him apart for transgressing his powers.
So, they themselves have to decide: But the decision should be made almost unanimously, otherwise it does not count. To enable all of the $100'000$ people to converge quickly to the same conclusion, everybody agrees to the following consensus rules:

Upon an unfair behavior make an individual decision if the culprit deserves a yellow or a red punishment and then raise your card, such that it is easily visible for everybody else.

Then take a random sample of $n$ people among the other fellow spectators in the stadium and change your own card according to the majority of those people you have chosen; but stick to your card if there is a tie.

Iterate the second step until the whole (or almost whole) stadium displays the same color.
Formalism ¶
For each person $p$ in the population $\mathcal{P}$ and a given round $r\in\mathbb{N}$ we can define ${card}_r(p)$ as:
$$\begin{equation} {card}_{r}(p) =\begin{cases} 1_{red} & \mathbb{S}_{1_{red}}^{(r)}(p) > {n}\big{/}{2},\\ {card}_{r-1}(p) & \mathbb{S}_{1_{red}}^{(r)}(p) = {n}\big{/}{2},\\ 0_{yellow} & otherwise; \end{cases} \end{equation}$$where we encode a value of $1$ for $red$ and $0$ for $yellow$ cards, and where the sample set $\mathbb{S}_{color}$ is defined as:
$$\begin{equation} \mathbb{S}_{color}^{(r)}(p) = \big\{ s \mid {s\in{sample_{r}(p)}}\land{s=color} \big\} \end{equation}$$where $sample_r(p)$ is defined as:
$$\begin{equation}sample_r(p) = \big\{ {card_{r-1}(p_i) \mid p_i\hat\in{\mathcal{P}}} \land {i\in{\{1..n\}}} \big\}\end{equation}$$where each $p_i\hat\in{\mathcal{P}}$ denotes a random selection of $p_i$ from the population $\mathcal{P}$ according to the uniform distribution, i.e. probability $\mathbb{P}(p_i) = 1\big/\mathcal{P}$ for each $p_i\in{\mathcal{P}}$.
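The update rule for a single person and round can be sketched as follows (next_card is a hypothetical helper named only for illustration; it implements the tie-keeping majority rule above):

```python
RED, YELLOW = 1, 0

def next_card(own_card: int, sampled: list) -> int:
    """card_r(p): follow the majority of the sampled cards from
    round r-1; keep your own card on a tie."""
    reds = sum(sampled)  # cards are encoded as 1 (red) / 0 (yellow)
    if reds * 2 > len(sampled):
        return RED
    if reds * 2 == len(sampled):
        return own_card
    return YELLOW

# Majority red flips a yellow card:
assert next_card(YELLOW, [RED, RED, YELLOW, RED]) == RED
# A tie (2 red, 2 yellow) keeps the current card:
assert next_card(YELLOW, [RED, RED, YELLOW, YELLOW]) == YELLOW
# Majority yellow flips a red card:
assert next_card(RED, [YELLOW, YELLOW, RED]) == YELLOW
```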
Summarized: People keep influencing each other, until everybody hopefully converges onto the same opinion.
Simulation ¶
There is now a very interesting question that needs to be clarified:
How many rounds does it usually take to reach consensus, i.e. until almost the whole stadium displays either $red$ or $yellow$ cards?
We have answered this question by running multiple simulations of the whole process as described above, and found that the number of rounds required to reach consensus is dependent on:

the total population $\mathcal{P}$, where the larger it gets the more rounds need to be iterated, and also on

the number of people randomly sampled in each round i.e. $\mathbb{S}_{color}^{(r)}(p)$, where for larger sample sizes fewer rounds need to be iterated.
The simulation uses the term avalanche, since the change of opinions is reminiscent of an avalanche: the higher the percentage of people (in a given round $r$) thinking, for example, that a foul should be punished with red, the higher the probability of people changing their opinion from yellow to red (in the next round $r+1$).
Aggregate over $1'000$ simulations for $\mathcal{P}=100'000$: ¶
For a population of $100'000$ people consensus can be achieved within $13$ rounds, where it is reached faster when more fellow spectators are taken into consideration: A sample size of $\mathbb{S}=8$ (or more) seems to produce average and median consensus levels very close to $100\%$ with a tiny standard deviation, while having reasonably low amounts of outliers.
The same simulation data shown here as histograms yields overall the same results as above with the box plots.
Aggregate over $1'000$ simulations for $\mathcal{P}=1'000'000$: ¶
One may expect that, by drastically increasing the population size by an order of magnitude to $\mathcal{P}=1'000'000$ people, a similar increase of the sample size and number of rounds would be required; however, that does not seem to be the case: while there is a small decrease in consensus, it is almost imperceptible (for the same sample size of $8$ and the same number of rounds of $13$). However, the number of outliers seems to increase, while still remaining reasonable.
Again, using histograms for the visualization of the same simulations for a population of one million leads to the same conclusions as above with the box plots.
Conclusion ¶
The overall approach of using the model of an avalanche seems to be an efficient process for generating consensus among a very large population. While there seems to be an engineering trade-off between a lower sample size and a higher number of rounds, sampling $8$ or more fellow spectators over $13$ rounds seems to be a good choice, given a population of $100'000$ people. Also, these parameters can be applied to a much larger crowd of one million people without any significant decrease in consensus.
Appendix ¶
The avalanche.py simulator ¶
Simulates for the provided population size $\mathcal{P}$, maximum sample size $\mathbb{S}$ and maximum number of rounds the avalanche process:
#!/usr/bin/env python
from typing import Dict
from numpy.random import choice
from numpy.random import randint
from numpy import array
from numpy import maximum
from numpy import round
from numpy import zeros
from matplotlib import pyplot as pp
import argparse
import json
import numpy
import sys

def avalanche(
    population_size: int, max_sample: int, max_round: int
) -> array:
    """
    Simulates for the provided number of `max_round`, the given
    `population_size` and `max_sample` the *avalanche* process,
    where we keep the `population_size` fixed, but consider all
    sample sizes from `1` until `max_sample` (inclusive); then
    return a matrix (array) `m` for each round and sample size.

    While technically it is not required to use the matrix `m`,
    and rely *only* on the array `p` (to successfully simulate
    the avalanche process), it is used for bookkeeping, helping
    later on to generate plots of the entire process.
    """
    m = zeros(shape=(max_round, max_sample))
    for s in range(max_sample):
        p = population(size=population_size)
        for r in range(max_round):
            m[r, s] = p.sum() / population_size
            p = resample(p, s + 1)
    return m

def population(size: int) -> array:
    """
    Returns a uniformly sampled population for the given `size`,
    where a value of `0` represents *yellow* and a value of `1`
    *red* cards.
    """
    return randint(0, 2, size=size)

def resample(p: array, size: int) -> array:
    """
    Make a random choice of a sample `size` (from a given array
    `p`) for *each* element within the array `p`. Then for each
    sample calculate, if the majority of the sample indicates a
    *red* (i.e. `1`) or a *yellow* (i.e. `0`) card. However, if
    there is *no* majority (i.e. a tie) within the sample, then
    do nothing.

    Note that the returned array has the same size as the array
    `p`: It represents a state update of `p`, where elements of
    the array have switched their "opinion" with respect to the
    majority they have sampled.
    """
    ps = choice(p, size=(size, p.size)).sum(axis=0)
    eq, gt = ps == size / 2.0, ps > size / 2.0
    return p * eq + gt

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Produces the Avalanche Matrix '
                    '[rounds x max.sample-size]')
    parser.add_argument('-p', '--population',
        type=int, help='population size '
        '(default: %(default)s)', default=100000)
    parser.add_argument('-r', '--max-round',
        type=int, help='maximum number of rounds '
        '(default: %(default)s)', default=13)
    parser.add_argument('-s', '--max-sample',
        type=int, help='maximum sample size '
        '(default: %(default)s)', default=10)
    args = parser.parse_args()
    m = avalanche(
        args.population, args.max_sample, args.max_round)
    numpy.savetxt(
        sys.stdout, m, header=json.dumps(vars(args)))
Running a single simulation for the population size of $\mathcal{P}=100'000$ people yields the matrix below, where the rows display the rounds in the range of $[1..13]$ and the columns iterate over the sample sizes of $[1..10]$:
$ ./avalanche.py --population=100000 --max-sample=10 --max-round=13
# {"population": 100000, "max_sample": 10, "max_round": 13}
4.988699999999999801e-01 4.966800000000000104e-01 4.984000000000000097e-01 5.022699999999999942e-01 5.014100000000000223e-01 5.012299999999999534e-01 5.022900000000000142e-01 5.009400000000000519e-01 5.020000000000000018e-01 5.022199999999999998e-01
4.988299999999999956e-01 4.967199999999999949e-01 4.961999999999999744e-01 5.048200000000000465e-01 5.040599999999999525e-01 5.027399999999999647e-01 5.079200000000000381e-01 5.044100000000000250e-01 5.049900000000000500e-01 5.068599999999999772e-01
5.008399999999999519e-01 4.946300000000000141e-01 4.975700000000000123e-01 5.091999999999999860e-01 5.078399999999999581e-01 5.056699999999999529e-01 5.165199999999999791e-01 5.125600000000000156e-01 5.125399999999999956e-01 5.180099999999999705e-01
5.001700000000000035e-01 4.928899999999999948e-01 4.971099999999999963e-01 5.179200000000000470e-01 5.152900000000000258e-01 5.101200000000000179e-01 5.370099999999999874e-01 5.320200000000000484e-01 5.312400000000000455e-01 5.497100000000000319e-01
4.999100000000000210e-01 4.890999999999999792e-01 4.946800000000000086e-01 5.317199999999999704e-01 5.306300000000000461e-01 5.204100000000000392e-01 5.826999999999999957e-01 5.774500000000000188e-01 5.754700000000000371e-01 6.302799999999999514e-01
5.019000000000000128e-01 4.824499999999999900e-01 4.934000000000000052e-01 5.604200000000000292e-01 5.594000000000000083e-01 5.454499999999999904e-01 6.734200000000000186e-01 6.862800000000000011e-01 6.799100000000000144e-01 8.149999999999999467e-01
5.020700000000000163e-01 4.750900000000000123e-01 4.914999999999999925e-01 6.127299999999999969e-01 6.083399999999999919e-01 5.992399999999999949e-01 8.385000000000000231e-01 8.855800000000000338e-01 8.746000000000000441e-01 9.923300000000000454e-01
4.981400000000000272e-01 4.623099999999999987e-01 4.866599999999999815e-01 7.039199999999999902e-01 6.959999999999999520e-01 7.061199999999999699e-01 9.845000000000000417e-01 9.983600000000000252e-01 9.976399999999999713e-01 1.000000000000000000e+00
5.013100000000000334e-01 4.423199999999999910e-01 4.833500000000000019e-01 8.430499999999999661e-01 8.304599999999999760e-01 8.831599999999999451e-01 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00
5.021499999999999853e-01 4.149599999999999955e-01 4.769399999999999751e-01 9.706099999999999728e-01 9.632399999999999851e-01 9.950499999999999901e-01 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00
5.045800000000000285e-01 3.738000000000000211e-01 4.657899999999999818e-01 9.998099999999999765e-01 9.995000000000000551e-01 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00
5.050700000000000189e-01 3.149700000000000277e-01 4.497700000000000031e-01 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00
5.029900000000000482e-01 2.368300000000000127e-01 4.264899999999999802e-01 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00 1.000000000000000000e+00
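Since the matrix is written with numpy.savetxt and a JSON header, it can easily be read back for further analysis. Here is a minimal sketch of the round trip, using an in-memory buffer instead of the actual output file:

```python
import io
import json
import numpy

# A tiny stand-in for the script's output: JSON header plus matrix rows.
args = {'population': 100000, 'max_sample': 10, 'max_round': 13}
buffer = io.StringIO()
numpy.savetxt(buffer, numpy.zeros((13, 10)), header=json.dumps(args))

# Read it back: numpy.savetxt prefixes the header with '# ',
# which numpy.loadtxt would otherwise skip as a comment line.
buffer.seek(0)
header = json.loads(buffer.readline().lstrip('# '))
m = numpy.loadtxt(buffer)

print(header['population'])  # 100000
print(m.shape)               # (13, 10)
```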
Monday, May 15, 2017
Libbitcoin: bx ec-to-address
Today, I’d like to discuss how you can derive a Bitcoin address from a public key using bx ec-to-address:
hsk81 ~ $ bx help ec-to-address
Usage: bx ec-to-address [-h] [--config value] [--version value]
[EC_PUBLIC_KEY]
Info: Convert an EC public key to a payment address.
Options (named):
-c [--config]    The path to the configuration settings file.
-h [--help]      Get a description and instructions for this command.
-v [--version]   The desired payment address version.
Arguments (positional):
EC_PUBLIC_KEY    The Base16 EC public key to convert. If not
                 specified the key is read from STDIN.
Alright, apparently this command eats a public key and produces an output, which we then call a Bitcoin address. Here is a way to produce one:
hsk81 ~ $ bx seed | bx ec-new | bx ec-to-public | bx ec-to-address
1CDR8xyAJ4vzAHoTBbXy1J14B8QhjZ366r
So, 1CDR8xyAJ4vzAHoTBbXy1J14B8QhjZ366r is a Bitcoin address, which you can use to ask people to send you bitcoins. But beware! If you create your address this way, you never save your private key, and hence you would have no way to spend your coins again. That would be bad. Hence, let’s do this:
hsk81 ~ $ EC_PRIVATE=$(bx seed | bx ec-new)
hsk81 ~ $ echo $EC_PRIVATE
fa86579eb7754d0ba98c96d0d180da7c10f1b9def22db603925b76fa9e8ec87c
hsk81 ~ $ EC_ADDRESS=$(echo $EC_PRIVATE | bx ec-to-public | bx ec-to-address)
hsk81 ~ $ echo $EC_ADDRESS
19vLj79rrErRK6q2GAcfYxRmWyT41MXcV
Now, we capture the private key in EC_PRIVATE and the address in EC_ADDRESS. Alright! But how does bx ec-to-address work under the hood? Let’s examine the source code:
console_result ec_to_address::invoke(std::ostream& output, std::ostream& error)
{
    const auto& point = get_ec_public_key_argument();
    const auto version = get_version_option();
    output << payment_address(point, version) << std::endl;
    return console_result::okay;
}
Good: so, the point argument seems to be the public key we provide via the CLI, and the version is some additional argument we don’t want to care about (in this post). Then point is forwarded to payment_address, where the result is then serialized to the output. Good! But what does payment_address do? Let’s check:
payment_address::payment_address(const ec_public& point, uint8_t version)
  : payment_address(from_public(point, version))
{
}
Above, the constructor is just putting together the payment address with the help of the (static) from_public function:
payment_address payment_address::from_public(const ec_public& point,
    uint8_t version)
{
    if (!point)
        return payment_address();
    data_chunk data;
    return point.to_data(data) ?
        payment_address(bitcoin_short_hash(data), version) :
        payment_address();
}
Alright, this looks very promising: so the public key point is converted to some data chunk, hashed and then turned into a payment address. Let’s have a look at that bitcoin_short_hash:
short_hash bitcoin_short_hash(data_slice data)
{
    return ripemd160_hash(sha256_hash(data));
}
OK, so here we got the famous double hash, where a sha256 hash is rehashed using a ripemd160 hash! So, this is pretty much it: the result gets saved in the internal data structures of the payment address and then the whole thing is serialized using base58 encoding:
std::string payment_address::encoded() const
{
    return encode_base58(wrap(version_, hash_));
}
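The wrap-and-encode step is compact enough to sketch in Python. The helper names below are my own, but the logic mirrors wrap() and encode_base58(): prepend the version byte to the hash160 digest, append a 4-byte double-SHA256 checksum, and encode the result in Base58:

```python
import hashlib

# The Base58 alphabet: no 0, O, I or l, to avoid visual confusion.
B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def encode_base58(data: bytes) -> str:
    # Leading zero bytes are encoded as literal '1' characters.
    pad = len(data) - len(data.lstrip(b'\x00'))
    n = int.from_bytes(data, 'big')
    out = ''
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return '1' * pad + out

def to_address(hash160: bytes, version: int = 0) -> str:
    # wrap(): prepend the version byte and append a 4-byte checksum
    # (the first bytes of a double SHA-256 over version + hash).
    body = bytes([version]) + hash160
    check = hashlib.sha256(hashlib.sha256(body).digest()).digest()[:4]
    return encode_base58(body + check)

# Sanity check: the all-zeros hash160 with version 0 yields the
# well-known Bitcoin "eater" address.
print(to_address(bytes(20)))  # 1111111111111111111114oLvT2
```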
Finally, we’ve managed to work through the generation of a seed with bx seed, converted it into a private key with bx ec-new, and turned that one into a public key with bx ec-to-public, which was then turned into a Bitcoin address with bx ec-to-address.
Friday, April 21, 2017
Libbitcoin: bx ec-to-public
— Don’t trust a man who needs an income – except if it is minimum wage. Those in corporate captivity would do anything to “feed a family”. [NNT]
Today, I’d like to talk about how to derive a public key from a secret using Elliptic Curve Cryptography. Let’s try the following command, bx ec-to-public:
hsk81 ~ $ bx ec-to-public -h
Usage: bx ec-to-public [-hu] [--config value] [EC_PRIVATE_KEY]
Info: Derive the EC public key of an EC private key. Defaults to the
compressed public key format.
Options (named):
-c [--config]        The path to the configuration settings file.
-h [--help]          Get a description and instructions for this command.
-u [--uncompressed]  Derive using the uncompressed public key format.
Arguments (positional):
EC_PRIVATE_KEY       The Base16 EC private key. If not specified the key
                     is read from STDIN.
Alright, apparently ec-to-public takes a secret and turns it into something public, allowing other people to encrypt messages, which only you, the holder of the secret, can decrypt. That’s at least the working assumption under which public key cryptography operates!
In the context of Bitcoin, the public key can be used as your address people can send “money” to, and only you can spend it, since as the holder of the private key only you can sign the associated transactions.
So, let’s generate such a public address for other people to send us money:
hsk81 ~ $ bx seed | bx ec-new | bx ec-to-public
03e81a84fe1d5aa4269b0faa78110549caf3873364467688dec27c94761c2b1e6e
Well, this does not really look like the traditional Bitcoin address you would expect to see – for example:
12AEJmWva5QJVYmQ3r3vdFZeFfLUeSVPBq
But that conversion from a public key to an actual address (accomplished with bx ec-to-address) will be discussed in depth in another post. Here, we want to focus on how exactly this “magic” transformation from a private key (secret) to a public key (address) is accomplished.
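As a side note on the 66 hex digits printed above: that is a compressed public key, i.e. one parity prefix byte (02 for an even y coordinate, 03 for an odd one) followed by the 32-byte x coordinate of the curve point. A quick sanity check in Python, using the key from above:

```python
# The compressed public key printed by bx ec-to-public above.
pub = bytes.fromhex(
    '03e81a84fe1d5aa4269b0faa78110549caf3873364467688dec27c94761c2b1e6e')

assert len(pub) == 33          # 1 prefix byte + 32-byte x coordinate
prefix, x = pub[0], int.from_bytes(pub[1:], 'big')
assert prefix in (0x02, 0x03)  # 0x02: even y, 0x03: odd y
print(hex(prefix), hex(x))
```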
Let’s dive directly into the code of the bx ec-to-public command:
console_result ec_to_public::invoke(
std::ostream& output, std::ostream& error)
{
const auto& secret = get_ec_private_key_argument();
const auto& uncompressed = get_uncompressed_option();
ec_compressed point;
secret_to_public(point, secret);
output << ec_public(point, !uncompressed) << std::endl;
return console_result::okay;
}
As we see, some secret is transformed to a public point, which is called so because it corresponds to an actual “point” in elliptic curve cryptography. The short explanation of how this is achieved is to note that the secret is like the number of multiplications of the point over elliptic curves, with the NSA telling us that this multiplication is “secure” enough to operate a multi-billion economy. Let’s see where we will end up…
Anyway, since there is a bunch of “independent” mathematicians telling us more or less the same story – that ECC is secure enough (under the assumption that one-way functions exist, under the assumption that $P\neq{NP}$ in a Turing model of computation, and under the assumption that current quantum computers are not able to brute-force the secret from the public point – lots of assumptions) – we believe what we’re told and move on with our technical discussion.
Alright, let’s have a look at the secret_to_public function:
bool secret_to_public(ec_compressed& out, const ec_secret& secret)
{
    const auto context = signing.context();
    return secret_to_public(context, out, secret);
}
So, apparently what we do is to turn the secret under a certain signing context (which I will not further elaborate) into something public – represented here by the out reference. Let’s dive deeper:
template <size_t Size>
bool secret_to_public(
    const secp256k1_context* context,
    byte_array<Size>& out, const ec_secret& secret)
{
    secp256k1_pubkey pubkey;
    return secp256k1_ec_pubkey_create(
        context, &pubkey, secret.data()) == 1 &&
        serialize(context, out, pubkey);
}
We’re getting close to the truth, and the magic seems to become more and more profane: apparently, the secp256k1_ec_pubkey_create function does the hard job, and if everything goes fine (== 1), we serialize the pubkey to out. Let’s dive even further:
int secp256k1_ec_pubkey_create(
    const secp256k1_context* ctx,
    secp256k1_pubkey *pubkey,
    const unsigned char *seckey)
{
    secp256k1_gej pj;
    secp256k1_ge p;
    secp256k1_scalar sec;
    int overflow;
    int ret = 0;
    VERIFY_CHECK(ctx != NULL);
    ARG_CHECK(pubkey != NULL);
    memset(pubkey, 0, sizeof(*pubkey));
    ARG_CHECK(secp256k1_ecmult_gen_context_is_built(&ctx->ecmult_gen_ctx));
    ARG_CHECK(seckey != NULL);
    secp256k1_scalar_set_b32(&sec, seckey, &overflow);
    ret = (!overflow) & (!secp256k1_scalar_is_zero(&sec));
    if (ret) {
        secp256k1_ecmult_gen(&ctx->ecmult_gen_ctx, &pj, &sec);
        secp256k1_ge_set_gej(&p, &pj);
        secp256k1_pubkey_save(pubkey, &p);
    }
    secp256k1_scalar_clear(&sec);
    return ret;
}
And welcome to the funny world of C programmers, who are not able to come up with reasonable variable names: the code above is actually not part of the libbitcoin software itself (which is currently maintained by Eric Voskuil using his beautiful C++), but of the underlying secp256k1 C library.
Alright, let’s deconstruct the gibberish above and understand what’s going on: well, this code does not actually do much except taking the input arguments, doing some sanity checks and then invoking the secp256k1_ecmult_gen function, which does the “heavy” lifting – namely executing, a sec number of times, an addition over elliptic curves (modulo some prime number), which is saved to pj, then turned into p, and finally stored in the pubkey pointer.
So, what does secp256k1_ecmult_gen
really do? Let’s see:
static void secp256k1_ecmult_gen(
    const secp256k1_ecmult_gen_context *ctx,
    secp256k1_gej *r, const secp256k1_scalar *gn)
{
    secp256k1_ge add;
    secp256k1_ge_storage adds;
    secp256k1_scalar gnb;
    int bits;
    int i, j;
    memset(&adds, 0, sizeof(adds));
    *r = ctx->initial;
    /* Blind scalar/point multiplication by
       computing (n-b)G + bG instead of nG. */
    secp256k1_scalar_add(&gnb, gn, &ctx->blind);
    add.infinity = 0;
    for (j = 0; j < 64; j++) {
        bits = secp256k1_scalar_get_bits(
            &gnb, j * 4, 4
        );
        for (i = 0; i < 16; i++) {
            secp256k1_ge_storage_cmov(
                &adds, &(*ctx->prec)[j][i], i == bits
            );
        }
        secp256k1_ge_from_storage(&add, &adds);
        secp256k1_gej_add_ge(r, r, &add);
    }
    bits = 0;
    secp256k1_ge_clear(&add);
    secp256k1_scalar_clear(&gnb);
}
Mmh, more gibberish – although the main idea of this implementation is actually pretty ingenious (provided you believe the NSA, as mentioned above): so, the r and gn variables seem to be important, where the former looks like the current sum and gn is of course our secret, or the number of times we’re supposed to sum r over and over again (hence the name ecmult_gen). The gen suffix looks like it stands for the generator of the elliptic curve, which initially seems to come from the context member ctx->initial.
There are two features I like about this code. First, the author tries very hard to write secure code by doing a so-called “blind” multiplication, initially adding a number ctx->blind to gn. What I find interesting, and don’t really understand yet, is that this initial blind number is not subtracted at the end to neutralize the initial addition. I can only explain this by speculating that either ECC multiplication is invariant w.r.t. an initial (blind) shift, or it is automagically taken care of implicitly within the (nested) loops. If there is somebody among my readers who’d like to enlighten me, please drop a comment below!
The other feature I like is that the author seems to have tried to protect the multiplication against side-channel attacks, since, as we all know, measuring CPU thermodynamics can under the right circumstances reveal the very secret we utilize here (see the original source on GitHub with some corresponding comments, which I’ve omitted here for the sake of brevity).
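As for the blinding, one can at least convince oneself numerically that a blinded multiplication yields the same point. Below is a slow, purely illustrative Python scalar multiplication over secp256k1 – my own sketch, not the constant-time table walk of the real code – verifying that $(n-b)G + bG = nG$ (note that the sum r above starts at ctx->initial rather than at infinity, which suggests the compensating term is folded into that starting point):

```python
# secp256k1 domain parameters (field prime, group order, generator).
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    # Group addition; None stands for the point at infinity.
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:  # doubling: tangent slope
        s = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:       # addition: chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point=G):
    # Plain double-and-add (NOT constant-time, unlike the real code).
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

# Blinding check: (n - b)G + bG equals nG.
n, b = 0xDEADBEEF, 0x1234
assert ec_add(ec_mul((n - b) % N), ec_mul(b)) == ec_mul(n)
```

So a blind shift is indeed neutral with respect to the final sum, as long as a compensating term is added somewhere along the way.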
Wow, alright! Let’s recapitulate:
hsk81 ~ $ bx seed | bx ec-new | bx ec-to-public
03e81a84fe1d5aa4269b0faa78110549caf3873364467688dec27c94761c2b1e6e
So, an initial pseudo-random seed, which is calculated based on the clock (according to the current implementation), is turned into a private key (based on HD wallets – to be discussed in a later post), which is then morphed into a public key, from which we can then derive an address.
Summarized: (a) measure the current time really, really well (to a high degree of precision), while making sure nobody is watching; (b) apply some deterministic magic to it and label it your private key (and entrust all your wealth to this calculation – including your wife and children); (c) then take some publicly available number (the initial generator of an elliptic curve), and (d) multiply your private key with that number (again entrusting your wife and children to this multiplication), deriving finally (e) the public key. Ultimately, (f) turn this public key into a Bitcoin address.
If it were only for the initial steps (a) till (e), I would be very suspicious of this whole construction; although admittedly, it’s pretty much the current state of the art in today’s – publicly available – cryptography.
But the last step (f) does mitigate a lot of the risks I see here, which I’m going to discuss in my next post.