Saturday, October 1, 2016

Quantum Chance: Non-locality – Part #1

I recently had the privilege of enjoying Nicolas Gisin’s book “Quantum Chance: Non-locality, Teleportation, and Other Quantum Marvels”. It was a tough read (as promised by the author), but a very eye-opening experience!

Here, I’d like to focus on the non-locality aspect and especially investigate the (in)famous Bell’s Game: a rather mind-boggling thought experiment, which physicists have since tested successfully.

Quite frankly, I don’t claim to have fully understood the whole book. I’m merely a computer scientist who is interested in grasping what these crazy physicists have been up to recently: The whole point of this post is to clarify my own mind by forcing myself to go through the experiment point by point, without committing blatant mistakes.

Bell’s Game: gorilla or girl?

So, what is this whole game about? Well, it’s not your everyday game; that much I can promise you. But to take a more everyday approach to the whole experiment, let’s think of it as a pair of very special gambling devices in a casino:

One day the brothers Ahmed and Mehmed enter the casino, with the strong determination to enjoy a sinful night and get rich in the process. They check out roulette, blackjack and all the other fancy games, but since they know painfully well from experience that somehow the house always seems to win – despite strong claims of fair play – they refuse to play those games.

Then Ahmed discovers a pair of floating spheres – both of them translucent – in the corner of the casino: Neither of them seems to be connected to the ceiling or to each other; they simply seem to be suspended in midair! How can this be? Ahmed asks Mehmed: “Yaw, how come these orbs are floating in midair without falling down? Tell me, you’re the physicist here!”

Dr. Mehmed explains to Ahmed with a grin on his face: “Aah, my silly brother: You may have graduated from the number one computer science department in the world”, which by the way would be ETH Zurich in Switzerland, “but when it comes to matters of reality you are utterly lost! Don’t you see that these spheres are made of glass and kept in place by a strong and absolutely steady stream of air from below?”

Ahmed responds: “I see, now I get it! But my cherished brother, I’m not as much of an idiot as you would like to think, alright? Look, there is a scanner embedded into each orb, and when I put my hand onto one, it turns either pitch black or marble white. Isn’t that interesting?”

Mehmed is confused: “Indeed, very much so! But how do you win or lose this game? Look, regardless of which hand I use, the right or the left one, the sphere seems to turn black or white completely at random. What is the point?”

Ahmed confirms: “You’re right! My orb is behaving exactly the same way. But why would they put such a useless game in a casino?” But then he has an idea: “Just wait a minute. What if we touch the orbs at the same time? Maybe then something will happen!”

So they both touch their respective orbs with their left hands. Ahmed’s orb turns black, and Mehmed’s sphere also turns black. The whole casino starts flashing, and a ravishing girl materializes between the orbs. The brothers cannot believe their eyes, and Mehmed asks her: “Amaneen! What’s going on? Who are you?”

The girl responds: “My name is Hooriyeh, and I came to congratulate both of you: You have just won a golden ruby. Here you go!” And flip, she’s gone.

Ahmed cheers: “Uy uyyyh, can you believe that? Let’s try it again. Maybe next time she’ll come with a sister of hers and we’ll force them to stay!”

So, greedily, they touch the spheres again, both of them with their left hands. But this time one of the orbs turns black, while the other one turns white: Immediately a huge and hairy gorilla materializes and groans: “My name is Hanumaan, and I came to punish you for your sins!”, and starts to whip them without mercy. After some time, which feels like an eternity to the poor brothers, the gorilla vanishes into thin air – but does not take the golden ruby away!

Both brothers get very upset and complain to the casino manager Zoltaan: “What kind of a violent game is this? We’ll sue you!” But the manager is unmoved and points out: “When the girl appeared, you were not so upset! I’m going to explain the rules of the game to you now, and then you can decide for yourselves whether you want to continue playing:”

  • Rule #1: “If at least one of you two brothers touches his orb with the left hand, and both orbs turn the same color, Hooriyeh will appear and give you a golden ruby.”

  • Rule #2: “If both of you touch your orbs with your right hands, and the orbs turn different colors, again Hooriyeh will appear with a golden ruby.”

  • Rule #3: “In all other cases you will be whipped by Hanumaan. But the gorilla will never take your golden rubies away. If after 400 trials you manage to collect 300 or more rubies, you can marry Hooriyeh and her sister. Otherwise, you’ll be enslaved by Hanumaan for life!”

The brothers are terrified by the prospect of serving a gorilla and potentially being whipped for the rest of their lives. Ahmed speaks: “Come, Mehmed, this game is very weird! Let’s switch to roulette.”

However, Mehmed – still a bachelor – cannot silence the masochistic physicist inside himself, and unconsciously starts to calculate the probabilities of winning and losing…

What do you think? Will they stop playing the game, or will they risk everything and potentially get married to two ravishing sisters? It’s a tough choice to make!

Tuesday, August 2, 2016

TypeScript Decorators: @buffered

There are many circumstances where a developer wants to ignore subsequent invocations of a function, except for the last one. For example, you may want to ignore a crazy user’s high-speed clicking until they stop: Only the very last click should trigger an action.

But how do we define last in this context? Well, a simple way to do it is to buffer all invocations of a function and then invoke only the very last one, after which (for a certain time window) no further invocation is triggered by the manic user.

For example, I could set the buffer threshold to 200 milliseconds: This will cause very fast clicks to be interpreted as a single click, while any two subsequent clicks that are more than 200 milliseconds apart will be interpreted as two separate clicks. Let’s have a look at a toy example:

import {buffered} from "./buffered";

class App {
    public nilHundredMsAgo() {
        console.log('[000-ms-ago]', new Date().toISOString());
    }
    @buffered
    public twoHundredMsAgo() {
        console.log('[200-ms-ago]', new Date().toISOString());
    }
}

let app = new App();
app.nilHundredMsAgo();
app.twoHundredMsAgo();

If we run the example we get:

$ npm start
[000-ms-ago] 2016-08-02T11:20:50.463Z
[200-ms-ago] 2016-08-02T11:20:50.667Z

As you see above, there is a time difference of at least 200 milliseconds, which confirms that the App.twoHundredMsAgo method has been successfully buffered. Let’s extend the above example:

import {buffered, IBufferedFunction} from "./buffered";

class App {
    public nilHundredMsAgo() {
        console.log('[000-ms-ago]', new Date().toISOString());
    }
    @buffered
    public twoHundredMsAgo() {
        console.log('[200-ms-ago]', new Date().toISOString());
    }
    @buffered(600)
    public sixHundredMsAgo() {
        console.log('[600-ms-ago]', new Date().toISOString());
    }
}

let app = new App();
app.nilHundredMsAgo();
app.twoHundredMsAgo();

let fn:Function = app.sixHundredMsAgo;
for (let i = 0; i<256; i++) fn();
let bn = <IBufferedFunction>fn;
bn.cancel();

And this time, if we run the extended example we get:

$ npm start
[000-ms-ago] 2016-08-02T11:23:59.159Z
[200-ms-ago] 2016-08-02T11:23:59.362Z

This looks just like the previous output! So what happened to the 256 invocations of App.sixHundredMsAgo? Well, they got canceled, which is why none of them produced a time stamp.

Accessing the cancel function is a little awkward, since the corresponding method first has to be converted to a Function and then again to an IBufferedFunction, which declares cancel. But since cancelling a buffered method is not expected to be needed that often, we can live with this way of accessing cancel.
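If cancelling comes up more often, a tiny helper can funnel the double cast through one place; asBuffered below is a hypothetical name of mine, not part of the buffered module:

import {IBufferedFunction} from "./buffered";

// Hypothetical helper (not part of the original module): hides the
// Function-to-IBufferedFunction cast behind a single function.
function asBuffered(fn:Function):IBufferedFunction {
    return <IBufferedFunction>fn;
}

// usage: asBuffered(app.sixHundredMsAgo).cancel();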

Further, please also note that above we used @buffered(600) to change the default time window from 200 to 600 milliseconds. Alright, let’s have a look at a final and more realistic example:

/// <reference path="lib/jquery/index.d.ts" />
import {buffered} from './buffered';

class App {
    public constructor() {
        $('#my-button').on('click', this.onClick.bind(this));
    }

    @buffered
    public onClick(ev:MouseEvent) {
        console.log('[on:click]', ev);
    }
}

let app = new App();

As you can see above, by simply decorating the onClick handler with @buffered we can fend off crazy users who have lost their minds and become click-o-maniacs! Please also note that we used jQuery to subscribe the buffered handler to mouse click events.

Finally, here is the magic that enables us to use the @buffered decorator:

export interface IBufferedFunction extends Function {
    cancel:Function;
}

export function buffered(
    ms:number):Function;
export function buffered(
    target:any, key:string, descriptor?:PropertyDescriptor):void;
export function buffered(
    arg:number|any, key?:string, descriptor?:PropertyDescriptor
):Function|void {
    if (typeof arg === 'number') {
        return _buffered(arg);
    } else {
        _buffered(200)(<any>arg, key, descriptor);
    }
}

function _buffered(ms:number) {
    return function (
        target:any, key:string, descriptor?:PropertyDescriptor
    ) {
        let fn:Function = descriptor ? descriptor.value : target[key],
            id:number;
        let bn:Function = function (...args:any[]) {
            if (id !== undefined) {
                clearTimeout(id); // drop the previously buffered invocation
            }
            id = setTimeout(() => {
                fn.apply(this, args); // only the very last invocation fires
                id = undefined;
            }, ms);
        };
        // copy any own properties of the original function onto the wrapper:
        for (let el in fn) {
            if (fn.hasOwnProperty(el)) {
                (<any>bn)[el] = (<any>fn)[el];
            }
        }
        // cancel clears a pending (i.e. not yet flushed) invocation, if any:
        (<IBufferedFunction>bn).cancel = function () {
            if (id !== undefined) {
                clearTimeout(id);
                id = undefined;
            }
        };
        if (descriptor) {
            descriptor.value = bn;
        } else {
            target[key] = bn;
        }
    };
}

export default buffered;
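As a closing design note: what @buffered implements is often called trailing-edge debouncing – only the last call in a burst actually fires, once the time window has elapsed without further calls; the property-copying loop preserves any own properties of the original function, and cancel simply clears a pending timeout. Comparable utilities exist in the wild, for example lodash’s _.debounce.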

Saturday, July 30, 2016

TypeScript Decorators: @trace

Alright, we want to be able to trace our TypeScript classes using a simple decorator:

@trace
class App {
    public method(n:number, text:string) {
    }
}

let app = new App();
app.method(1, 'text')

This shall produce the following output:

[2016-07-30T12:23:25.520Z]#bc0b >>> @.method
[2016-07-30T12:23:25.520Z]#bc0b { '0': 1, '1': 'text' }
[2016-07-30T12:23:25.546Z]#bc0b <<< @.method
[2016-07-30T12:23:25.546Z]#bc0b undefined

Above, we have the time stamp of the invocation, followed by a short random string (identifying which entry and exit lines belong to the same invocation). Then we have the method name plus, on the second line, the list of arguments. Further, on the third line, we have the time stamp of the return, and finally, on the last line, the resulting value.

By default, the method name shall not include the corresponding class name. To create fully qualified method names the @named decorator shall be used:

@trace
@named('App')
class App {/*..*/}

Further, we want to be able to provide a boolean flag to @trace to switch tracing on and off:

@trace(false)
class App {/*..*/}

Further, we want the ability to trace a class but omit certain methods we’re not interested in (maybe because they are called simply too often, and tracing the corresponding invocations would quickly become infeasible):

@trace
class App {
    public method1(n:number, text:string) {/*..*/}

    @traceable(false)
    public method2(n:number, text:string) {/*..*/}
}

We also want the opposite, where only certain methods shall be traced, while in general the rest of the class shall be left alone:

class App {
    public method1(n:number, text:string) {/*..*/}

    @traceable
    public method2(n:number, text:string) {/*..*/}
}

How do we implement all these various tracing features? Here it is:

import '../string/random';

export function trace(
    flag:boolean):Function;
export function trace(
    ctor:Function):void;
export function trace(
    arg:boolean|Function):Function|void
{
    if (typeof arg === 'boolean') {
        return _trace(arg);
    } else {
        _trace(true)(<Function>arg);
    }
}

function _trace(flag:boolean):Function {
    return function (ctor:Function) {
        Object.keys(ctor.prototype).forEach((key:string) => {
            let dtor = Object.getOwnPropertyDescriptor(ctor.prototype, key);
            if (dtor && typeof dtor.value === 'function') {
                _traceable(flag)(ctor.prototype, key);
            }
        });
        Object.keys(ctor).forEach((key:string) => {
            let dtor = Object.getOwnPropertyDescriptor(ctor, key);
            if (dtor && typeof dtor.value === 'function') {
                _traceable(flag)(ctor, key);
            }
        });
    };
}

export function traceable(
    flag:boolean):Function;
export function traceable(
    target:any, key:string, dtor?:PropertyDescriptor):void;
export function traceable(
    arg:boolean|any, key?:string, dtor?:PropertyDescriptor
):Function|void {
    if (typeof arg === 'boolean') {
        return _traceable(arg);
    } else {
        _traceable(true)(<any>arg, key, dtor);
    }
}

function _traceable(flag:boolean):Function {
    return function (target:any, key:string, dtor?:PropertyDescriptor) {
        let wrap = (fn:Function, callback:Function) => {
            if (!flag) {
                // tracing disabled: mark the function as not traced
                (<any>fn)['_traced'] = false;
            } else {
                // wrap each function at most once:
                if ((<any>fn)['_traced'] === undefined) {
                    (<any>fn)['_traced'] = true;

                    let tn:Function = function () {
                        let _named = target._named || '@',
                            random = String.random(4, 16),
                            dt_beg = new Date().toISOString();

                        console.log(
                            `[${dt_beg}]#${random} >>> ${_named}.${key}`);
                        console.log(
                            `[${dt_beg}]#${random}`, arguments);

                        let result = fn.apply(this, arguments),
                            dt_end = new Date().toISOString();

                        console.log(
                            `[${dt_end}]#${random} <<< ${_named}.${key}`);
                        console.log(
                            `[${dt_end}]#${random}`, result);

                        return result;
                    };
                    for (let el in fn) {
                        if (fn.hasOwnProperty(el)) {
                            (<any>tn)[el] = (<any>fn)[el];
                        }
                    }
                    callback(tn);
                }
            }
        };
        if (dtor) {
            if (typeof dtor.value === 'function') {
                wrap(dtor.value, (tn:Function) => {
                    dtor.value = tn;
                });
            } else {
                if (typeof dtor.get === 'function') {
                    wrap(dtor.get, (tn:Function) => {
                        dtor.get = <any>tn;
                    });
                }
                if (typeof dtor.set === 'function') {
                    wrap(dtor.set, (tn:Function) => {
                        dtor.set = <any>tn;
                    });
                }
            }
        } else {
            wrap(target[key], (tn:Function) => {
                target[key] = tn;
            });
        }
    };
}

export default trace;

The details are onerous, but the main idea is simple: Wrap each method that shall be traced with a function that prints the method name and arguments before the invocation, and the result after the invocation.
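Distilled to its core, the wrapping boils down to the following simplified sketch of mine (no enable/disable flag, no accessor handling, no name bookkeeping):

// Simplified sketch of the core idea: replace a method with a
// wrapper that logs entry and exit around the original invocation.
function wrapWithTrace(target:any, key:string):void {
    let fn:Function = target[key];
    target[key] = function (...args:any[]) {
        console.log(`>>> ${key}`, args);
        let result = fn.apply(this, args);
        console.log(`<<< ${key}`, result);
        return result;
    };
}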

As hinted above, we shall be able to write @trace or @trace(true|false) (and similarly @traceable or @traceable(true|false)): In the implementation this is achieved using function overloads.

Decorating static methods

Another point worth mentioning is that static methods can be traced as well, automatically (or manually via @traceable(true)):

@trace
class App {
    public static method(n:number, text:string) {/*..*/}
}

Decorating get and set accessors

Finally, get and set accessors are not traced by default: This makes sense, since in general you do not want to track each and every read and write to a member variable of a class. However, there will be situations where you, for example, synchronize the state of your class with a persistence layer. In such situations it may very well make sense to closely follow the synchronization process:

@trace
class App {
    @traceable(true)
    public get state():any {
        return this._state;
    }
    public set state(value:any) {
        this._state = value;
    }
    private _state:any;
}

As far as I know, TypeScript does not (so far) allow decorators to be applied separately to the getter and the setter of an accessor pair: You have to apply a particular decorator to the first accessor within the class declaration. It is then automatically applied to the corresponding partner accessor as well (if such a partner exists).

Thursday, July 28, 2016

TypeScript: String.random

Today, I’d like to discuss and analyze a function I’m using quite often during my daily work with TypeScript. It’s about generating random strings, and here is the code:

interface StringConstructor {
    random(length?:number, range?:number):string;
}

String.random = function (length:number, range:number = 36):string {

    length = Math.floor(length);
    range = Math.floor(range);

    // p_0 = range^length and p_1 = range^(length+1): a random number
    // in (p_1 - p_0, p_1] spans exactly one full cycle of the last
    // `length` base-`range` digits, so after dropping the leading
    // digit with slice(1), the remaining digits are uniform.
    let p_0 = Math.pow(range, length),
        p_1 = range * p_0;

    return (length > 0)
        ? Math.floor(p_1 - p_0 * Math.random()).toString(range).slice(1)
        : '';
};

So, I attach the random function to the String interface: Yes, normative pundits will point out that I should not overwrite or extend any existing vanilla constructs, but since I use random strings so often, I decided to commit this sin in the name of convenience!

Further, since the result of random is a string, there was no better place for me to attach the former than to the latter. If you cannot follow my logic, be my guest and put the function wherever you deem best.

Alright, after having addressed the dogmatic computer scientists, it’s time to have a look at how we use String.random:

import './random';

let random_1 = String.random(8);
console.log(`random-1 w/{length:8, range:36}: ${random_1}`);
let random_2 = String.random(6, 16);
console.log(`random-2 w/{length:6, range:16}: ${random_2}`);
let random_3 = String.random(4, 2);
console.log(`random-3 w/{length:4, range: 2}: ${random_3}`);

The above code produces on the terminal the following random strings:

random-1 w/{length:8, range:36}: bicgtcoq
random-2 w/{length:6, range:16}: 8cf784
random-3 w/{length:4, range: 2}: 0110

So, apparently the length argument controls the number of characters in the random strings, while the range argument sets the range of characters from which they are chosen. Do not use a range larger than 36, since otherwise the number.toString(range) function, which is used to convert numbers to strings, will complain very loudly!
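To be precise about the “complaining”: the radix argument of toString must lie between 2 and 36, otherwise a RangeError is thrown. A quick sketch (the exact error message is engine-specific):

try {
    (255).toString(40);
} catch (e) {
    console.log(e.name); // prints: RangeError
}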

So much for the practical side of the code; let’s investigate the theoretical side of randomness: Computers cannot create “true” random numbers, but rely on so-called pseudo-random number generators (PRNGs). In our case, we rely on Math.random() to deliver a reasonably usable (discrete) uniform distribution. The latter reference describes it as:

In probability theory and statistics, the discrete uniform distribution is a symmetric probability distribution whereby a finite number of values are equally likely to be observed; every one of $n$ values has equal probability $1/n$.

Or using a simpler language:

Another way of saying “discrete uniform distribution” would be “a known, finite number of outcomes equally likely to happen”.

Actually, getting randomness right is in general quite hard, and a science of its own: So do not use your homegrown PRNGs in security-related applications; rely on well-researched algorithms and correct implementations instead!

In this context, I’d even recommend forgoing the above code and using, for example, the Stanford Javascript Crypto Library. However, if you need a quick and good-enough implementation, then String.random might be your candidate.
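For completeness, here is a minimal sketch of my own using the browser’s standard Web Crypto API instead (hexadecimal output, i.e. range 16; under Node.js one would reach for the crypto module’s randomBytes):

// A minimal sketch, assuming a browser environment where the Web
// Crypto API is available as `crypto`; not a substitute for a vetted
// library in serious security work.
function secureRandom(length:number):string {
    let bytes = new Uint8Array(length);
    crypto.getRandomValues(bytes); // cryptographically strong source
    let result = '';
    for (let i = 0; i < bytes.length; i++) {
        // 256 is divisible by 16, so `% 16` introduces no modulo bias:
        result += (bytes[i] % 16).toString(16);
    }
    return result;
}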

Analysis of the Distribution

So what is good enough? Well, the distribution of the random strings should be uniform. Let’s produce some data to analyze: we’ll generate binary random strings with a length of $16$ characters:

import './random';

class App {
    public logRandom(size:number) {
        for (let i=0; i<size; i++) {
            console.log(String.random(16, 2))
        }
    }
}

let app = new App();
app.logRandom(65536);

This will create a huge list of binary random strings: But how do we determine whether it is uniform? Directly creating a histogram might be an approach, but we might not have enough data to gain significant insight.

Why is that? The total number of binary strings of length $16$ — which is $2^{16}=65536$ — happens to equal the number of samples in our data. So we would expect to see each binary string on average only once: Counting each string and creating a corresponding histogram might confirm that the distribution is not badly skewed, but that’s pretty much it.

However, for uniformly distributed binary strings another property should hold as well: The number of differing characters between two such strings should follow a binomial distribution, which for our parameters is well approximated by a normal distribution. Let’s check this (using the Levenshtein edit distance as our difference measure) with a small Python script, which compares every string in our sample against the first one:

#!/usr/bin/env python

from matplotlib import pyplot as pp
import numpy as np
import sys

def levenshtein(source, target):
    # Levenshtein (edit) distance between two strings; for strings of
    # equal length it lower-bounds the number of differing characters:
    if len(source) < len(target):
        return levenshtein(target, source)
    if len(target) == 0:
        return len(source)

    source = np.array(tuple(source))
    target = np.array(tuple(target))

    prev_row = np.arange(target.size + 1)
    for s in source:
        curr_row = prev_row + 1
        curr_row[1:] = np.minimum(
                curr_row[1:], np.add(prev_row[:-1], target != s))
        curr_row[1:] = np.minimum(
                curr_row[1:], curr_row[0:-1] + 1)
        prev_row = curr_row

    return prev_row[-1]

with open(sys.argv[1]) as file:
    lines = list(map(lambda l: l[:-1], file.readlines()))

ds, l0 = [], lines[0]
for line in lines[1:]:
    d = levenshtein(line, l0)
    if d > 0: ds.append(d)

pp.title('Levenshtein Differences')
pp.hist(ds, bins=13)
pp.grid()
pp.show()

And finally let’s have a look at the histogram:

Levenshtein Differences

So this pretty much confirms our expectation: The histogram is symmetric around a difference of $7$ characters, and fitting a normal distribution to this data should not be a problem.

We conclude that the observed, approximately normal distribution of differences is consistent with an originally uniform distribution. Of course, many more statistical tests would have to be carried out to establish the quality of the PRNG beyond doubt, but we will stop here for the sake of brevity.

Tuesday, July 26, 2016

TypeScript Decorators: @named

I’ve been playing around recently with the fantastic new language TypeScript in general, and with TypeScript decorators in particular, and would like to share some of my experiences. In this post I’ll keep things to the bare minimum, and will elaborate more in upcoming posts.

Alright, let’s dive in: I actually wanted to build my own tracing system, where I would see which parts of my code are invoked when, with which arguments, and with which results.

I was particularly interested in class methods: However, I figured out rather soon that the name of the class is not accessible, unless you use metadata reflection. But since the latter seems to be rather experimental, I decided to go for my own minimal solution.

So, the easiest approach I could imagine was to simply attach a name of my choice to a class of mine using decorators:

import {named} from "./named";

@named('App')
class App {
    public toString():string {
        return this['_named'];
    }

    public static toString():string {
        return this['_named'];
    }
}

var app = new App();
console.log('app.toString:', app.toString());
console.log('App.toString:', App.toString());

Yes, it’s cheap, but hey, I was looking for a quick and working solution! So, when we run npm start, the output should look like:

app.toString: App
App.toString: App

As you see, it works: Both the instance and the static toString methods manage to return the expected App string. This little feature will later become important for us, to provide tracing with fully qualified function names.

Let’s check the named.ts implementation: It’s rather straightforward, since the supplied name string is attached as the _named instance and static member directly to the object:

export function named(name:string) {
    return function (object:any) {
        if (object.prototype._named === undefined) {
            object.prototype._named = name;
        }
        if (object._named === undefined) {
            object._named = name;
        }
    };
}

export default named;

The source code of this example is available on GitHub.com:

git clone git@github.com:hsk81/calaganne calaganne.git
cd 'calaganne.git/2016-07-26/TypeScript Decorators: @named'/

Further, you have to install the npm dependencies (and compile the project):

npm install

Now, you should be able to start the application as already mentioned above:

npm start

Sunday, March 27, 2016

The Probability of Co-Prime Integers

This post requires a basic understanding of prime numbers. In case you are not familiar with them, please go ahead and watch the video below:

Random Co-Prime Integers

Alright, on our journey to investigate quantum algorithms[1] like Shor’s period finding et al., sooner or later we come upon the subject of so-called co-prime integers. Of particular interest for us is the question of the likelihood of two random integers being co-prime. Let us mold this interest of ours into crisp mathematics:

$$\begin{equation} \text{co-prime}(x,y)\iff\text{gcd}(x,y)=1\label{EQ:P1} \end{equation}$$

where $x,y∈\mathbb{Z}$ and $\text{gcd}(x,y)$ is the greatest common divisor, which can be calculated efficiently using the Euclidean algorithm, as sketched below.
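As a small aside, here is a minimal TypeScript sketch of the Euclidean algorithm (plain number arithmetic, hence only valid for safe integers):

// Euclidean algorithm: gcd(x, y) via repeated remainders; inputs are
// assumed to be non-negative safe integers.
function gcd(x:number, y:number):number {
    while (y !== 0) {
        [x, y] = [y, x % y]; // replace (x, y) by (y, x mod y)
    }
    return x;
}

// co-prime(x, y) <=> gcd(x, y) = 1, as in the definition above:
function coPrime(x:number, y:number):boolean {
    return gcd(x, y) === 1;
}

console.log(coPrime(8, 15)); // true:  gcd(8, 15) = 1
console.log(coPrime(8, 12)); // false: gcd(8, 12) = 4

So far this is pretty straightforward. But what about the probability of two random integers being co-prime?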

$$\begin{equation} \forall{x,y}\in\mathbb{Z}:\Pr{\{\text{co-prime}(x,y)\}}=\,?\label{EQ:P2} \end{equation}$$

Well, what could the value of $\eqref{EQ:P2}$ be? The probability that $x$ and $y$ are not both divisible by $2$ is $3/4$, not both divisible by $3$ is $8/9$, by $5$ is $24/25$, and so on: for a prime $p$, the only bad case is that both are divisible by $p$, which happens with probability $1/p^2$. So the total probability that two random integers do not share any prime factor is then:

$$\begin{equation}\prod_{p\;\in\;\mathbb{P}}\bigg(1-\frac{1}{p^2}\bigg) =1\Bigg/\prod_{p\;\in\;\mathbb{P}}{\bigg(\frac{1}{1-p^{-2}}\bigg)}\\ =\frac{1}{\zeta(2)}=\frac{6}{\pi^2}\approx{60.79\%}\label{EQ:P3}\end{equation}$$

You can look up the proof yourself, but once having accepted this relation, let us try to visualize it:

$\Pr{\{\text{co-prime}(X=x,Y=y)\}\approx{60.82\%\pm{1.63}\%}}$

As you can see, I have plotted three sample populations against each other, and the mean of $60.82\%$ is consistent with the calculated probability of $60.79\%$. Another aspect the plot reveals is that the distribution sits quite tightly around the mean, with a standard deviation of only $1.63\%$.

Our conclusion is that when we take two random integers, we can with some confidence (larger than $50\%$) expect them to be co-prime.

#!/usr/bin/env python
#####################################################################
# 2345678901234567890123456789012345678901234567890123456789012345678

import numpy as np
from math import gcd
from matplotlib import pyplot as pp

#####################################################################

def NEXT(n):
    return np.random.random_integers(2**n)
def CO_PRIME_PCT(m, n):
    return sum([1 if gcd(NEXT(n), NEXT(n))==1 \
        else 0 for i in range(m)]) / m
def SAMPLE(size, m=1000, n=48):
    return np.fromiter(map(lambda _: CO_PRIME_PCT(m, n),
        range(size)), dtype=np.float)

s0 = SAMPLE(size=256)
s1 = SAMPLE(size=256)
s2 = SAMPLE(size=256)

mm = 0.0; std = 0.0
m0 = s0.mean(); mm += m0 / 3.0
m1 = s1.mean(); mm += m1 / 3.0
m2 = s2.mean(); mm += m2 / 3.0

std += s0.std() / 3.0
std += s1.std() / 3.0
std += s2.std() / 3.0

pp.plot(s0, s1, 'r.')
pp.plot(s1, s2, 'g.')
pp.plot(s2, s0, 'b.')

mm = float(100.0*mm)
std = float(100.0*std)

pp.legend([
    'Sample #1 vs #2', 'Sample #2 vs #3', 'Sample #3 vs #1'])
pp.title('Pr{co-prime(X=x,Y=y)} = ca. %0.2f%% ± %0.2f%%' \
    % (mm, std))

pp.grid()
pp.show()

#####################################################################

  1. algorithms based on quantum mechanical properties ↩︎

Friday, February 26, 2016

Factoring and period finding I

Once you understand Shor’s period finding, it is a piece of cake to also grasp how to factor, for example, a number $N=pq$ with two large prime factors $p,q\in\mathbb{P}$.

Today, we won’t go into the details of how that is done, but will look at a preliminary number-theoretic relation, which enables us (in combination with Shor’s period finding) to do factoring. The details are a little difficult to understand, but the general picture is quite clear:

The probability is at least $1/2$ that if $a$ is a random member of $G_{pq}$ for primes $p$ and $q$, then the order $r$ of $a$ in $G_{pq}$ satisfies both

$$\begin{equation}r\text{ is even}\label{i}\end{equation} $$

and

$$\begin{equation}a^{r/2}\not\equiv−1 \pmod{pq}\label{ii}\end{equation} $$

where $a$ to the power of its order $r$ is congruent to $1 \pmod{pq}$. Let’s compress this already mathematical statement a little more. For $a\in G_{pq}$ and $p,q\in\mathbb{P}$ it holds that:

$$\begin{equation}\Pr\{a^{r/2}\not\equiv−1\pmod{pq}\}⩾1/2\end{equation} $$

with $a^r\equiv1\pmod{pq}$, where I’ve omitted the “$r$ is even” part, since condition $\eqref{i}$ is subsumed in $\eqref{ii}$: for $a^{r/2}$ to be well-defined, $r$ must be even. Alright, let’s do some plotting:

$\Pr\{a^{r/2}\not\equiv−1\pmod{pq}\}⩾1/2$

The plot may be displayed rather small, so I’d suggest you click on it: What you’ll see is that all probabilities are greater than or equal to $1/2$ for the chosen prime pairs $p$ and $q$. Apparently, the larger $p$ and $q$, the higher the probability that $a^{r/2}\not\equiv-1\pmod{pq}$, which makes sense, since there are then a lot more numbers not congruent to $-1$ in $G_{pq}$. Above, the prime pairs $(p,q)$ are in the range $(3,3)$…$(19,19)$. When we extend the range, the probabilities get very close to $1$:

$\Pr\{a^{r/2}\not\equiv−1\pmod{pq}\}⩾1/2$

This looks fantastic! Given this relation, we can now factor a large number $N=pq$ into its prime components, the details of which will follow in my next post.
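To make the relation concrete, here is a tiny worked example (my own, not taken from the plots above): take $p=3$, $q=5$ and $a=7\in G_{15}$. The powers of $7$ modulo $15$ cycle through $7, 4, 13, 1$, so the order of $a$ is $r=4$, which is even; further:

$$7^{r/2}=7^2=49\equiv4\not\equiv-1\pmod{15}$$

so this particular $a$ satisfies both conditions.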

#!/usr/bin/env python
########################################################################

import itertools as it
import numpy as np
import random
import math
import gmpy2 as gmp

from matplotlib import pyplot as pp

########################################################################

def POWERs(a, p, q):
    fn = lambda k: a**k % (p*q)
    return map(fn, range(p*q))
def ORDERs(a, p, q, rem=1):
    f1 = lambda t: t[1] == rem
    f2 = lambda t: t[0]
    return map(f2, filter(f1, enumerate(POWERs(a, p, q))))
def ORDER(a, p, q):
    # skip the trivial k=0 power (a**0 == 1) and take the next match;
    # the order is only defined if a is invertible modulo p*q:
    return list(it.islice(ORDERs(a, p, q), 1, 2))[0] \
        if a % p != 0 and a % q != 0 else None
def OSAMPLE(p, q, sz):
    f1 = lambda _: random.randint(0, p*q - 1)
    f2 = lambda a: (a, ORDER(a, p, q))
    f3 = lambda t: t[1] is not None
    return filter(f3, map(f2, map(f1, range(sz))))
def M1(p, q, sz):
    fn = lambda t: t[1] % 2 == 0
    return sum(map(fn, OSAMPLE(p, q, sz))) / sz
def M2(p, q, sz):
    # powmod returns a value in [0, p*q), so compare against p*q - 1,
    # i.e. the representative of -1 (mod p*q):
    fn = lambda t: gmp.powmod(t[0], int(t[1]/2), p*q) != p*q - 1
    return sum(map(fn, OSAMPLE(p, q, sz))) / sz

def PRIMEs(n):
    sieve = [True] * n
    for i in range(3, int(n**0.5)+1, 2):
        if sieve[i]: sieve[i*i::2*i] = [False]*int((n-i*i-1)/(2*i)+1)
    return [i for i in range(3, n, 2) if sieve[i]]

def PSAMPLE(sz):
    return [(p1, p2) for p1 in PRIMEs(n=sz) for p2 in PRIMEs(n=sz)]

ps = PSAMPLE(sz=25)
m2 = list(map(lambda p: M2(p[0], p[1], sz=250), ps))

for i, p in enumerate(ps):
    pass ## pp.text(i, m2[i], '(%s,%s)' % p)

pp.title('M2: Pr{a^{order(a)/2} != -1 (mod pq)} for random a in G{pq}')
pp.plot(m2, 'b*')
pp.grid()
pp.show()

########################################################################

Sunday, February 14, 2016

Quantum Fourier Transformation II

Alright, let’s have a deeper look at the Quantum Fourier Transformation (QFT):

QFT: Circuit with four Qbits

As you can see, the particular QFT depicted has four input and four output Qbits, where the boxes with an $\textbf{H}$ inside are Hadamard gates and the circles are 2-Qbit unitary operators. How do we formalize a QFT? Here you go:

$$\begin{equation}\textbf{U}_{FT}{|x⟩}_n=\frac{1}{\sqrt{2^n}}\times\sum_{y=0}^{2^n-1} e^{2{\pi}ixy/2^n}{|y⟩}_n\end{equation} $$

Essentially, it looks like a bunch of roots of unity summed over $y$. There is further another very beautiful expression for $\textbf{U}_{FT}$, which is based on the $\textbf{H}^{⊗n}$ and $\mathcal{Z}^x$ operators:

$$\begin{equation}\textbf{U}_{FT}{|x⟩}_n={\mathcal{Z}^x}\,{\textbf{H}^{⊗n}}{|0⟩}_n\end{equation} $$

where $\mathcal{Z}{|y⟩}_n=e^{2{\pi}iy/2^n}{|y⟩}_n$ and $\textbf{H}^{⊗n}{|0⟩}_n=\frac{1}{\sqrt{2^n}}\sum_{y=0}^{2^n-1}{|y⟩}_n$. Further, the 2-Qbit unitary operators $\textbf{V}_{ij}$ look like:

$$\begin{equation} \textbf{V}_{ij}=e^{{\pi}i\textbf{n}_i\textbf{n}_j/2^{|i-j|}} \end{equation}$$

where $|i-j|=k$ corresponds to the integer you see in the circles above, and $\textbf{n}$ is the number operator $\bigl[\begin{smallmatrix}0&0 \\ 0&1\end{smallmatrix}\bigr]$. Notice how the wires end up upside down, which is another particularity of the $\textbf{U}_{FT}$ transformation. It’s admittedly quite abstract, and it is difficult to imagine why and how a QFT could be useful. But if you remember your regular Fourier transformation, you will recall that an FT is essentially a mapping to the frequency basis, allowing us to immediately read off certain periods of a given function.

This is exactly what a QFT delivers too! Just the quantum way, i.e. we will not be able to see all frequencies at once, but we’ll be able to query for the most significant ones (if they have a high probability). So the trick is to prepare our input and output registers such that a subsequent QFT will hand over to us just what we have been looking for!
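As a quick sanity check of the definition above (an aside of mine): for a single Qbit, i.e. $n=1$, the QFT reduces to the Hadamard gate:

$$\textbf{U}_{FT}{|x⟩}_1=\frac{1}{\sqrt{2}}\sum_{y=0}^{1}e^{2{\pi}ixy/2}{|y⟩}_1=\frac{1}{\sqrt{2}}\big({|0⟩}+(-1)^x{|1⟩}\big)=\textbf{H}{|x⟩}_1$$

since $e^{{\pi}ix}=(-1)^x$ for $x\in\{0,1\}$.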

Friday, February 12, 2016

Quantum Fourier Transformation I

So, you want to learn about the Quantum Fourier Transformation (QFT), but have no idea about either Quantum Computer Science (QCS) or so-called Qbits? Well, that’s the reason I’m writing these posts: to explain to you what the QFT is about!

But let me warn you: It took me about a year to get my head around even the basics of QCS, and I won’t tell you each and every detail in the following posts, which I’m hoping to produce over the coming weeks and months! However, I’ll try to provide enough context so you get the big picture and can go digging yourself for more entanglement with reality! Alright, let’s throw you into the cold water! This is the QFT:

QFT: Quantum Fourier Transformation

Well, it’s a circuit diagram of a QFT with four input and output Qbits. Just wait a minute: “What is a Qbit?” I hear you screaming. I did not explain that yet, did I? And “what are these funny brackets around $|x_i⟩$ and $|y_i⟩$?” And “what is $\textbf{H}$?” Further, “why do you have these numbers in those circles; what do they do?”

Just slow down, and let me explain: A lonely Qbit is a magical box, since sometimes it’s zero and sometimes one! Often you have no idea what it will tell you, but sometimes you can glean an answer by judging its mood. It’s an entirely different beast compared to the classical bit (or Cbit), but if you treat it nicely, it will provide you with the answers you have been looking for…

A Qbit is like the oracle of Delphi: offer the correct kind of gifts, and ask whatever your heart desires. If you time your question precisely, you will be rewarded with a revelation. But be careful: Often the oracle provides you with correct answers, but sometimes it just produces junk! So you need to learn to distinguish the true from the not-so-true statements.

Another problem with the answers might be that you asked the wrong question: Well, I’d advise you to repeat it to clarify matters, since otherwise, if you set out to destroy an empire, you may very well have doomed yourself!

Alright, after these rather confusing analogies, let’s get precise: A Qbit is essentially a vector: A zero ket $|0⟩$ is the column vector $\begin{bmatrix}1,0\end{bmatrix}^T$, and the corresponding zero bra $⟨0|$ is the row vector $\begin{bmatrix}1,0\end{bmatrix}$. Further, a one ket $|1⟩$ is the column vector $\begin{bmatrix}0,1\end{bmatrix}^T$, while a one bra $⟨1|$ is the row vector $\begin{bmatrix}0,1\end{bmatrix}$. It’s simple! To be more precise, $|0⟩$ and $|1⟩$ are the pure states a Qbit can be associated with, whereas a general Qbit looks like:

$$\begin{equation} |ψ⟩=α|0⟩+β|1⟩=\begin{bmatrix}α\\β\end{bmatrix} \end{equation}$$

where $α$ and $β$ are complex numbers. So, I’ve told you that the Qbit $|ψ⟩$ sometimes likes to be a $|0⟩$ and sometimes a $|1⟩$: There is not much you can do; it just does whatever it wants to do! But it’s not completely (uniformly) random – no, there is a pattern! Namely, depending on the probabilities $|α|^2$ and $|β|^2$ with:

$$|α|^2+|β|^2=1$$

it will tell you either $|0⟩$ or $|1⟩$. So, if we can adjust these probabilities, we can control and influence the Qbit to run useful calculations. The subject of a Qbit should be crystal clear by now. Let’s investigate $\textbf{H}$, the so-called Hadamard operator: I like to call it a washing machine, where you throw in a $|0⟩$ or a $|1⟩$ and you get the perfect mix! The result is neither a $|0⟩$ nor a $|1⟩$, but somehow both.

So, in crisp mathematical language, $\textbf{H}$ is a matrix (operator) that, when multiplied with a column vector, produces another column vector, namely:

$$\begin{equation} \textbf{H}|0⟩=\frac{1}{\sqrt2}×(|0⟩+|1⟩)=\frac{1}{\sqrt2}×\begin{bmatrix}1\\1\end{bmatrix} \end{equation}$$

and

$$\begin{equation} \textbf{H}|1⟩=\frac{1}{\sqrt2}×(|0⟩-|1⟩)=\frac{1}{\sqrt2}×\begin{bmatrix}+1\\-1\end{bmatrix} \end{equation}$$

Therefore:

$$\begin{equation} \textbf{H}=\frac{1}{\sqrt2}×\begin{bmatrix}1&+1\\1&-1\end{bmatrix} \end{equation}$$
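For example (a small aside of mine, connecting $\textbf{H}$ back to the probabilities above): the state $\textbf{H}|0⟩$ has $α=β=1/\sqrt2$, so a measurement yields $|0⟩$ or $|1⟩$ each with probability

$$|α|^2=|β|^2=\bigg|\frac{1}{\sqrt2}\bigg|^2=\frac{1}{2}$$

which is a perfect, unbiased coin flip.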

So far we’ve not encountered anything complicated: Just a bunch of definitions, combined with some probabilities to model the inherent uncertainty of the quantum world.

Now the next question: what was it again? Ah yes: “In the QFT circuit, what are these funny dots with the numbers?” Well, the answer will be revealed in my next post!

Till then, do some multiplications with $\textbf{H}$ and figure out its properties, alright? For example, did you know that $\textbf{H}$ is unitary – figure out what that means and prove it! Further, $\textbf{H}$ is also its own inverse, i.e. $\textbf{H}^2=\textbf{1}$, where the latter is the identity matrix: again, prove it, will you?