
Saturday, November 26, 2022

Get Rich Fast

I wrote a text as a comment on the episode of the Logbuch Netzpolitik podcast about the FTX debacle but could not post it to the comment section (because that appears to be disabled). So, in order not to let it go to waste, I post it here (originally in German):


1. Leverage: If, say, I believe that the Apple share price will keep rising, I can buy an Apple share to profit from that. One share currently costs about 142 euros; if I buy one and the price rises to 150 euros, I have of course made 8 euros of profit. Even better, of course, if I buy 100, then I make 800 euros of profit. The only obstacle is that I may not have 14200 euros available for that. But no problem, I simply take out a loan over the price of 99 shares (that is, 14058 euros). For simplicity we ignore the fact that I have to pay interest on the loan; the interest only makes the whole game less attractive for me. So I buy 100 shares, 99 of them on credit. Once the price is at 150, I sell them again, pay off the loan and go home with 800 euros more. I have thus multiplied the price gain by a hundred.


The annoying part is that I also multiply the risk of loss by a hundred: If the share price falls, contrary to my optimistic expectations, it can quickly happen that selling the shares no longer raises enough money to pay off the loan. That happens as soon as the 100 shares are worth less than the loan, that is, when the share price falls below 140.58 euros. If I sell my shares at that moment, I can just barely pay my debts, but my equity, which was the one share I bought with my own money, is completely gone. If the price has fallen even further, then by speculating on credit I can lose more than all my money: I own nothing anymore and still have not paid off my debts. The bank that gave me the loan is of course also afraid of that, so it forces me to sell the shares at the latest when the price has fallen to 140.58, so that it gets its loan back in any case. Daniel calls this "glattstellen" (closing out the position).


2. I find that annoying, of course, because the price falls by those 1.42 euros much more easily than it rises by 8 euros. To prevent this, I can deposit other things of value with the bank, e.g. my iPhone, which is still worth 100 euros. Then the bank only forces me to sell my shares once the value of the shares plus the 100 euros for the iPhone falls below the value of the loan; after all, it could still sell the iPhone to get its money back. If I have no iPhone to deposit, I have to deposit something else of value with the bank (collateral).
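
To make these numbers concrete, here is a minimal sketch (a toy calculation in Perl with the figures from this example; the helper pnl and all output lines are purely illustrative, not any real trading maths):

#!/usr/bin/perl -w
use strict;

# Toy numbers from the example above, all in euros
my $price      = 142;           # current price of one Apple share
my $shares     = 100;           # shares bought in total
my $loan       = 99 * $price;   # 99 of them bought on credit: 14058
my $collateral = 100;           # the iPhone deposited with the bank

# Profit or loss at a later price: the one-share gain times the leverage
sub pnl { my ($later) = @_; return $shares * ($later - $price); }

printf "Gain if the price rises to 150: %+d euros\n", pnl(150);               # +800
# The bank closes the position once shares (plus collateral) no longer cover the loan
printf "Margin call without collateral below %.2f euros\n", $loan / $shares;  # 140.58
printf "With the iPhone as collateral only below %.2f euros\n",
       ($loan - $collateral) / $shares;                                       # 139.58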


3. This is where the tokens come into play. I can invent 1000 crypto tokens (whether or not they are tied to the ownership of computer-generated cartoons of Tim and Linus does not matter). Since I have only invented them, I am not any further along yet; as such they have no value. I can try to sell them, but I will only get laughed at. This is where my second company, the investment fund, comes in: With it, I buy 100 of the tokens from myself at a price of 30 euros apiece. If it is not obvious that I bought the things from myself (possibly via a straw man or woman), it looks as if the tokens were seriously being traded at a value of 30 euros. In addition, I sell another 100 of them, also for 30 euros each, to the customers of my fund, with the promise that holders of the coins get a discount on my fund's fees. At the latest now, the value of 30 euros per token is established. Of the original 1000 I still hold 800. Now I can claim to own assets worth 24000 euros, because that is 800 times 30 euros. I have created these assets essentially out of nothing, since the assumption that I could also find real buyers for the other 800 at this price is nonsense.


If I disguise all of this well enough, though, someone might believe that I am really sitting on assets worth 24000 euros. In particular, the bank from steps 1 and 2 might believe it, and then I can deposit these tokens as collateral for the loan and take out even larger loans to buy Apple shares with.


The whole thing only blows up when the share price falls so far that the bank insists that the loan be repaid. Then I have to sell not only the shares and the iPhone but also the remaining tokens. And that is when I am left standing there without my trousers, because it becomes clear that of course nobody wants the tokens I simply made up, certainly not for 30 euros. In Daniel's words, the "liquidity" is then missing.


That is, as far as I understand it, what happened; not with Apple shares and iPhones, of course, but in principle. And the point of doing deals in a circle with yourself is precisely to artificially drive up the apparent price of something you still hold more of. The flaw in all of this is that it is difficult to judge the value of something that is not really traded, or whose value rests only on other assumed values, where the assumptions about those values can change very quickly as soon as somebody says "show me!" and there are no real assets (traditionally in the form of factories, know-how etc.) behind it.

Tuesday, October 04, 2022

No action at a distance, spooky or not

On the occasion of the announcement of the Nobel prize for Aspect, Clauser and Zeilinger for the experimental verification that quantum theory violates Bell's inequality, there seems to be a strong urge in popular explanations to state that this proves that quantum theory is non-local, that entanglement is somehow a strong bond between quantum systems, and people quote Einstein on the "spooky action at a distance".

But it should be clear (and I have talked about this here before) that this is not a necessary consequence of the Bell inequality violation. There is a way to keep locality in quantum theory (at the price of "realism" in a technical sense, as we will see below). And that is not just a convenience: In fact, quantum field theory (and the whole idea of a field mediating interactions between distant entities like the earth and the sun) is built on the idea of locality. This is most strongly emphasised in the Haag-Kastler approach (algebraic quantum field theory), where pretty much everything is encoded in the algebras of observables that can be measured in local regions and in how these algebras fit into each other. So throwing out locality with the bathwater removes the very basis of QFT. And I am convinced that this is the reason why there is no good version of QFT in the Bohmian approach (which famously sacrifices locality to preserve realism, an assumption some of its proponents do not even acknowledge as one, since it is already there in the classical theory and it takes some abstraction to realise that it is an assumption and not god-given).

But let's get technical. To be specific, I will use the CHSH version of the Bell inequality (but you could as well use the original one or the GHZ version as Coleman does). This is about particles that have two different properties, here termed A and B. These can be measured and the outcome of this measurement can be either +1 or -1. An example could be spin 1/2 particles and A and B representing twice the components of the spin in either the x or the y direction respectively.

Now, we have two such particles with these properties, A and B for particle 1 and A' and B' for particle 2. CHSH instructs you to look at the expectation value of the combined observable

\[A (A'+B') + B (A'-B').\]

Let's first do the classical analysis: We don't know the two properties of particle 2, the primed variables. But we know they are either equal or different. In case they are equal, the absolute value of A'+B' is 2 while A'-B'=0. If they are different, we have A'+B'=0 while the absolute value of A'-B' is 2. In either case, only one of the two terms contributes, and in absolute value it is 2 times an unprimed observable of particle 1: A if the two values of particle 2 are equal and B if they are different. No matter which possibility is realised, the absolute value of this observable is always 2.
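
In formulas, the case distinction is simply

\[A (A'+B') + B (A'-B') = \begin{cases} \pm 2A & \text{if } A'=B' \\ \pm 2B & \text{if } A'=-B', \end{cases}\]

and since A and B themselves can only be +1 or -1, the right-hand side is +2 or -2 in every case.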

If you allow for probabilistic outcomes of the measurements, you can convince yourself that you can also realise smaller absolute values than 2 but never larger ones. So much for the classical analysis.

In quantum theory, you can, however, write down an entangled state of the two particle system (in the spin 1/2 case specifically) where this expectation value is 2 times the square root of 2, so larger than all the possible classical values. But didn't we just prove it cannot be larger than 2?
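
For concreteness, here is one standard choice of state and observables that reaches this value (a sketch; I use spin components along x and z instead of the x/y pair mentioned above, which makes no difference after a rotation):

\[|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|{\uparrow\uparrow}\rangle + |{\downarrow\downarrow}\rangle\big), \qquad A=\sigma_x,\quad B=\sigma_z,\quad A'=\tfrac{\sigma_x+\sigma_z}{\sqrt{2}},\quad B'=\tfrac{\sigma_x-\sigma_z}{\sqrt{2}}.\]

In this state, measuring the same spin component on both particles (x on both or z on both) gives expectation value 1 while the mixed products have expectation value 0, so

\[\big\langle A(A'+B') + B(A'-B')\big\rangle = \tfrac{1+0}{\sqrt{2}} + \tfrac{1-0}{\sqrt{2}} + \tfrac{0+1}{\sqrt{2}} - \tfrac{0-1}{\sqrt{2}} = 2\sqrt{2}.\]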

If you are ready to give up locality you can now say that there is a non-local interaction that tells particle 2 if we measure A or B on particle one and by this adjust its value that is measured at the site of particle two. This is, I presume, what the Bohmians would argue (even though I have never seen a version of this experiment spelled out in detail in the Bohmian setting with a full analysis of the particles following the guiding equation).

But as I said above, I would rather give up realism: In the formula above and in the classical argument, we say things like "A' and B' are either the same or opposite". Note, however, that in the case of spins, you cannot measure the spin both in the x and in the y direction on the same particle, because they do not commute and there is the uncertainty relation. You can measure either of them, but once you have decided you cannot measure the other (in the same round of the experiment). To give up realism simply means that you don't try to assign a value to an observable that you cannot measure because it is not compatible with what you actually measure. If you measure the spin in the x direction, it is not the case that the spin in the y direction is either +1/2 or -1/2 and you just don't know which because you did not measure it; in the non-realistic theory you must not assign any value to it at all once you have measured the x spin. (Of course you can still measure A+B, but that is the spin in a diagonal direction and then you measure neither the x nor the y spin.)

You just have to refuse to make statements like "the spin in the x and y directions is either the same or opposite" as they involve things that cannot all be measured, so such a statement would be non-observable anyway. And without these types of statements, the "proof" of the inequality goes down the drain, and this is how quantum theory can avoid it. Just don't talk about things you cannot measure in principle (metaphysical statements if you like) and you can keep our beloved locality.

Thursday, July 21, 2022

Giving the Playground Express a Spin

The latest addition to our single chip computer zoo is Adafruit's Circuit Playground Express. It is sold for about $30 and comes with a lot of GPIO pins, 10 RGB LEDs, a small speaker, lots of sensors (including acceleration, temperature, IR, ...) and 1.5 MB of flash ROM. The excuse for buying it is that I might interest the kids in it (it is better equipped on board than an Arduino while being less complex than a Raspberry Pi).


As the ten LEDs are arranged around the circular shape, I thought a natural idea for a first project using the accelerometer would be to simulate a ball going around the circumference.



The video does not really capture the visual impression due to overexposure of the lit LEDs.

The Circuit Playground Express comes with a graphical programming language (like Scratch) and an embedded version of Python. But you can also program it directly in C with the Arduino IDE, which is what I used since that is what I am familiar with.

Here is the source code (as always with GPL 2.0)
// A first project simulating a ball rolling around the Playground Express

#include <Adafruit_CircuitPlayground.h>

uint8_t pixeln = 0;
float phi = 0.0;
float phid = 0.10;

void setup() {
  CircuitPlayground.begin();
  CircuitPlayground.speaker.enable(1);
  Serial.begin(9600);   // for the debug output in loop()
}

int phi2pix(float alpha) {
   // Map an angle in radians to the index (0..9) of the nearest LED
   alpha *= 180.0 / 3.14159265;
   alpha += 60.0;
   if (alpha < 0.0) 
      alpha += 360.0;
    if (alpha > 360.0)
      alpha -= 360.0;
      
    return (int) (alpha/36.0);
}

void loop() {
    static uint8_t lastpix = 0;
    // Acceleration in the plane of the board
    float ax = CircuitPlayground.motionX();
    float ay = CircuitPlayground.motionY();
    // The tangential component of the acceleration changes the angular velocity phid;
    // the factor slightly below 1 acts as friction so the ball eventually settles
    phid += 0.001 * (cos(phi) * ay - sin(phi) * ax);
    phi += phid;
    phid *= 0.997;
    Serial.print(phi);

    while (phi < 0.0) 
      phi += 2.0 * 3.14159265;

    while (phi > 2.0 * 3.14159265)
      phi -= 2.0 * 3.14159265;


    pixeln = phi2pix(phi);
 
    if (pixeln != lastpix) {
      if (CircuitPlayground.slideSwitch())
        CircuitPlayground.playTone(2000, 5);   // short click when the ball moves to the next LED; 2000 Hz is an assumed value
      lastpix = pixeln;
    }
    CircuitPlayground.clearPixels(); 
    CircuitPlayground.setPixelColor(pixeln, CircuitPlayground.colorWheel(25 * pixeln));
    delay(0);
}

Saturday, July 09, 2022

Voting systems, once more

 Over the last few days, I have been involved in some heated Twitter discussions around a possible reform of the voting system for the German parliament. Those have sharpened my understanding of one or two things and that's why I think it's worthwhile writing a blog post about it.

The root of the problem is that the system currently in use tries to optimise two goals which are not necessarily compatible: proportional representation (the number of seats for a party should be proportional to the votes received) and local representation (each constituency being represented by at least one MP). If you only wanted to optimise the first, you would not have constituencies but would collect all votes in one big bucket and assign seats accordingly to the competing parties; if you only wanted to optimise the second goal, you would use a first past the post (FPTP) voting system like in the UK or the US.

In a nutshell (glossing over some additional complications), the current system is as follows: We start by assuming there are twice as many seats in parliament as there are constituencies. Each voter has two different votes. The first is an FPTP vote that determines a local candidate who will definitely get a seat in parliament. The second vote is the proportional vote that determines the percentage of seats for the parties. The parties then send further MPs to fill their allocated share, but the winners of the constituencies are counted as well, and the parties only "fill up" the remaining seats from their party lists. So far so good, you have achieved both goals: There is one winning MP from each constituency and the parties have seats proportional to the number of (second) votes. Great.

Well, except if a party wins more constituencies than it is assigned seats according to the proportional vote. This was not so much of a problem some decades ago when there were two major parties (conservative and social democrat) and one or two smaller ones. The two major parties would somehow share the constituency wins between them, but since those make up only half of the total number of seats, their wins would not be many more than their share of total seats (which would typically be well above 30% or even 40%).

The voting system's solution to this problem is to increase the total number of seats to the minimal total number such that each party's share of total seats according to the proportional vote is at least as high as the number of constituencies it won.
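
To illustrate just this mechanism, here is a toy sketch (it ignores the 5% threshold, the state lists and all other fine print of the actual law; the party names and vote numbers are made up, and I use a Sainte-Laguë style highest-averages allocation for the proportional step):

#!/usr/bin/perl -w
use strict;

# Made-up second votes and constituency wins; party C plays the role of a
# regional party winning many districts on a small nationwide vote share
my %votes = (A => 12_000_000, B => 9_000_000, C => 1_500_000, D => 7_500_000);
my %won   = (A => 180,        B => 90,        C => 45,        D => 0);

# Highest averages: hand out $size seats one by one
sub allocate {
  my ($size) = @_;
  my %seats = map { ($_ => 0) } keys %votes;
  for (1 .. $size) {
    my ($next) = sort { $votes{$b} / (2 * $seats{$b} + 1) <=> $votes{$a} / (2 * $seats{$a} + 1) } keys %votes;
    $seats{$next}++;
  }
  return %seats;
}

# Grow the house until every party's proportional share covers its constituency wins
my $size  = 598;
my %seats = allocate($size);
while (grep { $seats{$_} < $won{$_} } keys %seats) {
  $size++;
  %seats = allocate($size);
}
print "House grows from 598 to $size seats.\n";
printf "%s: %d seats (won %d districts)\n", $_, $seats{$_}, $won{$_} for sort keys %seats;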

But these days, the two formerly big parties have lost a lot of their support (winning only 20-25% in the last election) and four additional parties are also represented, not getting many fewer votes than the two formerly big ones. In the constituencies it is not rare to win your FPTP seat with less than 30% of the local votes, and in the last election as little as 18% was enough to be the winner of a seat. This led to the parliament having 736 seats compared to the nominal size of 598, and there were polls not long before that election which suggested 800+ seats or possibly even over 1000.

A particular case is the CSU, the conservative party here in Bavaria (which is nominally a different party from the CDU, the conservative party in the rest of Germany; in Bavaria the CDU is not competing while in the rest of the country the CSU is not on the ballot): Still being the relative winners here, they won all but one of the constituencies but got only about 30% of the votes in Bavaria, which translates to slightly above 5% of all votes in Germany.

According to a general sentiment, 700+ seats is far too big (for a functioning parliament and also cost-wise), so the system should be reformed. But people differ on how to reform it. A mathematically simple solution would be to increase the size of the constituencies to decrease their total number. Then the total number of constituency winners to be matched by proportional votes would be smaller. But that solution is not very popular, the main argument being that those constituencies would be too big for reasonable contact between the local MPs and their constituents. Another likely reason nobody really likes to talk about is that redrawing the district lines that much would probably cause a lot of infighting in all the parties, because the candidatures would have to be completely redistributed, with many established candidates losing their job. So that is off the table; after all, it's the parties in parliament which decide about the voting system by simple majority (within boundary conditions set by the relatively vague rules of the constitution).

There is now a proposal by the governing social democrat-green-liberal coalition. The main idea is to weaken the FPTP component in the constituencies while maintaining the proportional vote: Winning a constituency no longer guarantees you a seat in parliament. If your party wins more constituencies than its share of total seats according to the proportional votes, those constituency seats where the party's relative majority was smallest would be allocated to the runner-up (as that candidate's party still has seats to fill according to the proportional vote). This breaks FPTP, but keeps the proportional representation as well as the principle of each constituency sending at least one MP, while fixing the total number of seats in parliament to the magic 598.
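
A sketch of that reallocation for a single party (again with made-up numbers; the quota, the district shares and the variable names are all invented for illustration, and the real proposal contains more detail than this):

#!/usr/bin/perl -w
use strict;

# The party is entitled to 43 seats by proportional vote but won 47 districts,
# so the 4 wins with the smallest relative majority are not honoured and those
# seats go to the runner-up there
my $quota = 43;
my %share;                                     # winning candidate's vote share per district
$share{"district $_"} = 20 + rand(25) for 1 .. 47;

my @kept = (sort { $share{$b} <=> $share{$a} } keys %share)[0 .. $quota - 1];
my %kept = map { ($_ => 1) } @kept;
my @lost = grep { !$kept{$_} } keys %share;

printf "%d constituency winners keep their seat, %d seats go to the runner-up.\n",
       scalar(@kept), scalar(@lost);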

The conservatives in opposition do not like this idea (being traditionally the relatively strongest parties and thus tending to win more constituencies). You can calculate how many seats each party would get assuming the last election's votes: All parties would have to give up about 18% of their seats except for the CSU, the Bavarian conservatives, who would lose about 25%, since some fine print I have not explained so far favours parties winning relatively many constituencies directly.

The conservatives also have a proposal. They are willing to give up proportionality in favour of maintaining FPTP and fixing the number of seats at 598: They propose to assign 299 of the seats according to FPTP to the constituency winners and to distribute only the remaining 299 seats proportionally. So they don't want to include the constituency winners in the proportional calculation.

This is the starting point of the Twitter discussions, with both sides accusing the other of making an undemocratic proposal. One side says that a parliament whose majorities do not necessarily (and, on the current numbers, probably do not) represent majorities in the population is not democratic, while the other side argues that denying a seat to a candidate who won his/her constituency (even by a small relative majority) is not democratic.

Of course it is a total coincidence that each side is arguing for the system that would be better for itself (the governing coalition's proposal hurting everybody almost equally, only the CSU a bit more, while the conservative proposal actually benefits the conservatives quite a bit while in particular hurting the smaller parties that win few constituencies or none at all).

[Two charts comparing the parties' projected seat numbers under the two proposals]

(Ampel being the governing coalition, Union being the conservative parties).

Of course, both proposals are in a mathematical sense "democratic", each in its own logic emphasising different legitimate aspects (accurate proportional representation vs accurate representation of local winners).

Beyond the understandable preference for a system that favours one's own political side, I think a more honest discussion would be about which of these legitimate aspects is actually more relevant for the political discourse. If a lot of debates ran along geographic lines, north against south, east against west, or even rural vs urban, then yes, it is very important that the local entities are as accurately represented as possible to get the outcomes of these debates right. That would favour FPTP as making sure local communities are most faithfully represented.

If however typical debates are along other fault lines, for example progressive vs conservative or pro business vs pro social wealth redistribution, then we should make sure the views of the population are optimally represented. And that would be in favour of strict proportional representation.

Guess what I think is actually the case.

All that comes in addition to a political tradition in which "calling your local MP or representative" is much less common than in anglo-saxon countries, and to studies showing that even shortly after a general election less than a quarter of voters can name at least two of their constituency's candidates, which casts serious doubt on the idea that the local vote is an informed decision about persons rather than one along party lines (where parties are only needed to make sure there is one candidate per party in the FPTP system, while being the central entity for the proportional vote).

PS: The governing coalition's proposal has some ambiguities as well (as I demonstrate here --- in German).

Monday, January 17, 2022

You got me wordle!

For a few days now, I have been following the hype and playing Wordle. I think I got lucky in the first days, but I had already put in some strategy, namely starting with words for which the possible results are most telling. I was thinking that getting the vowels right early is a good idea, so I tend to start with "HOUSE" (containing three vowels and an S), possibly followed by "FAINT" (containing the remaining vowels plus the important N and T).

With this start it never took me more than four guesses so far and twice I managed to find the solution in three guesses.


Of course, over time you start thinking how to optimise this. I knew that Donald Knuth had written a paper solving the original Mastermind showing that five moves are sufficient to always find the answer. So today, I sat down and wrote a perl script to help. It does not do the full minimax (but that shouldn't be too hard from where I am) but at least tells you which of your possible next guesses leaves the best worst case in terms of number of remaining words after knowing the result of your guess. 

In that metric, it turns out "ARISE" is the optimal first guess (leaving at most 168 out of the possible 2314 words on the list after knowing the result). In any case, here is the source:

NB: Since I started playing, there has been no word that contained the same letter more than once, so I am not 100% sure how those cases are handled: What colour do the two 'E's in "AGREE" receive if the solution is "AISLE" (in Mastermind logic, the second would be green and the other grey, not yellow), and what if the solution were "EARLY"? So my script probably does not handle those cases correctly (for EARLY it would colour both yellow); a sketch of a scoring function that does handle repeated letters is appended after the script.

#!/usr/local/bin/perl -w

use strict;

# Load the word list of possible answers
my @words = ();
open (IN, "answers.txt") || die "Cannot open answers: $!\n";
while(<IN>) {
  chomp;
  push @words, uc($_);
}
close IN;

my %letters = ();
my @appears = ();

# Positions at which letter $l can still appear
foreach my $c (0..25)  {
  my $l = chr(65 + $c);
  $letters{$l} = [1,1,1,1,1];
}


# Running without an initial guess shows that ARISE is the best first guess as it leaves at most 168 words.

&filter("ARISE", &bewerten("ARISE", "SOLAR"));
#&filter("SMART", &bewerten("SMART", "SOLAR"));

# Find the remaining words
my @remain = @words;
# Only keep words containing the letters in @appears
foreach my $a(@appears) {
  @remain = grep {/$a/} @remain;
}
my $re = &makeregex;

# Apply positional constraints
@remain = grep {/$re/} @remain;


my $min = @remain;
my $best = '';

# Loop over all possible guesses and targets and count how often each potential result appears for a guess
foreach my $g(@remain) {
  my %results = ();
  foreach my $t(@remain) {
    ++$results{&bewerten($g, $t)}
  }
  my $max = 0;
  foreach my $res(keys %results) {
    $max = $results{$res} if $results{$res} > $max;
  }
  #print "$g leaves at most $max.\n";
  if ($min > $max) {
    $min = $max;
    $best = $g;
  }
}

print "Best guess: $best leaves at most $min.\n";

# Assemble a regex for the positional information
sub makeregex {
  my $rem = '';
  foreach my $p (0..4) {
    $rem .= '[';
    foreach my $l (sort keys %letters) {
      $rem .= $l if $letters{$l}->[$p];
    }
    $rem .= ']';
  }
  return $rem;
}

# Find new constraints arising from the result of a guess
sub filter {
  my ($guess, $result) = @_;

  my @a = split //, $result;
  my @w = split //, uc($guess);
  foreach my $p (0..4) {
    my $l = $w[$p];
    if ($a[$p] == 0) {
      $letters{$l} = [0,0,0,0,0];
    } elsif ($a[$p] == 1) {
      &setletter($l, $p, 0);
      push @appears, $l;
    } else {
      foreach my $o (sort keys %letters) {
	&setletter($o, $p, 0);
      }
      &setletter($l, $p, 1);
    }
  }
}

# Update the positional information for letter $l at position $p with value $v
sub setletter {
  my ($l, $p, $v) = @_;
  my @a = @{$letters{$l}};
  $a[$p] = $v;
  $letters{$l} = \@a;
}

# Find the result for $guess given the $target
sub bewerten {
  my ($guess, $target) = @_;
  my @g = split //, $guess;
  my @t = split //, $target;

  my @result = (0,0,0,0,0);
  foreach my $p(0..4) {
    if($g[$p] eq $t[$p]) {
      $result[$p] = 2;
      $t[$p] = '';
      $g[$p] = 'x';
    }
  }
  $target = join('', @t);
  foreach my $p(0..4) {
    if($target =~ /$g[$p]/) {
      $result[$p] = 1;
    }
  }
  return join('', @result);
}
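
PS: Regarding the NB above, a scoring function that also handles repeated letters could look like the following sketch (assuming the usual Wordle convention: exact matches are scored first, then yellows are handed out only as long as unmatched target letters remain; the name bewerten_dup is mine):

# Score $guess against $target, handling repeated letters
sub bewerten_dup {
  my ($guess, $target) = @_;
  my @g = split //, uc($guess);
  my @t = split //, uc($target);

  my @result = (0, 0, 0, 0, 0);
  my %left = ();   # unmatched target letters and how often they are still available

  # First pass: exact matches (2 = green)
  foreach my $p (0..4) {
    if ($g[$p] eq $t[$p]) {
      $result[$p] = 2;
    } else {
      $left{$t[$p]}++;
    }
  }
  # Second pass: right letter, wrong position (1 = yellow), consuming the counts
  foreach my $p (0..4) {
    next if $result[$p] == 2;
    if ($left{$g[$p]}) {
      $result[$p] = 1;
      $left{$g[$p]}--;
    }
  }
  return join('', @result);
}

With this, AGREE against AISLE gives 20002 (first E grey, second E green) and AGREE against EARLY gives 10210 (one E yellow, the other grey).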