## Sunday, August 2, 2020

### Digital circuits in two-dimensional time

Time appears to be one-dimensional. But does it have to be that way?

For example, here is a simulation of a ball bouncing in two dimensions of time, `tx` and `ty`:

The ball's velocity is an orientation (tangent plane or normal vector) in `(tx, ty, h)` space, and its acceleration (from gravity) is the local curvature of the surface.

But suppose you built a digital circuit in a universe with two time dimensions. Specifically, a synchronous digital circuit, whose discrete state `S` is a function of two discrete time variables, `ix` and `iy`:

State changes are driven by a pair of two-dimensional clocks:

As the system crosses over a clock edge, it transitions according to a pair of rules:

`S(ix, iy) = fx(S(ix-1, iy))`
`S(ix, iy) = fy(S(ix, iy-1))`

But wait! This creates a potential inconsistency:

Such state transitions only avoid a race condition at the intersection of rising edges if `fx · fy = fy · fx`.

Well, that's possible. For example, the following state transitions satisfy the above consistency criterion:

| `S` | `fx(S)` | `fy(S)` |
|-----|---------|---------|
| A   | B       | C       |
| B   | A       | D       |
| C   | D       | A       |
| D   | C       | B       |
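A quick check (in Python, since this post has no code of its own) that the table above satisfies the consistency criterion, and that interleavings therefore collapse:

```python
# fx and fy from the table: each is a permutation of the states {A, B, C, D}.
fx = {'A': 'B', 'B': 'A', 'C': 'D', 'D': 'C'}
fy = {'A': 'C', 'B': 'D', 'C': 'A', 'D': 'B'}

# The consistency criterion fx . fy = fy . fx, checked at every state:
for s in 'ABCD':
    assert fx[fy[s]] == fy[fx[s]]

# Because they commute, any interleaving collapses to "all fx, then all fy":
# e.g. applying fx, then fy, then fx to A equals applying fy, then fx, then fx.
assert fx[fy[fx['A']]] == fx[fx[fy['A']]]
print("fx and fy commute")
```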

The problem is, if `fx · fy = fy · fx`, then any sequence

`fx · ... · fy · ... · fy · ... · fx`

is just

`fx · ... · fx (n times) · fy · ... · fy (m times)`.

That is, there is no interaction between the two time dimensions. The system is, for all intents and purposes, two independent systems, each with one-dimensional time. And, if you lived in such a universe, you would never know, because each version of "you" would have no way of knowing about the other.

And so it would be just like the world we know.

## Sunday, July 23, 2017

### redo-signals: Breaking cycles

Consider a slider/text-field:

When the user edits one, the other should update to match:

Can we express this in FRP-style?

```
val textField = new JTextField("50", 6)
val slider = new JSlider
val win = new JFrame("Test")
win.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)
win.setLayout(new FlowLayout)
win.add(textField)
win.add(slider)
win.pack()
win.setLocationRelativeTo(null)
win.setVisible(true)

textField.text.foreachDelayed { text => implicit u =>
  slider.value.updateLater(text map { text =>
    Try(text.toInt) match {
      case Failure(_) => slider.value.now
      case Success(value) => value
    }
  })
} (slider.value.redoObserving)

slider.value.foreachDelayed { value => implicit u =>
  textField.text.updateLater(value map (_.toString))
} (textField.text.redoObserving)
```

It turns out we can. But the code looks cyclical... how does it not produce an infinite loop?

`foreachDelayed` comes to the rescue. It runs on the invalidation sweep. A signal can only be invalidated once before it is recalculated. So, the invalidation sweep can never produce an infinite loop; it simply traverses the graph of signals and invalidates each one it hits.
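A minimal sketch of that idea in Python (my own toy model, not the redo-signals implementation): signals form a possibly-cyclic graph, the invalidation sweep marks each signal at most once, and the execution sweep then runs listeners.

```python
# A toy, possibly-cyclic signal graph: each signal knows which signals
# it feeds and which listeners to run on the execution sweep.
class Signal:
    def __init__(self, value):
        self.value = value
        self.downstream = []   # signals this one feeds (may form a cycle)
        self.listeners = []    # callbacks run on the execution sweep

def update(root, new_value):
    # Sweep 1: invalidation. Each signal is invalidated at most once
    # before recalculation, so traversal terminates even on a cycle.
    invalidated, stack = [], [root]
    while stack:
        sig = stack.pop()
        if sig in invalidated:
            continue           # already invalidated: the cycle stops here
        invalidated.append(sig)
        stack.extend(sig.downstream)
    root.value = new_value
    # Sweep 2: execution. Run each invalidated signal's listeners once.
    for sig in invalidated:
        for listener in sig.listeners:
            listener()

# The cyclic slider/text-field pair from the post.
text, slider = Signal("50"), Signal(50)
text.downstream.append(slider)
slider.downstream.append(text)
text.listeners.append(lambda: setattr(slider, 'value', int(text.value)))
slider.listeners.append(lambda: setattr(text, 'value', str(slider.value)))

update(text, "75")
print((text.value, slider.value))   # ('75', 75): updated once, no loop
```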

## Saturday, July 22, 2017

### redo-signals: Planting imperative trees in a declarative forest

`redo-signals` is a Scala library for functional reactive programming. With `redo-signals`, you write declarative code:

```
val x = new Source[Int](0)
val y = new Source[Int](0)
val z = tracking { implicit t =>
  x.track + y.track
}
```

But some behavior is poorly suited to declarative style. Consider the following set of rules:

1. For every element of `sourceList1`, ensure there exists a `Type1Worker` worker in `workerList`.
2. For every element of `sourceList2`, ensure there exists a `Type2Worker` worker in `workerList`.
3. Don't remove or re-create workers.

You could write these rules declaratively, but they are more natural as imperative code:

```
sourceList1.zipWithStalenessFrom(Seq()).foreach { case (oldXs, newXs) =>
  workerList() = workerList.now ++ ((newXs.toSet -- oldXs.toSet) map (new Type1Worker(_)))
}
sourceList2.zipWithStalenessFrom(Seq()).foreach { case (oldXs, newXs) =>
  workerList() = workerList.now ++ ((newXs.toSet -- oldXs.toSet) map (new Type2Worker(_)))
}
```

But this introduces an unintended consequence. Suppose `sourceList1` and `sourceList2` originate from the same source. Upon every change in the source data, each `foreach` will execute once, resulting in two updates to `workerList` when there should be only one.

Consider the following complete example:

```
case class Payload(name: String)
trait Worker
class Type1Worker(val payload: Payload) extends Worker
class Type2Worker(val payload: Payload) extends Worker

val rootSourceList = new Source[Seq[Payload]](Seq())
val sourceList1 = rootSourceList map (_ filter (_.name.startsWith("a")))
val sourceList2 = rootSourceList map (_ filter (_.name.startsWith("b")))
val workerList = new Source[Seq[Worker]](Seq(), debugName = Some("workerList"))

sourceList1.zipWithStalenessFrom(Seq()).foreach { case (oldXs, newXs) =>
  workerList() = workerList.now ++ ((newXs.toSet -- oldXs.toSet) map (new Type1Worker(_)))
} (workerList.redoObserving)

sourceList2.zipWithStalenessFrom(Seq()).foreach { case (oldXs, newXs) =>
  workerList() = workerList.now ++ ((newXs.toSet -- oldXs.toSet) map (new Type2Worker(_)))
} (workerList.redoObserving)

workerList foreach { workers =>
  println(s"Now have ${workers.length} workers")
}

// One update
rootSourceList() = Seq(Payload("a 1"), Payload("b 1"))
```

This prints

```
Now have 0 workers
Now have 1 workers
Now have 2 workers
```

Two updates appear in the output: from 0 to 1, and from 1 to 2. How can we merge these into a single update?

The solution is to use `foreachDelayed`:

```
sourceList1.zipWithStalenessFrom(Seq()).foreachDelayed { xs => implicit u =>
  workerList.updateLater {
    xs map { case (oldXs, newXs) =>
      workerList.now ++ ((newXs.toSet -- oldXs.toSet) map (new Type1Worker(_)))
    }
  }
} (workerList.redoObserving)

sourceList2.zipWithStalenessFrom(Seq()).foreachDelayed { xs => implicit u =>
  workerList.updateLater {
    xs map { case (oldXs, newXs) =>
      workerList.now ++ ((newXs.toSet -- oldXs.toSet) map (new Type2Worker(_)))
    }
  }
} (workerList.redoObserving)
```

How does this work? In `redo-signals`, changes to the state of the world are made in two sweeps: the first sweep invalidates all signals, and the second executes listeners, which, in the process, often re-evaluate signals, causing them to become valid again. `foreachDelayed` and `updateLater` let the user tap into this procedure by scheduling code to run on either sweep.

The body of the function passed to `foreachDelayed` runs on the first sweep -- the invalidation sweep. This means that it is guaranteed to run exactly once per update. However, because it is on the invalidation sweep, it's not allowed to look at the state of any signals, because any signal is at risk of being invalid.

So, it passes the work on to `updateLater`. `updateLater` is the complement of `foreachDelayed`: it schedules an update to run on the second sweep, the execution sweep, at which time it may evaluate `xs` and `workerList.now`.

The result is that you can write imperative code that executes like declarative code -- no more than it has to. The various parts are scheduled to run on the correct sweeps, and signals change their values exactly as many times as there are root changes to the system. And so imperative code mixes seamlessly into the declarative world.
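Here is a toy Python model of that scheduling (my own sketch, not the library's API): work registered during the invalidation sweep is queued, and observers see the target only once, after all queued updates have run on the execution sweep.

```python
pending = []            # updates scheduled by foreachDelayed-style bodies

def update_later(f):
    pending.append(f)   # will run on the execution sweep

worker_list = []
notifications = []      # what a downstream listener would observe

def root_changed(new_items):
    # Invalidation sweep: each delayed body runs exactly once and may
    # not read signal values; it only schedules work for later.
    update_later(lambda: worker_list.extend(
        x for x in new_items if x.startswith("a")))
    update_later(lambda: worker_list.extend(
        x for x in new_items if x.startswith("b")))
    # Execution sweep: run the queued updates, then notify listeners once.
    while pending:
        pending.pop(0)()
    notifications.append(len(worker_list))

root_changed(["a 1", "b 1"])
print(notifications)    # [2]: one observed update, not two
```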

## Sunday, June 4, 2017

### The ∞ levels of programming

At the 0th level are functions. Functions operate on data. Data like 1 or 2 or "three".

```
def f(x):
    return x + 1
```

At the 1th level are meta-functions, what you might call macros. Meta-functions operate on code.

```
def reapply(c):
    return apply(c.func, c)
```

`reapply(f(2))` → `f(f(2))` → `4`

There is of course meta-data as well, and a meta-function can operate on meta-data:

`map(reapply, [f(2), f(3), f(4)])`

At the 2th level are meta-meta-functions, that operate on meta-functions and meta-meta-data.

```
def reapply(c):
    return apply(c.func, c)
```

`reapply(reapply(f(2)))` → `reapply(f(f(2)))` → `f(f(f(2)))` → `5`

And so on.

What the ∞ levels of programming give you is precision. Remember Scala continuations? Remember how you couldn't use `shift` inside a `map` or a `foreach`? Well, that was because `shift` was a meta-function, and `map` was a function, and functions can't operate on meta-functions.

Or more concretely, `shift` tears apart the code itself. But when `shift` is mapped, you don't know yet--at compile time--where it will land. So you can't pull the code apart.

But with a meta-meta-map, and a meta-meta-list, you could meta-meta-map your shift over your meta-meta-list.

There's some analogy to universes in type systems, that the way to clarify statements about statements is to separate them by levels. When code may only refer to the level below it, a multitude of paradoxes go away.

And then the puzzle is how to compile it efficiently.

## Monday, March 13, 2017

### My computer is haunted

After my computer sleeps, in certain GTK+ 3 applications, specific colors will not appear:

(Note the blank spaces)

After causing enough unique colors to be drawn, the problem goes away.

Some googling around suggests that this may be related to the `i915` driver for Intel graphics cards, at which point it became apparent that this was the kind of problem that will literally never be fixed.

It was suggested either to add the following incantation to `/etc/X11/xorg.conf.d/50-device.conf`:

```
Section "Device"
    Identifier "Default Device"
    Driver "intel"
    Option "AccelMethod" "UXA"
EndSection
```

or the following in `/etc/environment`:

`COGL_ATLAS_DEFAULT_BLIT_MODE=framebuffer`

neither of which, of course, made any difference.

Or, instead of waiting for the impossible to happen I can just run this bit of code:

```
for i in $(seq 0 255) ; do
    bash -c 'echo -n $'"'"'\033[38;5;'"$i"'mhi\033[0m'" '"
done
echo
```

and then everything is good.

## Monday, November 21, 2016

### The ∞ arrows of version control

At the 0th level is a point:

The point is the current state of your source code.

At the 1th level are the arrows:

These are your commits, connected by history.

At the 2th level are the arrows between arrows:

Every time you make a change to your history, such as by creating a new commit, or deleting a commit, you create a new history, linked to the old history with a 2-level arrow.

Most VCSs don't keep track of 2-level arrows, but they could.

At the 3th level are the arrows between 2-level arrows:

You may not see the need for 3-level arrows, but that's because you never work with 2-level arrows. You like 1-level arrows because you work with points, and 1-level arrows help you keep track of your points. You don't use 2-level arrows, but you can probably see why they are useful. Everyone knows that you should never git amend after a git push. But if git kept track of 2-level arrows, you could do a 2-level push, and sync your local amend with a remote amend. You could do a 2-level revert to undo your amend. And once you start working with 2-level arrows, you need 3-level arrows to keep track of those.

There are ∞ levels of arrows.

How can the computer store ∞ levels of arrows? Won't they take up ∞ space? Just one commit creates a 1-arrow, a 2-arrow, a 3-arrow, ... and so on.

But most 2-arrows are uniquely determined by a single 1-arrow, and uniquely determine their 3-arrow. Most n-arrows are degenerate, representing only the addition of a single (n-1)-arrow. At any time, only finitely many arrows require their own representation in memory.
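As a sketch of that storage argument (entirely hypothetical, not any existing VCS): a degenerate (n+1)-arrow can be computed on demand from the n-arrow below it, so it costs nothing to store.

```python
# An illustrative model: only explicit arrows occupy memory; degenerate
# higher arrows ("this history change adds that commit") are derived.
class Arrow:
    def __init__(self, level, payload):
        self.level = level      # 1 = commit, 2 = history edit, 3 = ...
        self.payload = payload

def lift(arrow):
    # The degenerate (n+1)-arrow induced by adding an n-arrow.
    # Nothing new is stored: it is computed from the arrow below it.
    return Arrow(arrow.level + 1, ("added", arrow))

commit = Arrow(1, "initial commit")   # an ordinary 1-arrow
history_edit = lift(commit)           # the induced 2-arrow
meta_edit = lift(history_edit)        # the induced 3-arrow, and so on
print(meta_edit.level)                # 3
```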

Can a human mind keep track of 20 levels of arrows, let alone ∞ levels? That is hard to say, because no one has tried.

## Friday, October 14, 2016

### Building OpenCV with Java on Linux

This command worked for me:
```
cmake .. -DJAVA_INCLUDE_PATH=/usr/lib/jvm/java-8-oracle/include/ -DJAVA_INCLUDE_PATH2=/usr/lib/jvm/java-8-oracle/include/linux
```

It was not necessary to set `BUILD_SHARED_LIBS` to false as described on opencv.org. It would be rather undesirable to lose the ability to use shared libs just because you also want to use Java!

**HOWEVER**, because `BUILD_SHARED_LIBS` is set, `libopencv_java310.so` will have dependencies. It will be necessary to load `libopencv_core.so.3.1`, among others, into the JVM at runtime, AND THEN to have `libopencv_java310.so` see those symbols. This is tricky in Java! If you don't want to go through this pain, you can go and unset `BUILD_SHARED_LIBS`; I won't judge you.

Essentially, what you need is a way to set `RTLD_GLOBAL` when loading the library. This can be accomplished using JNA:

```
NativeLibrary.getInstance(path);
```

Then you should be able to use OpenCV.
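For comparison, Python's `ctypes` exposes the same flag directly, which makes the mechanism easy to see (using `libm` here as a stand-in for `libopencv_core`):

```python
import ctypes, ctypes.util

# mode=RTLD_GLOBAL exports the loaded library's symbols to libraries
# loaded after it -- the same effect NativeLibrary.getInstance gives
# you on the JVM.
libm = ctypes.CDLL(ctypes.util.find_library("m"), mode=ctypes.RTLD_GLOBAL)
print(libm.sqrt is not None)
```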

## Sunday, June 19, 2016

### Example of using WebWorkers in ScalaJS

https://github.com/ellbur/scalajs-webworker-test

In case you were curious.

The biggest disadvantage of web workers is that, at least in the Chrome browser, functions cannot be transferred from one web worker to another. In principle this should be possible, since a function can be represented as code plus environment, and both of these are by themselves transferable.

## Wednesday, June 26, 2013

### I'm sick of tools

Synonyms for "tool" include "framework" and "system". So for example "build tool", "build system", "web framework", "message-passing system", "text processing system", "dependency injection framework", etc.

The problem with a tool is that it takes control away from the client code. make ("Make is a tool", begins the intro) will plan your build from start to finish [1]; sphinx (also a self-described tool) starts in control and relinquishes it through plugins; maven takes usurping control to a whole new level.

I am not bashing the merits of these programs; Sphinx produces beautiful output and there simply isn't a substitute for Maven right now. I am rather noting that to the extent that these programs are infuriating ("Maven is a device for reducing grown men to sobbing masses of absolute terror") it is because they assume too much control over the process flow.

But what mystifies me in retrospect is that I did not always feel this way. I used to feel insecure over my dislike of tools. In the same way that a novice C++ programmer when quickly shot down for their ignorance of sequence points is unlikely to question the philosophical rationale of even having sequence points, I assumed that the reason for my dislike of tools was due to my lack of mastery with tools, my lack of strength in the face of tools, and my lack of knowledge for the design considerations behind tools.

And yet somehow, when I did master a tool, enough to build my own tool around the tool, I still didn't like the tool.

I didn't quite understand why; I thought I had simply bumped against architectural mistakes in docutils; I didn't realize that the problem was that it was a tool, and that by wrapping it in my own tool I was in fact making the situation worse.

The moral became apparent while I was drafting a Makefile to build some code that used to be managed by TI Code Composer Studio (note the URL) and compiled with TI Code Generation Tools. The compiler cl6x for Code Generation Tools has perhaps the most mischievous command line interface I have ever encountered. Its rules for the naming of output files (which sometimes permit explicit overriding and sometimes not), which files must be built in a common invocation, and what must be the working directory are like carefully placed traps, and make has a habit of stepping everywhere.

And then, I decided, I don't have to write a Makefile. Nor must I use make or Scons or CMake or redo or any build tool at all. I can write a Python script that invokes the appropriate commands, and that's it.

And I won't sacrifice incremental build, because I can add memoization, and I won't sacrifice selectable targets because I can use argparse for command line arguments, and I won't sacrifice wildcards and path substitutions because I can write a few Python functions for manipulating paths and that's all I need (os.path does most of the work).
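Incremental build via memoization really can be a few lines of Python. This is a sketch under my own assumptions (a hypothetical stamp file; not code from scooter or quickfiles): a build step re-runs only when the hashes of its input files change.

```python
import hashlib, json, os

STAMP = ".build-stamps.json"   # hypothetical stamp file name

def file_hash(path):
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

def memoized(step_name, inputs, action):
    """Run action only if some input file changed since the last run."""
    stamps = {}
    if os.path.exists(STAMP):
        with open(STAMP) as f:
            stamps = json.load(f)
    current = {p: file_hash(p) for p in inputs}
    if stamps.get(step_name) == current:
        return                  # inputs unchanged: skip the step
    action()
    stamps[step_name] = current
    with open(STAMP, 'w') as f:
        json.dump(stamps, f)
```

You would call it like `memoized('compile', ['main.c'], lambda: os.system('cl6x main.c'))` (the file and compiler invocation are invented for illustration).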

A skilled Makefile writer will point out that the job would have been easy in make if you knew what you were doing. That's probably true. But the lesson to be learned is that we no longer need to feel bad that we didn't know what we were doing with make, we can get the job done in pure Python and nobody need ever know what they are doing, beyond what they are actually in reality doing.

Building on this philosophy I wrote scooter, which I think in retrospect contains some mistakes that bring it a little too close to being a tool (such as using exclusively inotify for dependency tracking), but it is at least a step in the right direction. The project quickfiles that came out of it is better.

As a tool-less conclusion I present you the simplest possible HTML document generator for Python:

```
def surround_with(start, end):
    def dec(f):
        print(start)
        f()
        print(end)
    return dec
```

Because that really is all you need, and does 50% of the work that docutils or Sphinx or LaTeX do, without requiring any new knowledge (everyone has to learn HTML at some point in their lives). Add in this function:

```
import base64

def pyplot_image(**fig_attrs):
    from matplotlib.pyplot import figure
    def dec(f):
        fig = figure(**fig_attrs)
        f()
        tmp = Path.mktemp().setext('.png')
        fig.savefig(str(tmp))
        # Read the temp file back and base64-encode it for inline embedding
        data = base64.b64encode(tmp.bytes()).decode('ascii')
        print('<img src="data:image/png;base64,%s"/>' % (data,))
    return dec
```

And you have everything you need to autogenerate reports of analyses including plots, as a standalone HTML document with no external dependencies, that you can email to someone, that will display in any browser.
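For example, here is the generator in use, writing a tiny report to stdout (the report content is invented; the definition is repeated so the snippet stands alone):

```python
def surround_with(start, end):
    def dec(f):
        print(start)
        f()
        print(end)
    return dec

# Decorating a function runs it immediately, wrapped in its tags:
@surround_with('<html><body>', '</body></html>')
def report():
    @surround_with('<h1>', '</h1>')
    def title():
        print('Analysis results')
    print('<p>Everything converged.</p>')
```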

 [1] Otherwise make -n would be much harder.

## Tuesday, June 25, 2013

### Math?

TEDxManhattanBeach - John Bennett - Why Math Instruction Is Unnecessary

I don't do math on a typical day. I am an electrical engineer; I work on X-ray spectroscopy systems.

It is true that I do math sometimes, but maybe one day in twenty. It is sometimes quite sophisticated math (probability or signal processing), but it takes no more than half an hour.

I have studied a lot of math. I have studied groups, rings, fields, field extensions, vector spaces, modules, categories, monads, adjunctions, topologies, sets, logic, algorithms, derivatives, integrals, differential forms, complex numbers, matrices, differential equations, and Laplace transforms. But I don't think about those things much anymore. I mostly do a lot of coding.

Though I suppose my job would be harder without math. The 80/20 rule still applies; I do little math because I can get it done quickly and move on to something else. If I couldn't do math it might consume my whole day.

Though I still don't see myself factoring a polynomial.

### I never use logging frameworks

Logging frameworks are not respectful of the 80/20 rule. A logging framework will make 80% of your logging needs easier, but they make the extra 20% harder.

That extra 20% consists of cases where you are in doubt as to whether logging messages can get through at all, such as starting a new project, moving to a new platform, misconfiguring the logging framework, long-forgotten bits of code overriding the configuration, layered logging frameworks that delegate to each other, missing write permissions on a log file, etc.

It is a special case of the 80/20 rule that the hardest part of any task is verifying whether an event ever happens at all. Are all my unit tests passing, or are they simply not being run? Did this assertion pass or are assertions disabled? Did the program succeed or was it never invoked? Is it still working or is it waiting for input, or did it hang? Is this logging statement never reached or is logging disabled, or the output redirected, or logging for this particular class disabled, or a filter applied to the log messages, or a dependency missing causing a default to no logging, or a buffer not yet flushed? Does the new version still work, or am I running an old version? Did he not send the email or did my spam filter reject it, or the SMTP server bounce it for being too large? Did she not call because everything was ok or because she was already passed out? Is he cool with me and Ryan last night or is he too pissed for words?

I never use logging frameworks. I debug by printf, and then delete the printf. What if I forget to delete it? That's what github is for. All of my projects are either small and open source, or GUI applications used by non-programmers where spamming stdout is a non-issue.

## Monday, June 3, 2013

### Proving type equality in a Scala pattern match

Finally got this to work. Here's the infuriating code:

```
sealed trait Two[A,B]
case class One[A]() extends Two[A,A]

def convert[A,B](x: A, t: Two[A,B]): B = t match {
  case One() => x
}
```

which of course gives:

```type mismatch;
found   : x.type (with underlying type A)
required: B
case One() => x
^
```

You would think that Scala might introduce a constraint that `A =:= B`, but apparently not. Of course, you could just make the `=:=` yourself:

```
sealed trait Two[A,B]
case class One[A,B](pf: A =:= B) extends Two[A,B]
```

except that this is much more cluttered, and not how people usually write and use case classes.

It turns out that the working solution is to use a matched (lower-case) type variable:

```
def convert[A,B](x: A, t: Two[A,B]): B = t match {
  case o: One[a] =>
    x: a
}
```

So, the match is not as pretty as it could be, but the case class definition is clean and we never explicitly mentioned =:=, which is always a good thing.

(What's going on is that Scala introduces two implicit `=:=`s, one linking `a` to `A` and one linking `a` to `B`, but doesn't chain implicits.)

## Saturday, June 1, 2013

### Using continuations to reorder type inference, or, a lesson in literal program design

Consider the following definition for a chain of computations:

```
sealed trait Chain[-A,+B]
case class End[A]() extends Chain[A,A]
trait Chaining[A,B] extends Chain[A,B] {
  type I
  val first: A => I
  val rest: Chain[I,B]
}
```

It is much like the List type, except that there is a hidden (existential) dependency between adjacent elements.

The "nestedness" of the structure is chosen to be like the List type for a very specific reason: when you go to use a Chain, you will perform the first computation first, so it should be the least nested; the last computation should be the most nested.

List defines a right-associative :: operator for building. Let us define such an operator:

```
sealed trait Chain[-A,+B] {
  def :->:[A2<:A,P](f: P => A2): Chain[P,B] = new Chaining[P,B] {
    type I = A
    val first = f
    val rest = Chain.this
  }
}
```

But we see we have a type inference problem:

```
// Note the type annotation needed for the second stage.
val pipeline = (
       { (i: Int) => i.toString }
  :->: { (s: String) => s ++ s }
  :->: End()
)
```

We might be tempted to think that the problem arises from Scala insisting that infix operators be members -- this does seem to be a likely explanation, since `:->:` being a member means that Scala must deduce the type of the right-hand operand first, which is the opposite of the direction in which we would like type information to flow. But you will find that the following non-member function fares no better:

```
def c[A,B,C](f: A => B, rest: Chain[B,C]): Chain[A,C] = f :->: rest
```

But we can learn something from the areas where Scala's type inference does seem to work very well, namely chains of list operations -- `map`, `filter`, `groupBy`, etc. -- where you almost never need to annotate types. The reason is that any stage of a list-operation chain is itself an "incomplete operation chain" (i.e., we are potentially not done with the computation yet), which can be extended to another incomplete chain by way of the next `map` or `filter`.

So what we need is a representation of an "incomplete Chain".

The most obvious meaning of an "incomplete Chain" is "something that can be turned into a complete chain by providing the following steps".

Principal rule of functional programming design: when you can say plainly and succinctly what something "is", translate that sentence literally into code, and there is your design.

Do not use tricks, do not use fancy types: simply turn sentences into their most literal meaning.

(Analogously in imperative code, you are looking to say what something "does").

Let us apply the rule as literally as we can:

```
trait ChainBuilder[A,B] { outer =>
  def build[C](rest: Chain[B,C]): Chain[A,C]
}
```

Our "incomplete Chain" is, unsurprisingly, a kind of continuation, since a continuation is the most basic form of an incomplete computation.

We can then define the incremental building operations:

```
trait ChainBuilder[A,B] { outer =>
  def build[C](rest: Chain[B,C]): Chain[A,C]

  def and[C](f: B => C) = new ChainBuilder[A,C] {
    def build[D](rest: Chain[C,D]) = outer.build(f :->: rest)
  }
  def done = build(End())
}

def start[A] = new ChainBuilder[A,A] {
  def build[B](rest: Chain[A,B]) = rest
}
```

And now everything works:

```
val pipeline3 = (start[Int]
  and { i => i.toString }
  and { s => s ++ s }).done
```

I feel there may be something more useful here than merely getting better type inference but I haven't yet figured out what.