Poi 0.17 Released

by Sven Nilsen, 2020

A new version of Poi has been released!

In this blog post I will discuss Poi in general, give a short introduction to normal paths, and explain a bit about how Poi works and some of its development challenges.

Link to Github project

Why develop a new theorem prover assistant?

I often experiment with various approaches to automated theorem proving to gain new insights into mathematics. While other people read books to learn math, I often learn by coding. This year alone, I made a dozen different attempts with varying degrees of success.

Poi is one of these experiments that made it far enough to open source it.

It all started when I read about SKI combinator calculus. This got me thinking about leaving out lambda abstraction for once and instead using point-free representations, combined with the standard way of using variables in algebra.

What I like about normal paths in path semantics is that they are point-free and high level. So, I decided to make an attempt with normal paths as a starting point.

By setting aside the common problems with lambda abstractions and type checking, I could focus on the problems that are specific to point-free theorem proving.

These problems are quite different from the out-of-the-box algorithms that I previously developed, such as the monotonic_solver and the linear_solver.

Taming the beast

If you ignore the foundations of mathematics and bird's-eye-view theories, then the primary tool we use in math is algebra.

The problem with algebra is that on the surface it looks beautiful and innocent, but below the surface lives a beast.

In languages with dependent types, you can prove many of the things you can prove in algebra. However, sometimes dependent types feel too restrictive or too technical.

I want a tool where you write as little as possible and where you are not bound to 100% strict formalization. For example, maybe I want to change my formula slightly, just because I changed my mind or want to try something else.

I think algebra does a good job of representing this level of math, but I also feel that it makes knowledge hard to formalize.

It is like you have dependent types on one side of a spectrum and algebra at the opposite side:

dependent types <-----------------------> algebra
easy to formalize                         hard to formalize
hard to represent math                    easy to represent math

By representing math I mean in a type-agnostic sense, where communication of an idea is more important than its formalization.

Normal paths

One reason I use normal paths is to bring some of the ideas of dependent types into algebra.

So, what is a normal path?

Using an analogy, a complex number is written:

a + b * i

Here, a is the real part and b is the imaginary part. A real number can be thought of as a complex number where b = 0:

a + 0 * i = a

The analogue of a “real” function f, written in normal path notation, is:

f[id] = f

Since id is also a function, one can also think of f like this:

f[id[id[id[id[id[id[id[id[...]]]]]]]]]

The function f is not just a function, but also the space of identity morphisms, or paths to itself, paths between paths and so on.

When you write:

f[g] = h

This means that the property g of the function f is predicted by h.

For example:

concat[len] = add

This is a symmetric path, because one studies the output property len from the input properties len x len:

concat[len x len -> len] = add

However, the details of asymmetric paths can be added later. For simplicity, I will just use symmetric paths.
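
To make this concrete, here is a minimal Rust sketch (my own illustration, not code from the Poi library) that checks the symmetric path concat[len] = add on sample data:

fn concat(x: &[i32], y: &[i32]) -> Vec<i32> {
    let mut z = x.to_vec();
    z.extend_from_slice(y);
    z
}

fn len(x: &[i32]) -> usize { x.len() }

fn add(x: usize, y: usize) -> usize { x + y }

fn main() {
    let (a, b) = (vec![1, 2, 3], vec![4, 5]);
    // concat[len] = add means that the `len` property of `concat`
    // is predicted by `add`: len(concat(a, b)) == add(len(a), len(b)).
    assert_eq!(len(&concat(&a, &b)), add(len(&a), len(&b)));
}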

Just like you can add complex numbers along the real dimension and the imaginary dimension, you can compose normal paths in the “computation” dimension and the “path” dimension:

f2[g] . f1[g] = (f2 . f1)[g]           Composition in the "computation" dimension
f[g1][g2] = f[g2 . g1]                 Composition in the "path" dimension

Notice that composition along one dimension requires fixing the other dimension. Normal computation might be thought of as fixing the “path” dimension to id.
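
To see both laws on concrete functions, here is another minimal sketch. The choices of reverse, sort, even and eqv are mine, not from the post; the sketch relies on the normal paths reverse[len] = id, sort[len] = id and add[even] = eqv, where eqv compares two booleans for equality:

fn main() {
    let (a, b) = (vec![3i32, 1, 2], vec![5, 4]);

    // Computation dimension: reverse[len] = id and sort[len] = id,
    // so sort[len] . reverse[len] = (sort . reverse)[len] = id.
    let mut c: Vec<i32> = a.iter().rev().cloned().collect(); // reverse a
    c.sort();                                                // then sort it
    assert_eq!(c.len(), a.len()); // the length is unchanged, as id predicts

    // Path dimension: concat[len][even] = concat[even . len].
    // Since concat[len] = add and add[even] = eqv, concat[even . len] = eqv:
    // the parity of the concatenation's length is predicted by eqv.
    let even = |x: usize| x % 2 == 0;
    let eqv = |x: bool, y: bool| x == y;
    let concat: Vec<i32> = a.iter().chain(b.iter()).cloned().collect();
    assert_eq!(even(concat.len()), eqv(even(a.len()), even(b.len())));
}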

So, what is this “path” dimension?

The “path” dimension is a kind of movement from the concrete to the abstract.

For example, addition is more abstract than concatenation of lists, because if you study the length property of concatenation, you get addition.

In a computer, all programs are equal in the sense that they don’t mean anything, concretely or abstractly. However, since normal paths extend computation with a new dimension, one can suddenly talk about what these programs mean relative to each other. This is the basic idea behind Path Semantics.

The rewriting system in Poi

Poi uses a simple idea, which is similar to how macros work in Rust:

<pattern> => <synthesis>

On the left side you have a pattern that binds variables, and on the right side you substitute the bound variables into an expression.

For example:

concat[len] => add

However, despite this simple design, it is not trivial to do this in a practical way. This is because the pattern can match on any sub-expression.
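
As a toy illustration of sub-expression matching (this is my own simplified sketch; Poi's actual Expr type and rule representation are richer, and real rules also bind variables, which this fixed rule does not need), consider a rewriter that applies concat[len] => add anywhere in an expression tree:

#[derive(Clone, Debug, PartialEq)]
enum Expr {
    Sym(&'static str),            // e.g. "concat", "len", "add"
    Path(Box<Expr>, Box<Expr>),   // f[g]
    App(Box<Expr>, Vec<Expr>),    // f(x, y, ...)
}

use Expr::*;

// Apply the rule `concat[len] => add` to the first matching
// sub-expression, searching the tree top-down.
fn rewrite(e: &Expr) -> Option<Expr> {
    // Try the rule at the current node first.
    if let Path(f, g) = e {
        if **f == Sym("concat") && **g == Sym("len") {
            return Some(Sym("add"));
        }
    }
    // Otherwise, recurse into sub-expressions.
    match e {
        Path(f, g) => {
            if let Some(f2) = rewrite(f) {
                return Some(Path(Box::new(f2), g.clone()));
            }
            rewrite(g).map(|g2| Path(f.clone(), Box::new(g2)))
        }
        App(f, args) => {
            if let Some(f2) = rewrite(f) {
                return Some(App(Box::new(f2), args.clone()));
            }
            for (i, a) in args.iter().enumerate() {
                if let Some(a2) = rewrite(a) {
                    let mut args2 = args.clone();
                    args2[i] = a2;
                    return Some(App(f.clone(), args2));
                }
            }
            None
        }
        Sym(_) => None,
    }
}

fn main() {
    // concat[len](x, y) rewrites to add(x, y).
    let e = App(
        Box::new(Path(Box::new(Sym("concat")), Box::new(Sym("len")))),
        vec![Sym("x"), Sym("y")],
    );
    println!("{:?}", rewrite(&e)); // Some(App(Sym("add"), [Sym("x"), Sym("y")]))
}

Even in this toy version, the search order (parent first or sub-expressions first) is a design choice, which hints at why Poi makes that choice per rule, as described next.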

If you take a look at Expr::reduce_eval in the source code, you will find that Poi uses, for example, the index of the rule to decide whether it should prioritize sub-expressions or the parent expression.

In addition, a lot of special patterns are needed to handle edge cases, for example:

f:!{}([x..]) => f{(: vec)}(x)

This means that when f is not constrained and takes a list, Poi will insert the list constraint on f.

All these special cases are one of the reasons why, for now, I hard-code all the rules in Rust.

Most of the work on Poi, in addition to adding new features, is about finding the right balance in the rules.

Downsides with rewriting systems

The problem with using a rewriting system is that you get the same “spaghetti code” phenomenon as with Rust macros.

When I change a rule, I cannot immediately see how the change affects the overall system.

It is also difficult to measure how useful a rule is compared to an alternative.