Analysis: Why generics scales more than linearly in cost as an abstraction

I remember, several years back, learning how powerful interfaces and generics were as abstractions in C#. Rust takes this idea to the extreme: there are very few library abstractions that do not take advantage of generics and traits in some form or another.

However, after working on Piston for 2 years, building a meta parsing library, developing a new field of computer science called “path semantics”, and releasing my own dynamically typed programming language Dyon, I have a completely different perspective on what kind of abstractions are needed to scale very big projects.

Path semantics and complexity of scaling

Path semantics tells us that programming abstractions are compressed rules over a super-exponential geometric space of meaning. This follows from the geometrical structure that underpins the notation for path semantics. All normal programming languages are subsets of some language that otherwise looks identical but can be extended with path semantics (like a slice of bread is a subset of the whole loaf).

Extending a language with path semantics seems to be a big engineering challenge, with an incredible number of unsolved and unknown problems associated with it, all of which are irrelevant for this post. Yet the seeming universality of path semantics in precisely capturing the meaning of things we usually want to do with programs is pretty convincing. Most things normal programming languages do, including executing programs, functional programming, solving equations, unit testing and dependent types, can be traced back to path semantics.

This also means that natural languages must be at least super-exponentially complex, because you can easily translate from path semantics to natural language, but not the other way around.

This does not mean that all abstractions try to represent something super-exponentially complex, but that the meaning programs can possibly express is super-exponentially complex by nature. Therefore, as the world requires more and more complex features, abstractions tend to be pushed toward the super-exponential end of the semantic spectrum.

Scaling

When drawing a function that grows exponentially, we often use a logarithmic scale. Super-exponential means it grows faster than linearly even on a logarithmic scale. This is how hard it is to scale abstractions by default.

Only a language whose efficiency grows super-exponentially can scale perfectly; if it is merely exponentially efficient, it cannot scale better than linearly with the size of programming projects. There is no language, as far as I know, that has the property of scaling perfectly for the same space of meaning that path semantics covers.

By the way, if you solved this problem, you might also be able to make super-intelligent software, since exhausting this space is equivalent to inventing entirely new mathematical concepts and understanding physical processes, something that belongs to the super-intelligent category of problems. This is unsolved because we do not currently know an efficient way of using path semantics without manual human creativity.

Some things we should ask ourselves:

  • Even if we know generics does not scale perfectly, for which exact reason does it not scale?
  • What does “scale linearly” mean for programming projects?

If you want to write code that does X, and the amount of time it takes to do it is f(X), then f is a function that depends on the size of the project: f = g(S). This is because the meaning of X depends on a specific configuration of features that are coded in the project. The cost of this configuration is strictly lower for a smaller project that is nearly identical except for irrelevant features. Therefore, if h(S, X) = g(S)(X), the best we can do is make h(S, X) behave the same for any value of S. This is scaling perfectly.

Scaling linearly means there are factors k1 and k2 such that k1*h(S, X) = h(k2*S, X), where k1 and k2 are proportional. If k2/k1 = k2 (that is, k1 = 1), it scales perfectly. If k2/k1 is some other constant, it scales linearly. This is what it means to scale linearly as an abstraction.

All you need to test how something scales is a stopwatch: measure the time it takes to do X in a project as it grows over time. In a large project most features are irrelevant to X, but the overall project is mostly identical from one measurement to the next. This is the usual case where you want good abstractions.

An important thing to notice here: how well something scales is not merely an opinion. It has a precise meaning and is relatively easy to measure for a concrete case. The ambiguity comes from comparing doing X with doing Y. This is like comparing apples and oranges, and it is what makes it difficult to compare programming languages.
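
A minimal sketch of such a measurement, assuming you have already collected a few (project size, time) pairs with a stopwatch; the numbers and interpretation below are made up for illustration:

// Measurements of h(S, X): hours it took to do the same task X
// at different project sizes S (lines of code). The numbers are made up.
fn main() {
    let measurements: [(f64, f64); 3] = [(10_000.0, 1.0), (20_000.0, 1.7), (40_000.0, 3.1)];

    for pair in measurements.windows(2) {
        let (s1, t1) = pair[0];
        let (s2, t2) = pair[1];
        let k2 = s2 / s1; // how much the project grew
        let k1 = t2 / t1; // how much the cost of doing X grew
        // k1 close to 1  => scales perfectly
        // k1 close to k2 => scales linearly
        // k1 above k2    => scales more than linearly
        println!("project grew {:.1}x, cost of X grew {:.1}x", k2, k1);
    }
}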

Things go slower and slower when complexity increases

Programming projects increase in complexity over time.

As a counterexample, a running program that is unable to modify itself does not increase in complexity.

A programming project on the other hand is constantly modifying itself.

Not only does it get slower and slower to do the same operation over time, but the slowdown increases more than linearly. This does not mean that the running program will be slower, but that making changes or adding features usually gets slower overall.

For example, by adding generics it takes slightly longer to type the same code:

fn foo(x: u32) {}

becomes

fn foo<A: Num>(x: A) {}

This is merely a linear increase.

Why does generics scale more than linearly?

The problem comes from the fact that generic code needs to be provably correct (inductive), or provably not incorrect (coinductive). For each generic parameter that is added, more work is required to prove all the dependencies between generic parameters. This is what they are for: proving code correct in the absence of concrete types.
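
A minimal sketch of where this extra proof work shows up, using hypothetical Texture and Image traits (not from any real library):

// Hypothetical traits, purely for illustration.
trait Texture { type Pixel; }
trait Image { type Pixel; }

// One generic parameter: one bound for the compiler (and the author) to prove.
fn upload<T: Texture>(_texture: &T) {}

// Two generic parameters: the dependency between them must now be stated
// explicitly and proven at every call site.
fn blit<T, I>(_target: &mut T, _image: &I)
where
    T: Texture,
    I: Image<Pixel = T::Pixel>,
{}

Each extra parameter adds bounds like these, and whoever writes the generic code has to keep the whole web of dependencies in their head.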

Luckily, most of this work can be done by using a very fast algorithm.

Unfortunately, some of this work is offloaded onto the user who writes the generic code. This is a BIG problem. If we could solve it, then we could write super safe code very fast and easily, and reuse extreme amounts of code.

Closed sets of types

This is an idea of how to make generics much easier to reason about in some cases.

It is a kind of “closed set” or “family” of types:

  1. For every type in the closed set, the other types can be inferred
  2. All types belonging to a closed set must be known in the same crate
  3. Import an implementation of the closed set to put it into scope

This is almost like a trait with lots of associated types, but with the difference that each associated type implements the trait, instead of a single type implementing the trait. It is like a trait that the types have in common.

For example, if you want to make a Texture type and an Image type know about each other:

family Foo {
    type Texture;
    type Image;
}

impl Foo as Bar {
    type Texture = MyTexture;
    type Image = MyImage;
}

fn main() {
    // Decide which family to use for this scope.
    use Bar;

    let a: MyImage = my_image();
    let b: MyTexture = my_texture(a);
}

fn my_image() -> MyImage;
fn my_texture(a: Foo::Image) -> Foo::Texture;

The benefit of this feature is that you do not have to worry about which direction you need to prove the constraints of a type relative to other types. You do not need to associate MyImage with MyTexture and then MyTexture with MyImage.

Adding this to Rust might make abstractions like Gfx much easier to work with.
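
For comparison, a rough approximation of the idea in today's Rust is a trait with many associated types, as mentioned above. All names here are made up, and the extra where clauses and turbofish annotations are exactly the bookkeeping the proposed feature would remove:

trait Foo {
    type Texture;
    type Image;
}

#[derive(Default)]
struct MyTexture;
#[derive(Default)]
struct MyImage;

// `Bar` plays the role of the family implementation.
struct Bar;

impl Foo for Bar {
    type Texture = MyTexture;
    type Image = MyImage;
}

fn my_image<F: Foo>() -> F::Image
where F::Image: Default {
    F::Image::default()
}

fn my_texture<F: Foo>(_image: F::Image) -> F::Texture
where F::Texture: Default {
    F::Texture::default()
}

fn main() {
    // The family has to be picked explicitly at every call site.
    let a: MyImage = my_image::<Bar>();
    let _b: MyTexture = my_texture::<Bar>(a);
}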

How my experience with Piston influenced Dyon

In Dyon I tried to reduce the costs associated with scaling a project by trading them for other costs.
One thing I realized when thinking about closed sets of types was that code using generics is equivalent to source code where the dependencies are unknown:

fn main() {
    let a: Image = my_image();
    let b: Texture = my_texture(a);
}

fn my_image() -> Image;
fn my_texture(a: Image) -> Texture;

If you could interpret the source code as a whole, putting in concrete types, then this would be the same thing as using a closed set of types.

This is why Dyon uses dynamic modules to organize code: switching from one closed set of types to another is the same as switching out one dynamic module for another.

It is not the only reason: in a dynamically typed scripting language you can also decide when and in which order to load modules, which is pretty useful for interactive programming. Another benefit is that you save typing, since you do not have to write the extra generic parameters.

This is how the same code would look in Dyon:

// main.dyon
fn main() {
    a := my_image()
    b := my_texture(a)
}

// lib.dyon
fn my_image() -> MyImage { ... }
fn my_texture(a: MyImage) -> MyTexture { ... }

The code would be split into two files, where one is loaded as the context for the other, either as external functions (written in Rust) or as dynamic modules loaded by a loader script.

Conclusion

Analyzing this problem carefully shows that scaling is hard by default. As far as I know, it cannot be perfect, so you have to make trade-offs.