How to Think about Parallel Programming: Not!

I just watched this very interesting talk by Guy Steele. He talks about how programming languages have to adapt from a sequential programming style to a style that enables automatic allocation of processor resources to computations. He identifies the “accumulator pattern” as a sequential programming tool that must be avoided and replaced by operators like “Sum”, “Product”, etc., which come with algebraic properties like associativity, commutativity, idempotence and so on. These algebraic properties provide invariants that create “the wiggle room” for compilers to do proper automatic processor allocation.
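The contrast Steele draws can be sketched in Python (a rough illustration of the idea, not code from the talk): the accumulator loop carries a sequential dependency from step to step, while an associative reduction can be split into independent halves whose partial results are combined.

```python
# Sequential accumulator style: each step depends on the previous one,
# so a compiler/runtime cannot split the work across processors.
def sum_accumulator(xs):
    acc = 0
    for x in xs:
        acc += x   # loop-carried dependency
    return acc

# Algebraic style: express the computation as a reduction with an
# associative operator. Associativity provides the "wiggle room" to
# evaluate sub-ranges independently and combine the partial results.
def sum_tree(xs):
    if len(xs) == 0:
        return 0
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    # The two halves could run in parallel; associativity guarantees
    # the combined result equals the left-to-right accumulation.
    return sum_tree(xs[:mid]) + sum_tree(xs[mid:])

assert sum_accumulator([1, 3, 7]) == sum_tree([1, 3, 7]) == 11
```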

Nothing in this talk is real news, but it is always interesting to hear the opinion of a guy like Steele directly. I especially like and agree with the emphasis he puts on accumulators as a bad design choice for parallel programming. But then how come accumulators are an integral and important part of Babel-17?! Isn’t Babel-17 supposed to be great for parallel programming?

Actually, there is a difference between the accumulators in Babel-17 and the accumulators Steele frowns upon. In Babel-17, accumulators are explicitly marked as such! Currently there is no way to associate accumulators with properties like associativity, but there is nothing stopping a future version of Babel-17 from providing such facilities.

To illustrate the two different coding styles, look at the following Babel-17 code below, which defines two functions sqrsum1 and sqrsum2 and applies both of them to the vector (1, 3, 7). In both cases the result is 59, as the result of summing up the squares of all elements of the vector should be. The function sqrsum1 is written with an implicit accumulator (implicit for the compiler, explicit for the programmer!). The function sqrsum2 is written with an explicit accumulator (explicit for the compiler, implicit for the programmer!). The style used for sqrsum1 is the one Steele suggests avoiding, but the style used in sqrsum2 is already pretty close to the style Steele advocates.

A problem with sqrsum2 is that the protocol for an accumulator in Babel-17 is essentially sequential: collector_add_ x returns a new accumulator by accumulating x into the current one, and collector_close_ projects the final result out of the accumulator. In a future version of Babel-17, there could be additional accumulator protocols, for example collector_inject_, collector_merge_ and collector_project_ for a divide-and-conquer type of protocol. The sequential and the parallel protocols could be distinguished via the “with” construct by adding a qualifier: “with seq” and “with par” instead of just “with”.
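The Babel-17 listing itself is not reproduced in this excerpt, but the two styles can be sketched in Python (an illustrative stand-in only: the Collector class mimics the collector_add_/collector_close_ protocol described above, it is not Babel-17's actual semantics):

```python
# Style 1 (sqrsum1): implicit accumulator -- explicit to the
# programmer as a mutable-looking loop variable, invisible as an
# accumulator to the compiler.
def sqrsum1(v):
    acc = 0
    for x in v:
        acc = acc + x * x
    return acc

# Style 2 (sqrsum2): explicit accumulator object, a stand-in for
# Babel-17's collector protocol.
class Collector:
    def __init__(self, value=0):
        self.value = value

    def add(self, x):
        # like collector_add_: returns a NEW accumulator
        return Collector(self.value + x)

    def close(self):
        # like collector_close_: project the final result
        return self.value

def sqrsum2(v):
    c = Collector()
    for x in v:
        c = c.add(x * x)
    return c.close()

assert sqrsum1((1, 3, 7)) == sqrsum2((1, 3, 7)) == 59
```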


28 Responses to “How to Think about Parallel Programming: Not!”

  1. Shelby Moore III Says:

I wanted to email you this privately, but I cannot find your email address.

    I think you are missing the point about the critical importance of typing for composability and concurrency.

    Babel-17 vs. Copute quick analysis:

That is not meant to be antagonistic, but rather perhaps to spur us to share ideas.

    • phlegmaticprogrammer Says:

Hi Shelby, I should probably have an about page somewhere with my name and email address: it is firstname at lastname dot com, in case you prefer a private discussion; but I’d prefer a public one.

      Note that you can download a Netbeans plugin for Babel-17 and play around with Babel-17 if anything is unclear.

      I think you have not understood what linear scope in Babel-17 is all about. I base this on your comment: “Afaics, the key problem with his design, is there is no structured and explicit way to define the boundaries of which functions are referentially transparent and which are not.”.

In Babel-17, ALL FUNCTIONS ARE REFERENTIALLY TRANSPARENT! Therefore there is no need to define any boundaries. I don’t think it gets much simpler than that 🙂

  2. Shelby Moore III Says:

Hi Steven, thanks for the clarification. I agree, a public discussion is preferred. I didn’t want to create Copute (and I don’t even have a working compiler yet), but I need a better mainstream language, and I got tired of begging others to do it and waiting. I don’t even think I am the most qualified to do it (I’m historically an applications programmer, learning to become a language researcher + designer since 2009), so here I am. So it is worthwhile if there is anything we can learn from each other and share publicly. In short, I appreciate the discussion, because I don’t want to make a design error or waste my effort “barking up the wrong tree”.

If I understand correctly, per your clarification Babel-17 makes all functions pure, but I asserted my understanding in my prior post that within pure functions, the structured code is granularly opaque by employing per-data-instance shadowing. That does not make the containing function impure, so that is fine (and Copute and JavaScript can do that too, but JavaScript can’t assert that the function is pure, and closures are even opaque). One proposed difference for Copute is that the function is only a referentially transparent boundary if it is declared and enforced as pure.

If I understand correctly that we share the goal of facilitating integration/interoperation of (and transition between) imperative and pure functional programming, then why would we not need both impure and pure functions?

My understanding is that programs are more than just functions; they are compositions of rich semantic paradigms which can be declared with typing. And life is not entirely referentially transparent. For example, the Observer pattern requires a callback (external state), thus it can never be a referentially transparent construction. However, in an idealized world, we can invert the Observer pattern as Functional Reactive (FPR).

FPR is not theoretically less efficient, because it could be optimized to only recompute the portions of the global FPR chain that have changed, thus it isn’t different from the propagation of state dependencies in the Observer pattern in that respect. In both Observer and FPR, the order of propagation of state change can be coded indeterminately or deterministically. However, in some respects the Observer pattern is easier, because one can just slap it on anywhere, without too much concern for overall design (however, this sloppiness will manifest as race conditions and other out-of-order situations, etc.).
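The contrast between the two paradigms can be sketched in Python (purely illustrative; the Subject and Cell names are hypothetical, not from either language): the observer pushes each change to callbacks through external state, while the reactive cell recomputes from its inputs on demand, and only when marked stale.

```python
# Observer style: the subject pushes each change to registered
# callbacks -- external state, so not referentially transparent.
class Subject:
    def __init__(self, value):
        self.value = value
        self.observers = []

    def subscribe(self, callback):
        self.observers.append(callback)

    def set(self, value):
        self.value = value
        for callback in self.observers:
            callback(value)

# Reactive style: a derived cell recomputes from its inputs on
# demand, and only cells whose inputs changed need recomputation.
class Cell:
    def __init__(self, compute):
        self.compute = compute
        self.cache = None
        self.dirty = True

    def invalidate(self):          # an input changed
        self.dirty = True

    def get(self):
        if self.dirty:
            self.cache = self.compute()
            self.dirty = False
        return self.cache

seen = []
src = Subject(1)
src.subscribe(lambda v: seen.append(v * 2))
src.set(5)
assert seen == [10]

state = {"x": 1}
doubled = Cell(lambda: state["x"] * 2)
assert doubled.get() == 2
state["x"] = 5
doubled.invalidate()  # a real FRP runtime would track this automatically
assert doubled.get() == 10
```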

So I will not argue theoretically that it is impossible to make every possible program a composition of referentially transparent functions (that is where we want to be); however, in practice the transition from where we are today in the computer world to the future will be eased if one can use a referentially opaque function sometimes. Sometimes “quick and dirty” is what gets the job done, and often what gets the job done is what is popular. So my idea with Copute was to make the transition to pure functional programming as familiar and painless as possible… and in my case for users of afaik the most popular computer language in the world, JavaScript. Also because then I will have a ready market, since afaik there is no good pure FP compiler with a great typing system, and optional integrated dynamic typing, that outputs JavaScript? (I started by studying HaXe strengths and weaknesses, then I learned Haskell, etc.)

Back to the more fundamental theory point. Although we want to have programs which are composed of entirely referentially transparent (i.e. pure) functions, we encounter gridlock where we need to refactor up a tree of pure FP code, if some function on a branch wasn’t granular enough (i.e. conflated some data which really should be orthogonal). So eventually our ideal world of pure FP everywhere becomes untenable; in short, it can’t scale in a wide-area composition. It is sort of analogous to the C++ “const” blunder, which had to propagate everywhere in order to be used anywhere.

Thus we are likely to use pure FP for problem spaces that are well encapsulated, but we will continue to use imperative coding for more dynamic social integration. Thus it seemed to me critical that our typing system make these boundaries transparent (explicit, not silently inferred polymorphism).

    Do you have any comments that could spur another round of exchange? Or does this just seem wrong or irrelevant from your perspective? I am eager to learn from any one who is willing to share. Thanks.

    • Shelby Moore III Says:

      ADD: Coase’s theorem applies, i.e. that there is no external reference point, thus all boundaries will fail (be subverted by the free market of the fact that the universe is trending to maximum disorder). Thus the all pure FP or nothing boundary at the function, is not realistic. It is not in harmony with thermodynamics and entropy.

    • phlegmaticprogrammer Says:

Well, Babel-17 is not about quick-and-dirty. If it was, I could just use Python instead 🙂 There are already enough quick-and-dirty solutions out there, I don’t need to add to that. Of course that does not mean that it would not be possible to write quick solutions in Babel-17; but these quick solutions just wouldn’t be dirty …

      My goal is actually not to mix imperative and functional programming (Scala has done this pretty well) but to be purely functional while allowing imperative programmers an easy inroad to Babel-17.

It is true that pure FP might break down at some point when interfacing with the “real world”. But as you also pointed out, it might not, because paradigms involving state change might be rephrased to employ only pure functions. My approach is to examine all the scenarios where I want to use Babel-17 and only to introduce state when I see no other way of getting the job done.

If state is introduced, some sort of simple static typing might come in handy to distinguish between pure and impure functions. Static typing might also come in handy when interfacing with foreign APIs like CUDA or OpenCL. But I’ll worry about that when I get there.

  3. Shelby Moore III Says:

    I had noticed that Python has dynamic rebinding of reference identifiers. Also conflating layout with grammar is objectively a problem with Python (I’ve written the objective reasons).

I am not a Scala programmer, but please correct me if I am wrong; my understanding is there does not exist a declaration in Scala to have the compiler check for referential transparency. So imho this leaves a wide-open market for Copute. I don’t understand O’Caml well enough to comment on it, other than to say that its syntax is so different from what I am used to looking at that I have not yet been able to train my mind to read O’Caml sources as naturally as I can C-like code. I actually thought about using O’Caml instead of writing my own language, but after 2 or 3 quick attempts to wrap my mind around it, it just seemed it wasn’t going to work out for imperative programmers (if I can’t grasp it quickly, imagine how much more unrealistic it is for your average programmer). And afaics, it is not the ML heritage, but some O’Caml-specific issues; but again I don’t yet have a unified understanding of ML, O’Caml, etc. It is on my todo list.

    If we look at just the referentially transparent portion of Copute’s design, I have come to the provisional conclusion that Copute is equal to Haskell in fundamental expressive power, except afaics Copute has a key typing system advantage. There is a research paper referenced at that link, which explains the analog between OOP methods plus inheritance, and FP algebraic types plus function overloading. Btw, I made the same (non-formally expressed) discovery independently in late 2008 or 2009 when I was studying Haskell, then became aware of that research 2 days ago while reading LtU.

    So I guess I want to analyze two questions.

1. How does Babel-17’s approach compare with Copute’s approach of using a C-like syntax with a pure declaration to enforce a pure function?

2. Given that Babel-17 cannot have the same expressive power as Haskell or Copute without a typing system, are there any possible issues with designing Babel-17’s typing system later, versus having a unified design from the start?

My goal in responding again is not to discredit Babel-17 nor to promote my work, but rather to see if we can illuminate any issues that might impact our current decisions about our separate designs. You are ahead of me in implementation, which in part may be attributed to the fact that you don’t have to deal with typing, and especially not with a parametric type inference algorithm. Also I have a verified formal LL(2) grammar (not yet published). I am trying to determine what you have done conceptually (not just in syntax) which is not in Copute. And perhaps you will gain similar insight.

    In any case, my aim is to encourage you, not discourage you.

  4. phlegmaticprogrammer Says:

    Yes, I also don’t like that layout influences the meaning of a programming language. No, Scala does not have means to distinguish pure from non-pure code.

First, you have to be careful when debating “the fundamental expressive power” of programming languages. One of the assumptions in the “expression problem” you mention is that of static typing. As Babel-17 has no static typing, the “expression problem” is actually a non-issue in Babel-17. Also note that typing generally makes a language less expressive and powerful, not more 🙂 Often this is intentional, because, for example, IDEs can handle less expressive languages better than more expressive ones.

Hence my reluctance to introduce a type system from the beginning. If you introduce one from the start, you limit your language from the start. You mentioned that you want to avoid design mistakes with Copute. Well, by equipping it with a type system from the start, you are pretty much guaranteed to have made several design mistakes already.

The goal of Babel-17 is to have all of its features as unified as possible. In case I decide on certain static typing features, I have no problem making changes to the whole language to maintain that harmony.

I have taken a look at your description of Copute, but currently it pretty much seems to be vapor-ware, in the sense that you have a bunch of ideas about some of the features of Copute, but there is no implementation available, and no description of its syntax and, most importantly, its semantics. That is not meant to discourage you; I started the same way, and it is still a long way for me until Babel-17 resembles my internal ideal of the perfect (non-system) programming language.

But it means that I cannot really make a solid comparison between Babel-17 and Copute. Concerning the C-like syntax: Babel-17 is very different from C, and therefore a C-like syntax would not make much sense for it.

One thing though I want to mention is that insisting that EVERYTHING in Babel-17 is purely functional has helped me a lot so far in the design process for Babel-17. For example, I wanted for and while loops in my language, and I didn’t want all these nested “let val x = … in …. end” expressions in my language. This has led me to the discovery of linear scope and to the addition of collectors to Babel-17. I also knew that I wanted exceptions in my language. I found a very nice way to do that. From your description of Copute I see that you forbid the interplay of exceptions and pure functions. This is exactly the danger with starting with a type system: it lets you easily express what you want (in your case, to distinguish pure and impure functions) but also lets you take short-cuts and accept limitations you can rationalize now (ah, it is enough to have exceptions in impure functions, don’t need them in pure ones) but might regret later (damnit, if I could only handle that division by zero exception in my pure function).

  5. Shelby Moore III Says:

Hi Steven, I don’t want to get into any kind of religious design war :), and I don’t want to disparage your work even by implication, as I believe in diversity and letting the market sort things out. I am happy there are others working on similar goals. Just a quick reply to say that the Maybe (or “null”) algebraic type substitutes adequately for a divide-by-0 exception. I agree with the stated reasons that Go rejected exceptions and assertions (imo both should be handled in the typing system). And static typing is intended to be completely optional in Copute (just declare the type dynamic), so there is no loss of potential expressive power. I was referring to the specific type of expressive power in the challenge raised by Wadler in the “Expression Problem”, which derives from extensions. It may be true that I have committed design errors in the type system, or just inherently, in that any type system is by definition incomplete because it defines a set of limitations; luckily I had many examples to pull from to know what has and hasn’t worked in the past. I appreciated the discussion very much. Thank you so much for being friendly and upbeat. We have a fun vocation, so we have every reason to keep smiling. Thanks for looking at Copute, and apologies for its current incomplete state.

    • phlegmaticprogrammer Says:

      The designers of Go come very much from a system programming background. To be honest, I think for the kind of programming I do (non-system programming), Go sucks.

As for the Maybe type as the result of divisions, I thought about this hard when designing Babel-17. It does not work in a setting of static typing. For example, what type does the expression (3 * (20 div x)) have? Is it “Int Option”? That would be very clumsy. You can try to escape the situation by saying “null” is an instance of every type, and for example 3 * null results in null. But then you lose your typing advantage, and in fact you introduce dynamic typing for “null”. In Babel-17, exception handling is the result of generalizing this special behavior of “null”.
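The clumsiness can be sketched in Python, with None standing in for the Maybe/Option case (a hypothetical sketch of the typing problem, not Babel-17's actual mechanism): once one sub-expression may be "null", every surrounding operation has to be lifted to propagate it.

```python
# A tiny Maybe: None stands for the "exceptional" division result.
def safe_div(a, b):
    return None if b == 0 else a // b

# Once a sub-expression is a Maybe, every surrounding operation must
# be lifted to propagate None -- the "Int Option" clumsiness.
def lift2(op, ma, mb):
    if ma is None or mb is None:
        return None
    return op(ma, mb)

# 3 * (20 div x), with the multiplication lifted by hand:
def expr(x):
    return lift2(lambda a, b: a * b, 3, safe_div(20, x))

assert expr(4) == 15       # 3 * (20 div 4)
assert expr(0) is None     # the "null" propagates outward
```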

  6. Shelby Moore III Says:

Agreed, Go is aimed towards systems programming and isn’t an optimum solution for the use case I am driving towards either. And moreover, I think the future of systems programming is to have apps that are provably correct (Linus Torvalds admitted such a remote possibility for “designer languages”), so I think Go is also going to be superseded in time, but that might be a long time from now. Good to see Thompson is working on an upgrade for C.

The problem is that an exception causes order dependence, which removes the ability to parallelize the implementation. Consider:

    raise First + raise Second handle First => 1 | Second => 2

    What is the value of this expression? It clearly depends on the order of evaluation.

    Although afaik an exception does not violate referential transparency literally, it does remove the orthogonality of functions, which for me is one of the key outcomes of referential transparency, and orthogonality is necessary for composability. Whereas typing is the lego pattern for composability. (Btw, the name Co-pute is driving towards cooperation and composition; I am aiming for a wide-scale web mashup language.)

    If the programmer expects the possibility of an exception, then the function needs to declare that in its types. Afaics, there is no shortcut of untyped exceptions that maintains composability and concurrency. The programmer can make a type NeverZero, if one prefers to attack the problem on input, sort of analogous to reversing the Observer pattern to Functional Reactive as I mentioned in my 2nd post above.

    Yeah, it is fugly to have to propagate exception cases everywhere. But that is life. The shortcut has a real and important cost. And I agree with the comments at Go that exceptions turn into a convoluted mess, especially when one starts composing functions in different permutations.

    • phlegmaticprogrammer Says:

      That’s where the beauty of Babel-17 is. The Babel-17 equivalent of your example expression:

      (exception First) + (exception Second)
      case First => 1
      case Second => 2

      will always evaluate to 1, but not because of the evaluation order of Babel-17, but because Babel-17 is a deterministic language. In particular, the expression

      (concurrent x) + (concurrent y)

      is perfectly well defined and the computations x and y might actually run in parallel (depending on the number of available processing units). In Babel-17, functions are orthogonal, and exceptions are no problem.

      • Shelby Moore III Says:

        Consider a function A which inputs a function B, where A catches an expected exception, but note this is not declared in the type of A or B. So function B is input to A, but unlike other Bs in the past, this B catches the exception that A is expecting to catch. The programmer of B would have no way of knowing that A expected the same exception, because that is not declared in the types. Orthogonality and composability are subverted. Whereas if B declared, by returning a non-exception type, that it handles the exception, then the problem is resolved. So then A would be overloaded (folded) on the return type of B: one version/guard of A that handles the exception, and one that lets B handle it.

        I can see how you get the elegant determinism by basically adding a “null” test on every return type of every function, instead of doing a longjmp, but afaics hiding the exception return type from the programmer causes the above problem.

        I see no problem using exceptions when there is no function call inside the try block.

        If I have made a wrong assumption or erroneous statement, I apologize in advance.

  7. phlegmaticprogrammer Says:

    I think you mix up different issues now. Your last argument was that exceptions don’t go together with parallelism, but as my example showed, they do.

    Your new argument basically has nothing to do with exceptions. What you are really saying is that without static typing there is no overloading. Yep, I do agree with that. But although overloading is nice to have, it is by no means necessary to write scalable and composable programs.

    • Shelby Moore III Says:

      I was making two related points: one, that concurrency is not achieved unless we use a Maybe type. Afaics, you’ve since clarified for me (thank you) that you are using a Maybe type, and you’ve hidden (made implicit) the monadic action in a try-catch abstraction.

      Agreed, exceptions can be concurrent if they don’t employ the longjmp paradigm, and instead (even implicitly) employ the Exception monad on the Maybe algebraic type, where the compiler is doing the monadic lifting behind the scenes.

      So our discussion is about the choice between doing that implicitly and doing it with static typing. The advantage of doing it with static typing is that it can be propagated automatically with a monad type (which afaics Babel-17 achieves), and we gain the ability to overload on type (how Haskell and Copute do it).
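What the implicit monadic lifting might look like can be sketched in Python (purely illustrative; this is not Babel-17's actual implementation): every value is either a result or an exception tag, every operation short-circuits on the exception case, and a handler pattern-matches the tag. Under this encoding's left-to-right bind order, (exception First) + (exception Second) deterministically yields First, mirroring the deterministic result described in the post.

```python
# Each value is ('ok', v) or ('exn', tag). bind short-circuits on the
# exception case -- the lifting a compiler would do behind the scenes.
def ok(v):
    return ('ok', v)

def exn(tag):
    return ('exn', tag)

def bind(m, f):
    return f(m[1]) if m[0] == 'ok' else m

# Lifted addition: evaluates the left operand's effect first, so the
# result is deterministic regardless of how the operands were computed.
def add(ma, mb):
    return bind(ma, lambda a: bind(mb, lambda b: ok(a + b)))

# A try/catch abstraction: pattern-match the exception tag.
def handle(m, handlers):
    if m[0] == 'exn' and m[1] in handlers:
        return ok(handlers[m[1]])
    return m

# (exception First) + (exception Second), handled:
result = handle(add(exn('First'), exn('Second')),
                {'First': 1, 'Second': 2})
assert result == ('ok', 1)   # always First: deterministic
```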

      The second point I was making is that static typing is critical for composability.

      With dynamic typing, the only way to prove correctness is with assertions on inputs (i.e. exceptions). These assertions are just types[1]; e.g. instead of throwing an exception to ensure NonZero, just make the input type a NonZero type.

      How do we compose functions when their invariants are not explicitly stated by type, but rather hidden in their implementation as assertions that will throw exceptions? We end up with spaghetti, because the composition of the invariants is not being checked by the compiler. Non-declared assumptions get lost. I have 25+ years coding in spaghetti. 🙂 Is there another way to deal with it that I am not aware of?

      For composability, afaics the exceptions must be coded on the return type (post-conditions[1]), and/or on the input types (pre-conditions[1]).


  8. phlegmaticprogrammer Says:

    Now we are back to the religious war you wanted to avoid 😀

    You are saying it is not possible to write composable programs without static typing. But static typing cannot express all the invariants (not even the most important ones) in a program. So when you rely exclusively on static typing to track those invariants, you in fact write spaghetti code while thinking you write clean and nice code … Taking away static typing merely takes away that false sense of security.

    The way to write composable components is pretty easy to describe: you design them, and then you document them. You should also write tests for them (you have to do that anyway, with or without static typing).

    Also note that declaring exceptions as part of the type is viewed as a design error by most modern language designs. For example, in Scala exceptions are not declared as part of the type. And overloading on the return type to check whether it can throw an exception or not is definitely the worst kind of spaghetti code I have ever heard of 🙂

    Furthermore, modeling with static typing the same thing that has been achieved for exceptions in Babel-17 means that EVERY TYPE has to include the algebraic case of an exception. Most statically typed languages shy away from that, including Haskell. So it is Haskell’s dirty little secret that the division by zero exception is not handled via the exception monad.

    • Shelby Moore III Says:

      Agreed I definitely want to try to avoid subjective injection (because it won’t help either of us produce a better product). So we are not going to fight that irrational war, because we will delineate what we can conclude objectively. You will correct me and point out where I am making a subjective conclusion.

      Agreed, that the tension in composability hinges around the granularity (more completely, the fitness) with which the invariants can be declared and enforced/checked. Agreed also that to the degree that one’s statically typed implementation does not fully express the invariant semantics, then aliasing error will spill out chaotically (aliasing error manifests as noise).

      Isn’t the objective fallacy of arguing against static typing that the alternative isn’t better? Afaics, testing is never exhaustive due to the Halting problem (beyond getting some common use cases covered, which is not an argument against static typing because, as you also said, it is needed in any case), because one would need to test every possible composition at all permutations of N potential mashup companion functions before we even know what they will be. Note, there were some comments at LtU yesterday about the impracticality and inadequacy of testing for proving associativity for Guy Steele’s example. Documentation is not an argument against static typing, as it is needed in any case.

      Static typing is a first level check, it enables the compiler to check some errors. And to the degree one strives to produce types that fully express the invariant pre and post-conditions at all semantic levels, then the degree of checking is increased (but sadly aliasing error isn’t a linear phenomenon, so that might not help). This is not security against all possible semantic errors, but at least remaining errors are in those that slipped through the design of the types (even though they manifest as aliasing error far from the source). Types can be reused, so we can put a lot of effort into designing them well.

      There is a tradeoff. As the types become more restrictive, they become more difficult to compose. The C “const” blunder is one of the infamous painful examples (I’ve hopefully been careful not to repeat this “const” mistake in Copute). This is real life injecting itself into our attempts at a Holy Grail, which there never will be, of course. Actually it is a futures contract, which is the antithesis of natural law. “const” could never be assigned to a non-const in any scenario (there was no escape route); it thus infected the entire program.

      Thanks for pointing out that the exception is not a result type and thus a design error. I agree that declaring exceptions as return types is a design error (an ad-hoc hack), because an exception is not a proper post-condition, i.e. it is a non-result semantic and thus a design error to return it as a result. I don’t see objectively how an implicitly lifted exception-monadic try-catch is not also a design error by the same logic?

      Divide-by-zero means our result is NFG (no fugling good, lol 🙂 ), which is not a result at all, it is a different semantic entirely, so normally we are designing our code so that the exception will never occur. So I offer a NonZero argument type; the caller can construct that type and check at run-time that they are not passing a 0 value. The dynamic checks are there in static typing, but they are forced to be checked (note the constructor NonZero( 0 ) would throw an assumed uncaught exception, aka an assert, i.e. a stack trace into the debugger, because the caller didn’t even do the check). If the caller already had a NonZero type, they don’t need to check it again.

      Afaics, that is more PROVABLY correct than returning an exception, because it describes the compiler-checked invariants, rather than an ad-hoc return which is not a return semantic. So I am not arguing for an exception return type, except where you argue for try-catch as an ad-hoc “solution” (perhaps that wasn’t reified in my prior post), but rather for declaring the invariant arguments and avoiding exceptions entirely, where practical (i.e. correct design).

      After all, we are not in religious war, because Copute supports dynamic typing too. I understand in some (maybe even most) use cases, static typing does not provide reasonable benefits to justify its use. It can cause tsuris for very minute gains in checking. Afaik, inferred typing goes a long way to increase the utility. And potentially Map reduce constructed data types will make typing much more useful, which pertains to the title of this blog page (link is to my comments which were censored from LtU). I am not criticizing Babel-17 for not having static typing (and am encouraging you to pursue your design ideas), I only asked that we characterize any tradeoffs of potentially adding it later incrementally, never, or now (as I am trying to do in one big difficult design+implementation step). And afaics, this discussion has helped document/reify/correct some of my own understanding. I hope we have also clarified for your readers some of your design decisions for Babel-17. What else can I say, but a big sincere thank you.

      Are there any more objective observations we can make on this issue? Any corrections?

  9. phlegmaticprogrammer Says:

    Let me first say that I am the last person to deny that testing cannot solve all of our problems. I’ve spent a considerable amount of time studying how we can actually PROVE invariants that are much too hard for simple static typing.

    Let me also say that I always enjoy a good discussion, because I consider defending/explaining my design decisions that I made with Babel-17 to be part of the design process. It helps me to confirm that I made the right decision. Sometimes it also helps me to reconsider a decision I made.

    I think we agree now on a lot of points. I think after you have actually designed and implemented your type system and tried it out in practice we will agree on even more points 🙂

    Take for example the NonZero type you mention. You are saying that each time I have a division I need to construct this type? Which is then checked DYNAMICALLY?? So I would have to write, for example:

    (a * b) div NonZero (a - b)

    and the effect of this ugly piece of code would be just the same as when I write in Babel-17

    (a * b) div (a - b)

    Now, of course there are also ways to get around having to explicitly write down NonZero in static typing, most notably via implicit type conversions. But I have learnt that each such “improvement” comes with its own set of problems. Therefore I came to the conclusion that embracing dynamic typing from the beginning allows me to focus on a simple and clean design of my language.

    Furthermore, we can see now that the REAL point you are making has nothing to do with static typing. You are saying that certain exceptions should not be caught! But this is just exception handling 101. You can just let these exceptions bubble up to the top. And in case one day you really want to catch a DivisionByZero exception, you can.

    I think our discussion has been representative for some of the changes in programming languages that are currently going on. Many people think that languages will converge somewhere in the middle between dynamic and static typing. I just happen to prefer to start with no static type system at all and work my way from there, effectively preferring freedom over security in those cases where I have to choose. You prefer to start with a static type system and will probably need a couple of iterations until you have one you are happy with.

    Thank you for the discussion!

    • Shelby Moore III Says:

      I concur.

      Could you offer me any literature reference(s) on “studying how we can actually PROVE invariants that are much too hard for simple static typing”? Preferably something readable on the web, otherwise anything you can recommend.

      I would like to consider that before learning by the fire of experience 🙂

      The only distinction I want to make is that by testing the invariants before making the function call, the function (f) being called is provably correct (to its declared type):

      f NonZero a b = (a * b) div (a - b)

      Now call it:

      if a - b == 0 then …
      else f NonZero a b

      An exception is only thrown if the check above is omitted when NonZero is constructed. This is why Copute has a “throw” but not a “catch” instruction.
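      In Python-flavored pseudocode, the idea might look as follows (a sketch with hypothetical names; the original snippet is Copute-style). The caller tests the invariant first, so the construction of NonZero never actually throws:

```python
# Hypothetical stand-in: the caller checks before constructing NonZero,
# so the constructor's exception only fires if the caller forgot the test.
class NonZero:
    def __init__(self, n):
        if n == 0:
            raise ValueError("NonZero constructed with 0")
        self.n = n

def f(nz, a, b):
    # f may assume its divisor is nonzero; that is its declared interface
    return (a * b) // nz.n

def call_f(a, b):
    if a - b == 0:
        return None                    # caller handles the zero case itself...
    return f(NonZero(a - b), a, b)     # ...so this construction cannot throw here
```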

  10. phlegmaticprogrammer Says:

    The thing is that even though your f may be “provably correct”, the compiler cannot prove it automatically in general and therefore has to flag an error.

    I can give you pointers for interactive theorem proving, but none of this stuff will be immediately useful for you when designing your language, as it takes a few years to learn and absorb the lessons of this field. But to get started, you could for example play around with the theorem prover Isabelle, which is available online together with lots of documentation.

  11. Shelby Moore III Says:

    Agreed the compiler can not prove that the function (f) will be instantiated without error by the caller of f, but afaics, it can prove that the invariants of f’s interface are not violated.

    I am not proposing that any one will ever be able to prove how a function will be used in every possible permutation of composition. Complex algebras will forever enumerate a growing but finite # of possibilities (and this may be very useful in helping us design class invariants), as new algebras will be discovered/enumerated. I am only proposing that we can prove that the function’s input and output invariants are enforced at the interface only. Anything more would be a futures contract (where instead we can build the proof of the algebra into the generation of the type itself), which afaics is an inherently unstable proposition (a/k/a slavery like C++ “const”) because of the 2nd law of thermodynamics which (you know), states that the universe trends to maximum disorder (a/k/a entropy or possibilities). Common folk think disorder is a bad word, but it actually means life and freedom– more possibilities. Most common folk don’t grasp that a uniform distribution is equivalent to an empty set or death– nothing can happen.

    I think Stephen Wolfram’s assertion applies that formal math can never enumerate the complexity of nature. In my theory (and shared by his apparently), there are patterns (what I call “frequencies” in a more abstract sense) even finer grained than Planck’s constant: (my TOE)

    So my thought is that the simplest building blocks (e.g. cellular automata) compose in simple interfaces, to produce complex semantics that can’t even be described by a closed-form algebra. One of his big points is that even if you have the equation, you don’t know what it is going to do, until you run it. Ah, sounds a lot like the Halting problem. Ah, sounds a lot like “universe trends to maximum disorder”. Ok, I know “sounds like” isn’t science, but science is often seeded by inspirational intuition.

    Thus I see the purpose for static typing of interfaces not for proving the complex semantics of composition permutations, but for proving the invariants of composition. Inward looking, not outward, sort of analogous to how referential transparency isolates the function’s implementation from the global state. Of course these types will fail, unless the types themselves are sufficiently simple– by which I mean the type does not protect any futures contract assumption and is valid for any of its invariants (constructor and methods). It is really about the programmer doing exhaustive design. The static typing just enforces those invariants that have been designed. If types are poorly designed, then the static typing won’t help, or might even cause more tsuris.

    In short, we are not trying to prove the limit of the universe, we are just trying to prove that the simple building block has a deterministic semantic as expressed by its interface.

    I accept your point with appreciation and enthusiasm, that there is much more I will learn about this. Thank you for your links. I do have it on my todo list to become more conversant in post-bachelor level formal mathematics. I know just enough in some cases, but not enough as I noted my TOE, I want to learn how to study Q-orders, etc.. Thanks for having the discussion with me, even though I am ignorant in some areas.

  12. phlegmaticprogrammer Says:

    Agreed the compiler can not prove that the function (f) will be instantiated without error by the caller of f, but afaics, it can prove that the invariants of f’s interface are not violated.

    I don’t really know what you mean by that. What do you mean by “that the invariants of f’s interface are not violated” ? Whatever you mean by it, my guess is that you are wrong thinking that a compiler can in general prove such a thing.

  13. Shelby Moore III Says:

    Yes I must be missing something fundamental, but I think I know what you are thinking. What I mean is that in the following code, every call to f() will be sure to respect the invariants of f’s interface, which is that a - b != 0. This rule can never be violated on f’s interface, because it is checked by the compiler.

    I understand you are probably thinking (and I agree) that the caller of f, could forget to check the inputs to NonZeroDiff. Thus you would state that nothing has been checked at compile time.

    I think that the interface of f() has been checked, and this provides benefits, because types can be reused. So as NonZeroDiff is reused, the ratio of constructions to interface checks drops. So we get more checks than non-checks (and this doesn’t factor in the added benefit of reduced human error due to familiarity). This reuse is very important for composability, and it will be more compelling for types which are more reusable than NonZeroDiff. It also helps document the program symbolically, and reduces the amount of redundant human-language documentation. As I said prior, I am also in the dynamic typing camp, because in some use cases (maybe most) the benefits versus costs of using static typing are not favorable. Are those potential static typing benefits completely fictional in your view?

    class NonZeroDiff {
      a, b // these would be declared to be read-only outside the new() constructor
      new( a : Int, b : Int ) {
        if a - b == 0 then throw “Unexpected inputs”
        this.a = a
        this.b = b
      }
    }

    function f( ab : NonZeroDiff ) {
      return (ab.a * ab.b) div (ab.a - ab.b)
    }

    result = f( new NonZeroDiff( 2, 1 ) )

    • phlegmaticprogrammer Says:

      I think the introduction of a NonZeroDiff class just to capture a very simple invariant is code bloat and achieves basically nothing. Instead just trust that “x div y” will throw an exception if y is 0. Or, if you better like the exception to be at the boundary of f instead of within f, just add an assertion, like

      function f(a : Int, b : Int) {
        assert(a != b)
        return (a * b) div (a - b)
      }

      Of course it might make sense not to pass the arguments of a function separately but as a single object of type NonZeroDiff, if you use this type over and over again. Just note that NonZeroDiff then should be a “pure class”, so there will still be the need to throw exceptions in pure code.
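      For concreteness, here are the two options in Python (a hypothetical stand-in; Python checks everything dynamically, so both checks fire at run time):

```python
# Option 1: capture the invariant in a type, checked once at construction.
class NonZeroDiff:
    def __init__(self, a, b):
        if a - b == 0:
            raise ValueError("Unexpected inputs")
        self.a, self.b = a, b

def f_typed(ab):
    return (ab.a * ab.b) // (ab.a - ab.b)

# Option 2: no wrapper class; assert the invariant at the boundary of f.
# (Note: Python asserts are skipped under the -O flag, so this is a debug check.)
def f_assert(a, b):
    assert a != b
    return (a * b) // (a - b)
```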

      • Shelby Moore III Says:

        Agreed, thus the fundamental conclusion I make is that in dynamic typing the check is inside the function being reused, and in static typing the check is outside the function, in the invariants specified by the function’s type.

        In the dynamic-typing case, the function inherently binds its check to every possible external code (tries to define the limit of the universe, which is inherently unstable due to the 2nd law of thermodynamics), because the external code has no check.

        Whereas, in the static-typing case, the external code does its check, in order to comply with the compile-time checked invariants of the function. The external code has the option of passing this check to (a member of) its function type, thus pushing the check outward. But rather than this check being pushed outward in a disorderly (random actually, since we don’t know all permutations of external code a priori when the original function was defined with the dynamic assert) way in dynamic-typing with a bubbling exception, the static-typing bubbles the check at compile-time. And that really is the key insight. Compile-time bubbling, versus run-time bubbling, because try-catch is orthogonal to any compile-time checking on function type.

        I hope I haven’t misstated.

      • Shelby Moore III Says:

        Wow! I was able to eliminate the assertion exceptions entirely:

      • phlegmaticprogrammer Says:

        You are circling around the same mistakes. Your compiler cannot check these preconditions at compile time. In effect, you have only proposed another way of writing down assertion exceptions.

  14. Shelby Moore III Says:

    There are other benefits to static typing, for example an algebraic type which is treated as an enumeration can be checked by the compiler to enumerate all the cases, e.g.

    enum Type { a, b, c }

    switch( x : Type ) {
    case a: …
    case b: …
    case c: …
    }

    Ditto with Haskell’s type and guards.

    So the compiler can check whether a case is missing, if there is no default case.

    Another use of static typing is the compiler can check covariance, so that there is some order to extending invariants that are checked by the compiler.

    Also afaics, the prior discussion about NonZeroDiff was a degenerate case, because it had an exception on inputs to the constructor. Not all types will have an exception. Afaics, a type can be completely consistent with its interface (constructor, methods, and their invariants given by the types of their interfaces) in some (perhaps many) use cases, with careful design. That is why I suggest we can justify more effort on types, when we have types to reuse.

    • phlegmaticprogrammer Says:

      Babel-17 has very powerful pattern matching, and in my programs I rarely use simple enums. When you accept pattern matching as a way of life, you will find that you usually have a default case anyway. If you don’t, then the match is usually complete for reasons that are beyond static type checking. And if you made an error and have an incomplete match that you thought was complete, then a NoMatch exception will be raised when it is encountered at runtime.
