The absence of restrictions on combining functions except for typing constraints has often been claimed to be a major benefit of functional programming, for example by Backus [6] and Hughes [32]. Originally our interest was in the development of a calculus for the derivation of algorithms from a specification, as proposed by Meertens [47] and Bird [9, 10].
Category theory provides a suitable medium to formalise the notion of datatype, as shown by Lehmann and Smith [40], Manes and Arbib [45], and many others. Malcolm [42] showed that actual calculations of algorithms can be rendered in a categorical style. From here it is a small step to apply the calculational style of algorithm derivation to category theory itself. The widespread acceptance of diagram chasing is presumably the reason that this style of deriving categorical properties is relatively unknown. Indeed, only recently have books and papers on category theory appeared in which equational reasoning is explicitly striven for, for example by Lambek and Scott [37] and by Hoare [31].
Overview. The remainder of the chapter is organised as follows. In the next section we define some lesser known categorical concepts and discuss initiality.
Then we specialise the laws for initiality to products and sums in Section 2c, to coequalisers and kernel pairs in Section 2d, and to colimits in general in Section 2e. Each of these sections contains one or more examples of a calculation for the derivation of a well-known result. We conclude with a worked-out example in Section 2f. Many more examples of categorical calculations occur in the remainder of the text. Sections 2b and 2c are essential for the following chapters; Sections 2d, 2e, and 2f may be skipped without loss of continuity, though 2f depends on all preceding sections.
Sections 2d, 2e, and 2f assume more familiarity with categorical concepts, and are intended as a case study in the calculational approach to category theory. The proof of the pudding is in the eating: Often an interesting construction in C can be characterised by initiality in a category A built upon C. We say A is built upon C if: So, A is fully determined by defining its objects and morphisms. The categorician may recognise D as the category of cocones for the diagram D. Dually, the category of cones for D is denoted V D. I owe these notations, and those for D below, to Jaap van der Woude.
It follows that x: An object in a, a is a parallel pair with source a. Category f‖g, where f and g are morphisms in C with a common source and a common target. An object in f‖g is: Let p and q be such objects; then a morphism from p to q in f‖g is: Category f g, where f, g are morphisms in C with a common source; an object in f g is: Let h, j and k, l be two objects; then a morphism from h, j to k, l in f g is: This category is used to define the pushout of f, g. Category Alg F, where F is an endofunctor on C. We shall explain this in more detail in the chapters to come.
Let A be a category, and a an object in A. Then a is initial in A if: Often there is a more specific notation that better suggests the resulting properties; see the following sections. The usual notation for [b] A is! Finality is dual to initiality; an object a is final if: Here are some consequences of Charn. A substitution for x such that the right-hand side becomes true yields Self, and a substitution for b, x such that the left-hand side becomes true yields Id. For Fusion we argue, suppressing A and a: In particular the importance of law Fusion cannot be over-emphasised; we shall use it quite often.
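For reference, the consequences of initiality just mentioned can be summarised as follows. This is a sketch in ad-hoc notation (writing ([b]) for the unique morphism from the initial object a to an object b, taking the typing judgements x : a -> b in A, and using ';' for composition in diagrammatic order); the text's own symbols may differ.

    x : a -> b   ≡   x = ([b])                       (Charn)
    ([b]) : a -> b                                   (Self)
    id = ([a])                                       (Id)
    ([b]) ; x = ([c])   ⇐   x : b -> c               (Fusion)
    x = y   ⇐   x : a -> b  and  y : a -> b          (Uniq)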
If the statement x: In the case of initial algebras Uniq captures the pattern of proofs by induction that two functions x and y are equal; in several other cases Uniq asserts that a collection of morphisms is jointly epic. In general, when A is built upon another category, C say, the well-formedness condition for the notation [b] is that b viewed as a composite entity in the underlying category C is an object in A ; this is not a purely syntactic condition. Here is a first example of the use of these laws: Suppose that both a and b are initial.
By Type and Self they have the correct typing. We prove both implications of the equivalence at once. This gives a nice proof of the weaker claim that initial objects are isomorphic. In other categories products and sums may get a different interpretation. As an introduction to the definition of the categorical sum, we present here a categorical description of the disjoint union. Let C be Set. There are the injections inl: Using the predicate one can define an operation that in programming languages is known as a case construct, and vice versa. Products and Sums This is an important observation; it holds for each representation of disjoint union!
In summary, we call inl: This is an entirely categorical formulation. In addition, the form of the equivalence suggests looking for a characterisation by means of initiality or finality. The entities inl and inr have a common target, and have a and b as their respective sources. This completes the introduction to the definition below. Since there are categories in which the objects are not sets, the categorical construct is called sum rather than disjoint union. Let C be arbitrary, the default category, and let a, b be objects.
Notice that for given f: We shall quite often use this form of definition. 14 Products. Products are, by definition, dual to sums. The usual categorical notation is ⟨f, g⟩. As a first application we show that inl a,a is monic, and by symmetry inr a,a too, and dually each of exl a,a and exr a,a is epic: The choice for g is immaterial; id a certainly does the job. Two binary operations abide with each other if: For later use we define, for f: Throughout the text we shall use several properties of product and sum.
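To fix intuitions, here is a small sketch in Haskell (my choice of metalanguage; the names junc and split are mine, not the text's notation): junc plays the role of the case construct on a sum, split that of ⟨f, g⟩ on a product, and the abide property can be checked pointwise.

    -- A minimal sketch of binary sum and product in Haskell; names are ad hoc.
    junc :: (a -> c) -> (b -> c) -> Either a b -> c      -- the case construct on a sum
    junc f _ (Left  x) = f x
    junc _ g (Right y) = g y

    split :: (c -> a) -> (c -> b) -> c -> (a, b)         -- the pairing <f, g> into a product
    split f g x = (f x, g x)

    -- Characteristic equations: junc f g . Left = f, junc f g . Right = g,
    -- and dually fst . split f g = f, snd . split f g = g.

    -- The abide property: combining two splits by a junc equals combining two juncs by a split.
    abideLHS, abideRHS :: Either Int Int -> (Int, Int)
    abideLHS = junc (split (+ 1) (* 2)) (split (subtract 1) negate)
    abideRHS = split (junc (+ 1) (subtract 1)) (junc (* 2) negate)
    -- abideLHS and abideRHS are extensionally equal.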
Here is a list. Related to this is the lesser known concept of kernel pair. Both will be used in the construction of the congruence relation induced by a given relation, in Section 2f. Let us therefore present the algebraic properties of these concepts; they are categorically known as coequaliser and kernel pair.
Let C be Set, the default category, and fix for the following discussion an object a and a parallel pair f, g with a as common source. The pair f, g is, or represents, a relation on a, namely the one that contains all pairs (f x, g x). We shall now describe the equivalence relation induced by f, g. An equivalence p on a is called proper if: Properness of p means that the target of p is precisely the set of equivalence classes and does not contain unreachable junk.
The equivalence on a induced by f, g is: Then the equivalence p induced by f, g is the function p with p x: Alternatively, the induced equivalence can be expressed by initiality as follows. Let q be an arbitrary equivalence on a that also includes relation f, g. Then, the initiality statement x: Thus p is initial in f‖g. Abstracting from Set and the application here, the initial object in f‖g is called a coequaliser, since in categories different from Set the terminology of relation, equivalence, and inclusion may not be appropriate.
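For finite sets the induced equivalence, and hence the coequaliser, can be computed directly. The following Haskell sketch is mine (the representation by lists of equivalence classes is an assumption, not the text's): starting from singleton classes, the classes of f x and g x are merged for every x, which yields the smallest equivalence containing the relation represented by f, g.

    import Data.List (find)

    -- Sketch: the coequaliser of f, g : b -> a on finite carriers, as the function
    -- sending each element of a to its equivalence class under the smallest
    -- equivalence containing all pairs (f x, g x).
    coequaliser :: Eq a => [b] -> [a] -> (b -> a) -> (b -> a) -> (a -> [a])
    coequaliser bs as f g = classIn (foldl merge initial pairs)
      where
        initial = [[x] | x <- as]                       -- singleton classes
        pairs   = [(f x, g x) | x <- bs]                -- the relation represented by f, g
        merge cls (u, v)
          | cu == cv  = cls                             -- already identified
          | otherwise = (cu ++ cv) : filter (\c -> c /= cu && c /= cv) cls
          where cu = classIn cls u
                cv = classIn cls v
        classIn cls x = maybe [x] id (find (x `elem`) cls)

    -- For instance, coequaliser [0, 1] [0 .. 3] id succ identifies 0, 1 and 2
    -- in one class and leaves 3 in a class of its own.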
We shall present the properties of coequalisers in a way suitable for algebraic calculation. Let C be arbitrary, the default category, and let f, g be a parallel pair. A coequaliser of f, g is: Let p be a coequaliser of f, g, supposing one exists. Now that we have presented the laws, the choice of notation may be evident: The following law confirms the choice of notation once more. Then the proof runs as follows. Another law that we shall use below has to do with functors. As before, let p be a coequaliser. Clearly, this condition is valid when F preserves coequalisers.
The proof of the law reads: Above we have dealt with a categorical description of the equivalence p on a induced by a given relation f, g. Now we consider inducing in the opposite direction. A relation f, g on a is called proper if: Let a set a and an equivalence p on a be fixed for this discussion. This category is designed in such a way that the steps become valid; so it is built upon C as follows. An object in p p is: A morphism from d, e to f, g in p p is: Now, the relation on a induced by p is: This is a lesser known concept in daily set theory, since in set theory a relation is rarely represented as a pair f, g of functions, and moreover the relation induced by p represents the very same relation as p.
Alternatively, the relation induced by p can be expressed by finality as follows. Let f, g be the relation induced by p, and let d, e be an arbitrary relation including p. Then the finality statement x: Thus f, g is final in p p. Abstracting from Set and the application here, the final object in p p is called the kernel pair for p. 21 Laws for kernel pairs. Let C be arbitrary, the default category, and let p be a morphism. A kernel pair of p is: Let f, g be a kernel pair of p, supposing one exists.
Due to the presence of so many pairs the notation is a bit cumbersome, but we refrain from simplifying it here. We do so in paragraph As an example of the use of the laws we prove that the coequaliser and kernel pair form an adjunction. More precisely, let C denote a mapping that sends each parallel pair with common target a to some coequaliser of it, and similarly let K send each morphism with source a to some kernel pair of it: W V We shall extend them to functors C: V To define Cx for a morphism x in a, a we make an obvious choice.
It remains to prove that C is a functor. To define Ku for a morphism u in a we make an obvious choice too. Thus extended, K is a functor by a similar argument as above. 2e Colimits. An initial object is a colimit of the empty diagram, and conversely, a colimit of a diagram is an initial object in the category of cocones over that diagram. Let us use the latter approach to present the algebraic properties of colimits. A diagram in category C is: Category D, built upon C, is defined as follows. An object in D, called cocone for D, is: A colimit for D in C is: In view of the explicit quantifications the above laws for colimits are not very well suited for algebraic calculation, and that is what we are after.
It turns out that this can be formulated categorically by using natural transformations, which are families of morphisms indeed. Several (not all) manipulations on the subscripts can then be phrased as well-known manipulations with natural transformations as a whole. So let us redesign the definitions. I got the suggestion from Jaap van der Woude; Mac Lane [41], Lambek and Scott [37], and several others use the following formulation too. As regards the property of being a cocone we can say without loss of generality that a directed graph is a category D: Conversely, each category D determines a graph by taking all morphisms as edges, and forgetting which morphisms are composites and which are identities.
A labelling of the edges with morphisms from C is then a functor D: This leads to the following definitions. A diagram in C is: Category D is built upon C as follows. An object in it, again called cocone for D , is: Again, a colimit for D is: For natural transformations in general, hence for cocones in particular, the following definitions are standard. We present the well-known construction of an initial F -algebra.
You may skip this application without loss of continuity. Our interest is solely in the algebraic, calculational style of various subproofs. The notion of F -algebra has been defined in paragraph 7 without any explanation. Read the steps and their explanation below in parallel! We shall now complete the construction in the following three parts. This is easily achieved by making D a chain of iterated F applications, as follows. The zero and successor functors 0 , S: Let 0 be an initial object in C. Define the diagram D: This completes the entire construction and proof.
We include it mainly to illustrate once more an algebraic calculational approach to category theory, in particular in a case where pushouts are involved. I consider it rather a case study. Although you should be able to follow the calculations step by step, you will probably not understand what is going on if you are not familiar with the notions of pushout and colimit. We start with a categorical description of two different notions of induced congruence, then we introduce a notation that facilitates an algebraic calculation with pushouts, and finally we give a construction of one of the induced congruences and its correctness proof.
The notions and notations of the preceding sections are used throughout. Recall from Section 2d the notion of equivalence. Aiming at a formulation in Alg F the following definition suggests itself. The intersection makes sense since W both categories are subcategories of another one, namely a. The analogy may be exploited in generalising a construction of coequaliser to a construction of the induced alg-congruence. This has been done by Lehmann [39].
However, the underlying category C, and not Alg F, is the universe of discourse. The morphisms of C are —for us— all the algorithms that exist, and only some of these are in Alg F too. So, here is my self-made definition directly in terms of C. For later use we rephrase this as follows. Then the defining implication 32 is established by: When u is a pre-inverse, the target algebra of the homomorphism p is independent of the choice for a pre-inverse of p: So, in Set the notions of alg- and base-congruence coincide, and in arbitrary categories an initial base-congruence p has also the initiality property with respect to the alg-congruences, though p itself is not necessarily an alg-congruence.
Thus it is to be expected that a categorical construction of the initial base-congruence requires stronger conditions on the underlying category and F than the construction of Lehmann [39]. I have not been able to check this in detail. Before we can present a construction of the induced congruence, we introduce some more notation and formalise categorically the union of equivalences. It is quite important to be aware that the source category of K is a and not C. For suppose that p is an object in a and x, y are morphisms in a so that all three are morphisms in C. With this notation the definition of congruence admits an alternative formulation.
The former claim is obvious. An equivalence p on a is called proper if function p is surjective. We shall now give a categorical description of the proper union of two proper equivalences; this turns out to be a pushout construct. So, let C be Set , and let a be a set and p, q be proper equivalences on a , fixed for the following discussion. Here is the typing of p and q , and the variables used in the sequel. Indeed, the r so defined has source a , and if two elements of a have an equal image under p , or q , then they have an equal image under r as well.
Similarly for r. Finally, p t q is: An explicit expression for p t q is readily constructed. Alternatively, p t q can be expressed by initiality as follows. Let equivalence s, including both p and q and represented by s′, s″, be arbitrary. Then the initiality statement x: So, indeed, p t q is initial in p q. Abstracting from Set and the application here, an initial object in p q is called a pushout of p and q. Let C be arbitrary, the default category.
Let p and q be morphisms W with common source. The pushout of p and q is: For those who know pushouts, w p t q is the pushout of q along p , and, in the conventional diagrammatic representation w w of the pushout square, p t q is parallel to q as suggested by the symbol t. Similarly w for p t q , and p t q denotes the diagonal.
41 The construction. First we define the objects Dn in a. We shall nowhere use these clauses explicitly. It is routine to verify, by induction on n, that D satisfies the typing as indicated, and hence D: Interpreted in Set, equivalence p is defined to be the union of all the equivalences Dn. Moreover, p is epic in C. Morphism p is epic in C.
45 Lemma. By induction on n. This is a very weak guess since many categorical constructions have this form. The calculations are quite smooth; there were few occasions where we had to interrupt the calculation, for establishing an auxiliary result or for introducing a new name for a morphism. Thanks to the systematisation of the notation and laws for the unique arrows brought forward by initiality, there is less or no need to draw or remember commutative diagrams for the inspiration or verification of a step in a calculation.
Each step is easily verified, and there is ample opportunity for machine assistance in this respect. More importantly, the construction of required morphisms from others is performed as a calculation as well. There are several places where a morphism is constructed by beginning to prove the required property while, along the way, determining more and more of an expression for the morphism.
Thus proof and construction go hand-in-hand, in an algebraic style. There is one purpose for which pictures are certainly helpful: All calculations can be interpreted in Set so that, actually, we have quite involved calculations with algorithms (functions). Calculations with algorithms working on more usual datatypes will be explored further in the next chapter.
Chapter 3 Algebras categorically Roughly speaking, an algebra is a collection of operations, and a homomorphism between two algebras is a function that commutes with the operations. Homomorphisms are computationally relevant and calculationally attractive; they occur frequently in transformational programming. Algebras are also used to define the notion of datatypes. The language of category theory provides for a simple and elegant formalisation and investigation of homomorphisms and algebras; it also suggests a dualisation and several generalisations.
Expressed at the function level this reads: A further generalisation of the equation reads: The equation asserts the semantic equality of two different ways of computing the same value. In case the equation holds, the efficiency of a program may be improved by replacing the one by the other. Notice also that such a program transformation need not be done with an immediate efficiency improvement in mind, but may be done to enable future transformations that do improve the efficiency in the end.
Therefore such generalised distributivity properties are relevant for transformational programming. For a useful formal treatment we generalise the source structure of the operations from II to an arbitrary functor F. This generalisation also captures the distribution over several operations simultaneously, as shown by the following calculation. Thus promotability of f is nothing but the property that f is a homomorphism. More precise definitions are given in the sequel.
The generalisation from II to an arbitrary functor F is not yet the full story. Such a property is again quite relevant for transformational programming. We shall see in Section 3d that a collection of algebras and co-algebras together is a single dialgebra, and that the notion of dialgebra also covers many-sortedness.
A further motivation to study (di)algebras is their use in formalising the notion of datatype. So, part of a datatype is a particular algebra; the distinguishing property is categorically known as initiality of the algebra. Dualisation leads to the notion of final co-algebra; less known, but quite useful as we shall see. There are reasonable conditions on F in order that an initial F-algebra, or final F-co-algebra, exists. I do not know of similar conditions for dialgebras in general. Hagino [29] shows that function spaces (exponentials in category speak) are dialgebras.
The following definition captures the preceding observations. Anticipating laws homo-Id and homo-Compose in paragraph 13, we also define the category of (di-, co-)algebras (but see the remarks that follow the definition). We postpone the discussion and formalisation of laws (conditional equations) satisfied by operations (algebras) to Chapter 5. An F, G -dialgebra is: Category DiAlg F, G is: An F -algebra is: The two formulas for Homo are easy to remember, in spite of the swap of F, G when comparing the two formulas. The order of F, G in the notation f: As regards the equation, since F describes the source structure of the algebras, morphism F f can only sensibly occur at the source side of a dialgebra; similarly, Gf can only sensibly occur at the target side.
Strictly speaking the definition of the categories is wrong in the sense that the morphisms in DiAlg F, G —as defined above— do not have a unique source and target. It may happen that both the equation denoted by f: To repair this defect, the definition has to be refined. Since category C is intended as the universe of discourse, the equation f: Functor U is usually called an Underlying or forgetful functor. For algebras and co-algebras there is nothing the matter since functor I is injective. Whenever the equation denoted by f: Examples: dialgebras. 6 Naturals. Recall the datatype of naturals as explained in paragraph 1.
The single operation zero is a 1-algebra with carrier nat. The single operation succ is an I-algebra with carrier nat. 7 Cons lists. Recall the datatype of cons lists over a as explained in paragraph 1. The single operation nil is a 1-algebra with carrier La.
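In Haskell (my choice of metalanguage; the data type names are mine) these examples read as follows: an F-algebra with carrier c is just a function of type F c -> c, and the constructors of the naturals and of cons lists combine into algebras of the functors 1 + Id and 1 + (a ×) respectively.

    {-# LANGUAGE DeriveFunctor #-}

    -- Sketch: an F-algebra with carrier c is a function of type f c -> c.
    type Algebra f c = f c -> c

    -- Naturals: zero is a 1-algebra and succ an I-algebra; together they form an
    -- algebra of the functor N x = 1 + x.
    data NatF x = ZeroF | SuccF x  deriving Functor

    natAlg :: Algebra NatF Integer
    natAlg ZeroF     = 0
    natAlg (SuccF n) = n + 1

    -- Cons lists over a: nil is a 1-algebra and cons an (a ×)-algebra; together
    -- they form an algebra of the functor L x = 1 + a × x.
    data ListF a x = NilF | ConsF a x  deriving Functor

    listAlg :: Algebra (ListF a) [a]
    listAlg NilF        = []
    listAlg (ConsF a x) = a : x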
The single operation size is a nat -co-algebra with carrier La , as well as a La -algebra with carrier nat. Similarly as above, various combinations of hd , tl , and from form F -coalgebras or F -algebras, for suitably chosen functors F. Here is just one example. A rose tree over a is a multi-forking tree with labels at the tips. Meertens [46] discusses these in detail. Let Ra be the set of rose trees over a. The constructors are tip: Specifically, the defining equations of size in paragraph 1.
The last line is immediate by writing out the equations in detail, as we did above for f, or by applying homo-Sum. We have already argued in paragraphs 1 and 2 that homomorphisms are computationally relevant. They are also calculationally attractive since they satisfy a lot of algebraic properties. The first two are very important and frequently used. Each of the laws is an abstraction and generalisation of a pattern of reasoning that occurs somewhere in this text.
Law homo-Sum is proved in paragraph 2. Law homo-Compose states that homomorphisms compose nicely; together with homo-Id it asserts that F, G -dialgebras form a category; the category is called DiAlg F, G and defined in paragraph 4.
Law homo-Ftr2 states that functors H: Law homo-Swap is less general than it seems at first sight: However, sometimes the unabbreviated formula may be much clearer than that with the arrow notation. As an example, the following law becomes almost trivial by just unfolding the arrow.
Use of the laws. Suppose that inits, tails: The proof is simple, thanks to the notation and laws for homomorphisms. Actually, the proof can be simplified further by noting that inits, tails, and flatten are natural transformations, and so is segs. 3b Initiality and catamorphisms. We explain here informally what initiality in Alg F means, and also finality in CoAlg F. Initiality or finality in DiAlg F, G in general has, as far as I know, no immediate practical relevance; moreover, I know of no simple conditions on F, G that ensure that an initial or final object in DiAlg F, G exists.
Prefix cata is explained below in paragraph The premise of cata-Fusion for instance can be formulated as x: The arrow notation makes it easier to apply the homo-Laws discussed in paragraph Using the arrow notation the laws read as follows. The proof of cata-Compose is simple: Now look at the left hand side of cata-Charn: The other laws have a similar informal interpretation. Thus cata-Uniq captures, in a sense, induction. So a catamorphism is nothing but a homomorphism on an initial algebra. It is useful to have a separate name, since in contrast to homomorphisms they are not closed under composition but do satisfy the laws listed above.
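As a sketch in Haskell (assuming the usual functor-fixpoint representation of the initial algebra, which is only an approximation of the text's construction), a catamorphism is the unique algebra homomorphism out of the initial algebra, and the laws above become statements about ordinary function composition.

    {-# LANGUAGE DeriveFunctor #-}

    -- Sketch: the initial F-algebra taken to be the fixpoint In : f (Fix f) -> Fix f.
    newtype Fix f = In { out :: f (Fix f) }

    cata :: Functor f => (f a -> a) -> Fix f -> a        -- the catamorphism of an algebra phi
    cata phi = phi . fmap (cata phi) . out

    -- cata-Self  : cata In = id
    -- cata-Fusion: h . cata phi = cata psi   provided   h . phi = psi . fmap h
    -- cata-Uniq  : x = cata phi              provided   x . In = phi . fmap x

    -- Example: the unique homomorphism from the naturals to the integers.
    data NatF x = ZeroF | SuccF x  deriving Functor
    type Nat = Fix NatF

    toInt :: Nat -> Integer
    toInt = cata alg
      where alg ZeroF     = 0
            alg (SuccF n) = n + 1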
In the literature on functional programming, catamorphisms on cons lists are called fold or iterate. To prove this, we have to establish a pair x, y of morphisms in Alg F (F-algebra homomorphisms in C), x, y: The existence of a candidate for y is problematic for dialgebras in general. It remains to show that these choices are indeed each other's inverses. For this we argue: It is a post-inverse too: Recall that in each category all initial objects are isomorphic to each other, even with precisely one isomorphism between each pair.
There exist categories and endofunctors F for which there is no initial F -algebra. Yet, for Set and various order-enriched categories such as CPO the class of functors for which an initial algebra exists is quite large. The key to this result has been shown in paragraph 2. All functors generated by the grammar F:: The induced type functor is defined in paragraph Malcolm [42] has proved the result especially with regards to the last clause.
We shall return to this in paragraph 6. There are several instances of such proofs in the sequel. Examples: initial algebras and catamorphisms. 34 Naturals. It is initial in Alg F. Working out this definition, we find: 35 Cons lists. Specifically, the function mentioned in the last line is cons a0, .
Modern programming languages allow, amongst other things, the definition of a new type by enumerating the elements of the type. As an example we show how to define a type color with three elements red, white, blue. Hence, color is a set consisting of just three elements, called red, white, and blue. To be continued in paragraph ; see also Section 5f. The carrier of the initial algebra consists of expressions, the tree-structure being determined by F. For this choice the equation of the previous example is quite complicated to write down in a readable way as one equation. But it is easy to derive a similar equation.
Repeated application of the latter equation, and once using the former, gives that is, zero ; succ ; succ ;. More generally, in each category the morphism id 0: By definition final co-algebras and anamorphisms are the dual notions of initial algebras and catamorphisms. The definitions and laws are obtained by the mechanical process of dualising.
So we can be brief here. Let F be an endofunctor. Notice that most equations merely express that x is a homomorphism of a certain type. The premise of ana-Fusion, for instance, can be written as x: Now look at the left hand side of ana-Charn: Equation 43 tells us that the destruction of the result of x (the right hand side) can be computed as given in the left hand side. This type of definition, and algebra, is far less known than that for initial algebras. The other laws have a similar interpretation. More generally, in each category morphism id 1: Actually, this is nothing but the dual of the observation in paragraph . It is final in CoAlg F.
So the outcome of x is expressed in a way not involving x , and therefore function x itself is well defined.
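Dually, here is a Haskell sketch (again assuming the fixpoint representation, which by laziness also serves as a final co-algebra; the names are mine): the anamorphism is the unique coalgebra homomorphism into the final co-algebra, and the streams nats and ones mentioned below arise as anamorphisms.

    {-# LANGUAGE DeriveFunctor #-}

    newtype Fix f = In { out :: f (Fix f) }

    ana :: Functor f => (a -> f a) -> a -> Fix f          -- the anamorphism of a coalgebra psi
    ana psi = In . fmap (ana psi) . psi

    -- ana-Self  : ana out = id
    -- ana-Fusion: ana psi . h = ana chi   provided   psi . h = fmap h . chi

    -- Streams over a as the final coalgebra of the functor S x = a × x.
    data StreamF a x = ConsS a x  deriving Functor
    type Stream a = Fix (StreamF a)

    nats, ones :: Stream Integer
    nats = ana (\n -> ConsS n (n + 1)) 0      -- 0, 1, 2, ...
    ones = ana (\u -> ConsS 1 u) ()           -- 1, 1, 1, ...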
Recall the F -co-algebra destruct 0: The only informality in our argument is the claim that the infinite sequence x;! Accepting the claim, the reasoning is straightforward. By induction on n it is easily shown that, for all n , x ; F 0 destruct 0 ;. So each of the functions in the list can be written as an expression in terms of known functions, not involving x. This equation complies with the explanation in paragraph 1.
It is not an initial one: Indeed, the typing implies that x maps an infinite list onto a finite list, but the equations imply that each result has the same length as its argument. Also, for finite sets a containing at least two elements the cardinality of La is countably infinite, whereas that of L′a is uncountable; hence the carriers are not isomorphic, implying that the algebras are not isomorphic in Alg F.
As an example, the streams nats, ones, and nils are now readily defined. For each a and f: Moreover, the entire construction is natural in I-algebra f. This is formally shown in paragraph . Let us specialise the general iteration construct to cons′ lists. We wish to express the cons and cons′ list of all predecessors of a nat argument as a cata- and anamorphism, respectively. For preds: Hence in Set they define preds uniquely.
This equation has almost the form of the equations for F-catamorphisms. So it is not obvious that preds is a catamorphism. Actually, not every paramorphism is a catamorphism, but this one is. We will discuss paramorphisms briefly in Section 4b. Again the top line of the following calculation is taken for granted, and at least in Set it defines preds′ as a total function. Notice the correspondence with the equations and calculation for preds. So preds′ is indeed an anamorphism. A possibly finite, possibly infinite cons′ list is produced by an until construct: Define in Set for predicate p on a the function p?: A construction of p?: Now, for arbitrary f: Apart from the construction of p?
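A brief Haskell sketch of the paramorphism alluded to here (the name para and the representation are mine): the algebra receives, besides the recursively computed results, the original substructures, and preds is the standard example.

    {-# LANGUAGE DeriveFunctor #-}

    newtype Fix f = In { out :: f (Fix f) }

    -- Sketch of paramorphisms: like cata, but each recursive position also carries
    -- the original subterm.
    para :: Functor f => (f (Fix f, a) -> a) -> Fix f -> a
    para phi = phi . fmap (\t -> (t, para phi t)) . out

    data NatF x = ZeroF | SuccF x  deriving Functor
    type Nat = Fix NatF

    -- preds n is the cons list of all predecessors of n: preds (succ n) = n : preds n.
    preds :: Nat -> [Nat]
    preds = para phi
      where phi ZeroF          = []
            phi (SuccF (n, r)) = n : r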
For the datatype of lists the so-called map is well known and frequently used; see for example Bird [9]. Recall the algebra of cons lists over a where, now, a is considered to be a parameter rather than a fixed set, and actually nil, cons are functions of a: Writing Lf for the well-known function map f it follows that Lf: We shall show that these observations hold not only for the particular functor Fa for cons lists, but also for each parametrised functor Fa that depends functorially on a: We prefer the name sumtype functor or briefly type functor, since the word map is already in use for various meanings, and sumtype is quite well chosen as explained in paragraph . Malcolm [42] has already formulated and proved all the laws.
My contribution is merely some extra subscripts at various places, some slight generalisation, and some more examples. We discuss initial algebras in detail, and then dualise the results to final co-algebras, giving prodtype functors.
Both sumtype and prodtype functors are called just type functors. The generalisation to arbitrary dialgebras is sketched in paragraph . Take a category C as the default category. Define a mapping M on objects as follows. Thus we have derived a candidate definition for M: Prodtype. 55 Type functors for dialgebras. The generalisation to arbitrary dialgebras is a bit tricky, and we shall nowhere use it.
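The action of the sumtype functor on morphisms can be sketched in Haskell as follows (a reconstruction under the fixpoint representation, with names of my own choosing): the image of a morphism f is the catamorphism whose algebra first relabels the parameter positions with f and then rebuilds with the constructors.

    {-# LANGUAGE DeriveFunctor #-}

    newtype Fix f = In { out :: f (Fix f) }

    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata phi = phi . fmap (cata phi) . out

    -- Cons lists as the initial algebra of F_a x = 1 + a × x.
    data ListF a x = NilF | ConsF a x  deriving Functor
    type List a = Fix (ListF a)

    -- The type functor acts on a morphism f : a -> b as a catamorphism:
    -- relabel with f, then rebuild with the constructors of List b.
    mapL :: (a -> b) -> List a -> List b
    mapL f = cata alg
      where alg NilF        = In NilF
            alg (ConsF a x) = In (ConsF (f a) x)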
This is also expressed in the third line of the calculation. 57 Laws for sumtype. Here are some useful laws.
In each law, functor M: Some explanation and explicit typing follows the enumeration. cata-Ntrf. Examples 62–67 illustrate some laws. Fully typed the law reads: The ingredients of law cata-Transformer are typed as follows: Then the preceding laws dualise to these. For the —simple— proofs of the sumtype and prodtype laws we refer to Malcolm [42]. 60 Syntactic sugar. For example, cons lists can be defined by the declaration sumtype Lx with rightreduce is nil: Apart from defining L and nil, cons in the obvious way, the declaration also defines rightreduce a e, f for all a, b and e: More abstractly, the declaration sumtype M x with cata is alpha: This is explained in detail in Examples 38 and . Using again an enumeration of the components of alpha, the declaration prodtype Sx with generate is hd: Needless to say, a declaration itself does not guarantee the existence of the declared entities.
61 Sumtype, Prodtype. Examples. 62 To illustrate sumtype-Distr. Then the composite sumsquares is a single catamorphism; usually this is proved by the Fold-Unfold technique. And for arbitrary f, g: Consider once more the datatype of streams: The following calculations show that zip and zipwith-f are anamorphisms. Recall the definition of f-iterate given in paragraph . Iteration is a transformer: The proof above is just a few lines long (or 75 pages, depending on what you consider to be part of the proof).
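The sumsquares example above can be made concrete with ordinary Haskell lists (a sketch of mine, not the text's calculation): the composite of the map and the summing catamorphism fuses, by cata-Fusion (or sumtype-Distr), into a single catamorphism, that is, a single fold.

    -- Sketch on ordinary Haskell lists.
    square :: Integer -> Integer
    square x = x * x

    sumsquares, sumsquares' :: [Integer] -> Integer
    sumsquares  = sum . map square                     -- a type-functor action followed by a catamorphism
    sumsquares' = foldr (\x acc -> square x + acc) 0   -- the fused single catamorphism

    -- For every finite list xs, sumsquares xs == sumsquares' xs; the equality is an
    -- instance of cata-Fusion (sumtype-Distr) rather than of Fold-Unfold reasoning.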
Most of the results reported here were observed by Meertens [23]. Thus Mu is a functor, Mu: G form a subcategory of C. Since Mu is a functor, there is a simple proof that reverse is its own inverse: By the remark following cata-Compose 26, reverse ; f is a catamorphism whenever f is. The latter example is the key to the factorisation of the sumtype functor. 3d Many-sortedness and other variations. 73 Many-sorted algebra. The notion of (di)algebra is rich enough to model many-sorted (di)algebras. As an example consider the collection ⟨bool, nat; true, false, bool-to-nat, zero, succ, equal⟩.
This collection is or suggests a two-sorted algebra, the two sorts types being bool and nat. In view of the typing bool-to-nat: We shall show that by instantiating the underlying category to a product category, category Alg F consists of many-sorted algebras indeed. Besides that, a single initial many-sorted algebra can be expressed as many initial single-sorted algebras.
Thus the existence conditions and the construction for initial algebras over a product category are reduced to initial algebras over the component categories. It has the advantage that the formulas are simpler than in the general case, whereas all essential aspects are covered.
The example above motivates the following definition. There is, however, a simpler definition for two-sorted algebras. Recall the notion of product category: The following theorem has already been observed by Wraith [77]. Then it follows that M: For stack there are some laws that relate the operations to each other; this aspect, not relevant here, is discussed in Chapter 5. Also, bialgebra stack is special among the bialgebras of the same type in that it is initial in some sense.
Also this aspect is irrelevant for the formalisation of bialgebra proper. Clearly, the F, G -bialgebras and homomorphisms form a category, called BiAlg F, G, that is built upon the default category. Similarly to many-sorted algebras, a bialgebra is a particular dialgebra. So in contrast to the case for two-sorted algebras, f is not a pair. The generalisation to two-sorted bialgebras is straightforward. A morphism in this category is probably just what you might wish as a morphism for a two-sorted bialgebra.
The morphisms of the category are just what you might have expected. The categorical formalisation of distributivity leads to the notion of homomorphism, with a collection of operations being a dialgebra. There are a lot of laws for dialgebras, and these make it possible to calculate with algorithms in an algebraic way, in the sense of high school algebra. In order to make clear the pattern or structure of an algorithm, and thus to discover the homomorphisms involved, it is very helpful if algorithms are expressed as compositions of functions, rather than as cascaded applications of functions to arguments.
A possible drawback is the presence of a lot of combinators that are algorithmically not interesting, and whose only purpose is to get the arguments in the right place. Initial algebras and final co-algebras turn out to be a formalisation of the intuitive notion of datatype.
The initiality or finality of the (co)algebras gives further laws that make it possible to calculate with functions defined by induction on the structure of the source algebra or target co-algebra. The Fusion laws are quite important for efficiency improvement since they exploit the distributivity property of one of the functions involved. Interestingly, in a more general context and without efficiency considerations in mind, the Fusion law has turned out to be an important law for calculation with functions, as shown in Chapter 2.
Though most of the theorems of this chapter may be known, or even well-known, it is certainly not the case that the algebraic style of calculating with algorithms is common coin. There are more types of equations that have precisely one solution and can therefore be characterised by laws like Charn. In some cases such laws have consequences similar to the Fusion law that we know for cata- and anamorphisms and that is so useful for program transformation. One type of equation gives an alternative view on the datatype; another type of equation gives mutumorphisms (mutually recursive definitions); a third type has solutions that we call prepromorphisms and, dually, postpromorphisms.
As an aside, we derive sufficient conditions for the equality of a cata- and an anamorphism, and illustrate these by expressing a transpose function both as a cata- and as an anamorphism. In practice one encounters equations that do not fit the pattern above, yet have precisely one solution. The uniqueness means that a characterisation like Charn is possible, hence also laws like Self and Uniq, and maybe also Id and Fusion. These laws are useful for program transformations. Unique fixed points. Such an equation may refer not only to the recursive (inductive) invocations of f, but also to the arguments that were passed to those recursive invocations.
As Meertens proves, and we will do so in Section 4b, paramorphisms satisfy properties similar to those of catamorphisms; in particular a Fusion law. We investigate three different kinds of equations. Then in Section 4b we consider mutually recursive definitions, or rather simultaneous inductive equations. In the dual case the recursive invocations in the equation for anamorphisms are succeeded by some postprocessing. As an aside, we give in Section 4c two conditions under which a catamorphism equals an anamorphism.
The law is illustrated by the equivalence proof for a catamorphism and an anamorphism expression for a kind of array transpose. In particular one uses both induction on the cons structure and induction on the join or snoc structure to define functions on lists. We set out to describe this phenomenon formally, for datatypes in general. For concreteness, however, we refer to lists. More precisely, the intended correspondence between cons and snoc lists is suggested by cons a,. Suppose you want to define functions on cons lists by induction on the snoc pattern; think of left reduces.
We do not elaborate rev here, but assume that rev is its own inverse; compare with reverse discussed in paragraph 3. The system of equations has precisely one solution f, so that these equations do define functions on La. The second method is as follows. We shall now spend some words on both methods. Except for the specific choices the discussion in paragraph 3 is completely general. We can also formally define a notion of isomorphism between an F-algebra and a G-algebra, or, more generally, between dialgebras of different type. To this end define the category DiAlg, built on the default category, as follows.
An object in DiAlg is: So defined DiAlg is a category indeed. The use of auxiliary functions is commonplace in programming. Often a function or algorithm f is easily expressed by induction on the structure of the argument, provided that some function g may be used; where g is expressed by induction too, using f in its turn. We call such functions mutumorphisms mutu arising from mutually recursive. The discussion below formalises the folklore intuition that such mutumorphisms can be expressed in terms of a single recursive function. In addition it follows that mutumorphisms have nice calculational properties, including a Fusion law.
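A small Haskell sketch of the tupling involved (the example and the names are mine): two functions on the naturals, each defined by induction and using the other, tuple into a single catamorphism; the pair underlying the Fibonacci function is the classic instance.

    {-# LANGUAGE DeriveFunctor #-}

    newtype Fix f = In { out :: f (Fix f) }

    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata phi = phi . fmap (cata phi) . out

    data NatF x = ZeroF | SuccF x  deriving Functor
    type Nat = Fix NatF

    -- fib and fib' are mutumorphisms:
    --   fib  0 = 0,  fib  (n+1) = fib' n
    --   fib' 0 = 1,  fib' (n+1) = fib n + fib' n
    -- Their tupling is a single catamorphism, from which fib is recovered by a projection.
    fibPair :: Nat -> (Integer, Integer)
    fibPair = cata phi
      where phi ZeroF           = (0, 1)
            phi (SuccF (f, f')) = (f', f + f')

    fib :: Nat -> Integer
    fib = fst . fibPair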
Specific cases arise when one or both do not really depend on the other. The proof of the theorem is simple. Since the theorem asserts that the tupling of mutumorphisms is a catamorphism, there is a characterisation of mutumorphisms (the theorem!). A specific notation may make it more clear. Lambert Meertens put them together in juxtaposition to name this law. Voermans and Van der Woude [73]. We present two such conditions, and illustrate one of them by proving the equality of two ways to express a certain kind of array transpose.
Both an anamorphism and a catamorphism are solutions of a certain kind of fixed point equation. So [[F]] is characterised by ufp-Charn below, and hence satisfies the two other laws as well. So by suitable instantiations of F and G there result conditions not only for the equality of a cata- and an anamorphism, but also for two catamorphisms and for two anamorphisms.