O-notation in Algorithm Analysis


30.01.2014 (17.11.2016)

O-notation in Algorithm Analysis (updated). The best version of the thesis, updated with the newest results and built with the latest LaTeX packages.

O-notation in Algorithm Analysis (updated, arXiv), arXiv 1309.3210. Corresponds to the updated thesis, but built with arXiv’s old LaTeX packages; less pretty.

O-notation in Algorithm Analysis, Doctoral Thesis, Tampere University of Technology, volume 1427, October 2016. The thesis at the time of defence. Pretty, but lacks the newest results.

A General Definition of the O-notation for Algorithm Analysis, Kalle Rutanen, Germán Gómez-Herrero, Sirkka-Liisa Eriksson, Karen Egiazarian, Bulletin of EATCS 117, October 2015. A brief 30-page journal paper.


These videos summarize some of the research in the thesis.

Part 1: Linear dominance

Part 2: Local linear dominance

Part 3: Practice


We provide an extensive list of desirable properties for an O-notation — as used in algorithm analysis — and reduce them to 8 primitive properties. We prove that the primitive properties are equivalent to the definition of the O-notation as linear dominance.
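For reference, the definition as linear dominance can be sketched as follows (restated from the summary above; the paper's exact notation may differ):

```latex
% Linear dominance: no points of the domain are ignored.
% f \in O(g) holds if and only if f is bounded by a constant
% multiple of g at every point of the domain X.
f \in O(g) \iff \exists c > 0 : \forall x \in X : f(x) \leq c\, g(x)
```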

We abstract the existing definitions of the O-notation under local linear dominance, and show that it has a characterization by limits over filters for positive functions.

We define the O-mappings as a general tool for manipulating the O-notation, and show that Master theorems hold under linear dominance.


Most of the research was done under funding from the Finnish National Doctoral Programme in Mathematics and its Applications.

Additional research was done under funding from the Science Fund of the City of Tampere during the summer of 2015.

The preorder diagram in the paper is adapted from a figure created by David Wilding, used with his permission.

In the introduction, I use svg-cards playing cards designed by David Bellot.

Raphael Reitzig provided extensive feedback on the paper related to content and typographical concerns. The limit theorems are the result of his suggestion to study how limits are related to the O-notation. The idea to draw inside the O-symbols is his.

Jarkko Kari suggested to study the independence of the primitive properties, and pointed out how some of those results already followed from the study of candidate definitions.

The comments from the anonymous reviewer improved the presentation of the paper significantly. Among the most important was the idea to divide sub-homogeneity into a weaker sub-homogeneity property and a separate cancellation property.

Many thanks to Yuri Gurevich, the editor of the Logic in Computer Science column, and to the anonymous reviewer, for possessing the open minds I was looking for.


Paul Vitányi noted that I missed the reference “Big omega versus the wild functions”, Paul M.B. Vitányi, Lambert Meertens, ACM SIGACT News, Volume 16, Issue 4, April 1985, pp. 56-59, ACM New York, NY, USA. Apologies for this omission. The reference has been added to the arXiv version of the paper.

I had to make some compromises in the arXiv version to be able to submit it, since arXiv hosts old versions of LaTeX packages. Perhaps the most visible of these are that I could not use biblatex, and that I had to refer to theorems by their labels rather than by their names.

Automated checking

In the paper's LaTeX source, I have annotated the theorems and their proofs in a simple language, which allows a computer program to check that there are no extraneous assumptions, and that the assumptions of a theorem are fulfilled before it is invoked. The program to check the paper can be found here.
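As an illustration of the idea (my own minimal sketch, not the author's actual program; the annotation format here is hypothetical), such a checker only needs to compare the assumptions available at each invocation against the assumptions the invoked theorem declares:

```python
# Illustrative sketch of an assumption checker. Each theorem lists the
# properties it assumes, and the (theorem, properties-in-scope) pairs
# that its proof invokes. The checker reports invocations with missing
# assumptions, and declared assumptions that no invocation needed.
theorems = {
    "transitivity-of-O": {
        "assumes": {"order-consistency", "transitivity"},
        "invokes": [("dominance-is-preorder",
                     {"order-consistency", "transitivity"})],
    },
    "dominance-is-preorder": {
        "assumes": {"order-consistency", "transitivity"},
        "invokes": [],
    },
}

def check(theorems):
    errors = []
    for name, thm in theorems.items():
        used = set()
        for invoked, in_scope in thm["invokes"]:
            required = theorems[invoked]["assumes"]
            missing = required - in_scope
            if missing:
                errors.append(
                    f"{name}: invokes {invoked} without {sorted(missing)}")
            used |= required & thm["assumes"]
        # The extraneous-assumption check is meaningful only for
        # theorems whose proofs invoke other theorems.
        if thm["invokes"]:
            extraneous = thm["assumes"] - used
            if extraneous:
                errors.append(
                    f"{name}: extraneous assumptions {sorted(extraneous)}")
    return errors
```

Running `check(theorems)` on the example above reports no errors; weakening a theorem's assumptions immediately flags every invocation that depended on them, which matches the snowball effect described later on this page.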


Review timeline

arXiv timeline


First arXiv paper

The dates in these initial entries refer to when I had a given idea, not when I wrote it down. They were recovered from the version control of the paper source.


The idea for this paper came after reading Rodney R. Howell's paper On asymptotic notation with multiple variables, which demonstrated problems with the multivariate definition, particularly with the subset-sum rule. It occurred to me that the subset-sum rule does not hold for the univariate big-oh notation either. I then set out to find a new definition that would fix this problem. My first intuition suggested that while ignoring points in the domain of a function could be fundamental, that process should perhaps be carried out in another way.


My first alternate way to ignore points was to use affine dominance: f is dominated by g whenever there exist c > 0 and d ≥ 0 such that f(x) ≤ c g(x) + d for all x.

This was loosely inspired by the difference between Riemann and Lebesgue integration. In terms of the primitive properties, affine dominance fulfills order-consistency, reflexivity, transitivity, one-separation, locality, positive scale-invariance, and the composition rule. However, it does not fulfill zero-separation or multiplicativity.
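The additive constant has a curious consequence, which is plausibly what zero-separation rules out (assuming the name means, roughly, that a positive constant should not be dominated by the zero function):

```latex
% Assuming affine dominance means: f \preceq g iff there exist
% c > 0 and d \geq 0 with f(x) \leq c\,g(x) + d for all x.
% Then, taking c = d = 1,
1 \leq 1 \cdot 0 + 1 \quad \text{for all } x,
% so the constant function 1 is affinely dominated by the zero
% function; every bounded function ends up in the O-set of 0.
```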

My second alternate way to ignore points was to use cofinite linear dominance (ignore finitely many points). In terms of the primitive properties, cofinite linear dominance fulfills order-consistency, reflexivity, transitivity, zero-separation, one-separation, locality, positive scale-invariance, and multiplicativity. However, it does not fulfill the composition rule.
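A sketch of the kind of failure involved (my own example, not necessarily the one from the paper): once even one point may be ignored, right-composition can concentrate the ignored point everywhere.

```latex
% Let f and g agree everywhere except at a single point p, with
% f(p) > 0 and g(p) = 0. Cofinite linear dominance ignores p,
% so f \preceq g. Composing from the right with the constant
% function s(x) = p gives
(f \circ s)(x) = f(p) > 0,
\qquad
(g \circ s)(x) = 0 \quad \text{for all } x,
% and no finite set of exceptions can absorb a disparity that
% now holds at every point; hence f \circ s \not\preceq g \circ s.
```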


At this point I hadn't yet come up with the composition rule, and only noticed that the subset-sum rule did not hold for cofinite linear dominance. As a result, it occurred to me that I cannot ignore even a single point if I want the subset-sum rule to work. That was perhaps the greatest psychological leap in this paper, since ever since first reading 'Introduction to Algorithms' in 2001 I had thought that ignoring points was somehow important.

Since affine dominance did not cut it, and ignoring points was out of the question, I was led to linear dominance. After that, the theorems started to fall into place. All the extraneous requirements vanished, and I could easily prove every property I could think of.

I began wondering about the relationships between the many properties I had listed: which ones are implied by others? I noticed that I could prove many of the properties using a small set of simple properties. But I could not do that for the subset-sum rule. That made me uneasy, since my intuition suggested that it was a composite of several properties, but I could not see which ones. Soon enough I came up with what I was missing: the composition rule. The immediate realization was that this rule does not hold for asymptotic linear dominance, and that it is more fundamental than the subset-sum rule. By then I was convinced that the fundamental failure of asymptotic linear dominance was that the composition rule did not hold.


I was left with a small set of properties, the primitive properties, which implied all the other properties, and a direct definition, linear dominance, which fulfilled all the primitive properties. The natural question was: could there be other definitions which also fulfill the primitive properties? The answer was 'yes'. The problem was that simply defining each O-notation as consisting of all possible functions fulfilled the primitive properties defined by that time. It became evident that I needed to impose requirements that would exclude such trivial definitions. It was easy to come up with the requirements of zero-separation and one-separation. After adding these non-triviality properties I was able to show that the primitive properties indeed implied the definition of linear dominance.


When proving yet another property, I had a strong feeling that I had done a very similar proof in the past. Indeed, after skimming back on my notes, I found a proof that was almost identical to what I had just proved. I abstracted away the properties that I used to carry out both of those proofs. Those properties turned into the O-mapping requirements.

ICALP review


We submitted a 12-page paper to ICALP. I was convinced that the reviewers could not but agree with the conclusion of the paper.


ICALP rejected the paper.

I thought that the reviewers found some subtle mistake in the mathematics. However, when the reviews arrived from ICALP, there was no mention of the mathematics at all.

Out of the three reviewers, the first did not give feedback at all.

The second reviewer

The third reviewer rejected the paper based on the following claims:

ICALP review — effects


Out of the three reviewers, only the feedback from the third reviewer was relevant. Unfortunately, his/her claims reflected common misconceptions. From this feedback we learned that the reviewers


We spent a lot of time improving the introduction to address the presented misconceptions in advance, and to make it as intuitively convincing as possible.

I checked the mathematics once again, and found that the definitions for the O-mappings were erroneous. This did not affect the main result of the paper. I fixed these definitions, and added some more proofs. An important new proof was to characterize exactly when the composition rule holds for asymptotic linear dominance. I felt this was needed to drive home the point that asymptotic linear dominance fails in a fundamental way.

ESA review


We submitted an improved 12-page paper to ESA.


ESA rejected the paper.

We got reviews from three reviewers, with similar viewpoints. In summary,

The similarity of the sentence structure, the content, and the words used suggested that the second reviewer was the same person as ICALP's third reviewer.

ESA review — effects


The reviews reflected a number of misconceptions, some of which were new. We addressed them in a 56-page arXiv paper. At this point it was clear that — unless we could address the misconceptions in advance — the paper could never be accepted, due to a psychological barrier.

ISAAC review


We submitted an 11-page paper to ISAAC. We spent two additional pages on resolving the learned misconceptions in advance. Combined with the 11-page limit, this meant that we had to cut out the material on the O-mappings and the related definitions. The full paper was attached as an appendix; ISAAC required missing proofs to be provided in the appendix.


ISAAC rejected the paper.

We got three reviews, all of which stated that they "are not convinced". The only constructive content in the reviews was that reviewer 1 thought he had a counterexample to the composition rule. Unfortunately, the reviewer composed from the left, while the composition rule composes from the right. Other than that, there were no justifications for why the reviewers were not convinced.
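To illustrate the difference (with my own standard example, not the reviewer's):

```latex
% The composition rule composes from the right:
f \in O(g) \implies f \circ s \in O(g \circ s).
% Composition from the left is a different, and generally false,
% statement. For example, with h(x) = 2^x,
2n \in O(n),
\qquad
h(2n) = 4^n \notin O(2^n) = O(h(n)).
```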

ISAAC review — effects


Perhaps counterintuitively, I was slightly glad about the ISAAC review. This is because the misconceptions from the previous reviews were absent. I attribute this to addressing them in advance. Also, the reviews revealed that the problem is psychological. As a result of the ISAAC review, we added the left-composition example to the set of misconceptions, together with a proof of the Master theorem, and posted a 66-page paper to arXiv.

I am now convinced that, due to confirmation bias, the paper will not be accepted through a conference, and that I have to send it to a journal instead. While I expect the first journal review to be similar to the conference reviews, I hope to be able to give direct rebuttals to the reviewers' claims. A journal also has a positive side: the page limit is usually less strict.

JOURNAL X review


We submitted a 74-page paper to a certain top-impact-factor journal, which I'll denote by JOURNAL X to provide its editor-in-chief anonymity. Actually, we had already submitted our first version of the paper to JOURNAL X in September 2013. However, back then we withdrew the paper once we learned that the publication delay was too long for us.


According to JOURNAL X's tracking system, the review began on 16.9.2014 at 9:48.


JOURNAL X rejected the paper.

The decision was sent to us on 17.9.2014 at 9:30, after only a single day of review. The editor-in-chief rejected the paper because, in his opinion, the paper was not a research article.


I asked the editor-in-chief why he thought that the paper was not a research article. His reply was that "science does not critique the known". It was left ambiguous what 'known' means, but I did not ask for details. If 'known' means 'correct', then this is just a direct demonstration of confirmation bias. If 'known' means 'existing', then the statement is false; Einstein's relativity was certainly a welcome critique of Newtonian physics.

Researcher X


I went to talk to researcher X in computer science, because I wanted to understand the thoughts that underlie the presented misconceptions. During our three hours of discussion, I found out that

Due to the last points, we hit a wall in the discussion and stopped. I was happy that we managed to keep the discussion civil and free of emotion, although at one point that seemed difficult for him. This discussion was very illuminating, in that it gave me a glimpse of the psychology behind this problem. After it, I could think concretely about how I would go about persuading X, and not just some faceless reviewer.

The most fundamental misconception was the idea that a complexity is a function of the input-size, rather than of the input itself. To combat this misconception, I formalized how worst-case, best-case, and average-case analyses fit input-set-thinking. In general, this was a way to formalize the analysis of complexity by grouping inputs, for example by the size of a sequence. Currently it seems to me that this is a very convincing way to deal with this misconception, because it shows input-size-thinking as a special case of input-set-thinking.
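The grouping idea can be sketched concretely (a toy construction of mine, not code from the paper): cost is a function of the concrete input, and worst-case and best-case analyses arise by grouping inputs, here by length, and combining the costs within each group with max or min.

```python
# Complexity as a function of the input itself: the cost of insertion
# sort on a concrete sequence, measured in comparisons.
def comparisons_insertion_sort(xs):
    xs = list(xs)
    count = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            count += 1  # compare xs[j-1] against xs[j]
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break
    return count

# Input-size-thinking as a special case of input-set-thinking:
# group the inputs by a key (here: length) and combine the costs
# within each group; max gives worst case, min gives best case.
def grouped_cost(cost, inputs, group_key, combine):
    groups = {}
    for x in inputs:
        groups.setdefault(group_key(x), []).append(cost(x))
    return {k: combine(v) for k, v in groups.items()}

from itertools import permutations
inputs = [p for n in range(1, 5) for p in permutations(range(n))]
worst = grouped_cost(comparisons_insertion_sort, inputs, len, max)
best = grouped_cost(comparisons_insertion_sort, inputs, len, min)
```

On all permutations of lengths 1 to 4, the grouped costs recover the familiar size-indexed curves: the worst case grows as n(n-1)/2 and the best case as n-1. Nothing in `grouped_cost` depends on the key being a 'size', which is the point: any partition of an arbitrary input-set would do.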

The rejection of arbitrary domains, such as the real numbers, is a consequence of input-size-thinking. The problem is that with arbitrary sets there is no 'size' with which to partition the input-set. Instead of leading to the conclusion that there is something wrong with the input-size idea, this leads to the conclusion that there is something wrong with arbitrary sets.

There is an even more fundamental misconception lurking underneath: the belief that Turing machines formalize the concept of an algorithm. I accept the Church-Turing thesis that the effectively computable functions are exactly the Turing-computable functions. However, most of the algorithms in computational geometry, for example, are defined over real numbers, and therefore are not formalizable under Turing machines. Instead, these algorithms are formalized under the real-RAM model.

But how would I then formalize the concept of an algorithm? I did not know, but some searching led me to abstract state machines, which formalize algorithms in exactly the generality and sense that I need. This should put an end to the input-size and computability misconceptions.

For those who would go sketching new versions in an attempt to fix asymptotic linear dominance, I abstracted all such definitions under restricted linear dominance. I could then characterize all of them with the same theorems. This took care of the sketching problem. I also added an analysis of affine dominance. It is not at all necessary to go through these examples, but perhaps they make the results more tangible for those who read mathematics only as a last resort.

The straw-man suggestion is the only one that I was worried about. It could be that requiring the properties to hold for all sets, or for universes larger than the one generated by the natural numbers, yields an unnecessarily strict (although correct) definition. To rebut this, I generalized the O-notation to work under a pre-specified universe. This showed that the problems are not fundamentally dependent on the universe. There were some interesting notations for which injective composition would work in the universe generated by the natural numbers, but not in larger universes. This illuminated the way in which the other definitions fail the primitive properties.

JOURNAL Y review


I felt that the changes I made to the paper after the JOURNAL X rejection were quite persuasive. Therefore, I submitted a 90-page paper to another top-impact-factor journal, which I’ll denote by JOURNAL Y, to again provide its editor-in-chief anonymity.


JOURNAL Y rejected the paper.

The review was blocked right at the editorial board. The editorial board rejected the paper because it "lacked scientifically relevant results". They wrote that the paper contained some useful considerations for the community, and suggested submission to the Bulletin of EATCS or to the SIGACT News. It was not specified what they meant by the lack of scientifically relevant results, or what the useful considerations were. I asked about the latter.


The reply came back, and it was from the handling editor. He replied that the useful considerations were the details and subtleties that we point out about the O-notation, and the approach based on listing requirements.

What caught my attention was that he did not mention anything about our major claims, as if we had just made a detailed study of the existing definition. I wrote back to point these out. In his reply, he said that he had missed these claims because he has only a little time to devote to each paper. He agreed that the O-notation not being mathematically justified is a bad thing, and suggested that if the paper stressed these key points more, it would help it get the "attention it deserves".

The handling editor seemed supportive and constructive; this was refreshing in contrast to the emotional conference reviews.


It seemed to me that the handling editor agreed with me about the importance of the problem, which contradicted the statement that the paper does not contain scientifically relevant results. Therefore, I asked him whether, assuming I added a section to stress the key points more, we could revive the review. The handling editor responded on the same day, saying that the editor-in-chief would respond to this question. The editor-in-chief was CC'd in all of our discussion.


The editor-in-chief replied. He said that he had already expressed his view, and that this is a policy matter; the paper was not appropriate for JOURNAL Y. I don't know which policy matter he was referring to.

Weakening primitive properties


One thing that bothered me was that the primitive properties were not as weak as they could have been. Accordingly, during the past month I undertook a major revision to weaken the primitive properties as much as possible. The changes in the desirable properties are:

When working on these changes, I noticed that it became exhausting to keep track of which properties each theorem required. Therefore, I created a program to automatically check that the assumptions of each referred theorem were satisfied, and that there were no extraneous assumptions. It took me about three days to write the program and make the paper semi-machine-checkable, but the payback was immense. When I adapted the theorems to the new properties, the program systematically guided me through what to do next, as the modification of one theorem affected those which depended on it. This snowball effect also occurred whenever I managed to prove a theorem using fewer assumptions; that felt very rewarding.

While in the previous versions I used injective sub-composability as a primitive property, it turned out that injective sub-composability was sufficient only because the other primitive properties were excessively strong. With the weakened primitive properties, I had to bring back sub-composability. The clarity of the roles of the primitive properties increases as they become more orthogonal.

Another thing that bothered me was that the introduction did not build enough intuition for why the primitive properties should be chosen. It was from this process that I realized that the primitive properties were not weak enough. After having trouble explaining sub-multiplicativity — it seemed slightly too complex — I noticed that the natural property was instead (functional) sub-homogeneity: that multiplication should be compatible with the dominance relation. I find that intuitive explanations are found by talking in terms of the dominance relation, not in terms of its principal down-sets.

I renamed restricted linear dominance to local linear dominance, because the term 'restricted' does not immediately suggest what is being restricted.

I sent the corresponding 98-page paper to arXiv.


One thing that was left open two weeks ago was the question of whether linear dominance (and the others) is complete or not. This is something I really wanted to solve, since either answer has interesting implications for the life of a complexity analyst.

At first it seemed to me that the order should be complete, and so I spent two weeks trying to prove this, without success. These failed attempts finally suggested to me that the order must be incomplete, after which it was very quick to prove. Next time I should try to prove the negation far earlier.

Since the order is not complete, it is possible that an optimal lower bound for the resource-consumption of a computational problem can never be found. Also, there may be incomparable lower bounds. Even if an optimal lower bound exists, there may not be an algorithm with that resource-consumption.

The loophole here is that the definition of completeness is too strict. Rather than requiring completeness on all subsets of resource-consumptions, it should be required only on those subsets which are generated by algorithms. However, determining whether the order is algorithm-complete may be harder, and depends on both the computational model and the cost model. On the other hand, showing incompleteness needs only one counterexample. Perhaps there is a simple subset of resource-consumptions which demonstrates algorithm-incompleteness for a wide range of models.

I sent the corresponding 99-page paper to arXiv.

JOURNAL Z review


I sent a 29-page paper to a certain JOURNAL Z, and a corresponding 99-page paper to arXiv.


JOURNAL Z rejected the paper, but offered a possibility for a rebuttal.

The reviewer confused our main result — the equivalence between the primitive properties and linear dominance — with a small sub-result — that asymptotic linear dominance is not a correct formalization of the O-notation. While he/she agreed with the latter, the sub-result could be shown in half a page by a counterexample, and therefore the rest of the paper seemed irrelevant to him/her. Apart from this, the reviewer expressed some of the same misconceptions as the reviewers before him/her.

One of the constructive things the reviewer noted was that it does not sound good to say that a definition is incorrect. I think he/she is right, and I should find another way to say the same thing, for example that the current definition of the O-notation is not suitable for algorithm analysis.


I wrote a rebuttal for two days (and nights), pausing only to hold some course exercise sessions. The rebuttal ended up at 13 pages, with the reviewer comments quoted verbatim. I sent the rebuttal to JOURNAL Z.


After reading the rebuttal, the reviewer had changed his/her mind, and recommended publication given minor revisions. Correspondingly, the editor of JOURNAL Z accepted the paper conditionally on making the proposed changes. On the same day, I made the required changes and sent the revised paper to JOURNAL Z.


The reviewer asked to add page numbers to some citations. I sent a revised paper to JOURNAL Z.


Since I had not heard from the editor in about a month, I sent an e-mail to the editor to confirm that I had not missed an e-mail. I hadn’t; the editor forwarded the query to the reviewer.

Weakening primitive properties some more


The following changes were already made in the paper I sent to JOURNAL Z. Now I have also made them in the arXiv version.

In co-asymptotic linear dominance, there was a notational ambiguity as to whether the negation referred to the underlying relation or to the relation lifted to functions. To disambiguate, I replaced the negation with a set-difference.

While everywhere else I used the term O-notation, in the title I used the term big-oh notation. For consistency, I changed the title to use the term O-notation.


I noticed that zero-separation is redundant as a primitive property; it is implied by order-consistency, transitivity, one-separation, and sub-homogeneity. Reflexivity is also redundant; it is implied by order-consistency. Reflexivity was not redundant until I changed the definition of order-consistency. I removed both from the primitive properties, so that now there are only 7 primitive properties.

JOURNAL Z review continued


The reviewer provided a thorough set of comments about the paper. To me, the important points were

I felt that there was a shift, in that the reviewer now seemed committed to improving the paper. However, it seemed that he/she still had not understood the generality of the algorithm concept formalized by abstract state machines. Therefore, I transformed the section on algorithms into a quick introduction to abstract state machines right at the start. For concreteness, I provided Newton's method as an algorithm for finding roots of differentiable functions. Here one of the inputs is a function, and the algorithm uses differentiation as a primitive operation. I hoped this would force the reader out of the mind-set of computable algorithms.

It is always better to write in a way that minimizes distractions for the reader. As advised by the reviewer, I spent a lot of time rephrasing to decrease the negativity, although I was not always sure which places the reviewer meant.


I sent the revised paper back.


The reviewer again provided a long set of comments, many of which were editorial. Some of the main points were that

I was quite happy that the reviewer noticed the first one. It was something that I had asked myself some time ago, but then forgot about. To make sub-homogeneity more intuitive, I introduced two separate properties, which allow multiplication and division by natural-number-valued functions. I then showed that these imply sub-homogeneity.

The little-oh must be defined in terms of the big-oh, because otherwise results obtained for the big-oh cannot be transferred to results on the little-oh. The related notations are all different views of the same underlying preorder.

Many hours were spent rephrasing and structuring the text so that things are presented from a more positive viewpoint.

Starting with abstract state machines and Newton's algorithm seemed to have now convinced the reviewer of the usefulness of abstract algorithms, and therefore of abstract input-sets.


I sent the revised paper back.


The reviewer was now content with the paper, apart from a few final touches. I made those, and submitted the final version of the paper. The paper will be published in October.

Discussions with Raphael


During the past week, I had a very fruitful e-mail correspondence with Raphael Reitzig. He provided comprehensive comments on both the content and the typography. The question came up of how limits are related to the O-notation, and whether they could be used to deduce something about linear dominance. At first, I showed that, in a restricted setting, it is possible to deduce linear dominance from a limit. The problem then was to generalize this result. However, there are many ways to approach infinity, and so I needed to specify how.

At first I thought about introducing topologies. However, that seemed overkill, and also perhaps not general enough, since I only needed a limit at the approached set. After some reflection, I realized that to define a limit, the only thing I need is a filter basis on the domain — the filter basis describes what is being approached in the domain. And then it hit me: I already had the requirement of a filter basis in the definition of local linear dominance, although I had not realized it before. I had reached that definition through a minimal abstraction to complete a proof, not by thinking about limits.

I formalized the limit superior, the limit inferior, and the limit, given any filter basis. Now that I had these, I could generalize my limit proofs: they hold not only in the original setting, but for any local linear dominance. Since limits are not always defined, I substituted limit superiors and limit inferiors for them. This generalization resulted in many beautiful correspondences.
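In the standard order-theoretic form (the paper's notation may differ), the filter-basis limits can be written as:

```latex
% Limit superior and limit inferior of h over a filter basis
% \mathcal{B} on the domain X:
\limsup_{\mathcal{B}} h
  = \inf_{B \in \mathcal{B}} \, \sup_{x \in B} h(x),
\qquad
\liminf_{\mathcal{B}} h
  = \sup_{B \in \mathcal{B}} \, \inf_{x \in B} h(x).
% Choosing \mathcal{B} = \{X\} recovers the plain supremum and
% infimum over the whole domain.
```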

The limits opened up new possibilities. I could now define a generalized form of limit equivalence for any local linear dominance. It is nice for intuition to see how the definitions simplify when the filter bases simplify to those of linear dominance.

Apart from such stimulating conversations, Raphael provided me with a long list of typographical problems in the paper. Fixing these issues improved the look of the paper significantly. Among the most important was giving the different O-notations suggestive symbols by drawing inside the O's.

I submitted a new version to arXiv.



The paper was published in the Bulletin of EATCS.



I added an introduction to the paper, and more material on preordered sets and partitioned sets. I refactored the paper by moving most of the proofs into appendices. I hope this makes the intuitive content of the paper accessible to a wider audience. I implemented fixes based on detailed feedback provided by computer scientist Y, and uploaded a new version to arXiv.

Characterization by limits


I was able to completely characterize a local linear dominance by a limit. Previously I had to require the function in the denominator of a ratio to be positive, to make the ratio defined; however, that gave only a sufficient condition. What I noticed was that division by zero is only problematic at those points where the numerator is also zero; other divisions by zero could be treated as infinities. I therefore restricted the limit to those points where the numerator is positive. As a consequence, I also obtained limit characterizations of the related notations.
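The flavour of the characterization, as I would summarize it here (the exact side conditions are as in the paper):

```latex
% For a local linear dominance with filter basis \mathcal{B},
% restricting the ratio to the points where f > 0 and reading
% the remaining divisions by zero as \infty:
f \in O(g) \iff \limsup_{\mathcal{B}} \frac{f}{g} < \infty.
```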

I added a cheat sheet for the O-notation based on linear dominance. I implemented fixes based on detailed feedback provided by computer scientist Z. I figured out how to get proper uppercasing for the property names; the paper looks a bit better now. I changed the name of the membership rule to orderness, because it is equivalent to being a preorder. For a moment, I considered orderliness — but that's not the same. I removed limit equivalence; it did not seem interesting enough. I uploaded a new version to arXiv.

Extension to sets


I have been bothered for some time by a commonly used notation whose meaning I have not seen anyone formalize. There is an obvious candidate for a formalization, which is also incorrect. Today, I extended the O-notation to sets of functions. This extended notation automatically inherits properties from the O-notation. Whereas the O-set of a single function is its principal down-set, the extended notation gives the down-set of a set of functions. I believe that an algorithm having such a complexity is formalized by its resource-consumption being an element of the corresponding down-set. I uploaded a new version to arXiv.
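In order-theoretic terms, the extension is presumably the down-set construction (illustrative notation; the paper introduces its own symbol):

```latex
% Extending the O-notation from a function f to a set F of functions:
O(F) = \bigcup_{f \in F} O(f),
% so that O(\{f\}) = O(f), the principal down-set of f, and the
% extension inherits properties of O directly from the union.
```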

Master theorems


I have been working for the past week on improving the proofs of the Master theorems, as well as generalizing them. I am now quite content with the proofs.



Working on the O-notation was at first supposed to be a brief side-project to "make things right", but it ended up taking quite a bit of time. When the paper was accepted to BEATCS last year, my advisors and I decided to turn the extended paper on arXiv into my doctoral thesis. So, for the past 4 months, the paper has been subject to detailed pre-examination by three researchers in computer science, whom I denote by Y, Z, and W.

Usually a thesis manuscript has only two pre-examiners. However, this manuscript seemed suspect to the faculty council. First, it had only about 20 references. Second, the people who were consulted about the manuscript were not very comfortable with it. Third, BEATCS was not listed in the Finnish ranking system for journals (later, it was given the lowest possible ranking, with no associated academic prestige). The faculty council resolved these worries by assigning a third pre-examiner, W, in addition to Y and Z proposed by my advisors.

Y and Z returned their pre-examination statements in two months (the recommended time), and three months, respectively. Their evaluations were positive, and they recommended permission to defend as a thesis. W took four months, and advised against permission to defend as a thesis.

W’s pre-examination statement was mostly positive: the proofs in the manuscript are correct, the language is good, and the scope of the work corresponds to what is required of a doctoral thesis. Surprisingly, he then advised against permission to defend as a thesis, on the grounds that the results in the manuscript were not significant for the field of computer science. I did not understand his justifications.

Since I am working on a doctorate in mathematics, the topic not being interesting to computer scientists should not be an obstacle to my graduation. The manuscript is supposed to demonstrate my ability to formulate and solve novel mathematical problems. That said, I would appreciate computer scientists recognizing the importance of rooting the O-notation in mathematics, rather than in intuition and imitation.

W continued that while he accepts the mathematical results in the manuscript, he finds linear dominance unintuitive, and proceeded to give an example where replacing a non-zero value of a cost function with a zero changes the O-set of the function. For this reason, he said he would “prefer” cofinite linear dominance.
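W’s example in miniature, as I understand it, with linear dominance modeled on a finite domain of non-negative values (on such a domain a witness constant exists iff g(x) = 0 forces f(x) = 0; the modeling is my own):

```python
# Linear dominance on a finite domain of non-negative values:
# f is in O(g) iff f <= c*g pointwise for some c > 0, which on a
# finite domain reduces to: g(x) = 0 implies f(x) = 0.
def in_O(f, g):
    return all(fx == 0 or gx > 0 for fx, gx in zip(f, g))

f = (1, 2, 3, 4, 5)
g = (1, 1, 1, 1, 1)
print(in_O(f, g))  # True: f <= 5 * g everywhere

g_zeroed = (1, 1, 0, 1, 1)  # one non-zero value of g replaced by zero
print(in_O(f, g_zeroed))    # False: no c gives 3 <= c * 0
```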

I sent my revised manuscript back to the faculty council, for it to decide whether or not to grant permission to defend it as a thesis. I wrote a rebuttal to the pre-examination comments.

I uploaded a new version to arXiv.


The faculty council gave permission to defend the manuscript as a doctoral thesis. The defence is at the end of this month.


I defended the thesis manuscript successfully. The opponents, Jarkko Kari and Pasi Fränti, recommended accepting the thesis.

Minimal properties


During the thesis defence, the opponent Jarkko Kari suggested studying whether the primitive properties are independent of each other. I had already noticed that one-separation and sub-composability were each independent of the other primitive properties, but for some reason I had never thought of proving the same for all of the primitive properties.

In essence, showing a primitive property independent of the others requires finding a definition of the O-notation which fails that property while satisfying the other primitive properties. Linear dominance then provides another definition which satisfies all the properties; the existence of such a pair of definitions proves the desired independence. Using this technique, Kari could readily give other examples of independent primitive properties. I was very excited about the possibility of proving the primitive properties minimal.

As luck would have it, the candidate definitions that I had studied in the thesis were directly usable for this task. Trivial linear dominance proved one-separation independent, asymptotic linear dominance proved sub-composability independent, and affine dominance proved sub-homogeneity independent. I was then left with the task of constructing new definitions to prove the independence of the other primitive properties.
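For orientation, here is how I would paraphrase the main flavors of dominance mentioned here, for cost functions f, g ≥ 0 (the thesis's exact formulations may differ in details, and I omit trivial linear dominance):

```latex
% Linear dominance:
f \in O(g) \iff \exists c > 0\ \forall x:\ f(x) \le c\, g(x)
% Asymptotic linear dominance (the traditional O-notation):
f \in O(g) \iff \exists c > 0\ \exists N\ \forall x \ge N:\ f(x) \le c\, g(x)
% Affine dominance:
f \in O(g) \iff \exists c > 0\ \exists d \ge 0\ \forall x:\ f(x) \le c\, g(x) + d
```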

After a few days, I had obtained proofs for the independence of all primitive properties except locality. I tried to find a suitable definition to show locality independent, but did not succeed. This reminded me of my earlier proof of completeness, where I had tried in vain for many days to prove the wrong result. This time I would also try to prove the negative! And sure enough, it did not take long to finish a proof showing that locality is implied by the other primitive properties. This was very surprising to me. The proof was really fun to come up with; it has to be my favourite in the whole book.

Even though locality can be proved from the other primitive properties, trying to prove things without it seems like a really bad idea: the proofs would then need to imitate the indirect way of obtaining locality. I decided instead to retain locality as a primitive property, even though the primitive properties then do not form a minimal axiom set. I named the primitive properties without locality the pre-primitive properties, and showed minimality for the pre-primitive properties. I think the primitive properties strike a really nice trade-off between understandability and minimality.

I sent a new version to arXiv.


The faculty council accepted the manuscript as a doctoral thesis.


I added three videos to YouTube to summarize some of the research in the thesis. I hope this makes the thesis accessible to an even wider audience. I do not know how the videos will be received, or whether the level is too high or too low. In the videos, I included, but skipped over, all formal parts, so that viewers can pause on them if they want to. I turned off all advertisements. I licensed the videos under Creative Commons, to give people as much freedom as possible in using them, say in teaching. To record the audio, I used the Blue Yeti microphone, which worked very well.

Creating such videos is hard work; making these has given me much more respect for the work of YouTubers. I cannot thank Raphael Reitzig enough for yet again providing thorough feedback on all the versions of the slides that went into these videos.