# Algorithmic Complexity Bounds on Future Prediction Errors

†This work was supported by SNF grants 200020-107590/1, 2100-67712 and 200020-107616. A shorter version appeared in the proceedings of the ALT’05 conference [Ch05].

###### Abstract

We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume that we are at a time l and have already observed x = x_1…x_l. We bound the future prediction performance on x_{l+1} x_{l+2} … by a new variant of algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.

Keywords

Kolmogorov complexity, posterior bounds, online sequential prediction, Solomonoff prior, monotone conditional complexity, total error, future loss, randomness deficiency.

## 1 Introduction

We consider the problem of online=sequential prediction. We
assume that sequences x_1 x_2 x_3 … are drawn from some “true”
but unknown probability distribution μ. Bayesians proceed by
considering a class ℳ of models=hypotheses=distributions,
sufficiently large such that μ ∈ ℳ, and a prior over ℳ.
Solomonoff considered the truly large class that contains all
computable probability distributions [Sol64].
He showed that his universal distribution M converges rapidly to μ
[Sol78], i.e. M predicts well in any environment
as long as it is computable or can be modeled by a computable
probability distribution (all physical theories are of this sort).
M(x) is roughly 2^{−K(x)}, where K(x) is the length of
the shortest description of x, called the Kolmogorov complexity of
x. Since M and K are incomputable, they have to be
approximated in practice.
See e.g. [Sch02b, Hut05, LV97, CV05] and
references therein.
The universality of M also precludes useful statements about the
prediction quality at particular time
instances t [Hut05, p. 62],
as opposed to simple classes like
i.i.d. sequences (data) of size n, where accuracy is typically
O(1/√n).
Luckily, bounds on the expected *total*=cumulative loss
(e.g. number of prediction errors) for t → ∞ can be
derived [Sol78, Hut01c, Hut03a, Hut03b],
which is often sufficient in an online setting. The bounds are in
terms of the (Kolmogorov) complexity of μ. For instance, for
deterministic μ, the number of errors is (in a sense tightly)
bounded by Km(ω), which measures in this case the information
(in bits) in the observed infinite sequence ω.

What’s new. In this paper we assume that we are at a time l and have already observed x = x_{1:l}. Hence we are interested in the future prediction performance on x_{l+1} x_{l+2} …, since typically we don’t care about past errors. If the total loss is finite, the future loss must necessarily be small for large l. In a sense the paper intends to quantify this apparent triviality. If the complexity of μ bounds the total loss, a natural guess is that something like the conditional complexity of μ given x bounds the future loss. (If x contains a lot of (or even all) information about μ, we should make few (no) errors anymore.) Indeed, we prove two bounds of this kind, but with additional terms describing structural properties of x. These additional terms appear since the total loss is bounded only in expectation, and hence the future loss is small only for “most” x. In the first bound (Theorem 1), the additional term is the complexity of the length of x (a kind of worst-case estimation). The second bound (Theorem 7) is finer: the additional term is the complexity of the randomness deficiency of x. The advantage is that the deficiency is small for “typical” x and bounded on average (in contrast to the length). But in this case the conventional conditional complexity turned out to be unsuitable. So we introduce a new natural modification of conditional Kolmogorov complexity, which is monotone as a function of the condition. Informally speaking, we require programs (=descriptions) to be consistent in the sense that if a program generates some y given x, then it must generate the same y given any prolongation of x. The new posterior bounds also significantly improve upon the previous total bounds.

Contents. The paper is organized as follows. Some basic notation and definitions are given in Sections 2 and 3. In Section 4 we prove and discuss the length-based bound Theorem 1. In Section 5 we show why a new definition of complexity is necessary and formulate the deficiency-based bound Theorem 7. We discuss the definition and basic properties of the new complexity in Section 6, and prove Theorem 7 in Section 7. We briefly discuss potential generalizations to general model classes and classification in the concluding Section 8.

## 2 Notation & Definitions

Strings and natural numbers. We write X* for the set of finite strings over a finite alphabet X, and X^∞ for the set of infinite sequences. The cardinality of a set S is denoted by |S|. We use letters i, k, l, n, t for natural numbers, x, y, z for finite strings, ϵ for the empty string, and ω etc. for infinite sequences. For a string x of length ℓ(x) = n we write x = x_1 x_2 … x_n with x_t ∈ X and further abbreviate x_{1:t} := x_1…x_t and x_{<t} := x_1…x_{t−1}. For a ∈ X, denote by ā an (arbitrary) element from X such that ā ≠ a. For binary alphabet X = {0,1}, the ā is uniquely defined. We occasionally identify strings with natural numbers.

Prefix sets. A string x is called a (proper) prefix of y if there is a z (≠ ϵ) such that xz = y; y is called a prolongation of x. We write y = x∗ in this case, where ∗ is a wildcard for a string, and similarly ω = x∗ for the case where the prolongation ω is an infinite sequence. A set of strings P is called prefix-free if no element is a proper prefix of another. Any prefix-free set P has the important property of satisfying Kraft’s inequality Σ_{x∈P} |X|^{−ℓ(x)} ≤ 1.
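Kraft’s inequality is easy to check mechanically for finite codes. The following toy sketch (our own illustration, not part of the paper) verifies prefix-freeness and the inequality Σ_{x∈P} |X|^{−ℓ(x)} ≤ 1 for a small binary code:

```python
# Toy check (not from the paper): prefix-freeness and Kraft's inequality
# sum_{x in P} |X|^{-len(x)} <= 1 for a finite code P over a binary alphabet.

def is_prefix_free(codes):
    """True iff no codeword is a proper prefix of another codeword."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def kraft_sum(codes, alphabet_size=2):
    """The Kraft sum of the code: sum over codewords of |X|^{-length}."""
    return sum(alphabet_size ** -len(x) for x in codes)

P = ["0", "10", "110", "111"]   # a complete binary prefix-free code
assert is_prefix_free(P)
assert kraft_sum(P) == 1.0      # completeness: Kraft holds with equality

Q = ["0", "01"]                 # "0" is a proper prefix of "01"
assert not is_prefix_free(Q)
```

Incomplete prefix-free codes give a Kraft sum strictly below 1, which is exactly the slack exploited by semimeasures below.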

Asymptotic notation. We write f(x) ≤+ g(x) for f(x) ≤ g(x) + O(1), and f(x) ≤× g(x) for f(x) ≤ O(1)·g(x). Equalities =+, =× are defined similarly: they hold if the corresponding inequalities hold in both directions.

(Semi)measures.
We call ν : X* → [0,1] a *semimeasure* iff ν(x) ≥ Σ_{a∈X} ν(xa)
and ν(ϵ) ≤ 1, and a *measure* iff both unstrict inequalities are
equalities. ν(x) is interpreted as the ν-probability of
sampling a sequence which starts with x. The conditional
probability (posterior)

ν(y|x) := ν(xy)/ν(x)     (1)

is the ν-probability that a string x is followed by (continued with) y. If ν(x) = 0, then ν(y|x) is defined arbitrarily, and every such function is called a version of the conditional probability. We call ν deterministic if ∃ω : ν(ω_{1:n}) = 1 ∀n. In this case we identify ν with ω.
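The (semi)measure conditions and the posterior (1) can be illustrated on a toy computable measure. The sketch below (function names and the Bernoulli parameter are our own choices, purely for illustration) checks the measure property ν(x) = Σ_a ν(xa) and computes ν(y|x) = ν(xy)/ν(x):

```python
# Toy illustration (our notation): a Bernoulli(theta) measure nu on binary
# strings, its measure property, and the conditional probability (1).

def nu(x, theta=0.3):
    """nu(x) = product over bits of x: theta for '1', 1-theta for '0'."""
    p = 1.0
    for b in x:
        p *= theta if b == "1" else 1.0 - theta
    return p

def cond(y, x, theta=0.3):
    """Posterior (1): nu(y|x) = nu(xy) / nu(x)."""
    return nu(x + y, theta) / nu(x, theta)

x = "0110"
# Measure property: the mass of x splits exactly over its one-step extensions.
assert abs(nu(x) - (nu(x + "0") + nu(x + "1"))) < 1e-12
# For an i.i.d. measure the posterior of '1' is theta, whatever the past.
assert abs(cond("1", x) - 0.3) < 1e-12
```

A semimeasure would satisfy the first check only with "≥", the deficit corresponding to an incomplete prefix code.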

Random events and expectations. We assume that sequence ω = ω_{1:∞} is sampled from the “true” measure μ, i.e. P[ω_{1:n} = x_{1:n}] = μ(x_{1:n}). We denote expectations w.r.t. μ by E, i.e. for a function f : X^n → ℝ, E[f] = E[f(ω_{1:n})] = Σ_{x_{1:n}} μ(x_{1:n}) f(x_{1:n}).

Enumerable sets and functions.
A set of strings (or naturals, or other constructive objects) is
called *enumerable* if it is the range of some computable
function. A function f : X* → ℝ is called
*(co-)enumerable* if the set of pairs
{(x, q) : q ∈ ℚ, q < f(x)} (respectively {(x, q) : q ∈ ℚ, q > f(x)}) is enumerable.
To simplify the statements of the theorems below, we assume that for every computable measure μ there is one fixed computable version of the conditional probability μ(y|x); for example, μ(·|x) is the uniform measure on y’s for x with μ(x) = 0.

Prefix Kolmogorov complexity. The conditional prefix complexity K(y|x) := min{ℓ(p) : U(p, x) = y} is the length of the shortest binary (self-delimiting) program p on a universal prefix Turing machine U with output y and input x [LV97]. K(y) := K(y|ϵ). For non-string objects o we define K(o) := K(⟨o⟩), where ⟨o⟩ is some standard code for o. In particular, if (f_i)_{i≥1} is an enumeration of all (co-)enumerable functions, we define K(f_i) := K(i). We need the following properties: the co-enumerability of K, the upper bounds K(x|ℓ(x)) ≤+ ℓ(x)·log|X| and K(n) ≤+ 2 log n, the lower bound K(x) ≥ ℓ(x) for “most” x, the extra information bounds K(x|y) ≤+ K(x) ≤+ K(x, y), Kraft’s inequality Σ_x 2^{−K(x)} ≤ 1, and the information non-increase K(f(x)) ≤+ K(x) + K(f) for computable f.

Monotone and Solomonoff complexity. The monotone complexity Km(x) := min{ℓ(p) : U(p) = x∗} is the length of the shortest binary (possibly non-halting) program p on a universal monotone Turing machine U which outputs a string starting with x. Solomonoff’s prior M(x) := Σ_{p : U(p)=x∗} 2^{−ℓ(p)} is the probability that U outputs a string starting with x if provided with fair coin flips on the input tape. Most complexities coincide within an additive term O(log ℓ(x)), e.g. −log M(x) ≤ Km(x) ≤+ K(x) ≤+ −log M(x) + 2 log ℓ(x).

## 3 Setup

Convergent predictors.
We assume that μ is a “true”¹ sequence generating measure, also
called an environment. If we know the generating process μ, and
given past data x_{<t}, we can predict the probability
μ(x_t|x_{<t}) of the next data item x_t. Usually we do not know μ,
but estimate it from x_{<t}. Let ρ(x_t|x_{<t}) be an estimated
probability² of x_t, given x_{<t}.
Closeness of ρ to μ is desirable
as a goal in itself or when performing a Bayes decision y_t that
has minimal ρ-expected loss.
Consider, for instance, a weather data sequence x_1 x_2 … with
x_t = 1 meaning rain and x_t = 0 meaning sun at day t. Given
x_{<t}, the probability of rain tomorrow is μ(1|x_{<t}). A
weather forecaster may announce the probability of rain to be
y_t := ρ(1|x_{<t}), which should be close to
the true probability μ(1|x_{<t}).
To aim for

ρ(x_t|x_{<t}) → μ(x_t|x_{<t}) for t → ∞

seems reasonable.

¹ Also called objective or *aleatory* probability or *chance*.
² Also called *subjective* or *belief* or *epistemic* probability.

Convergence in mean sum. We can quantify the deviation of ρ from μ, e.g. by the squared difference

s_t(ω_{<t}) := Σ_{a∈X} (ρ(a|ω_{<t}) − μ(a|ω_{<t}))².

Alternatively one may also use the squared absolute distance, the Hellinger distance, the KL-divergence d_t(ω_{<t}) := Σ_{a∈X} μ(a|ω_{<t}) ln(μ(a|ω_{<t})/ρ(a|ω_{<t})), or the squared Bayes regret. For all these distances one can show [Hut01b, Hut03a, Hut05] that their cumulative expectation from t = 1 to n is bounded as follows:

Σ_{t=1}^n E[s_t] ≤ D_n := E[ln (μ(ω_{1:n}) / ρ(ω_{1:n}))],     (2)

and similarly for the other distances. D_n is increasing in n, hence D_∞ := lim_{n→∞} D_n exists [Hut01a, Hut05]. A sequence of random variables like z_t is said to converge to zero with probability 1 if the set {ω : z_t(ω) → 0} has μ-measure 1. z_t is said to converge to zero in mean sum if Σ_{t=1}^∞ E[|z_t|] ≤ c < ∞, which implies convergence with probability 1 (rapid if c is of reasonable size). Therefore a small finite bound on D_∞ would imply rapid convergence of the s_t and d_t defined above to zero, hence ρ(x_t|x_{<t}) → μ(x_t|x_{<t}) fast. So the crucial quantities to consider and bound (in expectation) are D_n and D_∞. For illustration we will sometimes loosely interpret D_n and other quantities as the number of prediction errors, as for the error-loss they are closely related to it [Hut01c, Hut01a].
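The identity behind the cumulative bound — the expected per-step KL-divergences summed over t telescope into a single expected log-ratio of the joint probabilities — can be verified exhaustively for a tiny horizon. A sketch under assumed i.i.d. μ and ρ (our toy choice of parameters):

```python
# Toy verification (our own example): for i.i.d. mu and rho on bits, the sum
# over t of expected per-step KL-divergences equals E_mu[ln(mu(x)/rho(x))].
import itertools, math

def prob(x, theta):
    p = 1.0
    for b in x:
        p *= theta if b == "1" else 1 - theta
    return p

mu_t, rho_t, n = 0.7, 0.5, 6

# Left side: per-step expected KL is constant for i.i.d., summed over n steps.
d_step = (mu_t * math.log(mu_t / rho_t)
          + (1 - mu_t) * math.log((1 - mu_t) / (1 - rho_t)))
lhs = n * d_step

# Right side: exhaustive expectation of the joint log-ratio over all 2^n strings.
rhs = sum(prob(x, mu_t) * math.log(prob(x, mu_t) / prob(x, rho_t))
          for x in ("".join(b) for b in itertools.product("01", repeat=n)))

assert abs(lhs - rhs) < 1e-9   # the cumulative KL telescopes exactly
```

The squared difference s_t is dominated by d_t per step, so the same right-hand side bounds its cumulative expectation as in (2).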

Bayes mixtures. A Bayesian considers a class of distributions ℳ, large enough to contain μ, and uses the Bayes mixture

ξ(x) := Σ_{ν∈ℳ} w_ν ν(x),  Σ_{ν∈ℳ} w_ν ≤ 1,     (3)

for prediction, where w_ν can be interpreted as the prior of (or initial belief in) ν. The dominance

ξ(x) ≥ w_ν ν(x)  ∀x and ∀ν ∈ ℳ     (4)

is its most important property. Using ρ = ξ for prediction, this implies D_n ≤ ln w_μ^{−1}, hence D_∞ ≤ ln w_μ^{−1}. If ℳ is chosen sufficiently large, then μ ∈ ℳ is not a serious constraint.
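Dominance (4) and the resulting bound D_n ≤ ln w_μ^{−1} can be checked numerically for a small finite class. The following sketch (the Bernoulli model class and uniform prior are our toy choices) verifies both on an exhaustive 8-step horizon:

```python
# Toy Bayes mixture (our own finite class): check dominance (4) and the
# cumulative bound D_n <= ln(1/w_mu) exhaustively for horizon n = 8.
import itertools, math

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]        # finite model class
w = {t: 1 / len(thetas) for t in thetas}  # uniform prior weights
mu = 0.7                                  # true environment, an element of the class

def nu(x, theta):
    p = 1.0
    for b in x:
        p *= theta if b == "1" else 1 - theta
    return p

def xi(x):
    """Bayes mixture (3): xi(x) = sum over models of w_nu * nu(x)."""
    return sum(w[t] * nu(x, t) for t in thetas)

n = 8
strings = ["".join(b) for b in itertools.product("01", repeat=n)]

# Dominance (4): the mixture never falls below the weighted true model.
assert all(xi(x) >= w[mu] * nu(x, mu) for x in strings)

# D_n = E_mu[ln(mu/xi)] is bounded by ln(1/w_mu), here ln 5.
D_n = sum(nu(x, mu) * math.log(nu(x, mu) / xi(x)) for x in strings)
assert D_n <= math.log(1 / w[mu]) + 1e-12
```

The bound is horizon-independent: the same ln(1/w_μ) caps D_n for every n, which is the finite-class shadow of D_∞ ≤ K(μ) ln 2.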

Solomonoff prior. So we consider the largest (from a computational point of view) relevant class, the class of all enumerable semimeasures (which includes all computable probability distributions), and choose w_ν = 2^{−K(ν)}, which is biased towards simple environments (Occam’s razor). This gives us Solomonoff-Levin’s prior M(x) = Σ_ν 2^{−K(ν)} ν(x) [Sol64, ZL70] (this definition coincides within an irrelevant multiplicative constant with the one in Section 2). In the following we assume ρ = M, w_ν = 2^{−K(ν)}, and μ being a computable (proper) measure, hence D_∞ ≤ K(μ) ln 2 by (4).

Prediction of deterministic environments. Consider a computable sequence ω “sampled from μ” with μ(ω_{1:n}) = 1 ∀n, i.e. μ is deterministic; then from (4) we get

Σ_{t=1}^n (1 − M(ω_t|ω_{<t})) ≤ −ln M(ω_{1:n}) ≤+ K(ω) ln 2,     (5)

which implies that M(ω_t|ω_{<t}) converges rapidly to 1 and hence M asymptotically correctly predicts the next symbol. The number of prediction errors is of the order of K(ω), the complexity of the sequence.

For binary alphabet this is the best we can expect, since at each time-step only a single bit can be learned about the environment, and only after we “know” the environment can we predict correctly. For non-binary alphabet, K(ω) still measures the information in ω in bits, but feedback per step can now be log|X| bits, so we may expect a better bound K(ω)/log|X|. But in the worst case all ω_t lie in a binary subalphabet of X. So without structural assumptions on ω the bound cannot be improved even if |X| is huge. We will see how our posterior bounds can help in this situation.
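The intuition that a Bayes predictor in a deterministic environment pays at most the information content of the environment has a simple finite analogue: with N deterministic hypotheses and a uniform prior, Bayes-optimal (weighted-majority) prediction makes at most log₂ N errors, since every error eliminates at least half of the surviving hypotheses. A toy sketch (our own construction, not the incomputable universal predictor):

```python
# Toy halving argument (our own construction): Bayes-optimal prediction over
# N deterministic environments with a uniform prior makes <= log2(N) errors.
import random

random.seed(0)
N, T = 64, 200
envs = [[random.randrange(2) for _ in range(T)] for _ in range(N)]
truth = envs[17]                      # the "true" deterministic environment

consistent = list(range(N))           # hypotheses not yet contradicted
errors = 0
for t in range(T):
    # Uniform prior => Bayes-optimal guess is the majority vote of survivors.
    votes = sum(envs[i][t] for i in consistent)
    guess = 1 if 2 * votes > len(consistent) else 0
    if guess != truth[t]:
        errors += 1                   # a wrong majority => >= half eliminated
    consistent = [i for i in consistent if envs[i][t] == truth[t]]

assert errors <= 6                    # log2(64) = 6
```

A MAP/MDL-style predictor that follows a single best hypothesis may instead eliminate only one hypothesis per error, which is the finite shadow of the exponentially worse bounds discussed in Section 4.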

Individual randomness (deficiency).
Let us now consider a general (not necessarily deterministic)
computable measure μ. The Shannon-Fano code of x w.r.t. μ
has code-length ⌈−log μ(x)⌉, which is “optimal”
for “typical/random” x sampled from μ. Further, −log M(x) is within logarithmic accuracy the length Km(x) of an “optimal” universal code for x.
Hence for “μ-typical/random” x,
−log M(x) ≈ −log μ(x). This motivates the definition of the *μ-randomness deficiency*

d_μ(x) := log ( M(x) / μ(x) ),

which is small for “typical/random” x. Formally, a sequence ω is called (Martin-Löf) random iff sup_n d_μ(ω_{1:n}) < ∞, i.e. iff its Shannon-Fano code is “optimal” (note that d_μ(ω_{1:n}) ≥+ −K(μ) for all sequences), i.e. iff

Unfortunately this does not imply M(ω_t|ω_{<t})/μ(ω_t|ω_{<t}) → 1 on the μ-random ω, since d_μ(ω_{1:n}) may oscillate below its finite supremum, which indeed can happen [HM04]. But if we take the expectation, Solomonoff [Sol78, Hut01a, Hut05] showed

Σ_{t=1}^∞ E[s_t] ≤ D_∞ ≤ K(μ) ln 2,     (6)

hence M(ω_t|ω_{<t}) − μ(ω_t|ω_{<t}) → 0 with μ-probability 1. So in any case, K(μ) is an important quantity, since the smaller it is, the better M predicts (at least in expectation).
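That the deficiency is bounded below for all strings and bounded on average can be illustrated with a computable mixture ξ standing in for the incomputable M (our own toy proxy, not the paper’s construction): the ratio ξ(x)/μ(x) is at least w_μ for every x, and its μ-expectation over length-n strings equals the total ξ-mass, hence at most 1:

```python
# Toy deficiency check (our own finite proxy xi for the incomputable M):
# d(x) = log2(xi(x)/mu(x)) is bounded below by log2(w_mu) for ALL x, and
# E_mu[2^{d(x)}] = sum_x xi(x) <= 1, so d is small "on average".
import itertools, math

thetas = [0.2, 0.5, 0.8]
w = {t: 1 / 3 for t in thetas}
mu = 0.8

def nu(x, theta):
    p = 1.0
    for b in x:
        p *= theta if b == "1" else 1 - theta
    return p

def xi(x):
    return sum(w[t] * nu(x, t) for t in thetas)

n = 8
strings = ["".join(b) for b in itertools.product("01", repeat=n)]

# Bounded below for every string (dominance): d(x) >= log2(w_mu).
assert all(math.log2(xi(x) / nu(x, mu)) >= math.log2(w[mu]) - 1e-9
           for x in strings)

# Bounded on average: E_mu[2^{d}] = sum_x xi(x), which is 1 for a measure.
assert abs(sum(xi(x) for x in strings) - 1.0) < 1e-9
```

By Markov’s inequality the second check implies that strings with deficiency k or more have μ-probability at most 2^{−k}, which is the sense in which “most” strings are typical.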

## 4 Posterior Bounds

Posterior bounds. Both bounds (5) and (6) bound the total (cumulative) discrepancy (error) between M and μ. Since the discrepancy sum is finite, we know that after sufficiently long time l, we will make few further errors, i.e. the future error sum is small. The main goal of this paper is to quantify this asymptotic statement. So we need bounds on the future deviation Σ_{t=l+1}^∞ E[s_t | ω_{1:l} = x], where x = x_{1:l} are the past and ω_{l+1:∞} the future observations. Since μ(·|x) and M(·|x) are conditional versions of true/universal distributions, it seems natural that the unconditional bound K(μ) also simply conditionalizes to K(μ|x). The more information the past observation x contains about μ, the easier it is to code μ, i.e. the smaller K(μ|x) is, and hence the fewer future prediction errors we should make. Once x contains all information about μ, i.e. K(μ|x) =+ 0, we should make no errors anymore. More formally, optimally coding x, then μ given x, and finally the future observations by Shannon-Fano w.r.t. μ(·|x) gives a code for ω_{1:n}, hence a bound of the desired kind — but with a logarithmic fudge that tends to infinity as l → ∞, which is unacceptable. The bound we need was first stated in [Hut05, Prob.2.6]:

###### Theorem 1.

For any computable measure μ, any l ∈ ℕ and any x ∈ X^l with μ(x) > 0, it holds that

Σ_{t=l+1}^∞ E[ s_t | ω_{1:l} = x ] ≤+ ( K(μ|x) + K(l) ) · ln 2.

###### Proof.

For every we define the following function of . For ,

For , we extend by defining . It is easy to see that is an enumerable semimeasure. By the definition of , we have

for any and . Now let and . Let us define a computable measure . Then

Taking the logarithm, after trivial transformations, we get

To complete the proof, it remains to note that the complexity of the constructed semimeasure is bounded by K(μ|x) + K(l) up to an additive constant. ∎

###### Corollary 2.

The future deviation of M from μ is bounded by

For s_t being the squared (absolute) distance, the Hellinger distance, or the squared Bayes regret, the total deviation of M from μ is bounded by

###### Proof.

Examples and more motivation. The bounds of Theorem 1 and Corollary 2 prove and quantify the intuition that the more we know about the environment, the better our predictions. We show the usefulness of the new bounds for some deterministic environments ω.

Assume all observations are identical, i.e. ω_1 = ω_2 = ω_3 = ⋯. Further assume that the alphabet X is huge and K(ω_1) =+ log|X|, i.e. ω_1 is a typical/random/complex element of X. For instance, if ω_1 is a color 512×512 pixel image, then K(ω_1) ≈ 512·512·24 bits. Hence the standard bound (6) on the number of errors is huge. Of course, interesting pictures are not purely random, but their complexity is often only a factor 10..100 less, so K(ω_1) is still large. On the other hand, any reasonable prediction scheme observing a few (rather than several thousands) identical images should predict that the next image will be the same. This is what our posterior bound gives: already for small l we have K(μ|ω_{1:l}) =+ 0, since ω_{1:l} contains the image, so the number of future errors is bounded by roughly K(l).

More generally, assume ω = xy, where the initial part x = ω_{1:l} contains all information about the remainder y = ω_{l+1:∞}; for instance, x may be a binary program for y, or y the X-ary expansion of a number computed from x. Sure, given the algorithm for some number sequence, it should be perfectly predictable. Indeed, Theorem 1 implies a future error bound of roughly K(l) (small if, e.g., l is a simple number such as a power of 2), which can be exponentially smaller than Solomonoff’s bound K(ω). On the other hand, K(l) ≥+ log l for most l, i.e. K(l) is larger than the O(1) that one might hope for.

Logarithmic versus constant accuracy. Thus there is one blemish in the bound. There is an additive correction of logarithmic size in the length of . Many theorems in algorithmic information theory hold to within an additive constant, sometimes this is easily reached, sometimes with difficulty, sometimes one needs a suitable complexity variant, and sometimes the logarithmic accuracy cannot be improved [LV97]. The latter is the case with Theorem 1:

###### Lemma 3.

For binary alphabet X = {0,1}, for any positive computable measure μ, there exists a computable sequence ω such that for any l

###### Proof.

Let us construct such a computable sequence ω by induction. Assume that x = ω_{1:n} is already constructed. Since μ is a measure, either μ(x0|x) ≤ 1/2 or μ(x1|x) ≤ 1/2. Since μ is computable, we can find (effectively) a bit a such that μ(xa|x) ≤ 2/3. Put ω_{n+1} := a.

Let us estimate the relevant complexities. Since ω is computable, the set A := {ω_{<n} ω̄_n : n ≥ 1} is decidable. This set is prefix-free. Therefore μ restricted to A is an enumerable function with Σ_{x∈A} μ(x) ≤ 1, and the claim follows from the coding theorem: K(ω_{<n} ω̄_n) ≤+ −log μ(ω_{<n} ω̄_n). Since μ(ω̄_n | ω_{<n}) ≥ 1/3 by construction, we get the required bound for any n.

∎

A constant fudge is generally preferable to a logarithmic one for quantitative and aesthetic reasons. It also often leads to particular insight and/or interesting new complexity variants (which will be the case here). Though most complexity variants coincide within logarithmic accuracy (see [Sch00, Sch02a] for exceptions), they can have very different other properties. For instance, Solomonoff complexity is an excellent predictor, but monotone complexity can be exponentially worse and prefix complexity fails completely [Hut03c, Hut06].

Exponential bounds. Bayes is often approximated by MAP or MDL. In our context this means approximating the mixture M by the MAP/MDL term 2^{−Km}, with exponentially worse bounds (in deterministic environments) [Hut03c]. (Intuitively, since an error with Bayes eliminates half of the environments, while MAP/MDL may eliminate only one.) Also for more complex “reinforcement” learning problems, bounds can be 2^{K(ω)} rather than K(ω) due to sparser feedback. For instance, for a sequence ω, if we do not observe ω_t but only receive a reward if our prediction was correct, then the only way a universal predictor can find ω is by trying out all possibilities and making (in the worst case) 2^{K(ω)} errors. Posterization allows us to boost such gross bounds to useful bounds 2^{K(ω|ω_{1:l})}. But in general, additive logarithmic corrections as in Theorem 1 also exponentiate and lead to bounds polynomial in l, which may be quite sizeable. Here the advantage of a constant correction becomes even more apparent [Hut05, Problems 2.6, 3.13, 6.3 and Section 5.3.3].

## 5 More Bounds and New Complexity Measure

Lemma 3 shows that the bound in Theorem 1 is attained for some binary strings. But for other binary strings the bound may be very rough. (Similarly, K(x) is greater than ℓ(x) infinitely often, but K(x) ≪ ℓ(x) for many “interesting” x.) Let us try to find a new bound, which does not depend on the length of x.

First observe that, in contrast to the unconditional case (6), K(μ|x) ln 2 alone is not an upper bound on the future loss (again by Lemma 3). Informally speaking, the reason is that M can predict the future very badly if the past is not “typical” for the environment (such pasts have low μ-probability, therefore in the unconditional case their contribution to the expected loss is small). So, it is natural to bound the loss in terms of the randomness deficiency d_μ(x), which is a quantitative measure of “typicalness”.

###### Theorem 4.

For any computable measure μ and any x it holds

E[ 2^{d_μ(ω_{1:n})} | ω_{1:l} = x ] ≤× 2^{d_μ(x)} for all n ≥ l := ℓ(x).

Theorem 4 is a variant of the “deficiency conservation theorem” from [VSU05]. We do not know who was the first to discover this statement and whether it was published (the special case where μ is the uniform measure was proved by An. Muchnik as an auxiliary lemma for one of his unpublished results; then A. Shen placed a generalized statement in the (unfinished) book [VSU05]).

Now, our goal is to replace K(μ) in the last bound by a conditional complexity of μ given x. Unfortunately, the conventional conditional prefix complexity K(μ|x) is not suitable:

###### Lemma 5.

There is a constant c such that for any n, there are a computable measure μ and a string x such that

###### Proof.

For ω ∈ X^∞, define a deterministic measure μ such that μ is equal to 1 on the prefixes of ω and is equal to 0 otherwise.

Let . Then , , . Also and (as in the proof of Lemma 3) , and . Trivially,

(One can obtain the same result also for non-deterministic μ, for example, taking μ mixed with the uniform measure.) ∎

Informally speaking, in Lemma 5 we exploit the fact that K(μ|x) can use the information about the length of the condition x. Hence K(μ|x) can be small for a certain x, while it is large for some (actually almost all) prolongations of x. But in our case of sequence prediction, the length of the observed past grows through all intermediate values and cannot contain any relevant information. Thus we need a new kind of conditional complexity.
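The length-exploiting trick behind Lemma 5 can be made concrete: a single fixed program may output the *length* of its condition, so its output necessarily changes when the condition is prolonged. This toy sketch (our own illustration) shows why such a program violates the consistency required of the new complexity:

```python
# Toy illustration (our own construction) of the problem behind Lemma 5:
# a conditional "program" may read the LENGTH of its condition, so its
# output changes under prolongation -- exactly what a monotone-in-condition
# complexity must forbid.

def length_decoder(x):
    """One fixed (hence O(1)-size) program: output the length of x in binary."""
    return bin(len(x))[2:]

x = "0" * 5
assert length_decoder(x) == "101"         # given x, it prints y = "101"
assert length_decoder(x + "00") == "111"  # given a prolongation, a different y

# A consistent (monotone) program must output the same y on x and on every
# prolongation of x, so this trick is unavailable for the new complexity.
```

This is why the definition below makes the machine read only a prefix of the condition tape: whatever it outputs after reading that prefix, it outputs for every prolongation as well.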

Consider a Turing machine T with two input tapes. Inputs are
provided without delimiters, so the size of the input is defined by
the machine itself. Let us call such a machine *twice
prefix*. We write T(p, y) = z if machine T, given a sequence
beginning with p on the first tape and a sequence beginning with
y on the second tape, halts after reading exactly p and y
and prints z to the output tape. (Obviously, if T(p, y) = z, then
the computation does not depend on the contents of the input tapes
after p and y.) We define

K_T(z | y*) := min{ ℓ(p) : T(p, y′) = z for some prefix y′ of y }.

Clearly, K_T(z | y*) is an enumerable from above function of T, y, and z. Using a standard argument [LV97], one can show that there exists an optimal twice prefix machine U in the sense that for any twice prefix machine T

K_U(z | y*) ≤ K_T(z | y*) + c_T

for all y and z, where the constant c_T depends on T but not on y and z.

###### Definition 6.

*Complexity monotone in conditions* is defined for some fixed
optimal twice prefix machine U as

K(z | y*) := K_U(z | y*).

Here the * in K(z | y*) is a syntactical part of the complexity notation, though one may think of K(z | y*) as the minimal length of a program that produces z given any prolongation of y.

###### Theorem 7.

For any computable measure μ and any x with μ(x) > 0 it holds

Σ_{t=l+1}^∞ E[ s_t | ω_{1:l} = x ] ≤+ ( K(μ | x*) + K(d_μ(x)) ) · ln 2, where l = ℓ(x).

###### Note.

One can get slightly stronger variants of Theorems 1 and 7 by replacing the complexity of a standard code of μ by more sophisticated values. First, in any effective encoding there are many codes for every μ, and in all the upper bounds (including Solomonoff’s one) one can take the minimum of the complexities of all the codes for μ. Moreover, in Theorem 1 it is sufficient to take the complexity of the conditional measure μ(·|x) (and it is sufficient that μ(·|x) is enumerable, while μ itself can be incomputable). For Theorem 7 one can prove a similar strengthening: the complexity of μ is replaced by the complexity of any computable function that is equal to μ on all prefixes and prolongations of x.

To demonstrate the usefulness of the new bound, let us again consider some deterministic environment ω. For μ = ω and x = ω_{1:l}, Theorem 1 gives the bound K(μ|x) + K(l). Consider the new bound K(μ|x*) + K(d_μ(x)). Since μ is deterministic, we have