Algorithmes et Complexité 2, année 2023

Bibliography

Model of computation

A computation model is a formal way of representing abstract computation. Most models aim to represent, more or less faithfully, the computations done by an actual physical machine, abstracting away technicalities while giving a meaningful notion of the cost of performing a given computation in the model.

Classical models of computation are discrete machines: their internal state can be described by a finite amount of information. Other types of models exist, like analog computers (not in the scope of this lecture).

Alphabet and words

An alphabet \(\Sigma\) is a finite set of symbols, for example \(\{a,b,c,d\}\), \(\{0,1\}\), etc. A word \(w=u_1\dots u_k\) on \(\Sigma\) is a finite sequence of elements of \(\Sigma\). We denote by \(\Sigma^*\) the set of words on \(\Sigma\). For example, \(001100\) is an element of \(\{0,1\}^*\), that is, a word on \(\{0,1\}\).
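For instance, a small Python sketch enumerating a finite slice of \(\Sigma^*\) (the helper name `words_up_to` is ours, not standard):

```python
from itertools import product

def words_up_to(sigma, k):
    """All words over the alphabet sigma of length at most k,
    i.e. a finite slice of Sigma*."""
    return ["".join(w) for n in range(k + 1) for w in product(sigma, repeat=n)]

words_up_to("01", 2)
# -> ['', '0', '1', '00', '01', '10', '11']
```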

Turing machines

Turing machines are one of the most classical models of computation. The model is neither especially practical nor elegant, but it has a huge historical importance and is the basis of classical Complexity Theory.

A (deterministic) Turing machine is a tuple \((Q, \Sigma, q_0, q_a, q_r, \delta )\) where:

- \(Q\) is a finite set of states;
- \(\Sigma\) is a finite alphabet, extended to a tape alphabet \(\Gamma=\Sigma\cup\{\_\}\) where \(\_\) is the blank symbol;
- \(q_0\in Q\) is the initial state, \(q_a\in Q\) the accepting state and \(q_r\in Q\) the rejecting state;
- \(\delta\colon (Q\setminus\{q_a, q_r\})\times\Gamma\to Q\times\Gamma\times\{L, R, S\}\) is the transition function.

Memory model

This computational model is equipped with a bi-infinite tape (i.e. a function from \(\mathbb{Z}\) to \(\Gamma\)) representing the memory. The machine has a head that moves on the tape, reading and writing values. At a given time, the symbol under the head and the current state are used to look up in the transition function the action to perform: the symbol to write at the current position, the new state to reach and the head movement (left, right or stay).

To be a valid memory configuration, we add the extra constraint that the content of the bi-infinite tape must be of the form \(\_^\infty u \_^\infty\) with \(u\in \Sigma^*\). That is, the blank symbol never appears except on the sides. Thanks to this constraint we can call \(u\) the tape content, which is well defined and finite.
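A sketch, in Python, of extracting the tape content while checking the validity constraint (the sparse-dictionary representation of the tape is our choice, not part of the definition):

```python
BLANK = "_"

def tape_content(tape):
    """Given a tape as a sparse dict {position: symbol} (missing cells are
    blank), return the word u such that the tape is _^inf u _^inf,
    rejecting tapes with a blank strictly inside the content."""
    nonblank = [i for i, s in tape.items() if s != BLANK]
    if not nonblank:
        return ""
    lo, hi = min(nonblank), max(nonblank)
    u = [tape.get(i, BLANK) for i in range(lo, hi + 1)]
    if BLANK in u:
        raise ValueError("invalid configuration: blank inside the tape content")
    return "".join(u)
```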

Configurations

Given a Turing machine \(M=(Q, \Sigma, q_0, q_a, q_r, \delta)\), a configuration of \(M\) is a tuple \((q, u, i)\) where \(q\) is a state, \(i\) is an integer called the head position and \(u\) is a word of \(\Sigma^*\) called the tape content.

Given a configuration \(c=(q, u, i)\) of \(M\) with \(q\not\in\{q_a, q_r\}\), let \((p, a, m)=\delta(q, s)\), where \(s\) is the symbol at position \(i\) of the tape (the blank symbol if \(i\) falls outside \(u\)). We define \(d=(p,v,j)\) to be the configuration where \(v\) is the tape content after writing \(a\) at position \(i\), and \(j\) is \(i-1\), \(i+1\) or \(i\) depending on whether \(m\) is left, right or stay.

We say that \(d\) is the next configuration after \(c\) by \(M\). We abuse notation and denote it by \(\delta(c)\).

Executions

Given a word \(u\in \Sigma^*\), we denote by \(I(u)=(q_0, u, 0)\) the initial configuration of \(M\) with the tape initialized with the word \(u\) written from position \(0\) of the tape. We say in this case that \(u\) is the input of the Turing machine.

The execution of \(M\) on input \(u\) is the sequence of configurations \((c_i)\) defined by: \(c_0=I(u)\) and \(c_{i+1} = \delta(c_i)\) if the state of \(c_i\) is not \(q_a\) nor \(q_r\) and \(c_{i+1}=c_i\) otherwise.

The Turing machine \(M\) accepts the word \(u\) if there exists \(i\) such that \(c_i\) has state \(q_a\). \(M\) rejects the word \(u\) if there exists \(i\) such that \(c_i\) has state \(q_r\).
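The definitions above can be sketched as a small Python simulator (a toy illustration under our own conventions, not the course's official implementation; the transition function is given as a dict):

```python
BLANK = "_"

def run(delta, q0, qa, qr, u, max_steps=10_000):
    """Simulate a deterministic Turing machine on input u.
    delta maps (state, symbol) to (new_state, written_symbol, move),
    with move in {"L", "R", "S"}.  Returns True if the machine reaches
    q_a and False if it reaches q_r."""
    tape = dict(enumerate(u))     # sparse tape, missing cells are blank
    q, i = q0, 0                  # initial configuration I(u)
    for _ in range(max_steps):
        if q == qa:
            return True
        if q == qr:
            return False
        q, a, move = delta[(q, tape.get(i, BLANK))]
        tape[i] = a
        i += {"L": -1, "R": 1, "S": 0}[move]
    raise RuntimeError("no decision within max_steps")

# A machine deciding the words over {0, 1} that start with 0:
delta = {
    ("q0", "0"): ("qa", "0", "S"),
    ("q0", "1"): ("qr", "1", "S"),
    ("q0", BLANK): ("qr", BLANK, "S"),
}
```

For instance, `run(delta, "q0", "qa", "qr", "01")` returns `True` while `run(delta, "q0", "qa", "qr", "10")` returns `False`.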

Given a Turing machine \(M\), the language \(A\) accepted by \(M\) is the set of words it accepts. The language \(R\) rejected by \(M\) is the set of words it rejects.

If \(A = \Sigma^* \setminus R\), that is, if for every word \(u\) of \(\Sigma^*\), \(M\) reaches state \(q_a\) or \(q_r\) at some point, we say that \(M\) decides \(A\), and we denote this language by \(L(M)\).

Given any language, we say it is Turing-Computable if there exists a Turing machine that decides it.

If the set of words accepted by \(M\) is \(L\) but the set of words rejected by \(M\) is not \(L^c\), we say that \(M\) semi-decides \(L\). Remark that this can happen when the Turing machine loops forever on some inputs.

Examples

Turing-Computable

There exist languages which are not Turing-Computable, by a simple counting argument: the set of subsets of \(\Sigma^*\) is not countable, but the set of Turing machines is countable.

Alternate model of computation

Turing variations

More details in reference books.

Non-Turing variant

We have seen that Turing machines are simple to describe but not really close to the actual way computations are performed by computers. Computers are closer to the Von Neumann architecture, with a different memory model where a memory cell can be accessed directly by jumping to its address, rather than by slowly moving a head to it.

It makes a huge difference in practice since, for instance, a Turing machine cannot perform binary search efficiently in a sorted array.
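For contrast, with random access, binary search only needs \(O(\log n)\) probes into the array; a minimal sketch:

```python
def binary_search(arr, x):
    """Return an index of x in the sorted list arr, or None.
    Each iteration jumps directly to the middle cell: O(log n) probes,
    something a one-head Turing machine cannot do without walking the tape."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid
        if arr[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return None
```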

A model closer to reality has been formalized: the RAM (Random Access Machine) model. It is the canonical model on which complexity theory is built, but it is complicated and full of caveats.

Many other alternative models of computation exist to take into account various notions of computability and their associated cost analysis.

Python as a model of computation

Unlike compiled programming languages, Python is executed through a virtual machine, that is, a model of computation simulated by computers. It is an intermediate model of computation with its own machine-like definition, memory model, etc.

We can access the machine code of a Python function directly within Python:

import dis
def hello_world():
  print("Hello World")
for i in dis.Bytecode(hello_world):
  s = i.opname
  if i.argrepr:
    s += f" ({i.argrepr})"
  print(s)
which prints the following (the exact opcodes vary with the CPython version; this output is from 3.11):

RESUME
LOAD_GLOBAL (NULL + print)
LOAD_CONST ('Hello World')
PRECALL
CALL
POP_TOP
LOAD_CONST (None)
RETURN_VALUE

Computability

Some machines can simulate other machines. For example, our computers are able to simulate the Python virtual machine. The Python virtual machine can simulate a Turing machine, and it is even easy to do so. Several external modules offer classes to simplify this, such as turing_machine.

If we abstract away memory constraints, it is thus possible to simulate a Turing machine on a Python virtual machine. Hence, any Turing-Computable language is also computable by a Python program (and thus by the actual machine executing the code).

It can be shown (but it is not completely easy) that a Turing machine can also simulate a Python program. Hence, the two notions of computation represented by both Turing Machine and Python programs actually match.

This fact is specific neither to Python nor to Turing machines, but holds for essentially any discrete model of computation. We can (probably) simulate any of those models with a Turing machine, and if these models are expressive enough they can also simulate a Turing machine.

It is not really known whether any physical machine can compute more than a Turing machine (nor whether the question can even be made formal). It is however widely believed to be a universal limit of computation, one that arises in many different shapes, as expressed by the Church-Turing thesis.

More on the subject.

When a given model of computation has the ability to simulate Turing machines, we say that it is Turing-hard. If the converse is also true, that is, if it can also be simulated by Turing machines, we say that it is Turing-complete.

The terminology behind hard and complete will be investigated further in upcoming lectures.

Universal Turing machine

It is possible to simulate a Turing machine directly with a Turing machine. This is the main result of Turing's seminal paper (1936).

It has many impacts on the overall comprehension of the nature of computation. In particular, it allows us to prove the undecidability of the Halting problem: there exists no Turing machine that can decide whether a Turing machine given as input halts on the empty input.

Although the result is stated for Turing machines, it is not really dependent on the model of computation. We can even prove it using Python syntax.

def halt(f, x):
    """
        Return True if f(x) stops in finite time else return False
    """
    pass # I don't know how to do that :(

def g(f):
    if halt(f, f):
        while True:
            pass
    else:
        return

g(g) # Does it stop? Does it loop infinitely?

Exercises

  1. Show that a Turing machine with two tapes can be simulated by a one-tape Turing machine.

  2. Show that a Turing machine with \(k\) tapes can be simulated by a one-tape Turing machine.

  3. Show that Turing-Computable languages are closed under Boolean operations.

Turing-Complete by accident

It often happens that some limited computing power is embedded into some software. Then, step by step, developers add small functionalities to make it more convenient. At some point it becomes Turing-complete by accident.

This website provides a fun list of known accidental Turing-complete models of computation.

Complexity: analysing the cost of computation

One of the key motivations for introducing models of computation is to evaluate the cost of a given computation. The notion of cost should be understood in a very broad way. Depending on the situation, the cost can refer to the time needed to perform a computation, to its memory footprint, to the amount of communication among computers, to the amount of human/computer interactions required to complete the task, etc.

Complexity Theory is the study of such costs in computational models. Classically it focuses on the Turing machine or RAM model, with respect to time and memory usage.

Contrary to what happens with computability, the fact that a model can simulate another model is not very helpful since the simulation can drastically change the cost of the computation.

Time and Space complexity for languages

Given a Turing machine \(M\), we say that it has worst-case time complexity \(f\colon\mathbb{N}\to\mathbb{N}\) if, for any word \(u\), the length of the execution of \(M\) over \(u\) is at most \(f(|u|)\).

Given a Turing-computable language \(L\) and a function \(f\colon\mathbb{N}\to\mathbb{N}\), we say that \(L\) has worst-case time complexity \(f\) if there exists a Turing machine \(M\) deciding \(L\) with worst-case time complexity \(f\).

Several remarks:

In Complexity Theory we often ignore multiplicative constants and focus on asymptotic analysis. Hence, complexity is usually given using Landau notation. We assume these notations are well understood already.

Examples

Exercise

Let \(L\) be the language of words \(\overline{b_1}^2\#\overline{b_2}^2\#\cdots\#\overline{b_n}^2\) for all \(n\in\mathbb{N}\) such that for all \(i\leq n\) we have \(b_i\in\{0, 2^n\}\) and \(b_i=b_{n-i}\).

What is the complexity of \(L\) with one-tape Turing machines and with multi-tape Turing machines?

Time Complexity

DTIME

Given a function \(f\colon\mathbb{N}\to\mathbb{N}\), we denote by \(\textrm{DTIME}(f)\) the class of languages with worst-case time complexity \(O(f)\).

Observe that if \(f\) is dominated as a function by \(g\) then \(\textrm{DTIME}(f)\subseteq \textrm{DTIME}(g)\).

These classes are sensitive to the model of computation.

Example

\(\{a^n b^n\mid n\in \mathbb{N}\}\) is in \(\textrm{DTIME}(n\mapsto n^2)\) but also in \(\textrm{DTIME}(n\mapsto n^3)\).

In the model of Turing Machine with two tapes, \(\{a^nb^n\mid n\in\mathbb{N}\}\) is in \(\textrm{DTIME}(n\mapsto n)\).

Notations

We will use from now on the notation \(\textrm{DTIME}(f(n))\) instead of \(\textrm{DTIME}(n\mapsto f(n))\)

P

We denote by \(\textrm{P}\) the class of languages computable by a Turing machine in polynomial time. Formally, \(\textrm{P}=\bigcup_{d}\textrm{DTIME}(n^d)\). We extend \(\textrm{P}\) to problems with output by considering Turing machines with an output tape, computing functions.

It is a robust complexity class: it is (more or less) independent of the model of computation! Hence, we can prove that a language is in \(\textrm{P}\) by providing a Python function computing it (using only standard operations).

It is considered the class of reasonable problems, although not all problems in \(\textrm{P}\) are reasonable in practice.
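For instance, a Python function deciding \(\{a^n b^n\mid n\in\mathbb{N}\}\) in linear time, which shows the language is in \(\textrm{P}\) (a sketch using only standard operations):

```python
def in_anbn(w: str) -> bool:
    """Decide whether w is of the form a^n b^n, in O(|w|) time."""
    half, rem = divmod(len(w), 2)
    if rem:
        return False                     # odd length: impossible
    return w[:half] == "a" * half and w[half:] == "b" * half
```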

Exercises

Questions

EXP

We denote by \(\textrm{Exp}\) the class of languages computable by a Turing machine in exponential time. Formally, \(\textrm{Exp}=\bigcup_{d} \textrm{DTIME}(2^{n^d})\). As for \(\textrm{P}\), we extend \(\textrm{Exp}\) to functional problems by considering Turing machines with an output tape.
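As an illustration of an exponential-time bound, here is a naive decider that tries all \(2^n\) subsets of its input (Subset Sum is used here purely as an example of our choosing; the brute-force enumeration runs in exponential time):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Naive decider for Subset Sum: enumerate all 2^n subsets of nums
    and test whether one of them sums to target."""
    return any(
        sum(comb) == target
        for r in range(len(nums) + 1)
        for comb in combinations(nums, r)
    )
```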

Question

The hierarchy Theorem

There exists a language which is in \(\textrm{Exp}\) but not in \(\textrm{P}\).

Proof

We prove that there exists a language in \(\textrm{DTIME}(2^{n^2})\) but not in \(\textrm{DTIME}(2^n)\). Since \(\textrm{P}\subseteq \textrm{DTIME}(2^n)\) and \(\textrm{DTIME}(2^{n^2})\subseteq\textrm{Exp}\), this concludes the proof.

We denote by \(\textrm{TM-encode}\) the function that encodes any Turing machine over a binary alphabet, and by \(\textrm{TM-decode}\) the converse function (that takes the encoding of a Turing machine and returns a description of that machine).

Then we define the language \(L\) of pairs \((u, v)\) such that there exists a Turing machine \(N\) with \(u=\textrm{TM-encode}(N)\) and, letting \(n=|u|+|v|\), the execution of \(N\) over \((u, v)\) does not accept within \(2^n\) steps.

Observe that the simulation of \(2^n\) steps of a given Turing machine is doable by a Turing machine in \(n^k2^{n}\) steps, for some \(k\), which shows that \(L\) is indeed in \(\textrm{Exp}\): we perform the simulation and, at each step, increase a step counter. Each simulated step and counter update can be performed in \(n^{O(1)}\) steps, which concludes.

For \(n\) large enough we have \(n^k 2^n < 2^{n^2}\), so \(L\) is in \(\textrm{DTIME}(2^{n^2})\).

Assume by contradiction that \(L\) is decided by a Turing machine \(M\) in \(2^n\) steps. Let \(u=\textrm{TM-encode}(M)\) and let \(v\) be such that \(|u|+|v|=n\). Then we have:

If \(M\) accepts \((u, v)\), then \((u, v)\) is in \(L\); since \(M\) halts within \(2^n\) steps, this means that \(M\) rejects \((u, v)\) within \(2^n\) steps, which means \((u, v)\) is not in \(L\).

If \(M\) rejects \((u, v)\), then \((u, v)\) is not in \(L\), which means that \(M\) accepts \((u, v)\) within \(2^n\) steps, which means \((u, v)\) is in \(L\).

Hence, we reach a contradiction.


Compiled: Tue. 17 Dec. 2024, 14:03:14 CET