Algorithm and Complexity 2, year 2024

Non-deterministic Turing Machine

A non-deterministic Turing machine (NTM) is a tuple \((Q, \Sigma, q_0, q_a, q_r, \delta_0, \delta_1)\) where:

- \(Q\) is a finite set of states;
- \(\Sigma\) is a finite alphabet;
- \(q_0 \in Q\) is the initial state;
- \(q_a \in Q\) is the accepting state and \(q_r \in Q\) is the rejecting state;
- \(\delta_0\) and \(\delta_1\) are two transition functions.

Contrary to DTMs, NTMs have two transition functions. Intuitively, this means that given an NTM \(M\) and a configuration \(c\) of \(M\), there are two possible configurations following \(c\): \(\delta_0(c)\) and \(\delta_1(c)\).

The memory model is the same as in the original model of computation: an NTM is equipped with a bi-infinite tape of the form \(\_^\infty u \_^\infty\) and a head that can read and write on the tape. We describe configurations in the same way as for DTMs.

We use the same notations as in the previous lecture, that is, we denote by \(\delta_0(c)\) the configuration following \(c\) by \(\delta_0\) (similarly for \(\delta_1(c)\)).

Executions

Similarly to DTMs, the initial configuration of an NTM \(M\) on input \(u\) is \(I(u)=(q_0, u, 0)\): the head is positioned at cell \(0\), the state is the initial state \(q_0\), and the tape contains only \(u\).

However, starting from this initial configuration, an NTM may have several possible executions.

An execution of \(M\) on input \(u\) is a sequence of configurations \((c_i)\) such that \(c_0=I(u)\) and \(c_{i+1} \in \{\delta_0(c_i), \delta_1(c_i)\}\) whenever the state of \(c_i\) is neither \(q_a\) nor \(q_r\).

The NTM \(M\) accepts the word \(u\) if and only if there exists an execution of \(M\) on input \(u\) that reaches \(q_a\). The NTM \(M\) rejects the word \(u\) if and only if every execution of \(M\) on input \(u\) reaches state \(q_r\).
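This existential notion of acceptance can be made concrete with a small Python sketch (an illustration, not the lecture's formalism): configurations are abstract tuples whose first component is the state, and the two transition functions are ordinary Python functions. Acceptance amounts to exploring the binary tree of executions up to a depth bound.

```python
# Minimal sketch (assumed representation): a configuration is a tuple whose
# first component is the current state; delta0 and delta1 map a configuration
# to its successor configuration.

def accepts(initial, delta0, delta1, max_steps):
    """True iff some execution of length at most max_steps reaches state 'qa'."""
    def explore(config, steps):
        state = config[0]
        if state == 'qa':                 # an accepting execution exists
            return True
        if state == 'qr' or steps == max_steps:
            return False                  # this branch rejects or runs out of steps
        # a configuration has two possible successors: delta0(c) and delta1(c)
        return explore(delta0(config), steps + 1) or explore(delta1(config), steps + 1)
    return explore(initial, 0)

# Hypothetical toy machine on configurations ('q', k): delta0 increments the
# counter, delta1 halts, accepting iff the counter is even.
d0 = lambda c: ('q', c[1] + 1)
d1 = lambda c: ('qa' if c[1] % 2 == 0 else 'qr', c[1])
```

Note the asymmetry of the definitions: one accepting branch suffices to accept, while rejection requires every branch to reach \(q_r\).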

An NTM \(M\) accepts a language \(L\) if \(L\) is the set of words accepted by \(M\). An NTM \(M\) rejects a language \(L\) if \(L\) is the set of words rejected by \(M\). An NTM \(M\) decides a language \(L\), written \(L = L(M)\), if it accepts \(L\) and rejects \(\Sigma^* \setminus L\).

Examples

Variations of NTM

As for DTM, one could imagine variations of the previous model of computation. For example:

Both variants define the same notion of computation.

However, it is no longer obvious how to define a variation of NTMs able to compute a function rather than merely decide a language. Indeed, there may be several accepting executions, each writing a different word on the output tape.

For NTMs, we will thus only consider decision problems.

Computational power of NTM

NTMs and DTMs have the same computational power: for every language \(L\) recognized by an NTM \(M\), there exists a DTM \(M'\) such that \(L = L(M')\).

The idea of the proof is to simulate all possible executions of the NTM with a DTM. This can be done by encoding the set of configurations the machine \(M\) can reach after \(t\) steps (proof sketch on blackboard).
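The simulation idea can be sketched as follows (assumed encoding, not the exact blackboard construction): a deterministic procedure tracks the *set* of all configurations reachable after \(t\) steps and advances the whole set one step at a time.

```python
# Sketch (assumed representation): configurations are hashable tuples whose
# first component is the state; delta0/delta1 are the two transition functions.

def simulate_deterministically(initial, delta0, delta1, max_steps):
    """Breadth-first deterministic simulation of every execution of the NTM."""
    level = {initial}                         # configurations reachable after t steps
    for _ in range(max_steps):
        if any(c[0] == 'qa' for c in level):  # some execution accepts
            return True
        active = [c for c in level if c[0] not in ('qa', 'qr')]
        # each live configuration c has two successors: delta0(c) and delta1(c)
        level = {d(c) for c in active for d in (delta0, delta1)}
        if not level:
            return False                      # every execution has halted and rejected
    return any(c[0] == 'qa' for c in level)

# Hypothetical toy machine: delta0 increments a counter, delta1 halts,
# accepting iff the counter is even.
d0 = lambda c: ('q', c[1] + 1)
d1 = lambda c: ('qa' if c[1] % 2 == 0 else 'qr', c[1])
```

Each level can double in size, which is exactly where the exponential cost of the deterministic simulation comes from.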

Complexity

Similarly to DTMs, one can define a notion of complexity for NTMs. Given an NTM \(M\), we say that it has worst-case time complexity \(f\colon\mathbb{N}\to\mathbb{N}\) if for every word \(u\), the length of every execution of \(M\) on input \(u\) is less than \(f(|u|)\).
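Since every execution must be short, the relevant quantity on a given input is the length of the *longest* execution. Under the assumption (hypothetical here) that the machine halts on every execution, it can be computed recursively:

```python
# Sketch (assumed representation): configurations are tuples whose first
# component is the state; the machine is assumed to halt on every execution.

def longest_execution(config, delta0, delta1):
    """Length of the longest execution starting from config."""
    if config[0] in ('qa', 'qr'):   # halting configuration: no further step
        return 0
    return 1 + max(longest_execution(delta0(config), delta0, delta1),
                   longest_execution(delta1(config), delta0, delta1))

# Hypothetical toy machine: delta0 counts down to 0 and then accepts,
# delta1 rejects immediately.
d0 = lambda c: ('qa', 0) if c[1] == 0 else ('q', c[1] - 1)
d1 = lambda c: ('qr', c[1])
```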

Given a language \(L\) and a function \(f\colon\mathbb{N}\to\mathbb{N}\), we say that \(L\) has worst-case non-deterministic time complexity \(f\) if there exists an NTM \(M\) deciding \(L\) with worst-case time complexity \(f\).

Given a function \(f\colon\mathbb{N}\to\mathbb{N}\), we denote by \(\textrm{NTIME}(f)\) the class of languages with worst-case non-deterministic time complexity \(O(f)\).

Observe that, contrary to DTIME, the notion of NTIME (and more generally the notion of languages decided by NTMs) is not closed under complement. That is, \(L \in \textrm{NTIME}(f)\) does not imply that \(\Sigma^* \setminus L \in \textrm{NTIME}(f)\). Interchanging \(q_a\) and \(q_r\) does not work: a word with both an accepting and a rejecting execution would be accepted by both the original machine and the swapped one.

At first glance, non-deterministic time is more powerful than deterministic time, as witnessed by the following upper bound, which is essentially the best known to date:

Theorem: \(\textrm{DTIME}(f) \subseteq \textrm{NTIME}(f) \subseteq \textrm{DTIME}(2^{O(f)})\).
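The first inclusion is immediate (take \(\delta_0 = \delta_1\)). The second can be understood by a counting argument, a sketch rather than the full proof: a machine running in time \(f(n)\) has at most \(2^{f(n)}\) executions, each of length at most \(f(n)\), and a DTM can simulate them all one after the other:

```latex
% Cost of deterministically simulating every execution:
\[
  \underbrace{2^{f(n)}}_{\text{number of executions}}
  \times
  \underbrace{O\big(f(n)\big)}_{\text{cost of one execution}}
  \;=\; 2^{O(f(n))}.
\]
```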

Non determinism in high level languages

Non-deterministic time complexity classes behave quite differently from what we expect from high-level languages, mainly because processors are innately deterministic.

The notion of non-determinism could however be introduced in such a language by adding a “guess” instruction:

b = GUESS()

In this case, the variable \(b\) receives a value in \(\{0,1\}\). When guesses are used in a function returning either true or false, the function is considered to return true if and only if there is a way of setting the different guesses so that it returns true.

In particular, non-determinism makes brute-force search quite easy!
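Deterministically, GUESS can be emulated by enumerating every possible sequence of guessed bits, mirroring the exponential-time simulation of an NTM. A minimal sketch with hypothetical helper names (not an existing library's API):

```python
from itertools import product

# Sketch: replace each call to GUESS() by one bit of a pre-chosen bit
# sequence, and try all 2^k sequences.

def exists_accepting_run(check, num_guesses):
    """True iff some assignment of the guessed bits makes check return True."""
    return any(check(bits) for bits in product((0, 1), repeat=num_guesses))

# Example: "guess" which values to pick so that they sum to a target
# (subset sum), one guessed bit per value.
def subset_sum(values, target):
    return exists_accepting_run(
        lambda bits: sum(v for v, b in zip(values, bits) if b) == target,
        len(values),
    )
```

The guessing program is short and natural; all the exponential cost is hidden in the enumeration of the \(2^k\) bit sequences.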

See here for an actual way to program with non-determinism in Python.

Exercise