## Yates’s Algorithm for the Zeta Transform

This is my first post in the folklore knowledge category, about an algorithm first used by Yates. I stumbled over this technique while reading the beautiful paper “Determinant Sums for Undirected Hamiltonicity” by A. Björklund (more on this in a few days). Since I could not find an easy-to-understand presentation of this technique, I decided to write my own.

When designing algorithms, one is sometimes faced with the problem of computing the so-called Zeta Transform $\hat{f}$ of a function $f:2^N\rightarrow R$, i.e. $f$ maps subsets of a given ground set $N$ to elements of a fixed ring $R$. The Zeta Transform is defined as

$\hat{f}(Y):=\sum_{X\subset Y}f(X)\hspace{1cm}\forall Y\subset N\enspace.$

Hence, the Zeta Transform is an operator mapping a function $f:2^N\rightarrow R$ onto another function $\hat{f}:2^N\rightarrow R$. The naive way of computing $\hat{f}$ for all $Y\subset N$ would be to simply compute $\hat{f}(Y)$ from scratch for every $Y$. In terms of ring additions, this yields a complexity of

$\sum_{Y\subset N}2^{|Y|}=\mathcal{O}(3^{|N|})$

assuming that the evaluation of the function $f$ itself can be done in unit time.
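To make the baseline concrete, here is a minimal sketch of the naive computation. It assumes $N=\{0,\ldots,n-1\}$, subsets encoded as bitmasks, and $f$ given as a list of $2^n$ ring elements (plain integers here); the name `zeta_naive` is my own choice, not from the paper.

```python
def zeta_naive(f, n):
    """Compute f_hat(Y) = sum of f(X) over all X subset of Y, for every Y.

    f is a list of length 2**n indexed by bitmask; costs O(3**n) additions,
    since each Y contributes 2**|Y| terms.
    """
    f_hat = [0] * (1 << n)
    for Y in range(1 << n):
        # Enumerate all subsets X of Y with the standard bitmask trick.
        X = Y
        while True:
            f_hat[Y] += f[X]
            if X == 0:
                break
            X = (X - 1) & Y
    return f_hat
```

For example, with $n=2$ and $f$ given as `[1, 2, 3, 4]`, the full set $Y=\{0,1\}$ receives the sum of all four values.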

Yates’s algorithm gives a more efficient way to do so. Clearly, the naive computation repeats a lot of partial sums, which should be avoided. Essentially, the trick is to derive a simple recursion formula that admits dynamic programming, which yields the following result.

Theorem (Yates’s Fast Zeta Transform). For every function $f:2^N\rightarrow R$, the Zeta Transform $\hat{f}(Y)$ can be computed for all $Y\subset N$ with $\mathcal{O}(|N|\cdot 2^{|N|})$ ring additions.

Proof. For convenience we write $N=\{1,\ldots,n\}$. For every $Y\subset N$ and $i=0,\ldots,n$ we define

$g_i(Y):=\sum_{Y\setminus\{1,\ldots,i\}\subset X\subset Y}f(X)$

describing partial sums over all subsets $X\subset Y$ that contain every element of $Y\setminus\{1,\ldots,i\}$. Obviously we obtain $g_0(Y)=f(Y)$ for every $Y$, and $g_n(Y)=\hat{f}(Y)$. Now the important observation is the following recursion formula for $g_i(Y)$:

$g_i(Y)=\begin{cases}g_{i-1}(Y)&\text{for }i\notin Y\\g_{i-1}(Y)+g_{i-1}(Y\setminus\{i\})&\text{otherwise}\end{cases}\enspace.$

Clearly, for $i\notin Y$ we have $g_i(Y)=g_{i-1}(Y)$, since the conditions on $X$ coincide. For $i\in Y$, split the subsets $X$ occurring in $g_i(Y)$ by whether they contain $i$: those containing $i$ are exactly the subsets counted by $g_{i-1}(Y)$, while those avoiding $i$ are exactly the subsets counted by $g_{i-1}(Y\setminus\{i\})$. This explains the second case of the recursion formula. Now, the following dynamic programming algorithm finally proves the theorem.

Algorithm.
Input: $f,N$
Output: $\hat{f}(Y)$ for all $Y\subset N$

• For all $Y\subset N$ set $g_0(Y)=f(Y)$.
• For $i=1,\ldots,n$:
  • For all $Y\subset N$, compute $g_i(Y)$ from $g_{i-1}$ using the above recursion formula.
• Output $g_n(Y)$ for all $Y\subset N$.
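The algorithm above can be sketched in a few lines. This is a minimal sketch under the same assumptions as before ($N=\{0,\ldots,n-1\}$, subsets as bitmasks, $f$ as a list of integers); since $g_i$ only depends on $g_{i-1}$, a single array can be updated in place, and the function name `zeta_fast` is my own.

```python
def zeta_fast(f, n):
    """Yates's fast zeta transform: O(n * 2**n) ring additions.

    Returns f_hat with f_hat[Y] = sum of f[X] over all submasks X of Y.
    """
    g = list(f)                   # g_0(Y) = f(Y)
    for i in range(n):            # process element i of the ground set
        bit = 1 << i
        for Y in range(1 << n):
            if Y & bit:           # i in Y: g_i(Y) = g_{i-1}(Y) + g_{i-1}(Y \ {i})
                g[Y] += g[Y ^ bit]
            # i not in Y: g_i(Y) = g_{i-1}(Y), nothing to do
    return g                      # g_n(Y) = f_hat(Y)
```

Note that the in-place update is safe because, within round $i$, the value read at `Y ^ bit` (a mask without bit $i$) is never itself modified in that round.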

The first step takes $2^{|N|}$ evaluations of $f$, and the loop performs at most $|N|\cdot 2^{|N|}$ ring additions in total, which proves the theorem.