# Partial Fourier Sums and Some Probability

These are the lecture notes for a harmonic analysis reading group talk I gave on October 14th, 2019 at UT Austin. They are based on the book *Classical and Multilinear Harmonic Analysis, Vol. 1* by Muscalu and Schlag.

Throughout the following notes, we will be working on ${\mathbb{T}^d : = [0,1]^d}$. Our Fourier transform is defined by

$\displaystyle \widehat{f} (n) : = \int_{\mathbb{T}^d} f(x) e^{-2\pi i n \cdot x} dx.$

One of the fundamental objects in classical harmonic analysis is the partial Fourier series, defined for ${N \in \mathbb{N}}$ by

$\displaystyle S_N f(x) : = \sum_{|n| \leq N} \widehat{f}(n) e^{2\pi i n \cdot x}.$
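As a quick sanity check, these definitions discretize directly. Below is a minimal numerical sketch (assuming numpy is available; the sawtooth test function and grid sizes are arbitrary choices of mine), approximating ${\widehat{f}(n)}$ by a Riemann sum with the convention ${\widehat f(n) = \int_{\mathbb{T}} f(x) e^{-2\pi i n x}\, dx}$ and evaluating a partial sum:

```python
import numpy as np

def fourier_coefficient(f, n, grid=4096):
    """Approximate f-hat(n) = int_T f(x) e^{-2 pi i n x} dx by a Riemann sum."""
    x = np.arange(grid) / grid
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

def partial_sum(f, N, x):
    """S_N f(x) = sum over |n| <= N of f-hat(n) e^{2 pi i n x}."""
    return sum(fourier_coefficient(f, n) * np.exp(2j * np.pi * n * x)
               for n in range(-N, N + 1)).real

saw = lambda x: x - 0.5  # sawtooth: discontinuous on T, so convergence is slow
print(partial_sum(saw, 50, 0.25))  # close to saw(0.25) = -0.25
```

At an interior point of continuity the partial sums converge, with an error of order ${1/N}$ for the sawtooth.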

In view of the Fourier inversion formula, it is natural to ask when ${S_N f \rightarrow f}$, where this limit is taken in ${L^p(\mathbb{T}^d)}$ or pointwise a.e. We have the following basic result for ${L^p}$ convergence:

Theorem 1 The following statements are equivalent for ${p \in [1,\infty)}$:

• ${\sup_{N\in {\mathbb N}} \Vert S_N \Vert_{p \rightarrow p} < \infty}$
• For all ${f \in L^p( \mathbb{T}^d)}$, ${\Vert S_N f - f \Vert_p \rightarrow 0}$

Proof: One direction follows from density of trigonometric polynomials in ${L^p(\mathbb{T}^d)}$, and the other follows from the uniform boundedness principle. $\Box$

By combining the ${L^2}$ theory of the Hilbert transform with its weak-type ${(1,1)}$ bound, Marcinkiewicz interpolation, and duality, one in fact has the following result for ${p \in (1, \infty)}$.

Theorem 2 For any ${p \in(1,\infty)}$, ${\sup_{N\in {\mathbb N}} \Vert S_N\Vert_{p \rightarrow p} < \infty}$.

This result can be further extended to pointwise a.e. convergence, although it is much harder.

Theorem 3 (Carleson ’66, Hunt ’68) For any ${p \in (1, \infty)}$ and ${f \in L^p(\mathbb{T}^d)}$, ${S_Nf(x) \rightarrow f(x)}$ for a.e. ${x}$.

The proof of this theorem is exceptionally complicated, and requires an in-depth analysis of both the frequency and spatial variables simultaneously.

Amazingly, the above convergence results are false on ${L^1(\mathbb{T}^d)}$! To show this, let us assume for simplicity that ${d =1}$. For ${L^1}$ convergence, the failure follows from the representation ${S_Nf = D_N * f}$, where ${D_N}$ is the Dirichlet kernel

$\displaystyle D_N(x) : = \frac{\sin ((2N+1) \pi x) }{\sin(\pi x) }.$

It is an exercise to show that ${\Vert D_N \Vert_{L^1(\mathbb{T})} \sim \log N}$, and this implies that ${\Vert S_N \Vert_{1 \rightarrow 1 } \rightarrow \infty}$ as ${N \rightarrow \infty}$. By Theorem 1, this implies failure of ${L^1}$ convergence of the partial Fourier sums.
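The logarithmic growth of ${\Vert D_N \Vert_{L^1}}$ is easy to see numerically; here is a small sketch (assuming numpy; the grid size is an arbitrary choice) computing the integral by a midpoint rule:

```python
import numpy as np

def dirichlet_l1_norm(N, grid=200_000):
    """Midpoint-rule approximation of ||D_N||_{L^1} = int_0^1 |sin((2N+1) pi x) / sin(pi x)| dx."""
    x = (np.arange(grid) + 0.5) / grid  # midpoints avoid the removable singularity at x = 0
    return np.mean(np.abs(np.sin((2 * N + 1) * np.pi * x) / np.sin(np.pi * x)))

for N in (10, 100, 1000):
    # grows like (4 / pi^2) log N plus a bounded term (the Lebesgue constants)
    print(N, dirichlet_l1_norm(N))
```

The printed values grow slowly but without bound, consistent with the ${\sim \log N}$ claim.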

For the pointwise a.e. convergence, we have the following theorem by Kolmogorov:

Theorem 4 (Kolmogorov ’23) There exists an ${f \in L^1( \mathbb{T})}$ such that ${S_Nf}$ does not converge a.e. as ${N \rightarrow \infty}$.

The rest of this talk will be devoted to proving this theorem.

Recall for a moment the proof of the Lebesgue differentiation theorem via bounding the Hardy-Littlewood maximal function. Maximal functions of this type are fundamental in proving a.e. convergence results. Calderón, Zygmund, and Stein observed that a bound on the relevant maximal function is in fact necessary for a.e. convergence. The exact statement is the following:

Theorem 5 Let ${T_n}$ be a sequence of translation invariant bounded linear operators on ${L^1(\mathbb{T})}$. If the maximal function

$\displaystyle Mf(x) : = \sup_{n \in {\mathbb N}} |T_n f(x)|$

satisfies ${\Vert M f \Vert_\infty < \infty}$ for each trigonometric polynomial ${f}$, then the following implication holds: if for every ${f \in L^1(\mathbb{T})}$ we have ${|\{ x \in \mathbb{T} : Mf (x) < \infty \} | > 0}$, then there exists ${A > 0}$ such that

$\displaystyle |\{ x \in \mathbb{T} : Mf(x) > \lambda \} | \leq \frac{A}{\lambda} \Vert f \Vert_1$

for all ${f\in L^1(\mathbb{T})}$ and ${\lambda > 0}$.

Proof: We proceed by contradiction. Assume the weak-type bound fails: then there exist ${\{f_j\} \subset L^1 ( \mathbb{T})}$ and ${\{ \lambda_j \} \subset (0,\infty)}$ such that ${\Vert f_j \Vert_1 = 1}$ and

$\displaystyle |E_j| = |\{ x\in \mathbb{T} : Mf_j(x) > \lambda_j \} | > \frac{2^j}{\lambda_j}.$

We can make a few simplifications. First, note that for each ${j}$, there exists ${M_j > 0}$ such that

$\displaystyle |\{ x\in \mathbb{T} : \sup_{1 \leq k \leq M_j} |T_kf_j(x)| > \lambda_j \} | > \frac{2^j}{\lambda_j}.$

Next, we may assume that each ${f_j}$ is a trigonometric polynomial. Indeed, the Cesàro means satisfy ${\sigma_Nf \rightarrow f}$ in ${L^1}$, and boundedness of each ${T_k}$ implies that ${T_k \sigma_N f\rightarrow T_k f}$ in ${L^1}$ as ${N\rightarrow \infty}$. Since ${\sigma_N f}$ is a trigonometric polynomial for each ${N \geq 1}$, no generality is lost in assuming each ${f_j}$ is a trigonometric polynomial.
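The ${L^1}$ convergence of Cesàro means used in this reduction can be illustrated numerically; a minimal sketch (assuming numpy; the sawtooth test function and grid are my own choices), using the standard Fejér-weight formula ${\sigma_N f = \sum_{|n| \leq N} (1 - \frac{|n|}{N+1}) \widehat f(n) e^{2\pi i n x}}$:

```python
import numpy as np

M = 8192
x = np.arange(M) / M
f = x - 0.5                # sawtooth on T
fhat = np.fft.fft(f) / M   # fhat[n] approximates f-hat(n) for small |n| (negative n wrap around)

def cesaro(N):
    """Fejer (Cesaro) mean: sigma_N f = sum over |n| <= N of (1 - |n|/(N+1)) f-hat(n) e^{2 pi i n x}."""
    out = np.zeros(M, dtype=complex)
    for n in range(-N, N + 1):
        out += (1 - abs(n) / (N + 1)) * fhat[n] * np.exp(2j * np.pi * n * x)
    return out.real

errs = [np.mean(np.abs(cesaro(N) - f)) for N in (4, 16, 64)]
print(errs)  # the L^1(T) errors shrink as N grows
```

Even though ${f}$ is discontinuous, ${\Vert \sigma_N f - f\Vert_1 \rightarrow 0}$, as the shrinking errors show.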

Now, for each ${j \in {\mathbb N}}$, pick ${m_j\in {\mathbb N}}$ such that ${m_j \leq \lambda_j/2^j < m_j +1}$. Then we have by construction that

$\displaystyle \sum_{j=1}^\infty m_j |E_j| = \infty$

By a Borel-Cantelli type argument, one can find a set of translations ${x_{j,l} \in \mathbb{T}}$ for ${j\in {\mathbb N}}$ and ${1 \leq l \leq m_j}$ such that ${x - x_{j,l}}$ lies in infinitely many ${E_j}$ for a.e. ${x \in \mathbb{T}}$. In other words, the set

$\displaystyle \mathcal{J}_x : = \{ j \in {\mathbb N} : x - x_{j,l} \in E_j \text{ for some } l \in \{1, \dots, m_j\} \}$

is infinite for a.e. ${x \in \mathbb{T}}$. From these ${x_{j,l}}$ and ${f_j}$, we can construct a function which will furnish a contradiction.

Let ${(\Omega, \mathcal{F}, \mathbb{P})}$ be a probability space supporting an iid sequence of coin flips ${X_n \in \{\pm 1\}}$. Define the random variables

$\displaystyle \epsilon_n : = \frac{1}{n^2 m_n} X_n,$

and define the random function

$\displaystyle f(x) : = \sum_{j=1}^\infty \sum_{l =1}^{m_j} \epsilon_j f_j(x - x_{j,l}).$

Note that by the triangle inequality, ${f \in L^1( \mathbb{T})}$ for all ${\omega \in \Omega}$:

$\displaystyle \Vert f \Vert_1 \leq \sum_{j=1}^\infty \sum_{l=1}^{m_j} \frac{1}{j^2 m_j} \Vert f_j \Vert_1 = \sum_{j=1}^\infty \frac{1}{j^2} <\infty.$

We wish to show that, almost surely in the signs, ${Mf(x) = \infty}$ for a.e. ${x}$. To this end, let us fix an ${x \in \mathbb{T}}$ such that ${\mathcal{J}_x}$ is infinite. Note first that by absolute ${L^1}$ convergence of the sum and the boundedness of the operators ${T_n}$, we have, up to a measure zero set in ${\mathbb{T}}$,

$\displaystyle T_n f(x) = \sum_{k=1}^\infty \sum_{l=1}^{m_k} \epsilon_k T_n f_k(x - x_{k,l})$

Now, since the ${T_n}$ are translation invariant (this is what justified the termwise formula above), the maximal function ${M}$ is translation invariant as well. Hence, by the definition of ${E_j}$, for each ${j \in \mathcal{J}_x}$ and the witnessing ${l \in \{1, \dots, m_j\}}$ we have

$\displaystyle Mf_j(x - x_{j,l}) > \lambda_j,$

and hence there exists an ${n(j,x) \in {\mathbb N}}$ such that

$\displaystyle |T_{n(j,x)} f_j(x - x_{j,l})| > \lambda_j.$

Plugging this ${n(j,x)}$ into the above expression, we get

$\displaystyle T_{n(j,x)} f(x) = \sum_{k=1}^\infty \sum_{l=1}^{m_k} \epsilon_k T_{n(j,x)} f_k(x - x_{k,l}).$

From here, we use the randomness of the signs to obtain a lower bound. Note in particular that one of the terms in this sum satisfies ${|\epsilon_{j} T_{n(j,x)} f_{j} (x- x_{j,l})| >\frac{\lambda_j}{j^2 m_j}}$. Conditioning on the signs ${X_k}$ with ${k \neq j}$, the sum takes the form ${a + X_j b}$, and for any ${a, b}$ at least one of ${|a+b|, |a-b|}$ is at least ${|b|}$. Hence, we have by independence that

$\displaystyle \mathop{\mathbb P}\Big( \{ |T_{n(j,x)} f(x) | > \frac{\lambda_j}{j^2 m_j} \} \Big) \geq \frac{1}{2}$
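The sign inequality behind this step is elementary and can be sanity-checked on a toy model; a sketch using only the standard library (the Gaussian distribution of ${a}$ and ${b}$ is an arbitrary choice):

```python
import random

random.seed(0)

# Key inequality: for S = a + X*b with a fair random sign X = +-1,
# |a+b| + |a-b| >= |(a+b) - (a-b)| = 2|b|, so at least one sign choice
# gives |S| >= |b|; hence P(|S| >= |b|) >= 1/2 no matter what a is.
trials, hits = 100_000, 0
for _ in range(trials):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    assert max(abs(a + b), abs(a - b)) >= abs(b) - 1e-12  # deterministic check
    hits += abs(a + random.choice([-1, 1]) * b) >= abs(b)
print(hits / trials)  # empirically at least 1/2
```

The empirical frequency is comfortably above ${1/2}$; the bound ${\geq 1/2}$ is what the proof actually uses.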

Since the set ${\mathcal{J}_x}$ is infinite, and ${\frac{\lambda_j}{j^2 m_j} \geq \frac{2^j}{j^2} \rightarrow \infty}$ (by the choice ${m_j \leq \lambda_j/2^j}$), the reverse Fatou lemma applied to these events gives

$\displaystyle \mathop{\mathbb P} \big( \{ Mf(x) = \infty \} \big) \geq 1/2.$

It can be shown that the event ${F = \{ Mf(x) = \infty \} \in \mathcal{F}}$ above is a tail event. That is,

$\displaystyle F \in \bigcap_{k=1}^\infty \sigma \Big( \bigcup_{n = k}^\infty \sigma(X_n) \Big)$

Hence, by Kolmogorov's 0-1 law,

$\displaystyle \mathop{\mathbb P} \big( \{ Mf(x) = \infty \} \big) =1.$

Since the set of ${x}$ to which this argument applies has full measure, we obtain by Fubini that

$\displaystyle |\{ x \in \mathbb{T} : Mf(x) = \infty \} | = 1$

a.s. in ${\Omega}$ (and hence for at least one ${\omega \in \Omega}$). This contradicts the hypothesis that ${|\{ x \in \mathbb{T} : Mf(x) < \infty \}| > 0}$ for every ${f \in L^1(\mathbb{T})}$, so we are done. $\Box$

With this in hand, we have the following corollary for the Carleson maximal operator

$\displaystyle Cf(x) : = \sup_{N \in {\mathbb N}} |S_Nf(x)|.$

Corollary 6 Assume that there exists a set ${G \subset \mathbb{T}}$ of positive measure such that ${S_Nf(x)}$ converges for every ${f \in L^1(\mathbb{T})}$ and a.e. ${x \in G}$. Then for any complex Borel measure ${\mu}$ on ${\mathbb{T}}$ there exists a constant ${A}$ such that

$\displaystyle |\{x \in \mathbb{T} : C \mu(x) > \lambda \} | \leq \frac{A}{\lambda} \Vert \mu \Vert$

for any ${\lambda > 0}$.

Proof: This follows by convolving ${\mu}$ with the de la Vallée Poussin kernel and applying the previous theorem. $\Box$

With this corollary in hand, one now constructs a measure ${\mu}$ whose partial sums ${S_N \mu}$ grow in such a way that ${C\mu}$ violates the weak ${L^1}$ bound above. To do this, we simply consider measures of the form

$\displaystyle \mu_N : = \frac{1}{N} \sum_{k=1}^N \delta_{x_{k,N} },$

where ${x_{k,N}}$ is close to ${k/N}$. We calculate

$\displaystyle S_n \mu_N (x) = \frac{1}{N} \sum_{k=1}^N \frac{ \sin( (2n +1)\pi ( x- x_{k,N} )) }{\sin(\pi( x - x_{k,N}))}$

By picking the points ${x_{k,N}}$ and ${n}$ appropriately, so that each numerator above has the same sign as the corresponding denominator and all terms are positive, we may obtain that

$\displaystyle |S_n \mu_N (x)| \approx \frac{1}{N} \sum_{k=1}^N \frac{ 1}{|\sin(\pi( x - x_{k,N}))|} \approx \frac{1}{N} \sum_{k=1}^N \frac{ 1}{k/N} = \sum_{k=1}^N \frac{1}{k} \approx \log N.$
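The logarithmic size of this sum can be checked numerically in the idealized situation where every numerator has modulus comparable to ${1}$; a sketch (assuming numpy; the evaluation point ${x = \pi/10 \bmod 1}$ is an arbitrary irrational choice that avoids the grid points ${k/N}$):

```python
import numpy as np

def aligned_size(N, x=np.pi / 10):
    """(1/N) sum_k 1/|sin(pi (x - k/N))|: the size of |S_n mu_N(x)| in the best
    case, when every numerator sin((2n+1) pi (x - x_k)) has modulus about 1."""
    xs = np.arange(1, N + 1) / N
    return np.mean(1.0 / np.abs(np.sin(np.pi * (x - xs))))

for N in (10, 100, 1000):
    print(N, aligned_size(N), np.log(N))  # grows like a constant times log N
```

The bulk of the proof below is devoted to showing that this best case actually occurs for infinitely many ${n}$, for a.e. ${x}$.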

To pick these points, we prove a variant of Kronecker's theorem.

Lemma 7 Assume that ${\theta \in \mathbb{T}^d}$ is incommensurate. That is, assume for all ${n \in {\mathbb Z}^d \setminus \{0\}}$ we have ${n \cdot \theta \notin {\mathbb Z}}$. Then the orbit ${\{ k \theta \mod {\mathbb Z}^d : k \in {\mathbb Z}\} \subset \mathbb{T}^d}$ is dense.

Proof: It suffices to show for all ${f \in \mathcal{C}^\infty( \mathbb{T}^d)}$ we have

$\displaystyle \frac{1}{N} \sum_{k=1}^N f(k \theta) \rightarrow \int_{\mathbb{T}^d} f(x) dx.$

To see this, assume that the orbit is not dense. Picking a “bump function” supported in an open subset of the complement of the orbit's closure would make the left side vanish identically while the right side is positive, contradicting the limit.

To show the above limit holds, simply calculate

$\displaystyle \frac{1}{N} \sum_{k=1}^N f(k \theta) = \frac{1}{N } \sum_{k=1}^N\sum_{\nu \in {\mathbb Z}^d } \widehat{f}(\nu) e^{2\pi i k \theta \cdot \nu} = \widehat{f}(0) + \sum_{\nu \in {\mathbb Z}^d\setminus \{0\}} \widehat{f}(\nu) \, \frac{1}{N} \Bigg( \frac{e^{2\pi i \theta \cdot \nu} - e^{2\pi i (N+1) \theta \cdot \nu}}{1- e^{2\pi i \theta \cdot \nu } }\Bigg)$

and use the decay of ${\widehat{ f} (\nu)}$: each individual term is ${O(1/N)}$, while the tail over large ${|\nu|}$ is controlled uniformly in ${N}$ by ${\sum_\nu |\widehat{f}(\nu)| < \infty}$, so the sum converges to ${0}$ as ${N\rightarrow \infty}$. $\Box$
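The equidistribution statement underlying this proof is easy to observe numerically; a small sketch (assuming numpy; ${\theta = \sqrt{2} \bmod 1}$ and the test function are arbitrary choices):

```python
import numpy as np

theta = np.sqrt(2) % 1                       # irrational, hence incommensurate in T^1
f = lambda x: np.exp(np.sin(2 * np.pi * x))  # a smooth test function on T

integral = np.mean(f(np.arange(1_000_000) / 1_000_000))  # approximates int_T f(x) dx
for N in (100, 10_000):
    orbit_avg = np.mean(f((np.arange(1, N + 1) * theta) % 1))
    print(N, abs(orbit_avg - integral))      # the error shrinks as N grows
```

The orbit averages converge to the integral, which in turn forces density of the orbit.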

With this lemma, we can now construct our ${\mu_N}$.

Lemma 8 There exists a sequence ${\{\mu_N\}_{N\in {\mathbb N}}}$ of probability measures on ${\mathbb{T}}$ such that for each ${N}$,

$\displaystyle (\log N)^{-1} \limsup_{n \rightarrow \infty} | S_n \mu_N(x)| \geq \alpha > 0$

for a.e. ${x \in \mathbb{T}}$ for some universal constant ${\alpha}$.

Proof: For each ${N}$ and ${1 \leq j \leq N}$, pick ${x_{j,N} \in \mathbb{T}}$ such that

$\displaystyle \Bigg |x_{j,N} - \frac{j}{N} \Bigg| \leq \frac{1}{N^2}$

and the vector ${(x_{j,N})_{j=1}^N \in \mathbb{T}^N}$ is incommensurate. This can be done since the commensurate vectors form a set of measure zero in ${\mathbb{T}^N}$. Note that the set of ${x \in \mathbb{T}}$ such that ${(2( x- x_{j,N}))_{j=1}^N}$ is a commensurate vector is at most countable. Therefore for a.e. ${x \in \mathbb{T}}$ the previous lemma shows that

$\displaystyle \{ 2k (x - x_{j,N})_{j=1}^N \mod {\mathbb Z}^N : k \in \mathbb{Z} \} \subset \mathbb{T}^N$

is dense, and so

$\displaystyle \{ (2k +1) (x - x_{j,N})_{j=1}^N \mod {\mathbb Z}^N : k \in \mathbb{Z} \} \subset \mathbb{T}^N$

is also dense. Hence for a.e. ${x \in \mathbb{T}}$, we can pick infinitely many ${k= k(x) \in {\mathbb N}}$ such that for each ${1\leq j \leq N}$, the numerator ${\sin( (2k+1)\pi(x- x_{j,N}))}$ has the same sign as ${\sin(\pi(x- x_{j,N}))}$ and satisfies

$\displaystyle |\sin( (2k+1)\pi(x- x_{j,N}))| \geq 1/2.$

Therefore for these infinitely many ${k}$,

$\displaystyle | S_{k}\mu_N (x) | \geq C \frac{1}{N} \sum_{j=1}^N\frac{1}{ |\sin(\pi ( x- x_{j,N} ) ) | }$

Using ${|\sin(\pi t)| \leq \pi \Vert t \Vert}$, where ${\Vert t \Vert}$ denotes the distance from ${t}$ to the nearest integer, and relabeling the points by their distance from ${x}$, we have

$\displaystyle \frac{1}{N} \sum_{j=1}^N\frac{1}{ |\sin(\pi ( x- x_{j,N} ) ) | }\geq C \frac{1}{N} \sum_{j=1}^N \frac{1}{ \Vert x - j/N \Vert + N^{-2} } \geq C \frac{1}{N} \sum_{j=1}^N \frac{1}{j/N + N^{-1} + N^{-2} },

where ${C > 0}$ is universal. The right-hand side is bounded below by

$\displaystyle \geq C\sum_{j=1}^N \frac{1}{j+2} \geq C \log(N-1).$

This finishes the proof. $\Box$

We now finish with the proof of Kolmogorov’s Theorem.

Proof: By Corollary 6, if the partial Fourier sums of every ${f \in L^1(\mathbb{T})}$ converged a.e., then with ${\mu_N}$ as in Lemma 8 we would obtain the weak type bound

$\displaystyle 1= |\{x \in \mathbb{T} : C\mu_N (x) > \tfrac{\alpha}{2} \log N \}| \leq \frac{2A}{\alpha \log N } \Vert \mu_N\Vert = \frac{2A}{\alpha \log N }.$

This implies ${\alpha \log N \leq 2A}$ for every ${N}$, a clear contradiction. $\Box$