PDEs: A global optimum? Best I can do is local!



This is the sixth post in the series about partial differential equations (PDEs). In the previous posts, I explored various issues inherent to data input/output and state preparation. In this post, I will take a step back and talk about classical optimisation algorithms, before concluding on their use in the setting of variational quantum algorithms.

If you do not understand the text, do not worry, everything is explained in this blog post 😉. Best I can do meme. [History of the image](https://knowyourmeme.com/memes/best-i-can-do-is).

What is an optimisation problem?

Let’s start with the basics: what problem are we trying to solve?

Mathematical optimisation is a HUGE field of research, as attested by the length of the “Major subfields” section on Wikipedia. To keep this post from taking a full day to read, let’s restrict the scope to continuous minimisation problems of the form

$$ \min_{\vec{\theta} \in \left[ a, b \right]^n} f\left(\vec{\theta}\right) $$

for $(a, b) \in \mathbb{R}^2$ with $a < b$.

Small digression on variational quantum algorithms

Notably, this class of problems includes the optimisation problem found at the core of any variational quantum algorithm.

Let’s define the expectation value of the operator $ U_i $ on the state $ \left\vert V \left(\vec{\theta}\right) \right\rangle $ produced by an ansatz as: $$ E_i\left(\vec{\theta}\right) = \left\langle V\left(\vec{\theta}\right) \middle\vert U_i \middle\vert V \left(\vec{\theta}\right) \right\rangle $$ where the bra $\left\langle V\left(\vec{\theta}\right) \right\vert$ is the conjugate transpose of the ket. Then, the optimisation problem found in any variational quantum algorithm is exactly the minimisation problem presented above with $$ f\left(\vec{\theta}\right) = g\left( E_0\left(\vec{\theta}\right), E_1\left(\vec{\theta}\right), \dots, E_k\left(\vec{\theta}\right) \right), $$ $g$ being a function (in most variational quantum algorithms, $g$ is a weighted sum of the $E_i$ given as input).
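
To make this less abstract, here is a minimal single-qubit sketch of such a cost function, written in pure NumPy. The RY ansatz, the Pauli observables and the weights are arbitrary choices of mine for illustration, not taken from any specific algorithm:

```python
import numpy as np

# Pauli observables playing the role of the operators U_i (hypothetical choice).
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def ansatz_state(theta):
    """|V(theta)> = RY(theta)|0>, a minimal single-parameter ansatz."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def expectation(state, operator):
    """E_i = <V(theta)| U_i |V(theta)> (real, since the operator is Hermitian)."""
    return np.real(np.conj(state) @ operator @ state)

def cost(theta, weights=(1.0, 0.5)):
    """f(theta) = g(E_0, E_1), with g a weighted sum as in most VQAs."""
    state = ansatz_state(theta)
    return weights[0] * expectation(state, Z) + weights[1] * expectation(state, X)

print(cost(0.3))
```

The classical optimiser only ever sees `cost`: a plain real-valued function of real parameters, exactly the kind of function the rest of this post is about.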

But let’s go back to the generic optimisation problem for the moment. An interesting question to ask would be

In which cases can a computer efficiently find a solution to a given optimisation problem?

And it is indeed interesting! But this is not the first question we should ask ourselves.

What is a solution to a minimisation problem?

Before even considering how to solve the minimisation problem, we should define clearly what we call a “solution”. It may seem obvious, but this is a (if not the) key point of this post.

And what’s better than an example to illustrate?

Let’s use the following minimisation problem: $$ \min_{\theta \in \left[0, 2\pi\right]} \cos\left(\theta \right) + \cos\left(3 \theta\right) $$ and plot the cost function:
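
If you want to reproduce the plot yourself, a minimal matplotlib sketch (labels and resolution are my own choices) looks like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Cost function of the example minimisation problem.
theta = np.linspace(0.0, 2 * np.pi, 500)
cost = np.cos(theta) + np.cos(3 * theta)

plt.plot(theta, cost)
plt.xlabel(r"$\theta$")
plt.ylabel(r"$\cos(\theta) + \cos(3\theta)$")
plt.show()
```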


Three points have been added to the above graph.

The two orange points are local minima: they are solutions to the minimisation problem, but only if you look locally around them (none of their close neighbours attains a lower value of the cost function).

The green point is a global minimum: it is a solution of the minimisation problem, even when you consider the entire interval the minimisation problem is defined on: $\left[0, 2\pi\right]$.

Only global minima are solutions to the optimisation problem. Local minima can be arbitrarily more costly than global ones and, as such, should not be considered valid solutions.

Now that the first question is answered, let’s go back to our initial question:

In which cases can a computer efficiently find the solution to the optimisation problem?

It turns out that only convex optimisation problems (i.e., problems with a convex feasible set and a convex cost function) can be solved efficiently by a computer.

Convexity makes it easy

The title of this section sounds like an advertisement right?

With convexity, your optimisation problems are solved in the blink of an eye.

Don’t wait! Take one of our convex problems home with you!

We have a 30-day, no-questions-asked return policy.

Okay, pause over, let’s get back to a little bit of maths.

What is convexity?

I will not write a large paragraph on convexity here; the Wikipedia pages about convex functions and convex sets should be sufficiently clear. For those wanting a quick, non-mathematical and non-rigorous explanation:

  • a set is convex if, for any 2 points taken in the set, all the points on the line segment between these two points are also in the set,
  • a function is convex over a given set when, for any 2 points on its graph, the function lies below the straight line segment joining them (the formal inequality is given just below).
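
For those who prefer a formula, the second bullet is the statement that, for any $x$ and $y$ in a convex set $S$ and any $\lambda \in [0, 1]$:

$$ f\left(\lambda x + (1 - \lambda) y\right) \le \lambda f(x) + (1 - \lambda) f(y). $$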

Understanding why convex optimisation is “easy” is a more interesting task.

Whenever you read “convex problems are easy”, what I really mean is “convex problems are way easier to solve (i.e., find a global optimum) than non-convex ones of the same size”.

Some convex optimisation problems will be computationally hard (take a function with a huge input space of $10^{30}$ parameters, for example), so not all convex problems are “simple” or “easy” to solve, even with a large computer.

Why are convex problems “easy”?

Let’s take another example, a convex function this time: $f(x) = \cos(\frac{x + \pi}{2})$ on $x\in \left[0, 2\pi\right]$. The function is plotted below:


Finding the global minimum, denoted by the orange point on the plot above, is relatively easy for the human eye. But it turns out that it is also easy for a computer, as it only has to:

  1. pick a random point $x \in [0, 2\pi]$,
  2. compute the slope of the function at that point,
  3. update the point $x$ such that it goes “downhill”, following the negative slope,
  4. repeat steps 2 and 3 until the slope is (almost) zero.

The algorithm described above is a (very non-rigorous) version of the gradient descent algorithm, a core algorithm of computational optimisation. It finds local minima, and is guaranteed to converge to a local minimum under some conditions.
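
Here is a minimal sketch of that recipe applied to the convex example above (pure NumPy; the learning rate, number of steps and finite-difference slope estimate are arbitrary choices of mine):

```python
import numpy as np

def f(x):
    """The convex example from above: f(x) = cos((x + pi) / 2)."""
    return np.cos((x + np.pi) / 2)

def gradient_descent(f, x0, learning_rate=0.5, steps=300, eps=1e-6):
    """Very plain gradient descent with a finite-difference slope estimate."""
    x = x0
    for _ in range(steps):
        slope = (f(x + eps) - f(x - eps)) / (2 * eps)  # step 2: compute the slope
        x = x - learning_rate * slope                  # step 3: go "downhill"
        x = np.clip(x, 0.0, 2 * np.pi)                 # stay inside [0, 2*pi]
    return x

x0 = np.random.uniform(0.0, 2 * np.pi)  # step 1: pick a random point
print(f"Started at {x0:.3f}, ended at {gradient_descent(f, x0):.3f} (pi is about 3.142)")
```

Whatever the starting point, this toy run ends up near $x = \pi$, the global minimum of the convex example.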

But wait… didn’t you write that we do not care about local minima and only care about global ones?

I said that… and this is where convexity comes into play! It turns out that, if your problem is convex, any local minimum is also a global minimum!
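
For the curious, here is a quick informal argument. Suppose $x^\star$ is a local minimum and some point $y$ satisfies $f(y) < f\left(x^\star\right)$. Convexity gives

$$ f\left(\lambda y + (1 - \lambda) x^\star\right) \le \lambda f(y) + (1 - \lambda) f\left(x^\star\right) < f\left(x^\star\right) \qquad \text{for all } \lambda \in (0, 1]. $$

Taking $\lambda$ as small as you want produces points arbitrarily close to $x^\star$ with a strictly lower cost, contradicting the fact that $x^\star$ is a local minimum. So no such $y$ can exist, and $x^\star$ is global.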

What about non-convex problems?

Non-convex problems do not have the property that local minima are also global ones. The first function plotted in this blog post, $$ f(\theta) = \cos(\theta) + \cos(3\theta) $$ is a good example: the two orange points are local minima, but they are not global ones, as the green point at $\theta = \pi$ is strictly lower.

You are now fully equipped to understand the meme at the beginning of this blog post 😉

In general, it is easy to find local minima of non-convex problems, as we can use the gradient descent algorithm with a randomly sampled starting point. But the obtained local minimum comes with no guarantee about:

  • whether or not it is also a global optimum,
  • if it is not a global optimum, whether or not it is far away from it.

It is still possible to be lucky though, and randomly sample a point in the “attraction valley” of the global minimum. But you have no guarantee about that, and most of the time you also have no knowledge of the probability of randomly sampling such a point. The small experiment below illustrates this.
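
Here is the same plain gradient-descent sketch as before (with arbitrary hyper-parameters of my own choosing) run from a few random starting points on the first example of this post:

```python
import numpy as np

def f(theta):
    """The non-convex example from the beginning of the post."""
    return np.cos(theta) + np.cos(3 * theta)

def gradient_descent(f, x0, learning_rate=0.05, steps=500, eps=1e-6):
    """Same plain gradient descent as before, with a finite-difference slope."""
    x = x0
    for _ in range(steps):
        slope = (f(x + eps) - f(x - eps)) / (2 * eps)
        x = np.clip(x - learning_rate * slope, 0.0, 2 * np.pi)
    return x

rng = np.random.default_rng(0)
for x0 in rng.uniform(0.0, 2 * np.pi, size=5):
    end = gradient_descent(f, x0)
    print(f"start = {x0:.2f} -> end = {end:.2f}, f(end) = {f(end):.2f}")

# Depending on the starting point, a run ends either near one of the two local
# minima (f is about -0.55) or near theta = pi, the global minimum (f = -2).
```

Which minimum you end up in depends entirely on where you started, and nothing in the algorithm tells you whether you landed in the global one.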

Coming back to quantum

As I wrote in the introduction of this blog post, variational quantum algorithms are all instances of the continuous optimisation problem I spoke about in the previous sections.

In order to compute the quantum state that represents the solution of your problem, variational quantum algorithms encode this quantum state as the global minimum of an optimisation problem.

In other words, variational algorithms simply re-phrase the problem (that can be about chemistry, solving a system of linear equations, factoring a large number, etc.) into a continuous optimisation problem and rely on classical optimisation algorithms to find the solution.

But the optimisation problem resulting from this re-phrasing of the original problem does not have nice properties:

  1. it is not convex (except in really trivial cases),
  2. it may have an exponentially large number of local minima.

The first point in the list above rules out any guarantee of efficiently recovering the global minimum (which, again, is the thing we want).

The second point tells us that the number of local minima found in the cost function of a variational quantum algorithm may grow exponentially, as shown in the research paper Exponentially Many Local Minima in Quantum Neural Networks.

This basically means that, for variational quantum algorithms, you need to be extremely (or dare I say “exponentially”) lucky to pick a starting point that leads your optimisation algorithm to the spot you are interested in.

Coming back to quantum algorithms to solve partial differential equations

What I am trying to convey in this blog post is that solving a PDE with a variational quantum algorithm seems to be doomed to fail.

The main reason is not even the nature of the problem of solving PDEs numerically. It is directly linked to variational quantum algorithms and their inability to guarantee any convergence to the desired result. In other words, using a variational quantum algorithm could

  • be faster than a classical computer,
  • take as long as a classical computer,
  • be slower than a classical computer,

and you have absolutely no way of knowing beforehand.

My personal opinion is that, due to

  1. the constants involved when performing quantum computations (slow gates, complex control, very slow communication between the quantum computer and the classical one, etc.),
  2. the lack of robust algorithms to reliably find the solution to the problem,
  3. the different theoretical results piling up and reducing the set of problems on which variational quantum algorithms may, one day, potentially have an advantage,

variational quantum algorithms will never be the future of quantum computing, in particular for solving partial differential equations.

The claim above is motivated by more than just this blog post. Follow-up blog posts presenting the other reasons that led me to this conclusion will be posted in the next few months.

What’s next

In the next post I will write about another issue with variational quantum algorithms that has seen recent (a few months back) and exciting developments.

If you have ever used a variational quantum algorithm in your life, you might already see where I am taking you… to Barren-Plateau land! See you next month for this exciting adventure 😉.

If you want to be notified when each post is out, you can add me on LinkedIn, I will do a LinkedIn post for each new blog post.

Also, if this post piqued your interest or raised some questions in your mind, please feel free to send me a message 😉

Adrien Suau
Entrepreneur in Quantum Computing

I am an entrepreneur in quantum computing. I also do business consulting, so feel free to reach out!
