I am a PhD student working on quantum computing at LIRMM and CERFACS. My research interests include quantum algorithms, tools related to quantum computing (profilers, compilers, …), algorithmic complexity, and how quantum computing may speed up scientific computing in general.
PhD in Quantum Computing, 2022
University of Montpellier
Master of Science, 2018
MSIAM, Grenoble-Alpes University
Engineering school, 2018
Preparatory class to "Grandes Écoles", 2015
La Prépa des INP
Daily usage in a professional environment; publication of several medium-sized packages to PyPI.
Daily exposure to algorithmic complexity analysis and linear algebra.
Daily user loving the ricing philosophy.
Medium/advanced user of git, loving Magit.
All my personal projects are now hosted on GitLab. Basic usage of GitLab continuous integration and GitLab's Docker container registry.
Regular user. See my configuration.
Fluent both orally and in writing.
Talk dedicated to quantum algorithms and their potential applications, with an emphasis on practical implementation and the results of such implementations.
In this poster I summarise the research I have done on QatHS, the quantum wave equation solver.
In this talk I explain the research path followed at CERFACS that brought us to solve linear systems of equations on IBM chips.
Presentation showing the results obtained by the quantum wave equation solver developed at CERFACS.
As a variety of quantum computing models and platforms become available, methods for assessing and comparing the performance of these devices are of increasing interest and importance. Despite being built of the same fundamental computational unit, radically different approaches have emerged for characterizing the performance of qubits in gate-based and quantum annealing computers, limiting and complicating consistent cross-platform comparisons. To fill this gap, this work proposes a single-qubit protocol (Q-RBPN) for measuring some basic performance characteristics of individual qubits in both models of quantum computation. The proposed protocol scales to large quantum computers with thousands of qubits and provides insights into the distribution of qubit properties within a particular hardware device and across families of devices. The efficacy of the Q-RBPN protocol is demonstrated through the analysis of more than 300 gate-based qubits spanning eighteen machines and 2000 annealing-based qubits from one machine, revealing some unexpected differences in qubit performance. Overall, the proposed Q-RBPN protocol provides a new platform-agnostic tool for assessing the performance of a wide range of emerging quantum computing devices.
We introduce qprof, a new and extensible quantum program profiler able to generate profiling reports for various quantum circuits. We describe the internal structure and working of qprof and provide three examples on practical quantum circuits of increasing complexity. This tool will allow researchers to visualise their quantum implementation in a different way and reliably localise the bottlenecks for efficient code optimisation.
In the last few years, several quantum algorithms that try to address the problem of partial differential equation solving have been devised: on the one hand, “direct” quantum algorithms that aim at encoding the solution of the PDE by executing one large quantum circuit; on the other hand, variational algorithms that approximate the solution of the PDE by executing several small quantum circuits and taking advantage of classical optimisers. In this work, we propose an experimental study of the costs (in terms of gate number and execution time on an idealised hardware model created from realistic gate data) associated with one of the “direct” quantum algorithms: the wave equation solver devised in [P. C. S. Costa, S. Jordan, A. Ostrander, Phys. Rev. A 99, 012323, 2019]. We show that our implementation of the quantum wave equation solver agrees with the theoretical big-O complexity of the algorithm. We also explain the implementation steps in great detail and discuss some possibilities for improvement. Finally, our implementation proves experimentally that some PDEs can be solved on a quantum computer, even if the direct quantum algorithm chosen will require error-corrected quantum chips, which are not expected to be available in the short term.
Multiple linear regression, one of the most fundamental supervised learning algorithms, plays an essential role in the field of machine learning. In 2009, Harrow et al. [Phys. Rev. Lett. 103, 150502 (2009)] showed that their algorithm could be used to sample the solution of a linear system $Ax=b$ exponentially faster than any existing classical algorithm. Remarkably, any multiple linear regression problem can be reduced to a linear system of equations problem. However, finding a practical and efficient quantum circuit for the quantum algorithm in terms of elementary gate operations is still an open topic. Here we put forward a 7-qubit quantum circuit design, based on an earlier work by Cao et al. [Mol. Phys. 110, 1675 (2012)], to solve a 3-variable regression problem, utilizing only basic quantum gates. Furthermore, we discuss the results of the Qiskit simulation for the circuit and explore certain possible generalizations of the circuit.
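The reduction mentioned in the abstract can be sketched classically: an ordinary least-squares regression problem reduces to the normal equations $(A^T A)x = A^T b$, a linear system of the kind HHL-style solvers target. A minimal illustration (the data below is made up for the example, not taken from the paper):

```python
import numpy as np

# Toy design matrix for a 3-variable regression, and observations.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 1.0]])
b = np.array([3.0, 2.0, 3.0, 4.0])

# Reduce the regression to a linear system via the normal equations:
# (A^T A) x = A^T b
M = A.T @ A
y = A.T @ b

# Any linear-system solver (classical here, HHL-style in the paper)
# now yields the regression coefficients.
x = np.linalg.solve(M, y)

# Sanity check against NumPy's built-in least-squares fit.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```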
Due to several physical limitations in the realization of quantum hardware, today’s quantum computers are qualified as noisy intermediate-scale quantum (NISQ) hardware. NISQ hardware is characterized by a small number of qubits (50 to a few hundred) and noisy operations. Moreover, current realizations of superconducting quantum chips do not have the ideal all-to-all connectivity between qubits but rather at most a nearest-neighbor connectivity. All these hardware restrictions add supplementary low-level requirements. They need to be addressed before submitting the quantum circuit to an actual chip. Satisfying these requirements is a tedious task for the programmer. Instead, the task of adapting the quantum circuit to a given hardware is left to the compiler. In this article, we propose a hardware-aware (HA) mapping transition algorithm that takes the calibration data into account with the aim to improve the overall fidelity of the circuit. Evaluation results on IBM quantum hardware show that our HA approach can outperform the state of the art, both in terms of the number of additional gates and circuit fidelity.