
Quantum computing and AI: less compatible than expected?

Filippo Vicentini
Assistant Professor of AI and Quantum Physics at Ecole Polytechnique (IP Paris)
Key takeaways
  • There is a belief that quantum computing could revolutionise artificial intelligence and in particular deep learning.
  • However, quantum computing will not necessarily advance AI, because it struggles to handle neural networks and the large volumes of data they require.
  • In particular, quantum computers are very slow at reading and writing data, and only very short calculations can be carried out before errors take over.
  • Conversely, machine learning is already an essential tool for learning how to design and operate today's quantum computers.

With a number of tech firms promising to solve some small real-world problems within the next few years, it would seem that the world is on the cusp of a quantum computing breakthrough. There has therefore been much hope that access to quantum computing would transform artificial intelligence too. But a growing consensus suggests this may not yet be within reach.

Where does the belief that quantum computing could revolutionise AI come from?

Filippo Vicentini. AI is a very wide-ranging term. So, I'll focus on "deep learning", which is behind new technologies such as the text, audio and video generative models we are seeing explode today. The idea that quantum computing could boost AI development became more prominent around 2018–19. Companies were coming out with early quantum computers with 1, 2, 3, or 4 noisy qubits. Because of their limitations, these machines could not be used to do larger real-world calculations, which is where we expect quantum computing to really shine. Instead, they were tasked with doing many short "quantum" subroutines (commonly known as quantum circuits), feeding back into a classical optimisation algorithm. This approach is strikingly similar to how neural networks are trained in deep learning.

The hope, back around that time, was that a reasonably sized "quantum circuit" would be more expressive — meaning it could represent more complex solutions to a problem with fewer resources — than a neural network, thanks to quantum phenomena like interference and superposition. In short, this could mean that quantum circuits could enable algorithms that learn to find correlations within data more effectively. Hence, the field of quantum machine learning was born, and several researchers started to try to bring ideas from one side to the other. There was a lot of excitement at the time.
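The hybrid loop described in the answer above can be sketched with a toy one-qubit example. This is purely illustrative and not from the interview: a circuit that applies an RY(θ) rotation to |0⟩ and measures Z has the exact expectation value cos θ, so the "quantum" part can be emulated classically, while a classical optimiser drives the parameter — exactly the circuit-evaluation-plus-classical-feedback pattern described.

```python
import math

# Toy "quantum circuit": one qubit, RY(theta) rotation, Z measurement.
# Its expectation value is exactly cos(theta), so we emulate it classically.
def expectation(theta):
    return math.cos(theta)

# Parameter-shift rule: the exact gradient of <Z> w.r.t. theta is
# (E(theta + pi/2) - E(theta - pi/2)) / 2, computed from two circuit runs.
def gradient(theta):
    return 0.5 * (expectation(theta + math.pi / 2)
                  - expectation(theta - math.pi / 2))

# Classical gradient-descent loop driving the circuit, mirroring how
# neural networks are trained: evaluate, estimate gradient, update.
theta, lr = 0.1, 0.4
for _ in range(100):
    theta -= lr * gradient(theta)

# The optimiser steers theta towards pi, where <Z> = -1 is minimal.
print(round(expectation(theta), 3))
```

On real hardware the two shifted evaluations would each be noisy circuit executions, which is why this scheme suited the small, noisy devices of 2018–19.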

Several companies have touted the advent of more powerful quantum computers in the next few years. Does that mean we should expect a leap forward in AI as well?

The brief answer would be no, I don't think quantum computing will push AI forward. It's becoming increasingly clear that quantum computers will be very useful for applications that require limited input and output but a huge amount of processing power — for instance, solving complex physics problems related to superconductivity, or simulating chemical molecules. However, for anything related to big data and neural networks, the consensus is growing that it may ultimately not be worth the effort.

This position was recently laid out in a paper1 by the Swiss National Supercomputing Centre's Torsten Hoefler, Amazon's Thomas Häner, and Microsoft's Matthias Troyer. I have just finished reviewing submissions for the QTML24 (quantum techniques in machine learning) conference, and the mood in the quantum machine learning community is clearly trending downward.

Why is that?

More and more experts are recognising that quantum computers will likely remain very slow when it comes to input and output of data. To give you an idea, we expect that a quantum computer that could exist maybe five years from now — if we are being optimistic — will have the same read and write speed as an average computer from 1999 or 2000.

If we try to run quantum computers faster to increase the amount of data we can inject, we will start to introduce more errors into the calculation, and the result will deteriorate. There seems to be a speed limit for the operation of these machines above which the noise and errors are too strong to be corrected, even when we look about 20 years into the future.

Both classical and quantum computers are noisy. For one, a bit or a qubit can, at some point, randomly flip its value. While we can address this effectively in classical computers, we don't have that technology in quantum computers. We estimate it will take at least another 15 years to develop fully fault-tolerant quantum computers. Until then, we can only do very 'short' calculations.
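A back-of-the-envelope model shows why uncorrected noise limits how 'long' a calculation can be. The error rate below is an assumed, illustrative figure, not one quoted in the interview: if each gate fails independently with probability p, the whole circuit only succeeds when every gate succeeds.

```python
# Assumed independent per-gate failures: a circuit of n gates succeeds
# only if all n gates succeed, i.e. with probability (1 - p)**n.
def success_probability(p, n_gates):
    return (1 - p) ** n_gates

p = 0.001  # illustrative 0.1% error per gate (an assumption, not a quoted figure)
for n in (100, 1_000, 10_000):
    print(n, round(success_probability(p, n), 3))
```

Even at this generous error rate, a 10,000-gate circuit almost never finishes cleanly, which is why only short circuits are usable until error correction matures.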

Moreover, the output of a quantum computer is probabilistic, which creates additional challenges. Classical computers give you a deterministic result – run the same simulation twice and you'll get the same answer. But every time you run a quantum algorithm the output will be different. The result must be extracted from the distribution of the outputs (how many times you see 0s and 1s). To reconstruct the distribution accurately you must repeat the calculation many, many times, which increases the overhead. This is another reason why some algorithms seemed very powerful a few years ago, but eventually were shown not to yield a systematic advantage over the classical ones we can already run on normal computers.
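The sampling overhead described above can be illustrated with a small simulation (hypothetical, not from the interview): each "run" of a quantum algorithm yields a single 0 or 1, and the quantity of interest — here the probability p of measuring 1 — must be estimated from many repeated runs, commonly called shots.

```python
import random

# One execution of a hypothetical quantum algorithm: a single 0-or-1
# measurement outcome, where 1 occurs with (unknown) probability p.
def run_once(p, rng):
    return 1 if rng.random() < p else 0

# Estimate p by averaging over many repeated runs ("shots");
# a fixed seed makes the illustration reproducible.
def estimate(p, shots, seed=0):
    rng = random.Random(seed)
    return sum(run_once(p, rng) for _ in range(shots)) / shots

true_p = 0.3
for shots in (100, 10_000):
    print(shots, round(estimate(true_p, shots), 3))
```

The statistical error shrinks only as 1/sqrt(shots), so each extra digit of accuracy costs roughly 100 times more repetitions — the overhead that erodes the apparent speed-up of some quantum algorithms.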

Does that mean that AI and quantum computing will be distant cousins, with little overlap?

Not at all. In fact, my colleagues and I recently launched a petition2 asking for European Union funding for machine learning and quantum sciences. Machine learning is quickly becoming an essential tool for learning how to design and operate quantum computers. For example, every device is slightly different. Reinforcement learning techniques can analyse your machine and its particular patterns to help fit algorithms specifically to that device. One company called Q-CTRL3, for instance, has been doing pioneering work in that field. Google's Quantum AI4 and Amazon's Braket5 are two other leaders leveraging these ideas.

AI could also be very complementary to quantum computing. Take Microsoft's Azure Quantum Elements, which used a combination of Microsoft Azure HPC (high-performance computing) and AI property-prediction filters to whittle down a selection of 32 million candidates for a more efficient rechargeable battery material to just 18. These finalists were then run through established, processing-intensive algorithms, which are quite limited because they consume a lot of energy and can't handle very complicated molecules. That is exactly where quantum computing could step in, in the near future.

I believe AI and quantum computing will be different components in a stack of tools — complementary but not compatible. We want to keep pushing those directions and many more by creating a joint team called "PhiQus" between Ecole Polytechnique (IP Paris) and Inria, together with Marc-Olivier Renou and Titouan Carette.

Interview by Marianne Guenot
1. https://cacm.acm.org/research/disentangling-hype-from-practicality-on-realistically-achieving-quantum-advantage/
2. https://www.openpetition.eu/petition/online/support-the-machine-learning-in-quantum-science-manifesto-2
3. https://q-ctrl.com
4. https://quantumai.google
5. https://aws.amazon.com/fr/braket/
