
5 breakthroughs made possible by quantum technologies

Key takeaways
  • Research into quantum physics is leading to advances in many areas of research and development.
  • For example, quantum computers represent a very promising avenue with many potential applications.
  • However, technical and theoretical obstacles, such as maintaining fragile quantum states and entanglement, still stand in the way of commercialising and practically using these advances.
  • In the medium term, quantum physics research could be used for astronomical imaging, health and semiconductors.

#1 Towards large-scale quantum computers thanks to improvements in quantum error correction

Quantum computers use quantum bits (qubits). Unlike standard computer bits, which are either 0 or 1, qubits can exist in a superposition of both 0 and 1. Machines computing with many qubits could therefore be much faster than the fastest computers available today, offering an exponential increase in computing power for certain problems. Qubits can be made from different physical systems, such as superconductors or trapped ions; other approaches under development include photonic quantum processors that use light.
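The superposition idea can be sketched with a few lines of linear algebra — a toy state-vector simulation, not how a real quantum processor is programmed:

```python
import numpy as np

# A qubit is a 2-component complex vector; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Born rule: measurement probabilities are the squared amplitudes.
p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Measuring such a state yields 0 half the time and 1 half the time, which is what "both 0 and 1" means operationally.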

A true quantum computer will require the integration of many qubits into a single device. This will not be an easy task, as qubits are very delicate and the quantum information they contain can easily be destroyed, leading to errors in quantum calculations.

To correct these errors, a quantum error correction (QEC) system will be essential. This generally involves encoding one bit of quantum information onto a set of physical qubits that act together as a single “logical qubit”. One such technique is the surface code, in which a quantum information bit is encoded onto a two-dimensional array of qubits. A problem with this approach, however, is that adding extra qubits to the system in turn adds extra sources of error.
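The surface code itself is intricate, but the underlying idea — spreading one logical bit across several physical qubits so that individual errors can be outvoted — can be sketched with the much simpler three-bit repetition code (a classical toy model, not Google's scheme):

```python
import random

def encode(bit):
    # One logical bit is stored redundantly on three physical bits.
    return [bit, bit, bit]

def noisy_channel(codeword, p_flip):
    # Each physical bit independently suffers a flip with probability p_flip.
    return [b ^ (random.random() < p_flip) for b in codeword]

def decode(codeword):
    # Majority vote corrects any single bit-flip (but not two or three).
    return int(sum(codeword) >= 2)

random.seed(0)
p, trials = 0.05, 100_000

# Error rate of a single unprotected bit vs. the encoded logical bit.
raw_rate = sum(random.random() < p for _ in range(trials)) / trials
logical_rate = sum(decode(noisy_channel(encode(0), p)) for _ in range(trials)) / trials
print(raw_rate, logical_rate)  # encoding cuts the error rate roughly tenfold
```

The trade-off mentioned above also shows up here: the code only helps when the physical error rate is low enough that two simultaneous flips are rare.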

According to physicists, true large-scale quantum computing will require an error rate of around one in a million, but the best current error correction technologies only achieve rates of around one in a thousand. So there is still a long way to go.


Researchers at Google Quantum AI have recently created a surface code scheme designed to scale towards the required error rate, using a quantum processor made up of superconducting qubits that serve either as data qubits (which carry the computation) or as measurement qubits. The latter sit adjacent to the data qubits and can detect a bit flip or a phase flip, the two types of error that affect qubits.

The researchers found that a “distance-5” qubit array comprising a total of 49 physical qubits had an error rate of 2.914%, compared with 3.028% for a “distance-3” array comprising 17 qubits. This reduction shows that increasing the number of qubits is a viable route to “fault-tolerant quantum computing”, and that an error rate better than one in a million could be possible in a distance-17 array comprising 577 qubits.
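The qubit counts quoted above follow from the surface code's layout: a distance-d array uses d² data qubits plus d² − 1 measurement qubits.

```python
def surface_code_qubits(d):
    data = d * d           # data qubits carrying the logical information
    measure = d * d - 1    # measurement (syndrome) qubits interleaved with them
    return data + measure  # total physical qubits: 2d^2 - 1

for d in (3, 5, 17):
    print(d, surface_code_qubits(d))  # 3 -> 17, 5 -> 49, 17 -> 577
```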

#2 A converter for different quantum devices

The platforms currently being developed for quantum computers are based on different quantum systems such as photons (particles of light), neutral atoms, ions, superconductors and semiconductors. In future quantum networks, these systems will need to communicate with each other, but as they rely on different types of coding, this could prove difficult.

Researchers at Laboratoire Kastler Brossel (LKB) in France have created a converter that enables quantum devices based on different systems to communicate. “We have designed a kind of black box that allows you to switch from one type of quantum information coding to another thanks to the phenomenon of entanglement,” explains physicist Julien Laurat, one of the members of the LKB team. Entanglement, the subject of the 2022 Nobel Prize in Physics, is a purely quantum phenomenon whereby two or more particles can share a closer relationship than classical physics permits: determining the quantum state of one particle instantly determines the quantum state of the other, regardless of the distance separating them. Once considered a quirk of the quantum world, this “spooky action at a distance”, as Albert Einstein called it, is now exploited in quantum cryptography and communication systems, as well as in the sensors used to detect gravitational waves (deformations of the fabric of space-time that propagate at the speed of light).
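The perfect correlations of an entangled pair can be illustrated with a minimal simulation of a Bell state — a sketch of the phenomenon itself, not of the LKB converter:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Bell state (|00> + |11>) / sqrt(2), written over the basis 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2
probs = probs / probs.sum()  # guard against floating-point rounding

# Sample joint measurement outcomes: only 00 and 11 ever occur, so the
# result on one qubit fixes the result on the other, however far apart.
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)
print({o: int(np.count_nonzero(outcomes == o)) for o in ("00", "11")})
```

Each individual outcome is random, yet the two halves always agree — the correlation that quantum cryptography and the LKB converter exploit.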

Thanks to entanglement, the LKB researchers have been able to preserve the fragile quantum information signal while changing the encoding basis in which it is written.

“The success of this process is an important milestone for quantum technology infrastructures,” points out Beate Asenbeck, a doctoral student at the LKB. “Once we can interconnect quantum devices, more complex and efficient networks can be built.”

The researchers have filed a patent to protect their technology, which is now being used by Welinq, a start-up founded by Julien Laurat and his colleague Tom Darras.

#3 Quantum error correction could improve astronomical imaging

High-resolution, long-baseline optical interferometers could revolutionise astronomical imaging. Here, light from two or more telescopes, placed at a certain distance from each other, is combined to create an image of a celestial object, such as a star. The images obtained in this way have much finer detail than those obtained with each individual telescope. In effect, the multiple telescopes act as a gigantic “virtual” telescope whose diameter is much larger than that of any real telescope.

In theory, the further apart the telescopes are, the higher the image resolution. In practice, however, environmental noise and light losses between the instruments degrade the quality of the light signals, limiting the possible distance between them.
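The theoretical scaling is the standard diffraction limit, θ ≈ λ/B, where B is the baseline between the telescopes — a textbook estimate, with illustrative numbers not taken from the article:

```python
import math

def angular_resolution_mas(wavelength_m, baseline_m):
    # Diffraction-limited angular resolution theta ~ lambda / B,
    # converted from radians to milliarcseconds.
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180 / math.pi) * 3600 * 1000

# Visible light (550 nm): a single 10 m mirror vs. a 300 m baseline.
print(angular_resolution_mas(550e-9, 10))   # ~11.3 mas
print(angular_resolution_mas(550e-9, 300))  # ~0.38 mas
```

Stretching the baseline from 10 m to 300 m sharpens the resolution thirtyfold, which is why extending the distance between telescopes is so attractive.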

Quantum technologies can help circumvent these light transmission losses by using quantum memories and entanglement to replace direct optical links between the telescopes, thereby increasing the distances between them. In the most direct approach, the signal could be stored in atomic states or qubits. One problem remains, however: these states are fragile and can easily be destroyed.

Researchers at Macquarie University in Australia and the National University of Singapore (NUS) have now found a way to protect the quantum information contained in the light coming from a celestial object.

In their new technique, the researchers manipulate the state of the starlight collected by the two telescopes so that it is in a form protected from environmental noise. Specific measurements then allow any errors in the qubits to be detected and corrected by the QEC codes before the information contained in the starlight is recovered. This information is then used to construct an image of the star.

#4 Using quantum vacuum fluctuations to fabricate a high-speed random number generator

Modern cryptography relies on the generation of random numbers that are then used as keys to encrypt the huge amounts of data produced by governments and large corporations, for example. Although algorithms are commonly used to generate seemingly random numbers, a hacker could in principle work out the predetermined steps of an algorithm and thus predict its output.

An improved system would instead be based on a truly random process, such as the probabilistic nature of phenomena that occur at the quantum level.

The vacuum of space is not really a vacuum but teems with random quantum fluctuations, as pairs of particles and antiparticles are spontaneously created and then annihilated when they collide. These processes occur on extremely short time scales and can be used to produce random numbers. The problem is that such systems are subject to parasitic noise from their own components, which slows down the process.

To solve this problem, researchers at Ghent University in Belgium built a computer chip (measuring just 5 mm in length) and then mapped all the imperfections in the chip as well as the sources of noise within it. This enabled them to identify the origin of the interference and measure the quantum fluctuations with much greater sensitivity. The result: a chip capable of generating random numbers 200 times faster than existing commercial devices.
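In generators of this type, homodyne detection of the vacuum state yields Gaussian-distributed noise samples that are then digitised into bits. A toy simulation of that pipeline (the real chip's calibration and randomness-extraction steps are far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # stands in for the physical noise source

# Homodyne measurement of the vacuum state yields Gaussian-distributed samples.
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Simplest possible extraction: one bit per sample, from the fluctuation's sign.
bits = (samples > 0).astype(int)

# A healthy generator produces roughly balanced, unpredictable output.
print(bits.mean())  # close to 0.5
```

Real devices squeeze several bits out of each sample and apply cryptographic post-processing, which is where the mapped noise sources matter.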

#5 Quantum advantage without error correction

IBM researchers have shown that it is possible to achieve quantum advantage (or “supremacy”) without error correction. To do this, they used a 127-qubit quantum processor to calculate the magnetisation of a material using a 2D Ising model. This model represents the magnetic properties of a 2D material as a network of quantum spins that interact with their nearest neighbours. Though apparently simple, the model is known to be extremely difficult to solve.
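The model's structure can be sketched with the energy function of the classical 2D Ising model on a small grid — an illustration of the nearest-neighbour interactions, not of IBM's quantum computation, which classical code cannot scale to:

```python
import numpy as np

def ising_energy(spins, J=1.0):
    # Nearest-neighbour interaction energy E = -J * sum of s_i * s_j over
    # horizontally and vertically adjacent spin pairs (open boundaries).
    horizontal = np.sum(spins[:, :-1] * spins[:, 1:])
    vertical = np.sum(spins[:-1, :] * spins[1:, :])
    return -J * (horizontal + vertical)

# All spins aligned: the ferromagnetic ground state of a 4x4 grid.
aligned = np.ones((4, 4), dtype=int)
print(ising_energy(aligned))  # -24.0: 24 bonds, each contributing -J
```

Aligned neighbours lower the energy, so the magnetisation the IBM team computed reflects how strongly the spins order under these interactions.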

The researchers used a technique called “noisy intermediate-scale quantum (NISQ) computation”, in which the calculation is performed rapidly to avoid the accumulation of errors. This type of calculation should enable more general quantum algorithms in the short term, before truly fault-tolerant quantum computers become available.

The calculation was carried out using a superconducting quantum chip comprising 127 qubits executing quantum circuits 60 layers deep, with a total of around 2,800 two-qubit gates. These gates are the quantum analogues of conventional logic gates.

The quantum circuit generates highly entangled quantum states, which the researchers then used to program the 2D Ising model by performing a sequence of operations on the qubits and pairs of qubits. Although this approach eliminates much of the noise, some errors remained. The researchers therefore applied a quantum error mitigation process using conventional computer software. The technique works thanks to the 127-qubit processor's ability to encode many configurations of the Ising model; conventional computers would not have sufficient memory to achieve such a feat.
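One widely used error mitigation strategy of this family is zero-noise extrapolation: run the circuit at several deliberately amplified noise levels, then extrapolate the measured expectation value back to zero noise on a classical computer. A toy sketch with synthetic data (an exponential decay is assumed as the noise model; IBM's actual procedure is more elaborate):

```python
import numpy as np

# Synthetic "measurements" of an observable at amplified noise levels,
# decaying exponentially with the noise scale (a common model assumption).
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])
true_value = 0.8                                # ideal, noiseless expectation value
measured = true_value * np.exp(-0.3 * noise_scales)

# Fit log(measured) linearly in the noise scale and extrapolate to scale 0.
slope, intercept = np.polyfit(noise_scales, np.log(measured), 1)
estimate = np.exp(intercept)
print(round(estimate, 3))  # recovers ~0.8 despite every run being noisy
```

The key point is that no single run is error-free; the noiseless answer is reconstructed classically from a family of noisy ones.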

Isabelle Dumé


Suppressing quantum errors by scaling a surface code logical qubit. Nature 614, 676–681.

A quantum-bit encoding converter. Nature Photonics 17, 165–170.

Imaging Stars with Quantum Error Correction. Phys. Rev. Lett. 129, 210502.

100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations. PRX Quantum 4, 010330.

Evidence for the utility of quantum computing before fault tolerance. Nature 618, 500–505.
