
5 breakthroughs made possible by quantum technologies

Key takeaways
  • Research into quantum physics is leading to advances in many areas of research and development.
  • For example, quantum computers represent a very promising avenue with many potential applications.
  • However, technical and theoretical obstacles, such as maintaining entanglement, still stand in the way of the commercialisation and practical use of these advances.
  • In the medium term, quantum physics research could be used for astronomical imaging, health and semiconductors.

#1 Towards large-scale quantum computers thanks to improvements in quantum error correction

Quantum computers use quantum bits (qubits). Unlike standard computer bits, which are either 0 or 1, qubits can be in a superposition of both 0 and 1. These machines could be much faster than the fastest computers available today, as computing with many qubits yields an exponential increase in computing power. Qubits can be made from different physical systems, such as superconductors or trapped ions. Other future methods include photonic quantum processors that use light.
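The exponential scaling mentioned above can be made concrete: describing the joint state of n qubits requires 2ⁿ complex amplitudes, so every extra qubit doubles the size of the description. A minimal illustration in plain Python (simple arithmetic, not a quantum simulator):

```python
def state_space_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe an n-qubit state.

    Each added qubit doubles the count -- the source of the exponential
    growth in quantum computing power.
    """
    return 2 ** n_qubits

for n in (1, 2, 10, 50):
    print(f"{n} qubits -> {state_space_size(n):,} amplitudes")
```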

A true quantum computer will require the integration of many qubits into a single device. This will not be an easy task as they are very delicate and the quantum information they contain can easily be destroyed, leading to errors in quantum calculations.

To correct these errors, a quantum error correction (QEC) system will be essential. This generally involves encoding one bit of quantum information onto a set of qubits that act together as a single “logical qubit”. One such technique is the surface code, in which a quantum information bit is encoded onto an array of qubits. A problem with this approach, however, is that adding extra qubits to the system in turn adds extra sources of error.
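The idea of spreading one logical bit across several physical carriers can be illustrated with the classical repetition code, a much simpler cousin of the surface code (the real quantum scheme also protects phase information, which this classical sketch cannot capture):

```python
from collections import Counter

def encode(bit: int, n: int = 3) -> list[int]:
    """Encode one logical bit into n physical copies (repetition code)."""
    return [bit] * n

def decode(codeword: list[int]) -> int:
    """Majority vote recovers the logical bit despite isolated flips."""
    return Counter(codeword).most_common(1)[0][0]

word = encode(1)
word[0] ^= 1              # a single bit-flip error on one physical bit
assert decode(word) == 1  # the logical bit survives
```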

According to physicists, true large-scale quantum computing will require an error rate of around one in a million, but the best current error correction technologies can only achieve rates of around one in a thousand. So, there is still a long way to go.


Researchers at Google Quantum AI have recently created a surface code scheme that should scale towards the required error rate, in a quantum processor made up of superconducting qubits that serve either as data qubits (for computation) or as measurement qubits. The latter sit adjacent to the data qubits and detect bit flips and phase flips, the two types of error that affect qubits.

The researchers found that a “distance‑5 qubit array” comprising a total of 49 physical qubits had an error rate of 2.914%, compared with 3.028% for a “distance‑3 array” comprising 17 qubits. This reduction shows that increasing the number of qubits is a viable route to “fault-tolerant quantum computing” and that an error rate of better than one in a million could be possible in a distance‑17 qubit array comprising 577 qubits.
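The qubit counts quoted above follow a simple pattern: a distance-d surface code patch uses d² data qubits plus d² − 1 measurement qubits, i.e. 2d² − 1 physical qubits in total:

```python
def surface_code_qubits(distance: int) -> int:
    """Physical qubits in a distance-d surface code patch:
    d*d data qubits plus d*d - 1 measurement qubits."""
    return 2 * distance ** 2 - 1

print(surface_code_qubits(3))   # 17
print(surface_code_qubits(5))   # 49
print(surface_code_qubits(17))  # 577
```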

#2 A converter for different quantum devices

The platforms currently being developed for quantum computers are based on different quantum systems such as photons (particles of light), neutral atoms, ions, superconductors and semiconductors. In future quantum networks, these systems will need to communicate with each other, but as they rely on different types of coding, this could prove difficult.

Researchers at Laboratoire Kastler Brossel (LKB) in France have created a converter that enables quantum devices based on different systems to communicate. “We have designed a kind of black box that allows you to switch from one type of quantum information coding to another thanks to the phenomenon of entanglement,” explains physicist Julien Laurat, one of the members of the LKB team. Entanglement, the subject of the 2022 Nobel Prize in Physics, is a purely quantum phenomenon whereby two or more particles can have a closer relationship than that permitted by classical physics. This means that if we determine the quantum state of one of the particles, we can instantly determine the quantum state of the other, regardless of the distance separating them. Once considered a quirk of the quantum world, this “spooky action at a distance”, as Albert Einstein called it, is now exploited in quantum cryptography and communication systems, as well as in the sensors used to detect gravitational waves (a deformation of the fabric of space-time that propagates at the speed of light).
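The perfect correlations that entanglement produces can be mimicked numerically. The sketch below samples measurement outcomes from a Bell state, (|00⟩ + |11⟩)/√2, using the Born rule (probability = amplitude squared); it reproduces only the statistics, not the underlying physics:

```python
import math
import random

# Bell state (|00> + |11>)/sqrt(2): only two of the four basis states
# carry amplitude, and those two outcomes are perfectly correlated.
amplitudes = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}
probabilities = {k: a * a for k, a in amplitudes.items()}  # Born rule

rng = random.Random(0)
outcomes = rng.choices(list(probabilities), weights=probabilities.values(), k=1000)

# Measuring one qubit fixes the other: only "00" and "11" ever occur.
assert set(outcomes) == {"00", "11"}
```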

Thanks to entanglement, the LKB researchers have been able to preserve the fragile quantum information signal while changing the basis in which it is encoded.

“The success of this process is an important milestone for quantum technology infrastructures,” points out Beate Asenbeck, a doctoral student at the LKB. “Once we can interconnect quantum devices, more complex and efficient networks can be built.”

The researchers have filed a patent to protect their technology, which is now being used by Welinq, a start-up founded by Julien Laurat and his colleague Tom Darras.

#3 Quantum error correction could improve astronomical imaging

High-resolution, long-baseline optical interferometers could revolutionise astronomical imaging: here, light from two or more telescopes, placed at a certain distance from each other, is combined to create an image of a celestial object, such as a star. The images obtained in this way have much finer detail than those obtained with each individual telescope. The multiple telescopes thus act as a gigantic “virtual” telescope whose diameter is much larger than that of any real telescope.

In theory, the further apart the telescopes are, the higher the image resolution. In practice, however, environmental noise and light losses between the two instruments degrade the quality of the light signals, limiting the possible distance between them.
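The resolution argument can be put into numbers: the diffraction-limited angular resolution of an instrument scales as wavelength divided by aperture (or baseline). The figures below (550 nm light, a 1 m telescope versus a 100 m baseline) are illustrative assumptions, not values from the study:

```python
def angular_resolution(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution in radians, ~ lambda / D."""
    return wavelength_m / aperture_m

wavelength = 550e-9  # visible light, 550 nm (illustrative)
single_telescope = angular_resolution(wavelength, 1.0)    # one 1-m mirror
interferometer = angular_resolution(wavelength, 100.0)    # 100-m baseline

# The "virtual" telescope resolves detail 100 times finer.
print(f"{single_telescope / interferometer:.0f}x finer")
```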

Quantum technologies can help circumvent these “light transmission losses” by using quantum memories and entanglement to replace direct optical links between the telescopes, thereby increasing the distances between them. In the most direct approach, the signal could be stored in atomic states or qubits. However, one problem remains: these states are fragile and can be easily destroyed.

Researchers at Macquarie University in Australia and the National University of Singapore (NUS) have now found a way to protect the quantum information contained in the light coming from a celestial object.

In their new technique, the researchers manipulate the state of a star’s light from the two telescopes so that it is in a form that is protected from environmental noise. By then carrying out specific measurements, any errors in the qubits can be detected and corrected by the QEC codes before recovering the information contained in the starlight. This information is then used to construct an image of the star.

#4 Using quantum vacuum fluctuations to fabricate a high-speed random number generator

Modern cryptography relies on the generation of random numbers that are then used as keys to encrypt the huge amounts of data produced by governments and large corporations, for example. Although algorithms are commonly used to generate seemingly random numbers, a hacker could in principle work out the predetermined steps of an algorithm and thus predict its output.
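The predictability problem is easy to demonstrate: a pseudo-random generator is a deterministic algorithm, so anyone who knows its internal state can reproduce its entire output. A quick illustration with Python’s standard (non-cryptographic) generator:

```python
import random

# Two generators initialised with the same internal state produce
# identical "random" sequences -- exactly what an attacker who has
# recovered that state could do to predict future keys.
victim = random.Random(1234)
attacker = random.Random(1234)

victim_keys = [victim.random() for _ in range(5)]
predicted = [attacker.random() for _ in range(5)]
assert victim_keys == predicted
```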

An improved system would instead be based on a truly random process, such as the probabilistic nature of phenomena that occur at the quantum level.

The vacuum of space is not really a vacuum but is teeming with random quantum fluctuations as pairs of particles and antiparticles are spontaneously created and then annihilated when they collide with each other. These processes occur on extremely short time scales and can be used to produce random numbers. The problem is that such systems are subject to parasitic noise from their own components, which slows down the process.

To solve this problem, researchers at Ghent University in Belgium built a computer chip (measuring just 5 mm in length) and then mapped all the imperfections in the chip as well as the sources of noise within it. This enabled them to identify the origin of the interference and measure the quantum fluctuations with much greater sensitivity. The result: a chip capable of generating random numbers 200 times faster than existing commercial devices.

#5 Quantum advantage without error correction

IBM researchers have shown that it is possible to achieve quantum advantage (or “supremacy”) without error correction. To do this, they used a 127-qubit quantum processor to calculate the magnetisation of a material using a 2D Ising model. This model represents the magnetic properties of a 2D material using a network of quantum spins that interact with their nearest neighbours. Though apparently simple, this model is known to be extremely difficult to solve.
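To make the model concrete, here is a minimal classical sketch of the 2D Ising energy, E = −J Σ sᵢsⱼ summed over nearest-neighbour pairs (the quantum version IBM simulated adds a transverse field, which this sketch omits):

```python
import itertools

def ising_energy(spins: list[list[int]], J: float = 1.0) -> float:
    """Energy of a 2D grid of +/-1 spins with nearest-neighbour coupling J."""
    rows, cols = len(spins), len(spins[0])
    energy = 0.0
    for r, c in itertools.product(range(rows), range(cols)):
        if r + 1 < rows:  # bond to the spin below
            energy -= J * spins[r][c] * spins[r + 1][c]
        if c + 1 < cols:  # bond to the spin on the right
            energy -= J * spins[r][c] * spins[r][c + 1]
    return energy

aligned = [[1, 1], [1, 1]]  # all spins parallel: ferromagnetic ground state
print(ising_energy(aligned))  # -4.0 (four bonds, each contributing -J)
```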

The researchers used a technique called “noisy intermediate-scale quantum (NISQ) computation”, in which the calculation is performed rapidly to avoid the accumulation of errors¹. This type of calculation will allow for more general quantum algorithms in the short term, before truly fault-tolerant quantum computers become available.

The calculation was carried out using a superconducting quantum chip comprising 127 qubits executing quantum circuits 60 layers deep with a total of around 2,800 two-qubit gates. These gates are the quantum analogues of conventional logic gates.

The quantum circuit generates highly entangled quantum states that the researchers then used to program the 2D Ising model by performing a sequence of operations on the qubits and pairs of qubits. Although this method eliminated much of the noise, errors remained. The researchers therefore applied a quantum error mitigation process using conventional computer software. The technique works thanks to the 127-qubit processor’s ability to encode many configurations of the Ising model. Conventional computers would not have sufficient memory to achieve such a feat.
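The memory claim can be checked with simple arithmetic: a brute-force classical simulation would store one complex amplitude (16 bytes in double precision) for each of the 2¹²⁷ basis states of the processor:

```python
n_qubits = 127
bytes_per_amplitude = 16  # one double-precision complex number

memory_bytes = (2 ** n_qubits) * bytes_per_amplitude
print(f"{memory_bytes:.3e} bytes")  # ~2.7e39 bytes, far beyond any machine
```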

Isabelle Dumé

References:

Suppressing quantum errors by scaling a surface code logical qubit. Nature 614, 676–681

A quantum-bit encoding converter. Nature Photonics 17, 165–170


Imaging Stars with Quantum Error Correction. Phys. Rev. Lett. 129, 210502

100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations. PRX Quantum 4, 010330

Evidence for the utility of quantum computing before fault tolerance. Nature 618, 500–505
