Researchers Demonstrate Error Correction in a Silicon Qubit System


Aug. 25, 2022 — Researchers from RIKEN in Japan have taken a major step toward large-scale quantum computing by demonstrating error correction in a three-qubit silicon-based quantum computing system. The work, published in Nature, could pave the way toward practical quantum computers.

Quantum computers are a hot area of research today, as they promise to solve certain important problems that are intractable for conventional computers. Rather than the simple 1-or-0 binary bits of conventional machines, they use a completely different architecture based on the superposition states found in quantum physics. However, these quantum states are fragile: environmental noise and decoherence readily disturb them, so quantum computers require error correction to perform precise calculations.

One important challenge today is choosing which physical systems can best act as "qubits," the basic units used to perform quantum calculations. Each candidate system has its own strengths and weaknesses. Two of the most popular today, superconducting circuits and trapped ions, have the advantage that some form of error correction has already been demonstrated in them, allowing them to be put into actual use, albeit on a small scale. Silicon-based quantum technology, which has only begun to be developed over the past decade, has a different advantage: it uses semiconductor nanostructures similar to those already used to integrate billions of transistors onto a small chip, and so could leverage existing production technology.

However, one major gap in the silicon-based technology has been error correction itself. Researchers had previously demonstrated control of two qubits, but that is not enough for error correction, which requires a three-qubit system.

In the current research, conducted at the RIKEN Center for Emergent Matter Science and the RIKEN Center for Quantum Computing, the group achieved this feat: it demonstrated full control of a three-qubit system, one of the largest qubit systems realized in silicon, and thereby provided the first prototype of quantum error correction in silicon. The key step was implementing a three-qubit Toffoli-type quantum gate.
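To make the idea concrete, below is a minimal sketch in plain Python/NumPy, entirely separate from the RIKEN hardware and paper, of the textbook three-qubit bit-flip repetition code that motivates both the three-qubit requirement and the Toffoli-type correction gate: one logical qubit is encoded across three physical qubits with CNOT gates, a single bit-flip error is injected, and a Toffoli gate conditioned on the two ancilla qubits flips the data qubit back. The gate constructions and qubit ordering here are illustrative choices, not the experimental implementation.

```python
# Illustrative sketch (NumPy only, not the RIKEN stack): three-qubit
# bit-flip repetition code corrected with a Toffoli gate.
import numpy as np

# Single-qubit operators
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def kron(*ops):
    """Tensor product of single-qubit operators (qubit 0 is leftmost)."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT on an n-qubit register."""
    P0 = np.array([[1, 0], [0, 0]])   # |0><0|
    P1 = np.array([[0, 0], [0, 1]])   # |1><1|
    ops0, ops1 = [I] * n, [I] * n
    ops0[control] = P0
    ops1[control] = P1
    ops1[target] = X
    return kron(*ops0) + kron(*ops1)

def toffoli(c1, c2, target, n=3):
    """Toffoli (CCX): flip `target` only when both controls are |1>."""
    P0 = np.array([[1, 0], [0, 0]])
    P1 = np.array([[0, 0], [0, 1]])
    U = np.zeros((2**n, 2**n))
    for a, Pa in ((0, P0), (1, P1)):
        for b, Pb in ((0, P0), (1, P1)):
            ops = [I] * n
            ops[c1], ops[c2] = Pa, Pb
            if a == 1 and b == 1:
                ops[target] = X
            U += kron(*ops)
    return U

# Logical state alpha|0> + beta|1> on the data qubit (qubit 0); ancillas in |0>.
alpha, beta = 0.6, 0.8
psi = np.kron(np.array([alpha, beta]), np.kron([1, 0], [1, 0]))

# Encode: |psi>|00> -> alpha|000> + beta|111>
psi = cnot(0, 2) @ cnot(0, 1) @ psi

# Error: a single bit flip on the data qubit
psi = kron(X, I, I) @ psi

# Decode and correct: CNOTs write the syndrome onto the ancillas, then the
# Toffoli flips the data qubit back only if both ancillas flag an error.
psi = toffoli(1, 2, 0) @ cnot(0, 2) @ cnot(0, 1) @ psi

print(np.round(psi, 3))  # amplitude 0.6 on |011>, 0.8 on |111>
```

Running the sketch leaves amplitudes 0.6 and 0.8 on |011> and |111>, i.e., the original superposition is restored on the data qubit while the two ancillas record the error syndrome. This is why two qubits are not enough: a single ancilla cannot tell which qubit was flipped, whereas two ancillas plus a doubly-controlled (Toffoli-type) gate can both detect and undo the error.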

According to Kenta Takeda, the first author of the paper, “the idea of implementing a quantum error-correcting code in quantum dots was proposed about a decade ago, so it is not an entirely new concept, but a series of improvements in materials, device fabrication, and measurement techniques allowed us to succeed in this endeavor. We are very happy to have achieved this.”

According to Seigo Tarucha, the leader of the research group, their "next step will be to scale up the system. For that, it would be nice to work with semiconductor industry groups capable of manufacturing silicon-based quantum devices at a large scale."

More information: Kenta Takeda et al., "Quantum error correction with silicon spin qubits," Nature (2022). DOI: 10.1038/s41586-022-04986-6.

Source: phys.org and RIKEN


