Every bit of data that goes over the internet, from paragraphs in an email to 3D images in a virtual reality environment, is susceptible to noise, such as electromagnetic interference from a microwave or Bluetooth device. The data is encoded so that when it reaches its destination, a decoding program can reverse the effects of the noise and recover the original data.
Using a universal decoding technique dubbed Guessing Random Additive Noise Decoding (GRAND), researchers from the United States and Ireland have built the first silicon chip capable of decoding any code, regardless of its structure, with maximum accuracy.
By reducing the need for multiple, computationally complex decoders, GRAND offers greater efficiency that could have applications in augmented and virtual reality, gaming, 5G networks, and connected devices that need to process large amounts of data with minimal latency.
These codes can be thought of as redundant hashes appended to the end of the original data, with the rules for generating that hash stored in a specific codebook. As the encoded data travels through a network, it is affected by noise, the signal-disrupting energy commonly generated by other electronic devices. When that coded data, and the noise that affected it, arrive at their destination, the decoding algorithm consults its codebook and uses the structure of the hash to guess what the stored information is, as in the toy example sketched below.
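To make the codebook idea concrete, here is a minimal sketch in Python, assuming a toy linear block code in which each 4-bit message gets three check bits. The bit widths and check equations are illustrative only and are not taken from the GRAND hardware.

```python
def encode(m):
    """m: list of 4 message bits -> 7-bit codeword (message + 3 check bits)."""
    c0 = m[0] ^ m[1] ^ m[2]
    c1 = m[1] ^ m[2] ^ m[3]
    c2 = m[0] ^ m[2] ^ m[3]
    return m + [c0, c1, c2]

# The codebook: all 16 valid 7-bit codewords out of the 128 possible 7-bit words.
codebook = {tuple(encode([(v >> i) & 1 for i in (3, 2, 1, 0)])) for v in range(16)}

def is_valid(word):
    """Membership test the decoder performs after undoing a guessed noise pattern."""
    return tuple(word) in codebook
```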
GRAND works by guessing the noise that affected the transmission and using the noise pattern to deduce the original information. It generates a series of noise sequences in the order in which they are most likely to occur, subtracts each from the received data, and checks whether the resulting codeword is in the codebook. Although the noise appears random, it has a probabilistic structure that allows the algorithm to guess what it might be.
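The guessing loop itself can be sketched as follows, assuming a binary symmetric channel where noise patterns that flip fewer bits are more likely. It reuses the toy encode and is_valid helpers from the previous sketch and is only a rough software model of what the hardware does.

```python
from itertools import combinations

def noise_patterns(n, max_weight):
    """Yield n-bit noise patterns from most to least likely on a binary
    symmetric channel, i.e. in order of increasing Hamming weight."""
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            pattern = [0] * n
            for i in flips:
                pattern[i] = 1
            yield pattern

def grand_decode(received, max_weight=3):
    """Guess noise patterns, 'subtract' each one (XOR over GF(2)), and stop at
    the first candidate that appears in the codebook."""
    for e in noise_patterns(len(received), max_weight):
        candidate = [r ^ b for r, b in zip(received, e)]
        if is_valid(candidate):
            return candidate, e
    return None, None  # give up once the guessing budget is exhausted

# Example: flip one bit of a valid codeword and recover it.
sent = encode([1, 0, 1, 1])
received = list(sent)
received[2] ^= 1                  # channel noise flips bit 2
decoded, noise = grand_decode(received)
assert decoded == sent and noise == [0, 0, 1, 0, 0, 0, 0]
```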
The GRAND chip has a three-tiered structure: the first stage starts with the simplest possible noise patterns, and the second and third stages move on to longer and more complicated ones. Each stage operates independently, which increases the throughput of the system while saving energy.
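One way to picture that staging, as a rough sketch only: split the guess ordering by noise-pattern weight and run the cheap slice first, falling through to the heavier slices only on failure. The split points below are assumptions for illustration and not the chip's actual pipeline; the helpers come from the earlier sketches.

```python
def stage(received, weights):
    """Try only the noise patterns whose Hamming weight falls in `weights`."""
    for w in weights:
        for flips in combinations(range(len(received)), w):
            candidate = list(received)
            for i in flips:
                candidate[i] ^= 1
            if is_valid(candidate):
                return candidate
    return None

def staged_grand(received):
    # Stage 1 handles the common, light-noise case; stages 2 and 3 only run
    # if the earlier, cheaper stage fails to find a codeword.
    for weights in ((0, 1), (2,), (3, 4)):
        result = stage(received, weights)
        if result is not None:
            return result
    return None  # report a decoding failure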
The chip can also switch seamlessly between two codebooks. It contains two static random-access memory chips: one cracks codewords while the other loads a new codebook, and the chip then switches over to decoding with it immediately.
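That switching behaves like classic double buffering. A hedged sketch, with the bank names and the load/swap interface invented purely for illustration:

```python
class CodebookBanks:
    """Two memory banks: one active for decoding, one being refilled."""

    def __init__(self, initial_codebook):
        self.banks = [set(initial_codebook), set()]
        self.active = 0                      # index of the bank used for decoding

    def is_valid(self, word):
        """Membership check against whichever codebook is currently active."""
        return tuple(word) in self.banks[self.active]

    def load_inactive(self, new_codebook):
        """Fill the idle bank while decoding continues against the active one."""
        self.banks[1 - self.active] = set(new_codebook)

    def swap(self):
        """Switch decoding over to the freshly loaded bank."""
        self.active = 1 - self.active
```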
In testing, the GRAND chip was able to decode any moderate-redundancy code up to 128 bits in length with only about a microsecond of latency. Médard and her team had previously demonstrated the algorithm's success, but this new work is the first to establish GRAND's effectiveness and efficiency in hardware.
Before developing hardware for the new decoding method, the researchers had to set aside their preconceptions. They could not reuse work that had already been done; the design was effectively a blank whiteboard, and every single component had to be rethought from the ground up. They describe the process as a journey of reconsideration, and expect that when they build a second chip they will recognize things in this first one that they did out of habit or assumption and can do better.
Because GRAND uses codebooks only for verification, it can work not only with legacy codes but also with codes that have yet to be introduced. In the run-up to 5G deployment, regulators and communications companies struggled to agree on which codes should be used in the new network. In the end, regulators chose two types of traditional codes for 5G infrastructure, applied in different scenarios. In the future, GRAND could obviate the need for such rigid standardization.
In the future, the researchers hope to tackle the problem of soft detection, in which the received data are less precise, using a retooled version of the GRAND chip. They also plan to test GRAND's ability to crack longer, more complicated codes and to adjust the structure of the silicon chip to improve its energy efficiency.