MUMBAI, India, Nov. 14 -- Intellectual Property India has published a patent application (202547096998 A) filed by Intel Corporation, Santa Clara, U.S.A., on Oct. 8, for 'sparsity-based reduction of gate switching in deep neural network accelerators.'

Inventors include Martin Langhammer, Arnab Raha and Martin Power.

The patent application was published on Nov. 14 under issue no. 46/2025.

According to the abstract released by Intellectual Property India: "Gate switching in deep learning operations can be reduced based on sparsity in the input data. A first element of an activation operand and a first element of a weight operand may be stored in input storage units associated with a multiplier in a processing element. The multiplier computes a product of the two elements, which may be stored in an output storage unit of the multiplier. After detecting that a second element of the activation operand or a second element of the weight operand is zero valued, gate switching is reduced by avoiding at least one gate switching needed for the multiply-accumulation operation. For instance, the input storage units may not be updated. A zero-valued data element may be stored in the output storage unit of the multiplier and used as a product of the second element of the activation operand and the second element of the weight operand."
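The mechanism described in the abstract can be illustrated with a short behavioral sketch. The code below is a hypothetical model written for this article, not Intel's implementation: when either operand element of a multiply-accumulate step is zero, the multiplier's input registers are left unchanged and a zero is written to the output register and used as the product, approximating the reduction in gate switching. The function name sparse_mac and the switch-counting variable are illustrative assumptions.

```python
# Minimal behavioral sketch (hypothetical, not Intel's design) of the
# zero-gating idea from the abstract: when either operand element is zero,
# the multiplier's input registers are not updated, and a zero is stored
# in the output register and used as the product.

def sparse_mac(activations, weights):
    """Multiply-accumulate with sparsity-based input-register gating."""
    assert len(activations) == len(weights)
    in_a = in_w = 0      # model of the multiplier's input storage units
    out = 0              # model of the multiplier's output storage unit
    acc = 0              # accumulator
    switch_events = 0    # rough proxy for gate-switching activity

    for a, w in zip(activations, weights):
        if a == 0 or w == 0:
            # Zero detected: leave in_a / in_w unchanged, store zero as product.
            out = 0
        else:
            in_a, in_w = a, w      # input registers are updated
            out = in_a * in_w      # multiplier produces a new product
            switch_events += 1
        acc += out
    return acc, switch_events


if __name__ == "__main__":
    acts = [3, 0, 5, 0, 2]
    wts = [4, 7, 0, 1, 6]
    total, switches = sparse_mac(acts, wts)
    print(total, switches)  # 24, 2 -> only two non-zero products are computed
```

In this sketch only two of the five multiply steps drive new values into the input registers; the other three reuse a stored zero, which is the behavior the abstract attributes to the reduced gate switching.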

The application was filed internationally on Dec. 7, 2023, under international application no. PCT/US2023/082951.

Disclaimer: Curated by HT Syndication.