The posit number system has been used in many applications, especially deep learning. Because its non-uniform number distribution aligns well with the data distribution found in deep learning, it can speed up the training process. Owing to the flexible bit-width of posit numbers, the hardware multiplier is typically built for the widest possible mantissa bit-width. Such multiplier designs consume considerable power, since the actual mantissa bit-width is often smaller than the maximum; this waste is especially pronounced when the mantissa bit-width is tiny. In the proposed design, the mantissa multiplier is still built for the widest feasible bit-width, but it is decomposed into several smaller multipliers. At run time, only the necessary small multipliers are turned on. These smaller multipliers are controlled by the regime bit-width, from which the mantissa bit-width can be determined. This design technique is applied to 8-bit, 16-bit, and 32-bit posit formats.
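The gating idea described above can be illustrated with a small behavioral model. The sketch below is illustrative only (the function name, segment size `seg`, and width arguments `wa`/`wb` are assumptions, not the paper's interface): a wide mantissa multiply is decomposed into segment-by-segment sub-products, and only the sub-multipliers whose input segments actually carry mantissa bits are evaluated, modeling the powered-off hardware blocks as skipped iterations.

```python
def gated_mantissa_multiply(a: int, b: int, wa: int, wb: int, seg: int = 8) -> int:
    """Behavioral model of a decomposed mantissa multiplier.

    a, b   : mantissa operands (unsigned integers)
    wa, wb : run-time significant bit-widths of a and b
    seg    : bit-width of each small sub-multiplier (assumed value)

    Each (i, j) segment pair models one small hardware multiplier;
    pairs outside the active widths are never evaluated, which models
    the gated (powered-off) sub-multipliers of the proposed design.
    """
    num_a = -(-wa // seg)  # ceil(wa / seg): active segments of a
    num_b = -(-wb // seg)  # ceil(wb / seg): active segments of b
    product = 0
    for i in range(num_a):
        a_seg = (a >> (i * seg)) & ((1 << seg) - 1)
        for j in range(num_b):
            b_seg = (b >> (j * seg)) & ((1 << seg) - 1)
            # one small seg x seg multiply, shifted to its column weight
            product += (a_seg * b_seg) << ((i + j) * seg)
    return product
```

When both operands fit in a single segment, only one small multiplier fires; wider operands activate more segment pairs, matching the run-time gating the abstract describes.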
I. INTRODUCTION
A new datatype called the posit was created as a direct replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable-size operands; like floats, they round if an answer is not exact. They offer several compelling advantages over floats: larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow or underflow, and "Not-a-Number" (NaN) denotes an action rather than a bit pattern. A posit processing unit requires less hardware than an IEEE float FPU. Furthermore, the non-uniform data distribution of posits matches the non-uniform data distribution of some applications well, including deep learning. The 8-bit and 16-bit posit formats are frequently employed in deep learning algorithms, and in some scientific computing applications the 32-bit posit format replaces the 64-bit floating-point format. A posit number is defined as Posit(nb, es), where nb is the total bit width and es is the exponent bit width. It has four components: sign (s), regime (rg), exponent (exp), and mantissa (frac). The bit-widths of the components vary; in particular, the regime bit-width changes with the value being represented.
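The variable-width field layout of Posit(nb, es) can be made concrete with a small decoder. The following is a minimal sketch (the function name is an assumption, and the special cases of zero, NaR, and two's-complement negation of negative patterns are omitted for brevity): the regime is a run of identical bits terminated by the opposite bit, at most es exponent bits follow, and whatever remains is the mantissa fraction.

```python
def decode_posit_fields(bits: int, nb: int, es: int):
    """Split an nb-bit posit pattern (unsigned integer) into its fields.

    Returns (sign, k, exp, frac, frac_bits), where k is the regime value.
    Zero/NaR handling and negation of negative patterns are omitted.
    """
    sign = (bits >> (nb - 1)) & 1
    body = bits & ((1 << (nb - 1)) - 1)  # bits after the sign

    # Regime: a run of identical bits terminated by the opposite bit.
    first = (body >> (nb - 2)) & 1
    run = 0
    i = nb - 2
    while i >= 0 and ((body >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first == 1 else -run
    i -= 1  # skip the terminating (opposite) bit, if present

    # Up to es exponent bits follow; fewer may remain in short patterns.
    exp_bits = min(es, i + 1)
    exp = (body >> (i + 1 - exp_bits)) & ((1 << exp_bits) - 1) if exp_bits > 0 else 0
    exp <<= es - exp_bits  # truncated exponent bits are treated as zero

    # Whatever remains is the mantissa fraction (hidden bit not stored).
    frac_bits = i + 1 - exp_bits
    frac = body & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    return sign, k, exp, frac, frac_bits
```

Because the regime run length varies with the magnitude of the value, `frac_bits` varies from pattern to pattern, which is precisely why a fixed worst-case mantissa multiplier is frequently over-provisioned.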
III. PROPOSED METHOD
One of the major speed-enhancement techniques used in modern digital circuits is the ability to add numbers with minimal carry propagation. The basic idea is that three numbers can be reduced to two in a 3:2 compressor by performing the addition while keeping the carries and the sums separate. This means that all of the columns can be added in parallel without relying on the result of the previous column, creating a two-output "adder" whose delay is independent of the size of its inputs. A 1-bit 3:2 compressor is shown in figure 7.
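The 3:2 compression step can be sketched as follows (a minimal behavioral model, assuming arbitrary-width operands; the function name is illustrative). Each bit column acts as a full adder: the sum bit is the XOR of the three inputs, and the carry bit is their majority, shifted left by one column. No carry ripples between columns, so the invariant a + b + c == sum + carry holds regardless of operand width.

```python
def compress_3to2(a: int, b: int, c: int) -> tuple[int, int]:
    """Reduce three operands to a (sum, carry) pair in carry-save form.

    Per column: sum = a XOR b XOR c, carry = majority(a, b, c) << 1.
    All columns are independent, so the delay of the hardware equivalent
    is one full-adder stage regardless of input width.
    """
    s = a ^ b ^ c                                # column-wise sum bits
    carry = ((a & b) | (a & c) | (b & c)) << 1   # majority bits, weight x2
    return s, carry
```

In a multiplier, a tree of such compressors reduces the many partial products to two operands, and a single carry-propagate adder finishes the sum.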
IV. CONCLUSION
This paper proposes a power-efficient 32-bit posit multiplier architecture. Motivated by the observation that the full mantissa multiplier unit is not used all the time, we reconstruct it from smaller parts. To limit power consumption, only the necessary portion of the multiplier is used. Our method is evaluated for a 16-bit multiplier, and the same technique can be extended to 8-bit and 32-bit posit multipliers. In the future, further power-reduction techniques for the multiplier architecture can be developed. The work need not be limited to multipliers alone; future work can also target posit adders or posit multiply-accumulate units.