


    Comparing Floating-Point and Fixed-Point Implementations on ADI Blackfin Processors with LabVIEW

    Publish Date: Oct 19, 2012

    Overview

     All data on microprocessors is stored in a binary representation at some level. ASCII strings are represented by assigning 8 bits, or a byte, to each character in the string. You can represent

    numerics in a variety of ways, but two representations have emerged as the standard representations for decimal numbers - floating-point and fixed-point. Each representation has its advantage

    and disadvantages. In this document, explore these two representations in detail and discover how you can use each in the NI LabVIEW Embedded Module for ADI Blackfin Processors.

    To demonstrate the differences between floating-point and fixed-point representations, you will use the number 118.625.

    Table of Contents

     Floating-Point Representation

     Fixed-Point Representation

     The Fract16 Data Type

     Floating-Point Algorithms Using the NI LabVIEW Embedded Module for ADI Blackfin Processors

     Fixed-Point Algorithms Using the NI LabVIEW Embedded Module for ADI Blackfin Processors

     Conclusion

    1. Floating-Point Representation

The Institute of Electrical and Electronics Engineers (IEEE) standardizes floating-point representation in IEEE 754. Floating-point representation is similar to scientific notation in that there is a

number multiplied by a base raised to some power. For example, 118.625 is represented in scientific notation as 1.18625 x 10^2. The main benefit of this representation is that it provides

varying degrees of precision based on the scale of the numbers that you are using. For example, it is beneficial to talk in terms of angstroms (10^-10 m) when you are working with the distance

between atoms. However, if you are dealing with the distance between cities, this level of precision is no longer practical or necessary.

IEEE 754 defines binary representations for 32-bit single-precision and 64-bit double-precision numbers as well as extended single-precision and extended double-precision numbers.

    Examine the specification for single-precision, floating-point numbers, also called floats.

     A float consists of three parts: the sign bit, the exponent, and the mantissa. The division of the three parts is as follows:

     

    Figure 1. A float consists of three parts: the sign bit, the exponent, and the mantissa.

    The sign bit is 0 if the number is positive and 1 if the number is negative.

The exponent is an 8-bit number that ranges in value from -126 to 127. The exponent does not use the typical two's complement representation because that would make comparisons more difficult.

Instead, the value is biased by adding 127 to the desired exponent, which makes it possible to represent negative exponents without a separate sign bit.

    The mantissa is the normalized binary representation of the number to be multiplied by 2 raised to the power defined by the exponent.

    Now look at how to encode 118.625 as a float. The number 118.625 is a positive number, so the sign bit is 0. To find the exponent and mantissa, first write the number in binary, which is

1110110.101 (get more details on finding this number in the "Fixed-Point Representation" section). Next, normalize the number to 1.110110101 x 2^6, which is the binary equivalent of scientific

notation. The exponent is 6 and the mantissa is 1.110110101. The exponent must be biased, which is 6 + 127 = 133. The binary representation of 133 is 10000101.

Thus, the floating-point encoded value of 118.625 is 0100 0010 1111 0110 1010 0000 0000 0000. Binary values are often referred to by their hexadecimal equivalent. In this case, the hexadecimal

value is 42F6A000.

     

    2. Fixed-Point Representation

In fixed-point representation, a specific radix point - called a decimal point in English and written "." - is chosen so there is a fixed number of bits to the right and a fixed number of bits to the left of

the radix point. The bits to the left of the radix point are called the integer bits. The bits to the right of the radix point are called the fractional bits.

     

Figure 2. In fixed-point representation, a specific radix point - called a decimal point in English and written "." - is chosen so there is a fixed number of bits to the right and a fixed number of bits to

the left of the radix point. The bits to the left of the radix point are called the integer bits. The bits to the right of the radix point are called the fractional bits.

In this example, assume a 16-bit fixed-point number with 8 magnitude bits and 8 radix bits, typically referred to as an 8.8 representation. Like most signed integers, fixed-point numbers are

    represented in two's complement binary. Using a positive number keeps this example simple.

    To encode 118.625, first find the value of the integer bits. The binary representation of 118 is 01110110, so this is the upper 8 bits of the 16-bit number. The fractional part of the number is

represented as 0.625 x 2^n, where n is the number of fractional bits. Because 0.625 x 256 = 160, you can use the binary representation of 160, which is 10100000, to determine the fractional bits.

    Thus, the binary representation for 118.625 is 0111 0110 1010 0000. The value is typically referred to using the hexadecimal equivalent, which is 76A0.

    The major advantage of using fixed-point representation for real numbers is that fixed-point adheres to the same basic arithmetic principles as integers. Therefore, fixed-point numbers can take

    advantage of the general optimizations made to the Arithmetic Logic Unit (ALU) of most microprocessors, and do not require any additional libraries or any additional hardware logic. On

    processors without a floating-point unit (FPU), such as the Analog Devices Blackfin Processor, fixed-point representation can result in much more efficient embedded code when performing

    mathematically heavy operations.

In general, the disadvantage of using fixed-point numbers is that they can represent only a limited range of values, so fixed-point numbers are susceptible to common numeric

computational inaccuracies. For example, the range of values that can be represented in the 8.8 notation is +127.99609375 to -128.0. If you add 100 + 100, you exceed the valid range of

the data type, which is called overflow. In most cases, values that overflow are saturated, or clipped, so that the result is the largest representable number.

     

    3. The Fract16 Data Type

The Analog Devices Blackfin Processor family does not have a hardware FPU, so the libraries that are most optimized for the Blackfin Processor are written using a specific fixed-point data type.

This data type, called fract16, is a two's complement fixed-point number defined to be a 1.15 representation. The one integer bit is a sign bit and the other 15 bits are reserved for fractional bits.

This lets you represent data in the range of +0.99996948 to -1.0. The advantage of this range is that multiplying any two numbers within it produces a result that is also in

the range. Thus, overflows are possible only when performing addition and subtraction.

     

    4. Floating-Point Algorithms Using the NI LabVIEW Embedded Module for ADI Blackfin Processors

Floating-point representation is the only natively supported way to represent real numbers in the National Instruments LabVIEW graphical development environment. All of the NI LabVIEW for

Windows advanced analysis and math libraries are implemented using single-precision, double-precision, or extended-precision floating-point data types. These data types use orange wires on the

block diagram.

The ADI Blackfin Processor does provide a floating-point emulation library, so you can use the LabVIEW for Windows analysis and math VIs and functions in the LabVIEW Embedded Module for

Blackfin Processors. However, the mathematics involved are implemented in software, so the performance is not optimal on the Blackfin Processor. The following is an example of a 1,024-point

    fast Fourier transform (FFT) done in floating point on the Blackfin Processor.

     

    Figure 3. Example of a 1,024-Point FFT in Floating Point on the Blackfin Processor 

    For many applications, the performance of the floating-point emulation is more than adequate. To maintain the best possible performance from the Blackfin emulation libraries, it is recommended

that you use the 32-bit single-precision, floating-point data type instead of the 64-bit double-precision, floating-point data type whenever single precision is adequate.

However, you might have applications that require processor-intensive real-time signal processing, such as real-time audio processing, where the overhead of the floating-point emulation library is

too great. For those applications, you must convert your algorithm to a fixed-point implementation where you need performance improvements. The next section outlines the process of converting

the previous FFT example to a fixed-point implementation.

     

    5. Fixed-Point Algorithms Using the NI LabVIEW Embedded Module for ADI Blackfin Processors

To implement the FFT magnitude measurement using the Blackfin-specific fixed-point libraries, first scale the floating-point data and convert it to the fixed-point equivalent. To do this, divide each

element in the array by a value that safely places all of the numbers in the valid range for a fract16, which is 0.99996948 to -1.0. For this example, divide each number in the array by 1.001 times

the element with the maximum absolute value to normalize the signal. Use one of the fixed-point Conversion VIs located on the Functions»Blackfin»Blackfin Numeric»Conversion palette to

convert the signal to a fract16.

Figure 4. Use one of the fixed-point Conversion VIs located on the Functions»Blackfin»Blackfin Numeric»Conversion palette to convert the signal to a fract16.

Next, replace the floating-point FFT VI with the Blackfin-specific fixed-point BF Real Fast Fourier Transform VI located on the Functions»Blackfin»Blackfin Analysis Library»Signal

Processing»Frequency Domain palette. Notice that the fixed-point BF Real Fast Fourier Transform VI requires several additional inputs. The fixed-point library was developed with efficiency in

mind, so the additional inputs account for the complex arrays necessary to compute the FFT. You do not need to allocate additional memory during the computation. These arrays must be

initialized to the correct size and passed to the BF Real Fast Fourier Transform VI as inputs. Use the complex_fract16 constant located on the Functions»Blackfin»Blackfin Numeric»Compl

palette. In addition, use the BF Twiddle Table Generator for FFTs VI to generate a twiddle table that you can pass as an input to the BF Real Fast Fourier Transform VI. You must do this outside

the time-critical portion of the VI.


    Figure 5. Use the BF Twiddle Table Generator for FFTs VI to generate a twiddle table that you can pass as an input to the BF Real Fast Fourier Transform VI.

     

Finally, you can use the BF Complex Absolute Value VI to compute the power spectrum of the signal. You must manually select the polymorphic instance to use for this VI. Polymorphic VIs in the

Blackfin Analysis Library VIs and the Blackfin Numeric VIs use "_fr16" at the end of the instance name to indicate that the instance is for fract16 data. Similarly, instance names that end in "d"

are for double-precision, floating-point data and instance names that end in "f" are for single-precision, floating-point data. The result is then converted back to a floating-point number for display

and/or further computation.

     

    Figure 6. You can use the BF Complex Absolute Value VI to compute the power spectrum of the signal.

The resulting fixed-point FFT calculation is approximately twice as fast as its floating-point counterpart (50.1 percent of the floating-point execution time).

     

    6. Conclusion

The LabVIEW Embedded Module for Blackfin Processors provides two analysis libraries for mathematics and signal processing. The standard LabVIEW Advanced Analysis Library includes

floating-point routines for the most common signal processing functions. In addition, the Blackfin Analysis Library offers fixed-point implementations for many of the same VIs and functions based

on the fract16 data type. Many of the fixed-point implementations have been hand-tuned for the Blackfin processor. These Blackfin-optimized VIs often require additional inputs, but the VIs provide

a significant improvement in execution time over the equivalent floating-point VIs.

The LabVIEW Embedded Module for Blackfin Processors includes additional examples using the Blackfin Analysis Library VIs and Blackfin Numeric VIs. To open the examples through the NI

Example Finder, select Help»Find Examples and browse to Toolkits and Modules»Analog Devices Blackfin»Analysis Library.

    Related Links:

Learn more about targeting embedded systems in LabVIEW:

http://www.ni.com/embeddedsystems/