
Decimal VHDL


42 threads found on edaboard.com: Decimal VHDL
You shouldn't expect others to do your homework, but help is surely available. The first step would be a clear specification. You can design a divider that produces fractional result bits (bits to the right of the binary point), but this has nothing to do with floating-point number representation. Floating point involves an exponent showing how many bit positions the point is shifted.
Hi, I have 14-bit data that is fed from an FPGA design written in VHDL. The Nios II processor reads the 14-bit data from the FPGA and does some processing, where the Nios II system is programmed in C. The 14-bit data can be positive, zero, or negative. In the Altera compiler, I can only define the data as 8, 16, or 32 bits wide.
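One common fix on the HDL side is to sign-extend the 14-bit value to a 16-bit port, so the C code can read it directly as a signed 16-bit integer. A minimal sketch, assuming numeric_std (data14/data16 are illustrative names, not the poster's actual signals):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
...
signal data14 : std_logic_vector(13 downto 0);
signal data16 : std_logic_vector(15 downto 0);
...
-- resize on a signed value replicates the sign bit (sign extension)
data16 <= std_logic_vector(resize(signed(data14), 16));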
Yes. Range indices are in decimal by default, so 001000010000 is read as the decimal number one billion and ten thousand, not as the binary form you intended. You need to write the literal as 2#001000010000# to make it a binary representation of an integer (or 16#210# if you want hex), and I think the error comes from using the to_integer function.
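For reference, a based literal and its hex equivalent side by side (the constant names are illustrative, and the value assumes the 12-digit pattern above):

constant C_BIN : integer := 2#001000010000#;  -- binary based literal, 528 decimal
constant C_HEX : integer := 16#210#;          -- the same value written in hex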
I have some real numbers like 9.123472e+002. I need to print these to a file after converting them from scientific notation to plain decimal representation; in this example I want to write 912.3472 to the file instead of 9.123472e+002. Any idea how to do this in VHDL? Are there any library functions for this job?
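There is no standard pretty-printer for this, but a minimal sketch using std.textio and ieee.math_real could look like the following (write_fixed is a made-up helper name; assumes a non-negative value and 4 fractional digits):

use std.textio.all;
library ieee;
use ieee.math_real.all;
...
procedure write_fixed(file f : text; r : in real) is
    variable l    : line;
    variable ip   : integer := integer(floor(r));
    variable frac : integer := integer(round((r - floor(r)) * 10000.0));
begin
    write(l, ip);      -- integer part, e.g. 912
    write(l, '.');
    write(l, frac);    -- caution: drops leading zeros (912.0047 would print as 912.47)
    writeline(f, l);
end procedure;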
Can anyone please tell me how to control the decimal point on the BASYS 2?
You are not doing binary-to-decimal conversion; the output is still binary. You are taking the absolute value of the vector. What input does the DAC expect: signed binary or offset (unsigned) binary?
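If it turns out the DAC wants offset binary, the conversion from two's complement is just inverting the sign bit. A sketch, assuming a 12-bit sample (dac_data/sample are illustrative names):

-- two's complement -> offset binary: flip the MSB
dac_data <= (not sample(11)) & sample(10 downto 0);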
Hi all, I want to write a function for converting a BCD number into decimal in VHDL. I looked for algorithms online but did not quite understand the logic behind them. I feel it can be done the following way: I have an address input as a BCD value, so it can be broken into chunks of 4 bits and then converted into the corresponding (...)
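Assuming the usual interpretation (packed BCD to binary), a minimal sketch for four digits using ieee.numeric_std (bcd_to_binary is a made-up name): each step multiplies the running result by 10 and adds the next digit.

-- convert 4 packed BCD digits (MSD in bits 15..12) to binary;
-- assumes every nibble holds a valid digit 0..9
function bcd_to_binary(bcd : std_logic_vector(15 downto 0)) return unsigned is
    variable result : unsigned(13 downto 0) := (others => '0');  -- 9999 fits in 14 bits
begin
    for i in 3 downto 0 loop
        result := resize(result * 10, 14)
                + resize(unsigned(bcd(4*i + 3 downto 4*i)), 14);
    end loop;
    return result;
end function;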
Hi, how do I convert an integer value into decimal in VHDL? I am trying to use conv_std_logic_vector, but I am getting an error like the one below: Undefined symbol 'conv_std_logic_vector'. conv_std_logic_vector: Undefined symbol (last report in this block). How can I solve the problem? Regards, xilinx1001
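conv_std_logic_vector lives in the non-standard std_logic_arith package; the usual fix is to use ieee.numeric_std instead. A sketch (my_int and slv are illustrative names):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
...
-- integer -> std_logic_vector, via the unsigned type
slv <= std_logic_vector(to_unsigned(my_int, slv'length));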
Hi dear friends, can anybody help me with the calculation shown below in VHDL? Otherwise I will go insane. Xn+1 = 4*Xn*(1-Xn), where Xn+1 is the new value and Xn the previous one. Xn will always be in the range 0 < Xn < 1, and Xn is a floating-point number with 10 decimal places; the initial value of Xn is 0.25. So how can I obtain suitable code for this?
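A fixed-point sketch of this logistic map, assuming Q0.16 unsigned format (16 fractional bits; all names illustrative). Note the edge case x = 0.5, which maps to exactly 1.0 and is not representable in Q0.16:

signal x : unsigned(15 downto 0) := to_unsigned(16384, 16);  -- x0 = 0.25
...
process (clk)
    variable one_minus_x : unsigned(15 downto 0);
    variable prod        : unsigned(31 downto 0);
begin
    if rising_edge(clk) then
        one_minus_x := (not x) + 1;         -- 2**16 - x, i.e. 1.0 - x for 0 < x < 1
        prod        := x * one_minus_x;     -- Q0.32 product
        x           <= prod(29 downto 14);  -- *4 = shift left by 2, keep the top 16 bits
    end if;
end process;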
With a good scaling factor, you can get rid of the decimals (between 0 and 1). And as FvM suggests, if it is your intention to make your functions synthesizable, 8-bit resolution might be on the low side, but 16 bits for video is already on the high side. Maybe some food for thought: how is your outside world communicating with your FPGA? I doubt if it...
Hi everybody, I am trying to test my chip's functionality in VHDL before going ahead. I wanted to know how to simulate fractional (sub-ns) delays like: s <= d after 1.54 ns; the simulator treats that as a 2 ns delay. I want to know how to make it honour such delays. Any help appreciated. Thanks.
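This is a simulator time-resolution issue rather than a language one: if the resolution is 1 ns, 1.54 ns rounds to 2 ns. A sketch of the workaround (the vsim flag is ModelSim/Questa specific):

-- express the delay in ps, and run the simulator with a finer resolution,
-- e.g. "vsim -t ps" in ModelSim/Questa for 1 ps resolution
s <= d after 1540 ps;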
We would prefer a slightly clearer question. What is the relation between decimal <> real number <> 2 bit <> std_logic_vector?
I am trying to write the VHDL code for a timing generator chip. In the VHDL code I have to incorporate a 16-bit BCD (binary-coded decimal) counter, i.e. 4 decades. I have tried a lot but am unable to figure out how to get it working. A 16-bit BCD counter can count from 0 to 9999; for the first 9 clock pulses I can easily create the count...
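A common structure is a ripple of four decade counters, where each digit wraps at 9 and passes a carry to the next. A minimal sketch using ieee.numeric_std (all names illustrative):

type bcd_digits_t is array (0 to 3) of unsigned(3 downto 0);
signal digits : bcd_digits_t := (others => (others => '0'));
...
process (clk)
    variable carry : std_logic;
begin
    if rising_edge(clk) then
        carry := '1';                              -- count on every clock
        for i in 0 to 3 loop
            if carry = '1' then
                if digits(i) = 9 then
                    digits(i) <= (others => '0');  -- decade wraps, carry ripples on
                else
                    digits(i) <= digits(i) + 1;    -- decade increments, carry absorbed
                    carry := '0';
                end if;
            end if;
        end loop;
    end if;
end process;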
On real hardware there is no such thing as decimal; all numbers are binary. In VHDL you can compare arrays to decimal values just so it is easy to read. You can split up any array by indexing the bits: if din(1 downto 0) > 1 then ... elsif din(3 downto 2) > 1 then ... etc. You need to use the numeric_std package to convert the input to (...)
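For example, a sketch assuming din is a std_logic_vector:

library ieee;
use ieee.numeric_std.all;
...
-- type-convert the slices so the decimal comparison is defined
if unsigned(din(1 downto 0)) > 1 then
    ...
elsif unsigned(din(3 downto 2)) > 1 then
    ...
end if;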
I would say represent 0.390625 in binary: it is 0.011001. Now look at the bits to the right of the point: 011001, which is 25 in decimal. Now count how many places you must shift the binary point to go from 0.011001 to 011001.0: you need 6 shifts, which means the value is 25 divided by 2^6 = 64. So that is all.
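As a quick check: 0.011001 in binary is 1/4 + 1/8 + 1/64 = 16/64 + 8/64 + 1/64 = 25/64 = 0.390625, which matches the shift-by-6 (divide-by-64) reading.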
Obviously, you can only feed back 8 bits of the result. You have to decide about the intended adder behaviour. It can be either wrap around (simply ignoring the two most significant bits) or saturation, limiting the result to 255 (decimal).
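A sketch of the saturating variant, shown here for the simpler case of a single carry bit on an 8-bit sum (names illustrative):

signal a, b : unsigned(7 downto 0);
signal sum9 : unsigned(8 downto 0);
signal q    : unsigned(7 downto 0);
...
sum9 <= resize(a, 9) + resize(b, 9);        -- keep the carry bit
q    <= (others => '1') when sum9(8) = '1'  -- overflow: clamp at 255
        else sum9(7 downto 0);              -- otherwise pass the sum through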
The weight of the lowest fractional bit is 1/16, i.e. 0.0625, so you would need four display digits to represent it exactly. If this is what you want, you can multiply by 625 and convert to decimal. Or just use a look-up table of intended representations with your selected width, as TrickyDicky suggested.
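A sketch of the multiply-by-625 trick (names illustrative): with 4 fractional bits, the fraction F/16 equals the 4-digit decimal value F*625/10000 exactly.

signal frac   : unsigned(3 downto 0);   -- fractional bits, LSB weight 1/16
signal digits : unsigned(13 downto 0);  -- 0000 .. 9375, always a multiple of 625
...
digits <= frac * to_unsigned(625, 10);  -- 4-bit * 10-bit -> 14-bit product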
Firstly, you should read up a little on floating-point implementations. In the end, I suggest using a fixed-point implementation instead. This is where a binary point is assumed, e.g. 110010.11 is a 6.2 fixed-point number. Math is done as normal, but the position of the point is tracked. E.g. a 6.2 fixed-point number multiplied by 1/4 would simply become a 4.4 fixed-point number.
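A small sketch of point tracking in a multiply (names illustrative): the hardware just multiplies integers, and the format of the result follows by adding the integer and fraction widths of the operands.

signal a, b : unsigned(7 downto 0);   -- 6.2 fixed point: 6 integer, 2 fraction bits
signal p    : unsigned(15 downto 0);  -- 6.2 * 6.2 -> 12.4 fixed point
...
p <= a * b;  -- plain integer multiply; the binary point is only tracked in comments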
Your title and your question in the post are two different things. Elexan has answered the question in the post. But for the question in the title: std_logic_vector is not a number, so you have to type-convert via the unsigned or signed type to get a std_logic_vector. My main question is why you want to assign a decimal number to a std_logic_vector...
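For completeness, the conversion mentioned above as a sketch (slv is an illustrative name):

-- decimal value 42 into a std_logic_vector, via the unsigned type
slv <= std_logic_vector(to_unsigned(42, slv'length));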
Sometimes you can use something like this:
--*****************************
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity decimal is
    Port ( clk, res : in  STD_LOGIC;
           cnt      : out STD_LOGIC_VECTOR(0 to 5));
end decimal;
architecture Behavioral of decimal is
    signal cnt_s : unsigned(0 to 5) := (others => '0');
begin
    SM : process (clk)
    begin
        if rising_edge(clk) then
            if res = '1' then
                cnt_s <= (others => '0');  -- synchronous reset
            else
                cnt_s <= cnt_s + 1;        -- free-running 6-bit count
            end if;
        end if;
    end process;
    cnt <= std_logic_vector(cnt_s);        -- drive the output port
end Behavioral;