
How do I detect the timescale precision used in a simulation from the source code? Consider a configuration parameter (cfg_delay_i) whose delay value is given by the user in time units of fs. If the user gives 1000, my code has to wait 1000 fs (i.e., 1 ps) before executing further.

#(cfg_delay_i * 1fs); // waits as intended only if the timescale is 1ps/1fs
do_something();

If the timescale precision is 1fs, there won't be any problem, but if the precision is coarser than that, the literal rounds to zero and the statement behaves as a zero delay. So I want to write code that determines the timescale precision in use and applies the delay accordingly. My expected pseudo-code is like below:

if (timeprecision == 1fs)      #(cfg_delay_i * 1fs);
else if (timeprecision == 1ps) #(cfg_delay_i/1000 * 1ps);

Please help me with the logic to determine the timescale unit and precision internally.


1 Answer


You can write if (int'(1fs) != 0) // the time precision is 1fs, and so on. But there is no need to do that.
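If you did want the check, here is a hedged sketch of that idea. The helper name below is made up for illustration, and it compares each literal as a real value rather than casting to int, relying on the rule that a time literal finer than the local precision rounds to exactly 0:

// Hypothetical helper: classify the local time precision by checking
// which time literals survive rounding. A literal finer than the
// local precision rounds to exactly 0.
function automatic string guess_precision();
  if      (1fs != 0) return "1fs";
  else if (1ps != 0) return "1ps";
  else if (1ns != 0) return "1ns";
  else               return "coarser than 1ns";
endfunction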

#(cfg_delay_i/1000.0 * 1ps)

This works regardless of whether the precision is 1ps or something smaller. Note the use of the real literal 1000.0 to keep the division real; 1ps is itself a real number, so the result of the whole expression is real. You could also write

#(cfg_delay_i/1.e6 * 1ns)

If the time precision at the point where this code sits is coarser than 1fs, the result is rounded to the nearest precision unit. For example, if cfg_delay_i is 500 and the current precision is 1ps, it will be rounded to #1ps.

Note that users setting cfg_delay_i must take the same care to ensure their values are given with the correct scaling/precision.
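A self-contained sketch of the whole approach (the module name, the timescale choice, and the 500 fs request below are illustrative assumptions, not from the question):

module tb;
  timeunit 1ns;
  timeprecision 1ps;      // deliberately coarser than 1fs

  int cfg_delay_i = 500;  // user requests 500 fs

  initial begin
    // Real-valued arithmetic preserves the fractional picoseconds; the
    // simulator then rounds the delay to the local precision, so
    // 500 fs = 0.5 ps is rounded to #1ps here, as described above.
    #(cfg_delay_i / 1000.0 * 1ps);
    $display("elapsed: %t", $realtime);
  end
endmodule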

answered 2019-10-01T07:13:29.843