When I run part 7 of the tutorial I can see floats going into the model as input. In my situation, upstream processing has already produced Q(1.31) / fixed32_31 data (1 sign bit and 31 fractional bits), and I would like to avoid converting that existing input data to floating point. In future I may also be dealing with 8-bit data.
What I'm not sure about is:
How to train a QKeras model with fixed-point input, since I don't see a specific quantisation-aware "input" layer in https://github.com/google/qkeras. My guess is that I just feed in the data converted to fixed point during training, rather than feeding in floats.
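To make that concrete, this is roughly the conversion I have in mind (the file name and array names are just placeholders for my actual pipeline, and I'm not sure whether feeding the "real" values like this is what quantisation-aware training expects):

```python
import numpy as np

X_q31 = np.load("upstream_q1_31.npy")          # raw Q(1.31) words stored as int32 (placeholder file)
X_train = X_q31.astype(np.float64) / 2.0**31   # the real values they represent, in [-1, 1)
X_train = X_train.astype(np.float32)           # Keras expects float input
```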
During training, if I feed fixed-point data into the input layer and straight into a QConv2DBatchnorm with `kernel_quantizer="quantized_bits(10,2,alpha=1)"`, I get a loss of nan. With floating-point input there is no issue.
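For reference, the start of the model looks roughly like this (the input shape and filter count are placeholders for my real model; this is the setup that trains fine with floats but gives nan with the fixed-point input):

```python
import tensorflow as tf
from qkeras import QConv2DBatchnorm, quantized_bits

inputs = tf.keras.Input(shape=(32, 32, 1))            # placeholder shape
x = QConv2DBatchnorm(
    filters=16,                                       # placeholder filter count
    kernel_size=(3, 3),
    kernel_quantizer=quantized_bits(10, 2, alpha=1),  # the quantizer mentioned above
)(inputs)
```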
For the HLS config I'm guessing I can just set something like `hls_config_q['LayerName'][input_layer]['Precision'] = 'ap_fixed<32,1>'` (where `input_layer` is the name of the input layer), but so far I haven't managed to train a model with fixed-point input to get to this stage.
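In other words, something along these lines on the hls4ml side (`'input_1'` stands in for whatever the input layer is actually named, and `model` is the trained QKeras model):

```python
import hls4ml

hls_config_q = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_config_q['LayerName']['input_1']['Precision'] = 'ap_fixed<32,1>'   # my guess for the input precision

hls_model_q = hls4ml.converters.convert_from_keras_model(
    model, hls_config=hls_config_q, output_dir='hls_prj')
hls_model_q.compile()
```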
If instead I'm meant to train the QKeras model with floating-point input and then set the input layer to `ap_fixed<32,1>` in the hls_config, I get a different error when I try to test the model with `hls_model_q.predict(np.ascontiguousarray(X_fixed))`: `Exception: Invalid type (int32) of numpy array. Supported types are: single, float32, double, float64, float_.` So I'm not sure how I should test the model before deploying, or whether this approach even works.
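Presumably I'd have to hand `predict()` floats, e.g. by converting the raw Q(1.31) words back to the real values first, but I'm not sure that's the right approach:

```python
import numpy as np

# X_fixed is my int32 array of raw Q(1.31) words
X_real = np.ascontiguousarray(X_fixed.astype(np.float64) / 2.0**31,
                              dtype=np.float32)
y_hls = hls_model_q.predict(X_real)
```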
I'm new to FPGAs and fixed point, so just knowing whether this is possible would be a great help. Also, do people normally convert the input to fixed point, or is that a strange thing to do?
Thanks