Releases: calad0i/HGQ
v0.2.3
v0.2.2
v0.2.1
v0.2.0.post1
Minor dependency requirement change: as tf 2.15 passes all tests, tf >=2.13, <2.16 is now allowed. tf 2.16 is not tested, but may work.
v0.2.0
Change logs:
- proper hls4ml integration through the proxy-model.
- bit-accurate emulation of hls4ml firmware, up to fp32 precision.
- zero-hassle conversion: what you see in the HGQ model is what you get, up to fp32 precision and overflow behavior.
- (experimental) support for nested models
- (limited) support for arbitrary unary activation functions
- many bug fixes here and there
Full Changelog: 0.1.3...v0.2.0
v0.2.0rc1
Bug fixes and doc update:
Fixed issues with converting the Signature layer into a proxy-model, the PConcatenate layer init check, a variable name typo, and bit-mismatches in some edge cases, plus other minor changes.
Restored support for Python 3.10, with better dependency requirements.
Full Changelog: v0.2.0b2...v0.2.0rc1
v0.2.0b2
0.2.0b1 - overhaul
- Use proxy-model for hls4ml conversion; no more monkey patches or file patches
- Bit-accurate emulation of hls4ml, with or without overflow
- Many bug fixes regarding inaccurate synthesis results
- Documentation
- Bump to python==3.11 and tensorflow==2.13
- QKeras -> proxy-model support for bit-accurate synthesis
- Much more...
v0.1.3
Minor fixes:
- Avoid overflow in some corner cases
- Avoid an unnecessarily large cover range for bounded activation functions when cover_factor is set
- Support JSON serialization of the model architecture