
Commit c62669a

pypi 1.5.2 release
2 parents 0c735a9 + 0c0e839 commit c62669a


75 files changed

+5731
-1244
lines changed

.travis.yml

Lines changed: 0 additions & 34 deletions
This file was deleted.

README.md

Lines changed: 21 additions & 10 deletions
Original file line number | Diff line number | Diff line change
@@ -1,22 +1,32 @@
1-
tf2onnx - convert TensorFlow models to ONNX models.
1+
tf2onnx - Convert TensorFlow models to ONNX.
22
========
33

4-
[![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build?definitionId=16&branchName=master)
4+
| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
5+
| --- | --- | --- | --- | --- | --- |
6+
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.5, 3.6 | 1.5-1.14 | 7-10 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
7+
| Unit Test - Full | Linux, MacOS, Windows | 3.5, 3.6, 3.7 | 1.5-1.14 | 7-10 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) |
8+
9+
<a name="build_status_footnote">\*</a> Only tested on python 3.6, TF 1.14.
510

611
# Supported ONNX version
712
tensorflow-onnx will use the ONNX version installed on your system and will install the latest ONNX version if none is found.
813

9-
We support opset 6 to 10. By default we use opset 7 for the resulting ONNX graph since most runtimes will support opset 7.
14+
We support opset 6 to 10. By default we use opset 7 for the resulting ONNX graph since most runtimes will support opset 7. Support for future opsets is added as they are released.
1015

11-
If you want the graph to be generated with a newer opset, use ```--opset``` in the command line, for example ```--opset 10```.
16+
If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 10```.
1217
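The ```--opset``` flag described above is passed alongside the normal conversion arguments. A minimal sketch (the file and tensor names are hypothetical placeholders):

```shell
# Convert a frozen TensorFlow graph, targeting ONNX opset 10.
# frozen_graph.pb, input:0 and output:0 are placeholder names.
python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs input:0 \
    --outputs output:0 \
    --opset 10 \
    --output model.onnx
```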

1318
# Status
14-
We support many TensorFlow models. Support for Fully Connected and Convolutional networks is mature. Dynamic LSTM/GRU/Attention networks should work but the code for this is evolving.
15-
A list of models that we use for testing can be found [here](tests/run_pretrained_models.yaml)
19+
We support many TensorFlow models. Support for Fully Connected, Convolutional and dynamic LSTM networks is mature.
20+
A list of models that we use for testing can be found [here](tests/run_pretrained_models.yaml).
1621

1722
Supported RNN classes and APIs: LSTMCell, BasicLSTMCell, GRUCell, GRUBlockCell, MultiRNNCell, and user defined RNN cells inheriting rnn_cell_impl.RNNCell, used along with DropoutWrapper, BahdanauAttention, AttentionWrapper.
1823
Check [tips](examples/rnn_tips.md) when converting RNN models.
1924

25+
You can find a list of supported Tensorflow ops and their mapping to ONNX [here](support_status.md).
26+
27+
Tensorflow has broad functionality, and occasionally mapping it to ONNX creates issues.
28+
We try to document the common issues we run into in the [Troubleshooting Guide](Troubleshooting.md).
29+
2030
# Prerequisites
2131

2232
## Install TensorFlow
@@ -41,7 +51,7 @@ For pytorch/caffe2, follow the instructions here:
4151
We tested with pytorch/caffe2 and onnxruntime and unit tests are passing for those.
4252

4353
## Supported Tensorflow and Python Versions
44-
We are testing with tensorflow 1.5-1.13 and anaconda **3.5,3.6,3.7**.
54+
We are testing with tensorflow 1.5-1.14 and anaconda **3.5,3.6,3.7**.
4555

4656
# Installation
4757
## From pypi
@@ -64,8 +74,10 @@ python setup.py bdist_wheel
6474

6575
# Usage
6676

67-
To convert a TensorFlow model, tf2onnx prefers a ```frozen TensorFlow graph``` and the user needs to specify inputs and outputs for the graph by passing the input and output
68-
names with ```--inputs INPUTS``` and ```--outputs OUTPUTS```.
77+
You can find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSDMobilenetToONNX.ipynb).
78+
79+
To convert a TensorFlow model, tf2onnx supports ```saved_model```, ```checkpoint``` or ```frozen graph``` formats. We recommend the ```saved_model``` format. If ```checkpoint``` or ```frozen graph``` formats are used, the user needs to specify inputs and outputs for the graph by passing the input and output
80+
names with ```--inputs INPUTS``` and ```--outputs OUTPUTS```.
6981

7082
```
7183
python -m tf2onnx.convert
@@ -108,7 +120,6 @@ the runtime may support custom ops that are not defined in onnx. A user can aske
108120
### --fold_const
109121
when set, TensorFlow fold_constants transformation will be applied before conversion. This will benefit features including Transpose optimization (e.g. Transpose operations introduced during tf-graph-to-onnx-graph conversion will be removed), and RNN unit conversion (for example LSTM). Older TensorFlow version might run into issues with this option depending on the model.
110122

111-
112123
Usage example (run following commands in tensorflow-onnx root directory):
113124
```
114125
python -m tf2onnx.convert\

Troubleshooting.md

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
1+
tf2onnx - common issues when converting models.
2+
========
3+
4+
## tensorflow op is not supported
5+
Example:
6+
7+
```ValueError: tensorflow op NonMaxSuppression is not supported```
8+
9+
means that the given tensorflow op is not mapped to ONNX. There are multiple possible reasons for this:
10+
11+
(1) we have not gotten around to implementing it yet. NonMaxSuppression is such an example: we implemented NonMaxSuppressionV2 and NonMaxSuppressionV3, but not the older NonMaxSuppression op.
12+
13+
To get this fixed you can open an issue or send us a PR with a fix.
14+
15+
(2) There is no direct mapping to ONNX.
16+
17+
Sometimes there is no direct mapping from tensorflow to ONNX. We took care of the most common cases, but for less frequently used ops there might be a mapping missing. To get this fixed there are a few options:
18+
19+
a) in tf2onnx you can compose the op out of different ops. A good example of this is the [Erf op](https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/onnx_opset/math.py#L317). Before opset-9, tf2onnx composes Erf from other ONNX ops.
20+
21+
b) You can request the missing op to be added to [ONNX](https://github.com/onnx/onnx). After it is added to ONNX and some runtime implements it, we'll add it to tf2onnx. You can see that this happened for the Erf op: starting with opset-9, ONNX added it, and tf2onnx no longer composes the op but instead passes it through to ONNX.
22+
23+
c) The op is too complex to compose and too exotic to add to ONNX. In that case you can use a custom op to implement it. Custom ops are documented in the [README](README.md) and there is an example [here](https://github.com/onnx/tensorflow-onnx/blob/master/examples/custom_op_via_python.py). There are 2 flavors:
24+
- you could compose the functionality by using multiple ONNX ops.
25+
- you can implement the op in your runtime as custom op (assuming that most runtimes do have such a mechanism) and then map it in tf2onnx as custom op.
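Option a), composing a missing op from simpler primitives, can be sketched outside of tf2onnx. The following is an illustration of the idea only, not tf2onnx's actual Erf decomposition: the Abramowitz and Stegun polynomial approximation builds erf using only operations that every opset already provides (Add, Mul, Div, Exp, Abs):

```python
import math

# Abramowitz & Stegun 7.1.26: erf(x) built from primitive
# operations (add, mul, div, exp); max error is about 1.5e-7.
def erf_composed(x: float) -> float:
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    # Horner evaluation of a1*t + a2*t^2 + ... + a5*t^5
    poly = t * (0.254829592
                + t * (-0.284496736
                       + t * (1.421413741
                              + t * (-1.453152027
                                     + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

print(round(erf_composed(1.0), 4))  # close to math.erf(1.0)
```

Each arithmetic step here corresponds to an ONNX node, which is exactly how a composed handler expresses an op the target opset lacks.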
26+
27+
## get tensor value: ... must be Const
28+
29+
There is a common group of errors that report ```get tensor value: ... must be Const```.
30+
The reason for this is that a tensorflow op takes a dynamic input where the equivalent ONNX op uses a static attribute. In other words, in tensorflow that input is only known at runtime, but in ONNX it needs to be known at graph creation time.
31+
32+
An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1): the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real values of begin and end of the slice and can find them in most cases. But if those are truly dynamic values calculated at runtime, it will result in the message ```get tensor value: ... must be Const```.
33+
34+
You can pass the option ```--fold_const``` on the tf2onnx command line, which allows tf2onnx to apply more aggressive constant folding and increases the chances of finding a constant.
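As a sketch of how the flag is used (model path and tensor names are hypothetical):

```shell
# Apply aggressive constant folding before conversion so that
# dynamic-looking inputs can be resolved to constants.
python -m tf2onnx.convert \
    --input frozen_graph.pb \
    --inputs input:0 \
    --outputs output:0 \
    --fold_const \
    --output model.onnx
```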
35+
36+
If this doesn't work, the model most likely cannot be converted to ONNX. We used to see a lot of issues like this with the ONNX Slice op, which was updated in opset-10 for exactly this reason.

VERSION_NUMBER

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
1-
1.5.1
1+
1.5.2

ci_build/azure_pipelines/coveragerc

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
1+
[paths]
2+
source =
3+
./
Lines changed: 36 additions & 12 deletions
@@ -1,14 +1,38 @@
11
# Test against latest onnxruntime nightly package
22

3-
jobs:
4-
- template: 'templates/job_generator.yml'
5-
parameters:
6-
tf_versions: ['1.13.1']
7-
onnx_opsets: ['']
8-
onnx_backends:
9-
onnxruntime: ['']
10-
job:
11-
steps:
12-
- template: 'unit_test.yml'
13-
parameters:
14-
onnx_opsets: ['10', '9', '8', '7']
3+
stages:
4+
- stage:
5+
jobs:
6+
- template: 'templates/job_generator.yml'
7+
parameters:
8+
platforms: ['linux', 'windows', 'mac']
9+
python_versions: ['3.6', '3.5']
10+
tf_versions: ['1.13.1','1.12', '1.11', '1.10', '1.9', '1.8', '1.7', '1.6', '1.5']
11+
onnx_opsets: ['']
12+
onnx_backends: {onnxruntime: ['nightly']}
13+
job:
14+
steps:
15+
- template: 'unit_test.yml'
16+
report_coverage: 'True'
17+
18+
- template: 'templates/job_generator.yml'
19+
parameters:
20+
platforms: ['linux', 'windows', 'mac']
21+
python_versions: ['3.7', '3.6', '3.5']
22+
tf_versions: ['1.14']
23+
onnx_opsets: ['']
24+
onnx_backends: {onnxruntime: ['nightly']}
25+
job:
26+
steps:
27+
- template: 'unit_test.yml'
28+
report_coverage: 'True'
29+
30+
- template: 'templates/combine_test_coverage.yml'
31+
32+
schedules:
33+
- cron: "0 10 * * *"
34+
displayName: Daily onnxruntime nightly unittest
35+
branches:
36+
include:
37+
- master
38+
always: true

ci_build/azure_pipelines/pretrained_model_test-matrix.yml

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ jobs:
55
parameters:
66
platforms: ['linux', 'windows', 'mac']
77
python_versions: ['3.6', '3.5']
8-
tf_versions: ['1.12', '1.11', '1.10', '1.9', '1.8', '1.7', '1.6', '1.5']
8+
tf_versions: ['1.13.1', '1.12', '1.11', '1.10', '1.9', '1.8', '1.7', '1.6', '1.5']
99
job:
1010
steps:
1111
- template: 'pretrained_model_test.yml'
@@ -14,7 +14,7 @@ jobs:
1414
parameters:
1515
platforms: ['linux', 'windows', 'mac']
1616
python_versions: ['3.7', '3.6', '3.5']
17-
tf_versions: ['1.13.1']
17+
tf_versions: ['1.14']
1818
job:
1919
steps:
2020
- template: 'pretrained_model_test.yml'

ci_build/azure_pipelines/pretrained_model_test.yml

Lines changed: 2 additions & 2 deletions
@@ -4,15 +4,15 @@ jobs:
44
- template: 'templates/job_generator.yml'
55
parameters:
66
python_versions: ['3.7', '3.6', '3.5']
7-
tf_versions: ['1.13.1']
7+
tf_versions: ['1.14.0']
88
job:
99
steps:
1010
- template: 'pretrained_model_test.yml'
1111

1212
- template: 'templates/job_generator.yml'
1313
parameters:
1414
platforms: ['windows', 'mac']
15-
tf_versions: ['1.13.1']
15+
tf_versions: ['1.14.0']
1616
job:
1717
steps:
1818
- template: 'pretrained_model_test.yml'

ci_build/azure_pipelines/pylint.yml

Lines changed: 1 addition & 1 deletion
@@ -10,5 +10,5 @@ jobs:
1010
set -ex
1111
pip install pylint
1212
pip freeze
13-
pylint --rcfile=tools/pylintrc --ignore=version.py --disable=cyclic-import tf2onnx tests/*.py tools
13+
pylint --rcfile=tools/pylintrc --ignore=version.py --disable=cyclic-import tf2onnx tests/*.py tools -j 0
1414
displayName: 'Pylint'
Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
1+
# combine and report unittest coverage
2+
3+
parameters:
4+
artifact_name: 'single_test_coverage'
5+
6+
stages:
7+
- stage:
8+
jobs:
9+
- job: 'combine_and_report_coverage'
10+
variables:
11+
CI_ARTIFACT_NAME: '${{ parameters.artifact_name }}'
12+
13+
pool:
14+
vmImage: 'ubuntu-16.04'
15+
16+
steps:
17+
- task: DownloadBuildArtifacts@0
18+
displayName: 'Download Single Test Coverage'
19+
inputs:
20+
artifactName: '${{ parameters.artifact_name }}'
21+
downloadPath: $(System.DefaultWorkingDirectory)
22+
23+
- task: CondaEnvironment@1
24+
inputs:
25+
createCustomEnvironment: 'true'
26+
environmentName: 'tf2onnx'
27+
packageSpecs: 'python=3.6'
28+
updateConda: 'false'
29+
30+
- bash: |
31+
pip install -U coverage
32+
condition: succeeded()
33+
displayName: 'Install Coverage'
34+
35+
- bash: |
36+
cat ${CI_ARTIFACT_NAME}/.coveragerc_paths* >> ci_build/azure_pipelines/coveragerc
37+
coverage combine --rcfile ci_build/azure_pipelines/coveragerc ${CI_ARTIFACT_NAME}
38+
coverage report
39+
coverage html -d ${BUILD_ARTIFACTSTAGINGDIRECTORY}/coverage_report
40+
condition: succeeded()
41+
displayName: 'Combine And Report Test Coverage'
42+
43+
- task: PublishBuildArtifacts@1
44+
condition: succeeded()
45+
inputs:
46+
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
47+
artifactName: 'test_coverage_report'
48+
displayName: 'Deploy Test Coverage Report'
