Commit b96f221

add document
Squashed development history:

- lint (x2)
- save (x2)
- add more case
- save error
- lint (x2)
- commit
- do lint
- save
- fix lint
- wrap it back as func
- lint
- save
- remove dead comment
- fix style
- fix lint
- Update src/relay/pass/partial_eval.cc (x6; Co-Authored-By: MarisaKirisame <[email protected]>)
- address review feedback
- pe now handles freevars; as a result, preserving functions is now trivial
- test
- add basic test, implement pretty printing for generic function
- test
- lint
- fix segfault
- save (x2)
- do test
- fix another error
- address comment
- commit
- save
- address review feedback
- add test for invalidate, fix error in lookup
- rename cont to body
- fix error and add regression test
- Update src/relay/pass/partial_eval.cc (Co-Authored-By: MarisaKirisame <[email protected]>)
- fix error, add test case
- fix lint
- remove extra line
- fix some error
- pe commit
- save (x4)
- save (pe/dce broken)

Upstream commits merged in:

- [DOCKER] Pin flatbuffers checkout to the last release tag (apache#2823). (apache#2879)
- [Relay][Text Format] Reverse CallNode Print Order (apache#2882)
- [NNPACK] Modernize test (apache#2868)
- [Relay] Add list update to prelude (apache#2866)
- Add missing sgx includes (apache#2878)
- Fix setting up hints for getaddrinfo (apache#2872)
- [ARITH] RewriteSimplifier: improved cmp simplification (apache#2851)
- do (apache#2883)
- [RELAY][Frontend][TF] decompile tf control flow (apache#2830): decompile tf control flow; add docs; remove import relay; move tests under tensorflow frontend; minor fix
- Enhance upsample operator to adapt onnx opset version 9 (apache#2840)
- Use version invariant rustfmt (apache#2886)
- [Relay][Op] Add group conv2d dispatch to topi function (apache#2870): add group conv2d dispatch to topi function; rerun tests
- [Apps] [howto_deploy] fix cxx-flags order and build directory (apache#2888)
- fix prelu, now can use on 2d input and add one test (apache#2875)
- Add dense schedules to __init__ for cpu (apache#2855): add dense schedules to __init__ for cpu; add documentation for topi::shape; add additional imports to topi CPU __init__
- [TESTS] Improve script robustness (apache#2893): a number of test scripts use the '|| exit 1' idiom. This has two issues: first, process exit codes are defined to be in the range 0-255; second, and more importantly, the idiom is fragile because it requires that every possible failure point be explicitly coded. This patch removes the idiom in favour of "set -e", as used in the docker scripts, as a more robust mechanism to ensure that script failures are always caught and propagated by default.
- [Relay] Fix name of bias in testing.mlp (apache#2892)
- winograd_nnpack (apache#2721)
- [Relay] Fix Relay ARM CPU depthwise spatial pack schedule alter op layout issue (apache#2861): fix Relay ARM CPU spatial pack depthwise alter op layout issue; update tune_relay_arm.py
- [TESTS] Import script robustness (set -u) (apache#2896): adopt the "set -u" idiom from the docker scripts as a mechanism to improve future robustness.
- [DOCKER] Upgrade ci-cpu to latest v0.50 (apache#2901)
- Allow linking against MKLML (apache#2902)
- [COMMUNITY] ASF mentors (apache#2906)
- [Relay] Allow converting keras.layers.Sequential (apache#2842): allow converting keras.layers.Sequential; use existing new_var function; only update expr when missing; add test
- [Relay] clean up hd, change tl (apache#2917)
- Turn on USE_SORT by default (apache#2916)
- [TEST] Cache test data (apache#2921)
- Unified error handling in NNVM and Relay frontends (apache#2828)
- add support for mxnet smooth_l1 (apache#2905)
- [Relay] Add support for TupleGetItem in op fusion (apache#2914)
- [Relay, TOPI] Deformable conv2d (apache#2908): add deformable conv2d; moved to op level2; fix lint; moved to level2 & bug fix; update comments; disabled flaky test of conv2d
- TVM debugresult dump to Chrome Tracing (apache#2922)
- [Relay] add test for second order ad (apache#2754): do second order; add comment; better name; use tvm assert all close; refire ci
- Revert "[Relay] add test for second order ad (apache#2754)" (apache#2926): this reverts commit f5ca991.
- [Tutorial] Cache the test data in tutorial (apache#2923)
- [AUTOTVM] Refactor measure build func (apache#2927)
- Fix intersect of modular set (apache#2904): fix comment bugs and code style
- [Relay, OpFusion] Fix handling TupleGetItem for nested tuples (apache#2929)
- Consistent result of DetectLinearEquation() when an empty vars is passed (apache#2860)
- [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay (apache#2850): test cases; ci error
- Outdated renaming for flatten in ONNX converter (apache#2843)
- [FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models (apache#2864): review comments
- Fix vcvtph2ps codegen (apache#2925)
- Port changes
- More fixes
- save (x2)
- Changes to schedules and mxnet importer
1 parent 46f0b67 commit b96f221

File tree

159 files changed: +6396 / -1324 lines

CONTRIBUTORS.md

Lines changed: 12 additions & 1 deletion

@@ -1,10 +1,21 @@
 TVM Contributors
 ================
-TVM adopts the Apache style model and governs by merit. We believe that it is important to create an inclusive community where everyone can use,
+TVM adopts the Apache way and governs by merit. We believe that it is important to create an inclusive community where everyone can use,
 contribute to, and influence the direction of the project. We actively invite contributors who have earned the merit to be part of the development community.
 
 See the [community structure document](http://docs.tvm.ai/contribute/community.html) for the explanation of community structure and contribution guidelines.
 
+## Mentors
+
+TVM is now part of the Apache Incubator.
+We are fortunate to have the following mentors.
+
+- Markus Weimer @markusweimer
+- Sebastian Schelter @sscdotopen
+- Byung-Gon Chun @bgchun
+- Henry Saputra @hsaputra
+- Timothy Chen @tnachen
+- Furkan KAMACI @kamaci
 
 ## Committers

Jenkinsfile

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@
 //
 ci_lint = "tvmai/ci-lint:v0.50"
 ci_gpu = "tvmai/ci-gpu:v0.51"
-ci_cpu = "tvmai/ci-cpu:v0.41"
+ci_cpu = "tvmai/ci-cpu:v0.50"
 ci_i386 = "tvmai/ci-i386:v0.50"
 
 // tvm libraries

apps/howto_deploy/Makefile

Lines changed: 1 addition & 1 deletion

@@ -31,4 +31,4 @@ lib/cpp_deploy_pack: cpp_deploy.cc lib/test_addone_sys.o lib/libtvm_runtime_pack
 # Deploy using pre-built libtvm_runtime.so
 lib/cpp_deploy_normal: cpp_deploy.cc lib/test_addone_sys.o
 	@mkdir -p $(@D)
-	$(CXX) $(PKG_CFLAGS) -o $@ $^ $(PKG_LDFLAGS) -ltvm_runtime
+	$(CXX) $(PKG_CFLAGS) -o $@ $^ -ltvm_runtime $(PKG_LDFLAGS)

apps/howto_deploy/run_example.sh

Lines changed: 2 additions & 2 deletions

@@ -3,8 +3,8 @@ echo "Build the libraries.."
 mkdir -p lib
 make
 echo "Run the example"
-export LD_LIBRARY_PATH=../../lib:${LD_LIBRARY_PATH}
-export DYLD_LIBRARY_PATH=../../lib:${DYLD_LIBRARY_PATH}
+export LD_LIBRARY_PATH=../../build:${LD_LIBRARY_PATH}
+export DYLD_LIBRARY_PATH=../../build:${DYLD_LIBRARY_PATH}
 
 echo "Run the deployment with all in one packed library..."
 lib/cpp_deploy_pack

cmake/config.cmake

Lines changed: 1 addition & 1 deletion

@@ -127,7 +127,7 @@ set(USE_MPS OFF)
 set(USE_ROCBLAS OFF)
 
 # Whether use contrib sort
-set(USE_SORT OFF)
+set(USE_SORT ON)
 
 # Build ANTLR parser for Relay text format
 set(USE_ANTLR OFF)

cmake/modules/SGX.cmake

Lines changed: 2 additions & 0 deletions

@@ -48,4 +48,6 @@ if(NOT USE_SGX STREQUAL "OFF")
     -L${USE_SGX}/lib64 -l${_urts_lib}
     -L${RUST_SGX_SDK}/sgx_ustdc -lsgx_ustdc)
   list(APPEND RUNTIME_SRCS ${RUNTIME_SGX_SRCS})
+
+  include_directories(${RUST_SGX_SDK}/edl ${RUST_SGX_SDK}/common)
 endif()

cmake/modules/contrib/BLAS.cmake

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ elseif(USE_BLAS STREQUAL "mkl")
   if(NOT IS_DIRECTORY ${USE_MKL_PATH})
     set(USE_MKL_PATH /opt/intel/mkl)
   endif()
-  find_library(BLAS_LIBRARY mkl_rt ${USE_MKL_PATH}/lib/ ${USE_MKL_PATH}/lib/intel64)
+  find_library(BLAS_LIBRARY NAMES mkl_rt mklml_gnu HINTS ${USE_MKL_PATH}/lib/ ${USE_MKL_PATH}/lib/intel64)
   include_directories(${USE_MKL_PATH}/include)
   list(APPEND TVM_RUNTIME_LINKER_LIBS ${BLAS_LIBRARY})
   list(APPEND RUNTIME_SRCS ${CBLAS_CONTRIB_SRC})

docker/install/ubuntu_install_rust.sh

Lines changed: 3 additions & 5 deletions

@@ -9,12 +9,10 @@ apt-get update && apt-get install -y --no-install-recommends curl
 export RUSTUP_HOME=/opt/rust
 export CARGO_HOME=/opt/rust
 # this rustc is one supported by the installed version of rust-sgx-sdk
-curl -s -S -L https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain nightly-2019-01-28
+curl -s -S -L https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain nightly-2019-03-24
 . $CARGO_HOME/env
-rustup component add rust-src
-cargo install sccache
-cargo install rustfmt-nightly --version 1.0.1 --force
-cargo install xargo
+rustup component add rustfmt
+cargo install sccache --no-default-features
 
 # make rust usable by all users
 chmod -R a+w /opt/rust

docker/install/ubuntu_install_tflite.sh

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ set -u
 set -o pipefail
 
 # Download, build and install flatbuffers
-git clone --depth=1 --recursive https://github.com/google/flatbuffers.git
+git clone --branch=v1.10.0 --depth=1 --recursive https://github.com/google/flatbuffers.git
 cd flatbuffers
 cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release
 make install -j8

include/tvm/relay/attrs/nn.h

Lines changed: 79 additions & 0 deletions

@@ -155,6 +155,24 @@ struct Conv2DWinogradAttrs : public tvm::AttrsNode<Conv2DWinogradAttrs> {
   }
 };
 
+/*! \brief Attributes used in winograd weight transformation operators */
+struct Conv2DWinogradNNPACKWeightTransformAttrs
+    : public tvm::AttrsNode<Conv2DWinogradNNPACKWeightTransformAttrs> {
+  int convolution_algorithm;
+  DataType out_dtype;
+
+  TVM_DECLARE_ATTRS(Conv2DWinogradNNPACKWeightTransformAttrs,
+                    "relay.attrs.Conv2DWinogradNNPACKWeightTransformAttrs") {
+    TVM_ATTR_FIELD(convolution_algorithm)
+        .describe(
+            "The convolution algorithm for Winograd NNPACK. "
+            "E.g. tvm.contrib.nnpack.ConvolutionAlgorithm.WT_8x8 for WT_8x8, "
+            "tvm.contrib.nnpack.ConvolutionAlgorithm.WT_8x8_FP16 for WT_8x8_FP16");
+    TVM_ATTR_FIELD(out_dtype)
+        .set_default(NullValue<DataType>())
+        .describe("Output data type, set to explicit type under mixed precision setting");
+  }
+};
 
 /*! \brief Attributes used in softmax operators */
 struct SoftmaxAttrs : public tvm::AttrsNode<SoftmaxAttrs> {

@@ -438,6 +456,67 @@ struct L2NormalizeAttrs : public tvm::AttrsNode<L2NormalizeAttrs> {
   }
 };
 
+
+/*! \brief Attributes for DeformableConv2D operator */
+struct DeformableConv2DAttrs : public tvm::AttrsNode<DeformableConv2DAttrs> {
+  Array<IndexExpr> strides;
+  Array<IndexExpr> padding;
+  Array<IndexExpr> dilation;
+  int deformable_groups;
+  int groups;
+  IndexExpr channels;
+  Array<IndexExpr> kernel_size;
+  std::string data_layout;
+  std::string kernel_layout;
+  std::string out_layout;
+  DataType out_dtype;
+
+  TVM_DECLARE_ATTRS(DeformableConv2DAttrs, "relay.attrs.DeformableConv2DAttrs") {
+    TVM_ATTR_FIELD(strides).set_default(Array<IndexExpr>({1, 1}))
+        .describe("Specifies the strides of the convolution.");
+    TVM_ATTR_FIELD(padding).set_default(Array<IndexExpr>({0, 0}))
+        .describe("If padding is non-zero, then the input is implicitly zero-padded "
+                  "on both sides for padding number of points");
+    TVM_ATTR_FIELD(dilation).set_default(Array<IndexExpr>({1, 1}))
+        .describe("Specifies the dilation rate to use for dilated convolution.");
+    TVM_ATTR_FIELD(deformable_groups).set_default(1)
+        .describe("Controls the connections between inputs and offsets. "
+                  "Input channels are partitioned into multiple deformable groups. Offsets "
+                  "are shared across input channels in the same deformable group.");
+    TVM_ATTR_FIELD(groups).set_default(1)
+        .describe("Controls the connections between inputs and outputs. "
+                  "At groups=1, all inputs are convolved to all outputs. "
+                  "At groups=2, the operation becomes equivalent to having two convolution "
+                  "layers side by side, each seeing half the input channels, and producing "
+                  "half the output channels, and both subsequently concatenated.");
+    TVM_ATTR_FIELD(channels)
+        .describe("The number of output channels in the convolution. "
+                  "If it is not set, inferred by shape of the weight.")
+        .set_default(NullValue<IndexExpr>());
+    TVM_ATTR_FIELD(kernel_size)
+        .describe("Specifies the dimensions of the convolution window.")
+        .set_default(NullValue<Array<IndexExpr> >());
+    TVM_ATTR_FIELD(data_layout).set_default("NCHW")
+        .describe("Dimension ordering of input data. Can be 'NCHW', 'NHWC', etc. "
+                  "'N', 'C', 'H', 'W' stand for batch, channel, height, and width "
+                  "dimensions respectively. Convolution is applied on the 'H' and "
+                  "'W' dimensions.");
+    TVM_ATTR_FIELD(kernel_layout).set_default("OIHW")
+        .describe("Dimension ordering of weight. Can be 'OIHW', 'OIHW16o16i', etc. "
+                  "'O', 'I', 'H', 'W' stand for num_filter, input_channel, height, and width "
+                  "dimensions respectively.");
+    TVM_ATTR_FIELD(out_layout).set_default("")
+        .describe("Dimension ordering of output. Can be 'NCHW', 'NHWC', etc. "
+                  "'N', 'C', 'H', 'W' stand for batch, channel, height, and width "
+                  "dimensions respectively. Default to be same as input layout.");
+
+    // use 0 bits to indicate none.
+    TVM_ATTR_FIELD(out_dtype)
+        .set_default(NullValue<DataType>())
+        .describe("Output data type, set to explicit type under mixed precision setting");
+  }
+};
+
 }  // namespace relay
 }  // namespace tvm
 #endif  // TVM_RELAY_ATTRS_NN_H_
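
As a hedged illustration of how such attribute structs are consumed (this sketch is not part of the commit; the function name and checks below are assumptions, only the attrs.as<T>() downcast idiom and the field names come from the header above):

// Hypothetical consumer of DeformableConv2DAttrs, e.g. inside a type relation.
#include <tvm/relay/attrs/nn.h>

bool CheckDeformableConv2DAttrs(const tvm::Attrs& attrs) {
  const auto* param = attrs.as<tvm::relay::DeformableConv2DAttrs>();
  CHECK(param != nullptr);
  // Offsets are shared within each deformable group, so both group counts
  // must be positive for the channel partitioning to make sense.
  return param->groups >= 1 && param->deformable_groups >= 1;
}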

include/tvm/relay/expr.h

Lines changed: 40 additions & 1 deletion

@@ -166,6 +166,26 @@ class VarNode : public ExprNode {
 
 RELAY_DEFINE_NODE_REF(Var, VarNode, Expr);
 
+/*! \brief Hash Var by its id.
+ * Different VarNodes might have the same vid; in that case they are considered the same var.
+ * Use VarHash to hash Var by id.
+ */
+struct VarHash {
+  size_t operator()(const Var& v) const {
+    return v->vid.hash();
+  }
+};
+
+/*! \brief Compare Var by its id.
+ * Different VarNodes might have the same vid; in that case they are considered the same var.
+ * Use VarEqual to compare Var by id.
+ */
+struct VarEqual {
+  bool operator()(const Var& l, const Var& r) const {
+    return l->vid.get() == r->vid.get();
+  }
+};
+
 /*!
  * \brief Global variable that lives in the top-level module.
  * This is used to enable recursive calls between functions.
@@ -503,7 +523,7 @@ RELAY_DEFINE_NODE_REF(RefWrite, RefWriteNode, Expr);
 * rewriting pass such as layout or type transformation.
 *
 * Subclass TempExprNode allows us to pattern match on
-* specific kind TempExpr and use them for expression rewriting.
+* specific kind of TempExpr and use them for expression rewriting.
 *
 * TempExpr should only be used within a pass,
 */
@@ -521,6 +541,25 @@ class TempExprNode : public ExprNode {
 
 RELAY_DEFINE_NODE_REF(TempExpr, TempExprNode, Expr);
 
+class Annotate;
+class AnnotateNode : public ExprNode {
+ public:
+  Expr expr;
+  NodeRef annotation;
+  void VisitAttrs(tvm::AttrVisitor* v) final {
+    v->Visit("expr", &expr);
+    v->Visit("annotation", &annotation);
+    v->Visit("_checked_type_", &checked_type_);
+  }
+
+  TVM_DLL static Annotate make(Expr expr, NodeRef annotation);
+
+  static constexpr const char* _type_key = "relay.AnnotateNode";
+  TVM_DECLARE_NODE_TYPE_INFO(AnnotateNode, ExprNode);
+};
+
+RELAY_DEFINE_NODE_REF(Annotate, AnnotateNode, Expr);
+
 // implementations
 inline const Type& ExprNode::checked_type() const {
   CHECK(checked_type_.defined()) << "internal error: the type checker has "
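
A brief usage sketch of the new functors (assumed for illustration, not taken from the diff): because VarHash and VarEqual key on the underlying vid, standard containers can treat distinct Var handles that share a vid as one variable.

#include <unordered_map>
#include <tvm/relay/expr.h>

// Hypothetical use-counter keyed by variable identity rather than by handle.
std::unordered_map<tvm::relay::Var, int,
                   tvm::relay::VarHash, tvm::relay::VarEqual> use_count;

void RecordUse(const tvm::relay::Var& v) {
  // VarHash hashes v->vid; VarEqual compares the vid pointers, so two Var
  // handles sharing a vid increment the same entry.
  ++use_count[v];
}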

include/tvm/relay/expr_functor.h

Lines changed: 5 additions & 0 deletions

@@ -71,6 +71,7 @@ class ExprFunctor<R(const Expr& n, Args...)> {
   * \return The result of the call
   */
  virtual R VisitExpr(const Expr& n, Args... args) {
+    CHECK(n.defined());
    static FType vtable = InitVTable();
    return vtable(n, this, std::forward<Args>(args)...);
  }
@@ -97,6 +98,7 @@ class ExprFunctor<R(const Expr& n, Args...)> {
  virtual R VisitExpr_(const RefWriteNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
  virtual R VisitExpr_(const ConstructorNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
  virtual R VisitExpr_(const MatchNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
+  virtual R VisitExpr_(const AnnotateNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
  virtual R VisitExprDefault_(const Node* op, Args...) {
    throw Error(std::string("Do not have a default for ") + op->type_key());
  }
@@ -121,6 +123,7 @@ class ExprFunctor<R(const Expr& n, Args...)> {
   RELAY_EXPR_FUNCTOR_DISPATCH(RefWriteNode);
   RELAY_EXPR_FUNCTOR_DISPATCH(ConstructorNode);
   RELAY_EXPR_FUNCTOR_DISPATCH(MatchNode);
+  RELAY_EXPR_FUNCTOR_DISPATCH(AnnotateNode);
   return vtable;
 }
@@ -151,6 +154,7 @@ class ExprVisitor
  void VisitExpr_(const RefWriteNode* op) override;
  void VisitExpr_(const ConstructorNode* op) override;
  void VisitExpr_(const MatchNode* op) override;
+  void VisitExpr_(const AnnotateNode* op) override;
  virtual void VisitType(const Type& t);
  virtual void VisitClause(const Clause& c);
  virtual void VisitPattern(const Pattern& c);
@@ -193,6 +197,7 @@ class ExprMutator
  Expr VisitExpr_(const RefWriteNode* op) override;
  Expr VisitExpr_(const ConstructorNode* op) override;
  Expr VisitExpr_(const MatchNode* op) override;
+  Expr VisitExpr_(const AnnotateNode* op) override;
 
  /*!
  * \brief Used to visit the types inside of expressions.
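
For illustration (an assumed example, not from the commit): a pass that needs special behavior for the new AnnotateNode overrides the corresponding hook, exactly as for the existing node kinds.

#include <tvm/relay/expr_functor.h>

// Hypothetical visitor that counts annotations while walking an expression.
class AnnotationCounter : public tvm::relay::ExprVisitor {
 public:
  int count{0};
  void VisitExpr_(const tvm::relay::AnnotateNode* op) override {
    ++count;              // handle the annotation itself
    VisitExpr(op->expr);  // then continue into the wrapped expression
  }
};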

include/tvm/relay/pass.h

Lines changed: 26 additions & 8 deletions

@@ -46,7 +46,7 @@
 #include <tvm/relay/module.h>
 #include <tvm/relay/op_attr_types.h>
 #include <tvm/relay/type.h>
-
+#include <tvm/relay/adt.h>
 #include <string>
 #include <vector>
 
@@ -326,6 +326,17 @@ TVM_DLL bool WellFormed(const Expr& expr);
 */
 TVM_DLL tvm::Array<Var> BoundVars(const Expr& expr);
 
+/*! \brief Get all bound variables from pattern pat.
+ *
+ * Bound variables are all variables that get bound by the pattern.
+ * They only have meaning inside the matched expression, and can only be used in it.
+ *
+ * \param pat the Pattern.
+ *
+ * \return List of bound vars, in the PostDFS order in the expression.
+ */
+TVM_DLL tvm::Array<Var> BoundVars(const Pattern& pat);
+
 /*! \brief Get free type parameters from expression expr.
 *
 * Free variables are variables that are not bound by a
@@ -413,12 +424,13 @@ TVM_DLL tvm::Array<TypeVar> AllTypeVars(const Type& t, const Module& mod);
 
 /*! \brief Remove expressions which do not affect the program result.
 *
-* It will remove let bindings which are not referenced, and branches that will
-* not be entered.
+* It will remove let bindings which are not referenced,
+* and inline let bindings that are only used once.
 *
-* For example, this pass should turn `let a = 1 in 2` into `2`, as the value of
-* the expression does not depend on a. Another example is `if (true) then 1
-* else 2` will be optimized into 1.
+* For example, this pass should turn `let a = 1 in 2` into `2`,
+* as the value of the expression does not depend on a.
+*
+* As another example, `let a = 1 in a` will be optimized into 1.
 *
 * \param e the expression to optimize.
 *
@@ -527,7 +539,7 @@ struct StructuralHash {
 *
 * \return expression in A-Normal Form
 */
-Expr ToANormalForm(const Expr& e, const Module& mod);
+TVM_DLL Expr ToANormalForm(const Expr& e, const Module& mod);
 
 /*! \brief Remove let binding and directly share via pointer instead.
 *
@@ -538,8 +550,14 @@ Expr ToANormalForm(const Expr& e, const Module& mod);
 *
 * \return the expression in graph normal form.
 */
-Expr ToGraphNormalForm(const Expr& e);
+TVM_DLL Expr ToGraphNormalForm(const Expr& e);
 
+/*! \brief Aggressive constant propagation/constant folding/inlining.
+ * It will do as much computation at compile time as possible.
+ * It has two benefits: it removes runtime overhead, and it allows more optimization (typically fusion).
+ * As a side effect, code size will explode.
+ */
+Expr PartialEval(const Expr& e, const Module& mod);
 } // namespace relay
 } // namespace tvm
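
A hedged sketch of the pass's intended effect (the example expression and the folded result are assumptions based on the doc comment above, not on the diff):

// Suppose e is the Relay expression `let x = 1 + 2 in x + x` inside module mod.
// PartialEval should do the arithmetic at compile time, so the result is
// expected to be equivalent to the constant 6; free variables, which the pass
// also handles, would simply remain in place.
tvm::relay::Expr folded = tvm::relay::PartialEval(e, mod);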

include/tvm/relay/pattern_functor.h

Lines changed: 1 addition & 0 deletions

@@ -71,6 +71,7 @@ class PatternFunctor<R(const Pattern& n, Args...)> {
   * \return The result of the call
   */
  virtual R VisitPattern(const Pattern& n, Args... args) {
+    CHECK(n.defined());
    static FType vtable = InitVTable();
    return vtable(n, this, std::forward<Args>(args)...);
  }

nnvm/include/nnvm/top/nn.h

Lines changed: 20 additions & 0 deletions

@@ -183,6 +183,26 @@ struct WinogradWeightTransformParam : public dmlc::Parameter<WinogradWeightTrans
   static const constexpr int kWeight = 0;
 };
 
+struct WinogradNNPACKWeightTransformParam
+    : public dmlc::Parameter<WinogradNNPACKWeightTransformParam> {
+  int convolution_algorithm;
+  int out_dtype;
+
+  DMLC_DECLARE_PARAMETER(WinogradNNPACKWeightTransformParam) {
+    DMLC_DECLARE_FIELD(convolution_algorithm)
+        .describe(
+            "The convolution algorithm for Winograd NNPACK. "
+            "E.g. tvm.contrib.nnpack.ConvolutionAlgorithm.WT_8x8 for WT_8x8, "
+            "tvm.contrib.nnpack.ConvolutionAlgorithm.WT_8x8_FP16 for WT_8x8_FP16");
+    DMLC_DECLARE_DTYPE_FIELD(out_dtype)
+        .add_enum("same", -1)
+        .set_default(-1)
+        .describe("Output data type, set to explicit type under mixed precision setting");
+  }
+
+  static const constexpr int kWeight = 0;
+};
+
 struct WinogradConv2DParam : public dmlc::Parameter<WinogradConv2DParam> {
   int channels;
   TShape kernel_size;
