
Commit 28ed334
update comment for mcunetv3 reference
1 parent: 950162f

125 files changed (+125 -125 lines)

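The same one-line substitution appears in all 125 files, which suggests a scripted bulk edit. The sketch below is hypothetical (the commit does not record how the change was made) and shows one way such an update could be applied; the `old`/`new` strings are taken verbatim from the diff, while the `demo/` tree is a stand-in for a real TinyEngine checkout.

```shell
# Hypothetical bulk-edit sketch; the actual command used is not part of the commit.
old='MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472'
new='MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022'

# Demo tree standing in for the real TinyEngine checkout.
mkdir -p demo/TinyEngine/include
printf ' * - %s\n' "$old" > demo/TinyEngine/include/sample.h

# Find every file containing the old reference and rewrite it in place (GNU sed).
grep -rl "$old" demo/TinyEngine | while read -r f; do
  sed -i "s|$old|$new|" "$f"
done
```

After the loop runs, `demo/TinyEngine/include/sample.h` carries the NeurIPS 2022 reference and no longer mentions the arXiv identifier. Note that BSD/macOS `sed` would need `sed -i ''` instead of `sed -i`.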

TinyEngine/include/arm_nnfunctions_modified.h (+1 -1)
@@ -26,7 +26,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/detectionUtility.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/fp_requantize_op.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/img2col_element.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/kernel_element.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/mutable_function.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/precision_cnt.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/profile.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/tinyengine_lib.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/include/yoloOutput.h (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/add_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/convolve_1x1_s8_ch16_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/convolve_1x1_s8_ch24_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/convolve_1x1_s8_ch48_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/convolve_1x1_s8_ch8_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/convolve_1x1_s8_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/convolve_s8_kernel3_inputch3_stride2_pad1_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/fp_requantize_op/mat_mul_kernels_fpreq.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/add.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/arm_convolve_s8_4col.c (+1 -1)
@@ -26,7 +26,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/arm_nn_mat_mult_kernel3_input3_s8_s16.c (+1 -1)
@@ -26,7 +26,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/arm_nn_mat_mult_kernel_s8_s16_reordered_8mul.c (+1 -1)
@@ -26,7 +26,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/arm_nn_mat_mult_kernel_s8_s16_reordered_oddch.c (+1 -1)
@@ -26,7 +26,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/avgpooling.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/concat_ch.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_SRAM.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_ch16.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_ch24.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_ch48.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_ch8.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_kbuf.c (+1 -1)
@@ -6,7 +6,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_oddch.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_1x1_s8_skip_pad.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_s8_kernel2x3_inputch3_stride2_pad1.c (+1 -1)
@@ -6,7 +6,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_s8_kernel3_inputch3_stride2_pad1.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_s8_kernel3_stride1_pad1.c (+1 -1)
@@ -6,7 +6,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_s8_kernel3x2_inputch3_stride2_pad1.c (+1 -1)
@@ -6,7 +6,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_u8_kernel3_inputch3_stride1_pad1.c (+1 -1)
@@ -6,7 +6,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/convolve_u8_kernel3_inputch3_stride2_pad1.c (+1 -1)
@@ -6,7 +6,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/element_mult.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/fully_connected.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/mat_mul_fp.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]

TinyEngine/src/kernels/int_forward_op/mat_mult_kernels.c (+1 -1)
@@ -5,7 +5,7 @@
  * Reference papers:
  * - MCUNet: Tiny Deep Learning on IoT Device, NeurIPS 2020
  * - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
- * - MCUNetV3: On-Device Training Under 256KB Memory, arXiv:2206.15472
+ * - MCUNetV3: On-Device Training Under 256KB Memory, NeurIPS 2022
  * Contact authors:
  * - Wei-Ming Chen, [email protected]
  * - Wei-Chen Wang, [email protected]
