From ab6df7c70387877876ed82992c9ceae548771567 Mon Sep 17 00:00:00 2001
From: synandi <98147397+synandi@users.noreply.github.com>
Date: Mon, 28 Nov 2022 18:33:39 +0530
Subject: [PATCH] Update comprehensive_guide.ipynb
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The broken link to "enable pruning to improve latency" has been fixed in line 726. Typos at lines 590 and 390 are fixed.
---
 .../g3doc/guide/pruning/comprehensive_guide.ipynb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb b/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb
index d674f47f0..423b7e767 100644
--- a/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb
+++ b/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb
@@ -387,7 +387,7 @@
         "1. Prune a custom Keras layer\n",
         "2. Modify parts of a built-in Keras layer to prune.\n",
         "\n",
-        "For an example, the API defaults to only pruning the kernel of the\n",
+        "For example, the API defaults to only pruning the kernel of the\n",
         "`Dense` layer. The example below prunes the bias also.\n"
       ]
     },
@@ -587,7 +587,7 @@
         "\n",
         "* Have a learning rate that's not too high or too low when the model is pruning. Consider the [pruning schedule](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/PruningSchedule) to be a hyperparameter.\n",
         "\n",
-        "* As a quick test, try experimenting with pruning a model to the final sparsity at the begining of training by setting `begin_step` to 0 with a `tfmot.sparsity.keras.ConstantSparsity` schedule. You might get lucky with good results.\n",
+        "* As a quick test, try experimenting with pruning a model to the final sparsity at the beginning of training by setting `begin_step` to 0 with a `tfmot.sparsity.keras.ConstantSparsity` schedule. You might get lucky with good results.\n",
         "\n",
         "* Do not prune very frequently to give the model time to recover. The [pruning schedule](https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras/PruningSchedule) provides a decent default frequency.\n",
         "\n",
@@ -723,7 +723,7 @@
         "id": "yqk0jI49c1mw"
       },
      "source": [
-        "Once different backends [enable pruning to improve latency]((https://github.com/tensorflow/model-optimization/issues/173)), using block sparsity can improve latency for certain hardware.\n",
+        "Once different backends [enable pruning to improve latency](https://www.tensorflow.org/model_optimization/guide/pruning), using block sparsity can improve latency for certain hardware.\n",
         "\n",
         "Increasing the block size will decrease the peak sparsity that's achievable for a target model accuracy. Despite this, latency can still improve.\n",
         "\n",
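
For reviewers, the "quick test" referenced in the second hunk (pruning to the final sparsity from the very start of training via a ConstantSparsity schedule with begin_step=0) could be set up roughly as follows. This is a minimal sketch, not code from the notebook: the model architecture, the 0.8 target sparsity, and the optimizer settings are illustrative assumptions.

    # Sketch of the "quick test": apply the final sparsity from step 0
    # using a ConstantSparsity schedule. Values below are illustrative.
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    base_model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])

    pruning_params = {
        'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(
            target_sparsity=0.8,  # assumed final sparsity, applied immediately
            begin_step=0,         # start pruning at the very first step
            frequency=100),       # how often pruning masks are updated
    }

    model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(
        base_model, **pruning_params)

    model_for_pruning.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'])

    # Pass tfmot.sparsity.keras.UpdatePruningStep() as a callback when
    # calling model_for_pruning.fit(...) so the pruning step is advanced.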