diff --git a/docs.json b/docs.json
index 7e403b4b..18202340 100644
--- a/docs.json
+++ b/docs.json
@@ -475,7 +475,7 @@
"anchors": [
{
"anchor": "Console",
- "href": "https://runpod.io/console/",
+ "href": "https://console.runpod.io/",
"icon": "code"
},
{
@@ -500,7 +500,7 @@
"links": [
{
"label": "Sign up",
- "href": "https://runpod.io/console/signup"
+ "href": "https://console.runpod.io/signup"
}
],
"primary": {
diff --git a/get-started.mdx b/get-started.mdx
index a2f90760..3f209cad 100644
--- a/get-started.mdx
+++ b/get-started.mdx
@@ -10,7 +10,7 @@ If you're new to Runpod, follow this guide to learn how to create an account, de
Start by creating a Runpod account to access GPU Pods and Serverless compute resources:
-1. [Sign up here](https://www.runpod.io/console/signup).
+1. [Sign up here](https://www.console.runpod.io/signup).
2. Verify your email address.
3. Set up two-factor authentication (recommended for security).
@@ -18,7 +18,7 @@ Start by creating a Runpod account to access GPU Pods and Serverless compute res
Now that you've created your account, you're ready to deploy your first Pod:
-1. Open the [Pods page](https://www.runpod.io/console/pods) in the web interface.
+1. Open the [Pods page](https://www.console.runpod.io/pods) in the web interface.
2. Click the **Deploy** button.
3. Select **RTX 2000 Ada** from the list of graphics cards.
@@ -40,7 +40,7 @@ If you haven't set up payments yet, you'll be prompted to add a payment method a
After your Pod has finished starting up (this may take a minute or two), you can connect to it:
-1. On the [Pods page](https://www.runpod.io/console/pods), find the Pod you just created and click the **Connect** button. If it's greyed out, your Pod hasn't finished starting up yet.
+1. On the [Pods page](https://www.console.runpod.io/pods), find the Pod you just created and click the **Connect** button. If it's greyed out, your Pod hasn't finished starting up yet.
@@ -62,7 +62,7 @@ Congratulations! You just ran your first line of code using Runpod.
To avoid incurring unnecessary charges, make sure to:
-1. Return to the [Pods page](https://www.runpod.io/console/pods).
+1. Return to the [Pods page](https://www.console.runpod.io/pods).
2. Click the **Stop button** (square icon) to stop your Pod.
3. Confirm by clicking the **Stop Pod** button.
diff --git a/get-started/api-keys.mdx b/get-started/api-keys.mdx
index 527adc5d..209345f7 100644
--- a/get-started/api-keys.mdx
+++ b/get-started/api-keys.mdx
@@ -14,7 +14,7 @@ Legacy API keys generated before November 11, 2024 have either Read/Write or Rea
Follow these steps to create a new Runpod API key:
-1. In the Runpod console, navigate to the [Settings page](https://www.runpod.io/console/user/settings).
+1. In the Runpod console, navigate to the [Settings page](https://www.console.runpod.io/user/settings).
2. Expand the **API Keys** section and select **Create API Key**.
@@ -37,7 +37,7 @@ Runpod does not store your API key, so you may wish to save it elsewhere (e.g.,
To edit an API key:
-1. Navigate to the [Settings page](https://www.runpod.io/console/user/settings).
+1. Navigate to the [Settings page](https://www.console.runpod.io/user/settings).
2. Under **API Keys**, select the pencil icon for the key you wish to update
3. Update the key with your desired permissions, then select **Update**.
@@ -45,7 +45,7 @@ To edit an API key:
To enable/disable an API key:
-1. Navigate to the [Settings page](https://www.runpod.io/console/user/settings).
+1. Navigate to the [Settings page](https://www.console.runpod.io/user/settings).
2. Under **API Keys**, select the toggle for the API key you wish to enable/disable, then select **Yes** in the confirmation modal.
## Delete an API key
diff --git a/get-started/connect-to-runpod.mdx b/get-started/connect-to-runpod.mdx
index 84728def..e9f05059 100644
--- a/get-started/connect-to-runpod.mdx
+++ b/get-started/connect-to-runpod.mdx
@@ -9,7 +9,7 @@ Runpod offers multiple ways to connect and manage your compute resources. Choose
The Runpod console provides an intuitive web interface to launch and manage Pods, monitor resource usage, access Pod terminals, and view billing and usage history.
-[Launch the Runpod console →](https://www.runpod.io/console)
+[Launch the Runpod console →](https://www.console.runpod.io)
## REST API
diff --git a/get-started/manage-accounts.mdx b/get-started/manage-accounts.mdx
index 609dad53..b198a662 100644
--- a/get-started/manage-accounts.mdx
+++ b/get-started/manage-accounts.mdx
@@ -6,7 +6,7 @@ To access Runpod resources, you'll need to either create your own account or joi
## Create an account
-Sign up for an account at [Runpod.io](https://www.runpod.io/console/signup).
+Sign up for an account on the [Runpod console](https://www.console.runpod.io/signup).
### Convert personal account to a team account
@@ -79,6 +79,6 @@ Full control over the account, ideal for administrators.
## Audit logs
-Runpod includes audit logs to help you understand which actions were used. Go to the [Audit logs](https://www.runpod.io/console/user/audit-logs) settings.
+Runpod includes audit logs to help you review which actions were performed on your account. Go to the [Audit logs](https://www.console.runpod.io/user/audit-logs) page in your settings.
You can view and filter the audit logs by date range, user, resource, resource ID, and action.
diff --git a/hosting/burn-testing.mdx b/hosting/burn-testing.mdx
index 3de13317..a70e2f78 100644
--- a/hosting/burn-testing.mdx
+++ b/hosting/burn-testing.mdx
@@ -24,4 +24,4 @@ When everything is verified okay, start the Runpod agent again by running
sudo systemctl start runpod
```
-Then, on your [machine dashboard](https://www.runpod.io/console/host/machines), self rent your machine to ensure it's working well with most popular templates.
+Then, on your [machine dashboard](https://www.console.runpod.io/host/machines), self-rent your machine to ensure it works well with the most popular templates.
diff --git a/hub/overview.mdx b/hub/overview.mdx
index 35c548d5..b734141b 100644
--- a/hub/overview.mdx
+++ b/hub/overview.mdx
@@ -49,7 +49,7 @@ Whether you're a veteran developer who wants to share your work or a newcomer ex
You can deploy a repo from the Hub in seconds:
-1. Navigate to the [Hub page](https://www.runpod.io/console/hub) in the Runpod console.
+1. Navigate to the [Hub page](https://www.console.runpod.io/hub) in the Runpod console.
2. Browse the collection and select a repo that matches your needs.
3. Review the repo details, including hardware requirements and available configuration options to ensure compatibility with your use case.
4. Click the **Deploy** button in the top-right of the repo page. You can also use the dropdown menu to deploy an older version.
diff --git a/hub/publishing-guide.mdx b/hub/publishing-guide.mdx
index 2c56e48f..4fc992ab 100644
--- a/hub/publishing-guide.mdx
+++ b/hub/publishing-guide.mdx
@@ -5,7 +5,7 @@ description: "Publish your repositories to the Runpod Hub."
tag: "NEW"
---
-Learn how to publish a GitHub repository to the [Runpod Hub](https://www.runpod.io/console/hub), including how to configure your repository with the required `hub.json` and `tests.json` files.
+Learn how to publish your repositories to the [Runpod Hub](https://www.console.runpod.io/hub), including how to configure your repository with the required `hub.json` and `tests.json` files.
@@ -15,15 +15,10 @@ Learn how to publish a GitHub repository to the [Runpod Hub](https://www.runpod.
Follow these steps to add your repository to the Hub:
-1. Navigate to the [Hub page](https://www.runpod.io/console/hub) in the Runpod console.
-2. Under **Add your repo** click **Get Started**. If you haven't linked your GitHub account to Runpod, you'll be prompted to do so.
-
- To publish a repository to the Hub, you must have appropriate GitHub access permissions:
- - For personal repositories: Owner or collaborator.
- - For GitHub organizations: Write, maintain, or admin (or an equivelant custom role). See: [Repository roles](https://docs.github.com/en/enterprise-cloud@latest/organizations/managing-user-access-to-your-organizations-repositories/managing-repository-roles/repository-roles-for-an-organization) for more details.
-
-3. Enter your GitHub repository URL.
-4. Follow the guided steps in the interface to add your repository to the Hub.
+1. Navigate to the [Hub page](https://www.console.runpod.io/hub) in the Runpod console.
+2. Under **Add your repo**, click **Get Started**.
+3. Enter your GitHub repo URL.
+4. Follow the UI steps to add your repo to the Hub.
The Hub page will guide you through the following steps:
diff --git a/instant-clusters/axolotl.mdx b/instant-clusters/axolotl.mdx
index 52605066..7bef1354 100644
--- a/instant-clusters/axolotl.mdx
+++ b/instant-clusters/axolotl.mdx
@@ -9,7 +9,7 @@ Follow the steps below to deploy a cluster and start training your models effici
## Step 1: Deploy an Instant Cluster
-1. Open the [Instant Clusters page](https://www.runpod.io/console/cluster) on the Runpod web interface.
+1. Open the [Instant Clusters page](https://www.console.runpod.io/cluster) on the Runpod web interface.
2. Click **Create Cluster**.
3. Use the UI to name and configure your Cluster. For this walkthrough, keep **Pod Count** at **2** and select the option for **16x H100 SXM** GPUs. Keep the **Pod Template** at its default setting (Runpod PyTorch).
4. Click **Deploy Cluster**. You should be redirected to the Instant Clusters page after a few seconds.
@@ -90,11 +90,11 @@ Congrats! You've successfully trained a model using Axolotl on an Instant Cluste
## Step 4: Clean up
-If you no longer need your cluster, make sure you return to the [Instant Clusters page](https://www.runpod.io/console/cluster) and delete your cluster to avoid incurring extra charges.
+If you no longer need your cluster, make sure you return to the [Instant Clusters page](https://www.console.runpod.io/cluster) and delete your cluster to avoid incurring extra charges.
-You can monitor your cluster usage and spending using the **Billing Explorer** at the bottom of the [Billing page](https://www.runpod.io/console/user/billing) section under the **Cluster** tab.
+You can monitor your cluster usage and spending using the **Billing Explorer** at the bottom of the [Billing page](https://www.console.runpod.io/user/billing), under the **Cluster** tab.
diff --git a/instant-clusters/pytorch.mdx b/instant-clusters/pytorch.mdx
index d05b7987..182f1664 100644
--- a/instant-clusters/pytorch.mdx
+++ b/instant-clusters/pytorch.mdx
@@ -9,7 +9,7 @@ Follow the steps below to deploy a cluster and start running distributed PyTorch
## Step 1: Deploy an Instant Cluster
-1. Open the [Instant Clusters page](https://www.runpod.io/console/cluster) on the Runpod web interface.
+1. Open the [Instant Clusters page](https://www.console.runpod.io/cluster) on the Runpod web interface.
2. Click **Create Cluster**.
3. Use the UI to name and configure your Cluster. For this walkthrough, keep **Pod Count** at **2** and select the option for **16x H100 SXM** GPUs. Keep the **Pod Template** at its default setting (Runpod PyTorch).
4. Click **Deploy Cluster**. You should be redirected to the Instant Clusters page after a few seconds.
@@ -118,11 +118,11 @@ This diagram illustrates how local and global ranks are distributed across multi
## Step 5: Clean up
-If you no longer need your cluster, make sure you return to the [Instant Clusters page](https://www.runpod.io/console/cluster) and delete your cluster to avoid incurring extra charges.
+If you no longer need your cluster, make sure you return to the [Instant Clusters page](https://www.console.runpod.io/cluster) and delete your cluster to avoid incurring extra charges.
-You can monitor your cluster usage and spending using the **Billing Explorer** at the bottom of the [Billing page](https://www.runpod.io/console/user/billing) section under the **Cluster** tab.
+You can monitor your cluster usage and spending using the **Billing Explorer** at the bottom of the [Billing page](https://www.console.runpod.io/user/billing), under the **Cluster** tab.
diff --git a/instant-clusters/slurm.mdx b/instant-clusters/slurm.mdx
index 675bc8a8..4eaace34 100644
--- a/instant-clusters/slurm.mdx
+++ b/instant-clusters/slurm.mdx
@@ -9,13 +9,13 @@ Follow the steps below to deploy a cluster and start running distributed SLURM w
## Requirements
-* You've created a [Runpod account](https://www.runpod.io/console/home) and funded it with sufficient credits.
+* You've created a [Runpod account](https://www.console.runpod.io/home) and funded it with sufficient credits.
* You have basic familiarity with Linux command line.
* You're comfortable working with [Pods](/pods/overview) and understand the basics of [SLURM](https://slurm.schedmd.com/).
## Step 1: Deploy an Instant Cluster
-1. Open the [Instant Clusters page](https://www.runpod.io/console/cluster) on the Runpod web interface.
+1. Open the [Instant Clusters page](https://www.console.runpod.io/cluster) on the Runpod web interface.
2. Click **Create Cluster**.
3. Use the UI to name and configure your cluster. For this walkthrough, keep **Pod Count** at **2** and select the option for **16x H100 SXM** GPUs. Keep the **Pod Template** at its default setting (Runpod PyTorch).
4. Click **Deploy Cluster**. You should be redirected to the Instant Clusters page after a few seconds.
@@ -120,11 +120,11 @@ Check the output file created by the test (`test_simple_[JOBID].out`) and look f
## Step 8: Clean up
-If you no longer need your cluster, make sure you return to the [Instant Clusters page](https://www.runpod.io/console/cluster) and delete your cluster to avoid incurring extra charges.
+If you no longer need your cluster, make sure you return to the [Instant Clusters page](https://www.console.runpod.io/cluster) and delete your cluster to avoid incurring extra charges.
-You can monitor your cluster usage and spending using the **Billing Explorer** at the bottom of the [Billing page](https://www.runpod.io/console/user/billing) section under the **Cluster** tab.
+You can monitor your cluster usage and spending using the **Billing Explorer** at the bottom of the [Billing page](https://www.console.runpod.io/user/billing), under the **Cluster** tab.
diff --git a/integrations/mods.mdx b/integrations/mods.mdx
index 8a5dbe1b..f5970c49 100644
--- a/integrations/mods.mdx
+++ b/integrations/mods.mdx
@@ -15,7 +15,7 @@ To start using Mods, follow these step-by-step instructions:
1. **Obtain Your API Key**:
- * Visit the [Runpod Settings](https://www.runpod.io/console/user/settings) page to retrieve your API key.
+ * Visit the [Runpod Settings](https://www.console.runpod.io/user/settings) page to retrieve your API key.
* If you haven't created an account yet, you'll need to sign up before obtaining the key.
2. **Install Mods**:
diff --git a/integrations/skypilot.mdx b/integrations/skypilot.mdx
index cd850fda..c48d63be 100644
--- a/integrations/skypilot.mdx
+++ b/integrations/skypilot.mdx
@@ -11,7 +11,7 @@ This integration leverages the Runpod CLI infrastructure, streamlining the proce
To begin using Runpod with SkyPilot, follow these steps:
-1. **Obtain Your API Key**: Visit the [Runpod Settings](https://www.runpod.io/console/user/settings) page to get your API key. If you haven't created an account yet, you'll need to do so before obtaining the key.
+1. **Obtain Your API Key**: Visit the [Runpod Settings](https://www.console.runpod.io/user/settings) page to get your API key. If you haven't created an account yet, you'll need to do so before obtaining the key.
2. **Install Runpod**: Use the following command to install the latest version of Runpod:
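A minimal sketch of that setup, assuming the `runpod` package on PyPI and its `runpod config` helper (verify the exact pinned version against the current SkyPilot documentation):

```sh
# Install the Runpod Python package, then store your API key for SkyPilot to use
pip install "runpod>=1.5.1"
runpod config   # paste the API key from your Runpod settings when prompted
```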
diff --git a/pods/configuration/expose-ports.mdx b/pods/configuration/expose-ports.mdx
index 1956e666..cdb6e47c 100644
--- a/pods/configuration/expose-ports.mdx
+++ b/pods/configuration/expose-ports.mdx
@@ -14,7 +14,7 @@ This means that uvicorn would be listening on all interfaces on port 4000. Let's
### Through Runpod's Proxy
-In this case, you would want to make sure that the port you want to expose (4000 in this case) is set on the [Template](https://www.runpod.io/console/user/templates) or [Pod](https://www.runpod.io/console/pods) configuration page. You can see here that I have added 4000 to the HTTP port list in my pod config. You can also do this on your template definition.
+In this case, you would want to make sure that the port you want to expose (4000 here) is set on the [Template](https://www.console.runpod.io/user/templates) or [Pod](https://www.console.runpod.io/pods) configuration page. You can see here that I have added 4000 to the HTTP port list in my pod config. You can also do this on your template definition.
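For reference, a server bound that way (assuming a FastAPI app object named `app` in `main.py`) would be started with:

```sh
# Listen on all interfaces on port 4000 so Runpod's proxy can reach the app
uvicorn main:app --host 0.0.0.0 --port 4000
```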
diff --git a/pods/configuration/use-ssh.mdx b/pods/configuration/use-ssh.mdx
index 7c928389..02a139fa 100644
--- a/pods/configuration/use-ssh.mdx
+++ b/pods/configuration/use-ssh.mdx
@@ -17,7 +17,7 @@ The basic terminal SSH access that Runpod exposes is not a full SSH connection a
-2. Add your public key to your [Runpod user settings](https://www.runpod.io/console/user/settings).
+2. Add your public key to your [Runpod user settings](https://www.console.runpod.io/user/settings).
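If you don't have a key pair yet, a typical way to generate one and print the public key to paste into your settings (the email comment is a placeholder) is:

```sh
# Generate an ed25519 key pair, then print the public half to copy into Runpod settings
ssh-keygen -t ed25519 -C "your_email@example.com"
cat ~/.ssh/id_ed25519.pub
```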
diff --git a/pods/connect-to-a-pod.mdx b/pods/connect-to-a-pod.mdx
index a405ad6b..893033de 100644
--- a/pods/connect-to-a-pod.mdx
+++ b/pods/connect-to-a-pod.mdx
@@ -61,7 +61,7 @@ To connect using this method:
ssh-ed25519 AAAAC4NzaC1lZDI1JTE5AAAAIGP+L8hnjIcBqUb8NRrDiC32FuJBvRA0m8jLShzgq6BQ YOUR_EMAIL@DOMAIN.COM
```
-3. Copy and paste the output into the **SSH Public Keys** field in your [Runpod user account settings](https://www.runpod.io/console/user/settings).
+3. Copy and paste the output into the **SSH Public Keys** field in your [Runpod user account settings](https://www.console.runpod.io/user/settings).
4. To get the SSH command for your Pod, navigate to the [Pods page](https://console.runpod.io/pods) in the Runpod console.
5. Expand your Pod and select **Connect**.
6. Select the SSH tab. Copy the command listed under **SSH**. It should look something like this:
diff --git a/pods/manage-pods.mdx b/pods/manage-pods.mdx
index fd5bc276..989e21c7 100644
--- a/pods/manage-pods.mdx
+++ b/pods/manage-pods.mdx
@@ -19,7 +19,7 @@ runpodctl config --apiKey [RUNPOD_API_KEY]
To create a Pod using the Runpod console:
-1. Open the [Pods page](https://www.runpod.io/console/pods) in the Runpod console and click the **Deploy** button.
+1. Open the [Pods page](https://www.console.runpod.io/pods) in the Runpod console and click the **Deploy** button.
2. (Optional) Specify a [network volume](/pods/storage/create-network-volumes) if you need to share data between multiple Pods, or to save data for later use.
3. Select **GPU** or **CPU** using the buttons in the top-left corner of the window, and follow the configuration steps below.
@@ -86,7 +86,7 @@ After a Pod is stopped, you will still be charged for its [disk volume](/pods/st
To stop a Pod:
-1. Open the [Pods page](https://www.runpod.io/console/pods).
+1. Open the [Pods page](https://www.console.runpod.io/pods).
2. Find the Pod you want to stop and expand it.
3. Click the **Stop button** (square icon).
4. Confirm by clicking the **Stop Pod** button.
@@ -142,7 +142,7 @@ Pods start as soon as they are created, but you can resume a Pod that has been s
To start a Pod:
-1. Open the [Pods page](https://www.runpod.io/console/pods).
+1. Open the [Pods page](https://www.console.runpod.io/pods).
2. Find the Pod you want to start and expand it.
3. Click the **Start** button (play icon).
@@ -171,7 +171,7 @@ Terminating a Pod permanently deletes all associated data that isn't stored in a
To terminate a Pod:
-1. Open the [Pods page](https://www.runpod.io/console/pods).
+1. Open the [Pods page](https://www.console.runpod.io/pods).
2. Find the Pod you want to terminate and expand it.
3. [Stop the Pod](#stop-a-pod) if it's running.
4. Click the **Terminate** button (trash icon).
@@ -198,7 +198,7 @@ runpodctl remove pods my-bulk-task --podCount 40
## List Pods
-You can find a list of all your Pods on the [Pods page](https://www.runpod.io/console/pods) of the web interface.
+You can find a list of all your Pods on the [Pods page](https://www.console.runpod.io/pods) of the web interface.
If you're using the CLI, use the following command to list your Pods:
diff --git a/pods/networking.mdx b/pods/networking.mdx
index 55e5388e..15e8aa1d 100644
--- a/pods/networking.mdx
+++ b/pods/networking.mdx
@@ -14,7 +14,7 @@ Global networking is currently only available on NVIDIA GPU Pods.
**Enable global networking**
-1. Go to [Pods](https://www.runpod.io/console/pods) section and select **+ Deploy**.
+1. Go to the [Pods](https://www.console.runpod.io/pods) section and select **+ Deploy**.
2. Toggle the **Global Networking** to select Pods that have global networking enabled.
3. Configure your GPUs and select **Deploy**.
diff --git a/pods/pricing.mdx b/pods/pricing.mdx
index e8e11d81..bf47438c 100644
--- a/pods/pricing.mdx
+++ b/pods/pricing.mdx
@@ -16,10 +16,10 @@ Runpod offers multiple flexible pricing options for Pods, designed to accommodat
All Pods are billed by the second for compute and storage, with no additional fees for data ingress or egress. Every Pod has an hourly cost based on its [GPU type](/references/gpu-types) or CPU configuration, and your Runpod credits are charged for the Pod every second it is active.
-You can find the hourly cost of a specific GPU configuration on the [Runpod console](https://www.runpod.io/console/pods) during Pod deployment.
+You can find the hourly cost of a specific GPU configuration on the [Runpod console](https://www.console.runpod.io/pods) during Pod deployment.
-If your account balance is projected to cover less than 10 seconds of remaining run time for your active Pods, Runpod will pre-emptively stop all your Pods. This is to ensure your account retains a small balance, which can help preserve your data volumes. If your balance is completely drained, all Pods are subject to deletion at the discretion of the Runpod system. We highly recommend setting up [automatic payments](https://www.runpod.io/console/user/billing) to avoid service interruptions.
+If your account balance is projected to cover less than 10 seconds of remaining run time for your active Pods, Runpod will pre-emptively stop all your Pods. This is to ensure your account retains a small balance, which can help preserve your data volumes. If your balance is completely drained, all Pods are subject to deletion at the discretion of the Runpod system. We highly recommend setting up [automatic payments](https://www.console.runpod.io/user/billing) to avoid service interruptions.
## Pricing options
@@ -112,7 +112,7 @@ Consider your workload's sensitivity to interruptions, your budget, the expected
You can select your preferred pricing model directly from the Runpod console when configuring and deploying a new Pod.
-1. Open the [Pods page](https://www.runpod.io/console/pods) in the Runpod console and select **Deploy**.
+1. Open the [Pods page](https://www.console.runpod.io/pods) in the Runpod console and select **Deploy**.
2. Configure your Pod (see [Create a Pod](/pods/manage-pods#create-a-pod)).
@@ -148,4 +148,4 @@ When you [stop a Pod](/pods/manage-pods#stop-a-pod), you will no longer be charg
## Tracking costs and savings plans
-You can monitor your active savings plans, including their associated Pods, commitment periods, and expiration dates, by visiting the dedicated [Savings plans](https://www.runpod.io/console/savings-plans) section in your Runpod console. General Pod usage and billing can be tracked through the [Billing section](https://www.runpod.io/console/user/billing).
+You can monitor your active savings plans, including their associated Pods, commitment periods, and expiration dates, by visiting the dedicated [Savings plans](https://www.console.runpod.io/savings-plans) section in your Runpod console. General Pod usage and billing can be tracked through the [Billing section](https://www.console.runpod.io/user/billing).
diff --git a/pods/storage/create-network-volumes.mdx b/pods/storage/create-network-volumes.mdx
index dd0baf54..27de9b4e 100644
--- a/pods/storage/create-network-volumes.mdx
+++ b/pods/storage/create-network-volumes.mdx
@@ -35,7 +35,7 @@ Consider using a network volume when you need:
To create a new network volume:
-1. Navigate to the [Storage page](https://www.runpod.io/console/user/storage) in the Runpod console.
+1. Navigate to the [Storage page](https://www.console.runpod.io/user/storage) in the Runpod console.
2. Select **New Network Volume**.
3. **Configure your volume:**
* Select a datacenter for your volume. Datacenter location does not affect pricing, but the datacenter location will determine which GPU types your network volume can be used with.
@@ -50,7 +50,7 @@ To create a new network volume:
4. Select **Create Network Volume**.
-You can edit and delete your network volumes using the [Storage page](https://www.runpod.io/console/user/storage).
+You can edit and delete your network volumes using the [Storage page](https://www.console.runpod.io/user/storage).
## Attach a network volume to a Pod
@@ -62,7 +62,7 @@ Network volumes must be attached during Pod deployment. They cannot be attached
To deploy a Pod with a network volume attached:
-1. Navigate to the [Pods page](https://www.runpod.io/console/pods).
+1. Navigate to the [Pods page](https://www.console.runpod.io/pods).
2. Select **Deploy**.
3. Select **Network Volume** and select the network volume you want to attach to the Pod from the dropdown list.
4. Select a GPU type. The system will automatically tell you which Pods are available to use with the selected network volume.
diff --git a/pods/templates/manage-templates.mdx b/pods/templates/manage-templates.mdx
index 794075c0..6e854333 100644
--- a/pods/templates/manage-templates.mdx
+++ b/pods/templates/manage-templates.mdx
@@ -4,9 +4,9 @@ title: "Manage Pod Templates"
## Explore Templates
-You can explore Templates managed by Runpod and Community Templates in the **[Explore](https://www.runpod.io/console/explore)** section of the Web interface.
+You can explore Templates managed by Runpod and Community Templates in the **[Explore](https://www.console.runpod.io/explore)** section of the Web interface.
-You can explore Templates managed by you or your team in the **[Templates](https://www.runpod.io/console/user/templates)** section of the Web interface.
+You can explore Templates managed by you or your team in the **[Templates](https://www.console.runpod.io/user/templates)** section of the Web interface.
Learn to create your own Template in the following section.
diff --git a/pods/templates/secrets.mdx b/pods/templates/secrets.mdx
index dd6d67ce..d651a93d 100644
--- a/pods/templates/secrets.mdx
+++ b/pods/templates/secrets.mdx
@@ -8,7 +8,7 @@ You can add Secrets to your Pods and templates. Secrets are encrypted strings of
You can create a Secret using the Runpod Web interface or the Runpod API.
-1. Login into the Runpod Web interface and select [Secrets](https://www.runpod.io/console/user/secrets).
+1. Log in to the Runpod Web interface and select [Secrets](https://www.console.runpod.io/user/secrets).
2. Choose **Create Secret** and provide the following:
@@ -30,7 +30,7 @@ Once a Secret is created, its value cannot be viewed. If you need to change the
You can modify an existing Secret using the Runpod Web interface.
-1. Login into the Runpod Web interface and select [Secrets](https://www.runpod.io/console/user/secrets).
+1. Log in to the Runpod Web interface and select [Secrets](https://www.console.runpod.io/user/secrets).
2. Select the name of the Secret you want to modify.
3. Select the configuration icon and choose **Edit Secret Value**.
@@ -41,7 +41,7 @@ You can modify an existing Secret using the Runpod Web interface.
You can view the details of an existing Secret using the Runpod Web interface. You can't view the Secret Value.
-1. Login into the Runpod Web interface and select [Secrets](https://www.runpod.io/console/user/secrets).
+1. Log in to the Runpod Web interface and select [Secrets](https://www.console.runpod.io/user/secrets).
2. Select the name of the Secret you want to view.
3. Select the configuration icon and choose **View Secret**.
@@ -69,7 +69,7 @@ Alternatively, you can select your Secret from the Web interface when creating o
You can delete an existing Secret using the Runpod Web interface.
-1. Login into the Runpod Web interface and select [Secrets](https://www.runpod.io/console/user/secrets).
+1. Log in to the Runpod Web interface and select [Secrets](https://www.console.runpod.io/user/secrets).
2. Select the name of the Secret you want to delete.
3. Select the configuration icon and choose **Delete Secret**.
4. Enter the name of the Secret to confirm deletion.
diff --git a/references/faq.mdx b/references/faq.mdx
index b9f1121a..e387c57f 100644
--- a/references/faq.mdx
+++ b/references/faq.mdx
@@ -5,7 +5,7 @@ sidebarTitle: "Overview"
## Secure Cloud vs Community Cloud
-Runpod provides two cloud computing services: [Secure Cloud](https://www.runpod.io/console/gpu-secure-cloud) and [Community Cloud.](https://www.runpod.io/console/gpu-cloud)
+Runpod provides two cloud computing services: [Secure Cloud](https://www.console.runpod.io/gpu-secure-cloud) and [Community Cloud](https://www.console.runpod.io/gpu-cloud).
**Secure Cloud** runs in T3/T4 data centers by our trusted partners. Our close partnership comes with high-reliability with redundancy, security, and fast response times to mitigate any downtimes. For any sensitive and enterprise workloads, we highly recommend Secure Cloud.
@@ -55,7 +55,7 @@ All billing, including per-hour compute and storage billing, is charged per minu
Every Pod has an hourly cost based on GPU type. Your Runpod credits are charged for the Pod every minute as long as the Pod is running. If you ever run out of credits, your Pods will be automatically stopped, and you will get an email notification. Eventually, Pods will be terminated if you don't refill your credit. **We pre-emptively stop all of your Pods if you get down to 10 minutes of remaining run time. This gives your account enough balance to keep your data volumes around in the case you need access to your data. Please plan accordingly.**
-Once a balance has been completely drained, all pods are subject to deletion at the discretion of the service. An attempt will be made to hold the pods for as long as possible, but this should not be relied upon! We highly recommend setting up [automatic payments](https://www.runpod.io/console/user/billing) to ensure balances are automatically topped up as needed.
+Once a balance has been completely drained, all pods are subject to deletion at the discretion of the service. An attempt will be made to hold the pods for as long as possible, but this should not be relied upon! We highly recommend setting up [automatic payments](https://www.console.runpod.io/user/billing) to ensure balances are automatically topped up as needed.
diff --git a/references/faq/manage-cards.mdx b/references/faq/manage-cards.mdx
index cbafa86d..c189eb52 100644
--- a/references/faq/manage-cards.mdx
+++ b/references/faq/manage-cards.mdx
@@ -6,7 +6,7 @@ Runpod is a US-based organization that serves clients all across the world. Howe
**Keep your balance topped up**
-To avoid any potential issues with your balance being overrun, it's best to refresh your balance at least a few days before you're due to run out so you have a chance to address any last minute delays. Also be aware that there is an option to automatically refresh your balance when you run low under the Billing [page](https://www.runpod.io/console/user/billing):
+To avoid any issues with your balance being overrun, it's best to top it up at least a few days before you expect it to run out, so you have time to address any last-minute delays. Also be aware that you can automatically refresh your balance when it runs low on the [Billing page](https://www.console.runpod.io/user/billing):
diff --git a/references/referrals.mdx b/references/referrals.mdx
index a4b9846e..4dd20baa 100644
--- a/references/referrals.mdx
+++ b/references/referrals.mdx
@@ -52,7 +52,7 @@ The Template Program allows users to earn a percentage of the money spent by use
## How to Participate
-1. Access your [referral dashboard](https://www.runpod.io/console/user/referrals).
+1. Access your [referral dashboard](https://www.console.runpod.io/user/referrals).
2. Locate your unique referral link. For example, `https://runpod.io?ref=5t99c9je`.
3. Share your referral link with potential users.
diff --git a/runpodctl/install-runpodctl.mdx b/runpodctl/install-runpodctl.mdx
index 85d6c12d..1c91f7cc 100644
--- a/runpodctl/install-runpodctl.mdx
+++ b/runpodctl/install-runpodctl.mdx
@@ -40,7 +40,7 @@ This installs Runpod CLI globally on your system, so you can run `runpodctl` com
Before you can use `runpodctl`, you must configure it with an [API key](/get-started/api-keys). Follow these steps to create a new API key:
-1. In the web interface, go to the [Settings page](https://www.runpod.io/console/user/settings).
+1. In the web interface, go to the [Settings page](https://www.console.runpod.io/user/settings).
2. Expand the **API Keys** section and click the **Create API Key** button.
3. Give your key a name and set its permissions. If you want to [manage Pods](/runpodctl/manage-pods) locally, your key will need **READ/WRITE** permissions (or **ALL**).
4. Click **Create**, then click on your newly-generated key to copy it to your clipboard.
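With the key copied, configuring the CLI is a single command (replace the placeholder with your actual key):

```sh
# Store the API key locally so runpodctl can authenticate your requests
runpodctl config --apiKey [RUNPOD_API_KEY]
```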
diff --git a/runpodctl/overview.mdx b/runpodctl/overview.mdx
index c4c4f811..a8ce1137 100644
--- a/runpodctl/overview.mdx
+++ b/runpodctl/overview.mdx
@@ -62,7 +62,7 @@ wget https://github.com/runpod/runpodctl/releases/latest/download/runpodctl-wind
## Configure your API key
-Before you can use Runpod CLI to manage resources from your local machine, you'll need to configure your [API key](/get-started/api-keys). You can create and manage API keys on the [Runpod account settings page](https://www.runpod.io/console/user/settings).
+Before you can use Runpod CLI to manage resources from your local machine, you'll need to configure your [API key](/get-started/api-keys). You can create and manage API keys on the [Runpod account settings page](https://www.console.runpod.io/user/settings).
After installing `runpodctl` on your local system, run this command to configure it with your API key:
diff --git a/sdks/graphql/manage-pods.mdx b/sdks/graphql/manage-pods.mdx
index 0bda68c6..5ea6c7a2 100644
--- a/sdks/graphql/manage-pods.mdx
+++ b/sdks/graphql/manage-pods.mdx
@@ -4,7 +4,7 @@ title: "Manage Pods"
## Authentication
-Runpod uses API Keys for all API requests. Go to [Settings](https://www.runpod.io/console/user/settings) to manage your API keys.
+Runpod uses API Keys for all API requests. Go to [Settings](https://www.console.runpod.io/user/settings) to manage your API keys.
## GraphQL API spec
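As a hedged illustration of key-based authentication (the `https://api.runpod.io/graphql` URL and the query shape are assumptions based on Runpod's public GraphQL examples; the spec below is authoritative):

```sh
# List your Pods via the GraphQL API, authenticating with an API key
curl --request POST \
  --header 'content-type: application/json' \
  --url "https://api.runpod.io/graphql?api_key=${RUNPOD_API_KEY}" \
  --data '{"query": "query { myself { pods { id name } } }"}'
```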
diff --git a/serverless/development/test-response-times.mdx b/serverless/development/test-response-times.mdx
index e3928062..d4baf490 100644
--- a/serverless/development/test-response-times.mdx
+++ b/serverless/development/test-response-times.mdx
@@ -18,7 +18,7 @@ The URLs to use in the API will be shown in the My APIs screen:
-On reqbin.com, enter the Run URL of your API, select POST under the dropdown, and enter your API key that was given when you created the key under [Settings](https://www.runpod.io/console/serverless/user/settings)(if you do not have it saved, you will need to return to Settings and create a new key). Under Content, you will also need to give it a basic command (in this example, we've used a Stable Diffusion prompt).
+On reqbin.com, enter the Run URL of your API, select POST from the dropdown, and enter the API key that was shown when you created it under [Settings](https://www.console.runpod.io/serverless/user/settings) (if you do not have it saved, you will need to return to Settings and create a new key). Under Content, you will also need to give it a basic command (in this example, we've used a Stable Diffusion prompt).
diff --git a/serverless/endpoints/job-states.mdx b/serverless/endpoints/job-states.mdx
index fd0f2227..f611f566 100644
--- a/serverless/endpoints/job-states.mdx
+++ b/serverless/endpoints/job-states.mdx
@@ -17,7 +17,7 @@ Understanding job states helps you track the progress of individual requests and
## Endpoint metrics
-You can find endpoint metrics in the **Metrics** tab of the Serverless endpoint details page in the [Runpod web interface](https://www.runpod.io/console/serverless).
+You can find endpoint metrics in the **Metrics** tab of the Serverless endpoint details page in the [Runpod web interface](https://www.console.runpod.io/serverless).
* **Requests**: Displays the total number of requests received by your endpoint, along with the number of completed, failed, and retried requests.
* **Execution time**: Displays the P70, P90, and P98 execution times for requests on your endpoint. These percentiles help analyze execution time distribution and identify potential performance bottlenecks.
diff --git a/serverless/endpoints/manage-endpoints.mdx b/serverless/endpoints/manage-endpoints.mdx
index b1d7ae74..c4869a82 100644
--- a/serverless/endpoints/manage-endpoints.mdx
+++ b/serverless/endpoints/manage-endpoints.mdx
@@ -9,7 +9,7 @@ This guide covers the essential management operations for Runpod Serverless endp
Create a new Serverless endpoint through the Runpod web interface:
-1. Navigate to the [Serverless section](https://www.runpod.io/console/serverless) of the Runpod console.
+1. Navigate to the [Serverless section](https://www.console.runpod.io/serverless) of the Runpod console.
2. Click **New Endpoint**.
3. Select a source for your endpoint, such as a [Docker image](/serverless/workers/deploy), [GitHub repo](/serverless/workers/github-integration), or a preset model. Click **Next**.
4. Follow the UI steps to select a Docker image, GitHub repo, or Hugging Face model. Click **Next**.
@@ -30,7 +30,7 @@ After deployment, your endpoint takes time to initialize before it is ready to p
You can modify your endpoint's configuration at any time:
-1. Navigate to the [Serverless section](https://www.runpod.io/console/serverless) in the Runpod console.
+1. Navigate to the [Serverless section](https://www.console.runpod.io/serverless) in the Runpod console.
2. Click the three dots in the bottom right corner of the endpoint you want to modify.
@@ -58,7 +58,7 @@ To force an immediate configuration update, temporarily set **Max Workers** to 0
Attach persistent storage to share data across workers:
-1. Navigate to the [Serverless section](https://www.runpod.io/console/serverless) in the Runpod console.
+1. Navigate to the [Serverless section](https://www.console.runpod.io/serverless) in the Runpod console.
2. Click the three dots in the bottom right corner of the endpoint you want to modify.
3. Click **Edit Endpoint**.
4. Expand the **Advanced** section.
@@ -71,7 +71,7 @@ Network volumes are mounted to the same path on each worker, making them ideal f
When you no longer need an endpoint, you can remove it from your account:
-1. Navigate to the [Serverless section](https://www.runpod.io/console/serverless) in the Runpod console.
+1. Navigate to the [Serverless section](https://www.console.runpod.io/serverless) in the Runpod console.
2. Click the three dots in the bottom right corner of the endpoint you want to delete.
3. Click **Delete Endpoint**.
4. Type the name of the endpoint, then click **Confirm**.
diff --git a/serverless/endpoints/send-requests.mdx b/serverless/endpoints/send-requests.mdx
index 448dd86a..56063c45 100644
--- a/serverless/endpoints/send-requests.mdx
+++ b/serverless/endpoints/send-requests.mdx
@@ -33,7 +33,7 @@ The exact parameters inside the `input` object depend on your specific worker im
The quickest way to test your endpoint is directly in the Runpod console:
-1. Navigate to the [Serverless section](https://www.runpod.io/console/serverless).
+1. Navigate to the [Serverless section](https://www.console.runpod.io/serverless).
2. Select your endpoint.
3. Click the **Requests** tab.
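The same test can also be run from a terminal. A sketch using `curl` (the endpoint ID, API key, and `input` payload are placeholders for your own values):

```sh
# Send a synchronous request to a Serverless endpoint and wait for the result
curl --request POST \
  --header "Authorization: Bearer ${RUNPOD_API_KEY}" \
  --header "Content-Type: application/json" \
  --data '{"input": {"prompt": "Hello, world!"}}' \
  "https://api.runpod.ai/v2/${ENDPOINT_ID}/runsync"
```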
diff --git a/serverless/overview.mdx b/serverless/overview.mdx
index ae62e3b9..e6b0c548 100644
--- a/serverless/overview.mdx
+++ b/serverless/overview.mdx
@@ -54,13 +54,13 @@ Runpod Serverless offers several ways to deploy your workloads, each designed fo
You can deploy a Serverless endpoint from a repo in the [Runpod Hub](/hub/overview) in seconds:
-1. Navigate to the [Hub page](https://www.runpod.io/console/hub) in the Runpod console.
+1. Navigate to the [Hub page](https://www.console.runpod.io/hub) in the Runpod console.
2. Browse the collection and select a repo that matches your needs.
3. Review the repo details, including hardware requirements and available configuration options to ensure compatibility with your use case.
4. Click the **Deploy** button in the top-right of the repo page. You can also use the dropdown menu to deploy an older version.
5. Click **Create Endpoint**
-[Deploy a repo from the Runpod Hub →](https://www.runpod.io/console/hub)
+[Deploy a repo from the Runpod Hub →](https://www.console.runpod.io/hub)
### Deploy a vLLM worker
diff --git a/serverless/storage/network-volumes.mdx b/serverless/storage/network-volumes.mdx
index 2b64c5b7..0d700133 100644
--- a/serverless/storage/network-volumes.mdx
+++ b/serverless/storage/network-volumes.mdx
@@ -31,7 +31,7 @@ Consider using a network volume when your endpoints needs:
To create a new network volume:
-1. Navigate to the [Storage page](https://www.runpod.io/console/user/storage) in the Runpod console.
+1. Navigate to the [Storage page](https://www.console.runpod.io/user/storage) in the Runpod console.
2. Select **New Network Volume**.
3. **Configure your volume:**
* Select a datacenter for your volume. Datacenter location does not affect pricing, but the datacenter location will determine which endpoints your network volume can be paired with. Your Serverless endpoint must be in the same datacenter as the network volume.
@@ -46,13 +46,13 @@ To create a new network volume:
4. Select **Create Network Volume**.
-You can edit and delete your network volumes using the [Storage page](https://www.runpod.io/console/user/storage).
+You can edit and delete your network volumes using the [Storage page](https://www.console.runpod.io/user/storage).
## Attach a network volume to an endpoint
To enable workers on an endpoint to use a network volume:
-1. Navigate to the [Serverless page](https://www.runpod.io/console/serverless/user/endpoints) in the Runpod console.
+1. Navigate to the [Serverless page](https://www.console.runpod.io/serverless/user/endpoints) in the Runpod console.
2. Either create a **New Endpoint** or select an existing endpoint and choose **Edit Endpoint** from the options menu (three dots).
3. In the endpoint configuration, expand the **Advanced** section.
4. From the **Network Volume** dropdown, select the network volume you want to attach to the endpoint.
diff --git a/serverless/storage/s3-api.mdx b/serverless/storage/s3-api.mdx
index bcc4da84..1a8409f5 100644
--- a/serverless/storage/s3-api.mdx
+++ b/serverless/storage/s3-api.mdx
@@ -41,7 +41,7 @@ Create a network volume in one of the following datacenters to use the S3-compat
Next, you'll need to generate a new key called an "S3 API key" (this is separate from your Runpod API key).
- 1. In the Runpod console, navigate to the [Settings page](https://www.runpod.io/console/user/settings).
+ 1. In the Runpod console, navigate to the [Settings page](https://www.console.runpod.io/user/settings).
2. Expand the **S3 API Keys** section and select **Create an S3 API key**.
3. Give your key a name and select **Create**.
4. Make a note of the **access key** (e.g., `user_***...`) and **secret** (e.g., `rps_***...`) to use in the next step.
@@ -61,8 +61,8 @@ Create a network volume in one of the following datacenters to use the S3-compat
1. If you haven't already, [install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) on your local machine.
2. Run the command `aws configure` in your terminal.
3. Provide the following when prompted:
- * **AWS Access Key ID**: Enter your **access key** (e.g., `user_***...`) from the previous step.
- * **AWS Secret Access Key**: Enter your **secret** (e.g., `rps_***...`) from the previous step.
+ * **AWS Access Key ID**: Enter your Runpod user ID. You can find this in the [Secrets section](https://www.console.runpod.io/user/secrets) of the Runpod console, in the description of your S3 API key. By default, the description will look similar to: `Shared Secret for user_2f21CfO73Mm2Uq2lEGFiEF24IPw 1749176107073`. `user_2f21CfO73Mm2Uq2lEGFiEF24IPw` is the user ID (yours will be different).
+ * **AWS Secret Access Key**: Enter your Runpod S3 API key's secret access key.
* **Default Region name**: You can leave this blank.
* **Default output format**: You can leave this blank or set it to `json`.
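Once configured, the volume is addressed like an S3 bucket through Runpod's S3-compatible endpoint. A sketch (the datacenter-specific endpoint URL, region, and volume ID shown here are placeholders; use the values for your own datacenter):

```sh
# List the contents of a network volume through the S3-compatible API
aws s3 ls "s3://${NETWORK_VOLUME_ID}/" \
  --endpoint-url https://s3api-eu-ro-1.runpod.io \
  --region eu-ro-1
```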
diff --git a/serverless/vllm/get-started.mdx b/serverless/vllm/get-started.mdx
index dadbe23e..ebcab330 100644
--- a/serverless/vllm/get-started.mdx
+++ b/serverless/vllm/get-started.mdx
@@ -39,7 +39,7 @@ For this walkthrough, we'll use `openchat/openchat-3.5-0106`, but you can substi
The easiest way to deploy a vLLM worker is through the Runpod console:
-1. Navigate to the [Serverless page](https://www.runpod.io/console/serverless).
+1. Navigate to the [Serverless page](https://www.console.runpod.io/serverless).
2. Under **Quick Deploy**, find **Serverless vLLM** and click **Configure**.
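Once deployed, the vLLM worker exposes an OpenAI-compatible route. A hedged example request (the endpoint ID and API key are placeholders, and the route shape assumes the standard vLLM worker image):

```sh
# Chat completion against a deployed vLLM worker's OpenAI-compatible API
curl --request POST \
  --header "Authorization: Bearer ${RUNPOD_API_KEY}" \
  --header "Content-Type: application/json" \
  --data '{"model": "openchat/openchat-3.5-0106", "messages": [{"role": "user", "content": "Say hello."}]}' \
  "https://api.runpod.ai/v2/${ENDPOINT_ID}/openai/v1/chat/completions"
```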
diff --git a/serverless/workers/custom-worker.mdx b/serverless/workers/custom-worker.mdx
index e646112a..715a1c8f 100644
--- a/serverless/workers/custom-worker.mdx
+++ b/serverless/workers/custom-worker.mdx
@@ -185,7 +185,7 @@ Before you can deploy your worker on Runpod Serverless, you need to push it to D
To deploy your worker to a Serverless endpoint:
-1. Go to the [Serverless section](https://www.runpod.io/console/serverless) of the Runpod console.
+1. Go to the [Serverless section](https://www.console.runpod.io/serverless) of the Runpod console.
2. Click **New Endpoint**.
3. Under **Custom Source**, select **Docker Image**, then click **Next**.
4. In the **Container Image** field, enter your Docker image URL: `docker.io/yourusername/serverless-test:latest`.
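For reference, building and pushing that image (the `yourusername` namespace is a placeholder; `--platform linux/amd64` matters if you build on an ARM machine such as Apple Silicon) typically looks like:

```sh
# Build for the amd64 architecture Runpod workers run on, then push to Docker Hub
docker build --platform linux/amd64 -t yourusername/serverless-test:latest .
docker push yourusername/serverless-test:latest
```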
diff --git a/serverless/workers/github-integration.mdx b/serverless/workers/github-integration.mdx
index 22ba37b6..5ff4fb78 100644
--- a/serverless/workers/github-integration.mdx
+++ b/serverless/workers/github-integration.mdx
@@ -21,7 +21,7 @@ To deploy a worker from GitHub, you need:
Before deploying from GitHub, you need to authorize Runpod to access your repositories:
-1. Open the [settings page](http://runpod.io/console/user/settings) in the Runpod console.
+1. Open the [settings page](https://console.runpod.io/user/settings) in the Runpod console.
2. Find the **GitHub** card under **Connections** and click **Connect**.
@@ -40,7 +40,7 @@ You can manage this connection using Runpod settings or GitHub account settings,
To deploy a worker from a GitHub repository:
-1. Go to the [Serverless section](https://www.runpod.io/console/serverless) of the Runpod console
+1. Go to the [Serverless section](https://www.console.runpod.io/serverless) of the Runpod console
2. Click **New Endpoint**
@@ -177,7 +177,7 @@ When using GitHub integration with Runpod, be aware of these important limitatio
To disconnect your GitHub account from Runpod:
-1. Go to [Runpod Settings](https://www.runpod.io/console/user/settings) → **Connections** → **Edit Connection**
+1. Go to [Runpod Settings](https://www.console.runpod.io/user/settings) → **Connections** → **Edit Connection**
2. Select your GitHub account.
3. Click **Configure**.
4. Scroll down to the Danger Zone.
diff --git a/tutorials/migrations/cog/overview.mdx b/tutorials/migrations/cog/overview.mdx
index f3b5419e..802b5540 100644
--- a/tutorials/migrations/cog/overview.mdx
+++ b/tutorials/migrations/cog/overview.mdx
@@ -76,7 +76,7 @@ Now that your Docker image is ready, it's time to create and deploy a serverless
To create and deploy a serverless endpoint on Runpod:
-1. Log in to the [Runpod Serverless console](https://www.runpod.io/console/serverless).
+1. Log in to the [Runpod Serverless console](https://www.console.runpod.io/serverless).
2. Select **+ New Endpoint**.
diff --git a/tutorials/migrations/openai/overview.mdx b/tutorials/migrations/openai/overview.mdx
index be271537..360ddef6 100644
--- a/tutorials/migrations/openai/overview.mdx
+++ b/tutorials/migrations/openai/overview.mdx
@@ -64,4 +64,4 @@ Congratulations on successfully modifying your OpenAI Codebase for use with your
* [Explore more tutorials on Runpod](/tutorials/introduction/overview)
* [Learn more about OpenAI's API](https://platform.openai.com/docs/)
-* [Deploy your own vLLM Worker on Runpod](https://www.runpod.io/console/serverless)
+* [Deploy your own vLLM Worker on Runpod](https://www.console.runpod.io/serverless)
diff --git a/tutorials/pods/build-docker-images.mdx b/tutorials/pods/build-docker-images.mdx
index 27d66eca..8d15e1ef 100644
--- a/tutorials/pods/build-docker-images.mdx
+++ b/tutorials/pods/build-docker-images.mdx
@@ -21,7 +21,7 @@ Before you begin this guide you'll need the following:
## Create a Pod
-1. Navigate to [Pods](https://www.runpod.io/console/pods) and select **+ Deploy**.
+1. Navigate to [Pods](https://www.console.runpod.io/pods) and select **+ Deploy**.
2. Choose between **GPU** and **CPU**.
diff --git a/tutorials/pods/fine-tune-llm-axolotl.mdx b/tutorials/pods/fine-tune-llm-axolotl.mdx
index 766b9360..cc941931 100644
--- a/tutorials/pods/fine-tune-llm-axolotl.mdx
+++ b/tutorials/pods/fine-tune-llm-axolotl.mdx
@@ -20,7 +20,7 @@ Fine-tuning a large language model (LLM) can take up a lot of compute power. Bec
To do this, you'll need to create a Pod, specify a container, then you can begin training. A Pod is an instance on a GPU or multiple GPUs that you can use to run your training job. You also specify a Docker image like `axolotlai/axolotl-cloud:main-latest` that you want installed on your Pod.
-1. Login to [Runpod](https://www.runpod.io/console/console/home) and deploy your Pod.
+1. Log in to [Runpod](https://www.console.runpod.io/home) and deploy your Pod.
1. Select **Deploy**.
2. Select an appropriate GPU instance.
diff --git a/tutorials/pods/run-ollama.mdx b/tutorials/pods/run-ollama.mdx
index 4c7efd09..8c124c13 100644
--- a/tutorials/pods/run-ollama.mdx
+++ b/tutorials/pods/run-ollama.mdx
@@ -16,7 +16,7 @@ The tutorial assumes you have a Runpod account with credits. No other prior know
You will create a new Pod with the PyTorch template. In this step, you will set overrides to configure Ollama.
-1. Log in to your [Runpod account](https://www.runpod.io/console/pods) and choose **+ GPU Pod**.
+1. Log in to your [Runpod account](https://www.console.runpod.io/pods) and choose **+ GPU Pod**.
2. Choose a GPU Pod like `A40`.
diff --git a/tutorials/sdks/python/get-started/introduction.mdx b/tutorials/sdks/python/get-started/introduction.mdx
index 16b11cba..4c63edef 100644
--- a/tutorials/sdks/python/get-started/introduction.mdx
+++ b/tutorials/sdks/python/get-started/introduction.mdx
@@ -14,7 +14,7 @@ To follow along with this guide, you should have:
* Basic programming knowledge in Python.
* An understanding of AI and machine learning concepts.
-* [An account on the Runpod platform](https://www.runpod.io/console/signup).
+* [An account on the Runpod platform](https://www.console.runpod.io/signup).
## What is the Runpod Python SDK?
diff --git a/tutorials/serverless/generate-sdxl-turbo.mdx b/tutorials/serverless/generate-sdxl-turbo.mdx
index 46f72160..3eab307d 100644
--- a/tutorials/serverless/generate-sdxl-turbo.mdx
+++ b/tutorials/serverless/generate-sdxl-turbo.mdx
@@ -79,7 +79,7 @@ The container you just built will run on the Worker you're creating. Here, you w
This step will walk you through deploying a Serverless Endpoint to Runpod.
-1. Log in to the [Runpod Serverless console](https://www.runpod.io/console/serverless).
+1. Log in to the [Runpod Serverless console](https://www.console.runpod.io/serverless).
2. Select **+ New Endpoint**.
diff --git a/tutorials/serverless/run-gemma-7b.mdx b/tutorials/serverless/run-gemma-7b.mdx
index 608df1ca..d975a385 100644
--- a/tutorials/serverless/run-gemma-7b.mdx
+++ b/tutorials/serverless/run-gemma-7b.mdx
@@ -19,7 +19,7 @@ To begin, we'll deploy a vLLM Worker as a Serverless Endpoint. Runpod simplifies
Follow these steps in the Runpod Serverless console to create your Endpoint.
-1. Log in to the [Runpod Serverless console](https://www.runpod.io/console/serverless).
+1. Log in to the [Runpod Serverless console](https://www.console.runpod.io/serverless).
2. Select **+ New Endpoint**.
diff --git a/tutorials/serverless/run-ollama-inference.mdx b/tutorials/serverless/run-ollama-inference.mdx
index ca5a530f..f1fca224 100644
--- a/tutorials/serverless/run-ollama-inference.mdx
+++ b/tutorials/serverless/run-ollama-inference.mdx
@@ -14,7 +14,7 @@ Use a [Network volume](/pods/storage/create-network-volumes) to attach to your W
To begin, you need to set up a new endpoint on Runpod.
-1. Log in to your [Runpod account](https://www.runpod.io/console/console/home).
+1. Log in to your [Runpod account](https://www.console.runpod.io/home).
2. Navigate to the **Serverless** section and select **New Endpoint**.
3. Choose **CPU** and provide a name for your Endpoint, for example 8 vCPUs 16 GB RAM.
4. Configure your Worker settings according to your needs.