diff --git a/examples/gpt4-1_prompting_guide.ipynb b/examples/gpt4-1_prompting_guide.ipynb
index 03ef0d63fc..1ab5e5717d 100644
--- a/examples/gpt4-1_prompting_guide.ipynb
+++ b/examples/gpt4-1_prompting_guide.ipynb
@@ -576,19 +576,7 @@
"\n",
"Guidance specifically for adding a large number of documents or files to input context:\n",
"\n",
- "* XML performed well in our long context testing. \n",
- " * Example: `The quick brown fox jumps over the lazy dog` \n",
- "* This format, proposed by Lee et al. ([ref](https://arxiv.org/pdf/2406.13121)), also performed well in our long context testing. \n",
- " * Example: `ID: 1 | TITLE: The Fox | CONTENT: The quick brown fox jumps over the lazy dog` \n",
- "* JSON performed particularly poorly. \n",
- " * Example: `[{"id": 1, "title": "The Fox", "content": "The quick brown fox jumped over the lazy dog"}]`\n",
- "\n",
- "The model is trained to robustly understand structure in a variety of formats. Generally, use your judgement and think about what will provide clear information and “stand out” to the model. For example, if you’re retrieving documents that contain lots of XML, an XML-based delimiter will likely be less effective. \n",
- "\n",
- "## Caveats\n",
- "\n",
- "* In some isolated cases we have observed the model being resistant to producing very long, repetitive outputs, for example, analyzing hundreds of items one by one. If this is necessary for your use case, instruct the model strongly to output this information in full, and consider breaking down the problem or using a more concise approach. \n",
- "* We have seen some rare instances of parallel tool calls being incorrect. We advise testing this, and considering setting the [parallel\\_tool\\_calls](https://platform.openai.com/docs/api-reference/responses/create#responses-create-parallel_tool_calls) param to false if you’re seeing issues."
+ "* XML performed well in our long context testing. \n * Example: `The quick brown fox jumps over the lazy dog` \n* This format, proposed by Lee et al. ([ref](https://arxiv.org/pdf/2406.13121)), also performed well in our long context testing. \n * Example: `ID: 1 | TITLE: The Fox | CONTENT: The quick brown fox jumps over the lazy dog` \n* JSON performed particularly poorly. \n * Example: `[{\"id\": 1, \"title\": \"The Fox\", \"content\": \"The quick brown fox jumps over the lazy dog\"}]`\n\nThe model is trained to robustly understand structure in a variety of formats. Generally, use your judgement and think about what will provide clear information and “stand out” to the model. For example, if you’re retrieving documents that contain lots of XML, an XML-based delimiter will likely be less effective. \n\n## Caveats\n\n* In some isolated cases we have observed the model being resistant to producing very long, repetitive outputs, for example, analyzing hundreds of items one by one. If this is necessary for your use case, instruct the model strongly to output this information in full, and consider breaking down the problem or using a more concise approach. \n* We have seen some rare instances of parallel tool calls being incorrect. We advise testing this, and considering setting the [parallel\\_tool\\_calls](https://platform.openai.com/docs/api-reference/responses/create#responses-create-parallel_tool_calls) param to false if you’re seeing issues."
]
},
{