PerplexityApiSwift is a Swift framework that provides a convenient wrapper for the Perplexity AI API. This framework simplifies the process of making chat completion requests to Perplexity's advanced language models.
- Easy-to-use Swift interface for the Perplexity AI API
- Support for multiple Perplexity AI models
- Asynchronous API calls using Swift's modern concurrency features
- Built-in error handling for common API issues
To use PerplexityApiSwift, create an instance of `PerplexityAPI` with your API token and then make chat completion requests. Here's a basic example:
```swift
import PerplexityApiSwift

// Initialize the API client
let api = PerplexityAPI(token: "your_api_token_here")

// Create a message
let messages = [Message(role: "user", content: "What is the capital of France?")]

// Make a chat completion request
do {
    let response = try await api.chatCompletion(messages: messages, model: .sonar)
    print(response.choices.first?.message.content ?? "No response")
} catch {
    print("Error: \(error)")
}
```
Important: You need to obtain an API token from Perplexity AI to use this framework. Make sure to keep your token secure and never share it publicly.
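One common way to keep the token out of source control is to read it from an environment variable at startup. A minimal sketch — the `PERPLEXITY_API_TOKEN` variable name is an assumption for illustration, not part of the framework:

```swift
import Foundation
import PerplexityApiSwift

// Read the token from the environment instead of hardcoding it.
// PERPLEXITY_API_TOKEN is a hypothetical variable name; use whatever
// your deployment setup provides.
guard let token = ProcessInfo.processInfo.environment["PERPLEXITY_API_TOKEN"] else {
    fatalError("PERPLEXITY_API_TOKEN is not set")
}

let api = PerplexityAPI(token: token)
```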
The framework supports various Perplexity AI models through the `PerplexityModel` enum:

- `.sonarDeepResearch`: Advanced research model with 128K context length
- `.sonarReasoningPro`: Enhanced reasoning model with 128K context length
- `.sonarReasoning`: Base reasoning model with 128K context length
- `.sonarPro`: Professional model with 200K context length
- `.sonar`: Standard model with 128K context length
- `.r1_1776`: Base model with 128K context length
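Selecting a model is just a matter of passing a different `PerplexityModel` case to `chatCompletion`. For example, a sketch using the larger-context `.sonarPro` model (same call shape as the basic example above):

```swift
import PerplexityApiSwift

let api = PerplexityAPI(token: "your_api_token_here")
let messages = [Message(role: "user", content: "Summarize the history of Paris.")]

do {
    // .sonarPro offers a 200K context length, useful for longer prompts
    let response = try await api.chatCompletion(messages: messages, model: .sonarPro)
    print(response.choices.first?.message.content ?? "No response")
} catch {
    print("Error: \(error)")
}
```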
PerplexityApiSwift defines a `PerplexityError` enum for common errors:

- `.tokenNotSet`: The API token has not been set
- `.invalidResponse(statusCode:)`: The API returned an invalid response with the given status code
- `.invalidResponseFormat`: The API response could not be decoded
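You can pattern-match on these cases to handle each failure mode separately. A sketch, assuming `PerplexityError` is what `chatCompletion` throws on failure:

```swift
import PerplexityApiSwift

let api = PerplexityAPI(token: "your_api_token_here")
let messages = [Message(role: "user", content: "What is the capital of France?")]

do {
    let response = try await api.chatCompletion(messages: messages, model: .sonar)
    print(response.choices.first?.message.content ?? "No response")
} catch let error as PerplexityError {
    switch error {
    case .tokenNotSet:
        print("Set an API token before making requests.")
    case .invalidResponse(let statusCode):
        print("Request failed with HTTP status \(statusCode)")
    case .invalidResponseFormat:
        print("Could not decode the API response.")
    }
} catch {
    print("Unexpected error: \(error)")
}
```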
The following features are planned for future releases:
- Structured Outputs: Support for receiving structured, typed responses from the API
- Streaming Response: Real-time streaming of model responses for improved user experience
For more detailed information, please refer to the official Perplexity AI API documentation.