Autonomys Agents is an EXPERIMENTAL framework for building AI agents. Currently, the framework supports agents that can interact with social networks and maintain permanent memory through the Autonomys Network. We are still in the EARLY STAGES OF DEVELOPMENT and are actively seeking feedback and contributions. We will be rapidly adding many more workflows and features.
IMPORTANT: The main branch of this repository is under active development and may contain breaking changes. Please use the latest stable release for production environments.
- 🤖 Autonomous social media engagement
- 🧠 Permanent agent memory storage via Autonomys Network
- 🔄 Built-in orchestration system
- 🐦 Twitter integration (with more platforms planned)
- 🎭 Customizable agent personalities
- 🛠️ Extensible tool system
- Create a new repository using the template at autonomys-agent-template
- Clone your new repository and install dependencies:
```bash
git clone <your-repo-url>
cd <your-repo-directory>
yarn install
```
- Windows users will need to install the Visual Studio C++ Redistributable, available here: https://aka.ms/vs/17/release/vc_redist.x64.exe
- Create a character using the provided script:
```bash
yarn create-character your_character_name
```
- Configure your character:
  - Edit `characters/your_character_name/config/.env` with your API keys and credentials. `OPENAI_API_KEY` is required for the vector database that powers agent memory through embeddings.
  - Customize `characters/your_character_name/config/config.yaml` for agent behavior.
  - Define the personality in `characters/your_character_name/config/your_character_name.yaml`.
- Generate SSL certificates (required for API server):
```bash
yarn generate-certs
```
The agent supports the following command-line arguments:
- Character name (required):

  ```bash
  yarn start my-character
  ```

- `--headless` (optional): run the agent without starting the API server:

  ```bash
  yarn start my-character --headless
  ```

- `--help`: show the available command-line options:

  ```bash
  yarn start --help
  ```
Docker support allows you to run multiple agents in isolated containers. For detailed instructions on setting up Docker images and containers for your characters, visit our autonomys-agent-template repository.
You can run multiple agents simultaneously by:
- Creating different character configurations
- Generating separate compose files for each character
- Using a different `HOST_PORT` for each agent
Each agent will:
- Have its own isolated environment
- Use its own character configuration
- Store data in separate volumes
- Be accessible on its designated port
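To make the per-agent isolation concrete, a compose file for one agent might look like the sketch below. The service name, image name, and paths here are illustrative assumptions, not the template repository's actual values; consult autonomys-agent-template for the generated files.

```yaml
# Hypothetical compose file for one agent; the image name, service name,
# and volume paths are illustrative, not the template's actual values.
services:
  agent-alice:
    image: autonomys-agent            # assumed image name
    env_file:
      - ./characters/alice/config/.env
    ports:
      - "3011:3010"                   # HOST_PORT 3011 is unique to this agent
    volumes:
      - alice-data:/app/data          # separate volume per agent
volumes:
  alice-data:
```

A second agent would get its own service entry, its own `env_file`, a different host port, and a different named volume, giving each container the isolated environment described above.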
A modern web-based interface for interacting with your agent. To start:
- Configure the Agent API. In your agent character's `.env` file, add these API settings:

  ```bash
  API_PORT=3010
  API_TOKEN=your_api_token_min_32_chars_long_for_security
  ENABLE_AUTH=true
  CORS_ALLOWED_ORIGINS=http://localhost:3000,http://localhost:3001
  ```

- Configure the Web CLI:

  ```bash
  cp .env.sample .env
  ```

- Update the Web CLI environment. Edit the `.env` file with your configuration:
  - `PORT`: the port for running the Web CLI interface
  - `REACT_APP_API_BASE_URL`: your Agent API address (e.g., http://localhost:3010/api)
  - `REACT_APP_API_TOKEN`: the same token used in your agent configuration

- Start the web interface:

  ```bash
  yarn dev:web
  ```
The following examples demonstrating the use of the framework are available:
- Twitter Agent
- Multi Personality
- Github Agent
- Notion Agent
- Slack Agent
- Web3 Agent
To run an example, use the following command to see its options:

```bash
yarn example <example-name> <character> --workspace=<absolute path to directory that contains characters, .cookies, and certs folders>
```
The framework uses a YAML-based character system that allows you to create and run different AI personalities.
Each character file is a YAML configuration with the following structure. For an example character personality configuration, see character.example.yaml and for example parameter configuration, see config.example.yaml.
The orchestrator includes a message pruning system to manage the LLM's context window size. This is important because LLMs have a limited context window, and long conversations need to be summarized to stay within these limits while retaining important information.
The pruning system works through two main parameters:

- `maxQueueSize` (default: 50): the maximum number of messages to keep before triggering a summarization
- `maxWindowSummary` (default: 10): how many of the most recent messages to keep after summarization

Here's how the pruning process works:

- When the number of messages exceeds `maxQueueSize`, summarization is triggered
- The system creates a summary of messages from index 1 to `maxWindowSummary`
- After summarization, the new message queue contains:
  - The original first message
  - The new summary message
  - All messages from index `maxWindowSummary` onwards
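The steps above can be sketched in TypeScript. `pruneMessages` and its default summarizer are hypothetical stand-ins for illustration, not the framework's actual API, and the index boundaries follow the description above:

```typescript
// Sketch of the pruning process described above; pruneMessages and the
// default summarizer are illustrative, not the framework's actual API.
type Message = { role: string; content: string };

function pruneMessages(
  messages: Message[],
  maxQueueSize = 50,
  maxWindowSummary = 10,
  summarize: (msgs: Message[]) => Message = msgs => ({
    role: 'system',
    content: `Summary of ${msgs.length} messages`,
  }),
): Message[] {
  // Summarization triggers only once the queue exceeds maxQueueSize
  if (messages.length <= maxQueueSize) return messages;
  // Summarize messages from index 1 up to maxWindowSummary
  const summary = summarize(messages.slice(1, maxWindowSummary));
  // Keep the original first message, the summary, and all messages
  // from index maxWindowSummary onwards
  return [messages[0], summary, ...messages.slice(maxWindowSummary)];
}
```

With the defaults, a 51-message queue is pruned to the first message, one summary, and the messages from index 10 onwards.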
You can configure these parameters when creating the orchestrator:
```typescript
const runner = await getOrchestratorRunner(character, {
  pruningParameters: {
    maxWindowSummary: 10, // Keep 10 most recent messages after summarization
    maxQueueSize: 50, // Trigger summarization when reaching 50 messages
  },
  // ... other configuration options
});
```
This ensures your agent can maintain long-running conversations while keeping the most relevant context within the LLM's context window limits.
The framework uses the Autonomys Network for permanent storage of agent memory and interactions. This enables:
- Persistent agent memory across sessions
- Verifiable interaction history
- Cross-agent memory sharing
- Decentralized agent identity
To use this feature:
- Configure your `AUTO_DRIVE_API_KEY` in `.env` (obtain one from https://ai3.storage)
- Enable Auto Drive uploading in `config.yaml`
- Provide your Taurus EVM wallet details (`PRIVATE_KEY`) and Agent Memory Contract Address (`CONTRACT_ADDRESS`) in `.env`
- Make sure your Taurus EVM wallet has funds. A faucet can be found at https://subspacefaucet.com/
- Provide an encryption password in `.env` (optional; leave empty to store agent memories unencrypted)
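Putting those settings together, a `.env` fragment might look like the following. All values are placeholders, and the encryption-password variable name is a guess, since this document does not name it:

```bash
# Placeholder values; obtain a real API key from https://ai3.storage
AUTO_DRIVE_API_KEY=your_auto_drive_api_key
# Taurus EVM wallet and Agent Memory Contract
PRIVATE_KEY=0x_your_taurus_evm_private_key
CONTRACT_ADDRESS=0x_your_agent_memory_contract_address
# Hypothetical variable name; leave empty to store memories unencrypted
AUTO_DRIVE_ENCRYPTION_PASSWORD=
```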
MIT