# LLM Functions
This is a fork of the original llm-functions by sigoden. It adds sandboxing functionality for working with Docker containers.
## General
This project empowers you to effortlessly build powerful LLM tools and agents using familiar languages like Bash, JavaScript, and Python.
Forget complex integrations: harness the power of function calling to connect your LLMs directly to custom code and unlock a world of possibilities. Execute system commands, process data, interact with APIs – the only limit is your imagination.
### Tools Showcase

### Agents Showcase
## Prerequisites
Make sure you have the following tools installed:

- `argc`: the command runner used throughout this guide to build, check, and link tools and agents
- GNU Make (optional): only needed if you want to build everything with `make`
- Node.js and/or Python: only needed if you use JavaScript or Python tools
- Docker: only needed for the container sandboxing functionality this fork adds
## Installation
- Clone this repository:

  ```sh
  git clone https://git.kug.is/llm-functions-docker.git
  ```

- `cd` into the project directory:

  ```sh
  cd llm-functions-docker
  ```

- Build tools and agents. You can either build everything at once or build a selection of tools manually.
### Build all tools with GNU Make

Simply build all tools using GNU Make:

```sh
make
```
### Build a selection of tools

Create a `./tools.txt` file with each tool filename on a new line:

```
get_current_weather.sh
execute_command.sh
#execute_py_code.py
```
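If you prefer to assemble the file from the shell, here is a minimal sketch (assuming it is run from the repository root, where the `tools/` directory lives):

```sh
# List the available tool scripts to pick from:
ls tools/

# Append the tools you want, one filename per line:
printf '%s\n' get_current_weather.sh execute_command.sh >> tools.txt
```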
### Where is the web_search tool?

The `web_search` tool itself doesn't exist directly. Instead, you can choose from a variety of web search tools and link one of them as `web_search`:

1. **Choose a tool.** Available tools include:
   - `web_search_cohere.sh`
   - `web_search_perplexity.sh`
   - `web_search_tavily.sh`
   - `web_search_vertexai.sh`
2. **Link your choice.** Use the `argc` command to link your chosen tool as `web_search`. For example, to use `web_search_perplexity.sh`:

   ```sh
   argc link-web-search web_search_perplexity.sh
   ```

   This command creates a symbolic link, making `web_search.sh` point to the selected `web_search_perplexity.sh` tool. A `web_search.sh` is now ready to be added to your `./tools.txt`.
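To confirm that the link exists (assuming it is created in `./tools/`, next to the other web search tools), you can list it:

```sh
ls -l tools/web_search.sh
```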
Create a `./agents.txt` file with each agent name on a new line:

```
coder
todo
```
Build `bin` and `functions.json`:

```sh
argc build
```
Ensure that everything is ready (environment variables, Node/Python dependencies, mcp-bridge server):

```sh
argc check
```
## Usage
### Link LLM-functions and AIChat

AIChat expects LLM-functions to be placed in AIChat's `functions_dir` so that AIChat can use the tools and agents that LLM-functions provides.
You can symlink this repository directory to AIChat's `functions_dir` with:

```sh
ln -s "$(pwd)" "$(aichat --info | sed -n 's/^functions_dir\s\+//p')"
# OR
argc link-to-aichat
```
Alternatively, you can tell AIChat where the LLM-functions directory is by using an environment variable:

```sh
export AICHAT_FUNCTIONS_DIR="$(pwd)"
```
### Start using the functions

Done! Now you can use the tools and agents with AIChat.

```sh
aichat --role %functions% what is the weather in Paris?
aichat --agent todo list all my todos
```
## Writing Your Own Tools
Building tools for our platform is remarkably straightforward. You can leverage your existing programming knowledge, as tools are essentially just functions written in your preferred language.
LLM Functions automatically generates the JSON declarations for the tools based on comments. Refer to `./tools/demo_tool.{sh,js,py}` for examples of how to use comments for autogeneration of declarations.
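For example, after `argc build` you can look at the declaration that was generated from a tool's comments. This is only a sketch: it assumes `jq` is installed and that `functions.json` in the repository root is a JSON array of declarations with a `name` field:

```sh
# Print the generated declaration for the execute_command tool:
jq '.[] | select(.name == "execute_command")' functions.json
```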
### Bash
Create a new Bash script in the `./tools/` directory (e.g. `execute_command.sh`).
```sh
#!/usr/bin/env bash
set -e

# @describe Execute the shell command.
# @option --command! The command to execute.

main() {
    eval "$argc_command" >> "$LLM_OUTPUT"
}

eval "$(argc --argc-eval "$0" "$@")"
```
### JavaScript
Create a new JavaScript file in the `./tools/` directory (e.g. `execute_js_code.js`).
```js
/**
 * Execute the javascript code in node.js.
 * @typedef {Object} Args
 * @property {string} code - Javascript code to execute, such as `console.log("hello world")`
 * @param {Args} args
 */
exports.run = function ({ code }) {
  eval(code);
};
```
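For a quick manual check (a sketch, assuming the file was saved as `tools/execute_js_code.js` and that you run it from the repository root), call the exported `run` function directly with Node.js:

```sh
node -e 'require("./tools/execute_js_code.js").run({ code: "console.log(1 + 1)" })'
```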
### Python
Create a new Python script in the `./tools/` directory (e.g. `execute_py_code.py`).
```py
def run(code: str):
    """Execute the python code.

    Args:
        code: Python code to execute, such as `print("hello world")`
    """
    exec(code)
```
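Similarly, a quick manual check (a sketch, assuming the file was saved as `tools/execute_py_code.py` and that you run it from the repository root) is to import the module and call `run` directly:

```sh
python3 -c 'import sys; sys.path.insert(0, "tools"); import execute_py_code; execute_py_code.run("print(40 + 2)")'
```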
## Writing Your Own Agents
Agent = Prompt + Tools (Function Calling) + Documents (RAG), which is equivalent to OpenAI's GPTs.
The agent has the following folder structure:
```
└── agents
    └── myagent
        ├── functions.json        # JSON declarations for functions (Auto-generated)
        ├── index.yaml            # Agent definition
        ├── tools.txt             # Shared tools
        └── tools.{sh,js,py}      # Agent tools
```
The agent definition file (`index.yaml`) defines crucial aspects of your agent:
```yaml
name: TestAgent
description: This is a test agent
version: 0.1.0
instructions: You are a test AI agent to ...
conversation_starters:
  - What can you do?
variables:
  - name: foo
    description: This is a foo
documents:
  - local-file.txt
  - local-dir/
  - https://example.com/remote-file.txt
```
Refer to `./agents/demo` for an example of how to implement an agent.
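Once your agent directory is in place, a typical way to enable and try it (a sketch, using a hypothetical agent named `myagent`) is to add it to `./agents.txt` and rebuild:

```sh
echo "myagent" >> agents.txt
argc build
argc check
aichat --agent myagent "What can you do?"
```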
## MCP (Model Context Protocol)
- `mcp/server`: Lets LLM-Functions tools/agents be used through the Model Context Protocol.
- `mcp/bridge`: Lets external MCP tools be used by LLM-Functions.
## Documents
## License

The project is under the MIT License. Refer to the LICENSE file for detailed information.
