# Build voice-based LLM apps in minutes
Vocode is an open source library that makes it easy to build voice-based LLM apps. Using Vocode, you can build real-time streaming conversations with LLMs and deploy them to phone calls, Zoom meetings, and more. You can also build personal assistants or apps like voice-based chess. Vocode provides easy abstractions and integrations so that everything you need is in a single library.
## ⭐️ Features
- 🗣 Spin up a conversation with your system audio
- ➡️ 📞 Set up a phone number that responds with an LLM-based agent
- 📞 ➡️ Send out phone calls from your phone number, managed by an LLM-based agent
- 🧑‍💻 Dial into a Zoom call
- Out of the box integrations with:
  - Transcription services, including Deepgram
  - LLMs, including ChatGPT
  - Synthesis services, including Azure Speech
Check out our React SDK!
## Contributing
If there are features or integrations that don't exist yet, please add them! Feel free to fork and create a PR and we will get it merged as soon as possible. We'll have more guidelines on contributions soon :)
## 🚀 Quickstart (Self-hosted)
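Before running the quickstart, install the library (assuming the package is published on PyPI under the name `vocode`):

```shell
pip install vocode
```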
```python
import asyncio
import signal

import vocode
from vocode.streaming.streaming_conversation import StreamingConversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.streaming.models.transcriber import (
    DeepgramTranscriberConfig,
    PunctuationEndpointingConfig,
)
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig

# these can also be set as environment variables
vocode.setenv(
    OPENAI_API_KEY="<your OpenAI key>",
    DEEPGRAM_API_KEY="<your Deepgram key>",
    AZURE_SPEECH_KEY="<your Azure key>",
    AZURE_SPEECH_REGION="<your Azure region>",
)


async def main():
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        streaming=True, use_default_devices=False
    )

    conversation = StreamingConversation(
        output_device=speaker_output,
        transcriber_config=DeepgramTranscriberConfig.from_input_device(
            microphone_input, endpointing_config=PunctuationEndpointingConfig()
        ),
        agent_config=ChatGPTAgentConfig(
            initial_message=BaseMessage(text="Hello!"),
            prompt_preamble="Have a pleasant conversation about life",
        ),
        synthesizer_config=AzureSynthesizerConfig.from_output_device(speaker_output),
    )

    await conversation.start()
    print("Conversation started, press Ctrl+C to end")
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.terminate())
    while conversation.is_active():
        chunk = microphone_input.get_audio()
        if chunk:
            conversation.receive_audio(chunk)
        await asyncio.sleep(0)


if __name__ == "__main__":
    asyncio.run(main())
```
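As the comment in the snippet notes, the keys can also be supplied as environment variables instead of calling `vocode.setenv`. A minimal sketch, assuming the variable names match the `setenv` keyword arguments above:

```python
import os

# Set these before importing/starting the conversation; the names are
# assumed to match the keyword arguments passed to vocode.setenv above.
os.environ["OPENAI_API_KEY"] = "<your OpenAI key>"
os.environ["DEEPGRAM_API_KEY"] = "<your Deepgram key>"
os.environ["AZURE_SPEECH_KEY"] = "<your Azure key>"
os.environ["AZURE_SPEECH_REGION"] = "<your Azure region>"
```

In practice you would export these in your shell or a `.env` file rather than hard-coding them in source.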
## ☁️ Quickstart (Hosted)
First, get a free API key from our dashboard.
```python
import asyncio
import signal

import vocode
from vocode.streaming.hosted_streaming_conversation import HostedStreamingConversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.streaming.models.transcriber import (
    DeepgramTranscriberConfig,
    PunctuationEndpointingConfig,
)
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig

vocode.api_key = "<your API key>"

if __name__ == "__main__":
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        streaming=True, use_default_devices=False
    )
    conversation = HostedStreamingConversation(
        input_device=microphone_input,
        output_device=speaker_output,
        transcriber_config=DeepgramTranscriberConfig.from_input_device(
            microphone_input,
            endpointing_config=PunctuationEndpointingConfig(),
        ),
        agent_config=ChatGPTAgentConfig(
            initial_message=BaseMessage(text="Hello!"),
            prompt_preamble="Have a pleasant conversation about life",
        ),
        synthesizer_config=AzureSynthesizerConfig.from_output_device(speaker_output),
    )
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.deactivate())
    asyncio.run(conversation.start())
```