In this AI/ML Hackathon post, we want to share another winning project from last year’s Docker AI/ML Hackathon. This time we will dive into Local LLM Messenger, an honorable mention winner created by Justin Garrison.
Developers are pushing the boundaries to bring the power of artificial intelligence (AI) to everyone. One exciting approach involves integrating Large Language Models (LLMs) with familiar messaging platforms like Slack and iMessage. This isn’t just about convenience; it’s about transforming these platforms into launchpads for interacting with powerful AI tools.
![AI/ML hackathon banner](https://www.docker.com/app/uploads/2024/06/2400x1260_ai-ml-hackathon_01-1110x583.png)
Imagine this: You need a quick code snippet or some help brainstorming solutions to coding problems. With LLMs integrated into your messaging app, you can chat with your AI assistant directly within the familiar interface to generate creative ideas or get help brainstorming solutions. No more complex commands or clunky interfaces — just a natural conversation to unlock the power of AI.
Integrating with messaging platforms can be a time-consuming task, especially for macOS users. That’s where Local LLM Messenger (LoLLMM) steps in, offering a streamlined solution for connecting with your AI via iMessage.
What makes LoLLM Messenger unique?
The following demo, which was submitted to the AI/ML Hackathon, provides an overview of LoLLM Messenger (Figure 1).
The LoLLM Messenger bot allows you to send iMessages to Generative AI (GenAI) models running directly on your computer. This approach eliminates the need for complex setups and cloud services, making it easier for developers to experiment with LLMs locally.
Key features of LoLLM Messenger
LoLLM Messenger includes impressive features that make it a standout among similar projects, such as:
- Local execution: Runs on your computer, eliminating the need for cloud-based services and ensuring data privacy.
- Scalability: Handles multiple AI models simultaneously, allowing users to experiment with different models and switch between them easily.
- User-friendly interface: Offers a simple and intuitive interface, making it accessible to users of all skill levels.
- Integration with Sendblue: Integrates seamlessly with Sendblue, enabling users to send iMessages to the bot and receive responses directly in their inbox.
- Support for OpenAI models: Supports the GPT-3.5 Turbo and DALL-E 2 models, giving users access to powerful hosted AI capabilities alongside local models.
- Customization: Allows users to customize the bot’s behavior by modifying the available commands and integrating their own AI models.
How does it work?
The architecture diagram shown in Figure 2 provides a high-level overview of the components and interactions within the LoLLM Messenger project. It illustrates how the main application, AI models, messaging platform, and external APIs work together to enable users to send iMessages to AI models running on their computers.
![Figure 2: Illustration showing components and processes in LoLLM Messenger, including User, SendBlue API, Docker, and AI Models.](https://www.docker.com/app/uploads/2024/08/F2-LoLLM-overview-1110x583.png)
By leveraging Docker, Sendblue, and Ollama, LoLLM Messenger offers a seamless and efficient way to explore AI models without relying on cloud-hosted LLM services. LoLLM Messenger uses Docker Compose to manage the required services.
Docker Compose simplifies the process by handling the setup and configuration of multiple containers: the main application, ngrok (which creates a secure tunnel to the app), and Ollama (a local server that runs the AI models).
Technical stack
The LoLLM Messenger tech stack includes:
- Lollmm service: Runs the main application, which handles incoming iMessages, processes user requests, and interacts with the AI models. It talks to Ollama for local text generation and, when an OpenAI API key is provided, to OpenAI's hosted models.
- Ngrok: Exposes the main application's port 8000 to the internet using ngrok. It runs the Alpine-based ngrok image, forwards traffic from port 8000 through the ngrok tunnel, and is set to run in host network mode.
- Ollama: Runs the Ollama server, which hosts the local models used for text generation. It listens on port 11434 and mounts a volume from `./run/ollama` to `/home/ollama`. The service can be deployed with GPU resources, ensuring that it uses an NVIDIA GPU if one is available.
- Sendblue: The project integrates with Sendblue to handle iMessages. You set up Sendblue by adding your API Key and API Secret to the `app/.env` file and adding your phone number as a Sendblue contact.
Getting started
To get started, ensure that you have installed and set up the following components:
- Install the latest Docker Desktop.
- Register for Sendblue (https://app.sendblue.co/auth/login).
- Create an ngrok account (https://dashboard.ngrok.com/signup) and get your authtoken.
Clone the repository
Open a terminal window and run the following command to clone this sample application:
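The repository URL is not reproduced here; a typical clone looks like the following, with the URL left as a placeholder for the project's GitHub address:

```console
$ git clone <repository-url> local-llm-messenger
$ cd local-llm-messenger
```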
You should now have the following files in your `local-llm-messenger` directory:
```
.
├── LICENSE
├── README.md
├── app
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   ├── default.ai
│   ├── log_conf.yaml
│   └── main.py
├── docker-compose.yaml
├── img
│   ├── banner.png
│   ├── lasers.gif
│   └── lollm-demo-1.gif
├── justfile
└── test
    ├── msg.json
    └── ollama.json

4 directories, 15 files
```
The `main.py` file under the `app/` directory is a Python script that uses the FastAPI framework to create a web server for the AI-powered messaging application. The script interacts with OpenAI's GPT-3.5 Turbo model and an Ollama endpoint to generate responses, and it uses Sendblue's API to send messages.
The script first imports necessary libraries, including FastAPI, requests, logging, and other required modules.
```python
from dotenv import load_dotenv
import os, requests, time, openai, json, logging
from pprint import pprint
from typing import Union, List

from fastapi import FastAPI
from pydantic import BaseModel
from sendblue import Sendblue
```
This section sets up configuration variables, such as API keys, callback URL, Ollama API endpoint, and maximum context and word limits.
```python
SENDBLUE_API_KEY = os.environ.get("SENDBLUE_API_KEY")
SENDBLUE_API_SECRET = os.environ.get("SENDBLUE_API_SECRET")
openai.api_key = os.environ.get("OPENAI_API_KEY")
OLLAMA_API = os.environ.get("OLLAMA_API_ENDPOINT", "http://ollama:11434/api")
# could also use request.headers.get('referer') to do dynamically
CALLBACK_URL = os.environ.get("CALLBACK_URL")
MAX_WORDS = os.environ.get("MAX_WORDS")
```
Next, the script configures logging, setting the log level to INFO and creating a file handler that writes log messages to a file named `app.log`.
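That logging setup is not shown in the excerpts below; a minimal sketch of an equivalent configuration might look like this (the handler and format details here are assumptions, not the project's exact code):

```python
import logging

# Log INFO and above to the console
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Also write log messages to app.log
file_handler = logging.FileHandler("app.log")
file_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(file_handler)
```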
It then defines various functions for interacting with the AI models, managing context, sending messages, handling callbacks, and executing slash commands.
```python
def set_default_model(model: str):
    try:
        with open("default.ai", "w") as f:
            f.write(model)
            f.close()
        return
    except IOError:
        logger.error("Could not open file")
        exit(1)


def get_default_model() -> str:
    try:
        with open("default.ai") as f:
            default = f.readline().strip("\n")
            f.close()
        if default != "":
            return default
        else:
            set_default_model("llama2:latest")
            return ""
    except IOError:
        logger.error("Could not open file")
        exit(1)


def validate_model(model: str) -> bool:
    available_models = get_model_list()
    if model in available_models:
        return True
    else:
        return False


def get_ollama_model_list() -> List[str]:
    available_models = []
    tags = requests.get(OLLAMA_API + "/tags")
    all_models = json.loads(tags.text)
    for model in all_models["models"]:
        available_models.append(model["name"])
    return available_models


def get_openai_model_list() -> List[str]:
    return ["gpt-3.5-turbo", "dall-e-2"]


def get_model_list() -> List[str]:
    ollama_models = []
    openai_models = []
    all_models = []
    if "OPENAI_API_KEY" in os.environ:
        # print(openai.Model.list())
        openai_models = get_openai_model_list()
    ollama_models = get_ollama_model_list()
    all_models = ollama_models + openai_models
    return all_models


DEFAULT_MODEL = get_default_model()

if DEFAULT_MODEL == "":
    # This is probably the first run so we need to install a model
    if "OPENAI_API_KEY" in os.environ:
        print("No default model set. openai is enabled. using gpt-3.5-turbo")
        DEFAULT_MODEL = "gpt-3.5-turbo"
    else:
        print("No model found and openai not enabled. \nInstalling llama2:latest")
        pull_data = '{"name": "llama2:latest","stream": false}'
        try:
            pull_resp = requests.post(OLLAMA_API + "/pull", data=pull_data)
            pull_resp.raise_for_status()
        except requests.exceptions.HTTPError as err:
            raise SystemExit(err)
        set_default_model("llama2:latest")
        DEFAULT_MODEL = "llama2:latest"

if validate_model(DEFAULT_MODEL):
    logger.info("Using model: " + DEFAULT_MODEL)
else:
    logger.error("Model " + DEFAULT_MODEL + " not available.")
    logger.info(get_model_list())
    pull_data = '{"name": "' + DEFAULT_MODEL + '","stream": false}'
    try:
        pull_resp = requests.post(OLLAMA_API + "/pull", data=pull_data)
        pull_resp.raise_for_status()
    except requests.exceptions.HTTPError as err:
        raise SystemExit(err)


def set_msg_send_style(received_msg: str):
    """Will return a style for the message to send based on matched words in received message"""
    celebration_match = ["happy"]
    shooting_star_match = ["star", "stars"]
    fireworks_match = ["celebrate", "firework"]
    lasers_match = ["cool", "lasers", "laser"]
    love_match = ["love"]
    confetti_match = ["yay"]
    balloons_match = ["party"]
    echo_match = ["what did you say"]
    invisible_match = ["quietly"]
    gentle_match = []
    loud_match = ["hear"]
    slam_match = []

    received_msg_lower = received_msg.lower()

    if any(x in received_msg_lower for x in celebration_match):
        return "celebration"
    elif any(x in received_msg_lower for x in shooting_star_match):
        return "shooting_star"
    elif any(x in received_msg_lower for x in fireworks_match):
        return "fireworks"
    elif any(x in received_msg_lower for x in lasers_match):
        return "lasers"
    elif any(x in received_msg_lower for x in love_match):
        return "love"
    elif any(x in received_msg_lower for x in confetti_match):
        return "confetti"
    elif any(x in received_msg_lower for x in balloons_match):
        return "balloons"
    elif any(x in received_msg_lower for x in echo_match):
        return "echo"
    elif any(x in received_msg_lower for x in invisible_match):
        return "invisible"
    elif any(x in received_msg_lower for x in gentle_match):
        return "gentle"
    elif any(x in received_msg_lower for x in loud_match):
        return "loud"
    elif any(x in received_msg_lower for x in slam_match):
        return "slam"
    else:
        return
```
Two classes, `Msg` and `Callback`, are defined to represent the structure of incoming messages and callback data. The code also includes various functions and classes that handle different aspects of the messaging platform, such as setting default models, validating models, interacting with the Sendblue API, processing messages, handling slash commands, creating messages from context, and appending context to a file.
```python
class Msg(BaseModel):
    accountEmail: str
    content: str
    media_url: str
    is_outbound: bool
    status: str
    error_code: int | None = None
    error_message: str | None = None
    message_handle: str
    date_sent: str
    date_updated: str
    from_number: str
    number: str
    to_number: str
    was_downgraded: bool | None = None
    plan: str


class Callback(BaseModel):
    accountEmail: str
    content: str
    is_outbound: bool
    status: str
    error_code: int | None = None
    error_message: str | None = None
    message_handle: str
    date_sent: str
    date_updated: str
    from_number: str
    number: str
    to_number: str
    was_downgraded: bool | None = None
    plan: str


def msg_openai(msg: Msg, model="gpt-3.5-turbo"):
    """Sends a message to openai"""
    message_with_context = create_messages_from_context("openai")

    # Add the user's message and system context to the messages list
    messages = [
        {"role": "user", "content": msg.content},
        {"role": "system", "content": "You are an AI assistant. You will answer in haiku."},
    ]

    # Convert JSON strings to Python dictionaries and add them to messages
    messages.extend(
        [
            json.loads(line)  # Convert each JSON string back into a dictionary
            for line in message_with_context
        ]
    )

    # Send the messages to the OpenAI model
    gpt_resp = client.chat.completions.create(
        model=model,
        messages=messages,
    )

    # Append the system context to the context file
    append_context("system", gpt_resp.choices[0].message.content)

    # Send a message to the sender
    msg_response = sendblue.send_message(
        msg.from_number,
        {
            "content": gpt_resp.choices[0].message.content,
            "status_callback": CALLBACK_URL,
        },
    )
    return


def msg_ollama(msg: Msg, model=None):
    """Sends a message to the ollama endpoint"""
    if model is None:
        logger.error("Model is None when calling msg_ollama")
        return  # Optionally handle the case more gracefully

    ollama_headers = {"Content-Type": "application/json"}
    ollama_data = (
        '{"model":"'
        + model
        + '", "stream": false, "prompt":"'
        + msg.content
        + " in under "
        + str(MAX_WORDS)  # Make sure MAX_WORDS is a string
        + ' words"}'
    )
    ollama_resp = requests.post(
        OLLAMA_API + "/generate", headers=ollama_headers, data=ollama_data
    )
    response_dict = json.loads(ollama_resp.text)

    if ollama_resp.ok:
        send_style = set_msg_send_style(msg.content)
        append_context("system", response_dict["response"])
        msg_response = sendblue.send_message(
            msg.from_number,
            {
                "content": response_dict["response"],
                "status_callback": CALLBACK_URL,
                "send_style": send_style,
            },
        )
    else:
        msg_response = sendblue.send_message(
            msg.from_number,
            {
                "content": "I'm sorry, I had a problem processing that question. Please try again.",
                "status_callback": CALLBACK_URL,
            },
        )
    return
```
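The webhook route that ties these pieces together is not reproduced above. As a rough sketch (not the project's exact code), a `/msg` endpoint built from the functions shown could look like this:

```python
app = FastAPI()


@app.post("/msg")
def receive_msg(msg: Msg):
    """Handle an incoming Sendblue webhook and reply with the selected model."""
    model = get_default_model()
    if model in get_openai_model_list():
        msg_openai(msg, model=model)  # hosted OpenAI model
    else:
        msg_ollama(msg, model=model)  # local model served by Ollama
    return {"status": "ok"}
```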
Navigate to the `app/` directory and create a new file for the environment variables.
```console
touch .env

SENDBLUE_API_KEY=your_sendblue_api_key
SENDBLUE_API_SECRET=your_sendblue_api_secret
OLLAMA_API_ENDPOINT=http://host.docker.internal:11434/api
OPENAI_API_KEY=your_openai_api_key
```
Next, add your ngrok authtoken to the Docker Compose file. You can get the authtoken from your ngrok dashboard after signing up.
```yaml
services:
  lollm:
    build: ./app
    # command:
    #   - sleep
    #   - 1d
    ports:
      - 8000:8000
    env_file: ./app/.env
    volumes:
      - ./run/lollm:/run/lollm
    depends_on:
      - ollama
    restart: unless-stopped
    network_mode: "host"
  ngrok:
    image: ngrok/ngrok:alpine
    command:
      - "http"
      - "8000"
      - "--log"
      - "stdout"
    environment:
      - NGROK_AUTHTOKEN=2i6iXXXXXXXXhpqk1aY1
    network_mode: "host"
  ollama:
    image: ollama/ollama
    ports:
      - 11434:11434
    volumes:
      - ./run/ollama:/home/ollama
    network_mode: "host"
```
Running the application stack
Next, you can run the application stack, as follows:
```console
$ docker compose up
```
You will see output similar to the following:
```
[+] Running 4/4
 ✔ Container local-llm-messenger-ollama-1  Create...                            0.0s
 ✔ Container local-llm-messenger-ngrok-1   Created                              0.0s
 ✔ Container local-llm-messenger-lollm-1   Recreat...                           0.1s
 ! lollm Published ports are discarded when using host network mode             0.0s
Attaching to lollm-1, ngrok-1, ollama-1
ollama-1  | 2024/06/20 03:14:46 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
ollama-1  | time=2024-06-20T03:14:46.308Z level=INFO source=images.go:725 msg="total blobs: 0"
ollama-1  | time=2024-06-20T03:14:46.309Z level=INFO source=images.go:732 msg="total unused blobs removed: 0"
ollama-1  | time=2024-06-20T03:14:46.309Z level=INFO source=routes.go:1057 msg="Listening on [::]:11434 (version 0.1.44)"
ollama-1  | time=2024-06-20T03:14:46.309Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2210839504/runners
ngrok-1   | t=2024-06-20T03:14:46+0000 lvl=info msg="open config file" path=/var/lib/ngrok/ngrok.yml err=nil
ngrok-1   | t=2024-06-20T03:14:46+0000 lvl=info msg="open config file" path=/var/lib/ngrok/auth-config.yml err=nil
ngrok-1   | t=2024-06-20T03:14:46+0000 lvl=info msg="starting web service" obj=web addr=0.0.0.0:4040 allow_hosts=[]
ngrok-1   | t=2024-06-20T03:14:46+0000 lvl=info msg="client session established" obj=tunnels.session
ngrok-1   | t=2024-06-20T03:14:46+0000 lvl=info msg="tunnel session started" obj=tunnels.session
ngrok-1   | t=2024-06-20T03:14:46+0000 lvl=info msg="started tunnel" obj=tunnels name=command_line addr=http://localhost:8000 url=https://94e1-223-185-128-160.ngrok-free.app
ollama-1  | time=2024-06-20T03:14:48.602Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cuda_v11]"
ollama-1  | time=2024-06-20T03:14:48.603Z level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="7.7 GiB" available="3.9 GiB"
lollm-1   | INFO:     Started server process [1]
lollm-1   | INFO:     Waiting for application startup.
lollm-1   | INFO:     Application startup complete.
lollm-1   | INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
ngrok-1   | t=2024-06-20T03:16:58+0000 lvl=info msg="join connections" obj=join id=ce119162e042 l=127.0.0.1:8000 r=[2401:4900:8838:8063:f0b0:1866:e957:b3ba]:54384
lollm-1   | OLLAMA API IS http://host.docker.internal:11434/api
lollm-1   | INFO:     2401:4900:8838:8063:f0b0:1866:e957:b3ba:0 - "GET / HTTP/1.1" 200 OK
```
If you're testing on a system without an NVIDIA GPU, you can skip the `deploy` attribute in the Compose file.
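For reference, a GPU reservation in Compose generally looks like the snippet below for the ollama service; this is an illustrative sketch, and the project's own `deploy` block may differ:

```yaml
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```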
Watch the output for your ngrok endpoint. In our case, it shows: https://94e1-223-185-128-160.ngrok-free.app/
Next, append `/msg` to your ngrok URL to form the webhook URL: https://94e1-223-185-128-160.ngrok-free.app/msg. Then, add it under the webhooks URL section on Sendblue and save it (Figure 3). The ngrok service is configured to expose the lollmm service on port 8000 and provide a secure tunnel to the public internet using the ngrok-free.app domain.
The ngrok service logs show that it has started its web service, established a client session, and started a tunnel session to the lollmm service. The service uses the ngrok authentication token specified in the Compose file, which is required to create the tunnel. Overall, ngrok is running correctly and provides a secure tunnel to the lollmm service.
![Figure 3: Screenshot of SendBlue showing addition of the ngrok webhook URL under Webhooks.](https://www.docker.com/app/uploads/2024/08/F3-ngrok-authentication-1110x611.png)
Ensure that there are no error logs when you run the ngrok container (Figure 4).
![Figure 4: Screenshot showing local-llm-messenger-ngrok-1 log output.](https://www.docker.com/app/uploads/2024/08/F4-error-logs-1110x714.png)
Ensure that the LoLLM Messenger container is actively up and running (Figure 5).
![Figure 5: Screenshot showing the local-llm-messenger container status.](https://www.docker.com/app/uploads/2024/08/F5-LoLLM-running-1110x542.png)
The logs show that the Ollama service has opened the specified port (11434) and is listening for incoming connections. They also indicate that the `./run/ollama` directory from the host machine is mounted to the `/home/ollama` directory within the container.
Overall, the Ollama service is running correctly and is ready to provide AI models for inference.
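You can confirm this from the host by querying the same Ollama tags endpoint that the application uses to list available models:

```console
$ curl http://localhost:11434/api/tags
```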
Testing the functionality
To test the functionality of the lollmm service, first add your contact number in the Sendblue dashboard. You should then be able to send messages to the Sendblue number and observe the responses from the lollmm service (Figure 6).
![Figure 6: iMessage screenshot showing messages sent to the SendBlue number and responses from the lollm service.](https://www.docker.com/app/uploads/2024/08/F6-test-message-798x1024.png)
The Sendblue platform sends HTTP requests to the `/msg` endpoint of your lollmm service, which processes them and returns the appropriate responses (an example request you can send yourself follows the list below). Specifically:
- The lollmm service is set up to listen on port 8000.
- The ngrok tunnel is started and provides a public URL, such as https://94e1-223-185-128-160.ngrok-free.app.
- The lollmm service receives HTTP requests from the ngrok tunnel, including GET requests to the root path (`/`) and other paths, such as `/favicon.ico`, `/predict`, `/mdg`, and `/msg`.
- The lollmm service responds to these requests with appropriate HTTP status codes, such as 200 OK for successful requests and 404 Not Found for requests to paths that do not exist.
- The ngrok tunnel logs the join connections, indicating that clients are connecting to the lollmm service through the ngrok tunnel.
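To exercise the endpoint without going through Sendblue, you can post the sample payload from the repository's `test/` directory directly to the running service. This is a sketch that assumes `test/msg.json` matches the `Msg` schema the app expects:

```console
$ curl -X POST http://localhost:8000/msg \
    -H "Content-Type: application/json" \
    -d @test/msg.json
```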
![Figure 7: iMessage screenshot showing requests (/list and /help) and responses in chat.](https://www.docker.com/app/uploads/2024/08/F7-sending-messages-e1721152150910-968x1024.jpg)
The first time you chat with the LLM by typing `/list` (Figure 7), you can check the logs as shown:
```
ngrok-1   | t=2024-07-09T02:34:30+0000 lvl=info msg="join connections" obj=join id=12bd50a8030b l=127.0.0.1:8000 r=18.223.220.3:44370
lollm-1   | OLLAMA API IS http://host.docker.internal:11434/api
lollm-1   | INFO:     18.223.220.3:0 - "POST /msg HTTP/1.1" 200 OK
ngrok-1   | t=2024-07-09T02:34:53+0000 lvl=info msg="join connections" obj=join id=259fda936691 l=127.0.0.1:8000 r=18.223.220.3:36712
lollm-1   | INFO:     18.223.220.3:0 - "POST /msg HTTP/1.1" 200 OK
```
Next, let's install the codellama model by typing `/install codellama:latest` (Figure 8).
![Figure 8: iMessage screenshot showing installation of the codellama model by typing /install codellama:latest.](https://www.docker.com/app/uploads/2024/08/F8-install-codellama-e1721152098821-983x1024.jpg)
You can see the following container logs once you set the default model to `codellama:latest`:
```
ngrok-1   | t=2024-07-09T03:39:23+0000 lvl=info msg="join connections" obj=join id=026d8fad5c87 l=127.0.0.1:8000 r=18.223.220.3:36282
lollm-1   | setting default model
lollm-1   | INFO:     18.223.220.3:0 - "POST /msg HTTP/1.1" 200 OK
```
The lollmm service is running correctly and can handle HTTP requests from the ngrok tunnel. You can use the ngrok tunnel URL to test the functionality of the lollmm service by sending HTTP requests to the appropriate paths (Figure 9).
![Figure 9: iMessage screenshot showing sample questions sent to test functionality, such as "Who won the FIFA World Cup 2022?"](https://www.docker.com/app/uploads/2022/08/F9-testing-functionality_v2.png)
Conclusion
LoLLM Messenger is a valuable tool for developers and enthusiasts looking to push the boundaries of LLM integration within messaging apps. It lets developers craft custom chatbots for specific needs, add real-time sentiment analysis to messages, or explore entirely new AI features in their messaging experience.
To get started, explore the LoLLM Messenger project on GitHub and discover the potential of local LLMs.
Learn more
- Subscribe to the Docker Newsletter.
- Read the AI/ML Hackathon collection.
- Get the latest release of Docker Desktop.
- Vote on what’s next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.