How to build an AI Slack bot

Hongbo Tian

Jul 7, 2025

How to build an AI Slack bot in minutes that answers questions and finds messages

Read our step-by-step walkthrough for building semantic search for Slack that retrieves and answers questions using conversation history.

You’re trying to track down a past conversation in Slack. Maybe you want to know about a bug, a launch, or a decision that was made. You remember it kind of happened last week on Friday… or was the message shared on Monday?

You try Slack search, but all you get are fragments. Messages out of context, spread across threads. So you ping a teammate, and now they’re spending time scrolling, too.

It’s not that the answer isn’t there. It’s just buried in a long, fast-moving thread, under off-topic replies. That’s exactly the kind of mess traditional search was never built to handle.

So, to help with this common issue, we decided to build a semantic search bot for Slack. One that helps your team ask questions like:

  • “Someone shared an article about improving revops with AI a few weeks ago... can you help me find that message?”

  • “Wait, did we already test ElevenLabs for TTS or are we still on that?”

And get answers instantly, with the right context, delivered directly in the thread where the question was asked. 

Here’s how you can build the same thing step-by-step for your team. No glue code or ML expertise required.

The stack

  • Programming language | Python

  • Web framework | FastAPI (with Uvicorn for serving)

  • Slack integration | slack-bolt and slack-sdk

  • Retrieval-augmented generation (RAG) | DuckyAI

  • LLM (via Groq) | llama3-70b-8192
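
Inferred from the imports and stack above (not copied from the repo), requirements.txt covers roughly these packages:

```text
fastapi
uvicorn
slack-bolt
slack-sdk
duckyai
python-dotenv
requests
groq            # if using the official Groq SDK rather than raw HTTP calls
```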

Step-by-step walkthrough

This guide provides a high-level overview of how to build a Slack bot using the code and resources in this project.

Step 1 - Set up your environment

  • Clone the repository and navigate to the examples/slack/ directory.

  • Create a virtual environment in your terminal using python -m venv venv.

  • Activate the environment you just created with source venv/bin/activate.

  • Install the required Python packages using pip install -r requirements.txt.

  • Ensure you have Docker installed if you plan to use the provided Docker setup.

  • Next, configure your environment variables in the .env file:

    • SLACK_BOT_TOKEN: Your Slack bot's OAuth token.

    • SLACK_APP_TOKEN: Your Slack app-level token for socket mode.

    • CHANNEL_ID: The Slack channel ID to fetch messages from.

    • LAST_N: Number of latest messages to fetch (set to 0 to fetch all).

    • DUCKY_API_KEY: API key for the DuckyAI service. You can obtain this by creating a project and API key in your DuckyAI dashboard. See the docs for instructions.

    • DUCKY_INDEX_NAME: Name of the index in DuckyAI for storing/searching knowledge.

    • GROQ_API_KEY: API key for the Groq API service (for LLM integration).

    • GROQ_MODEL_NAME: Name of the model to use with Groq API.

  • These variables are required for the bot to connect to Slack, fetch data, and use AI/semantic search services. Do not share your .env file publicly as it contains sensitive credentials.
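
For reference, a filled-in .env might look like this (all values are placeholders):

```text
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
CHANNEL_ID=C0123456789
LAST_N=0
DUCKY_API_KEY=your-ducky-api-key
DUCKY_INDEX_NAME=slack-knowledge
GROQ_API_KEY=gsk_...
GROQ_MODEL_NAME=llama3-70b-8192
```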

Step 2 - Fetch Slack channel data

Once your environment is configured, the next step is to fetch your Slack channel history.

  • Run the python src/fetch_slack.py script to fetch the channel history and save it as a JSON file (e.g., data/channel_history_enriched.json).

  • High-level logic:

  import os
  from slack_sdk import WebClient

  client = WebClient(token=os.getenv("SLACK_BOT_TOKEN"))

  # Fetch messages from the channel, paginating with the cursor
  all_msgs = []
  cursor = None
  while True:
      resp = client.conversations_history(channel=CHANNEL_ID, limit=PAGE_LIMIT, cursor=cursor)
      all_msgs.extend(resp["messages"])
      cursor = resp.get("response_metadata", {}).get("next_cursor", "")
      if not cursor:
          break
  # Fetch user info and enrich messages
  # Save as JSON
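
The two trailing comments can be fleshed out like this. The enrich_and_save helper and its field names are our sketch, not the repo's code:

```python
import json
import os

def enrich_and_save(client, messages, path="data/channel_history_enriched.json"):
    """Attach human-readable author names, then write the history to disk."""
    user_cache = {}  # cache lookups so we make one users_info call per author
    for msg in messages:
        uid = msg.get("user")
        if uid and uid not in user_cache:
            info = client.users_info(user=uid)
            user_cache[uid] = info["user"].get("real_name", uid)
        msg["user_name"] = user_cache.get(uid, "unknown")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(messages, f, ensure_ascii=False, indent=2)
```

Enriching with display names matters for the next step: indexed text that reads "Alice: we shipped the fix" retrieves far better than raw user IDs.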

Step 3 - Add knowledge to the bot

Now, we need to index the data for semantic search.

  • Run python src/add_knowledge.py to process and enrich the Slack channel data.

  • High-level logic:

from duckyai import DuckyAI
import os, json
from dotenv import load_dotenv

load_dotenv(override=True)
client = DuckyAI(api_key=os.getenv("DUCKY_API_KEY"))
ducky_index_name = os.getenv("DUCKY_INDEX_NAME")

# Load the fetched history and index it as a single document
with open("data/channel_history_enriched.json", "r", encoding="utf-8") as f:
    channel_history_content = json.dumps(json.load(f))

client.documents.index(
    index_name=ducky_index_name,
    content=channel_history_content,
)
  • You can obtain your DuckyAI API key by creating a project and API key in your DuckyAI dashboard. See the Ducky docs for instructions.

  • DuckyAI automatically handles chunking your data, embedding each chunk, and storing them in a vector database (vector store) under the specified index. This enables efficient semantic search without requiring you to manage chunking, embedding, or vector storage manually.

  • Once indexing completes, the channel history is ready for semantic retrieval by the bot; re-run this script whenever you fetch fresh messages.
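
If you'd rather index each message as its own document (often better retrieval granularity than one large JSON blob), the same documents.index call can be looped. This helper is our variation, assuming the call signature shown above:

```python
def index_messages(client, index_name, messages):
    """Index one Ducky document per Slack message."""
    for msg in messages:
        author = msg.get("user_name", "unknown")
        text = msg.get("text", "")
        if not text:
            continue  # skip join/leave events and empty messages
        # Prefix the author so retrieval results carry attribution
        client.documents.index(
            index_name=index_name,
            content=f"{author}: {text}",
        )
```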

Step 4 - Implement the Slack bot logic

With your data indexed, it’s time to wire up the bot. The main bot logic resides in src/app.py.

  • High-level logic:

from slack_bolt import App as SlackApp
from slack_bolt.adapter.socket_mode import SocketModeHandler
from duckyai import DuckyAI
import os, requests, threading
from fastapi import FastAPI
from dotenv import load_dotenv
load_dotenv(override=True)

# Set up DuckyAI and Slack clients
client = DuckyAI(api_key=os.getenv("DUCKY_API_KEY"))
index = os.getenv("DUCKY_INDEX_NAME")
SLACK_BOT_TOKEN = os.getenv("SLACK_BOT_TOKEN")
SLACK_APP_TOKEN = os.getenv("SLACK_APP_TOKEN")

bolt = SlackApp(token=SLACK_BOT_TOKEN)

@bolt.event("app_mention")
def handle_mention(body, say):
    # Drop the leading <@bot> mention; fall back to an empty prompt if nothing follows
    parts = body["event"]["text"].split(maxsplit=1)
    user_text = parts[1] if len(parts) > 1 else ""
    # Query the FastAPI backend for a response using semantic search
    resp = requests.post("http://localhost:8000/chat", json={"message": user_text}).json()
    answer = resp.get("response", "…")
    # Reply in the thread where the question was asked
    say(text=answer, thread_ts=body["event"].get("thread_ts") or body["event"]["ts"])

# Start the Slack bot (socket mode) in a background thread
threading.Thread(target=lambda: SocketModeHandler(bolt, SLACK_APP_TOKEN).start(), daemon=True).start()

  • This script connects to Slack, listens for messages, and uses semantic search to find relevant information from the channel history or knowledge base.

  • You can modify this logic to customize how the bot responds or processes messages.

Step 5 - Run the bot

Once everything is connected and configured, you’re ready to launch.

  • You can run the bot locally using Python, or use Docker Compose with the provided docker-compose.yml and Dockerfile for containerized deployment.

  • Ensure all environment variables and dependencies are correctly set up.
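
For the Docker route, a docker-compose.yml along these lines would do the job (a generic sketch, not the repo's actual file):

```yaml
services:
  slack-bot:
    build: .
    env_file: .env        # loads the tokens and keys configured in Step 1
    ports:
      - "8000:8000"       # FastAPI backend the bot queries
    restart: unless-stopped
```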

Step 6 - Test and iterate

Finally, test the bot in your Slack workspace. Mention it in a channel and ask questions to see how it responds.

Use this feedback loop to improve your knowledge base, tweak retrieval quality, or expand the bot’s capabilities.

For more details, refer to the code in the src/ directory and the README.md file.

GitHub repo link

How Ducky turns Slack into a searchable knowledge base in minutes

Building semantic search over Slack shouldn’t require weeks of setup, custom chunkers, or infrastructure headaches. Ducky handles the complexity behind the scenes so you can focus on building features your team actually needs. Here’s how. 

Built for real-world chaos

Slack conversations are rarely clean. Messages split across threads, loaded with emojis, context switches, and off-topic tangents. Ducky processes this mess using smart chunking and semantic indexing, eliminating the need for manual cleaning or formatting of your data.

Zero prep, zero glue code

Ducky is designed for developers who don’t want to spend days wiring up pipelines. There’s no need to write custom chunkers, tune vector storage, or stitch together multiple tools. Just fetch your messages, send them to Ducky, and start searching.

Use your preferred LLM

Already using GPT-4, Claude, or Groq? No problem. Ducky is model-agnostic and integrates easily with whatever LLM you’ve built your app on, giving you flexibility without extra setup.

Ship faster than you thought possible

From the first Slack export to a running bot, the entire flow takes under an hour. You can go from concept to production without writing infrastructure code or managing a vector database.

No ML expertise required

You don’t need to know how chunking or reranking works. Ducky handles retrieval quality behind the scenes, so you can build apps powered by semantic search without diving into the machine learning weeds.

Ready to try it yourself?

Skip the setup headaches and get straight to building. Sign up for your Ducky API key here and turn your Slack history into a searchable knowledge base in minutes.

No credit card required: we have a generous free tier to support builders.