Set Up Perplexica: Your Local, Private AI Search Guide
Are you tired of sending your search queries to big tech companies? Do you worry about your data privacy when using AI tools? Imagine having an AI search engine that runs right on your own computer. This guide shows you how to set up Perplexica, a powerful local and private AI search tool.
You will learn the exact steps to get Perplexica running on your machine using Docker. We provide clear commands and explanations. By the end, you will have your own private AI search engine ready to go.
Why Perplexica? Benefits of Local AI Search
Perplexica offers a major advantage: privacy. Your search queries and the AI’s responses stay local. They do not leave your machine unless you configure it otherwise.
This gives you control over your data. You avoid sending personal information to external servers. It is a truly private search experience.
Running AI locally also means potential offline access. Once models are downloaded, you might not need a constant internet connection for some functions. It is an open-source project, allowing for customization and experimentation.
Perplexica is a great alternative to commercial AI search services. It puts you back in charge of your information retrieval.
Understanding Perplexica’s Setup (Likely Docker)
Perplexica uses several parts working together. This includes a frontend (what you see), a backend (the logic), and an AI model.
Managing these parts can be complex. Docker makes it much simpler. Docker packages each part into its own container.
Think of containers as lightweight, isolated environments for apps: similar to small virtual machines, but sharing the host's kernel. Docker ensures all dependencies are met. It provides a consistent environment for Perplexica to run correctly.
Because Perplexica has multiple components, Docker is the recommended setup method. You must install Docker first.
Before You Begin: Prerequisites & System Requirements
Before you install Perplexica, you need a few things ready. First, you must install Docker Desktop on your computer. This is essential for running Perplexica’s containers.
You also need a local Large Language Model (LLM) service running. We recommend using Ollama for this. Perplexica needs an LLM to generate AI answers.
Link: Install Docker on Windows and Install Docker on MacOS
Link: Install Ollama
Ensure you install Docker and Ollama first. Download an LLM model in Ollama (e.g., run `ollama run llama2`) to make sure Ollama is working.
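Before moving on, it helps to confirm that Ollama is actually serving. A quick, hedged check, assuming Ollama's default setup:

```shell
# List the models Ollama has downloaded. If this prints a table with at
# least one model, Ollama is installed and its server is running.
# The fallback message is only a hint; the exact fix depends on your OS.
ollama list || echo "Ollama is not installed or its server is not running"
```

If `ollama list` succeeds but shows no models, run `ollama run llama2` once to pull one.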
System requirements are also important. You need a compatible operating system (Windows, macOS, or Linux). Sufficient RAM is key; at least 16GB is recommended. You also need significant disk space for models and data. A good CPU helps, and a GPU will make AI responses much faster, although Perplexica can run on CPU alone (just slower).
Link: Official Perplexica Requirements
Perplexica Setup Guide (Using Docker & Local LLM Service)
Now, let’s walk through the steps to set up Perplexica. Follow these commands carefully in your terminal or command prompt.
Step 1: Install Docker (If You Haven’t Already)
As mentioned, Docker is required. If you skipped the prerequisites, go back and install Docker now. Make sure Docker Desktop is running before you continue.
Link: Install Docker
Step 2: Install and Run a Local LLM Service (Like Ollama)
Perplexica needs an LLM to provide AI answers. Using a local LLM service like Ollama keeps everything private.
Install Ollama by following their official guide. After installing, run a model command like `ollama run llama2` once. This downloads the model and ensures Ollama is active. Perplexica will connect to this service.
Link: Install Ollama
Step 3: Get the Perplexica Code & Configure
First, you need to download the Perplexica project files. Open your terminal or command prompt application. Use the `git clone` command to get the code from GitHub.
Type this command and press Enter:
`git clone https://github.com/perplexica/perplexica.git`

This command copies all the necessary files into a new folder called `perplexica`. Next, you need to move into that folder.
Type this command and press Enter:
`cd perplexica`
Now you are inside the Perplexica project directory. Perplexica uses a configuration file called `.env`. This file tells Perplexica how to run and where to find things like your local LLM service. Copy the example file to create your own configuration file.
Type this command and press Enter:
`cp .env.example .env`
Now you have a `.env` file. You might need to edit this file to tell Perplexica where your local LLM service (like Ollama) is running. Open the `.env` file in a text editor (like VS Code, Notepad, etc.). Look for lines related to the LLM or API URL. You will likely need to set a variable pointing to your local Ollama instance. A common setting looks like this (confirm with Perplexica docs):
`LLM_BASE_URL=http://host.docker.internal:11434`
Save the `.env` file after making necessary changes.
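The whole configuration step can also be scripted. The sketch below works on a throwaway copy, so it is safe to run anywhere; the `LLM_BASE_URL` variable name and the example file contents are assumptions to confirm against Perplexica's actual `.env.example`:

```shell
set -eu
workdir=$(mktemp -d) && cd "$workdir"

# Stand-in for the project's .env.example (assumed contents).
printf 'LLM_BASE_URL=http://localhost:11434\n' > .env.example

# Same copy step as in the guide.
cp .env.example .env

# Inside a container, "localhost" means the container itself, so the URL
# must point at the Docker host instead. host.docker.internal works on
# Docker Desktop (Windows/macOS); on Linux, 172.17.0.1 (the default
# docker0 bridge address) is a common stand-in.
case "$(uname -s)" in
  Linux) host=172.17.0.1 ;;
  *)     host=host.docker.internal ;;
esac
sed -i.bak "s|^LLM_BASE_URL=.*|LLM_BASE_URL=http://${host}:11434|" .env

cat .env
```

In the real project directory you would skip the `mktemp` and `printf` lines and run the `sed` edit (or a text editor) against the cloned `.env` directly.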
Step 4: Start the Perplexica Containers
You are ready to start Perplexica. Docker Compose will read the `docker-compose.yml` file and your `.env` file. It will build the necessary container images (this takes time the first time) and start all the services.
Type this command and press Enter:
`docker compose up -d`
The `up` command starts the services. The `-d` flag means “detached”: it runs the containers in the background so you can continue using your terminal. You will see output showing images being pulled or built and containers being created. This step can take several minutes depending on your internet speed and computer.
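Because the containers start in the background, a small polling loop tells you when the web UI is actually answering. A sketch, assuming the default port 3000 and that `curl` is installed:

```shell
# Poll the Perplexica UI until it responds or we give up.
for i in 1 2 3 4 5 6 7 8 9 10; do
  if curl -s -o /dev/null http://localhost:3000; then
    echo "Perplexica is up"
    break
  fi
  echo "still starting... (attempt $i/10)"
  sleep 1
done
```

Increase the attempt count or the sleep interval if the first-time build is slow on your machine.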
Step 5: Verify Containers Are Running
After running the `docker compose up -d` command, you should check whether the containers started successfully. You can list the running Docker containers.
Type this command and press Enter:
`docker ps`
This command shows a list of containers that are currently running. Look for containers related to Perplexica. Their status should show as ‘Up’ followed by a time. If you see the Perplexica containers listed and ‘Up’, they are running correctly.
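The plain `docker ps` output can be noisy if other containers are running. You can narrow it down; the `name=perplexica` filter is an assumption about how Compose names the containers (it usually derives names from the project directory):

```shell
# Show only containers whose name contains "perplexica",
# with a compact name / status / ports view.
docker ps --filter "name=perplexica" \
  --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" \
  || echo "Docker does not appear to be running"
```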
Step 6: Access the Perplexica Web Interface
Perplexica runs as a web application. You access it through your web browser. It is typically available on your local machine at a specific address and port.
Open your favorite web browser (Chrome, Firefox, Safari, etc.). In the address bar, type the following address:
`http://localhost:3000`
Press Enter. You should see the Perplexica search interface load in your browser. This is where you will type your search queries.
Step 7: Select Model and Perform Your First Search
The Perplexica interface should now be visible. You may need to select the local LLM model you downloaded via Ollama in Perplexica’s settings, if the interface offers that option. Ensure it is configured to use your running Ollama service.
Find the search bar on the page. Type a question or a search query just like you would on any search engine. For example, “What are the benefits of local AI?”
Press Enter or click the search button. Perplexica will process your query. It will use its search capabilities (potentially external sources or a local index) and then send the information to your local LLM. The LLM will generate an answer based on the provided context. You should see search results displayed, along with the AI-generated answer.
Managing Perplexica (Basic Commands)
You now have Perplexica running. Here are a few basic commands to manage it.
To stop all Perplexica services that were started with `docker compose up -d`:
`docker compose down`
This command stops and removes the containers and networks created by `up`; named volumes, and any data stored in them, are kept unless you add the `-v` flag. To bring Perplexica back after a `down`, run `docker compose up -d` again. If you only stopped the containers (for example, with `docker compose stop`), you can restart them with:

`docker compose restart`

These commands should be run from within the Perplexica project directory you cloned earlier.
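When a new Perplexica version is released, the usual Compose update pattern applies. This is a generic sketch (the flags are standard Docker Compose options, not Perplexica-specific instructions), run from the cloned project directory:

```shell
# Fetch the latest code; this must run inside the cloned project directory.
git pull || echo "run this from inside the cloned perplexica directory"

# Rebuild images and recreate any containers whose configuration changed.
docker compose up -d --build || echo "is Docker running?"
```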
Troubleshooting Common Setup Issues
Setting up can sometimes hit bumps. Here are a few common problems and tips.
If the `docker compose` command is not found, ensure Docker Desktop is installed correctly and running. Also check that Docker Compose is available (it is included with Docker Desktop on Windows and macOS, but may need a separate install on Linux).
If containers fail to start after `docker compose up -d`, the logs can help. Navigate to the Perplexica directory in your terminal and run:
`docker compose logs`
This command shows the output from all the services, which often includes error messages. If `http://localhost:3000` doesn’t load, verify the containers are running using `docker ps`. Check your firewall; it might be blocking the connection. Ensure you typed the address correctly.
If the Perplexica interface loads but AI answers don’t work, check that your local LLM service (Ollama) is running. Verify that you configured the `.env` file correctly to point Perplexica to your LLM service. Check the `docker compose logs` output for errors related to the LLM connection.
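A pair of hedged connectivity checks can narrow down where the LLM link is broken. The service name `app` is a placeholder; substitute a real service name from your `docker-compose.yml`:

```shell
# 1) Is Ollama answering on the host? Its /api/tags endpoint
#    lists the downloaded models.
curl -s http://localhost:11434/api/tags \
  || echo "Ollama is not reachable from the host"

# 2) Can a Perplexica container reach it? (Requires curl inside the image;
#    "app" is a placeholder service name.)
docker compose exec app curl -s http://host.docker.internal:11434/api/tags \
  || echo "the container cannot reach the LLM service"
```

If the first check succeeds but the second fails, the problem is the URL in your `.env` (or Docker networking), not Ollama itself.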
What’s Next? Beyond Basic Search
You have successfully set up Perplexica for basic private AI search. Explore the Perplexica web interface for any settings it might offer. You might find options to change the LLM model or configure other aspects.
Perplexica is an open-source project. You can look into its documentation for more advanced configurations. This might include using different search APIs or exploring local data indexing capabilities if available. Enjoy using your new private AI search engine.
FAQs
Here are answers to some common questions about Perplexica setup and use.
Does Perplexica need internet? Yes, initially for cloning the repository, building Docker images, and potentially for initial search results (if configured to use external search APIs). However, the AI processing using a local LLM like Ollama happens offline.
How private is it? When using a local LLM and not configured to send data externally, your search queries and AI responses remain on your machine, offering high privacy.
Can I use external AI models like ChatGPT? Perplexica is designed for local AI. While it might support external APIs with configuration, its core benefit is private, local processing using models like those run by Ollama.
How much disk space does it use? This depends heavily on the size of the LLM models you download and any local data you might index. LLMs can be several gigabytes each.
Conclusion
You have completed the steps to set up Perplexica on your own computer. This guide walked you through using Docker and a local LLM service like Ollama. You now have a powerful, private AI search engine at your fingertips.
Setting up tools with Docker involves a few commands, but it is manageable. You gained control over your AI search experience. Go ahead and perform your first private search with Perplexica. Enjoy the benefits of local, private AI.