How to Install and Use llm-app: Unlocking a New Dimension of AI Applications!
Tuesday, Dec 31, 2024 | 6 minute read
Revolutionize your AI development! This open-source project streamlines app creation, excels at information retrieval and data generation, and ships with ready-made templates, making it easy for developers to build impactful applications. Seamlessly blend technology and innovation!
1. Unlock a New Dimension of AI Applications: Dive Deep into llm-app
“In this rapidly advancing era of intelligence, building and deploying AI applications has become more important than ever!”
With the fast-paced development of artificial intelligence technology, various fields are racing to find solutions that can vastly improve efficiency and enhance innovation! For developers, quickly constructing effective AI applications has become a top priority. Amidst this wave, llm-app has emerged as a hot new project!
llm-app is an innovative open-source project designed to simplify and speed up the AI application development process. It excels particularly at information retrieval and data generation, providing developers with powerful support. Built on Pathway's AI Pipelines, llm-app lets developers easily build and deploy effective AI applications from a rich set of functionalities and templates, achieving a genuine blend of technology and business.
2. Quick Start: How to Install and Use llm-app
1. Install Docker
First, before getting started with llm-app, make sure that Docker is installed on your system. This step is crucial, as llm-app operates in a Docker environment. If you haven’t installed Docker yet, follow these simple steps:
- Visit the Docker official website.
- Choose the appropriate installation package for your operating system and download it.
- After installation, ensure that Docker is up and running (a quick check is shown below).
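Once installed, you can confirm Docker is working from a terminal with two standard Docker CLI commands:

docker --version        # Print the installed Docker version
docker run hello-world  # Run a minimal test container to verify the setup

If both commands succeed, Docker is ready to go.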
2. Install llm-app Components
Next, let’s install the components of llm-app. Follow these steps to get started quickly:
Pull the Docker Image
First, pull the llm-app image from Docker Hub using the following command:
docker pull pathwaycom/llm-app
Here, pathwaycom/llm-app is the name of the Docker image, and the docker pull command downloads it to your local machine.
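To confirm the image was downloaded, you can list the matching local images (again a standard Docker CLI command):

docker images pathwaycom/llm-app  # List locally available copies of the image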
Run the Application
After downloading the Docker image, run llm-app with the following command:
docker run -p 8080:80 pathwaycom/llm-app
The -p 8080:80 parameter maps port 80 of the container (the default port where the app runs) to port 8080 on the host, which lets us access the application in a browser at http://localhost:8080!
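You can also probe the mapped port from the command line. The exact response body depends on which llm-app template the image serves, so treat this as a simple reachability check rather than a guaranteed endpoint:

curl http://localhost:8080/  # Send a test request to the forwarded port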
Sample Code Usage
Once the application is up and running, you can create a basic LLM application. Here’s a preliminary code snippet:
# Sample code to build a Pathway LLM application
def build_llm_app(data_source):
    # Connect to the data source
    # Perform data indexing
    ...
This build_llm_app function will connect to the data source and perform data indexing. We will delve into the specific implementation details in the sections below.
3. Detailed Code Commentary
We will go through the code line by line to help you better understand how to leverage llm-app to build powerful AI applications.
4. Prepare Directory and Set Up API Key
First, we need to create a directory to store data files and set up the OpenAI API key!
# Create data directory
!mkdir -p 'data/' # Directory for storing files
The mkdir command creates a directory named data, ensuring you have a place to store data files for the upcoming steps (the leading ! runs the shell command from a Jupyter notebook cell).
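If you are working in a plain Python script rather than a notebook, the same directory can be created portably:

import os

os.makedirs("data", exist_ok=True)  # Create ./data; do nothing if it already exists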
Next, set the OpenAI API key in Python:
import os  # Import the OS module
import getpass  # Import the module for reading input without echoing it

# Check if OPENAI_API_KEY is already set in environment variables
if "OPENAI_API_KEY" in os.environ:
    api_key = os.environ["OPENAI_API_KEY"]  # Get API key from environment variable
else:
    api_key = getpass.getpass("OpenAI API Key:")  # Prompt user to input API key if not set
This code checks whether OPENAI_API_KEY has been set in the environment variables; if not, it prompts the user to enter the API key.
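Since libraries such as langchain_openai can also read the key from the environment, it is convenient (though not strictly required, as we pass it explicitly below) to write whatever value was obtained back into os.environ:

os.environ["OPENAI_API_KEY"] = api_key  # Make the key visible to any library that reads the environment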
5. Use the Pathway File System Connector to Read Files
Next, let’s use the Pathway library to read the content of files:
import pathway as pw  # Import the Pathway module
from pathway.xpacks.llm.vector_store import VectorStoreServer  # Import the vector store server
from langchain_openai import OpenAIEmbeddings  # Import the OpenAI embeddings component
from langchain.text_splitter import CharacterTextSplitter  # Import the text splitter component

# Read data from the file system
data = pw.io.fs.read(  # Read data using the Pathway file system connector
    "./data",  # Specify the file path as the data directory
    format="binary",  # Specify the data format as binary, suitable for non-text files
    mode="streaming",  # Use streaming mode: keep watching the directory for new or changed files
    with_metadata=True,  # Read metadata for better file management
)

# Initialize OpenAI embeddings and text splitter
embeddings = OpenAIEmbeddings(api_key=api_key)  # Create an instance of OpenAI embeddings
splitter = CharacterTextSplitter()  # Create an instance of the character text splitter
In this code snippet, we import the necessary modules and use pw.io.fs.read to read the data, specifying the path, format, and mode. The embedding instance and text splitter will be used during data processing.
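CharacterTextSplitter also accepts tuning parameters such as chunk size and overlap. The values below are purely illustrative, not recommendations:

splitter = CharacterTextSplitter(
    separator="\n\n",   # Split on blank lines
    chunk_size=1000,    # Target number of characters per chunk
    chunk_overlap=100,  # Number of characters shared between consecutive chunks
)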
6. Run the Vector Store Server
Next, let’s set up and run a vector store server to store and query data:
# Set the host and port for the server
host = "127.0.0.1"  # Localhost address
port = 8666  # Custom port

# Create and run the vector store server
server = VectorStoreServer.from_langchain_components(  # Create the vector store server from LangChain components
    data, embedder=embeddings, splitter=splitter  # Use the data stream, embedder, and splitter defined above
)
server.run_server(  # Run the server
    host, port=port,
    with_cache=True,  # Enable caching for performance improvement
    cache_backend=pw.persistence.Backend.filesystem("./Cache"),  # Specify the cache backend
    threaded=True,  # Run the server in a separate thread to support concurrent use
)
Here, we define the host and port, create a VectorStoreServer, and start it so it is ready to handle queries at any time!
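One practical note: with threaded=True, run_server returns control while the server keeps working in a background thread. In a standalone script you would then need to keep the main thread alive; a simple sketch (plain Python, not part of the Pathway API):

import time

try:
    while True:
        time.sleep(1)  # Keep the main thread alive while the server handles requests
except KeyboardInterrupt:
    pass  # Stop on Ctrl+C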
7. Query Data Using PathwayVectorClient
Once the vector store server is running, we can use PathwayVectorClient to perform queries:
from langchain_community.vectorstores import PathwayVectorClient # Import the Pathway client
# Initialize PathwayVectorClient
client = PathwayVectorClient(host=host, port=port) # Connect to the vector store server
query = "What is Pathway?" # Define the query string
docs = client.similarity_search(query) # Perform similarity search
print(docs) # Print the search results
This code initializes a PathwayVectorClient instance, connects to the running server, defines a query string, performs a similarity search, and prints the results for easy interpretation!
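PathwayVectorClient follows the standard LangChain vector store interface, so similarity_search should also accept a k parameter that controls how many documents come back. A short sketch:

docs = client.similarity_search(query, k=3)  # Return the 3 most similar documents
for doc in docs:
    print(doc.page_content[:200])  # Preview the first 200 characters of each match
    print(doc.metadata)            # Inspect the file metadata attached by Pathway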
8. Get Vector Store Statistics
To monitor the system status, we can also retrieve statistics from the vector store:
print(client.get_vectorstore_statistics()) # Get and print vector store statistics
print(client.get_input_files()) # Retrieve and print the list of input files
This code displays the current statistics of the vector store server, helping you understand the system’s operational status.
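Assuming the statistics come back as a plain, JSON-serializable dictionary (an assumption about the client's return type), pretty-printing makes them easier to read:

import json

stats = client.get_vectorstore_statistics()  # Fetch the raw statistics
print(json.dumps(stats, indent=2))  # Formatted output, one field per line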
9. Submit Queries and Output Responses
Lastly, don’t forget to submit queries to the query engine and output the responses.
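One gap worth flagging: query_engine is never constructed in the steps above. The snippet below is a sketch of one way to obtain such an object, assuming the LlamaIndex Pathway retriever integration (the llama-index-retrievers-pathway package) pointed at the server we started earlier; the original walkthrough does not specify which query engine it intends:

from llama_index.core.query_engine import RetrieverQueryEngine  # Assumes LlamaIndex is installed
from llama_index.retrievers.pathway import PathwayRetriever  # Assumed integration package

retriever = PathwayRetriever(host=host, port=port)  # Connect to the running vector store server
query_engine = RetrieverQueryEngine.from_args(retriever)  # Wrap the retriever in a query engine; uses the OpenAI key set earlier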
response = query_engine.query("What is Pathway?") # Submit the query to the query engine
print(str(response)) # Print the string representation of the query response
In this part, we send a query to the query engine and print out the response, making it easy to observe the query results and the information returned!
Conclusion
Through these straightforward steps and code snippets, we can quickly get started with Pathway’s llm-app to build powerful AI applications! With the diverse application templates Pathway offers, we can easily connect to various data sources and build flexible, dynamic real-time indexes to meet a multitude of needs. Whether building a Q&A system or performing document retrieval, Pathway provides tremendous convenience and support to developers, letting us embrace the future of AI together!