


Integrating Large Language Models in Production Applications
Jan 07, 2025
In this practical guide, you will learn how to build a scalable, production-ready LLM integration for your applications.
In our examples, we will use Hugging Face's GPT-2 model, but you can easily plug in any other model, including GPT-4, Claude, and others.
Whether you are designing a new application with AI capabilities or improving an existing AI system, this guide will walk you step by step through building a solid LLM integration.
Understanding LLM Integration Fundamentals
Before we start writing code, let's figure out what it takes to build a production LLM integration. API calls are not the only thing to consider when building a production-ready LLM integration; you also need to account for reliability, cost, and stability. Your production application must handle service outages, rate limits, and variability in response time while keeping costs under control.
Here's what we'll build together:
- A robust API client that gracefully handles failures
- A smart caching system to optimize costs and speed
- A proper prompt management system
- Comprehensive error handling and monitoring
- A complete content moderation system as our example project
Prerequisites
Before we start coding, make sure you have:
- Python 3.8 or newer installed on your machine
- Redis cloud account or locally installed
- Basic Python programming knowledge
- Basic understanding of REST APIs
- A Hugging Face API key (or any other LLM provider key)
Want to follow along? The complete code is available in the accompanying GitHub repository.
Setting Up your Development Environment
Let's start by getting your development environment ready. We'll create a clean project structure and install all the necessary packages.
First, let's create your project directory and set up a Python virtual environment. Open your terminal and run:
mkdir llm_integration && cd llm_integration
python3 -m venv env
source env/bin/activate
Now let's set up your project dependencies. Create a new requirements.txt file with these essential packages:
transformers==4.36.0
huggingface-hub==0.19.4
redis==4.6.0
pydantic==2.5.0
pydantic-settings==2.1.0
tenacity==8.2.3
python-dotenv==1.0.0
fastapi==0.104.1
uvicorn==0.24.0
torch==2.1.0
numpy==1.24.3
Let's break down why we need each of these packages:
- transformers: Hugging Face's library that we will use to interface with the GPT-2 model.
- huggingface-hub: Handles model loading and versioning.
- redis: For implementing request caching.
- pydantic: Used for data validation and settings.
- tenacity: Provides the retry functionality that increases reliability.
- python-dotenv: For loading environment variables.
- fastapi: Lets us build API endpoints with very little code.
- uvicorn: Runs our FastAPI application efficiently.
- torch: For running transformer models and handling machine learning operations.
- numpy: Used for numerical computing.
Install all the packages with the command:
pip install -r requirements.txt
Let's organize your project with a clean structure. Create these directories and files in your project directory:
llm_integration/
├── core/
│   ├── llm_client.py          # Our main LLM interaction code
│   ├── prompt_manager.py      # Handles prompt templates
│   └── response_handler.py    # Processes LLM responses
├── cache/
│   └── redis_manager.py       # Manages our caching system
├── config/
│   └── settings.py            # Configuration management
├── api/
│   └── routes.py              # API endpoints
├── utils/
│   ├── monitoring.py          # Usage tracking
│   └── rate_limiter.py        # Rate limiting logic
├── requirements.txt
├── main.py
└── usage_logs.json
Building the LLM Client
Let's start with the LLM client, the most important component of the application. This is where we'll interact with the GPT-2 model (or any other LLM you prefer). Add the following code to your core/llm_client.py file:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from tenacity import retry, stop_after_attempt, wait_exponential
from typing import Dict, Optional
import logging

class LLMClient:
    def __init__(self, model_name: str = "gpt2", timeout: int = 30):
        try:
            self.tokenizer = AutoTokenizer.from_pretrained(model_name)
            self.model = AutoModelForCausalLM.from_pretrained(
                model_name,
                device_map="auto",
                torch_dtype=torch.float16
            )
        except Exception as e:
            logging.error(f"Error loading model: {str(e)}")
            # Fallback to a simpler model if the specified one fails
            self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
            self.model = AutoModelForCausalLM.from_pretrained("gpt2")

        self.timeout = timeout
        self.logger = logging.getLogger(__name__)
In this first part of the LLMClient class, we're setting up the foundation:
- We use AutoModelForCausalLM and AutoTokenizer from the transformers library to load the model
- The device_map="auto" parameter automatically handles GPU/CPU allocation
- We're using torch.float16 to optimize memory usage while maintaining good performance
Now let's add the method that talks to your model:
    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=4, max=10),
        reraise=True
    )
    async def complete(self,
                       prompt: str,
                       temperature: float = 0.7,
                       max_tokens: Optional[int] = None) -> Dict:
        """Get completion from the model with automatic retries"""
        try:
            inputs = self.tokenizer(prompt, return_tensors="pt").to(
                self.model.device
            )

            with torch.no_grad():
                outputs = self.model.generate(
                    **inputs,
                    max_new_tokens=max_tokens or 100,
                    temperature=temperature,
                    do_sample=True
                )

            response_text = self.tokenizer.decode(
                outputs[0],
                skip_special_tokens=True
            )

            # Calculate token usage for monitoring
            input_tokens = len(inputs.input_ids[0])
            output_tokens = len(outputs[0]) - input_tokens

            return {
                'content': response_text,
                'usage': {
                    'prompt_tokens': input_tokens,
                    'completion_tokens': output_tokens,
                    'total_tokens': input_tokens + output_tokens
                },
                'model': "gpt2"
            }

        except Exception as e:
            self.logger.error(f"Error in LLM completion: {str(e)}")
            raise
Let's break down what's happening in this completion method:
- The @retry decorator handles temporary failures by retrying the call with exponential backoff.
- The torch.no_grad() context manager saves memory by disabling gradient calculations.
- Token usage is tracked for both input and output, which is essential for cost monitoring.
- The method returns a structured dictionary with the response text and usage statistics.
Creating your LLM Response Handler
Next, we need to add the response handler to parse and structure the LLM's raw output. Do that in your core/response_handler.py file with the following code snippets:
from typing import Dict
import logging

class ResponseHandler:
    def __init__(self):
        self.logger = logging.getLogger(__name__)

    def parse_moderation_response(self, raw_response: str) -> Dict:
        """Parse and structure the raw LLM response for moderation"""
        try:
            # Default response structure
            structured_response = {
                "is_appropriate": True,
                "confidence_score": 0.0,
                "reason": None
            }

            # Simple keyword-based analysis
            lower_response = raw_response.lower()

            # Check for inappropriate content signals
            if any(word in lower_response for word in
                   ['inappropriate', 'unsafe', 'offensive', 'harmful']):
                structured_response["is_appropriate"] = False
                structured_response["confidence_score"] = 0.9

                # Extract reason if present
                if "because" in lower_response:
                    reason_start = lower_response.find("because")
                    structured_response["reason"] = raw_response[reason_start:].split('.')[0].strip()
            else:
                structured_response["confidence_score"] = 0.95

            return structured_response

        except Exception as e:
            self.logger.error(f"Error parsing response: {str(e)}")
            return {
                "is_appropriate": True,
                "confidence_score": 0.5,
                "reason": "Failed to parse response"
            }

    def format_response(self, raw_response: Dict) -> Dict:
        """Format the final response with parsed content and usage stats"""
        try:
            return {
                "content": self.parse_moderation_response(raw_response["content"]),
                "usage": raw_response["usage"],
                "model": raw_response["model"]
            }
        except Exception as e:
            self.logger.error(f"Error formatting response: {str(e)}")
            raise
Adding a Robust Caching System
Now let's create your caching system to improve the application performance and reduce costs. Add the following code snippets to your cache/redis_manager.py file:
import redis
from typing import Optional, Any
import json
import hashlib

class CacheManager:
    def __init__(self, redis_url: str, ttl: int = 3600):
        self.redis = redis.from_url(redis_url)
        self.ttl = ttl

    def _generate_key(self, prompt: str, params: dict) -> str:
        """Generate a unique cache key"""
        cache_data = {
            'prompt': prompt,
            'params': params
        }
        serialized = json.dumps(cache_data, sort_keys=True)
        return hashlib.sha256(serialized.encode()).hexdigest()

    async def get_cached_response(self, prompt: str, params: dict) -> Optional[dict]:
        """Retrieve cached LLM response"""
        key = self._generate_key(prompt, params)
        cached = self.redis.get(key)
        return json.loads(cached) if cached else None

    async def cache_response(self, prompt: str, params: dict, response: dict) -> None:
        """Cache LLM response"""
        key = self._generate_key(prompt, params)
        self.redis.setex(
            key,
            self.ttl,
            json.dumps(response)
        )
In the above code snippets, we created a CacheManager class that handles all caching operations with the following:
- The _generate_key method, which creates unique cache keys based on prompts and parameters
- get_cached_response which checks if we have a cached response for a given prompt
- cache_response that stores successful responses for future use
Creating a Smart Prompt Manager
Let's create your prompt manager that will manage the prompts for your LLM model. Add the following code to your core/prompt_manager.py:
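A minimal sketch of such a prompt manager, assuming templates are stored as JSON files in a prompts/ directory and expose a template string with Python format-style placeholders (names like get_prompt are illustrative):

import json
from pathlib import Path
from typing import Dict

class PromptManager:
    def __init__(self, templates_dir: str = "prompts"):
        self.templates_dir = Path(templates_dir)
        self.templates: Dict[str, dict] = {}
        self._load_templates()

    def _load_templates(self) -> None:
        """Load every JSON prompt template found in the templates directory"""
        for template_file in self.templates_dir.glob("*.json"):
            with open(template_file, "r") as f:
                self.templates[template_file.stem] = json.load(f)

    def get_prompt(self, template_name: str, **kwargs) -> str:
        """Return a template with its placeholders filled in"""
        template = self.templates.get(template_name)
        if template is None:
            raise ValueError(f"Unknown prompt template: {template_name}")
        return template["template"].format(**kwargs)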
Then create a sample prompt template for content moderation in your prompts/content_moderation.json file with code snippets:
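The exact wording is up to you; a template along these lines works with the prompt manager sketched above, and it deliberately asks the model to answer with words like "inappropriate" and "because" so the keyword checks in ResponseHandler can pick them up:

{
    "name": "content_moderation",
    "description": "Prompt template for the content moderation example",
    "template": "You are a content moderator. Review the following content and decide whether it is appropriate for a general audience.\n\nContent: {content}\n\nIf the content is unsafe, offensive, or harmful, answer 'inappropriate because ...' and explain the reason. Otherwise answer 'appropriate'.\n\nAnswer:"
}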
Now the prompt manager can load prompt templates from JSON files and return formatted prompts that are ready to send to the model.
Setting Up a Configuration Manager
To keep all your LLM configurations in one place and easily reuse them across your application, let's create configuration settings. Add the code below to your config/settings.py file:
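A minimal sketch using pydantic-settings; the field names and defaults below are illustrative, so adjust them to your needs:

from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # LLM settings
    huggingface_api_key: str = ""
    llm_model_name: str = "gpt2"
    temperature: float = 0.7
    max_tokens: int = 100

    # Cache settings
    redis_url: str = "redis://localhost:6379"
    cache_ttl: int = 3600

    # Rate limiting settings
    rate_limit_requests: int = 10
    rate_limit_period: int = 60

settings = Settings()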
Implementing Rate Limiting
Next, let's implement rate limiting to control how users access your application’s resources. To do that, add the following code to your utils/rate_limiter.py file:
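Here is a sketch of a simple Redis-backed fixed-window limiter; the constructor arguments and key format are illustrative:

import redis
from fastapi import HTTPException

class RateLimiter:
    def __init__(self, redis_url: str, requests: int = 10, period: int = 60):
        self.redis = redis.from_url(redis_url)
        self.requests = requests   # allowed requests per window
        self.period = period       # window length in seconds

    async def check_rate_limit(self, client_id: str) -> None:
        """Raise HTTP 429 when a client exceeds its quota for the current window"""
        key = f"rate_limit:{client_id}"
        current = self.redis.incr(key)
        if current == 1:
            # First request in this window: start the expiry clock
            self.redis.expire(key, self.period)
        if current > self.requests:
            raise HTTPException(
                status_code=429,
                detail="Rate limit exceeded. Please try again later."
            )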
In the RateLimiter we implemented a reusable check_rate_limit method that any route can use to enforce rate limits, simply by configuring the number of requests allowed per user and the time period those requests apply to.
Creating your API Endpoints
Now let's create your API endpoints in the api/routes.py file to integrate your LLM in your application:
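Here is a sketch of what these routes could look like, reusing the classes and settings defined earlier. The dependency functions match the names described below, while ModerationRequest and its content field are illustrative choices for the request body:

from functools import lru_cache
from fastapi import APIRouter, Depends, HTTPException, Request
from pydantic import BaseModel

from core.llm_client import LLMClient
from core.prompt_manager import PromptManager
from core.response_handler import ResponseHandler
from cache.redis_manager import CacheManager
from utils.rate_limiter import RateLimiter
from config.settings import settings

router = APIRouter()
rate_limiter = RateLimiter(
    redis_url=settings.redis_url,
    requests=settings.rate_limit_requests,
    period=settings.rate_limit_period
)

class ModerationRequest(BaseModel):
    content: str

@lru_cache()
def get_llm_client() -> LLMClient:
    return LLMClient(model_name=settings.llm_model_name)

@lru_cache()
def get_response_handler() -> ResponseHandler:
    return ResponseHandler()

@lru_cache()
def get_cache_manager() -> CacheManager:
    return CacheManager(redis_url=settings.redis_url, ttl=settings.cache_ttl)

@lru_cache()
def get_prompt_manager() -> PromptManager:
    return PromptManager()

@router.post("/moderate")
async def moderate_content(
    request: Request,
    body: ModerationRequest,
    llm_client: LLMClient = Depends(get_llm_client),
    response_handler: ResponseHandler = Depends(get_response_handler),
    cache_manager: CacheManager = Depends(get_cache_manager),
    prompt_manager: PromptManager = Depends(get_prompt_manager),
):
    # Enforce the per-client request quota before doing any work
    await rate_limiter.check_rate_limit(request.client.host)

    prompt = prompt_manager.get_prompt("content_moderation", content=body.content)
    params = {"temperature": settings.temperature, "max_tokens": settings.max_tokens}

    # Serve a cached result if we have already moderated this exact content
    cached = await cache_manager.get_cached_response(prompt, params)
    if cached:
        return cached

    try:
        raw_response = await llm_client.complete(prompt, **params)
    except Exception as e:
        raise HTTPException(status_code=503, detail=f"LLM service error: {str(e)}")

    result = response_handler.format_response(raw_response)
    await cache_manager.cache_response(prompt, params, result)
    return result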
Here we defined a /moderate endpoint in the APIRouter class, which is responsible for organizing API routes. The @lru_cache decorator is applied to dependency injection functions (get_llm_client, get_response_handler, get_cache_manager, and get_prompt_manager) to ensure that instances of LLMClient, CacheManager, and PromptManager are cached for better performance. The moderate_content function, decorated with @router.post, defines a POST route for content moderation and utilizes FastAPI's Depends mechanism to inject these dependencies. Inside the function, the RateLimiter class, configured with rate limit settings from settings, enforces request limits.
Finally, let's update your main.py to bring everything together:
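A minimal sketch of main.py under these assumptions:

import logging

import uvicorn
from fastapi import FastAPI

from api.routes import router

# Show informational messages with timestamps
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)

app = FastAPI(title="LLM Integration API")
app.include_router(router, prefix="/api/v1")

if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000, reload=True)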
In the above code, we create the FastAPI app, mount the router from api.routes under the /api/v1 prefix, and enable logging so informational messages are displayed with timestamps. The app runs on localhost:8000 using Uvicorn, with hot-reloading enabled.
Running your Application
We now have all the components in place, so let's get the application up and running. First, create a .env file in your project root directory and add your HUGGINGFACE_API_KEY and REDIS_URL:
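Replace the placeholder values below with your own key and Redis connection string:

HUGGINGFACE_API_KEY=your_huggingface_api_key
REDIS_URL=redis://localhost:6379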
Then ensure Redis is running on your machine. On most Unix-based systems, you can start it with the command:
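On a local installation this is usually:

redis-server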
Now you can start your application:
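Assuming main.py launches Uvicorn itself, as in the sketch above, you can start the server with:

python main.py

Running uvicorn main:app --reload from the project root works just as well.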
Your FastAPI server will start on http://localhost:8000. The automatic API documentation will be available at http://localhost:8000/docs, which is very handy for testing your endpoints!
Testing your Content Moderation API
Let's test your newly created API with a real request. Open a new terminal and run this curl command:
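Here is an example request; the exact body depends on the request model you defined in routes.py (the sketch above expects a JSON object with a content field):

curl -X POST "http://localhost:8000/api/v1/moderate" \
  -H "Content-Type: application/json" \
  -d '{"content": "This is a perfectly normal test message."}'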
You should see a response like this on your terminal:
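The exact numbers and wording will vary from run to run, but the shape follows what ResponseHandler.format_response returns, for example:

{
  "content": {
    "is_appropriate": true,
    "confidence_score": 0.95,
    "reason": null
  },
  "usage": {
    "prompt_tokens": 48,
    "completion_tokens": 20,
    "total_tokens": 68
  },
  "model": "gpt2"
}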
Adding Monitoring and Analytics
Now let's add some monitoring features to track how your application is performing and how many resources are being used. The monitoring code lives in your utils/monitoring.py file.
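A minimal sketch of what such a monitor could look like, assuming each request is appended to usage_logs.json as one JSON record per line; the class and method names are illustrative:

import json
from datetime import datetime
from typing import Dict

class UsageMonitor:
    def __init__(self, log_file: str = "usage_logs.json"):
        self.log_file = log_file

    def log_request(self, endpoint: str, usage: Dict, duration_ms: float) -> None:
        """Append one structured record per API request"""
        entry = {
            "timestamp": datetime.utcnow().isoformat(),
            "endpoint": endpoint,
            "prompt_tokens": usage.get("prompt_tokens", 0),
            "completion_tokens": usage.get("completion_tokens", 0),
            "total_tokens": usage.get("total_tokens", 0),
            "duration_ms": duration_ms,
        }
        # A JSON-lines file keeps appends cheap; swap this for a database in production
        with open(self.log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")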
The UsageMonitor class will be performing the following operations:
- Tracking every API request with timestamps
- Recording token usage for cost monitoring
- Measuring response times
- Storing everything in a structured log file (replace this with a database before you deploy your application to production)
Next, add a new method to calculate usage statistics:
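Continuing the UsageMonitor sketch above, such a method could aggregate the logged records like this (the field names match what log_request writes):

    def get_usage_stats(self) -> Dict:
        """Aggregate the logged requests into simple usage statistics"""
        total_requests = 0
        total_tokens = 0
        total_duration = 0.0

        try:
            with open(self.log_file, "r") as f:
                for line in f:
                    entry = json.loads(line)
                    total_requests += 1
                    total_tokens += entry.get("total_tokens", 0)
                    total_duration += entry.get("duration_ms", 0.0)
        except FileNotFoundError:
            pass  # No requests logged yet

        return {
            "total_requests": total_requests,
            "total_tokens": total_tokens,
            "average_response_time_ms": (
                total_duration / total_requests if total_requests else 0.0
            ),
        }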
Update your API to add the monitoring features from the UsageMonitor class:
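A sketch of what this update could look like in api/routes.py, assuming the route and UsageMonitor interfaces from the sketches above: time the LLM call inside moderate_content, log it, and expose a /stats endpoint:

import time
from utils.monitoring import UsageMonitor

usage_monitor = UsageMonitor()

# Inside moderate_content, wrap the LLM call so every request is logged:
#     start = time.perf_counter()
#     raw_response = await llm_client.complete(prompt, **params)
#     duration_ms = (time.perf_counter() - start) * 1000
#     usage_monitor.log_request("/moderate", raw_response["usage"], duration_ms)

@router.get("/stats")
async def get_stats():
    """Return the aggregated usage statistics collected by the UsageMonitor"""
    return usage_monitor.get_usage_stats()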
Now, test your /stats endpoint by running this curl command:
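Assuming the same /api/v1 prefix as before:

curl http://localhost:8000/api/v1/stats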
The above command returns the aggregated statistics for requests made to the /moderate endpoint.
Conclusion
Throughout this tutorial, you have learned how to use a large language model in production applications. You implemented features like API clients, caching, prompt management, and error handling, and brought these concepts together in a content moderation system.
Now that you have a solid foundation, you could enhance your system with:
- Streaming responses for real-time applications
- A/B testing for prompt improvements
- A web-based interface to manage prompts
- Custom model fine-tuning
- Integration with third-party monitoring services
Recall that the examples used the GPT-2 model, but you can adapt this system to work with any LLM provider. Choose the model that meets your requirements and fits your budget.
Please don’t hesitate to contact me if you have questions or if you want to tell me what you are building with this system.
Happy coding!