

A Complete Guide to LangChain in Python

Feb 10, 2025 am 08:29 AM

LangChain: a powerful Python library for building, experimenting with, and analyzing language models and agents


Core points:

  • LangChain is a Python library that simplifies the creation, experimentation and analysis of language models and agents, providing a wide range of functions for natural language processing.
  • It allows the creation of multifunctional agents that can understand and generate text, and that can be configured with specific behaviors and data sources to perform various language-related tasks.
  • LangChain provides three types of models: Large Language Model (LLM), Chat Model and Text Embedding Model, each providing unique functionality for language processing tasks.
  • It also provides features such as segmenting large text into easy-to-manage blocks, linking multiple LLM functions through chains to perform complex tasks, and integrating with various LLM and AI services outside of OpenAI.

LangChain is a powerful Python library that enables developers and researchers to create, experiment with, and analyze language models and agents. It provides natural language processing (NLP) enthusiasts with a rich set of features, from building custom models to efficiently manipulating text data. In this comprehensive guide, we will dig into the basic components of LangChain and demonstrate how to take advantage of its power in Python.

Environment setup:

To follow along with this article, create a new folder and install LangChain and OpenAI using pip:

pip3 install langchain openai

Agents:

In LangChain, an agent is an entity that can understand and generate text. Agents can be configured with specific behaviors and data sources and trained to perform various language-related tasks, making them multi-functional tools for a variety of applications.

Creating a LangChain agent:

Agents can be configured to use "tools" to collect the required data and formulate a good response. Please see the example below. It uses the Serp API (an internet search API) to search for information related to a question or input and respond. It also uses the llm-math tool to perform mathematical operations — for example, converting units or finding the percentage change between two values:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["SERPAPI_API_KEY"] = "YOUR_SERP_API_KEY"  # get your Serp API key here: https://serpapi.com/

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How much energy did wind turbines produce worldwide in 2022?")

As you can see, after the basic imports and LLM initialization (llm = OpenAI(model="gpt-3.5-turbo", temperature=0)), the code loads the tools the agent needs with tools = load_tools(["serpapi", "llm-math"], llm=llm). It then creates the agent with the initialize_agent function, passing it the specified tools and the ZERO_SHOT_REACT_DESCRIPTION agent type, which means it will not remember previous questions.

Agent test example 1:

Let's test this agent with the following input:

<code>"How much energy did wind turbines produce worldwide in 2022?"</code>


As you can see, it uses the following logic:

  • Search for "wind turbine energy production worldwide 2022" using Serp Internet Search API
  • The best results for analysis
  • Get any relevant numbers
  • Use the llm-math tool to convert 906 GW to joules, because we asked for energy, not power
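As a side note, the power-to-energy conversion that the llm-math tool performs in the last step can be sketched in plain Python. (The 906 GW figure comes from the agent's search result above; treating it as power sustained for a full year is a simplifying assumption for illustration.)

```python
# Convert a power figure in gigawatts, sustained for one year, into energy in joules.
gigawatts = 906
watts = gigawatts * 1e9                 # 1 GW = 10^9 W
seconds_per_year = 365 * 24 * 60 * 60   # non-leap year
joules = watts * seconds_per_year       # energy = power x time
print(f"{joules:.3e} J")                # → 2.857e+19 J
```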

Agent test example 2:

LangChain agents are not limited to searching the internet. We can connect almost any data source (including our own) to a LangChain agent and ask questions about the data. Let's try creating an agent that answers questions about a CSV dataset.

Download this Netflix movie and TV show dataset from SHIVAM BANSAL on Kaggle and move it to your directory. Now add this code to a new Python file:

from langchain.llms import OpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_csv_agent
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

agent = create_csv_agent(
    OpenAI(temperature=0),
    "netflix_titles.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent.run("In how many movies was Christian Bale casted")

This code calls the create_csv_agent function and uses the netflix_titles.csv dataset. The following figure shows our test.


As shown above, its logic is to look for all occurrences of "Christian Bale" in the cast column.

We can also create a Pandas DataFrame agent like this:

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
import pandas as pd
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
df = pd.read_csv("netflix_titles.csv")

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

agent.run("In what year were the most comedy movies released?")

If we run it, we will see the result as shown below.


These are just some examples. We can use almost any API or dataset with LangChain.

Models:

There are three types of models in LangChain: Large Language Model (LLM), Chat Model and Text Embedding Model. Let's explore each type of model with some examples.

Large Language Model:

LangChain provides a way to use large language models in Python to generate text output based on text input. It is not as complex as the chat model and is best suited for simple input-output language tasks. Here is an example using OpenAI:

from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.9)
print(llm("Come up with a rap name for Matt Nikonorov"))

As shown above, it uses the gpt-3.5-turbo model to generate output for the provided input ("Come up with a rap name for Matt Nikonorov"). In this example, I set the temperature to 0.9 to make the LLM more creative. It came up with “MC MegaMatt.” I gave it a 9/10 mark.

Chat Model:

It's fun to get an LLM to come up with rap names, but if we want more complex answers and conversations, we need to step up our game and use a chat model. Technically, how is a chat model different from a large language model? In the words of the LangChain documentation:

The chat model is a variant of the large language model. Although chat models use large language models in the background, they use slightly different interfaces. They do not use the "text input, text output" API, but use "chat messages" as the interface for input and output.
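Under the hood, that "chat messages" interface is just a list of role-tagged messages, much like the payload the OpenAI chat API itself accepts. Here is a rough plain-Python sketch of the shape (the helper name is hypothetical, not LangChain code):

```python
def build_chat_payload(system_text: str, user_text: str) -> list:
    """Build an OpenAI-style chat payload: each message pairs a role with content."""
    return [
        {"role": "system", "content": system_text},  # sets the assistant's behavior
        {"role": "user", "content": user_text},      # the actual question or request
    ]

payload = build_chat_payload("Be friendly and informal.", "Who is the greatest tennis player?")
print(payload[0]["role"])  # → system
print(len(payload))        # → 2
```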

This is a simple Python chat model script:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.9)

messages = [
    SystemMessage(content="You are a friendly, informal assistant."),
    HumanMessage(content="Convince me that Djokovic is better than Federer."),
]

print(chat(messages).content)

As shown above, the code first sends a SystemMessage telling the chatbot to be friendly and informal, and then sends a HumanMessage asking it to convince us that Djokovic is better than Federer.

If you run this chatbot model, you will see the results shown below.


Embeddings:

Embeddings provide a way to convert the words and numbers in a block of text into vectors that can then be related to other words or numbers. This may sound abstract, so let's look at an example:

from langchain.embeddings import OpenAIEmbeddings
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# Turn a piece of text into a vector of floating-point numbers
# (the query here is just an example — any text can be embedded)
embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002")
embedded_query = embeddings_model.embed_query("Who created the world wide web?")
print(embedded_query[:5])

This prints the first five floating-point numbers of the embedding vector: [0.022762885317206383, -0.01276398915797472, 0.004815981723368168, -0.009435392916202545, 0.010824492201209068]. This is what an embedding looks like.

Use cases for embedding models:

If we want to train a chatbot or LLM to answer questions related to our data or a specific text sample, we need to use embeddings. Let's create a simple CSV file (embs.csv) with a "text" column containing three pieces of information:

text
"Robert Wadlow was the tallest human ever"
"The Burj Khalifa is the tallest building"
"Roses are red"

Now, here is a script that takes the question "Who was the tallest human ever?" and finds the correct answer in the CSV file using embeddings:

from langchain.embeddings import OpenAIEmbeddings
import pandas as pd
import numpy as np
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002")

# Embed each row of the "text" column, plus the question itself
df = pd.read_csv("embs.csv")
df["embedding"] = df["text"].apply(embeddings_model.embed_query)
question_emb = embeddings_model.embed_query("Who was the tallest human ever?")

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Pick the row whose embedding is closest to the question's embedding
df["similarity"] = df["embedding"].apply(lambda emb: cosine_similarity(emb, question_emb))
print(df.loc[df["similarity"].idxmax(), "text"])

If we run this code, we will see it output "Robert Wadlow was the tallest human ever". The code finds the correct answer by getting the embedding of each piece of information and selecting the one most similar to the embedding of the question "Who was the tallest human ever?". That's the power of embeddings!

Chunks:

LangChain models cannot process large texts all at once and use them to generate responses. This is where chunks and text splitting come in. Let's look at two simple ways to split text data into chunks before feeding it to LangChain.

Splitting chunks by character:

To avoid abrupt cut-offs mid-sentence, we can split the text by paragraph, at each occurrence of a newline or double newline:

from langchain.text_splitter import CharacterTextSplitter

# The file name and the chunk_size/chunk_overlap values here are illustrative
with open("your_text_file.txt") as f:
    your_text = f.read()

text_splitter = CharacterTextSplitter(separator="\n\n", chunk_size=1000, chunk_overlap=200)
texts = text_splitter.create_documents([your_text])
print(texts)

Recursive chunk splitting:

If we want to strictly split text into chunks of a certain character length, we can use RecursiveCharacterTextSplitter:

from langchain.text_splitter import RecursiveCharacterTextSplitter

# The file name and the chunk_size/chunk_overlap values here are illustrative
with open("your_text_file.txt") as f:
    your_text = f.read()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
texts = text_splitter.create_documents([your_text])
print(texts)

Chunk size and overlap:

Looking at the examples above, you may wonder what exactly the chunk_size and chunk_overlap parameters mean, and how they affect performance. This comes down to two points:

  • Chunk size determines the number of characters in each chunk. The larger the chunk size, the more data each chunk carries and the longer LangChain takes to process it and generate output, and vice versa.
  • Chunk overlap is the content shared between adjacent chunks so that they share some context. The higher the chunk overlap, the more redundant our chunks are; the lower the chunk overlap, the less context is shared between chunks. Typically, a good chunk overlap is 10% to 20% of the chunk size, although the ideal overlap varies with text type and use case.
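To make those two parameters concrete, here is a minimal pure-Python sketch of fixed-size chunking with overlap (a simplification of what LangChain's splitters do; real splitters also respect separators):

```python
def chunk_text(text: str, chunk_size: int = 40, chunk_overlap: int = 8) -> list:
    """Split text into chunks of at most chunk_size characters,
    where consecutive chunks share chunk_overlap characters of context."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

text = "".join(chr(ord("a") + i % 26) for i in range(100))  # 100 characters of sample text
chunks = chunk_text(text, chunk_size=40, chunk_overlap=8)
print(len(chunks))                      # → 3
print(chunks[0][-8:] == chunks[1][:8])  # → True (the 8-character overlap is shared)
```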

Chains:

Chains are basically multiple LLM functions linked together to perform more complex tasks that can't be accomplished with a simple LLM input -> output. Let's look at a cool example:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["media", "topic"],
    template="What is a good title for a {media} about {topic}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"media": "horror movie", "topic": "math"}))

This code feeds two variables into its prompt and formulates a creative answer (temperature=0.9). In this example, we ask it to come up with a good title for a horror movie about math. The output after running this code was "The Calculating Curse", but this doesn't really show the full power of chains.

Let's look at a more practical example:

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

json_schema = {
    "type": "object",
    "properties": {
        "name": {"title": "Name", "description": "The artist's name", "type": "string"},
        "genre": {"title": "Genre", "description": "The artist's music genre", "type": "string"},
        "debut": {"title": "Debut", "description": "The artist's debut album", "type": "string"},
        "debut_year": {"title": "Debut year", "description": "The release year of the artist's debut album", "type": "integer"},
    },
    "required": ["name", "genre", "debut", "debut_year"],
}

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Make sure to answer in the correct format."),
    ("human", "{input}"),
])
chain = create_structured_output_chain(json_schema, llm, prompt)

with open("Nas.txt") as f:
    artist_info = f.read()

print(chain.run(artist_info))

This code may seem confusing, so let's explain it step by step.

This code reads a short biography of Nas (the hip hop artist), extracts the following values from the text, and formats them into a JSON object:

  • Artist's name
  • Artist's music genre
  • The artist's first album
  • The release year of the artist's first album

In the prompt, we also specified "Make sure to answer in the correct format" so that we always get the output in JSON format. Here is the output of this code:

<code>{'name': 'Nas', 'genre': 'Hip Hop', 'debut': 'Illmatic', 'debut_year': 1994}</code>

By providing the JSON schema to the create_structured_output_chain function, we make the chain put its output into JSON format.

Beyond OpenAI:

Although I have been using OpenAI models to demonstrate LangChain's different functions, LangChain is not limited to OpenAI models. We can use it with many other LLMs and AI services. (This is the complete list of LangChain's integrated LLMs.)

For example, we can use Cohere with LangChain. This is the documentation for LangChain's Cohere integration, but to give a practical example: after installing Cohere with pip3 install cohere, we can write a simple question-and-answer script using LangChain and Cohere as follows:

from langchain.llms import Cohere
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["COHERE_API_KEY"] = "YOUR_COHERE_API_KEY"

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Cohere()
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "When was Novak Djokovic born?"
print(llm_chain.run(question))

If we run this code, Cohere's answer to our question is printed to the console.

Conclusion:

In this guide, you have seen the different aspects and functions of LangChain. Armed with this knowledge, you can use LangChain's capabilities for your NLP work, whether you are a researcher, a developer, or an enthusiast.

You can find a repository on GitHub that contains all the code and the Nas.txt file from this article.

Happy coding and experimenting with LangChain in Python!
