Implementing Advanced RAG in LangChain with RAPTOR

March 27, 2024 · by alex



In traditional RAG, retrieval usually relies on short, contiguous text chunks. But when we work with long documents, simply chunking and embedding them, or stuffing every document into the context window, is not enough. Instead, we want a good minimal-chunking approach for long-context LLMs. This is where RAPTOR comes in.


What is RAPTOR?

RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) is a new and powerful indexing and retrieval technique for LLMs. It takes a bottom-up approach, clustering and summarizing text segments (chunks) to build a hierarchical tree structure.


The RAPTOR paper proposes an interesting approach to document indexing and retrieval:

  • The leaves are the starting set of documents.
  • The leaves are embedded and clustered.
  • Each cluster is then summarized into a higher-level (more abstract) consolidation of the information from its similar documents.
  • This process is applied recursively, producing a "tree" that runs from the original documents (the leaves) up to more abstract summaries.


This approach can be applied at different scales; the leaves can be:

  • Text chunks within a single document (as in the paper)
  • Full documents (as shown below)
  • Longer-context LLMs make it feasible to run this over full documents.


This tree structure is central to what RAPTOR can do, because it captures both the high-level themes and the fine-grained details of a text. It is especially useful for complex thematic queries and for multi-step reasoning in question-answering tasks.


The process starts by splitting the documents into shorter texts (called "chunks") and embedding those chunks with an embedding model. The embeddings are then grouped with a clustering algorithm. Once the clusters are formed, an LLM summarizes the text associated with each cluster.


The resulting summaries form the nodes of the tree, with higher-level nodes providing more abstract summaries.




Suppose we have 8 document chunks belonging to one large handbook. Rather than simply embedding the chunks and retrieving over them, we first embed the chunks and then reduce the dimensionality of the embeddings, because clustering over the full dimensionality (1536 dimensions for OpenAI embedding models, 384 for typical small open-source embedding models) is computationally expensive.


The reduced-dimension embeddings are then clustered with a clustering algorithm. Next, we gather all the chunks that belong to each cluster and summarize the context of each cluster. The resulting summaries are in turn embedded and clustered, and the process repeats until the model's token limit (context window) is reached.
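
The recursion can be summarized in a short sketch. This is an illustration of the idea only; embed_texts, reduce_dims, cluster_texts and summarize_cluster are hypothetical helpers standing in for the concrete steps implemented later in this post.

# Conceptual sketch of the RAPTOR loop (illustration only, not the exact code below).
# embed_texts, reduce_dims, cluster_texts and summarize_cluster are hypothetical helpers.
def build_raptor_levels(leaf_texts, max_levels=3):
    levels = [leaf_texts]  # level 0 = the leaf chunks
    for _ in range(max_levels):
        current = levels[-1]
        embeddings = embed_texts(current)       # embed the current level
        reduced = reduce_dims(embeddings)       # e.g. UMAP down to a few dimensions
        labels = cluster_texts(reduced)         # e.g. GMM cluster assignments
        summaries = [
            summarize_cluster([t for t, lbl in zip(current, labels) if lbl == c])
            for c in sorted(set(labels))
        ]
        levels.append(summaries)                # summaries become the next, more abstract level
        if len(summaries) <= 1:                 # stop once everything collapses into one summary
            break
    return levels                               # all levels can be embedded and indexed together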


In short, the intuition behind RAPTOR is as follows:


  • Cluster similar documents and summarize them.
  • Capture and consolidate information from related documents.
  • Serve questions that can be answered with less, more condensed context.


Code Implementation


Install the required libraries.


!pip install -U langchain umap-learn scikit-learn langchain_community tiktoken langchain-openai langchainhub chromadb
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install -qU llama-cpp-python


# Work around the Colab locale issue so shell commands (pip, wget) run with UTF-8 output
import locale
def getpreferredencoding(do_setlocale = True):
  return "UTF-8"
locale.getpreferredencoding = getpreferredencoding


Download the required Zephyr model weights (GGUF) file.


!wget "https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf"


Instantiate the LLM.


from langchain_community.llms import LlamaCpp
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
#
n_gpu_layers = -1  # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
#
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
#
# Make sure the model path is correct for your system!
model = LlamaCpp(
    model_path="/content/zephyr-7b-beta.Q4_K_M.gguf",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    temperature=0.75,
    max_tokens=1000,
    top_p=1,
    n_ctx=35000,
    callback_manager=callback_manager,
    verbose=True,  # Verbose is required to pass to the callback manager
)
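
As an optional sanity check (assuming the GGUF file was downloaded to /content as above), you can send the model a short prompt; the callback manager streams the generated tokens to stdout:

# Optional sanity check: the callback manager streams generated tokens to stdout
model.invoke("What is LangChain in one sentence?")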


Instantiate the embedding model.


from langchain.vectorstores import Chroma
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.utils import DistanceStrategy
#
EMBEDDING_MODEL_NAME = "thenlper/gte-small"
embd = HuggingFaceEmbeddings(
    model_name=EMBEDDING_MODEL_NAME,
    multi_process=True,
    model_kwargs={"device": "cuda"},
    encode_kwargs={"normalize_embeddings": True},  # set True for cosine similarity
)
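
As a quick check that the embedding model loads and behaves as expected (gte-small produces 384-dimensional vectors):

# Quick check: thenlper/gte-small produces 384-dimensional vectors
print(len(embd.embed_query("hello world")))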


Load the data


Here we use LangChain's LCEL documentation as the input data.


import matplotlib.pyplot as plt
import tiktoken
from bs4 import BeautifulSoup as Soup
from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader
## Helper function to count the number of tokens in each text
def num_tokens_from_string(string: str, encoding_name: str) -> int:
    """Returns the number of tokens in a text string."""
    encoding = tiktoken.get_encoding(encoding_name)
    num_tokens = len(encoding.encode(string))
    return num_tokens
#
# LCEL docs
url = "https://python.langchain.com/docs/expression_language/"
loader = RecursiveUrlLoader(
    url=url, max_depth=20, extractor=lambda x: Soup(x, "html.parser").text
)
docs = loader.load()
# LCEL w/ PydanticOutputParser (outside the primary LCEL docs)
url = "https://python.langchain.com/docs/modules/model_io/output_parsers/quick_start"
loader = RecursiveUrlLoader(
    url=url, max_depth=1, extractor=lambda x: Soup(x, "html.parser").text
)
docs_pydantic = loader.load()
# LCEL w/ Self Query (outside the primary LCEL docs)
url = "https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/"
loader = RecursiveUrlLoader(
    url=url, max_depth=1, extractor=lambda x: Soup(x, "html.parser").text
)
docs_sq = loader.load()
# Doc texts
docs.extend([*docs_pydantic, *docs_sq])
docs_texts = [d.page_content for d in docs]
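
An optional quick check of how many pages were pulled down (the exact count will vary as the documentation site changes):

print(f"Number of documents loaded: {len(docs_texts)}")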


Count the tokens in each document to see how large the raw documents are, and visualize the counts with a histogram.


counts = [num_tokens_from_string(d, "cl100k_base") for d in docs_texts]
# Plotting the histogram of token counts
plt.figure(figsize=(10, 6))
plt.hist(counts, bins=30, color="blue", edgecolor="black", alpha=0.7)
plt.title("Histogram of Token Counts")
plt.xlabel("Token Count")
plt.ylabel("Frequency")
plt.grid(axis="y", alpha=0.75)
# Display the histogram
plt.show()




Check whether all the documents together fit within our LLM's context window.


# Doc texts concat
d_sorted = sorted(docs, key=lambda x: x.metadata["source"])
d_reversed = list(reversed(d_sorted))
concatenated_content = "\n\n\n --- \n\n\n".join(
    [doc.page_content for doc in d_reversed]
)
print(
    "Num tokens in all context: %s"
    % num_tokens_from_string(concatenated_content, "cl100k_base")
)
#
# Response
Num tokens in all context: 69108


Split the documents into chunks that fit our LLM's context window.


# Doc texts split
from langchain_text_splitters import RecursiveCharacterTextSplitter
chunk_size_tok = 1000
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=chunk_size_tok, chunk_overlap=0
)
texts_split = text_splitter.split_text(concatenated_content)
#
print(f"Number of text splits generated: {len(texts_split)}")
#
# Response
Number of text splits generated: 142


Generate the list of global embeddings.


It contains a semantic embedding for each chunk.


global_embeddings = [embd.embed_query(txt) for txt in texts_split]
print(len(global_embeddings[0]))
###########
384


Reduce the dimensionality from 384 down to 2 for clustering, and visualize the reduced embeddings.


import matplotlib.pyplot as plt
from typing import Optional
import numpy as np
import umap
def reduce_cluster_embeddings(
    embeddings: np.ndarray,
    dim: int,
    n_neighbors: Optional[int] = None,
    metric: str = "cosine",
) -> np.ndarray:
    if n_neighbors is None:
        n_neighbors = int((len(embeddings) - 1) ** 0.5)
    return umap.UMAP(
        n_neighbors=n_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)

dim = 2
global_embeddings_reduced = reduce_cluster_embeddings(global_embeddings, dim)
print(global_embeddings_reduced[0])
#
plt.figure(figsize=(10, 8))
plt.scatter(global_embeddings_reduced[:, 0], global_embeddings_reduced[:, 1], alpha=0.5)
plt.title("Global Embeddings")
plt.xlabel("Dimension 1")
plt.ylabel("Dimension 2")
plt.show()




Tree structure

The clustering approach used to build the tree structure involves a few interesting ideas.


GMM (Gaussian Mixture Model)

  • Models the distribution of data points across different clusters
  • Optimizes the number of clusters by evaluating the model's Bayesian Information Criterion (BIC)


UMAP (Uniform Manifold Approximation and Projection)

  • Supports the clustering step
  • Reduces the dimensionality of high-dimensional data
  • Helps highlight the natural grouping of data points based on their similarity


Local and global clustering

  • Used to analyze the data at different scales
  • Effectively captures both fine-grained and broader patterns in the data


Thresholding

  • Applied within the GMM to determine cluster membership
  • Based on the probability distribution (a data point may be assigned to one or more clusters)


import matplotlib.pyplot as plt
import numpy as np
from sklearn.mixture import GaussianMixture
def get_optimal_clusters(embeddings: np.ndarray, max_clusters: int = 50, random_state: int = 1234):
    max_clusters = min(max_clusters, len(embeddings))
    bics = [GaussianMixture(n_components=n, random_state=random_state).fit(embeddings).bic(embeddings)
            for n in range(1, max_clusters)]
    return np.argmin(bics) + 1
def gmm_clustering(embeddings: np.ndarray, threshold: float, random_state: int = 0):
    n_clusters = get_optimal_clusters(embeddings)
    gm = GaussianMixture(n_components=n_clusters, random_state=random_state).fit(embeddings)
    probs = gm.predict_proba(embeddings)
    labels = [np.where(prob > threshold)[0] for prob in probs]
    return labels, n_clusters
#
labels, _ = gmm_clustering(global_embeddings_reduced, threshold=0.5)
plot_labels = np.array([label[0] if len(label) > 0 else -1 for label in labels])
plt.figure(figsize=(10, 8))
unique_labels = np.unique(plot_labels)
colors = plt.cm.rainbow(np.linspace(0, 1, len(unique_labels)))
for label, color in zip(unique_labels, colors):
    mask = plot_labels == label
    plt.scatter(global_embeddings_reduced[mask, 0], global_embeddings_reduced[mask, 1], color=color, label=f'Cluster {label}', alpha=0.5)
plt.title("Cluster Visualization of Global Embeddings")
plt.xlabel("Dimension 1")
plt.ylabel("Dimension 2")
plt.legend()
plt.show()




Create a dataframe to inspect the text associated with each cluster.


import pandas as pd
simple_labels = [label[0] if len(label) > 0 else -1 for label in labels]
df = pd.DataFrame({
    'Text': texts_split,
    'Embedding': list(global_embeddings_reduced),
    'Cluster': simple_labels
})
print(df.head(3))
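
Optionally, you can also count how many chunks landed in each cluster; a label of -1 marks chunks whose probability did not exceed the threshold for any cluster:

# Optional: distribution of chunks across clusters (-1 = below the probability threshold)
print(df['Cluster'].value_counts())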




def format_cluster_texts(df):
    clustered_texts = {}
    for cluster in df['Cluster'].unique():
        cluster_texts = df[df['Cluster'] == cluster]['Text'].tolist()
        clustered_texts[cluster] = " --- ".join(cluster_texts)
    return clustered_texts
#
clustered_texts = format_cluster_texts(df)
#
clustered_texts
####################################################################
# Response
{4: 'Quickstart | 🦜️🔗 Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPI ... [long concatenation of the raw scraped LCEL documentation text assigned to cluster 4, truncated here for readability] ... Copyright © 2024 LangChain, Inc.',
 0: 'to our Runnable sequence:chain = prompt | model | parserchain.invoke({"query": "Tell me a joke."})Joke(setup=\'Why did the chicken cross the road?\', punchline=\'To get to the other side!\')While all parsers support the streaming interface, only certain parsers\ncan stream through partially parsed objects, since this is highly\ndependent on the output type. Parsers which cannot construct partial\nobjects will simply yield the fully parsed output.The SimpleJsonOutputParser for example can stream through partial\noutputs:from langchain.output_parsers.json import SimpleJsonOutputParserjson_prompt = PromptTemplate.from_template(    "Return a JSON object with an `answer` key that answers the following question: {question}")json_parser = SimpleJsonOutputParser()json_chain = json_prompt | model | json_parserlist(json_chain.stream({"question": "Who invented the microscope?"}))[{}, {\'answer\': \'\'}, {\'answer\': \'Ant\'}, {\'answer\': \'Anton\'}, {\'answer\': \'Antonie\'}, {\'answer\': \'Antonie van\'}, {\'answer\': \'Antonie van Lee\'}, {\'answer\': \'Antonie van Leeu\'}, {\'answer\': \'Antonie van Leeuwen\'}, {\'answer\': \'Antonie van Leeuwenho\'}, {\'answer\': \'Antonie van Leeuwenhoek\'}]While the PydanticOutputParser cannot:list(chain.stream({"query": "Tell me a joke."}))[Joke(setup=\'Why did the chicken cross the road?\', punchline=\'To get to the other side!\')]Help us out by providing feedback on this documentation page:PreviousOutput ParsersNextCustom Output ParsersGet startedLCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- back a joke string:Without LCEL\u200bfrom typing import Listimport openaiprompt_template = "Tell me a short joke about {topic}"client = openai.OpenAI()def call_chat_model(messages: List[dict]) -> str:    response = client.chat.completions.create(        model="gpt-3.5-turbo",         messages=messages,    )    return response.choices[0].message.contentdef invoke_chain(topic: str) -> str:    prompt_value = prompt_template.format(topic=topic)    messages = [{"role": "user", "content": prompt_value}]    return call_chat_model(messages)invoke_chain("ice cream")LCEL\u200bfrom langchain_openai import ChatOpenAIfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnablePassthroughprompt = ChatPromptTemplate.from_template(    "Tell me a short joke about {topic}")output_parser = StrOutputParser()model = ChatOpenAI(model="gpt-3.5-turbo")chain = (    {"topic": RunnablePassthrough()}     | prompt    | model    | output_parser)chain.invoke("ice cream") Stream\u200bIf we want to stream results instead, we’ll need to change our function: Without LCEL\u200bfrom typing import Iteratordef stream_chat_model(messages: List[dict]) -> Iterator[str]:    stream = client.chat.completions.create(        model="gpt-3.5-turbo",        messages=messages,        stream=True,    )    for response in stream:        content = response.choices[0].delta.content        if content is not None:            yield contentdef stream_chain(topic: str) -> Iterator[str]:    prompt_value = prompt.format(topic=topic)    return stream_chat_model([{"role": "user", "content": prompt_value}])for chunk in stream_chain("ice cream"):    print(chunk, end="", flush=True)LCEL\u200bfor chunk in chain.stream("ice cream"):    print(chunk, end="", flush=True) Batch\u200bIf we want to run on a batch of inputs in parallel, we’ll again need a --- new function: Without 
LCEL\u200bfrom concurrent.futures import ThreadPoolExecutordef batch_chain(topics: list) -> list:    with ThreadPoolExecutor(max_workers=5) as executor:        return list(executor.map(invoke_chain, topics))batch_chain(["ice cream", "spaghetti", "dumplings"])LCEL\u200bchain.batch(["ice cream", "spaghetti", "dumplings"]) Async\u200bIf we need an asynchronous version: Without LCEL\u200basync_client = openai.AsyncOpenAI()async def acall_chat_model(messages: List[dict]) -> str:    response = await async_client.chat.completions.create(        model="gpt-3.5-turbo",         messages=messages,    )    return response.choices[0].message.contentasync def ainvoke_chain(topic: str) -> str:    prompt_value = prompt_template.format(topic=topic)    messages = [{"role": "user", "content": prompt_value}]    return await acall_chat_model(messages)await ainvoke_chain("ice cream")LCEL\u200bchain.ainvoke("ice cream") LLM instead of chat model\u200bIf we want to use a completion endpoint instead of a chat endpoint: Without LCEL\u200bdef call_llm(prompt_value: str) -> str:    response = client.completions.create(        model="gpt-3.5-turbo-instruct",        prompt=prompt_value,    )    return response.choices[0].textdef invoke_llm_chain(topic: str) -> str:    prompt_value = prompt_template.format(topic=topic)    return call_llm(prompt_value)invoke_llm_chain("ice cream")LCEL\u200bfrom langchain_openai import OpenAIllm = OpenAI(model="gpt-3.5-turbo-instruct")llm_chain = (    {"topic": RunnablePassthrough()}     | prompt    | llm    | output_parser)llm_chain.invoke("ice cream") Different model provider\u200bIf we want to use Anthropic instead of OpenAI: Without LCEL\u200bimport anthropicanthropic_template = f"Human:\\n\\n{prompt_template}\\n\\nAssistant:"anthropic_client = anthropic.Anthropic()def call_anthropic(prompt_value: str) -> str:    response = anthropic_client.completions.create(        model="claude-2",        prompt=prompt_value,        max_tokens_to_sample=256,    )    return response.completion    def invoke_anthropic_chain(topic: str) -> str:    prompt_value = anthropic_template.format(topic=topic)    return call_anthropic(prompt_value)invoke_anthropic_chain("ice cream")LCEL\u200bfrom langchain_anthropic import ChatAnthropicanthropic = ChatAnthropic(model="claude-2")anthropic_chain = (    {"topic": RunnablePassthrough()}     | prompt     | anthropic    | output_parser)anthropic_chain.invoke("ice cream") Runtime configurability\u200bIf we wanted to make the choice of chat model or LLM configurable at --- runtime: Without LCEL\u200bdef invoke_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> str:    if model == "chat_openai":        return invoke_chain(topic)    elif model == "openai":        return invoke_llm_chain(topic)    elif model == "anthropic":        return invoke_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            " Expected one of chat_openai, openai, anthropic"        )def stream_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> Iterator[str]:    if model == "chat_openai":        return stream_chain(topic)    elif model == "openai":        # Note we haven\'t implemented this yet.        return stream_llm_chain(topic)    elif model == "anthropic":        # Note we haven\'t implemented this yet        return stream_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            
" Expected one of chat_openai, openai, anthropic"        )def batch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    # You get the idea    ...async def abatch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    ...invoke_configurable_chain("ice cream", model="openai")stream = stream_configurable_chain(    "ice_cream",     model="anthropic")for chunk in stream:    print(chunk, end="", flush=True)# batch_configurable_chain(["ice cream", "spaghetti", "dumplings"])# await ainvoke_configurable_chain("ice cream")With LCEL\u200bfrom langchain_core.runnables import ConfigurableFieldconfigurable_model = model.configurable_alternatives(    ConfigurableField(id="model"),     default_key="chat_openai",     openai=llm,    anthropic=anthropic,)configurable_chain = (    {"topic": RunnablePassthrough()}     | prompt     | configurable_model     | output_parser)configurable_chain.invoke(    "ice cream",     config={"model": "openai"})stream = configurable_chain.stream(    "ice cream",     config={"model": "anthropic"})for chunk in stream:    print(chunk, end="", flush=True)configurable_chain.batch(["ice cream", "spaghetti", "dumplings"])# await configurable_chain.ainvoke("ice cream") Logging\u200bIf we want to log our intermediate results: Without LCEL\u200bWe’ll print intermediate steps for illustrative purposesdef invoke_anthropic_chain_with_logging(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = anthropic_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    output = call_anthropic(prompt_value)    print(f"Output: {output}")    return outputinvoke_anthropic_chain_with_logging("ice --- valuable. Without LCEL\u200bfrom concurrent.futures import ThreadPoolExecutorfrom typing import Iterator, List, Tupleimport anthropicimport openaiprompt_template = "Tell me a short joke about {topic}"anthropic_template = f"Human:\\n\\n{prompt_template}\\n\\nAssistant:"client = openai.OpenAI()async_client = openai.AsyncOpenAI()anthropic_client = anthropic.Anthropic()def call_chat_model(messages: List[dict]) -> str:    response = client.chat.completions.create(        model="gpt-3.5-turbo",         messages=messages,    )    return response.choices[0].message.contentdef invoke_chain(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = prompt_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    messages = [{"role": "user", "content": prompt_value}]    output = call_chat_model(messages)    print(f"Output: {output}")    return outputdef stream_chat_model(messages: List[dict]) -> Iterator[str]:    stream = client.chat.completions.create(        model="gpt-3.5-turbo",        messages=messages,        stream=True,    )    for response in stream:        content = response.choices[0].delta.content        if content is not None:            yield contentdef stream_chain(topic: str) -> Iterator[str]:    print(f"Input: {topic}")    prompt_value = prompt.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    stream = stream_chat_model([{"role": "user", "content": prompt_value}])    for chunk in stream:        print(f"Token: {chunk}", end="")        yield chunkdef batch_chain(topics: list) -> list:    with ThreadPoolExecutor(max_workers=5) as executor:        return list(executor.map(invoke_chain, topics))def call_llm(prompt_value: str) -> str:    response = client.completions.create(        model="gpt-3.5-turbo-instruct",        prompt=prompt_value,    
)    return response.choices[0].textdef invoke_llm_chain(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = promtp_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    output = call_llm(prompt_value)    print(f"Output: {output}")    return outputdef call_anthropic(prompt_value: str) -> str:    response = anthropic_client.completions.create(        model="claude-2",        prompt=prompt_value,        max_tokens_to_sample=256,    )    return response.completion   def invoke_anthropic_chain(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = anthropic_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    output = call_anthropic(prompt_value)    print(f"Output: {output}")    return outputasync def ainvoke_anthropic_chain(topic: str) -> str: --- ...def stream_anthropic_chain(topic: str) -> Iterator[str]:    ...def batch_anthropic_chain(topics: List[str]) -> List[str]:    ...def invoke_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> str:    if model == "chat_openai":        return invoke_chain(topic)    elif model == "openai":        return invoke_llm_chain(topic)    elif model == "anthropic":        return invoke_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            " Expected one of chat_openai, openai, anthropic"        )def stream_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> Iterator[str]:    if model == "chat_openai":        return stream_chain(topic)    elif model == "openai":        # Note we haven\'t implemented this yet.        return stream_llm_chain(topic)    elif model == "anthropic":        # Note we haven\'t implemented this yet        return stream_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            
" Expected one of chat_openai, openai, anthropic"        )def batch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    ...async def abatch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    ...def invoke_chain_with_fallback(topic: str) -> str:    try:        return invoke_chain(topic)    except Exception:        return invoke_anthropic_chain(topic)async def ainvoke_chain_with_fallback(topic: str) -> str:    try:        return await ainvoke_chain(topic)    except Exception:        return ainvoke_anthropic_chain(topic)async def batch_chain_with_fallback(topics: List[str]) -> str:    try:        return batch_chain(topics)    except Exception:        return batch_anthropic_chain(topics)LCEL\u200bimport osfrom langchain_anthropic import ChatAnthropicfrom langchain_openai import ChatOpenAIfrom langchain_openai import OpenAIfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthrough, ConfigurableFieldos.environ["LANGCHAIN_API_KEY"] = "..."os.environ["LANGCHAIN_TRACING_V2"] = "true"prompt = ChatPromptTemplate.from_template(    "Tell me a short joke about {topic}")chat_openai = ChatOpenAI(model="gpt-3.5-turbo")openai = OpenAI(model="gpt-3.5-turbo-instruct")anthropic = ChatAnthropic(model="claude-2")model = (    chat_openai    .with_fallbacks([anthropic])    .configurable_alternatives( --- ConfigurableField(id="model"),        default_key="chat_openai",        openai=openai,        anthropic=anthropic,    ))chain = (    {"topic": RunnablePassthrough()}     | prompt     | model     | StrOutputParser()) Next steps\u200bTo continue learning about LCEL, we recommend: - Reading up on the full --- allowing streaming of the final output. 
In fact, chains created with\nLCEL implement the entire standard Runnable interface.from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")parser = StrOutputParser()chain = prompt | model | parserasync for chunk in chain.astream({"topic": "parrot"}):    print(chunk, end="|", flush=True) Here|\'s| a| silly| joke| about| a| par|rot|:|What| kind| of| teacher| gives| good| advice|?| An| ap|-|parent| (|app|arent|)| one|!||noteYou do not have to use the LangChain Expression Language to use\nLangChain and can instead rely on a standard imperative programming\napproach by caling invoke, batch or stream on each component\nindividually, assigning the results to variables and then using them\ndownstream as you see fit.If that works for your needs, then that’s fine by us ?!Working with Input Streams\u200bWhat if you wanted to stream JSON from the output as it was being\ngenerated?If you were to rely on json.loads to parse the partial json, the\nparsing would fail as the partial json wouldn’t be valid json.You’d likely be at a complete loss of what to do and claim that it\nwasn’t possible to stream JSON.Well, turns out there is a way to do it – the parser needs to operate on\nthe input stream, and attempt to “auto-complete” the partial json --- inputs.from langchain_core.output_parsers import (    JsonOutputParser,)# A function that operates on finalized inputs# rather than on an input_streamdef _extract_country_names(inputs):    """A function that does not operates on input streams and breaks streaming."""    if not isinstance(inputs, dict):        return ""    if "countries" not in inputs:        return ""    countries = inputs["countries"]    if not isinstance(countries, list):        return ""    country_names = [        country.get("name") for country in countries if isinstance(country, dict)    ]    return country_nameschain = model | JsonOutputParser() | _extract_country_namesasync for text in chain.astream(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\'):    print(text, end="|", flush=True)[\'France\', \'Spain\', \'Japan\']|Generator Functions\u200bLe’ts fix the streaming using a generator function that can operate on\nthe input stream.tipA generator function (a function that uses yield) allows writing code\nthat operators on input streamsfrom langchain_core.output_parsers import JsonOutputParserasync def _extract_country_names_streaming(input_stream):    """A function that operates on input streams."""    country_names_so_far = set()    async for input in input_stream:        if not isinstance(input, dict):            continue        if "countries" not in input:            continue        countries = input["countries"]        if not isinstance(countries, list):            continue        for country in countries:            name = country.get("name")            if not name:                continue            if name not in country_names_so_far:                yield name                country_names_so_far.add(name)chain = model | JsonOutputParser() | _extract_country_names_streamingasync for text in chain.astream(    \'output a list of the countries france, spain and japan and their populations in JSON format. 
Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\'):    print(text, end="|", flush=True)France|Sp|Spain|Japan|noteBecause the code above is relying on JSON auto-completion, you may see\npartial names of countries (e.g., Sp and Spain), which is not what\none would want for an extraction result!We’re focusing on streaming concepts, not necessarily the results of the\nchains.Non-streaming components\u200bSome built-in components like Retrievers do not offer any streaming. --- What happens if we try to stream them? ?from langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import OpenAIEmbeddingstemplate = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)vectorstore = FAISS.from_texts(    ["harrison worked at kensho", "harrison likes spicy food"],    embedding=OpenAIEmbeddings(),)retriever = vectorstore.as_retriever()chunks = [chunk for chunk in retriever.stream("where did harrison work?")]chunks[[Document(page_content=\'harrison worked at kensho\'),  Document(page_content=\'harrison likes spicy food\')]]Stream just yielded the final result from that component.This is OK \U0001f979! Not all components have to implement streaming – in some\ncases streaming is either unnecessary, difficult or just doesn’t make\nsense.tipAn LCEL chain constructed using non-streaming components, will still be\nable to stream in a lot of cases, with streaming of partial output\nstarting after the last non-streaming step in the chain.retrieval_chain = (    {        "context": retriever.with_config(run_name="Docs"),        "question": RunnablePassthrough(),    }    | prompt    | model    | StrOutputParser())for chunk in retrieval_chain.stream(    "Where did harrison work? " "Write 3 made up sentences about this place."):    print(chunk, end="|", flush=True) Based| on| the| given| context|,| the| only| information| provided| about| where| Harrison| worked| is| that| he| worked| at| Ken|sh|o|.| Since| there| are| no| other| details| provided| about| Ken|sh|o|,| I| do| not| have| enough| information| to| write| 3| additional| made| up| sentences| about| this| place|.| I| can| only| state| that| Harrison| worked| at| Ken|sh|o|.||Now that we’ve seen how stream and astream work, let’s venture into\nthe world of streaming events. ?️Using Stream Events\u200bEvent Streaming is a beta API. This API may change a bit based on\nfeedback.noteIntroduced in langchain-core 0.1.14.import langchain_corelangchain_core.__version__\'0.1.18\'For the astream_events API to work properly:Use async throughout the code to the extent possible (e.g., async\ntools etc)Propagate callbacks if defining custom functions / runnablesWhenever using runnables without LCEL, make sure to call\n.astream() on LLMs rather than .ainvoke to force the LLM to\nstream tokens.Let us know if anything doesn’t work as expected! 
:)Event Reference\u200bBelow is a reference table that shows some events that might be emitted\nby the various Runnable objects.noteWhen streaming is implemented properly, the inputs to a runnable will\nnot be known until after the input stream has been entirely consumed.\nThis means that inputs will often be included only for end events --- max_concurrency parameterchain.batch([{"topic": "bears"}, {"topic": "cats"}], config={"max_concurrency": 5})[AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!"), AIMessage(content="Why don\'t cats play poker in the wild? Too many cheetahs!")]Async Stream\u200basync for s in chain.astream({"topic": "bears"}):    print(s.content, end="", flush=True)Why don\'t bears wear shoes?Because they have bear feet!Async Invoke\u200bawait chain.ainvoke({"topic": "bears"})AIMessage(content="Why don\'t bears ever wear shoes?\\n\\nBecause they already have bear feet!")Async Batch\u200bawait chain.abatch([{"topic": "bears"}])[AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!")]Async Stream Events (beta)\u200bEvent Streaming is a beta API, and may change a bit based on\nfeedback.Note: Introduced in langchain-core 0.2.0For now, when using the astream_events API, for everything to work\nproperly please:Use async throughout the code (including async tools etc)Propagate callbacks if defining custom functions / runnables.Whenever using runnables without LCEL, make sure to call\n.astream() on LLMs rather than .ainvoke to force the LLM to\nstream tokens.Event Reference\u200bHere is a reference table that shows some events that might be emitted\nby the various Runnable objects. Definitions for some of the Runnable\nare included after the table.⚠️ When streaming the inputs for the runnable will not be available\nuntil the input stream has been entirely consumed This means that the\ninputs will be available at for the corresponding end hook rather than --- requests. For example, when using a RunnableParallel (often written as\na dictionary) it executes each element in parallel.from langchain_core.runnables import RunnableParallelchain1 = ChatPromptTemplate.from_template("tell me a joke about {topic}") | modelchain2 = (    ChatPromptTemplate.from_template("write a short (2 line) poem about {topic}")    | model)combined = RunnableParallel(joke=chain1, poem=chain2)%%timechain1.invoke({"topic": "bears"})CPU times: user 18 ms, sys: 1.27 ms, total: 19.3 msWall time: 692 msAIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they already have bear feet!")%%timechain2.invoke({"topic": "bears"})CPU times: user 10.5 ms, sys: 166 µs, total: 10.7 msWall time: 579 msAIMessage(content="In forest\'s embrace,\\nMajestic bears pace.")%%timecombined.invoke({"topic": "bears"})CPU times: user 32 ms, sys: 2.59 ms, total: 34.6 msWall time: 816 ms{\'joke\': AIMessage(content="Sure, here\'s a bear-related joke for you:\\n\\nWhy did the bear bring a ladder to the bar?\\n\\nBecause he heard the drinks were on the house!"), \'poem\': AIMessage(content="In wilderness they roam,\\nMajestic strength, nature\'s throne.")}Parallelism on batches\u200bParallelism can be combined with other runnables. 
Let’s try to use\nparallelism with batches.%%timechain1.batch([{"topic": "bears"}, {"topic": "cats"}])CPU times: user 17.3 ms, sys: 4.84 ms, total: 22.2 msWall time: 628 ms[AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!"), AIMessage(content="Why don\'t cats play poker in the wild?\\n\\nToo many cheetahs!")]%%timechain2.batch([{"topic": "bears"}, {"topic": "cats"}])CPU times: user 15.8 ms, sys: 3.83 ms, total: 19.7 msWall time: 718 ms[AIMessage(content=\'In the wild, bears roam,\\nMajestic guardians of ancient home.\'), AIMessage(content=\'Whiskers grace, eyes gleam,\\nCats dance through the moonbeam.\')]%%timecombined.batch([{"topic": "bears"}, {"topic": "cats"}])CPU times: user 44.8 ms, sys: 3.17 ms, total: 48 msWall time: 721 ms[{\'joke\': AIMessage(content="Sure, here\'s a bear joke for you:\\n\\nWhy don\'t bears wear shoes?\\n\\nBecause they have bear feet!"),  \'poem\': AIMessage(content="Majestic bears roam,\\nNature\'s strength, beauty shown.")}, {\'joke\': AIMessage(content="Why don\'t cats play poker in the wild?\\n\\nToo many cheetahs!"),  \'poem\': AIMessage(content="Whiskers dance, eyes aglow,\\nCats embrace the night\'s gentle flow.")}]Help us out by providing feedback on this documentation page:PreviousWhy use LCELNextStreamingInput SchemaOutput SchemaStreamInvokeBatchAsync StreamAsync InvokeAsync BatchAsync Stream Events (beta)Event ReferenceAsync Stream Intermediate StepsStreaming JSONPatch chunksStreaming the incremental RunStateParallelismParallelism on batchesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- the map:from operator import itemgetterfrom langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts(    ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}Answer in the following language: {language}"""prompt = ChatPromptTemplate.from_template(template)chain = (    {        "context": itemgetter("question") | retriever,        "question": itemgetter("question"),        "language": itemgetter("language"),    }    | prompt    | model    | StrOutputParser())chain.invoke({"question": "where did harrison work", "language": "italian"})\'Harrison ha lavorato a Kensho.\'Parallelize steps\u200bRunnableParallel (aka. 
RunnableMap) makes it easy to execute multiple\nRunnables in parallel, and to return the output of these Runnables as a\nmap.from langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnableParallelfrom langchain_openai import ChatOpenAImodel = ChatOpenAI()joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | modelpoem_chain = (    ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model)map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)map_chain.invoke({"topic": "bear"}){\'joke\': AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!"), \'poem\': AIMessage(content="In the wild\'s embrace, bear roams free,\\nStrength and grace, a majestic decree.")}Parallelism\u200bRunnableParallel are also useful for running independent processes in\nparallel, since each Runnable in the map is executed in parallel. For\nexample, we can see our earlier joke_chain, poem_chain and\nmap_chain all have about the same runtime, even though map_chain\nexecutes both of the other two.%%timeitjoke_chain.invoke({"topic": "bear"})958 ms ± 402 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)%%timeitpoem_chain.invoke({"topic": "bear"})1.22 s ± 508 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)%%timeitmap_chain.invoke({"topic": "bear"})1.15 s ± 119 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)Help us out by providing feedback on this documentation page:PreviousHow toNextRunnablePassthrough: Passing data throughUsing itemgetter as shorthandParallelize stepsParallelismCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- that’s easier to understandchain.get_graph().print_ascii()           +---------------------------------+                    | Parallel<context,question>Input |                    +---------------------------------+                             **               **                                 ***                   ***                            **                         **           +----------------------+              +-------------+  | VectorStoreRetriever |              | Passthrough |  +----------------------+              +-------------+                      **               **                                      ***         ***                                           **     **                                +----------------------------------+                   | Parallel<context,question>Output |                   +----------------------------------+                                     *                                                      *                                                      *                                           +--------------------+                                 | ChatPromptTemplate |                                 +--------------------+                                            *                                                      *                                                      *                                               +------------+ --- | ChatOpenAI |                                         +------------+                                                *                                                      *                                                      *                                            +-----------------+                                    | StrOutputParser |                                    +-----------------+                                    
          *                                                      *                                                      *                                         +-----------------------+                              | StrOutputParserOutput |                              +-----------------------+              Get the prompts\u200bAn important part of every chain is the prompts that are used. You can --- get the prompts present in the chain:chain.get_prompts()[ChatPromptTemplate(input_variables=[\'context\', \'question\'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=[\'context\', \'question\'], template=\'Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n\'))])]Help us out by providing feedback on this documentation page:PreviousStream custom generator functionsNextAdd message history (memory)Get a graphPrint a graphGet the promptsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- the output of a previous step, while preserving streaming capabilitiesLet’s implement a custom output parser for comma-separated lists.Sync version\u200b%pip install --upgrade --quiet  langchain langchain-openaifrom typing import Iterator, Listfrom langchain.prompts.chat import ChatPromptTemplatefrom langchain_core.output_parsers import StrOutputParserfrom langchain_openai import ChatOpenAIprompt = ChatPromptTemplate.from_template(    "Write a comma-separated list of 5 animals similar to: {animal}")model = ChatOpenAI(temperature=0.0)str_chain = prompt | model | StrOutputParser()for chunk in str_chain.stream({"animal": "bear"}):    print(chunk, end="", flush=True)lion, tiger, wolf, gorilla, pandastr_chain.invoke({"animal": "bear"})\'lion, tiger, wolf, gorilla, panda\'# This is a custom parser that splits an iterator of llm tokens# into a list of strings separated by commasdef split_into_list(input: Iterator[str]) -> Iterator[List[str]]:    # hold partial input until we get a comma    buffer = ""    for chunk in input:        # add current chunk to buffer        buffer += chunk        # while there are commas in the buffer        while "," in buffer:            # split buffer on comma            comma_index = buffer.index(",")            # yield everything before the comma            yield [buffer[:comma_index].strip()]            # save the rest for the next iteration            buffer = buffer[comma_index + 1 :]    # yield the last chunk    yield [buffer.strip()]list_chain = str_chain | split_into_listfor chunk in list_chain.stream({"animal": "bear"}):    print(chunk, flush=True)[\'lion\'][\'tiger\'][\'wolf\'][\'gorilla\'][\'panda\']list_chain.invoke({"animal": "bear"})[\'lion\', \'tiger\', \'wolf\', \'gorilla\', \'panda\']Async version\u200bfrom typing import AsyncIteratorasync def asplit_into_list(    input: AsyncIterator[str],) -> AsyncIterator[List[str]]:  # async def    buffer = ""    async for (        chunk    ) in input:  # `input` is a `async_generator` object, so use `async for`        buffer += chunk        while "," in buffer:            comma_index = buffer.index(",")            yield [buffer[:comma_index].strip()]            buffer = buffer[comma_index + 1 :]    yield [buffer.strip()]list_chain = str_chain | asplit_into_listasync for chunk in list_chain.astream({"animal": "bear"}):    print(chunk, flush=True)[\'lion\'][\'tiger\'][\'wolf\'][\'gorilla\'][\'panda\']await list_chain.ainvoke({"animal": "bear"})[\'lion\', \'tiger\', \'wolf\', \'gorilla\', \'panda\']Help us out by 
providing feedback on this documentation page:PreviousAdd fallbacksNextInspect your runnablesSync versionAsync versionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- OpenAIfrom unittest.mock import patchimport httpxfrom openai import RateLimitErrorrequest = httpx.Request("GET", "/")response = httpx.Response(200, request=request)error = RateLimitError("rate limit", response=response, body="")# Note that we set max_retries = 0 to avoid retrying on RateLimits, etcopenai_llm = ChatOpenAI(max_retries=0)anthropic_llm = ChatAnthropic()llm = openai_llm.with_fallbacks([anthropic_llm])# Let\'s use just the OpenAI LLm first, to show that we run into an errorwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(openai_llm.invoke("Why did the chicken cross the road?"))    except RateLimitError:        print("Hit error")Hit error# Now let\'s try with fallbacks to Anthropicwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(llm.invoke("Why did the chicken cross the road?"))    except RateLimitError:        print("Hit error")content=\' I don\\\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\\n\\n- To get to the other side!\\n\\n- It was too chicken to just stand there. \\n\\n- It wanted a change of scenery.\\n\\n- It wanted to show the possum it could be done.\\n\\n- It was on its way to a poultry farmers\\\' convention.\\n\\nThe joke plays on the double meaning of "the other side" - literally crossing the road to the other side, or the "other side" meaning the afterlife. So it\\\'s an anti-joke, with a silly or unexpected pun as the answer.\' additional_kwargs={} example=FalseWe can use our “LLM with Fallbacks” as we would a normal LLM.from langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_messages(    [        (            "system",            "You\'re a nice assistant who always includes a compliment in your response",        ),        ("human", "Why did the {animal} cross the road"),    ])chain = prompt | llmwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(chain.invoke({"animal": "kangaroo"}))    except RateLimitError:        print("Hit error")content=" I don\'t actually know why the kangaroo crossed the road, but I\'m happy to take a guess! Maybe the kangaroo was trying to get to the other side to find some tasty grass to eat. Or maybe it was trying to get away from a predator or other danger. Kangaroos do need to cross roads and other open areas sometimes as part of their normal activities. Whatever the reason, I\'m sure the kangaroo looked both ways before hopping across!" additional_kwargs={} example=FalseSpecifying errors to handle\u200bWe can also specify the errors to handle if we want to be more specific\nabout when the fallback is invoked:llm = openai_llm.with_fallbacks(    [anthropic_llm], exceptions_to_handle=(KeyboardInterrupt,))chain = prompt | llmwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(chain.invoke({"animal": "kangaroo"}))    except RateLimitError:        print("Hit error")Hit errorFallbacks for Sequences\u200bWe can also create fallbacks for sequences, that are sequences\nthemselves. Here we do that with two different models: ChatOpenAI and\nthen normal OpenAI (which does not use a chat model). 
Because OpenAI is --- NOT a chat model, you likely want a different prompt.# First let\'s create a chain with a ChatModel# We add in a string output parser here so the outputs between the two are the same typefrom langchain_core.output_parsers import StrOutputParserchat_prompt = ChatPromptTemplate.from_messages(    [        (            "system",            "You\'re a nice assistant who always includes a compliment in your response",        ),        ("human", "Why did the {animal} cross the road"),    ])# Here we\'re going to use a bad model name to easily create a chain that will errorchat_model = ChatOpenAI(model_name="gpt-fake")bad_chain = chat_prompt | chat_model | StrOutputParser()# Now lets create a chain with the normal OpenAI modelfrom langchain.prompts import PromptTemplatefrom langchain_openai import OpenAIprompt_template = """Instructions: You should always include a compliment in your response.Question: Why did the {animal} cross the road?"""prompt = PromptTemplate.from_template(prompt_template)llm = OpenAI()good_chain = prompt | llm# We can now create a final chain which combines the twochain = bad_chain.with_fallbacks([good_chain])chain.invoke({"animal": "turtle"})\'\\n\\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.\'Help us out by providing feedback on this documentation page:PreviousCreate a runnable with the `@chain` decoratorNextStream custom generator functionsHandling LLM API ErrorsSpecifying errors to handleFallbacks for SequencesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- runtime.Configuration Fields\u200bWith LLMs\u200bWith LLMs we can configure things like temperature%pip install --upgrade --quiet  langchain langchain-openaifrom langchain.prompts import PromptTemplatefrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAImodel = ChatOpenAI(temperature=0).configurable_fields(    temperature=ConfigurableField(        id="llm_temperature",        name="LLM Temperature",        description="The temperature of the LLM",    ))model.invoke("pick a random number")AIMessage(content=\'7\')model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")AIMessage(content=\'34\')We can also do this when its used as part of a chainprompt = PromptTemplate.from_template("Pick a random number above {x}")chain = prompt | modelchain.invoke({"x": 0})AIMessage(content=\'57\')chain.with_config(configurable={"llm_temperature": 0.9}).invoke({"x": 0})AIMessage(content=\'6\')With HubRunnables\u200bThis is useful to allow for switching of promptsfrom langchain.runnables.hub import HubRunnableprompt = HubRunnable("rlm/rag-prompt").configurable_fields(    owner_repo_commit=ConfigurableField(        id="hub_commit",        name="Hub Commit",        description="The Hub commit to pull from",    ))prompt.invoke({"question": "foo", "context": "bar"})ChatPromptValue(messages=[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don\'t know the answer, just say that you don\'t know. Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:")])prompt.with_config(configurable={"hub_commit": "rlm/rag-prompt-llama"}).invoke(    {"question": "foo", "context": "bar"})ChatPromptValue(messages=[HumanMessage(content="[INST]<<SYS>> You are an assistant for question-answering tasks. 
Use the following pieces of retrieved context to answer the question. If you don\'t know the answer, just say that you don\'t know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]")])Configurable Alternatives\u200bWith LLMs\u200bLet’s take a look at doing this with LLMsfrom langchain.prompts import PromptTemplatefrom langchain_community.chat_models import ChatAnthropicfrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAIllm = ChatAnthropic(temperature=0).configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="llm"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="anthropic",    # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`    openai=ChatOpenAI(),    # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")`    gpt4=ChatOpenAI(model="gpt-4"),    # You can add more configuration options here)prompt = PromptTemplate.from_template("Tell me a joke about {topic}")chain = prompt | llm# By default it will call Anthropicchain.invoke({"topic": "bears"})AIMessage(content=" Here\'s a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!")# We can use --- `.with_config(configurable={"llm": "openai"})` to specify an llm to usechain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})AIMessage(content="Sure, here\'s a bear joke for you:\\n\\nWhy don\'t bears wear shoes?\\n\\nBecause they already have bear feet!")# If we use the `default_key` then it uses the defaultchain.with_config(configurable={"llm": "anthropic"}).invoke({"topic": "bears"})AIMessage(content=" Here\'s a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!")With Prompts\u200bWe can do a similar thing, but alternate between promptsllm = ChatAnthropic(temperature=0)prompt = PromptTemplate.from_template(    "Tell me a joke about {topic}").configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="prompt"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="joke",    # This adds a new option, with name `poem`    poem=PromptTemplate.from_template("Write a short poem about {topic}"),    # You can add more configuration options here)chain = prompt | llm# By default it will write a jokechain.invoke({"topic": "bears"})AIMessage(content=" Here\'s a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!")# We can configure it write a poemchain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"})AIMessage(content=\' Here is a short poem about bears:\\n\\nThe bears awaken from their sleep\\nAnd lumber out into the deep\\nForests filled with trees so tall\\nForaging for food before nightfall \\nTheir furry coats and claws so sharp\\nSniffing for berries and fish to nab\\nLumbering about without a care\\nThe mighty grizzly and black bear\\nProud creatures, wild and free\\nRuling their domain majestically\\nWandering the woods they call their own\\nBefore returning to their dens alone\')With Prompts and LLMs\u200bWe can also have multiple things configurable! 
Here’s an example doing --- that with both prompts and LLMs.llm = ChatAnthropic(temperature=0).configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="llm"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="anthropic",    # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`    openai=ChatOpenAI(),    # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")`    gpt4=ChatOpenAI(model="gpt-4"),    # You can add more configuration options here)prompt = PromptTemplate.from_template(    "Tell me a joke about {topic}").configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="prompt"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="joke",    # This adds a new option, with name `poem`    poem=PromptTemplate.from_template("Write a short poem about {topic}"),    # You can add more configuration options here)chain = prompt | llm# We can configure it write a poem with OpenAIchain.with_config(configurable={"prompt": "poem", "llm": "openai"}).invoke(    {"topic": "bears"})AIMessage(content="In the forest, where tall trees sway,\\nA creature roams, both fierce and gray.\\nWith mighty paws and piercing eyes,\\nThe bear, a symbol of strength, defies.\\n\\nThrough snow-kissed mountains, it does roam,\\nA guardian of its woodland home.\\nWith fur so thick, a shield of might,\\nIt braves the coldest winter night.\\n\\nA gentle giant, yet wild and free,\\nThe bear commands respect, you see.\\nWith every step, it leaves a trace,\\nOf untamed power and ancient grace.\\n\\nFrom honeyed feast to salmon\'s leap,\\nIt takes its place, in nature\'s keep.\\nA symbol of untamed delight,\\nThe bear, a wonder, day and night.\\n\\nSo let us honor this noble beast,\\nIn forests where its soul finds peace.\\nFor in its presence, we come to know,\\nThe untamed spirit that in us also flows.")# We can always just configure only one if we wantchain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})AIMessage(content="Sure, here\'s a bear joke for you:\\n\\nWhy don\'t bears wear shoes?\\n\\nBecause they have bear feet!")Saving configurations\u200bWe can also easily save configured chains as their own objectsopenai_joke = chain.with_config(configurable={"llm": "openai"})openai_joke.invoke({"topic": "bears"})AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!")Help us out by providing feedback on this documentation page:PreviousBind runtime argsNextCreate a runnable with the `@chain` decoratorConfiguration FieldsWith LLMsWith HubRunnablesConfigurable AlternativesWith LLMsWith PromptsWith Prompts and LLMsSaving configurationsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. 
--- functions to a compatible OpenAI model:function = {    "name": "solver",    "description": "Formulates and solves an equation",    "parameters": {        "type": "object",        "properties": {            "equation": {                "type": "string",                "description": "The algebraic expression of the equation",            },            "solution": {                "type": "string",                "description": "The solution to the equation",            },        },        "required": ["equation", "solution"],    },}# Need gpt-4 to solve this one correctlyprompt = ChatPromptTemplate.from_messages(    [        (            "system",            "Write out the following equation using algebraic symbols then solve it.",        ),        ("human", "{equation_statement}"),    ])model = ChatOpenAI(model="gpt-4", temperature=0).bind(    function_call={"name": "solver"}, functions=[function])runnable = {"equation_statement": RunnablePassthrough()} | prompt | modelrunnable.invoke("x raised to the third plus seven equals 12")AIMessage(content=\'\', additional_kwargs={\'function_call\': {\'name\': \'solver\', \'arguments\': \'{\\n"equation": "x^3 + 7 = 12",\\n"solution": "x = ∛5"\\n}\'}}, example=False)Attaching OpenAI tools\u200btools = [    {        "type": "function",        "function": {            "name": "get_current_weather",            "description": "Get the current weather in a given location",            "parameters": {                "type": "object",                "properties": {                    "location": {                        "type": "string",                        "description": "The city and state, e.g. San Francisco, CA",                    },                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},                },                "required": ["location"],            },        },    }]model = ChatOpenAI(model="gpt-3.5-turbo-1106").bind(tools=tools)model.invoke("What\'s the weather in SF, NYC and LA?")AIMessage(content=\'\', additional_kwargs={\'tool_calls\': [{\'id\': \'call_zHN0ZHwrxM7nZDdqTp6dkPko\', \'function\': {\'arguments\': \'{"location": "San Francisco, CA", "unit": "celsius"}\', \'name\': \'get_current_weather\'}, \'type\': \'function\'}, {\'id\': --- \'call_aqdMm9HBSlFW9c9rqxTa7eQv\', \'function\': {\'arguments\': \'{"location": "New York, NY", "unit": "celsius"}\', \'name\': \'get_current_weather\'}, \'type\': \'function\'}, {\'id\': \'call_cx8E567zcLzYV2WSWVgO63f1\', \'function\': {\'arguments\': \'{"location": "Los Angeles, CA", "unit": "celsius"}\', \'name\': \'get_current_weather\'}, \'type\': \'function\'}]})Help us out by providing feedback on this documentation page:PreviousRunnableBranch: Dynamically route logic based on inputNextConfigure chain internals at runtimeAttaching OpenAI functionsAttaching OpenAI toolsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. 
--- The LCEL "Get started" page is also in the corpus. Its basic example chains a prompt template, a chat model and an output parser into chain = prompt | model | output_parser; the | operator works like a Unix pipe, feeding each component's output in as the next component's input. The page then walks through the pieces: the prompt is a BasePromptTemplate that takes a dictionary of template variables and produces a PromptValue (usable by both LLMs and chat models), and the model turns that PromptValue into a BaseMessage, or into a plain string if it is an LLM rather than a ChatModel.
The StrOutputParser simply converts its input into a string. The complete pipeline is therefore: the user input {"topic": "ice cream"} goes to the prompt, which builds a PromptValue; the model turns that into a ChatMessage; and the output parser turns the message into the Python string returned by invoke. Smaller slices such as prompt or prompt | model can always be invoked on their own to inspect intermediate results. The page closes with a retrieval-augmented generation example that injects context from a DocArrayInMemorySearch retriever.
The retriever is itself a runnable, so it can be chained with other components or invoked separately; a RunnableParallel prepares the prompt inputs by running the retriever under the "context" key while RunnablePassthrough copies the user's question under the "question" key, and the rest of the chain (prompt, model, output parser) proceeds as before. --- The cookbook's RAG page repeats this pattern with a FAISS vector store and then extends it with itemgetter, pulling "question" and "language" out of the input dict so the answer can be returned in a requested language.
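A condensed sketch of the basic RAG chain those pages build; the toy document and question come from the loaded text, and the example is independent of the RAPTOR index constructed in this tutorial:

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}"""
prompt = ChatPromptTemplate.from_template(template)

# The dict literal is implicitly converted to a RunnableParallel:
# the retriever fills "context" while RunnablePassthrough forwards the question.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)
chain.invoke("where did harrison work?")  # 'Harrison worked at Kensho.'
```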
It then builds a conversational retrieval chain: a condense-question prompt rewrites the follow-up question into a standalone question given the chat history, a small _combine_documents helper formats and joins the retrieved documents, and the standalone question is routed both to the retriever (to build the context) and to the answer prompt. The page then shows how to use memory with this chain; the memory has to be managed outside the chain itself.
To also return the retrieved source documents, the page layers ConversationBufferMemory on top: RunnablePassthrough.assign loads the chat history, a standalone-question step condenses the follow-up, the retriever fetches the documents, and a final map returns both the answer and the documents; the memory is not saved automatically, so memory.save_context has to be called after each turn.
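A condensed sketch of the conversational retrieval chain from that page (without the memory wrapper); it reuses the retriever from the previous sketch, and the prompts are shortened versions of the ones shown in the loaded text:

```python
from operator import itemgetter

from langchain_core.messages import get_buffer_string
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate, format_document
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI

CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(
    """Given the following conversation and a follow up question, rephrase the
follow up question to be a standalone question, in its original language.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
)
ANSWER_PROMPT = ChatPromptTemplate.from_template(
    """Answer the question based only on the following context:
{context}

Question: {question}"""
)
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template("{page_content}")


def _combine_documents(docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, sep="\n\n"):
    # Format each retrieved document and join them into one context string.
    return sep.join(format_document(doc, document_prompt) for doc in docs)


# Step 1: condense the follow-up question into a standalone question.
_inputs = RunnableParallel(
    standalone_question=RunnablePassthrough.assign(
        chat_history=lambda x: get_buffer_string(x["chat_history"])
    )
    | CONDENSE_QUESTION_PROMPT
    | ChatOpenAI(temperature=0)
    | StrOutputParser(),
)
# Step 2: retrieve context for the standalone question and answer it.
_context = {
    "context": itemgetter("standalone_question") | retriever | _combine_documents,
    "question": lambda x: x["standalone_question"],
}
conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()

conversational_qa_chain.invoke(
    {"question": "where did harrison work?", "chat_history": []}
)
```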
--- The "Managing prompt size" page builds an agent that can search Wikipedia: a WikipediaQueryRun tool, a prompt with an agent_scratchpad placeholder and ChatOpenAI bound to the tool via bind_functions are composed into an AgentExecutor, and a multi-step question about the current US president is used to show how quickly intermediate tool outputs inflate the prompt when their size is not managed.
--- The "Prompt + LLM" page shows how to parse function-call output directly: JsonOutputFunctionsParser returns the full arguments dict (for a joke, its setup and punchline), JsonKeyOutputFunctionsParser(key_name="setup") extracts a single key, and a RunnableParallel map, or simply a dict such as {"foo": RunnablePassthrough()}, lets the chain be invoked with a bare string instead of an input dict.
--- The "Multiple chains" page demonstrates branching and merging: a planner chain generates a base argument, two parallel chains list its pros and cons, and a final responder folds the critique back into a revised answer, so the computation graph splits after the planner and joins again before the final response. --- The "Routing by semantic similarity" page embeds a set of candidate prompts and routes each query to the most relevant prompt.
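A sketch of the branch-and-merge chain condensed from that page:

```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

planner = (
    ChatPromptTemplate.from_template("Generate an argument about: {input}")
    | ChatOpenAI()
    | StrOutputParser()
    | {"base_response": RunnablePassthrough()}
)
arguments_for = (
    ChatPromptTemplate.from_template("List the pros or positive aspects of {base_response}")
    | ChatOpenAI()
    | StrOutputParser()
)
arguments_against = (
    ChatPromptTemplate.from_template("List the cons or negative aspects of {base_response}")
    | ChatOpenAI()
    | StrOutputParser()
)
final_responder = (
    ChatPromptTemplate.from_messages(
        [
            ("ai", "{original_response}"),
            ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"),
            ("system", "Generate a final response given the critique"),
        ]
    )
    | ChatOpenAI()
    | StrOutputParser()
)

# The graph branches after the planner and merges again in the final responder.
chain = (
    planner
    | {
        "results_1": arguments_for,
        "results_2": arguments_against,
        "original_response": itemgetter("base_response"),
    }
    | final_responder
)
chain.invoke({"input": "scrum"})
```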
Here’s a very simple example from that page: a physics template and a math template are embedded with OpenAIEmbeddings, a prompt_router function embeds the incoming query and picks the template with the highest cosine similarity, and a RunnableLambda wraps the router inside the chain, so "What's a black hole" is answered with the physics persona and "What's a path integral" with the math persona.
The sample outputs confirm the routing: the black-hole question gets a concise physics explanation, and the path-integral question gets a step-by-step mathematical one.
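A condensed sketch of the semantic-similarity router from that page; the two persona templates are shortened versions of the ones in the loaded text:

```python
from langchain.utils.math import cosine_similarity
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

physics_template = """You are a very smart physics professor. \
You answer physics questions concisely and admit when you don't know.

Here is a question:
{query}"""

math_template = """You are a very good mathematician. You answer math questions \
by breaking hard problems into their component parts.

Here is a question:
{query}"""

embeddings = OpenAIEmbeddings()
prompt_templates = [physics_template, math_template]
prompt_embeddings = embeddings.embed_documents(prompt_templates)


def prompt_router(input):
    # Embed the query and pick the most similar prompt template.
    query_embedding = embeddings.embed_query(input["query"])
    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
    most_similar = prompt_templates[similarity.argmax()]
    return PromptTemplate.from_template(most_similar)


chain = (
    {"query": RunnablePassthrough()}
    | RunnableLambda(prompt_router)
    | ChatOpenAI()
    | StrOutputParser()
)
print(chain.invoke("What's a black hole"))
```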
--- The "Why use LCEL" page notes that setting the LANGCHAIN_API_KEY and LANGCHAIN_TRACING_V2 environment variables is enough to log every chain trace to LangSmith, and it contrasts hand-written fallback logic (try/except wrappers around each sync, async and batch variant) with chain.with_fallbacks([anthropic_chain]), which gives the fallback behaviour to invoke, ainvoke and batch in one line. --- The streaming page then covers callback propagation: a custom @tool that invokes another runnable without forwarding its callbacks produces an event stream in which the inner runnable is invisible, while a tool that passes callbacks through surfaces the inner on_chain_start/on_chain_end events as well.
Runnables invoked from inside RunnableLambda functions or @chain-decorated functions get their callbacks propagated automatically, so astream_events on a reverse_and_double helper shows the events of the nested reverse_word runnable without any extra plumbing.
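A sketch of that automatic propagation, condensed from the page; the asyncio wrapper is added here so the snippet can run as a script:

```python
import asyncio

from langchain_core.runnables import RunnableLambda, chain


def reverse_word(word: str):
    return word[::-1]


reverse_word = RunnableLambda(reverse_word)


@chain
async def reverse_and_double(word: str):
    # reverse_word is invoked from inside an @chain-decorated function,
    # so its callbacks (and therefore its events) are propagated automatically.
    return await reverse_word.ainvoke(word) * 2


async def main():
    async for event in reverse_and_double.astream_events("1234", version="v1"):
        print(event["event"], event["name"])


asyncio.run(main())
```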
--- The "Interface" page documents the Runnable protocol that every LCEL component implements: stream, invoke and batch, their async counterparts astream, ainvoke and abatch, plus astream_log and the beta astream_events for streaming intermediate steps. Input and output types vary by component:

| Component | Input type | Output type |
| --- | --- | --- |
| Prompt | dictionary | PromptValue |
| ChatModel | single string, list of chat messages, or a PromptValue | ChatMessage |
| LLM | single string, list of chat messages, or a PromptValue | string |
| OutputParser | the output of an LLM or ChatModel | depends on the parser |
| Retriever | single string | list of Documents |
| Tool | single string or dictionary, depending on the tool | depends on the tool |

Every runnable also exposes input_schema and output_schema, Pydantic models auto-generated from its structure.
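A minimal sketch of those interface methods on the page's joke chain; the .schema() call assumes the Pydantic v1 models that this generation of langchain-core exposes:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | ChatOpenAI()

# invoke: one input in, one output out
chain.invoke({"topic": "ice cream"})

# batch: a list of inputs in, a list of outputs out
chain.batch([{"topic": "ice cream"}, {"topic": "spaghetti"}])

# stream: iterate over chunks of the response as they are generated
for chunk in chain.stream({"topic": "ice cream"}):
    print(chunk.content, end="", flush=True)

# Auto-generated Pydantic schema describing the expected input.
print(chain.input_schema.schema())
```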
--- The "RunnableBranch" page covers routing, i.e. non-deterministic chains where the output of one step decides the next step; routing helps provide structure and consistency around interactions with LLMs. There are two ways to route: conditionally returning runnables from a RunnableLambda (recommended) or using a RunnableBranch. The example first classifies an incoming question as being about LangChain, Anthropic or Other, defines one sub-chain per category, and then routes with a plain route(info) function wrapped in a RunnableLambda, keyed on the classifier's output.
The RunnableBranch alternative is initialized with a list of (condition, runnable) pairs plus a default runnable; it runs the first branch whose condition evaluates to True on the input and falls back to the default when nothing matches. Since it offers nothing a custom function cannot do, the page recommends the custom-function approach.
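A condensed sketch of the recommended custom-function routing; the sub-chain prompts are shortened versions of the page's expert personas:

```python
from langchain_community.chat_models import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

# Step 1: classify the question as LangChain / Anthropic / Other.
classifier = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about \
`LangChain`, `Anthropic`, or `Other`. Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | ChatAnthropic()
    | StrOutputParser()
)

# Step 2: one sub-chain per category.
langchain_chain = PromptTemplate.from_template(
    "You are an expert in langchain. Respond to: {question}"
) | ChatAnthropic()
anthropic_chain = PromptTemplate.from_template(
    "You are an expert in anthropic. Respond to: {question}"
) | ChatAnthropic()
general_chain = PromptTemplate.from_template("Respond to: {question}") | ChatAnthropic()


# Step 3: route on the classifier's output with a plain function.
def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    return general_chain


full_chain = {"topic": classifier, "question": lambda x: x["question"]} | RunnableLambda(route)
full_chain.invoke({"question": "how do I use Anthropic?"})
```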
--- The "RunnablePassthrough" page explains passing data through: RunnablePassthrough() on its own forwards the input unchanged, while RunnablePassthrough.assign(...) forwards the input with extra keys added, and both are typically used inside a RunnableParallel map.
In the page's example, RunnableParallel(passed=RunnablePassthrough(), extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3), modified=lambda x: x["num"] + 1) invoked with {"num": 1} returns the original input under "passed", the input plus a "mult" key under "extra", and 2 under "modified". The same pattern powers the retrieval example, where the retriever fills "context" and RunnablePassthrough forwards the question. --- The "Add message history (memory)" page introduces RunnableWithMessageHistory, which wraps another runnable and manages its chat message history; it accepts runnables whose inputs and outputs are messages, sequences of messages, or dicts containing them.
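The passthrough example from that page, reproduced as a short runnable sketch:

```python
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

runnable = RunnableParallel(
    # Forward the input unchanged.
    passed=RunnablePassthrough(),
    # Forward the input and add a derived key.
    extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3),
    # Replace the input with a computed value.
    modified=lambda x: x["num"] + 1,
)
runnable.invoke({"num": 1})
# {'passed': {'num': 1}, 'extra': {'num': 1, 'mult': 3}, 'modified': 2}
```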
The page first constructs a runnable that takes a dict and returns a message (a chat prompt with a MessagesPlaceholder for the history, piped into ChatOpenAI), then wraps it together with a get_session_history callable backed by an in-memory dict of ChatMessageHistory objects; input_messages_key names the latest-input key, history_messages_key names the key the historical messages are injected under, and the session is selected at invocation time via config={"configurable": {"session_id": ...}}.
The history lookup can also be keyed on several fields by passing ConfigurableFieldSpec objects (for example a user_id and a conversation_id) through history_factory_config, and the page lists variants for runnables with other signatures, such as messages in and a dict out using output_messages_key.
Like de Beauvoir, Sartre believed that individuals have the ability to exercise their freedom and make choices in the face of external and internal constraints.\\n\\nWhile there may be some nuanced differences in their philosophical writings, overall, de Beauvoir and Sartre shared a similar belief in the existence of free will and the importance of individual agency in shaping one\\\'s own life.\')}Messages input, messages output\u200bRunnableWithMessageHistory(    ChatOpenAI(),    get_session_history,)Dict with single key for all messages input, messages output\u200bfrom operator import itemgetterRunnableWithMessageHistory(    itemgetter("input_messages") | ChatOpenAI(),    get_session_history,    input_messages_key="input_messages",)Persistent storage\u200bIn many cases it is preferable to persist conversation histories.\nRunnableWithMessageHistory is agnostic as to how the\nget_session_history callable retrieves its chat message histories. See\nhere\nfor an example using a local filesystem. Below we demonstrate how one\ncould use Redis. Check out the memory\nintegrations page for\nimplementations of chat message histories using other providers.Setup\u200bWe’ll need to install Redis if it’s not installed already:%pip install --upgrade --quiet redisStart a local Redis Stack server if we don’t have an existing Redis\ndeployment to connect to:docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latestREDIS_URL = "redis://localhost:6379/0"LangSmith\u200bLangSmith is especially useful for something like message history\ninjection, where it can be hard to otherwise understand what the inputs\nare to various parts of the chain.Note that LangSmith is not needed, but it is helpful. If you do want to\nuse LangSmith, after you sign up at the link above, make sure to\nuncoment the below and set your environment variables to start logging --- traces:# os.environ["LANGCHAIN_TRACING_V2"] = "true"# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()Updating the message history implementation just requires us to define a\nnew callable, this time returning an instance of\nRedisChatMessageHistory:from langchain_community.chat_message_histories import RedisChatMessageHistorydef get_message_history(session_id: str) -> RedisChatMessageHistory:    return RedisChatMessageHistory(session_id, url=REDIS_URL)with_message_history = RunnableWithMessageHistory(    runnable,    get_message_history,    input_messages_key="input",    history_messages_key="history",)We can invoke as before:with_message_history.invoke(    {"ability": "math", "input": "What does cosine mean?"},    config={"configurable": {"session_id": "foobar"}},)AIMessage(content=\'Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse in a right triangle.\')with_message_history.invoke(    {"ability": "math", "input": "What\'s its inverse"},    config={"configurable": {"session_id": "foobar"}},)AIMessage(content=\'The inverse of cosine is the arccosine function, denoted as acos or cos^-1, which gives the angle corresponding to a given cosine value.\')Langsmith\ntraceLooking at the Langsmith trace for the second call, we can see that when\nconstructing the prompt, a “history” variable has been injected which is\na list of two messages (our first input and first output).Help us out by providing feedback on this documentation page:PreviousInspect your runnablesNextCookbookIn-memoryExamples with runnables of different signaturesPersistent 
storageSetupLangSmithCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- --- \n\n\n\n\n\n\n\nRunnableParallel: Manipulating data | ?️? Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toRunnableParallel: Manipulating dataOn this pageManipulating inputs & outputRunnableParallel can be useful for manipulating the output of one\nRunnable to match the input format of the next Runnable in a sequence.Here the input to prompt is expected to be a map with keys “context” and\n“question”. The user input is just the question. So we need to get the\ncontext using our retriever and passthrough the user input under the\n“question” key.%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts(    ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()retrieval_chain = (    {"context": retriever, "question": RunnablePassthrough()}    | prompt    | model    | StrOutputParser())retrieval_chain.invoke("where did harrison work?")\'Harrison worked at Kensho.\'tipNote that when composing a RunnableParallel with another Runnable we\ndon’t even need to wrap our dictionary in the RunnableParallel class\n—\xa0the type conversion is handled for us. In the context of a chain,\nthese are equivalent:{"context": retriever, "question": RunnablePassthrough()}RunnableParallel({"context": retriever, "question": RunnablePassthrough()})RunnableParallel(context=retriever, question=RunnablePassthrough())Using itemgetter as shorthand\u200bNote that you can use Python’s itemgetter as shorthand to extract data\nfrom the map when combining with RunnableParallel. You can find more\ninformation about itemgetter in the Python\nDocumentation.In the example below, we use itemgetter to extract specific keys from --- --- \n\n\n\n\n\n\n\nInspect your runnables | ?️? 
Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toInspect your runnablesOn this pageInspect your runnablesOnce you create a runnable with LCEL, you may often want to inspect it\nto get a better sense for what is going on. This notebook covers some\nmethods for doing so.First, let’s create an example LCEL. We will create one that does\nretrieval%pip install --upgrade --quiet  langchain langchain-openai faiss-cpu tiktokenfrom langchain.prompts import ChatPromptTemplatefrom langchain.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnableLambda, RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts(    ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()chain = (    {"context": retriever, "question": RunnablePassthrough()}    | prompt    | model    | StrOutputParser())Get a graph\u200bYou can get a graph of the runnablechain.get_graph()Print a graph\u200bWhile that is not super legible, you can print it to get a display --- --- \n\n\n\n\n\n\n\nStream custom generator functions | ?️? Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toStream custom generator functionsOn this pageStream custom generator functionsYou can use generator functions (ie. functions that use the yield\nkeyword, and behave like iterators) in a LCEL pipeline.The signature of these generators should be\nIterator[Input] -> Iterator[Output]. Or for async generators:\nAsyncIterator[Input] -> AsyncIterator[Output].These are useful for: - implementing a custom output parser - modifying --- --- \n\n\n\n\n\n\n\nRunnableLambda: Run Custom Functions | ?️? 
Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toRunnableLambda: Run Custom FunctionsOn this pageRun custom functionsYou can use arbitrary functions in the pipeline.Note that all inputs to these functions need to be a SINGLE argument. If\nyou have a function that accepts multiple arguments, you should write a\nwrapper that accepts a single input and unpacks it into multiple\nargument.%pip install –upgrade –quiet langchain langchain-openaifrom operator import itemgetterfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnableLambdafrom langchain_openai import ChatOpenAIdef length_function(text):    return len(text)def _multiple_length_function(text1, text2):    return len(text1) * len(text2)def multiple_length_function(_dict):    return _multiple_length_function(_dict["text1"], _dict["text2"])prompt = ChatPromptTemplate.from_template("what is {a} + {b}")model = ChatOpenAI()chain1 = prompt | modelchain = (    {        "a": itemgetter("foo") | RunnableLambda(length_function),        "b": {"text1": itemgetter("foo"), "text2": itemgetter("bar")}        | RunnableLambda(multiple_length_function),    }    | prompt    | model)chain.invoke({"foo": "bar", "bar": "gah"})AIMessage(content=\'3 + 9 equals 12.\')Accepting a Runnable Config\u200bRunnable lambdas can optionally accept a\nRunnableConfig,\nwhich they can use to pass callbacks, tags, and other configuration --- information to nested runs.from langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnableConfigimport jsondef parse_or_fix(text: str, config: RunnableConfig):    fixing_chain = (        ChatPromptTemplate.from_template(            "Fix the following text:\\n\\n```text\\n{input}\\n```\\nError: {error}"            " Don\'t narrate, just respond with the fixed data."        
)        | ChatOpenAI()        | StrOutputParser()    )    for _ in range(3):        try:            return json.loads(text)        except Exception as e:            text = fixing_chain.invoke({"input": text, "error": e}, config)    return "Failed to parse"from langchain.callbacks import get_openai_callbackwith get_openai_callback() as cb:    output = RunnableLambda(parse_or_fix).invoke(        "{foo: bar}", {"tags": ["my-tag"], "callbacks": [cb]}    )    print(output)    print(cb){\'foo\': \'bar\'}Tokens Used: 65    Prompt Tokens: 56    Completion Tokens: 9Successful Requests: 1Total Cost (USD): $0.00010200000000000001Help us out by providing feedback on this documentation page:PreviousRunnablePassthrough: Passing data throughNextRunnableBranch: Dynamically route logic based on inputAccepting a Runnable ConfigCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc. --- --- \n\n\n\n\n\n\n\nAdd fallbacks | ?️? Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toAdd fallbacksOn this pageAdd fallbacksThere are many possible points of failure in an LLM application, whether\nthat be issues with LLM API’s, poor model outputs, issues with other\nintegrations, etc. Fallbacks help you gracefully handle and isolate\nthese issues.Crucially, fallbacks can be applied not only on the LLM level but on the\nwhole runnable level.Handling LLM API Errors\u200bThis is maybe the most common use case for fallbacks. A request to an\nLLM API can fail for a variety of reasons - the API could be down, you\ncould have hit rate limits, any number of things. Therefore, using\nfallbacks can help protect against these types of things.IMPORTANT: By default, a lot of the LLM wrappers catch errors and retry.\nYou will most likely want to turn those off when working with fallbacks.\nOtherwise the first wrapper will keep on retrying and not failing.%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_community.chat_models import ChatAnthropicfrom langchain_openai import ChatOpenAIFirst, let’s mock out what happens if we hit a RateLimitError from --- --- \n\n\n\n\n\n\n\nCreate a runnable with the `@chain` decorator | ?️? 
Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toCreate a runnable with the `@chain` decoratorCreate a runnable with the `@chain` decoratorYou can also turn an arbitrary function into a chain by adding a\n@chain decorator. This is functionaly equivalent to wrapping in a\nRunnableLambda.This will have the benefit of improved observability by tracing your\nchain correctly. Any calls to runnables inside this function will be\ntraced as nested childen.It will also allow you to use this as any other runnable, compose it in\nchain, etc.Let’s take a look at this in action!%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import chainfrom langchain_openai import ChatOpenAIprompt1 = ChatPromptTemplate.from_template("Tell me a joke about {topic}")prompt2 = ChatPromptTemplate.from_template("What is the subject of this joke: {joke}")@chaindef custom_chain(text):    prompt_val1 = prompt1.invoke({"topic": text})    output1 = ChatOpenAI().invoke(prompt_val1)    parsed_output1 = StrOutputParser().invoke(output1)    chain2 = prompt2 | ChatOpenAI() | StrOutputParser()    return chain2.invoke({"joke": parsed_output1})custom_chain is now a runnable, meaning you will need to use invokecustom_chain.invoke("bears")\'The subject of this joke is bears.\'If you check out your LangSmith traces, you should see a custom_chain\ntrace in there, with the calls to OpenAI nested underneathHelp us out by providing feedback on this documentation page:PreviousConfigure chain internals at runtimeNextAdd fallbacksCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n\n\n\n --- \n\n\n\n\n\n\n\nConfigure chain internals at runtime | ?️? 
Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toConfigure chain internals at runtimeOn this pageConfigure chain internals at runtimeOftentimes you may want to experiment with, or even expose to the end\nuser, multiple different ways of doing things. In order to make this\nexperience as easy as possible, we have defined two methods.First, a configurable_fields method. This lets you configure\nparticular fields of a runnable.Second, a configurable_alternatives method. With this method, you can\nlist out alternatives for any particular runnable that can be set during --- --- \n\n\n\n\n\n\n\nBind runtime args | ?️? Langchain --- Skip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toBind runtime argsOn this pageBind runtime argsSometimes we want to invoke a Runnable within a Runnable sequence with\nconstant arguments that are not part of the output of the preceding\nRunnable in the sequence, and which are not part of the user input. We\ncan use Runnable.bind() to easily pass these arguments in.Suppose we have a simple prompt + model sequence:%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAIprompt = ChatPromptTemplate.from_messages(    [        (            "system",            "Write out the following equation using algebraic symbols then solve it. 
Use the format\\n\\nEQUATION:...\\nSOLUTION:...\\n\\n",        ),        ("human", "{equation_statement}"),    ])model = ChatOpenAI(temperature=0)runnable = (    {"equation_statement": RunnablePassthrough()} | prompt | model | StrOutputParser())print(runnable.invoke("x raised to the third plus seven equals 12"))EQUATION: x^3 + 7 = 12SOLUTION:Subtracting 7 from both sides of the equation, we get:x^3 = 12 - 7x^3 = 5Taking the cube root of both sides, we get:x = ∛5Therefore, the solution to the equation x^3 + 7 = 12 is x = ∛5.and want to call the model with certain stop words:runnable = (    {"equation_statement": RunnablePassthrough()}    | prompt    | model.bind(stop="SOLUTION")    | StrOutputParser())print(runnable.invoke("x raised to the third plus seven equals 12"))EQUATION: x^3 + 7 = 12Attaching OpenAI functions\u200bOne particularly useful application of binding is to attach OpenAI --- --- \n\n\n\n\n\n\n\nHow to | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toHow to?️ RunnableParallel: Manipulating datamanipulating-inputs-output}?️ RunnablePassthrough: Passing data throughpassing-data-through}?️ RunnableLambda: Run Custom Functionsrun-custom-functions}?️ RunnableBranch: Dynamically route logic based on inputdynamically-route-logic-based-on-input}?️ Bind runtime argsSometimes we want to invoke a Runnable within a Runnable sequence with?️ Configure chain internals at runtimeOftentimes you may want to experiment with, or even expose to the end?️ Create a runnable with the `@chain` decoratorYou can also turn an arbitrary function into a chain by adding a?️ Add fallbacksThere are many possible points of failure in an LLM application, whether?️ Stream custom generator functionsYou can use generator functions (ie. functions that use the yield?️ Inspect your runnablesOnce you create a runnable with LCEL, you may often want to inspect it?️ Add message history (memory)The RunnableWithMessageHistory lets us add message history to certainHelp us out by providing feedback on this documentation page:PreviousStreamingNextRunnableParallel: Manipulating dataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n\n\n\n --- \n\n\n\n\n\n\n\nHow to | ?️? 
Also included are the "Streaming" page (streaming partial JSON through JsonOutputParser, the astream_events API, filtering events by name, type or tags, and how non-streaming components affect the stream) and the "Interface" page (input and output schemas, stream/invoke/batch, and the event reference table). Every page comes back with its site navigation, footer and embedded code samples intact, and the pages are joined with "---" separators, so the corpus is long, noisy and very uneven in length.
pass/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.  warn_beta(--Retrieved the following documents:[Document(page_content=\'harrison worked at kensho\')]Streaming LLM:|H|arrison| worked| at| Kens|ho|.||Done streaming LLM.Async Stream Intermediate Steps\u200bAll runnables also have a method .astream_log() which is used to\nstream (as they happen) all or part of the intermediate steps of your\nchain/sequence.This is useful to show progress to the user, to use intermediate\nresults, or to debug your chain.You can stream all steps (default) or include/exclude steps by name,\ntags or metadata.This method yields JSONPatch ops that when\napplied in the same order as received build up the RunState.class LogEntry(TypedDict):    id: str    """ID of the sub-run."""    name: str    """Name of the object being run."""    type: str    """Type of the object being run, eg. prompt, chain, llm, etc."""    tags: List[str]    """List of tags for the run."""    metadata: Dict[str, Any]    """Key-value pairs of metadata for the run."""    start_time: str    """ISO-8601 timestamp of when the run started."""    streamed_output_str: List[str]    """List of LLM tokens streamed by this run, if applicable."""    final_output: Optional[Any]    """Final output of this run.    Only available after the run has finished successfully."""    end_time: Optional[str]    """ISO-8601 timestamp of when the run ended.    Only available after the run has finished."""class RunState(TypedDict):    id: str    """ID of the run."""    streamed_output: List[Any]    """List of output chunks streamed by Runnable.stream()"""    final_output: Optional[Any]    """Final output of the run, usually the result of aggregating (`+`) streamed_output.    Only available after the run has finished successfully."""    logs: Dict[str, LogEntry]    """Map of run names to sub-runs. If filters were supplied, this list will    contain only the runs that matched the filters."""Streaming JSONPatch chunks\u200bThis is useful eg. to stream the JSONPatch in an HTTP server, and then\napply the ops on the client to rebuild the run state there. 
See\nLangServe for tooling to --- make it easier to build a webserver from any Runnable.async for chunk in retrieval_chain.astream_log(    "where did harrison work?", include_names=["Docs"]):    print("-" * 40)    print(chunk)----------------------------------------RunLogPatch({\'op\': \'replace\',  \'path\': \'\',  \'value\': {\'final_output\': None,            \'id\': \'82e9b4b1-3dd6-4732-8db9-90e79c4da48c\',            \'logs\': {},            \'name\': \'RunnableSequence\',            \'streamed_output\': [],            \'type\': \'chain\'}})----------------------------------------RunLogPatch({\'op\': \'add\',  \'path\': \'/logs/Docs\',  \'value\': {\'end_time\': None,            \'final_output\': None,            \'id\': \'9206e94a-57bd-48ee-8c5e-fdd1c52a6da2\',            \'metadata\': {},            \'name\': \'Docs\',            \'start_time\': \'2024-01-19T22:33:55.902+00:00\',            \'streamed_output\': [],            \'streamed_output_str\': [],            \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],            \'type\': \'retriever\'}})----------------------------------------RunLogPatch({\'op\': \'add\',  \'path\': \'/logs/Docs/final_output\',  \'value\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]}}, {\'op\': \'add\',  \'path\': \'/logs/Docs/end_time\',  \'value\': \'2024-01-19T22:33:56.064+00:00\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \'\'}, {\'op\': \'replace\', \'path\': \'/final_output\', \'value\': \'\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \'H\'}, {\'op\': \'replace\', \'path\': \'/final_output\', \'value\': \'H\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \'arrison\'}, {\'op\': \'replace\', \'path\': \'/final_output\', \'value\': \'Harrison\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \' worked\'}, {\'op\': \'replace\', \'path\': \'/final_output\', \'value\': \'Harrison worked\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \' at\'}, {\'op\': \'replace\', \'path\': \'/final_output\', \'value\': \'Harrison worked at\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \' Kens\'}, {\'op\': \'replace\', \'path\': \'/final_output\', \'value\': \'Harrison worked at Kens\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \'ho\'}, {\'op\': \'replace\',  \'path\': \'/final_output\',  \'value\': \'Harrison worked at Kensho\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \'.\'}, {\'op\': \'replace\',  \'path\': \'/final_output\',  \'value\': \'Harrison worked at --- Kensho.\'})----------------------------------------RunLogPatch({\'op\': \'add\', \'path\': \'/streamed_output/-\', \'value\': \'\'})Streaming the incremental RunState\u200bYou can simply pass diff=False to get incremental values of --- RunState. 
You get more verbose output with more repetitive parts.async for chunk in retrieval_chain.astream_log(    "where did harrison work?", include_names=["Docs"], diff=False):    print("-" * 70)    print(chunk)----------------------------------------------------------------------RunLog({\'final_output\': None, \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {}, \'name\': \'RunnableSequence\', \'streamed_output\': [], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': None, \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': None,                   \'final_output\': None,                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': None, \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': --- \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'H\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': 
[\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\', --- \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\', \'arrison\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison worked\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\', \'arrison\', \' worked\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison worked at\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'], --- \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\', \'arrison\', \' worked\', \' at\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison worked at Kens\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', 
\'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\', \'arrison\', \' worked\', \' at\', \' Kens\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison worked at Kensho\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\', \'arrison\', \' worked\', \' at\', \' Kens\', \'ho\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison worked at Kensho.\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': --- \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\', \'H\', \'arrison\', \' worked\', \' at\', \' Kens\', \'ho\', \'.\'], \'type\': \'chain\'})----------------------------------------------------------------------RunLog({\'final_output\': \'Harrison worked at Kensho.\', \'id\': \'431d1c55-7c50-48ac-b3a2-2f5ba5f35172\', \'logs\': {\'Docs\': {\'end_time\': \'2024-01-19T22:33:57.120+00:00\',                   \'final_output\': {\'documents\': [Document(page_content=\'harrison worked at kensho\')]},                   \'id\': \'8de10b49-d6af-4cb7-a4e7-fbadf6efa01e\',                   \'metadata\': {},                   \'name\': \'Docs\',                   \'start_time\': \'2024-01-19T22:33:56.939+00:00\',                   \'streamed_output\': [],                   \'streamed_output_str\': [],                   \'tags\': [\'map:key:context\', \'FAISS\', \'OpenAIEmbeddings\'],                   \'type\': \'retriever\'}}, \'name\': \'RunnableSequence\', \'streamed_output\': [\'\',                     \'H\',                     \'arrison\',                     \' worked\',                     \' at\',                     \' Kens\',                     \'ho\',                     \'.\',                     \'\'], \'type\': \'chain\'})Parallelism\u200bLet’s take a look at how LangChain Expression Language supports parallel',
 3: 'his eight-year presidency. He was, and remains, the only U.S. president never affiliated with a political party.Page: List of presidents of the United States by ageSummary: In this list of presidents of the United States by age, the first table charts the age of each president of the United States at the time of presidential inauguration (first inauguration if elected to multiple and consecutive terms), upon leaving office, and at the time of death. Where the president is still living, their lifespan and post-presidency timespan are calculated up to January 25, 2024.Page: List of vice presidents of the United StatesSummary: There have been 49 vice presidents of the United States since the office was created in 1789. Originally, the vice president was the person who received the second-most votes for president in the Electoral College. But after the election of 1800 produced a tie between Thomas Jefferson and Aaron Burr, requiring the House of Representatives to choose between them, lawmakers acted to prevent such a situation from recurring. The Twelfth Amendment was added to the Constitution in 1804, creating the current system where electors cast a separate ballot for the vice presidency.The vice president is the first person in the presidential line of succession—that is, they assume the presidency if the president dies, resigns, or is impeached and removed from office. Nine vice presidents have ascended to the presidency in this way: eight (John Tyler, Millard Fillmore, Andrew Johnson, Chester A. Arthur, Theodore Roosevelt, Calvin Coolidge, Harry S. Truman, and Lyndon B. Johnson) through the president\'s death and one (Gerald Ford) through the president\'s resignation. The vice president also serves as the president of the Senate and may choose to cast a tie-breaking vote on decisions made by the Senate. Vice presidents have exercised this latter power to varying extents over the years.Before adoption of the Twenty-fifth Amendment in 1967, an intra-term vacancy in the office of the vice president could not be filled until the next post-election inauguration. Several such vacancies occurred: seven vice presidents died, one resigned and eight succeeded to the presidency. This amendment allowed for a vacancy to be filled through appointment by the president and confirmation by both chambers of the Congress. Since its ratification, the vice presidency has been vacant twice (both in the context of scandals surrounding the Nixon administration) and was filled both times through this process, namely in 1973 following Spiro Agnew\'s resignation, and again in 1974 after Gerald Ford succeeded to the presidency. The amendment also established a procedure whereby a vice president may, if the president is unable to discharge the powers and duties of the office, temporarily assume the powers and duties of the office as acting president. Three vice presidents have briefly acted as president under the 25th Amendment: George H. W. Bush on July 13, 1985; Dick Cheney on June 29, 2002, and on July 21, 2007; and Kamala Harris on November 19, 2021.The persons who have served as vice president were born in or primarily affiliated with 27 states plus the District of Columbia. New York has produced the most of any state as eight have been born there and three others considered it their home state. Most vice presidents have been in their 50s or 60s and had political experience before assuming the office. Two vice presidents—George Clinton and John C. Calhoun—served under more than one president. 
Ill with tuberculosis and recovering in Cuba on Inauguration Day in 1853, William R. King, by an Act of Congress, was allowed to take the oath outside the United States. He is the only vice president to take his oath of office in a foreign country.Page: List of presidents of the United States by net worthSummary: The list of presidents of the United States by net worth at peak varies greatly. Debt and depreciation often means that presidents\' net worth is less than $0 at the time of death. Most presidents before 1845 were extremely wealthy, especially Andrew Jackson and George Washington.    Presidents since 1929, when Herbert Hoover took office, have generally been wealthier than presidents of the late nineteenth and early twentieth centuries; with the exception of Harry S. Truman, all presidents since this time have been millionaires. These presidents have often received income from autobiographies and other writing. Except for Franklin D. Roosevelt and John F. Kennedy (both of whom died while in office), all presidents beginning with Calvin Coolidge have written autobiographies. In addition, many presidents—including Bill Clinton—have earned considerable income from public speaking after leaving office.The richest president in history may be Donald Trump. However, his net worth is not precisely known because the Trump Organization is privately held.Truman was among the poorest U.S. presidents, with a net worth considerably less than $1 million. His financial situation contributed to the doubling of the presidential salary to $100,000 in 1949. In addition, the presidential --- pension was created in 1958 when Truman was again experiencing financial difficulties. Harry and Bess Truman received the first Medicare cards in 1966 via the Social Security Act of 1965.Page: List of presidents of the United States by home stateSummary: These lists give the states of primary affiliation and of birth for each president of the United States.Invoking: `Wikipedia` with `Joe Biden`Page: Joe BidenSummary: Joseph Robinette Biden Jr. (  BY-dən; born November 20, 1942) is an American politician who is the 46th and current president of the United States. A member of the Democratic Party, he previously served as the 47th vice president from 2009 to 2017 under President Barack Obama and represented Delaware in the United States Senate from 1973 to 2009.Born in Scranton, Pennsylvania, Biden moved with his family to Delaware in 1953. He graduated from the University of Delaware before earning his law degree from Syracuse University. He was elected to the New Castle County Council in 1970 and to the U.S. Senate in 1972. As a senator, Biden drafted and led the effort to pass the Violent Crime Control and Law Enforcement Act and the Violence Against Women Act. He also oversaw six U.S. Supreme Court confirmation hearings, including the contentious hearings for Robert Bork and Clarence Thomas. Biden ran unsuccessfully for the Democratic presidential nomination in 1988 and 2008. In 2008, Obama chose Biden as his running mate, and he was a close counselor to Obama during his two terms as vice president. In the 2020 presidential election, Biden and his running mate, Kamala Harris, defeated incumbents Donald Trump and Mike Pence. He became the oldest president in U.S. history, and the first to have a female vice president.As president, Biden signed the American Rescue Plan Act in response to the COVID-19 pandemic and subsequent recession. He signed bipartisan bills on infrastructure and manufacturing. 
He proposed the Build Back Better Act, which failed in Congress, but aspects of which were incorporated into the Inflation Reduction Act that he signed into law in 2022. Biden appointed Ketanji Brown Jackson to the Supreme Court. He worked with congressional Republicans to resolve the 2023 United States debt-ceiling crisis by negotiating a deal to raise the debt ceiling. In foreign policy, Biden restored America\'s membership in the Paris Agreement. He oversaw the complete withdrawal of U.S. troops from Afghanistan that ended the war in Afghanistan, during which the Afghan government collapsed and the Taliban seized control. He responded to the Russian invasion of Ukraine by imposing sanctions on Russia and authorizing civilian and military aid to Ukraine. During the Israel–Hamas war, Biden announced military support for Israel, and condemned the actions of Hamas and other Palestinian militants as terrorism. In April 2023, Biden announced his candidacy for the Democratic nomination in the 2024 presidential election.Page: Presidency of Joe BidenSummary: Joe Biden\'s tenure as the 46th president of the United States began with his inauguration on January 20, 2021. Biden, a Democrat from Delaware who previously served as vice president for two terms under president Barack Obama, took office following his victory in the 2020 presidential election over Republican incumbent president Donald Trump. Biden won the presidency with a popular vote of over 81 million, the highest number of votes cast for a single United States presidential candidate. Upon his inauguration, he became the oldest president in American history, breaking the record set by his predecessor Trump. Biden entered office amid the COVID-19 pandemic, an economic crisis, and increased political polarization.On the first day of his presidency, Biden made an effort to revert President Trump\'s energy policy by restoring U.S. participation in the Paris Agreement and revoking the permit for the Keystone XL pipeline. He also halted funding for Trump\'s border wall, an expansion of the Mexican border wall. On his second day, he issued a series of executive orders to reduce the impact of COVID-19, including invoking the Defense Production Act of 1950, and set an early goal of achieving one hundred million COVID-19 vaccinations in the United States in his first 100 days.Biden signed into law the American Rescue Plan Act of 2021; a $1.9 trillion stimulus bill that temporarily established expanded unemployment insurance and sent $1,400 stimulus checks to most Americans in response to continued economic pressure from COVID-19. He signed the bipartisan Infrastructure Investment and Jobs Act; a ten-year plan brokered by Biden alongside Democrats and Republicans in Congress, to invest in American roads, bridges, public transit, ports and broadband access. Biden signed the Juneteenth National Independence Day Act, making Juneteenth a federal holiday in the United States. He appointed Ketanji Brown Jackson to the U.S. Supreme Court—the first Black woman to serve on the court. After The Supreme Court overturned Roe v. Wade, Biden took executive actions, such as the signing of Executive Order 14076, to preserve and protect women\'s health rights --- nationwide, against abortion bans in Republican led states. Biden proposed a significant expansion of the U.S. social safety net through the Build Back Better Act, but those efforts, along with voting rights legislation, failed in Congress. 
However, in August 2022, Biden signed the Inflation Reduction Act of 2022, a domestic appropriations bill that included some of the provisions of the Build Back Better Act after the entire bill failed to pass. It included significant federal investment in climate and domestic clean energy production, tax credits for solar panels, electric cars and other home energy programs as well as a three-year extension of Affordable Care Act subsidies. The administration\'s economic policies, known as "Bidenomics", were inspired and designed by Trickle-up economics. Described as growing the economy from the middle out and bottom up and growing the middle class. Biden signed the CHIPS and Science Act, bolstering the semiconductor and manufacturing industry, the Honoring our PACT Act, expanding health care for US veterans, the Bipartisan Safer Communities Act and the Electoral Count Reform and Presidential Transition Improvement Act. In late 2022, Biden signed the Respect for Marriage Act, which repealed the Defense of Marriage Act and codified same-sex and interracial marriage in the United States. In response to the debt-ceiling crisis of 2023, Biden negotiated and signed the Fiscal Responsibility Act of 2023, which restrains federal spending for fiscal years 2024 and 2025, implements minor changes to SNAP and TANF, includes energy permitting reform, claws back some IRS funding and unspent money for COVID-19, and suspends the debt ceiling to January 1, 2025. Biden established the American Climate Corps and created the first ever White House Office of Gun Violence Prevention. On September 26, 2023, Joe Biden visited a United Auto Workers picket line during the 2023 United Auto Workers strike, making him the first US president to visit one.The foreign policy goal of the Biden administration is to restore the US to a "position of trusted leadership" among global democracies in order to address the challenges posed by Russia and China. In foreign policy, Biden completed the withdrawal of U.S. military forces from Afghanistan, declaring an end to nation-building efforts and shifting U.S. foreign policy toward strategic competition with China and, to a lesser extent, Russia. However, during the withdrawal, the Afghan government collapsed and the Taliban seized control, leading to Biden receiving bipartisan criticism. He responded to the Russian invasion of Ukraine by imposing sanctions on Russia as well as providing Ukraine with over $100 billion in combined military, economic, and humanitarian aid. Biden also approved a raid which led to the death of Abu Ibrahim al-Hashimi al-Qurashi, the leader of the Islamic State, and approved a drone strike which killed Ayman Al Zawahiri, leader of Al-Qaeda. Biden signed and created AUKUS, an international security alliance, together with Australia and the United Kingdom. Biden called for the expansion of NATO with the addition of Finland and Sweden, and rallied NATO allies in support of Ukraine. During the 2023 Israel–Hamas war, Biden condemned Hamas and other Palestinian militants as terrorism and announced American military support for Israel; Biden also showed his support and sympathy towards Palestinians affected by the war, sent humanitarian aid, and brokered a four-day temporary pause and hostage exchange.Page: Family of Joe BidenSummary: Joe Biden, the 46th and current president of the United States, has family members who are prominent in law, education, activism and politics. 
Biden\'s immediate family became the first family of the United States on his inauguration on January 20, 2021. His immediate family circle was also the second family of the United States from 2009 to 2017, when Biden was vice president. Biden\'s family is mostly descended from the British Isles, with most of their ancestors coming from Ireland and England, and a smaller number descending from the French.Of Joe Biden\'s sixteen great-great-grandparents, ten were born in Ireland. He is descended from the Blewitts of County Mayo and the Finnegans of County Louth. One of Biden\'s great-great-great-grandfathers was born in Sussex, England, and emigrated to Maryland in the United States by 1820.Page: Inauguration of Joe BidenSummary: The inauguration of Joe Biden as the 46th president of the United States took place on Wednesday, January 20, 2021, marking the start of the four-year term of Joe Biden as president and Kamala Harris as vice president. The 59th presidential inauguration took place on the West Front of the United States Capitol in Washington, D.C. Biden took the presidential oath of office, before which Harris took the vice presidential oath of office.The inauguration took place amidst extraordinary political, public health, economic, and national security crises, including the ongoing COVID-19 pandemic; outgoing President Donald --- Trump\'s attempts to overturn the 2020 United States presidential election, which provoked an attack on the United States Capitol on January 6; Trump\'Invoking: `Wikipedia` with `Delaware`Page: DelawareSummary: Delaware (  DEL-ə-wair) is a state in the northeast and Mid-Atlantic regions of the United States. It borders Maryland to its south and west, Pennsylvania to its north, New Jersey to its northeast, and the Atlantic Ocean to its east. The state\'s name derives from the adjacent Delaware Bay, which in turn was named after Thomas West, 3rd Baron De La Warr, an English nobleman and the Colony of Virginia\'s first colonial-era governor.Delaware occupies the northeastern portion of the Delmarva Peninsula, and some islands and territory within the Delaware River. It is the 2nd smallest and 6th least populous state, but also the 6th most densely populated. Delaware\'s most populous city is Wilmington, and the state\'s capital is Dover, the 2nd most populous city in Delaware. The state is divided into three counties, the fewest number of counties of any of the 50 U.S. states; from north to south, the three counties are: New Castle County, Kent County, and Sussex County.The southern two counties, Kent and Sussex counties, historically have been predominantly agrarian economies. New Castle is more urbanized and is considered part of the Delaware Valley metropolitan statistical area that surrounds and includes Philadelphia, the nation\'s 6th most populous city. Delaware is considered part of the Southern United States by the U.S. Census Bureau, but the state\'s geography, culture, and history are a hybrid of the Mid-Atlantic and Northeastern regions of the country.Before Delaware coastline was explored and developed by Europeans in the 16th century, the state was inhabited by several Native Americans tribes, including the Lenape in the north and Nanticoke in the south. 
The state was first colonized by Dutch traders at Zwaanendael, near present-day Lewes, Delaware, in 1631.Delaware was one of the Thirteen Colonies that participated in the American Revolution and American Revolutionary War, in which the American Continental Army, led by George Washington, defeated the British, ended British colonization and establishing the United States as a sovereign and independent nation.On December 7, 1787, Delaware was the first state to ratify the Constitution of the United States, earning it the nickname "The First State".Since the turn of the 20th century, Delaware has become an onshore corporate haven whose corporate laws are deemed appealing to corporations; over half of all New York Stock Exchange-listed corporations and over three-fifths of the Fortune 500 is legally incorporated in the state.Page: Delaware City, DelawareSummary: Delaware City is a city in New Castle County, Delaware, United States. The population was 1,885 as of 2020. It is a small port town on the eastern terminus of the Chesapeake and Delaware Canal and is the location of the Forts Ferry Crossing to Fort Delaware on Pea Patch Island.Page: Delaware RiverSummary: The Delaware River is a major river in the Mid-Atlantic region of the United States and is the longest free-flowing (undammed) river in the Eastern United States. From the meeting of its branches in Hancock, New York, the river flows for 282 miles (454 km) along the borders of New York, Pennsylvania, New Jersey, and Delaware, before emptying into Delaware Bay.The river has been recognized by the National Wildlife Federation as one of the country\'s Great Waters and has been called the "Lifeblood of the Northeast" by American Rivers. Its watershed drains an area of 13,539 square miles (35,070 km2) and provides drinking water for 17 million people, including half of New York City via the Delaware Aqueduct.The Delaware River has two branches that rise in the Catskill Mountains of New York: the West Branch at Mount Jefferson in Jefferson, Schoharie County, and the East Branch at Grand Gorge, Delaware County. The branches merge to form the main Delaware River at Hancock, New York. Flowing south, the river remains relatively undeveloped, with 152 miles (245 km) protected as the Upper, Middle, and Lower Delaware National Scenic Rivers. At Trenton, New Jersey, the Delaware becomes tidal, navigable, and significantly more industrial. This section forms the backbone of the Delaware Valley metropolitan area, serving the port cities of Philadelphia, Camden, New Jersey, and Wilmington, Delaware. The river flows into Delaware Bay at Liston Point, 48 miles (77 km) upstream of the bay\'s outlet to the Atlantic Ocean between Cape May and Cape Henlopen.Before the arrival of European settlers, the river was the homeland of the Lenape native people. They called the river Lenapewihittuk, or Lenape --- River, and Kithanne, meaning the largest river in this part of the country.In 1609, the river was visited by a Dutch East India Company expedition led by Henry Hudson. Hudson, an English navigator, was hired to find a western route to Cathay (China), but his encounters set the stage for Dutch colonization of North America in the 17th century. Early Dutch and Swedish settlements were established along the lower section of the river and Delaware Bay. Both colonial powers called the river the South River (Zuidrivier), compared to the Hudson River, which was known as the North River. 
After the English expelled the Dutch and took control of the New Netherland colony in 1664, the river was renamed Delaware after Sir Thomas West, 3rd Baron De La Warr, an English nobleman and the Virginia colony\'s first royal governor, who defended the colony during the First Anglo-Powhatan War.Page: University of DelawareSummary: The University of Delaware (colloquially known as UD or Delaware) is a privately governed, state-assisted land-grant research university located in Newark, Delaware. UD is the largest university in Delaware. It offers three associate\'s programs, 148 bachelor\'s programs, 121 master\'s programs (with 13 joint degrees), and 55 doctoral programs across its eight colleges. The main campus is in Newark, with satellite campuses in Dover, Wilmington, Lewes, and Georgetown. It is considered a large institution with approximately 18,200 undergraduate and 4,200 graduate students. It is a privately governed university which receives public funding for being a land-grant, sea-grant, and space-grant state-supported research institution.UD is classified among "R1: Doctoral Universities – Very high research activity". According to the National Science Foundation, UD spent $186 million on research and development in 2018, ranking it 119th in the nation.  It is recognized with the Community Engagement Classification by the Carnegie Foundation for the Advancement of Teaching.UD students, alumni, and sports teams are known as the "Fightin\' Blue Hens", more commonly shortened to "Blue Hens", and the school colors are Delaware blue and gold. UD sponsors 21 men\'s and women\'s NCAA Division-I sports teams and have competed in the Colonial Athletic Association (CAA) since 2001.Page: LenapeSummary: The Lenape (English: , , ; Lenape languages: [lənaːpe]), also called the Lenni Lenape and Delaware people, are an Indigenous people of the Northeastern Woodlands, who live in the United States and Canada.The Lenape\'s historical territory includes present-day northeastern Delaware, all of New Jersey, the eastern Pennsylvania regions of the Lehigh Valley and Northeastern Pennsylvania, and New York Bay, western Long Island, and the lower Hudson Valley in New York state. Today they are based in Oklahoma, Wisconsin, and Ontario.During the last decades of the 18th century, European settlers and the effects of the American Revolutionary War displaced most Lenape from their homelands and pushed them north and west. In the 1860s, under the Indian removal policy, the U.S. federal government relocated most Lenape remaining in the Eastern United States to the Indian Territory and surrounding regions. Lenape people currently belong to the Delaware Nation and Delaware Tribe of Indians in Oklahoma, the Stockbridge–Munsee Community in Wisconsin, and the Munsee-Delaware Nation, Moravian of the Thames First Nation, and Delaware of Six Nations in Ontario.BadRequestError: Error code: 400 - {\'error\': {\'message\': "This model\'s maximum context length is 4097 tokens. However, your messages resulted in 5487 tokens (5419 in the messages, 68 in the functions). 
Please reduce the length of the messages or functions.", \'type\': \'invalid_request_error\', \'param\': \'messages\', \'code\': \'context_length_exceeded\'}}LangSmith --- output message) in the chat history.def condense_prompt(prompt: ChatPromptValue) -> ChatPromptValue:    messages = prompt.to_messages()    num_tokens = llm.get_num_tokens_from_messages(messages)    ai_function_messages = messages[2:]    while num_tokens > 4_000:        ai_function_messages = ai_function_messages[2:]        num_tokens = llm.get_num_tokens_from_messages(            messages[:2] + ai_function_messages        )    messages = messages[:2] + ai_function_messages    return ChatPromptValue(messages=messages)agent = (    {        "input": itemgetter("input"),        "agent_scratchpad": lambda x: format_to_openai_function_messages(            x["intermediate_steps"]        ),    }    | prompt    | condense_prompt    | llm.bind_functions(tools)    | OpenAIFunctionsAgentOutputParser())agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)agent_executor.invoke(    {        "input": "Who is the current US president? What\'s their home state? What\'s their home state\'s bird? What\'s that bird\'s scientific name?"    })> Entering new AgentExecutor chain...Invoking: `Wikipedia` with `List of presidents of the United States`Page: List of presidents of the United StatesSummary: The president of the United States is the head of state and head of government of the United States, indirectly elected to a four-year term via the Electoral College. The officeholder leads the executive branch of the federal government and is the commander-in-chief of the United States Armed Forces. Since the office was established in 1789, 45 men have served in 46 presidencies. The first president, George Washington, won a unanimous vote of the Electoral College. Grover Cleveland served two non-consecutive terms and is therefore counted as the 22nd and 24th president of the United States, giving rise to the discrepancy between the number of presidencies and the number of individuals who have served as president. The incumbent president is Joe Biden.The presidency of William Henry Harrison, who died 31 days after taking office in 1841, was the shortest in American history. Franklin D. Roosevelt served the longest, over twelve years, before dying early in his fourth term in 1945. He is the only U.S. president to have served more than two terms. Since the ratification of the Twenty-second Amendment to the United States Constitution in 1951, no person may be elected president more than twice, and no one who has served more than two years of a term to which someone else was elected may be elected more than once.Four presidents died in office of natural causes (William Henry Harrison, Zachary Taylor, Warren G. Harding, and Franklin D. Roosevelt), four were assassinated (Abraham Lincoln, James A. Garfield, William McKinley, and John F. Kennedy), and one resigned (Richard Nixon, facing impeachment and removal from office). John Tyler was the first vice president to assume the presidency during a presidential term, and set the precedent that a vice president who does so becomes the fully functioning president with his presidency.Throughout most of its history, American politics has been dominated by political parties. The Constitution is silent on the issue of political parties, and at the time it came into force in 1789, no organized parties existed. 
Soon after the 1st Congress convened, political factions began rallying around dominant Washington administration officials, such as Alexander Hamilton and Thomas Jefferson. Concerned about the capacity of political parties to destroy the fragile unity holding the nation together, Washington remained unaffiliated with any political faction or party throughout his eight-year presidency. He was, and remains, the only U.S. president never affiliated with a political party.Page: List of presidents of the United States by ageSummary: In this list of presidents of the United States by age, the first table charts the age of each president of the United States at the time of presidential inauguration (first inauguration if elected to multiple and consecutive --- terms), upon leaving office, and at the time of death. Where the president is still living, their lifespan and post-presidency timespan are calculated up to January 25, 2024.Page: List of vice presidents of the United StatesSummary: There have been 49 vice presidents of the United States since the office was created in 1789. Originally, the vice president was the person who received the second-most votes for president in the Electoral College. But after the election of 1800 produced a tie between Thomas Jefferson and Aaron Burr, requiring the House of Representatives to choose between them, lawmakers acted to prevent such a situation from recurring. The Twelfth Amendment was added to the Constitution in 1804, creating the current system where electors cast a separate ballot for the vice presidency.The vice president is the first person in the presidential line of succession—that is, they assume the presidency if the president dies, resigns, or is impeached and removed from office. Nine vice presidents have ascended to the presidency in this way: eight (John Tyler, Millard Fillmore, Andrew Johnson, Chester A. Arthur, Theodore Roosevelt, Calvin Coolidge, Harry S. Truman, and Lyndon B. Johnson) through the president\'s death and one (Gerald Ford) through the president\'s resignation. The vice president also serves as the president of the Senate and may choose to cast a tie-breaking vote on decisions made by the Senate. Vice presidents have exercised this latter power to varying extents over the years.Before adoption of the Twenty-fifth Amendment in 1967, an intra-term vacancy in the office of the vice president could not be filled until the next post-election inauguration. Several such vacancies occurred: seven vice presidents died, one resigned and eight succeeded to the presidency. This amendment allowed for a vacancy to be filled through appointment by the president and confirmation by both chambers of the Congress. Since its ratification, the vice presidency has been vacant twice (both in the context of scandals surrounding the Nixon administration) and was filled both times through this process, namely in 1973 following Spiro Agnew\'s resignation, and again in 1974 after Gerald Ford succeeded to the presidency. The amendment also established a procedure whereby a vice president may, if the president is unable to discharge the powers and duties of the office, temporarily assume the powers and duties of the office as acting president. Three vice presidents have briefly acted as president under the 25th Amendment: George H. W. 
Bush on July 13, 1985; Dick Cheney on June 29, 2002, and on July 21, 2007; and Kamala Harris on November 19, 2021.The persons who have served as vice president were born in or primarily affiliated with 27 states plus the District of Columbia. New York has produced the most of any state as eight have been born there and three others considered it their home state. Most vice presidents have been in their 50s or 60s and had political experience before assuming the office. Two vice presidents—George Clinton and John C. Calhoun—served under more than one president. Ill with tuberculosis and recovering in Cuba on Inauguration Day in 1853, William R. King, by an Act of Congress, was allowed to take the oath outside the United States. He is the only vice president to take his oath of office in a foreign country.Page: List of presidents of the United States by net worthSummary: The list of presidents of the United States by net worth at peak varies greatly. Debt and depreciation often means that presidents\' net worth is less than $0 at the time of death. Most presidents before 1845 were extremely wealthy, especially Andrew Jackson and George Washington.    Presidents since 1929, when Herbert Hoover took office, have generally been wealthier than presidents of the late nineteenth and early twentieth centuries; with the exception of Harry S. Truman, all presidents since this time have been millionaires. These presidents have often received income from autobiographies and other writing. Except for Franklin D. Roosevelt and John F. Kennedy (both of whom died while in office), all presidents beginning with Calvin Coolidge have written autobiographies. In addition, many presidents—including Bill Clinton—have earned considerable income from public speaking after leaving office.The richest president in history may be Donald Trump. However, his net worth is not precisely known because the Trump Organization is privately held.Truman was among the poorest U.S. presidents, with a net worth considerably less than $1 million. His financial situation contributed to the doubling of the presidential salary to $100,000 in 1949. In addition, the presidential pension was created in 1958 when Truman was again experiencing financial difficulties. Harry and Bess Truman received the first Medicare cards in 1966 via the Social Security Act of 1965.Page: List of presidents of the United States by home stateSummary: These lists give the states of primary affiliation and of birth for each president of the United States.Invoking: `Wikipedia` with `Joe Biden`Page: --- Joe BidenSummary: Joseph Robinette Biden Jr. (  BY-dən; born November 20, 1942) is an American politician who is the 46th and current president of the United States. A member of the Democratic Party, he previously served as the 47th vice president from 2009 to 2017 under President Barack Obama and represented Delaware in the United States Senate from 1973 to 2009.Born in Scranton, Pennsylvania, Biden moved with his family to Delaware in 1953. He graduated from the University of Delaware before earning his law degree from Syracuse University. He was elected to the New Castle County Council in 1970 and to the U.S. Senate in 1972. As a senator, Biden drafted and led the effort to pass the Violent Crime Control and Law Enforcement Act and the Violence Against Women Act. He also oversaw six U.S. Supreme Court confirmation hearings, including the contentious hearings for Robert Bork and Clarence Thomas. 
Biden ran unsuccessfully for the Democratic presidential nomination in 1988 and 2008. In 2008, Obama chose Biden as his running mate, and he was a close counselor to Obama during his two terms as vice president. In the 2020 presidential election, Biden and his running mate, Kamala Harris, defeated incumbents Donald Trump and Mike Pence. He became the oldest president in U.S. history, and the first to have a female vice president.As president, Biden signed the American Rescue Plan Act in response to the COVID-19 pandemic and subsequent recession. He signed bipartisan bills on infrastructure and manufacturing. He proposed the Build Back Better Act, which failed in Congress, but aspects of which were incorporated into the Inflation Reduction Act that he signed into law in 2022. Biden appointed Ketanji Brown Jackson to the Supreme Court. He worked with congressional Republicans to resolve the 2023 United States debt-ceiling crisis by negotiating a deal to raise the debt ceiling. In foreign policy, Biden restored America\'s membership in the Paris Agreement. He oversaw the complete withdrawal of U.S. troops from Afghanistan that ended the war in Afghanistan, during which the Afghan government collapsed and the Taliban seized control. He responded to the Russian invasion of Ukraine by imposing sanctions on Russia and authorizing civilian and military aid to Ukraine. During the Israel–Hamas war, Biden announced military support for Israel, and condemned the actions of Hamas and other Palestinian militants as terrorism. In April 2023, Biden announced his candidacy for the Democratic nomination in the 2024 presidential election.Page: Presidency of Joe BidenSummary: Joe Biden\'s tenure as the 46th president of the United States began with his inauguration on January 20, 2021. Biden, a Democrat from Delaware who previously served as vice president for two terms under president Barack Obama, took office following his victory in the 2020 presidential election over Republican incumbent president Donald Trump. Biden won the presidency with a popular vote of over 81 million, the highest number of votes cast for a single United States presidential candidate. Upon his inauguration, he became the oldest president in American history, breaking the record set by his predecessor Trump. Biden entered office amid the COVID-19 pandemic, an economic crisis, and increased political polarization.On the first day of his presidency, Biden made an effort to revert President Trump\'s energy policy by restoring U.S. participation in the Paris Agreement and revoking the permit for the Keystone XL pipeline. He also halted funding for Trump\'s border wall, an expansion of the Mexican border wall. On his second day, he issued a series of executive orders to reduce the impact of COVID-19, including invoking the Defense Production Act of 1950, and set an early goal of achieving one hundred million COVID-19 vaccinations in the United States in his first 100 days.Biden signed into law the American Rescue Plan Act of 2021; a $1.9 trillion stimulus bill that temporarily established expanded unemployment insurance and sent $1,400 stimulus checks to most Americans in response to continued economic pressure from COVID-19. He signed the bipartisan Infrastructure Investment and Jobs Act; a ten-year plan brokered by Biden alongside Democrats and Republicans in Congress, to invest in American roads, bridges, public transit, ports and broadband access. 
Biden signed the Juneteenth National Independence Day Act, making Juneteenth a federal holiday in the United States. He appointed Ketanji Brown Jackson to the U.S. Supreme Court—the first Black woman to serve on the court. After The Supreme Court overturned Roe v. Wade, Biden took executive actions, such as the signing of Executive Order 14076, to preserve and protect women\'s health rights nationwide, against abortion bans in Republican led states. Biden proposed a significant expansion of the U.S. social safety net through the Build Back Better Act, but those efforts, along with voting rights legislation, failed in Congress. However, in August 2022, Biden signed the Inflation Reduction Act of 2022, a domestic appropriations bill that included some of the provisions of the Build Back Better Act after the entire --- bill failed to pass. It included significant federal investment in climate and domestic clean energy production, tax credits for solar panels, electric cars and other home energy programs as well as a three-year extension of Affordable Care Act subsidies. The administration\'s economic policies, known as "Bidenomics", were inspired and designed by Trickle-up economics. Described as growing the economy from the middle out and bottom up and growing the middle class. Biden signed the CHIPS and Science Act, bolstering the semiconductor and manufacturing industry, the Honoring our PACT Act, expanding health care for US veterans, the Bipartisan Safer Communities Act and the Electoral Count Reform and Presidential Transition Improvement Act. In late 2022, Biden signed the Respect for Marriage Act, which repealed the Defense of Marriage Act and codified same-sex and interracial marriage in the United States. In response to the debt-ceiling crisis of 2023, Biden negotiated and signed the Fiscal Responsibility Act of 2023, which restrains federal spending for fiscal years 2024 and 2025, implements minor changes to SNAP and TANF, includes energy permitting reform, claws back some IRS funding and unspent money for COVID-19, and suspends the debt ceiling to January 1, 2025. Biden established the American Climate Corps and created the first ever White House Office of Gun Violence Prevention. On September 26, 2023, Joe Biden visited a United Auto Workers picket line during the 2023 United Auto Workers strike, making him the first US president to visit one.The foreign policy goal of the Biden administration is to restore the US to a "position of trusted leadership" among global democracies in order to address the challenges posed by Russia and China. In foreign policy, Biden completed the withdrawal of U.S. military forces from Afghanistan, declaring an end to nation-building efforts and shifting U.S. foreign policy toward strategic competition with China and, to a lesser extent, Russia. However, during the withdrawal, the Afghan government collapsed and the Taliban seized control, leading to Biden receiving bipartisan criticism. He responded to the Russian invasion of Ukraine by imposing sanctions on Russia as well as providing Ukraine with over $100 billion in combined military, economic, and humanitarian aid. Biden also approved a raid which led to the death of Abu Ibrahim al-Hashimi al-Qurashi, the leader of the Islamic State, and approved a drone strike which killed Ayman Al Zawahiri, leader of Al-Qaeda. Biden signed and created AUKUS, an international security alliance, together with Australia and the United Kingdom. 
Biden called for the expansion of NATO with the addition of Finland and Sweden, and rallied NATO allies in support of Ukraine. During the 2023 Israel–Hamas war, Biden condemned Hamas and other Palestinian militants as terrorism and announced American military support for Israel; Biden also showed his support and sympathy towards Palestinians affected by the war, sent humanitarian aid, and brokered a four-day temporary pause and hostage exchange.Page: Family of Joe BidenSummary: Joe Biden, the 46th and current president of the United States, has family members who are prominent in law, education, activism and politics. Biden\'s immediate family became the first family of the United States on his inauguration on January 20, 2021. His immediate family circle was also the second family of the United States from 2009 to 2017, when Biden was vice president. Biden\'s family is mostly descended from the British Isles, with most of their ancestors coming from Ireland and England, and a smaller number descending from the French.Of Joe Biden\'s sixteen great-great-grandparents, ten were born in Ireland. He is descended from the Blewitts of County Mayo and the Finnegans of County Louth. One of Biden\'s great-great-great-grandfathers was born in Sussex, England, and emigrated to Maryland in the United States by 1820.Page: Inauguration of Joe BidenSummary: The inauguration of Joe Biden as the 46th president of the United States took place on Wednesday, January 20, 2021, marking the start of the four-year term of Joe Biden as president and Kamala Harris as vice president. The 59th presidential inauguration took place on the West Front of the United States Capitol in Washington, D.C. Biden took the presidential oath of office, before which Harris took the vice presidential oath of office.The inauguration took place amidst extraordinary political, public health, economic, and national security crises, including the ongoing COVID-19 pandemic; outgoing President Donald Trump\'s attempts to overturn the 2020 United States presidential election, which provoked an attack on the United States Capitol on January 6; Trump\'Invoking: `Wikipedia` with `Delaware`Page: DelawareSummary: Delaware (  DEL-ə-wair) is a state in the northeast and Mid-Atlantic regions of the United States. It borders Maryland to its south and west, Pennsylvania to --- its north, New Jersey to its northeast, and the Atlantic Ocean to its east. The state\'s name derives from the adjacent Delaware Bay, which in turn was named after Thomas West, 3rd Baron De La Warr, an English nobleman and the Colony of Virginia\'s first colonial-era governor.Delaware occupies the northeastern portion of the Delmarva Peninsula, and some islands and territory within the Delaware River. It is the 2nd smallest and 6th least populous state, but also the 6th most densely populated. Delaware\'s most populous city is Wilmington, and the state\'s capital is Dover, the 2nd most populous city in Delaware. The state is divided into three counties, the fewest number of counties of any of the 50 U.S. states; from north to south, the three counties are: New Castle County, Kent County, and Sussex County.The southern two counties, Kent and Sussex counties, historically have been predominantly agrarian economies. New Castle is more urbanized and is considered part of the Delaware Valley metropolitan statistical area that surrounds and includes Philadelphia, the nation\'s 6th most populous city. Delaware is considered part of the Southern United States by the U.S. 
Census Bureau, but the state\'s geography, culture, and history are a hybrid of the Mid-Atlantic and Northeastern regions of the country.Before Delaware coastline was explored and developed by Europeans in the 16th century, the state was inhabited by several Native Americans tribes, including the Lenape in the north and Nanticoke in the south. The state was first colonized by Dutch traders at Zwaanendael, near present-day Lewes, Delaware, in 1631.Delaware was one of the Thirteen Colonies that participated in the American Revolution and American Revolutionary War, in which the American Continental Army, led by George Washington, defeated the British, ended British colonization and establishing the United States as a sovereign and independent nation.On December 7, 1787, Delaware was the first state to ratify the Constitution of the United States, earning it the nickname "The First State".Since the turn of the 20th century, Delaware has become an onshore corporate haven whose corporate laws are deemed appealing to corporations; over half of all New York Stock Exchange-listed corporations and over three-fifths of the Fortune 500 is legally incorporated in the state.Page: Delaware City, DelawareSummary: Delaware City is a city in New Castle County, Delaware, United States. The population was 1,885 as of 2020. It is a small port town on the eastern terminus of the Chesapeake and Delaware Canal and is the location of the Forts Ferry Crossing to Fort Delaware on Pea Patch Island.Page: Delaware RiverSummary: The Delaware River is a major river in the Mid-Atlantic region of the United States and is the longest free-flowing (undammed) river in the Eastern United States. From the meeting of its branches in Hancock, New York, the river flows for 282 miles (454 km) along the borders of New York, Pennsylvania, New Jersey, and Delaware, before emptying into Delaware Bay.The river has been recognized by the National Wildlife Federation as one of the country\'s Great Waters and has been called the "Lifeblood of the Northeast" by American Rivers. Its watershed drains an area of 13,539 square miles (35,070 km2) and provides drinking water for 17 million people, including half of New York City via the Delaware Aqueduct.The Delaware River has two branches that rise in the Catskill Mountains of New York: the West Branch at Mount Jefferson in Jefferson, Schoharie County, and the East Branch at Grand Gorge, Delaware County. The branches merge to form the main Delaware River at Hancock, New York. Flowing south, the river remains relatively undeveloped, with 152 miles (245 km) protected as the Upper, Middle, and Lower Delaware National Scenic Rivers. At Trenton, New Jersey, the Delaware becomes tidal, navigable, and significantly more industrial. This section forms the backbone of the Delaware Valley metropolitan area, serving the port cities of Philadelphia, Camden, New Jersey, and Wilmington, Delaware. The river flows into Delaware Bay at Liston Point, 48 miles (77 km) upstream of the bay\'s outlet to the Atlantic Ocean between Cape May and Cape Henlopen.Before the arrival of European settlers, the river was the homeland of the Lenape native people. They called the river Lenapewihittuk, or Lenape River, and Kithanne, meaning the largest river in this part of the country.In 1609, the river was visited by a Dutch East India Company expedition led by Henry Hudson. 
Hudson, an English navigator, was hired to find a western route to Cathay (China), but his encounters set the stage for Dutch colonization of North America in the 17th century. Early Dutch and Swedish settlements were --- established along the lower section of the river and Delaware Bay. Both colonial powers called the river the South River (Zuidrivier), compared to the Hudson River, which was known as the North River. After the English expelled the Dutch and took control of the New Netherland colony in 1664, the river was renamed Delaware after Sir Thomas West, 3rd Baron De La Warr, an English nobleman and the Virginia colony\'s first royal governor, who defended the colony during the First Anglo-Powhatan War.Page: University of DelawareSummary: The University of Delaware (colloquially known as UD or Delaware) is a privately governed, state-assisted land-grant research university located in Newark, Delaware. UD is the largest university in Delaware. It offers three associate\'s programs, 148 bachelor\'s programs, 121 master\'s programs (with 13 joint degrees), and 55 doctoral programs across its eight colleges. The main campus is in Newark, with satellite campuses in Dover, Wilmington, Lewes, and Georgetown. It is considered a large institution with approximately 18,200 undergraduate and 4,200 graduate students. It is a privately governed university which receives public funding for being a land-grant, sea-grant, and space-grant state-supported research institution.UD is classified among "R1: Doctoral Universities – Very high research activity". According to the National Science Foundation, UD spent $186 million on research and development in 2018, ranking it 119th in the nation.  It is recognized with the Community Engagement Classification by the Carnegie Foundation for the Advancement of Teaching.UD students, alumni, and sports teams are known as the "Fightin\' Blue Hens", more commonly shortened to "Blue Hens", and the school colors are Delaware blue and gold. UD sponsors 21 men\'s and women\'s NCAA Division-I sports teams and have competed in the Colonial Athletic Association (CAA) since 2001.Page: LenapeSummary: The Lenape (English: , , ; Lenape languages: [lənaːpe]), also called the Lenni Lenape and Delaware people, are an Indigenous people of the Northeastern Woodlands, who live in the United States and Canada.The Lenape\'s historical territory includes present-day northeastern Delaware, all of New Jersey, the eastern Pennsylvania regions of the Lehigh Valley and Northeastern Pennsylvania, and New York Bay, western Long Island, and the lower Hudson Valley in New York state. Today they are based in Oklahoma, Wisconsin, and Ontario.During the last decades of the 18th century, European settlers and the effects of the American Revolutionary War displaced most Lenape from their homelands and pushed them north and west. In the 1860s, under the Indian removal policy, the U.S. federal government relocated most Lenape remaining in the Eastern United States to the Indian Territory and surrounding regions. Lenape people currently belong to the Delaware Nation and Delaware Tribe of Indians in Oklahoma, the Stockbridge–Munsee Community in Wisconsin, and the Munsee-Delaware Nation, Moravian of the Thames First Nation, and Delaware of Six Nations in Ontario.Invoking: `Wikipedia` with `Blue hen chicken`Page: Delaware Blue HenSummary: The Delaware Blue Hen or Blue Hen of Delaware is a blue strain of American gamecock. 
Under the name Blue Hen Chicken it is the official bird of the State of Delaware. It is the emblem or mascot of several institutions in the state, among them the sports teams of the University of Delaware.Page: Delaware Fightin\' Blue HensSummary: The Delaware Fightin\' Blue Hens are the athletic teams of the University of Delaware (UD) of Newark, Delaware, in the United States. The Blue Hens compete in the Football Championship Subdivision (FCS) of Division I of the National Collegiate Athletic Association (NCAA) as members of the Coastal Athletic Association and its technically separate football league, CAA Football.On November 28, 2023, UD and Conference USA (CUSA) jointly announced that UD would start a transition to the Division I Football Bowl Subdivision (FBS) in 2024 and join CUSA in 2025. UD will continue to compete in both sides of the CAA in 2024–25; it will be ineligible for the FCS playoffs due to NCAA rules for transitioning programs, but will be eligible for all non-football CAA championships. Upon joining CUSA, UD will be eligible for all conference championship events except the football championship game; it will become eligible for that event upon completing the FBS transition in 2026. At the same time, UD also announced it would add one women\'s sport due to Title IX considerations, and would also be seeking conference homes for the seven sports that UD sponsors but CUSA does not. The new women\'s sport would later be announced as ice hockey; UD will join College Hockey America for its first season --- of varsity play in 2025–26.Page: Brahma chickenSummary: The Brahma is an American breed of chicken. It was bred in the United States from birds imported from the Chinese port of Shanghai,:\u200a78\u200a and was the principal American meat breed from the 1850s until about 1930.Page: SilkieSummary: The Silkie (also known as the Silky or Chinese silk chicken) is a breed of chicken named for its atypically fluffy plumage, which is said to feel like silk and satin. The breed has several other unusual qualities, such as black skin and bones, blue earlobes, and five toes on each foot, whereas most chickens have only four. They are often exhibited in poultry shows, and also appear in various colors. In addition to their distinctive physical characteristics, Silkies are well known for their calm and friendly temperament. It is among the most docile of poultry. Hens are also exceptionally broody, and care for young well. Although they are fair layers themselves, laying only about three eggs a week, they are commonly used to hatch eggs from other breeds and bird species due to their broody nature. Silkie chickens have been bred to have a wide variety of colors which include but are not limited to: Black, Blue, Buff, Partridge, Splash, White, Lavender, Paint and Porcelain.Page: Silverudd BlueSummary: The Silverudd Blue, Swedish: Silverudds Blå, is a Swedish breed of chicken. It was developed by Martin Silverudd in Småland, in southern Sweden. Hens lay blue/green eggs, weighing 50–65 grams. The flock-book for the breed is kept by the Svenska Kulturhönsföreningen – the Swedish Cultural Hen Association. It was initially known by various names including Isbar, Blue Isbar and Svensk Grönvärpare, or "Swedish green egg layer"; in 2016 it was renamed to \'Silverudd Blue\' after its creator.The current US president is Joe Biden. His home state is Delaware. The home state bird of Delaware is the Delaware Blue Hen. 
The scientific name of the Delaware Blue Hen is Gallus gallus domesticus.> Finished chain.{\'input\': "Who is the current US president? What\'s their home state? What\'s their home state\'s bird? What\'s that bird\'s scientific name?", \'output\': \'The current US president is Joe Biden. His home state is Delaware. The home state bird of Delaware is the Delaware Blue Hen. The scientific name of the Delaware Blue Hen is Gallus gallus domesticus.\'}LangSmith'}


Define helper functions for each RAPTOR step


from typing import Dict, List, Optional, Tuple
import numpy as np
import pandas as pd
import umap
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from sklearn.mixture import GaussianMixture
RANDOM_SEED = 224  # Fixed seed for reproducibility
### --- Code from citations referenced above (added comments and docstrings) --- ###

def global_cluster_embeddings(
    embeddings: np.ndarray,
    dim: int,
    n_neighbors: Optional[int] = None,
    metric: str = "cosine",
) -> np.ndarray:
    """
    Perform global dimensionality reduction on the embeddings using UMAP.
    Parameters:
    - embeddings: The input embeddings as a numpy array.
    - dim: The target dimensionality for the reduced space.
    - n_neighbors: Optional; the number of neighbors to consider for each point.
                   If not provided, it defaults to the square root of the number of embeddings.
    - metric: The distance metric to use for UMAP.
    Returns:
    - A numpy array of the embeddings reduced to the specified dimensionality.
    """
    if n_neighbors is None:
        n_neighbors = int((len(embeddings) - 1) ** 0.5)
    return umap.UMAP(
        n_neighbors=n_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)

def local_cluster_embeddings(
    embeddings: np.ndarray, dim: int, num_neighbors: int = 10, metric: str = "cosine"
) -> np.ndarray:
    """
    Perform local dimensionality reduction on the embeddings using UMAP, typically after global clustering.
    Parameters:
    - embeddings: The input embeddings as a numpy array.
    - dim: The target dimensionality for the reduced space.
    - num_neighbors: The number of neighbors to consider for each point.
    - metric: The distance metric to use for UMAP.
    Returns:
    - A numpy array of the embeddings reduced to the specified dimensionality.
    """
    return umap.UMAP(
        n_neighbors=num_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)

def get_optimal_clusters(
    embeddings: np.ndarray, max_clusters: int = 50, random_state: int = RANDOM_SEED
) -> int:
    """
    Determine the optimal number of clusters using the Bayesian Information Criterion (BIC) with a Gaussian Mixture Model.
    Parameters:
    - embeddings: The input embeddings as a numpy array.
    - max_clusters: The maximum number of clusters to consider.
    - random_state: Seed for reproducibility.
    Returns:
    - An integer representing the optimal number of clusters found.
    """
    max_clusters = min(max_clusters, len(embeddings))
    n_clusters = np.arange(1, max_clusters)
    bics = []
    for n in n_clusters:
        gm = GaussianMixture(n_components=n, random_state=random_state)
        gm.fit(embeddings)
        bics.append(gm.bic(embeddings))
    return n_clusters[np.argmin(bics)]

def GMM_cluster(embeddings: np.ndarray, threshold: float, random_state: int = 0):
    """
    Cluster embeddings using a Gaussian Mixture Model (GMM) based on a probability threshold.
    Parameters:
    - embeddings: The input embeddings as a numpy array.
    - threshold: The probability threshold for assigning an embedding to a cluster.
    - random_state: Seed for reproducibility.
    Returns:
    - A tuple containing the cluster labels and the number of clusters determined.
    """
    n_clusters = get_optimal_clusters(embeddings)
    gm = GaussianMixture(n_components=n_clusters, random_state=random_state)
    gm.fit(embeddings)
    probs = gm.predict_proba(embeddings)
    labels = [np.where(prob > threshold)[0] for prob in probs]
    return labels, n_clusters

def perform_clustering(
    embeddings: np.ndarray,
    dim: int,
    threshold: float,
) -> List[np.ndarray]:
    """
    Perform clustering on the embeddings by first reducing their dimensionality globally, then clustering
    using a Gaussian Mixture Model, and finally performing local clustering within each global cluster.
    Parameters:
    - embeddings: The input embeddings as a numpy array.
    - dim: The target dimensionality for UMAP reduction.
    - threshold: The probability threshold for assigning an embedding to a cluster in GMM.
    Returns:
    - A list of numpy arrays, where each array contains the cluster IDs for each embedding.
    """
    if len(embeddings) <= dim + 1:
        # Avoid clustering when there's insufficient data
        return [np.array([0]) for _ in range(len(embeddings))]
    # Global dimensionality reduction
    reduced_embeddings_global = global_cluster_embeddings(embeddings, dim)
    # Global clustering
    global_clusters, n_global_clusters = GMM_cluster(
        reduced_embeddings_global, threshold
    )
    all_local_clusters = [np.array([]) for _ in range(len(embeddings))]
    total_clusters = 0
    # Iterate through each global cluster to perform local clustering
    for i in range(n_global_clusters):
        # Extract embeddings belonging to the current global cluster
        global_cluster_embeddings_ = embeddings[
            np.array([i in gc for gc in global_clusters])
        ]
        if len(global_cluster_embeddings_) == 0:
            continue
        if len(global_cluster_embeddings_) <= dim + 1:
            # Handle small clusters with direct assignment
            local_clusters = [np.array([0]) for _ in global_cluster_embeddings_]
            n_local_clusters = 1
        else:
            # Local dimensionality reduction and clustering
            reduced_embeddings_local = local_cluster_embeddings(
                global_cluster_embeddings_, dim
            )
            local_clusters, n_local_clusters = GMM_cluster(
                reduced_embeddings_local, threshold
            )
        # Assign local cluster IDs, adjusting for total clusters already processed
        for j in range(n_local_clusters):
            local_cluster_embeddings_ = global_cluster_embeddings_[
                np.array([j in lc for lc in local_clusters])
            ]
            indices = np.where(
                (embeddings == local_cluster_embeddings_[:, None]).all(-1)
            )[1]
            for idx in indices:
                all_local_clusters[idx] = np.append(
                    all_local_clusters[idx], j + total_clusters
                )
        total_clusters += n_local_clusters
    return all_local_clusters

### --- Our code below --- ###

def embed(texts):
    """
    Generate embeddings for a list of text documents.
    This function assumes the existence of an `embd` object with a method `embed_documents`
    that takes a list of texts and returns their embeddings.
    Parameters:
    - texts: List[str], a list of text documents to be embedded.
    Returns:
    - numpy.ndarray: An array of embeddings for the given text documents.
    """
    text_embeddings = embd.embed_documents(texts)
    text_embeddings_np = np.array(text_embeddings)
    return text_embeddings_np

def embed_cluster_texts(texts):
    """
    Embeds a list of texts and clusters them, returning a DataFrame with texts, their embeddings, and cluster labels.
    This function combines embedding generation and clustering into a single step. It assumes the existence
    of a previously defined `perform_clustering` function that performs clustering on the embeddings.
    Parameters:
    - texts: List[str], a list of text documents to be processed.
    Returns:
    - pandas.DataFrame: A DataFrame containing the original texts, their embeddings, and the assigned cluster labels.
    """
    text_embeddings_np = embed(texts)  # Generate embeddings
    cluster_labels = perform_clustering(
        text_embeddings_np, 10, 0.1
    )  # Perform clustering on the embeddings
    df = pd.DataFrame()  # Initialize a DataFrame to store the results
    df["text"] = texts  # Store original texts
    df["embd"] = list(text_embeddings_np)  # Store embeddings as a list in the DataFrame
    df["cluster"] = cluster_labels  # Store cluster labels
    return df

def fmt_txt(df: pd.DataFrame) -> str:
    """
    Formats the text documents in a DataFrame into a single string.
    Parameters:
    - df: DataFrame containing the 'text' column with text documents to format.
    Returns:
    - A single string where all text documents are joined by a specific delimiter.
    """
    unique_txt = df["text"].tolist()
    return "--- --- \n --- --- ".join(unique_txt)

def embed_cluster_summarize_texts(
    texts: List[str], level: int
) -> Tuple[pd.DataFrame, pd.DataFrame]:
    """
    Embeds, clusters, and summarizes a list of texts. This function first generates embeddings for the texts,
    clusters them based on similarity, expands the cluster assignments for easier processing, and then summarizes
    the content within each cluster.
    Parameters:
    - texts: A list of text documents to be processed.
    - level: An integer parameter that could define the depth or detail of processing.
    Returns:
    - Tuple containing two DataFrames:
      1. The first DataFrame (`df_clusters`) includes the original texts, their embeddings, and cluster assignments.
      2. The second DataFrame (`df_summary`) contains summaries for each cluster, the specified level of detail,
         and the cluster identifiers.
    """
    # Embed and cluster the texts, resulting in a DataFrame with 'text', 'embd', and 'cluster' columns
    df_clusters = embed_cluster_texts(texts)
    # Prepare to expand the DataFrame for easier manipulation of clusters
    expanded_list = []
    # Expand DataFrame entries to document-cluster pairings for straightforward processing
    for index, row in df_clusters.iterrows():
        for cluster in row["cluster"]:
            expanded_list.append(
                {"text": row["text"], "embd": row["embd"], "cluster": cluster}
            )
    # Create a new DataFrame from the expanded list
    expanded_df = pd.DataFrame(expanded_list)
    # Retrieve unique cluster identifiers for processing
    all_clusters = expanded_df["cluster"].unique()
    print(f"--Generated {len(all_clusters)} clusters--")
    # Summarization
    template = """Here is a sub-set of LangChain Expression Language docs.
    LangChain Expression Language provides a way to compose chains in LangChain.
    Give a detailed summary of the documentation provided.
    Documentation:
    {context}
    """
    prompt = ChatPromptTemplate.from_template(template)
    chain = prompt | model | StrOutputParser()
    # Format text within each cluster for summarization
    summaries = []
    for i in all_clusters:
        df_cluster = expanded_df[expanded_df["cluster"] == i]
        formatted_txt = fmt_txt(df_cluster)
        summaries.append(chain.invoke({"context": formatted_txt}))
    # Create a DataFrame to store summaries with their corresponding cluster and level
    df_summary = pd.DataFrame(
        {
            "summaries": summaries,
            "level": [level] * len(summaries),
            "cluster": list(all_clusters),
        }
    )
    return df_clusters, df_summary

def recursive_embed_cluster_summarize(
    texts: List[str], level: int = 1, n_levels: int = 3
) -> Dict[int, Tuple[pd.DataFrame, pd.DataFrame]]:
    """
    Recursively embeds, clusters, and summarizes texts up to a specified level or until
    the number of unique clusters becomes 1, storing the results at each level.
    Parameters:
    - texts: List[str], texts to be processed.
    - level: int, current recursion level (starts at 1).
    - n_levels: int, maximum depth of recursion.
    Returns:
    - Dict[int, Tuple[pd.DataFrame, pd.DataFrame]], a dictionary where keys are the recursion
      levels and values are tuples containing the clusters DataFrame and summaries DataFrame at that level.
    """
    results = {}  # Dictionary to store results at each level
    # Perform embedding, clustering, and summarization for the current level
    df_clusters, df_summary = embed_cluster_summarize_texts(texts, level)
    # Store the results of the current level
    results[level] = (df_clusters, df_summary)
    # Determine if further recursion is possible and meaningful
    unique_clusters = df_summary["cluster"].nunique()
    if level < n_levels and unique_clusters > 1:
        # Use summaries as the input texts for the next level of recursion
        new_texts = df_summary["summaries"].tolist()
        next_level_results = recursive_embed_cluster_summarize(
            new_texts, level + 1, n_levels
        )
        # Merge the results from the next level into the current results dictionary
        results.update(next_level_results)
    return results


Build the tree


leaf_texts = docs_texts
results = recursive_embed_cluster_summarize(leaf_texts, level=1, n_levels=3)
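

To sanity-check the tree before moving on, it can help to print how many clusters and summaries were produced at each level. This is a small illustrative snippet over the `results` dictionary returned above; the exact counts depend on your documents.


# Inspect the tree: number of input texts, clusters and summaries per level
for level, (df_clusters, df_summary) in sorted(results.items()):
    print(
        f"Level {level}: {len(df_clusters)} texts -> "
        f"{df_summary['cluster'].nunique()} clusters / {len(df_summary)} summaries"
    )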


Generate the final summaries.


This step depends heavily on how we want to retrieve documents from the vector store. There are basically two options:


1. Tree traversal retrieval


7


Tree traversal starts at the root level of the tree and retrieves the top-k nodes based on the cosine similarity between the query embedding and the node embeddings. Then, at each level below, it retrieves the top-k documents from the child nodes of the nodes selected at the previous level.
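

This article uses the collapsed-tree approach below, but for illustration, here is a rough, simplified sketch of what tree traversal over the `results` dictionary built above could look like. It reuses the `embd` embedding model defined earlier; `top_k`, the cosine-similarity helper, and the traversal details are assumptions for illustration rather than the exact algorithm from the RAPTOR paper.


import numpy as np

def cosine_sim(query_emb: np.ndarray, doc_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query embedding and a matrix of document embeddings."""
    query_emb = query_emb / np.linalg.norm(query_emb)
    doc_embs = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    return doc_embs @ query_emb

def tree_traversal_retrieve(query: str, results: dict, top_k: int = 3) -> list:
    """Walk the tree from the most abstract level down, expanding only the clusters
    whose summaries are most similar to the query (illustrative sketch)."""
    query_emb = np.array(embd.embed_query(query))
    collected = []
    allowed = None  # summaries we may expand at the current level; None = no restriction yet
    for level in sorted(results.keys(), reverse=True):
        df_clusters, df_summary = results[level]
        candidates = df_summary if allowed is None else df_summary[df_summary["summaries"].isin(allowed)]
        if candidates.empty:
            candidates = df_summary
        embs = np.array(embd.embed_documents(candidates["summaries"].tolist()))
        sims = cosine_sim(query_emb, embs)
        top_idx = np.argsort(sims)[::-1][:top_k]
        best = candidates.iloc[top_idx]
        collected.extend(best["summaries"].tolist())
        # The texts belonging to the chosen clusters become the nodes we may expand
        # at the next (lower, more concrete) level.
        best_clusters = set(best["cluster"].tolist())
        mask = df_clusters["cluster"].apply(lambda cs: any(c in best_clusters for c in cs))
        allowed = set(df_clusters[mask]["text"].tolist())
    # After the leaf level, `allowed` holds the selected leaf chunks themselves.
    collected.extend(allowed or [])
    return collected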


2. Collapsed tree retrieval


8


Collapsed tree retrieval is a much simpler approach. It collapses the entire tree into a single layer and retrieves nodes based on cosine similarity with the query embedding until a threshold number of tokens is reached.
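

Collapsed-tree retrieval maps naturally onto an ordinary vector-store retriever: index the leaf chunks and every summary together in one flat collection, then take the top-k nearest texts for a query. Below is a minimal sketch with Chroma, assuming a list `all_texts` that holds both the leaf chunks and the summaries (which is exactly what the next code block constructs); the choice of k is arbitrary.


from langchain.vectorstores import Chroma

# Flat index over leaf chunks + all summaries (the "collapsed" tree)
vectorstore = Chroma.from_texts(texts=all_texts, embedding=embd)
retriever = vectorstore.as_retriever(search_kwargs={"k": 6})

# Retrieval is then a single nearest-neighbour lookup over the collapsed tree
docs = retriever.get_relevant_documents("What is LCEL and why would I use it?")
for d in docs:
    print(d.page_content[:200])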


In our code, we will take the leaf texts from the data frame, the cluster summaries, and the final summaries, and combine them into one large list of texts that contains both the original (root) documents and the summaries. These texts are then stored in the vector store.


# Initialize all_texts with leaf_texts
all_texts = leaf_texts.copy()
# Iterate through the results to extract summaries from each level and add them to all_texts
for level in sorted(results.keys()):
    # Extract summaries from the current level's DataFrame
    summaries = results[level][1]["summaries"].tolist()
    # Extend all_texts with the summaries from the current level
    all_texts.extend(summaries)
#Final Summaries extracted
print(all_texts)
###############  Response ###############################################
['\n\n\n\n\nLangChain Expression Language (LCEL) | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageLangChain Expression Language (LCEL)LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.\nLCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:Streaming support\nWhen you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.Async support\nAny chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a LangServe server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.Optimized parallel execution\nWhenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.Retries and fallbacks\nConfigure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.Access intermediate results\nFor more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it’s available on every LangServe server.Input and output schemas\nInput and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.Seamless LangSmith tracing integration\nAs your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.\nWith LCEL, all steps are automatically logged to LangSmith for maximum observability and debuggability.Seamless LangServe deployment integration\nAny chain created with LCEL can be easily deployed using LangServe.Help us out by providing feedback on this documentation page:PreviousSecurityNextGet startedCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nWhy use LCEL | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageWhy use LCELOn this pageWhy use LCELWe recommend reading the LCEL Get\nstarted section first.LCEL makes it easy to build complex chains from basic components. It\ndoes this by providing: 1. A unified interface: Every LCEL object\nimplements the Runnable interface, which defines a common set of\ninvocation methods (invoke, batch, stream, ainvoke, …). This\nmakes it possible for chains of LCEL objects to also automatically\nsupport these invocations. That is, every chain of LCEL objects is\nitself an LCEL object. 2. Composition primitives: LCEL provides a\nnumber of primitives that make it easy to compose chains, parallelize\ncomponents, add fallbacks, dynamically configure chain internal, and\nmore.To better understand the value of LCEL, it’s helpful to see it in action\nand think about how we might recreate similar functionality without it.\nIn this walkthrough we’ll do just that with our basic\nexample from the\nget started section. We’ll take our simple prompt + model chain, which\nunder the hood already defines a lot of functionality, and see what it\nwould take to recreate all of it.%pip install --upgrade --quiet  langchain-core langchain-openai langchain-anthropicInvoke\u200bIn the simplest case, we just want to pass in a topic string and get\nback a joke string:Without LCEL\u200bfrom typing import Listimport openaiprompt_template = "Tell me a short joke about {topic}"client = openai.OpenAI()def call_chat_model(messages: List[dict]) -> str:    response = client.chat.completions.create(        model="gpt-3.5-turbo",         messages=messages,    )    return response.choices[0].message.contentdef invoke_chain(topic: str) -> str:    prompt_value = prompt_template.format(topic=topic)    messages = [{"role": "user", "content": prompt_value}]    return call_chat_model(messages)invoke_chain("ice cream")LCEL\u200bfrom langchain_openai import ChatOpenAIfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnablePassthroughprompt = ChatPromptTemplate.from_template(    "Tell me a short joke about {topic}")output_parser = StrOutputParser()model = ChatOpenAI(model="gpt-3.5-turbo")chain = (    {"topic": RunnablePassthrough()}     | prompt    | model    | output_parser)chain.invoke("ice cream") Stream\u200bIf we want to stream results instead, we’ll need to change our function: Without LCEL\u200bfrom typing import Iteratordef stream_chat_model(messages: List[dict]) -> Iterator[str]:    stream = client.chat.completions.create(        model="gpt-3.5-turbo",        messages=messages,        stream=True,    )    for response in stream:        content = response.choices[0].delta.content        if content is not None:            yield contentdef stream_chain(topic: str) -> Iterator[str]:    prompt_value = prompt.format(topic=topic)    return stream_chat_model([{"role": "user", "content": prompt_value}])for chunk in stream_chain("ice cream"):    print(chunk, end="", 
flush=True)LCEL\u200bfor chunk in chain.stream("ice cream"):    print(chunk, end="", flush=True) Batch\u200bIf we want to run on a batch of inputs in parallel, we’ll again need a\nnew function: Without LCEL\u200bfrom concurrent.futures import ThreadPoolExecutordef batch_chain(topics: list) -> list:    with ThreadPoolExecutor(max_workers=5) as executor:        return list(executor.map(invoke_chain, topics))batch_chain(["ice cream", "spaghetti", "dumplings"])LCEL\u200bchain.batch(["ice cream", "spaghetti", "dumplings"]) Async\u200bIf we need an asynchronous version: Without LCEL\u200basync_client = openai.AsyncOpenAI()async def acall_chat_model(messages: List[dict]) -> str:    response = await async_client.chat.completions.create(        model="gpt-3.5-turbo",         messages=messages,    )    return response.choices[0].message.contentasync def ainvoke_chain(topic: str) -> str:    prompt_value = prompt_template.format(topic=topic)    messages = [{"role": "user", "content": prompt_value}]    return await acall_chat_model(messages)await ainvoke_chain("ice cream")LCEL\u200bchain.ainvoke("ice cream") LLM instead of chat model\u200bIf we want to use a completion endpoint instead of a chat endpoint: Without LCEL\u200bdef call_llm(prompt_value: str) -> str:    response = client.completions.create(        model="gpt-3.5-turbo-instruct",        prompt=prompt_value,    )    return response.choices[0].textdef invoke_llm_chain(topic: str) -> str:    prompt_value = prompt_template.format(topic=topic)    return call_llm(prompt_value)invoke_llm_chain("ice cream")LCEL\u200bfrom langchain_openai import OpenAIllm = OpenAI(model="gpt-3.5-turbo-instruct")llm_chain = (    {"topic": RunnablePassthrough()}     | prompt    | llm    | output_parser)llm_chain.invoke("ice cream") Different model provider\u200bIf we want to use Anthropic instead of OpenAI: Without LCEL\u200bimport anthropicanthropic_template = f"Human:\\n\\n{prompt_template}\\n\\nAssistant:"anthropic_client = anthropic.Anthropic()def call_anthropic(prompt_value: str) -> str:    response = anthropic_client.completions.create(        model="claude-2",        prompt=prompt_value,        max_tokens_to_sample=256,    )    return response.completion    def invoke_anthropic_chain(topic: str) -> str:    prompt_value = anthropic_template.format(topic=topic)    return call_anthropic(prompt_value)invoke_anthropic_chain("ice cream")LCEL\u200bfrom langchain_anthropic import ChatAnthropicanthropic = ChatAnthropic(model="claude-2")anthropic_chain = (    {"topic": RunnablePassthrough()}     | prompt     | anthropic    | output_parser)anthropic_chain.invoke("ice cream") Runtime configurability\u200bIf we wanted to make the choice of chat model or LLM configurable at\nruntime: Without LCEL\u200bdef invoke_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> str:    if model == "chat_openai":        return invoke_chain(topic)    elif model == "openai":        return invoke_llm_chain(topic)    elif model == "anthropic":        return invoke_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            " Expected one of chat_openai, openai, anthropic"        )def stream_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> Iterator[str]:    if model == "chat_openai":        return stream_chain(topic)    elif model == "openai":        # Note we haven\'t implemented this yet.        
return stream_llm_chain(topic)    elif model == "anthropic":        # Note we haven\'t implemented this yet        return stream_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            " Expected one of chat_openai, openai, anthropic"        )def batch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    # You get the idea    ...async def abatch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    ...invoke_configurable_chain("ice cream", model="openai")stream = stream_configurable_chain(    "ice_cream",     model="anthropic")for chunk in stream:    print(chunk, end="", flush=True)# batch_configurable_chain(["ice cream", "spaghetti", "dumplings"])# await ainvoke_configurable_chain("ice cream")With LCEL\u200bfrom langchain_core.runnables import ConfigurableFieldconfigurable_model = model.configurable_alternatives(    ConfigurableField(id="model"),     default_key="chat_openai",     openai=llm,    anthropic=anthropic,)configurable_chain = (    {"topic": RunnablePassthrough()}     | prompt     | configurable_model     | output_parser)configurable_chain.invoke(    "ice cream",     config={"model": "openai"})stream = configurable_chain.stream(    "ice cream",     config={"model": "anthropic"})for chunk in stream:    print(chunk, end="", flush=True)configurable_chain.batch(["ice cream", "spaghetti", "dumplings"])# await configurable_chain.ainvoke("ice cream") Logging\u200bIf we want to log our intermediate results: Without LCEL\u200bWe’ll print intermediate steps for illustrative purposesdef invoke_anthropic_chain_with_logging(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = anthropic_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    output = call_anthropic(prompt_value)    print(f"Output: {output}")    return outputinvoke_anthropic_chain_with_logging("ice cream")LCEL\u200bEvery component has built-in integrations with LangSmith. If we set the\nfollowing two environment variables, all chain traces are logged to\nLangSmith.import osos.environ["LANGCHAIN_API_KEY"] = "..."os.environ["LANGCHAIN_TRACING_V2"] = "true"anthropic_chain.invoke("ice cream")Here’s what our LangSmith trace looks like:\nhttps://smith.langchain.com/public/e4de52f8-bcd9-4732-b950-deee4b04e313/r Fallbacks\u200bIf we wanted to add fallback logic, in case one model API is down: Without LCEL\u200bdef invoke_chain_with_fallback(topic: str) -> str:    try:        return invoke_chain(topic)    except Exception:        return invoke_anthropic_chain(topic)async def ainvoke_chain_with_fallback(topic: str) -> str:    try:        return await ainvoke_chain(topic)    except Exception:        # Note: we haven\'t actually implemented this.        return ainvoke_anthropic_chain(topic)async def batch_chain_with_fallback(topics: List[str]) -> str:    try:        return batch_chain(topics)    except Exception:        # Note: we haven\'t actually implemented this.        
return batch_anthropic_chain(topics)invoke_chain_with_fallback("ice cream")# await ainvoke_chain_with_fallback("ice cream")batch_chain_with_fallback(["ice cream", "spaghetti", "dumplings"]))LCEL\u200bfallback_chain = chain.with_fallbacks([anthropic_chain])fallback_chain.invoke("ice cream")# await fallback_chain.ainvoke("ice cream")fallback_chain.batch(["ice cream", "spaghetti", "dumplings"]) Full code comparison\u200bEven in this simple case, our LCEL chain succinctly packs in a lot of\nfunctionality. As chains become more complex, this becomes especially\nvaluable. Without LCEL\u200bfrom concurrent.futures import ThreadPoolExecutorfrom typing import Iterator, List, Tupleimport anthropicimport openaiprompt_template = "Tell me a short joke about {topic}"anthropic_template = f"Human:\\n\\n{prompt_template}\\n\\nAssistant:"client = openai.OpenAI()async_client = openai.AsyncOpenAI()anthropic_client = anthropic.Anthropic()def call_chat_model(messages: List[dict]) -> str:    response = client.chat.completions.create(        model="gpt-3.5-turbo",         messages=messages,    )    return response.choices[0].message.contentdef invoke_chain(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = prompt_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    messages = [{"role": "user", "content": prompt_value}]    output = call_chat_model(messages)    print(f"Output: {output}")    return outputdef stream_chat_model(messages: List[dict]) -> Iterator[str]:    stream = client.chat.completions.create(        model="gpt-3.5-turbo",        messages=messages,        stream=True,    )    for response in stream:        content = response.choices[0].delta.content        if content is not None:            yield contentdef stream_chain(topic: str) -> Iterator[str]:    print(f"Input: {topic}")    prompt_value = prompt.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    stream = stream_chat_model([{"role": "user", "content": prompt_value}])    for chunk in stream:        print(f"Token: {chunk}", end="")        yield chunkdef batch_chain(topics: list) -> list:    with ThreadPoolExecutor(max_workers=5) as executor:        return list(executor.map(invoke_chain, topics))def call_llm(prompt_value: str) -> str:    response = client.completions.create(        model="gpt-3.5-turbo-instruct",        prompt=prompt_value,    )    return response.choices[0].textdef invoke_llm_chain(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = promtp_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    output = call_llm(prompt_value)    print(f"Output: {output}")    return outputdef call_anthropic(prompt_value: str) -> str:    response = anthropic_client.completions.create(        model="claude-2",        prompt=prompt_value,        max_tokens_to_sample=256,    )    return response.completion   def invoke_anthropic_chain(topic: str) -> str:    print(f"Input: {topic}")    prompt_value = anthropic_template.format(topic=topic)    print(f"Formatted prompt: {prompt_value}")    output = call_anthropic(prompt_value)    print(f"Output: {output}")    return outputasync def ainvoke_anthropic_chain(topic: str) -> str:    ...def stream_anthropic_chain(topic: str) -> Iterator[str]:    ...def batch_anthropic_chain(topics: List[str]) -> List[str]:    ...def invoke_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> str:    if model == "chat_openai":        return invoke_chain(topic)    elif model == "openai":        return 
invoke_llm_chain(topic)    elif model == "anthropic":        return invoke_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            " Expected one of chat_openai, openai, anthropic"        )def stream_configurable_chain(    topic: str,     *,     model: str = "chat_openai") -> Iterator[str]:    if model == "chat_openai":        return stream_chain(topic)    elif model == "openai":        # Note we haven\'t implemented this yet.        return stream_llm_chain(topic)    elif model == "anthropic":        # Note we haven\'t implemented this yet        return stream_anthropic_chain(topic)    else:        raise ValueError(            f"Received invalid model \'{model}\'."            " Expected one of chat_openai, openai, anthropic"        )def batch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    ...async def abatch_configurable_chain(    topics: List[str],     *,     model: str = "chat_openai") -> List[str]:    ...def invoke_chain_with_fallback(topic: str) -> str:    try:        return invoke_chain(topic)    except Exception:        return invoke_anthropic_chain(topic)async def ainvoke_chain_with_fallback(topic: str) -> str:    try:        return await ainvoke_chain(topic)    except Exception:        return ainvoke_anthropic_chain(topic)async def batch_chain_with_fallback(topics: List[str]) -> str:    try:        return batch_chain(topics)    except Exception:        return batch_anthropic_chain(topics)LCEL\u200bimport osfrom langchain_anthropic import ChatAnthropicfrom langchain_openai import ChatOpenAIfrom langchain_openai import OpenAIfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthrough, ConfigurableFieldos.environ["LANGCHAIN_API_KEY"] = "..."os.environ["LANGCHAIN_TRACING_V2"] = "true"prompt = ChatPromptTemplate.from_template(    "Tell me a short joke about {topic}")chat_openai = ChatOpenAI(model="gpt-3.5-turbo")openai = OpenAI(model="gpt-3.5-turbo-instruct")anthropic = ChatAnthropic(model="claude-2")model = (    chat_openai    .with_fallbacks([anthropic])    .configurable_alternatives(        ConfigurableField(id="model"),        default_key="chat_openai",        openai=openai,        anthropic=anthropic,    ))chain = (    {"topic": RunnablePassthrough()}     | prompt     | model     | StrOutputParser()) Next steps\u200bTo continue learning about LCEL, we recommend: - Reading up on the full\nLCEL Interface, which we’ve only\npartially covered here. - Exploring the\nHow-to section to learn about\nadditional composition primitives that LCEL provides. - Looking through\nthe Cookbook section to see LCEL\nin action for common use cases. A good next use case to look at would be\nRetrieval-augmented\ngeneration.Help us out by providing feedback on this documentation page:PreviousGet startedNextInterfaceInvokeStreamBatchAsyncLLM instead of chat modelDifferent model providerRuntime configurabilityLoggingFallbacksFull code comparisonNext stepsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nHow to | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toHow to?️ RunnableParallel: Manipulating datamanipulating-inputs-output}?️ RunnablePassthrough: Passing data throughpassing-data-through}?️ RunnableLambda: Run Custom Functionsrun-custom-functions}?️ RunnableBranch: Dynamically route logic based on inputdynamically-route-logic-based-on-input}?️ Bind runtime argsSometimes we want to invoke a Runnable within a Runnable sequence with?️ Configure chain internals at runtimeOftentimes you may want to experiment with, or even expose to the end?️ Create a runnable with the `@chain` decoratorYou can also turn an arbitrary function into a chain by adding a?️ Add fallbacksThere are many possible points of failure in an LLM application, whether?️ Stream custom generator functionsYou can use generator functions (ie. functions that use the yield?️ Inspect your runnablesOnce you create a runnable with LCEL, you may often want to inspect it?️ Add message history (memory)The RunnableWithMessageHistory lets us add message history to certainHelp us out by providing feedback on this documentation page:PreviousStreamingNextRunnableParallel: Manipulating dataCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nRouting by semantic similarity | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookRouting by semantic similarityRouting by semantic similarityWith LCEL you can easily add custom routing\nlogic\nto your chain to dynamically determine the chain logic based on user\ninput. All you need to do is define a function that given an input\nreturns a Runnable.One especially useful technique is to use embeddings to route a query to\nthe most relevant prompt. Here’s a very simple example.%pip install --upgrade --quiet  langchain-core langchain langchain-openaifrom langchain.utils.math import cosine_similarityfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import PromptTemplatefrom langchain_core.runnables import RunnableLambda, RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsphysics_template = """You are a very smart physics professor. \\You are great at answering questions about physics in a concise and easy to understand manner. \\When you don\'t know the answer to a question you admit that you don\'t know.Here is a question:{query}"""math_template = """You are a very good mathematician. You are great at answering math questions. \\You are so good because you are able to break down hard problems into their component parts, \\answer the component parts, and then put them together to answer the broader question.Here is a question:{query}"""embeddings = OpenAIEmbeddings()prompt_templates = [physics_template, math_template]prompt_embeddings = embeddings.embed_documents(prompt_templates)def prompt_router(input):    query_embedding = embeddings.embed_query(input["query"])    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]    most_similar = prompt_templates[similarity.argmax()]    print("Using MATH" if most_similar == math_template else "Using PHYSICS")    return PromptTemplate.from_template(most_similar)chain = (    {"query": RunnablePassthrough()}    | RunnableLambda(prompt_router)    | ChatOpenAI()    | StrOutputParser())print(chain.invoke("What\'s a black hole"))Using PHYSICSA black hole is a region in space where gravity is extremely strong, so strong that nothing, not even light, can escape its gravitational pull. It is formed when a massive star collapses under its own gravity during a supernova explosion. The collapse causes an incredibly dense mass to be concentrated in a small volume, creating a gravitational field that is so intense that it warps space and time. Black holes have a boundary called the event horizon, which marks the point of no return for anything that gets too close. Beyond the event horizon, the gravitational pull is so strong that even light cannot escape, hence the name "black hole." 
While we have a good understanding of black holes, there is still much to learn, especially about what happens inside them.print(chain.invoke("What\'s a path integral"))Using MATHThank you for your kind words! I will do my best to break down the concept of a path integral for you.In mathematics and physics, a path integral is a mathematical tool used to calculate the probability amplitude or wave function of a particle or system of particles. It was introduced by Richard Feynman and is an integral over all possible paths that a particle can take to go from an initial state to a final state.To understand the concept better, let\'s consider an example. Suppose we have a particle moving from point A to point B in space. Classically, we would describe this particle\'s motion using a definite trajectory, but in quantum mechanics, particles can simultaneously take multiple paths from A to B.The path integral formalism considers all possible paths that the particle could take and assigns a probability amplitude to each path. These probability amplitudes are then added up, taking into account the interference effects between different paths.To calculate a path integral, we need to define an action, which is a mathematical function that describes the behavior of the system. The action is usually expressed in terms of the particle\'s position, velocity, and time.Once we have the action, we can write down the path integral as an integral over all possible paths. Each path is weighted by a factor determined by the action and the principle of least action, which states that a particle takes a path that minimizes the action.Mathematically, the path integral is expressed as:∫ e^(iS/ħ) D[x(t)]Here, S is the action, ħ is the reduced Planck\'s constant, and D[x(t)] represents the integration over all possible paths x(t) of the particle.By evaluating this integral, we can obtain the probability amplitude for the particle to go from the initial state to the final state. The absolute square of this amplitude gives us the probability of finding the particle in a particular state.Path integrals have proven to be a powerful tool in various areas of physics, including quantum mechanics, quantum field theory, and statistical mechanics. They allow us to study complex systems and calculate probabilities that are difficult to obtain using other methods.I hope this explanation helps you understand the concept of a path integral. If you have any further questions, feel free to ask!Help us out by providing feedback on this documentation page:PreviousCode writingNextAdding memoryCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nAgents | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookAgentsAgentsYou can pass a Runnable into an agent. Make sure you have langchainhub\ninstalled: pip install langchainhubfrom langchain import hubfrom langchain.agents import AgentExecutor, toolfrom langchain.agents.output_parsers import XMLAgentOutputParserfrom langchain_community.chat_models import ChatAnthropicmodel = ChatAnthropic(model="claude-2")@tooldef search(query: str) -> str:    """Search things about current events."""    return "32 degrees"tool_list = [search]# Get the prompt to use - you can modify this!prompt = hub.pull("hwchase17/xml-agent-convo")# Logic for going from intermediate steps to a string to pass into model# This is pretty tied to the promptdef convert_intermediate_steps(intermediate_steps):    log = ""    for action, observation in intermediate_steps:        log += (            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"            f"</tool_input><observation>{observation}</observation>"        )    return log# Logic for converting tools to string to go in promptdef convert_tools(tools):    return "\\n".join([f"{tool.name}: {tool.description}" for tool in tools])Building an agent from a runnable usually involves a few things:Data processing for the intermediate steps. These need to be\nrepresented in a way that the language model can recognize them.\nThis should be pretty tightly coupled to the instructions in the\npromptThe prompt itselfThe model, complete with stop tokens if neededThe output parser - should be in sync with how the prompt specifies\nthings to be formatted.agent = (    {        "input": lambda x: x["input"],        "agent_scratchpad": lambda x: convert_intermediate_steps(            x["intermediate_steps"]        ),    }    | prompt.partial(tools=convert_tools(tool_list))    | model.bind(stop=["</tool_input>", "</final_answer>"])    | XMLAgentOutputParser())agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)agent_executor.invoke({"input": "whats the weather in New york?"})> Entering new AgentExecutor chain... <tool>search</tool><tool_input>weather in New York32 degrees <tool>search</tool><tool_input>weather in New York32 degrees <final_answer>The weather in New York is 32 degrees> Finished chain.{\'input\': \'whats the weather in New york?\', \'output\': \'The weather in New York is 32 degrees\'}Help us out by providing feedback on this documentation page:PreviousQuerying a SQL DBNextCode writingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nQuerying a SQL DB | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookQuerying a SQL DBQuerying a SQL DBWe can replicate our SQLDatabaseChain with Runnables.%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_core.prompts import ChatPromptTemplatetemplate = """Based on the table schema below, write a SQL query that would answer the user\'s question:{schema}Question: {question}SQL Query:"""prompt = ChatPromptTemplate.from_template(template)from langchain_community.utilities import SQLDatabaseWe’ll need the Chinook sample DB for this example. There’s many places\nto download it from,\ne.g.\xa0https://database.guide/2-sample-databases-sqlite/db = SQLDatabase.from_uri("sqlite:///./Chinook.db")def get_schema(_):    return db.get_table_info()def run_query(query):    return db.run(query)from langchain_core.output_parsers import StrOutputParserfrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAImodel = ChatOpenAI()sql_response = (    RunnablePassthrough.assign(schema=get_schema)    | prompt    | model.bind(stop=["\\nSQLResult:"])    | StrOutputParser())sql_response.invoke({"question": "How many employees are there?"})\'SELECT COUNT(*) FROM Employee\'template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:{schema}Question: {question}SQL Query: {query}SQL Response: {response}"""prompt_response = ChatPromptTemplate.from_template(template)full_chain = (    RunnablePassthrough.assign(query=sql_response).assign(        schema=get_schema,        response=lambda x: db.run(x["query"]),    )    | prompt_response    | model)full_chain.invoke({"question": "How many employees are there?"})AIMessage(content=\'There are 8 employees.\', additional_kwargs={}, example=False)Help us out by providing feedback on this documentation page:PreviousMultiple chainsNextAgentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nManaging prompt size | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookManaging prompt sizeManaging prompt sizeAgents dynamically call tools. The results of those tool calls are added\nback to the prompt, so that the agent can plan the next action.\nDepending on what tools are being used and how they’re being called, the\nagent prompt can easily grow larger than the model context window.With LCEL, it’s easy to add custom functionality for managing the size\nof prompts within your chain or agent. Let’s look at simple agent\nexample that can search Wikipedia for information.%pip install --upgrade --quiet  langchain langchain-openai wikipediafrom operator import itemgetterfrom langchain.agents import AgentExecutor, load_toolsfrom langchain.agents.format_scratchpad import format_to_openai_function_messagesfrom langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParserfrom langchain.tools import WikipediaQueryRunfrom langchain_community.utilities import WikipediaAPIWrapperfrom langchain_core.prompt_values import ChatPromptValuefrom langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholderfrom langchain_openai import ChatOpenAIwiki = WikipediaQueryRun(    api_wrapper=WikipediaAPIWrapper(top_k_results=5, doc_content_chars_max=10_000))tools = [wiki]prompt = ChatPromptTemplate.from_messages(    [        ("system", "You are a helpful assistant"),        ("user", "{input}"),        MessagesPlaceholder(variable_name="agent_scratchpad"),    ])llm = ChatOpenAI(model="gpt-3.5-turbo")Let’s try a many-step question without any prompt size handling:agent = (    {        "input": itemgetter("input"),        "agent_scratchpad": lambda x: format_to_openai_function_messages(            x["intermediate_steps"]        ),    }    | prompt    | llm.bind_functions(tools)    | OpenAIFunctionsAgentOutputParser())agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)agent_executor.invoke(    {        "input": "Who is the current US president? What\'s their home state? What\'s their home state\'s bird? What\'s that bird\'s scientific name?"    })> Entering new AgentExecutor chain...Invoking: `Wikipedia` with `List of presidents of the United States`Page: List of presidents of the United StatesSummary: The president of the United States is the head of state and head of government of the United States, indirectly elected to a four-year term via the Electoral College. The officeholder leads the executive branch of the federal government and is the commander-in-chief of the United States Armed Forces. Since the office was established in 1789, 45 men have served in 46 presidencies. The first president, George Washington, won a unanimous vote of the Electoral College. 
Grover Cleveland served two non-consecutive terms and is therefore counted as the 22nd and 24th president of the United States, giving rise to the discrepancy between the number of presidencies and the number of individuals who have served as president. The incumbent president is Joe Biden.The presidency of William Henry Harrison, who died 31 days after taking office in 1841, was the shortest in American history. Franklin D. Roosevelt served the longest, over twelve years, before dying early in his fourth term in 1945. He is the only U.S. president to have served more than two terms. Since the ratification of the Twenty-second Amendment to the United States Constitution in 1951, no person may be elected president more than twice, and no one who has served more than two years of a term to which someone else was elected may be elected more than once.Four presidents died in office of natural causes (William Henry Harrison, Zachary Taylor, Warren G. Harding, and Franklin D. Roosevelt), four were assassinated (Abraham Lincoln, James A. Garfield, William McKinley, and John F. Kennedy), and one resigned (Richard Nixon, facing impeachment and removal from office). John Tyler was the first vice president to assume the presidency during a presidential term, and set the precedent that a vice president who does so becomes the fully functioning president with his presidency.Throughout most of its history, American politics has been dominated by political parties. The Constitution is silent on the issue of political parties, and at the time it came into force in 1789, no organized parties existed. Soon after the 1st Congress convened, political factions began rallying around dominant Washington administration officials, such as Alexander Hamilton and Thomas Jefferson. Concerned about the capacity of political parties to destroy the fragile unity holding the nation together, Washington remained unaffiliated with any political faction or party throughout his eight-year presidency. He was, and remains, the only U.S. president never affiliated with a political party.Page: List of presidents of the United States by ageSummary: In this list of presidents of the United States by age, the first table charts the age of each president of the United States at the time of presidential inauguration (first inauguration if elected to multiple and consecutive terms), upon leaving office, and at the time of death. Where the president is still living, their lifespan and post-presidency timespan are calculated up to January 25, 2024.Page: List of vice presidents of the United StatesSummary: There have been 49 vice presidents of the United States since the office was created in 1789. Originally, the vice president was the person who received the second-most votes for president in the Electoral College. But after the election of 1800 produced a tie between Thomas Jefferson and Aaron Burr, requiring the House of Representatives to choose between them, lawmakers acted to prevent such a situation from recurring. The Twelfth Amendment was added to the Constitution in 1804, creating the current system where electors cast a separate ballot for the vice presidency.The vice president is the first person in the presidential line of succession—that is, they assume the presidency if the president dies, resigns, or is impeached and removed from office. Nine vice presidents have ascended to the presidency in this way: eight (John Tyler, Millard Fillmore, Andrew Johnson, Chester A. Arthur, Theodore Roosevelt, Calvin Coolidge, Harry S. 
Truman, and Lyndon B. Johnson) through the president\'s death and one (Gerald Ford) through the president\'s resignation. The vice president also serves as the president of the Senate and may choose to cast a tie-breaking vote on decisions made by the Senate. Vice presidents have exercised this latter power to varying extents over the years.Before adoption of the Twenty-fifth Amendment in 1967, an intra-term vacancy in the office of the vice president could not be filled until the next post-election inauguration. Several such vacancies occurred: seven vice presidents died, one resigned and eight succeeded to the presidency. This amendment allowed for a vacancy to be filled through appointment by the president and confirmation by both chambers of the Congress. Since its ratification, the vice presidency has been vacant twice (both in the context of scandals surrounding the Nixon administration) and was filled both times through this process, namely in 1973 following Spiro Agnew\'s resignation, and again in 1974 after Gerald Ford succeeded to the presidency. The amendment also established a procedure whereby a vice president may, if the president is unable to discharge the powers and duties of the office, temporarily assume the powers and duties of the office as acting president. Three vice presidents have briefly acted as president under the 25th Amendment: George H. W. Bush on July 13, 1985; Dick Cheney on June 29, 2002, and on July 21, 2007; and Kamala Harris on November 19, 2021.The persons who have served as vice president were born in or primarily affiliated with 27 states plus the District of Columbia. New York has produced the most of any state as eight have been born there and three others considered it their home state. Most vice presidents have been in their 50s or 60s and had political experience before assuming the office. Two vice presidents—George Clinton and John C. Calhoun—served under more than one president. Ill with tuberculosis and recovering in Cuba on Inauguration Day in 1853, William R. King, by an Act of Congress, was allowed to take the oath outside the United States. He is the only vice president to take his oath of office in a foreign country.Page: List of presidents of the United States by net worthSummary: The list of presidents of the United States by net worth at peak varies greatly. Debt and depreciation often means that presidents\' net worth is less than $0 at the time of death. Most presidents before 1845 were extremely wealthy, especially Andrew Jackson and George Washington.    Presidents since 1929, when Herbert Hoover took office, have generally been wealthier than presidents of the late nineteenth and early twentieth centuries; with the exception of Harry S. Truman, all presidents since this time have been millionaires. These presidents have often received income from autobiographies and other writing. Except for Franklin D. Roosevelt and John F. Kennedy (both of whom died while in office), all presidents beginning with Calvin Coolidge have written autobiographies. In addition, many presidents—including Bill Clinton—have earned considerable income from public speaking after leaving office.The richest president in history may be Donald Trump. However, his net worth is not precisely known because the Trump Organization is privately held.Truman was among the poorest U.S. presidents, with a net worth considerably less than $1 million. His financial situation contributed to the doubling of the presidential salary to $100,000 in 1949. 
In addition, the presidential pension was created in 1958 when Truman was again experiencing financial difficulties. Harry and Bess Truman received the first Medicare cards in 1966 via the Social Security Act of 1965.Page: List of presidents of the United States by home stateSummary: These lists give the states of primary affiliation and of birth for each president of the United States.Invoking: `Wikipedia` with `Joe Biden`Page: Joe BidenSummary: Joseph Robinette Biden Jr. (  BY-dən; born November 20, 1942) is an American politician who is the 46th and current president of the United States. A member of the Democratic Party, he previously served as the 47th vice president from 2009 to 2017 under President Barack Obama and represented Delaware in the United States Senate from 1973 to 2009.Born in Scranton, Pennsylvania, Biden moved with his family to Delaware in 1953. He graduated from the University of Delaware before earning his law degree from Syracuse University. He was elected to the New Castle County Council in 1970 and to the U.S. Senate in 1972. As a senator, Biden drafted and led the effort to pass the Violent Crime Control and Law Enforcement Act and the Violence Against Women Act. He also oversaw six U.S. Supreme Court confirmation hearings, including the contentious hearings for Robert Bork and Clarence Thomas. Biden ran unsuccessfully for the Democratic presidential nomination in 1988 and 2008. In 2008, Obama chose Biden as his running mate, and he was a close counselor to Obama during his two terms as vice president. In the 2020 presidential election, Biden and his running mate, Kamala Harris, defeated incumbents Donald Trump and Mike Pence. He became the oldest president in U.S. history, and the first to have a female vice president.As president, Biden signed the American Rescue Plan Act in response to the COVID-19 pandemic and subsequent recession. He signed bipartisan bills on infrastructure and manufacturing. He proposed the Build Back Better Act, which failed in Congress, but aspects of which were incorporated into the Inflation Reduction Act that he signed into law in 2022. Biden appointed Ketanji Brown Jackson to the Supreme Court. He worked with congressional Republicans to resolve the 2023 United States debt-ceiling crisis by negotiating a deal to raise the debt ceiling. In foreign policy, Biden restored America\'s membership in the Paris Agreement. He oversaw the complete withdrawal of U.S. troops from Afghanistan that ended the war in Afghanistan, during which the Afghan government collapsed and the Taliban seized control. He responded to the Russian invasion of Ukraine by imposing sanctions on Russia and authorizing civilian and military aid to Ukraine. During the Israel–Hamas war, Biden announced military support for Israel, and condemned the actions of Hamas and other Palestinian militants as terrorism. In April 2023, Biden announced his candidacy for the Democratic nomination in the 2024 presidential election.Page: Presidency of Joe BidenSummary: Joe Biden\'s tenure as the 46th president of the United States began with his inauguration on January 20, 2021. Biden, a Democrat from Delaware who previously served as vice president for two terms under president Barack Obama, took office following his victory in the 2020 presidential election over Republican incumbent president Donald Trump. Biden won the presidency with a popular vote of over 81 million, the highest number of votes cast for a single United States presidential candidate. 
Upon his inauguration, he became the oldest president in American history, breaking the record set by his predecessor Trump. Biden entered office amid the COVID-19 pandemic, an economic crisis, and increased political polarization.On the first day of his presidency, Biden made an effort to revert President Trump\'s energy policy by restoring U.S. participation in the Paris Agreement and revoking the permit for the Keystone XL pipeline. He also halted funding for Trump\'s border wall, an expansion of the Mexican border wall. On his second day, he issued a series of executive orders to reduce the impact of COVID-19, including invoking the Defense Production Act of 1950, and set an early goal of achieving one hundred million COVID-19 vaccinations in the United States in his first 100 days.Biden signed into law the American Rescue Plan Act of 2021; a $1.9 trillion stimulus bill that temporarily established expanded unemployment insurance and sent $1,400 stimulus checks to most Americans in response to continued economic pressure from COVID-19. He signed the bipartisan Infrastructure Investment and Jobs Act; a ten-year plan brokered by Biden alongside Democrats and Republicans in Congress, to invest in American roads, bridges, public transit, ports and broadband access. Biden signed the Juneteenth National Independence Day Act, making Juneteenth a federal holiday in the United States. He appointed Ketanji Brown Jackson to the U.S. Supreme Court—the first Black woman to serve on the court. After The Supreme Court overturned Roe v. Wade, Biden took executive actions, such as the signing of Executive Order 14076, to preserve and protect women\'s health rights nationwide, against abortion bans in Republican led states. Biden proposed a significant expansion of the U.S. social safety net through the Build Back Better Act, but those efforts, along with voting rights legislation, failed in Congress. However, in August 2022, Biden signed the Inflation Reduction Act of 2022, a domestic appropriations bill that included some of the provisions of the Build Back Better Act after the entire bill failed to pass. It included significant federal investment in climate and domestic clean energy production, tax credits for solar panels, electric cars and other home energy programs as well as a three-year extension of Affordable Care Act subsidies. The administration\'s economic policies, known as "Bidenomics", were inspired and designed by Trickle-up economics. Described as growing the economy from the middle out and bottom up and growing the middle class. Biden signed the CHIPS and Science Act, bolstering the semiconductor and manufacturing industry, the Honoring our PACT Act, expanding health care for US veterans, the Bipartisan Safer Communities Act and the Electoral Count Reform and Presidential Transition Improvement Act. In late 2022, Biden signed the Respect for Marriage Act, which repealed the Defense of Marriage Act and codified same-sex and interracial marriage in the United States. In response to the debt-ceiling crisis of 2023, Biden negotiated and signed the Fiscal Responsibility Act of 2023, which restrains federal spending for fiscal years 2024 and 2025, implements minor changes to SNAP and TANF, includes energy permitting reform, claws back some IRS funding and unspent money for COVID-19, and suspends the debt ceiling to January 1, 2025. Biden established the American Climate Corps and created the first ever White House Office of Gun Violence Prevention. 
On September 26, 2023, Joe Biden visited a United Auto Workers picket line during the 2023 United Auto Workers strike, making him the first US president to visit one.The foreign policy goal of the Biden administration is to restore the US to a "position of trusted leadership" among global democracies in order to address the challenges posed by Russia and China. In foreign policy, Biden completed the withdrawal of U.S. military forces from Afghanistan, declaring an end to nation-building efforts and shifting U.S. foreign policy toward strategic competition with China and, to a lesser extent, Russia. However, during the withdrawal, the Afghan government collapsed and the Taliban seized control, leading to Biden receiving bipartisan criticism. He responded to the Russian invasion of Ukraine by imposing sanctions on Russia as well as providing Ukraine with over $100 billion in combined military, economic, and humanitarian aid. Biden also approved a raid which led to the death of Abu Ibrahim al-Hashimi al-Qurashi, the leader of the Islamic State, and approved a drone strike which killed Ayman Al Zawahiri, leader of Al-Qaeda. Biden signed and created AUKUS, an international security alliance, together with Australia and the United Kingdom. Biden called for the expansion of NATO with the addition of Finland and Sweden, and rallied NATO allies in support of Ukraine. During the 2023 Israel–Hamas war, Biden condemned Hamas and other Palestinian militants as terrorism and announced American military support for Israel; Biden also showed his support and sympathy towards Palestinians affected by the war, sent humanitarian aid, and brokered a four-day temporary pause and hostage exchange.Page: Family of Joe BidenSummary: Joe Biden, the 46th and current president of the United States, has family members who are prominent in law, education, activism and politics. Biden\'s immediate family became the first family of the United States on his inauguration on January 20, 2021. His immediate family circle was also the second family of the United States from 2009 to 2017, when Biden was vice president. Biden\'s family is mostly descended from the British Isles, with most of their ancestors coming from Ireland and England, and a smaller number descending from the French.Of Joe Biden\'s sixteen great-great-grandparents, ten were born in Ireland. He is descended from the Blewitts of County Mayo and the Finnegans of County Louth. One of Biden\'s great-great-great-grandfathers was born in Sussex, England, and emigrated to Maryland in the United States by 1820.Page: Inauguration of Joe BidenSummary: The inauguration of Joe Biden as the 46th president of the United States took place on Wednesday, January 20, 2021, marking the start of the four-year term of Joe Biden as president and Kamala Harris as vice president. The 59th presidential inauguration took place on the West Front of the United States Capitol in Washington, D.C. Biden took the presidential oath of office, before which Harris took the vice presidential oath of office.The inauguration took place amidst extraordinary political, public health, economic, and national security crises, including the ongoing COVID-19 pandemic; outgoing President Donald Trump\'s attempts to overturn the 2020 United States presidential election, which provoked an attack on the United States Capitol on January 6; Trump\'Invoking: `Wikipedia` with `Delaware`Page: DelawareSummary: Delaware (  DEL-ə-wair) is a state in the northeast and Mid-Atlantic regions of the United States. 
It borders Maryland to its south and west, Pennsylvania to its north, New Jersey to its northeast, and the Atlantic Ocean to its east. The state\'s name derives from the adjacent Delaware Bay, which in turn was named after Thomas West, 3rd Baron De La Warr, an English nobleman and the Colony of Virginia\'s first colonial-era governor.Delaware occupies the northeastern portion of the Delmarva Peninsula, and some islands and territory within the Delaware River. It is the 2nd smallest and 6th least populous state, but also the 6th most densely populated. Delaware\'s most populous city is Wilmington, and the state\'s capital is Dover, the 2nd most populous city in Delaware. The state is divided into three counties, the fewest number of counties of any of the 50 U.S. states; from north to south, the three counties are: New Castle County, Kent County, and Sussex County.The southern two counties, Kent and Sussex counties, historically have been predominantly agrarian economies. New Castle is more urbanized and is considered part of the Delaware Valley metropolitan statistical area that surrounds and includes Philadelphia, the nation\'s 6th most populous city. Delaware is considered part of the Southern United States by the U.S. Census Bureau, but the state\'s geography, culture, and history are a hybrid of the Mid-Atlantic and Northeastern regions of the country.Before Delaware coastline was explored and developed by Europeans in the 16th century, the state was inhabited by several Native Americans tribes, including the Lenape in the north and Nanticoke in the south. The state was first colonized by Dutch traders at Zwaanendael, near present-day Lewes, Delaware, in 1631.Delaware was one of the Thirteen Colonies that participated in the American Revolution and American Revolutionary War, in which the American Continental Army, led by George Washington, defeated the British, ended British colonization and establishing the United States as a sovereign and independent nation.On December 7, 1787, Delaware was the first state to ratify the Constitution of the United States, earning it the nickname "The First State".Since the turn of the 20th century, Delaware has become an onshore corporate haven whose corporate laws are deemed appealing to corporations; over half of all New York Stock Exchange-listed corporations and over three-fifths of the Fortune 500 is legally incorporated in the state.Page: Delaware City, DelawareSummary: Delaware City is a city in New Castle County, Delaware, United States. The population was 1,885 as of 2020. It is a small port town on the eastern terminus of the Chesapeake and Delaware Canal and is the location of the Forts Ferry Crossing to Fort Delaware on Pea Patch Island.Page: Delaware RiverSummary: The Delaware River is a major river in the Mid-Atlantic region of the United States and is the longest free-flowing (undammed) river in the Eastern United States. From the meeting of its branches in Hancock, New York, the river flows for 282 miles (454 km) along the borders of New York, Pennsylvania, New Jersey, and Delaware, before emptying into Delaware Bay.The river has been recognized by the National Wildlife Federation as one of the country\'s Great Waters and has been called the "Lifeblood of the Northeast" by American Rivers. 
Its watershed drains an area of 13,539 square miles (35,070 km2) and provides drinking water for 17 million people, including half of New York City via the Delaware Aqueduct.The Delaware River has two branches that rise in the Catskill Mountains of New York: the West Branch at Mount Jefferson in Jefferson, Schoharie County, and the East Branch at Grand Gorge, Delaware County. The branches merge to form the main Delaware River at Hancock, New York. Flowing south, the river remains relatively undeveloped, with 152 miles (245 km) protected as the Upper, Middle, and Lower Delaware National Scenic Rivers. At Trenton, New Jersey, the Delaware becomes tidal, navigable, and significantly more industrial. This section forms the backbone of the Delaware Valley metropolitan area, serving the port cities of Philadelphia, Camden, New Jersey, and Wilmington, Delaware. The river flows into Delaware Bay at Liston Point, 48 miles (77 km) upstream of the bay\'s outlet to the Atlantic Ocean between Cape May and Cape Henlopen.Before the arrival of European settlers, the river was the homeland of the Lenape native people. They called the river Lenapewihittuk, or Lenape River, and Kithanne, meaning the largest river in this part of the country.In 1609, the river was visited by a Dutch East India Company expedition led by Henry Hudson. Hudson, an English navigator, was hired to find a western route to Cathay (China), but his encounters set the stage for Dutch colonization of North America in the 17th century. Early Dutch and Swedish settlements were established along the lower section of the river and Delaware Bay. Both colonial powers called the river the South River (Zuidrivier), compared to the Hudson River, which was known as the North River. After the English expelled the Dutch and took control of the New Netherland colony in 1664, the river was renamed Delaware after Sir Thomas West, 3rd Baron De La Warr, an English nobleman and the Virginia colony\'s first royal governor, who defended the colony during the First Anglo-Powhatan War.Page: University of DelawareSummary: The University of Delaware (colloquially known as UD or Delaware) is a privately governed, state-assisted land-grant research university located in Newark, Delaware. UD is the largest university in Delaware. It offers three associate\'s programs, 148 bachelor\'s programs, 121 master\'s programs (with 13 joint degrees), and 55 doctoral programs across its eight colleges. The main campus is in Newark, with satellite campuses in Dover, Wilmington, Lewes, and Georgetown. It is considered a large institution with approximately 18,200 undergraduate and 4,200 graduate students. It is a privately governed university which receives public funding for being a land-grant, sea-grant, and space-grant state-supported research institution.UD is classified among "R1: Doctoral Universities – Very high research activity". According to the National Science Foundation, UD spent $186 million on research and development in 2018, ranking it 119th in the nation.  It is recognized with the Community Engagement Classification by the Carnegie Foundation for the Advancement of Teaching.UD students, alumni, and sports teams are known as the "Fightin\' Blue Hens", more commonly shortened to "Blue Hens", and the school colors are Delaware blue and gold. 
UD sponsors 21 men\'s and women\'s NCAA Division-I sports teams and have competed in the Colonial Athletic Association (CAA) since 2001.Page: LenapeSummary: The Lenape (English: , , ; Lenape languages: [lənaːpe]), also called the Lenni Lenape and Delaware people, are an Indigenous people of the Northeastern Woodlands, who live in the United States and Canada.The Lenape\'s historical territory includes present-day northeastern Delaware, all of New Jersey, the eastern Pennsylvania regions of the Lehigh Valley and Northeastern Pennsylvania, and New York Bay, western Long Island, and the lower Hudson Valley in New York state. Today they are based in Oklahoma, Wisconsin, and Ontario.During the last decades of the 18th century, European settlers and the effects of the American Revolutionary War displaced most Lenape from their homelands and pushed them north and west. In the 1860s, under the Indian removal policy, the U.S. federal government relocated most Lenape remaining in the Eastern United States to the Indian Territory and surrounding regions. Lenape people currently belong to the Delaware Nation and Delaware Tribe of Indians in Oklahoma, the Stockbridge–Munsee Community in Wisconsin, and the Munsee-Delaware Nation, Moravian of the Thames First Nation, and Delaware of Six Nations in Ontario.BadRequestError: Error code: 400 - {\'error\': {\'message\': "This model\'s maximum context length is 4097 tokens. However, your messages resulted in 5487 tokens (5419 in the messages, 68 in the functions). Please reduce the length of the messages or functions.", \'type\': \'invalid_request_error\', \'param\': \'messages\', \'code\': \'context_length_exceeded\'}}LangSmith\ntraceUnfortunately we run out of space in our model’s context window before\nwe the agent can get to the final answer. Now let’s add some prompt\nhandling logic. To keep things simple, if our messages have too many\ntokens we’ll start dropping the earliest AI, Function message pairs\n(this is the model tool invocation message and the subsequent tool\noutput message) in the chat history.def condense_prompt(prompt: ChatPromptValue) -> ChatPromptValue:    messages = prompt.to_messages()    num_tokens = llm.get_num_tokens_from_messages(messages)    ai_function_messages = messages[2:]    while num_tokens > 4_000:        ai_function_messages = ai_function_messages[2:]        num_tokens = llm.get_num_tokens_from_messages(            messages[:2] + ai_function_messages        )    messages = messages[:2] + ai_function_messages    return ChatPromptValue(messages=messages)agent = (    {        "input": itemgetter("input"),        "agent_scratchpad": lambda x: format_to_openai_function_messages(            x["intermediate_steps"]        ),    }    | prompt    | condense_prompt    | llm.bind_functions(tools)    | OpenAIFunctionsAgentOutputParser())agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)agent_executor.invoke(    {        "input": "Who is the current US president? What\'s their home state? What\'s their home state\'s bird? What\'s that bird\'s scientific name?"    })> Entering new AgentExecutor chain...Invoking: `Wikipedia` with `List of presidents of the United States`Page: List of presidents of the United StatesSummary: The president of the United States is the head of state and head of government of the United States, indirectly elected to a four-year term via the Electoral College. The officeholder leads the executive branch of the federal government and is the commander-in-chief of the United States Armed Forces. 
Since the office was established in 1789, 45 men have served in 46 presidencies. The first president, George Washington, won a unanimous vote of the Electoral College. Grover Cleveland served two non-consecutive terms and is therefore counted as the 22nd and 24th president of the United States, giving rise to the discrepancy between the number of presidencies and the number of individuals who have served as president. The incumbent president is Joe Biden.The presidency of William Henry Harrison, who died 31 days after taking office in 1841, was the shortest in American history. Franklin D. Roosevelt served the longest, over twelve years, before dying early in his fourth term in 1945. He is the only U.S. president to have served more than two terms. Since the ratification of the Twenty-second Amendment to the United States Constitution in 1951, no person may be elected president more than twice, and no one who has served more than two years of a term to which someone else was elected may be elected more than once.Four presidents died in office of natural causes (William Henry Harrison, Zachary Taylor, Warren G. Harding, and Franklin D. Roosevelt), four were assassinated (Abraham Lincoln, James A. Garfield, William McKinley, and John F. Kennedy), and one resigned (Richard Nixon, facing impeachment and removal from office). John Tyler was the first vice president to assume the presidency during a presidential term, and set the precedent that a vice president who does so becomes the fully functioning president with his presidency.Throughout most of its history, American politics has been dominated by political parties. The Constitution is silent on the issue of political parties, and at the time it came into force in 1789, no organized parties existed. Soon after the 1st Congress convened, political factions began rallying around dominant Washington administration officials, such as Alexander Hamilton and Thomas Jefferson. Concerned about the capacity of political parties to destroy the fragile unity holding the nation together, Washington remained unaffiliated with any political faction or party throughout his eight-year presidency. He was, and remains, the only U.S. president never affiliated with a political party.Page: List of presidents of the United States by ageSummary: In this list of presidents of the United States by age, the first table charts the age of each president of the United States at the time of presidential inauguration (first inauguration if elected to multiple and consecutive terms), upon leaving office, and at the time of death. Where the president is still living, their lifespan and post-presidency timespan are calculated up to January 25, 2024.Page: List of vice presidents of the United StatesSummary: There have been 49 vice presidents of the United States since the office was created in 1789. Originally, the vice president was the person who received the second-most votes for president in the Electoral College. But after the election of 1800 produced a tie between Thomas Jefferson and Aaron Burr, requiring the House of Representatives to choose between them, lawmakers acted to prevent such a situation from recurring. The Twelfth Amendment was added to the Constitution in 1804, creating the current system where electors cast a separate ballot for the vice presidency.The vice president is the first person in the presidential line of succession—that is, they assume the presidency if the president dies, resigns, or is impeached and removed from office. 
...[loader output truncated: the remainder of this page is the quoted example agent run (Wikipedia summaries for the vice presidency, Joe Biden, Delaware, the University of Delaware and the Delaware Blue Hen), the agent's final answer ("The current US president is Joe Biden. His home state is Delaware. The home state bird of Delaware is the Delaware Blue Hen. The scientific name of the Delaware Blue Hen is Gallus gallus domesticus.") and the standard docs navigation/footer text]...',
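Each element in this listing is the raw scraped text of one documentation page, so the loaded documents are long and still carry navigation and footer boilerplate. Before building the RAPTOR tree it can help to check how large they are. A minimal sketch, assuming `docs` holds the Documents returned by `loader.load()` and using tiktoken's `cl100k_base` encoding (swap in the tokenizer that matches your model):

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
docs_texts = [d.page_content for d in docs]  # raw text of every loaded page
token_counts = [len(encoding.encode(t)) for t in docs_texts]
print(f"{len(docs_texts)} pages loaded")
print(f"min / max / total tokens: {min(token_counts)} / {max(token_counts)} / {sum(token_counts)}")

The remaining loaded pages (the LCEL "Interface" and "Streaming" docs) are shown, truncated, below.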
 '\n\n\n\n\nInterface | 🦜️🔗 Langchain ... [loader output truncated: the full scraped text of the LCEL "Interface" page, including site navigation, the description of the Runnable protocol (stream, invoke and batch plus the async variants astream, ainvoke, abatch, astream_log and astream_events), the component input/output type table, long input_schema / output_schema JSONSchema dumps, sync and async usage examples, astream_events and astream_log traces, the RunnableParallel parallelism examples, and the page footer] ...\n\n\n\n',
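The quoted "Interface" page boils down to a small set of calls that every Runnable (and therefore every LCEL chain) exposes: invoke, stream and batch, plus their async counterparts. A minimal sketch of those calls, assuming the LlamaCpp `model` instantiated earlier (any Runnable-compatible model behaves the same):

from langchain_core.prompts import PromptTemplate

# Compose a tiny chain; the resulting RunnableSequence exposes the standard interface.
prompt = PromptTemplate.from_template("Answer in one sentence: {question}")
chain = prompt | model

# invoke: one input, one output
print(chain.invoke({"question": "What does LCEL stand for?"}))

# stream: yields output chunks as soon as they are produced
for chunk in chain.stream({"question": "Why is streaming useful in LLM apps?"}):
    print(chunk, end="", flush=True)

# batch: a list of inputs processed together
print(chain.batch([{"question": "What is a Runnable?"}, {"question": "What is a retriever?"}]))

Inside async code, the same pattern applies with ainvoke, astream and abatch.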
 '\n\n\n\n\nStreaming | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageStreamingOn this pageStreaming With LangChainStreaming is critical in making applications based on LLMs feel\nresponsive to end-users.Important LangChain primitives like LLMs, parsers, prompts, retrievers,\nand agents implement the LangChain Runnable\nInterface.This interface provides two general approaches to stream content:sync stream and async astream: a default implementation of\nstreaming that streams the final output from the chain.async astream_events and async astream_log: these provide a way\nto stream both intermediate steps and final output from the\nchain.Let’s take a look at both approaches, and try to understand how to use\nthem. ?Using Stream\u200bAll Runnable objects implement a sync method called stream and an\nasync variant called astream.These methods are designed to stream the final output in chunks,\nyielding each chunk as soon as it is available.Streaming is only possible if all steps in the program know how to\nprocess an input stream; i.e., process an input chunk one at a time,\nand yield a corresponding output chunk.The complexity of this processing can vary, from straightforward tasks\nlike emitting tokens produced by an LLM, to more challenging ones like\nstreaming parts of JSON results before the entire JSON is complete.The best place to start exploring streaming is with the single most\nimportant components in LLMs apps– the LLMs themselves!LLMs and Chat Models\u200bLarge language models and their chat variants are the primary bottleneck\nin LLM based apps. ?Large language models can take several seconds to generate a\ncomplete response to a query. This is far slower than the ~200-300\nms threshold at which an application feels responsive to an end user.The key strategy to make the application feel more responsive is to show\nintermediate progress; viz., to stream the output from the model token\nby token.We will show examples of streaming using the chat model from\nAnthropic. To use the model,\nyou will need to install the langchain-anthropic package. You can do\nthis with the following command:pip install -qU langchain-anthropic# Showing the example using anthropic, but you can use# your favorite chat model!from langchain_anthropic import ChatAnthropicmodel = ChatAnthropic()chunks = []async for chunk in model.astream("hello. tell me something about yourself"):    chunks.append(chunk)    print(chunk.content, end="|", flush=True) Hello|!| My| name| is| Claude|.| I|\'m| an| AI| assistant| created| by| An|throp|ic| to| be| helpful|,| harmless|,| and| honest|.||Let’s inspect one of the chunkschunks[0]AIMessageChunk(content=\' Hello\')We got back something called an AIMessageChunk. This chunk represents\na part of an AIMessage.Message chunks are additive by design – one can simply add them up to\nget the state of the response so far!chunks[0] + chunks[1] + chunks[2] + chunks[3] + chunks[4]AIMessageChunk(content=\' Hello! 
My name is\')Chains\u200bVirtually all LLM applications involve more steps than just a call to a\nlanguage model.Let’s build a simple chain using LangChain Expression Language\n(LCEL) that combines a prompt, model and a parser and verify that\nstreaming works.We will use StrOutputParser to parse the output from the model. This\nis a simple parser that extracts the content field from an\nAIMessageChunk, giving us the token returned by the model.tipLCEL is a declarative way to specify a “program” by chainining\ntogether different LangChain primitives. Chains created using LCEL\nbenefit from an automatic implementation of stream and astream\nallowing streaming of the final output. In fact, chains created with\nLCEL implement the entire standard Runnable interface.from langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")parser = StrOutputParser()chain = prompt | model | parserasync for chunk in chain.astream({"topic": "parrot"}):    print(chunk, end="|", flush=True) Here|\'s| a| silly| joke| about| a| par|rot|:|What| kind| of| teacher| gives| good| advice|?| An| ap|-|parent| (|app|arent|)| one|!||noteYou do not have to use the LangChain Expression Language to use\nLangChain and can instead rely on a standard imperative programming\napproach by caling invoke, batch or stream on each component\nindividually, assigning the results to variables and then using them\ndownstream as you see fit.If that works for your needs, then that’s fine by us ?!Working with Input Streams\u200bWhat if you wanted to stream JSON from the output as it was being\ngenerated?If you were to rely on json.loads to parse the partial json, the\nparsing would fail as the partial json wouldn’t be valid json.You’d likely be at a complete loss of what to do and claim that it\nwasn’t possible to stream JSON.Well, turns out there is a way to do it – the parser needs to operate on\nthe input stream, and attempt to “auto-complete” the partial json\ninto a valid state.Let’s see such a parser in action to understand what this means.from langchain_core.output_parsers import JsonOutputParserchain = (    model | JsonOutputParser())  # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some modelsasync for text in chain.astream(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key `name` and `population`\'):    print(text, flush=True){}{\'countries\': []}{\'countries\': [{}]}{\'countries\': [{\'name\': \'\'}]}{\'countries\': [{\'name\': \'France\'}]}{\'countries\': [{\'name\': \'France\', \'population\': 67}]}{\'countries\': [{\'name\': \'France\', \'population\': 6739}]}{\'countries\': [{\'name\': \'France\', \'population\': 673915}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'\'}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Sp\'}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\'}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 4675}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 467547}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {\'name\': \'\'}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {\'name\': \'Japan\'}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {\'name\': \'Japan\', \'population\': 12}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {\'name\': \'Japan\', \'population\': 12647}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {\'name\': \'Japan\', \'population\': 1264764}]}{\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {\'name\': \'Spain\', \'population\': 46754778}, {\'name\': \'Japan\', \'population\': 126476461}]}Now, let’s break streaming. We’ll use the previous example and\nappend an extraction function at the end that extracts the country names\nfrom the finalized JSON.dangerAny steps in the chain that operate on finalized inputs rather than\non input streams can break streaming functionality via stream or\nastream.tipLater, we will discuss the astream_events API which streams results\nfrom intermediate steps. 
This API will stream results from intermediate\nsteps even if the chain contains steps that only operate on finalized\ninputs.from langchain_core.output_parsers import (    JsonOutputParser,)# A function that operates on finalized inputs# rather than on an input_streamdef _extract_country_names(inputs):    """A function that does not operates on input streams and breaks streaming."""    if not isinstance(inputs, dict):        return ""    if "countries" not in inputs:        return ""    countries = inputs["countries"]    if not isinstance(countries, list):        return ""    country_names = [        country.get("name") for country in countries if isinstance(country, dict)    ]    return country_nameschain = model | JsonOutputParser() | _extract_country_namesasync for text in chain.astream(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\'):    print(text, end="|", flush=True)[\'France\', \'Spain\', \'Japan\']|Generator Functions\u200bLe’ts fix the streaming using a generator function that can operate on\nthe input stream.tipA generator function (a function that uses yield) allows writing code\nthat operators on input streamsfrom langchain_core.output_parsers import JsonOutputParserasync def _extract_country_names_streaming(input_stream):    """A function that operates on input streams."""    country_names_so_far = set()    async for input in input_stream:        if not isinstance(input, dict):            continue        if "countries" not in input:            continue        countries = input["countries"]        if not isinstance(countries, list):            continue        for country in countries:            name = country.get("name")            if not name:                continue            if name not in country_names_so_far:                yield name                country_names_so_far.add(name)chain = model | JsonOutputParser() | _extract_country_names_streamingasync for text in chain.astream(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\'):    print(text, end="|", flush=True)France|Sp|Spain|Japan|noteBecause the code above is relying on JSON auto-completion, you may see\npartial names of countries (e.g., Sp and Spain), which is not what\none would want for an extraction result!We’re focusing on streaming concepts, not necessarily the results of the\nchains.Non-streaming components\u200bSome built-in components like Retrievers do not offer any streaming.\nWhat happens if we try to stream them? 
?from langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import OpenAIEmbeddingstemplate = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)vectorstore = FAISS.from_texts(    ["harrison worked at kensho", "harrison likes spicy food"],    embedding=OpenAIEmbeddings(),)retriever = vectorstore.as_retriever()chunks = [chunk for chunk in retriever.stream("where did harrison work?")]chunks[[Document(page_content=\'harrison worked at kensho\'),  Document(page_content=\'harrison likes spicy food\')]]Stream just yielded the final result from that component.This is OK \U0001f979! Not all components have to implement streaming – in some\ncases streaming is either unnecessary, difficult or just doesn’t make\nsense.tipAn LCEL chain constructed using non-streaming components, will still be\nable to stream in a lot of cases, with streaming of partial output\nstarting after the last non-streaming step in the chain.retrieval_chain = (    {        "context": retriever.with_config(run_name="Docs"),        "question": RunnablePassthrough(),    }    | prompt    | model    | StrOutputParser())for chunk in retrieval_chain.stream(    "Where did harrison work? " "Write 3 made up sentences about this place."):    print(chunk, end="|", flush=True) Based| on| the| given| context|,| the| only| information| provided| about| where| Harrison| worked| is| that| he| worked| at| Ken|sh|o|.| Since| there| are| no| other| details| provided| about| Ken|sh|o|,| I| do| not| have| enough| information| to| write| 3| additional| made| up| sentences| about| this| place|.| I| can| only| state| that| Harrison| worked| at| Ken|sh|o|.||Now that we’ve seen how stream and astream work, let’s venture into\nthe world of streaming events. ?️Using Stream Events\u200bEvent Streaming is a beta API. This API may change a bit based on\nfeedback.noteIntroduced in langchain-core 0.1.14.import langchain_corelangchain_core.__version__\'0.1.18\'For the astream_events API to work properly:Use async throughout the code to the extent possible (e.g., async\ntools etc)Propagate callbacks if defining custom functions / runnablesWhenever using runnables without LCEL, make sure to call\n.astream() on LLMs rather than .ainvoke to force the LLM to\nstream tokens.Let us know if anything doesn’t work as expected! 
:)Event Reference\u200bBelow is a reference table that shows some events that might be emitted\nby the various Runnable objects.noteWhen streaming is implemented properly, the inputs to a runnable will\nnot be known until after the input stream has been entirely consumed.\nThis means that inputs will often be included only for end events\nand rather than for start events.eventnamechunkinputoutputon_chat_model_start[model name]{“messages”: [[SystemMessage, HumanMessage]]}on_chat_model_stream[model name]AIMessageChunk(content=“hello”)on_chat_model_end[model name]{“messages”: [[SystemMessage, HumanMessage]]}{“generations”: […], “llm_output”: None, …}on_llm_start[model name]{‘input’: ‘hello’}on_llm_stream[model name]‘Hello’on_llm_end[model name]‘Hello human!’on_chain_startformat_docson_chain_streamformat_docs“hello world!, goodbye world!”on_chain_endformat_docs[Document(…)]“hello world!, goodbye world!”on_tool_startsome_tool{“x”: 1, “y”: “2”}on_tool_streamsome_tool{“x”: 1, “y”: “2”}on_tool_endsome_tool{“x”: 1, “y”: “2”}on_retriever_start[retriever name]{“query”: “hello”}on_retriever_chunk[retriever name]{documents: […]}on_retriever_end[retriever name]{“query”: “hello”}{documents: […]}on_prompt_start[template_name]{“question”: “hello”}on_prompt_end[template_name]{“question”: “hello”}ChatPromptValue(messages: [SystemMessage, …])Chat Model\u200bLet’s start off by looking at the events produced by a chat model.events = []async for event in model.astream_events("hello", version="v1"):    events.append(event)/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.  warn_beta(noteHey what’s that funny version=“v1” parameter in the API?! ?This is a beta API, and we’re almost certainly going to make some\nchanges to it.This version parameter will allow us to minimize such breaking changes\nto your code.In short, we are annoying you now, so we don’t have to annoy you later.Let’s take a look at the few of the start event and a few of the end\nevents.events[:3][{\'event\': \'on_chat_model_start\',  \'run_id\': \'555843ed-3d24-4774-af25-fbf030d5e8c4\',  \'name\': \'ChatAnthropic\',  \'tags\': [],  \'metadata\': {},  \'data\': {\'input\': \'hello\'}}, {\'event\': \'on_chat_model_stream\',  \'run_id\': \'555843ed-3d24-4774-af25-fbf030d5e8c4\',  \'tags\': [],  \'metadata\': {},  \'name\': \'ChatAnthropic\',  \'data\': {\'chunk\': AIMessageChunk(content=\' Hello\')}}, {\'event\': \'on_chat_model_stream\',  \'run_id\': \'555843ed-3d24-4774-af25-fbf030d5e8c4\',  \'tags\': [],  \'metadata\': {},  \'name\': \'ChatAnthropic\',  \'data\': {\'chunk\': AIMessageChunk(content=\'!\')}}]events[-2:][{\'event\': \'on_chat_model_stream\',  \'run_id\': \'555843ed-3d24-4774-af25-fbf030d5e8c4\',  \'tags\': [],  \'metadata\': {},  \'name\': \'ChatAnthropic\',  \'data\': {\'chunk\': AIMessageChunk(content=\'\')}}, {\'event\': \'on_chat_model_end\',  \'name\': \'ChatAnthropic\',  \'run_id\': \'555843ed-3d24-4774-af25-fbf030d5e8c4\',  \'tags\': [],  \'metadata\': {},  \'data\': {\'output\': AIMessageChunk(content=\' Hello!\')}}]Chain\u200bLet’s revisit the example chain that parsed streaming JSON to explore\nthe streaming events API.chain = (    model | JsonOutputParser())  # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some modelsevents = [    event    async for event in chain.astream_events(        \'output a list of the countries france, spain and japan and their populations in JSON format. 
Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\',        version="v1",    )]If you examine at the first few events, you’ll notice that there are\n3 different start events rather than 2 start events.The three start events correspond to:The chain (model + parser)The modelThe parserevents[:3][{\'event\': \'on_chain_start\',  \'run_id\': \'b1074bff-2a17-458b-9e7b-625211710df4\',  \'name\': \'RunnableSequence\',  \'tags\': [],  \'metadata\': {},  \'data\': {\'input\': \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\'}}, {\'event\': \'on_chat_model_start\',  \'name\': \'ChatAnthropic\',  \'run_id\': \'6072be59-1f43-4f1c-9470-3b92e8406a99\',  \'tags\': [\'seq:step:1\'],  \'metadata\': {},  \'data\': {\'input\': {\'messages\': [[HumanMessage(content=\'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\')]]}}}, {\'event\': \'on_parser_start\',  \'name\': \'JsonOutputParser\',  \'run_id\': \'bf978194-0eda-4494-ad15-3a5bfe69cd59\',  \'tags\': [\'seq:step:2\'],  \'metadata\': {},  \'data\': {}}]What do you think you’d see if you looked at the last 3 events? what\nabout the middle?Let’s use this API to take output the stream events from the model and\nthe parser. We’re ignoring start events, end events and events from the\nchain.num_events = 0async for event in chain.astream_events(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\',    version="v1",):    kind = event["event"]    if kind == "on_chat_model_stream":        print(            f"Chat model chunk: {repr(event[\'data\'][\'chunk\'].content)}",            flush=True,        )    if kind == "on_parser_stream":        print(f"Parser chunk: {event[\'data\'][\'chunk\']}", flush=True)    num_events += 1    if num_events > 30:        # Truncate the output        print("...")        breakChat model chunk: \' Here\'Chat model chunk: \' is\'Chat model chunk: \' the\'Chat model chunk: \' JSON\'Chat model chunk: \' with\'Chat model chunk: \' the\'Chat model chunk: \' requested\'Chat model chunk: \' countries\'Chat model chunk: \' and\'Chat model chunk: \' their\'Chat model chunk: \' populations\'Chat model chunk: \':\'Chat model chunk: \'\\n\\n```\'Chat model chunk: \'json\'Parser chunk: {}Chat model chunk: \'\\n{\'Chat model chunk: \'\\n \'Chat model chunk: \' "\'Chat model chunk: \'countries\'Chat model chunk: \'":\'Parser chunk: {\'countries\': []}Chat model chunk: \' [\'Chat model chunk: \'\\n   \'Parser chunk: {\'countries\': [{}]}Chat model chunk: \' {\'...Because both the model and the parser support streaming, we see sreaming\nevents from both components in real time! Kind of cool isn’t it? 
?Filtering Events\u200bBecause this API produces so many events, it is useful to be able to\nfilter on events.You can filter by either component name, component tags or component\ntype.By Name\u200bchain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config(    {"run_name": "my_parser"})max_events = 0async for event in chain.astream_events(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\',    version="v1",    include_names=["my_parser"],):    print(event)    max_events += 1    if max_events > 10:        # Truncate output        print("...")        break{\'event\': \'on_parser_start\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': []}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'\'}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'France\'}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'France\', \'population\': 67}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'France\', \'population\': 6739}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'France\', \'population\': 673915}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'France\', \'population\': 67391582}]}}}{\'event\': \'on_parser_stream\', \'name\': \'my_parser\', \'run_id\': \'f2ac1d1c-e14a-45fc-8990-e5c24e707299\', \'tags\': [\'seq:step:2\'], \'metadata\': {}, \'data\': {\'chunk\': {\'countries\': [{\'name\': \'France\', \'population\': 67391582}, {}]}}}...By Type\u200bchain = model.with_config({"run_name": "model"}) | JsonOutputParser().with_config(    {"run_name": "my_parser"})max_events = 0async for event in chain.astream_events(    \'output a list of the countries france, spain and japan and their populations in JSON format. 
Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\',    version="v1",    include_types=["chat_model"],):    print(event)    max_events += 1    if max_events > 10:        # Truncate output        print("...")        break{\'event\': \'on_chat_model_start\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'input\': {\'messages\': [[HumanMessage(content=\'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\')]]}}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' Here\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' is\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' the\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' JSON\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' with\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' the\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' requested\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' countries\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' and\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'model\', \'run_id\': \'98a6e192-8159-460c-ba73-6dfc921e3777\', \'tags\': [\'seq:step:1\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' their\')}}...By Tags\u200bcautionTags are inherited by child components of a given runnable.If you’re using tags to filter, make sure that this is what you want.chain = (model | JsonOutputParser()).with_config({"tags": ["my_chain"]})max_events = 0async for event in chain.astream_events(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. 
Each country should have the key `name` and `population`\',    version="v1",    include_tags=["my_chain"],):    print(event)    max_events += 1    if max_events > 10:        # Truncate output        print("...")        break{\'event\': \'on_chain_start\', \'run_id\': \'190875f3-3fb7-49ad-9b6e-f49da22f3e49\', \'name\': \'RunnableSequence\', \'tags\': [\'my_chain\'], \'metadata\': {}, \'data\': {\'input\': \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\'}}{\'event\': \'on_chat_model_start\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'input\': {\'messages\': [[HumanMessage(content=\'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\')]]}}}{\'event\': \'on_parser_start\', \'name\': \'JsonOutputParser\', \'run_id\': \'3b5e4ca1-40fe-4a02-9a19-ba2a43a6115c\', \'tags\': [\'seq:step:2\', \'my_chain\'], \'metadata\': {}, \'data\': {}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' Here\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' is\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' the\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' JSON\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' with\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' the\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' requested\')}}{\'event\': \'on_chat_model_stream\', \'name\': \'ChatAnthropic\', \'run_id\': \'ff58f732-b494-4ff9-852a-783d42f4455d\', \'tags\': [\'seq:step:1\', \'my_chain\'], \'metadata\': {}, \'data\': {\'chunk\': AIMessageChunk(content=\' countries\')}}...Non-streaming components\u200bRemember how some components don’t stream well because they don’t\noperate on input streams?While such components can break streaming of the final output when using\nastream, astream_events will still yield streaming events from\nintermediate steps that support streaming!# Function that does not 
support streaming.# It operates on the finalizes inputs rather than# operating on the input stream.def _extract_country_names(inputs):    """A function that does not operates on input streams and breaks streaming."""    if not isinstance(inputs, dict):        return ""    if "countries" not in inputs:        return ""    countries = inputs["countries"]    if not isinstance(countries, list):        return ""    country_names = [        country.get("name") for country in countries if isinstance(country, dict)    ]    return country_nameschain = (    model | JsonOutputParser() | _extract_country_names)  # This parser only works with OpenAI right nowAs expected, the astream API doesn’t work correctly because\n_extract_country_names doesn’t operate on streams.async for chunk in chain.astream(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\',):    print(chunk, flush=True)[\'France\', \'Spain\', \'Japan\']Now, let’s confirm that with astream_events we’re still seeing streaming\noutput from the model and the parser.num_events = 0async for event in chain.astream_events(    \'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of "countries" which contains a list of countries. Each country should have the key `name` and `population`\',    version="v1",):    kind = event["event"]    if kind == "on_chat_model_stream":        print(            f"Chat model chunk: {repr(event[\'data\'][\'chunk\'].content)}",            flush=True,        )    if kind == "on_parser_stream":        print(f"Parser chunk: {event[\'data\'][\'chunk\']}", flush=True)    num_events += 1    if num_events > 30:        # Truncate the output        print("...")        breakChat model chunk: \' Here\'Chat model chunk: \' is\'Chat model chunk: \' the\'Chat model chunk: \' JSON\'Chat model chunk: \' with\'Chat model chunk: \' the\'Chat model chunk: \' requested\'Chat model chunk: \' countries\'Chat model chunk: \' and\'Chat model chunk: \' their\'Chat model chunk: \' populations\'Chat model chunk: \':\'Chat model chunk: \'\\n\\n```\'Chat model chunk: \'json\'Parser chunk: {}Chat model chunk: \'\\n{\'Chat model chunk: \'\\n \'Chat model chunk: \' "\'Chat model chunk: \'countries\'Chat model chunk: \'":\'Parser chunk: {\'countries\': []}Chat model chunk: \' [\'Chat model chunk: \'\\n   \'Parser chunk: {\'countries\': [{}]}Chat model chunk: \' {\'Chat model chunk: \'\\n     \'Chat model chunk: \' "\'...Propagating Callbacks\u200bcautionIf you’re using invoking runnables inside your tools, you need to\npropagate callbacks to the runnable; otherwise, no stream events will be\ngenerated.noteWhen using RunnableLambdas or @chain decorator, callbacks are propagated\nautomatically behind the scenes.from langchain_core.runnables import RunnableLambdafrom langchain_core.tools import tooldef reverse_word(word: str):    return word[::-1]reverse_word = RunnableLambda(reverse_word)@tooldef bad_tool(word: str):    """Custom tool that doesn\'t propagate callbacks."""    return reverse_word.invoke(word)async for event in bad_tool.astream_events("hello", version="v1"):    print(event){\'event\': \'on_tool_start\', \'run_id\': \'ae7690f8-ebc9-4886-9bbe-cb336ff274f2\', \'name\': \'bad_tool\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'hello\'}}{\'event\': \'on_tool_stream\', \'run_id\': 
\'ae7690f8-ebc9-4886-9bbe-cb336ff274f2\', \'tags\': [], \'metadata\': {}, \'name\': \'bad_tool\', \'data\': {\'chunk\': \'olleh\'}}{\'event\': \'on_tool_end\', \'name\': \'bad_tool\', \'run_id\': \'ae7690f8-ebc9-4886-9bbe-cb336ff274f2\', \'tags\': [], \'metadata\': {}, \'data\': {\'output\': \'olleh\'}}Here’s a re-implementation that does propagate callbacks correctly.\nYou’ll notice that now we’re getting events from the reverse_word\nrunnable as well.@tooldef correct_tool(word: str, callbacks):    """A tool that correctly propagates callbacks."""    return reverse_word.invoke(word, {"callbacks": callbacks})async for event in correct_tool.astream_events("hello", version="v1"):    print(event){\'event\': \'on_tool_start\', \'run_id\': \'384f1710-612e-4022-a6d4-8a7bb0cc757e\', \'name\': \'correct_tool\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'hello\'}}{\'event\': \'on_chain_start\', \'name\': \'reverse_word\', \'run_id\': \'c4882303-8867-4dff-b031-7d9499b39dda\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'hello\'}}{\'event\': \'on_chain_end\', \'name\': \'reverse_word\', \'run_id\': \'c4882303-8867-4dff-b031-7d9499b39dda\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'hello\', \'output\': \'olleh\'}}{\'event\': \'on_tool_stream\', \'run_id\': \'384f1710-612e-4022-a6d4-8a7bb0cc757e\', \'tags\': [], \'metadata\': {}, \'name\': \'correct_tool\', \'data\': {\'chunk\': \'olleh\'}}{\'event\': \'on_tool_end\', \'name\': \'correct_tool\', \'run_id\': \'384f1710-612e-4022-a6d4-8a7bb0cc757e\', \'tags\': [], \'metadata\': {}, \'data\': {\'output\': \'olleh\'}}If you’re invoking runnables from within Runnable Lambdas or @chains,\nthen callbacks will be passed automatically on your behalf.from langchain_core.runnables import RunnableLambdaasync def reverse_and_double(word: str):    return await reverse_word.ainvoke(word) * 2reverse_and_double = RunnableLambda(reverse_and_double)await reverse_and_double.ainvoke("1234")async for event in reverse_and_double.astream_events("1234", version="v1"):    print(event){\'event\': \'on_chain_start\', \'run_id\': \'4fe56c7b-6982-4999-a42d-79ba56151176\', \'name\': \'reverse_and_double\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'1234\'}}{\'event\': \'on_chain_start\', \'name\': \'reverse_word\', \'run_id\': \'335fe781-8944-4464-8d2e-81f61d1f85f5\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'1234\'}}{\'event\': \'on_chain_end\', \'name\': \'reverse_word\', \'run_id\': \'335fe781-8944-4464-8d2e-81f61d1f85f5\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'1234\', \'output\': \'4321\'}}{\'event\': \'on_chain_stream\', \'run_id\': \'4fe56c7b-6982-4999-a42d-79ba56151176\', \'tags\': [], \'metadata\': {}, \'name\': \'reverse_and_double\', \'data\': {\'chunk\': \'43214321\'}}{\'event\': \'on_chain_end\', \'name\': \'reverse_and_double\', \'run_id\': \'4fe56c7b-6982-4999-a42d-79ba56151176\', \'tags\': [], \'metadata\': {}, \'data\': {\'output\': \'43214321\'}}And with the @chain decorator:from langchain_core.runnables import chain@chainasync def reverse_and_double(word: str):    return await reverse_word.ainvoke(word) * 2await reverse_and_double.ainvoke("1234")async for event in reverse_and_double.astream_events("1234", version="v1"):    print(event){\'event\': \'on_chain_start\', \'run_id\': \'7485eedb-1854-429c-a2f8-03d01452daef\', \'name\': \'reverse_and_double\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'1234\'}}{\'event\': \'on_chain_start\', \'name\': \'reverse_word\', 
\'run_id\': \'e7cddab2-9b95-4e80-abaf-4b2429117835\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'1234\'}}{\'event\': \'on_chain_end\', \'name\': \'reverse_word\', \'run_id\': \'e7cddab2-9b95-4e80-abaf-4b2429117835\', \'tags\': [], \'metadata\': {}, \'data\': {\'input\': \'1234\', \'output\': \'4321\'}}{\'event\': \'on_chain_stream\', \'run_id\': \'7485eedb-1854-429c-a2f8-03d01452daef\', \'tags\': [], \'metadata\': {}, \'name\': \'reverse_and_double\', \'data\': {\'chunk\': \'43214321\'}}{\'event\': \'on_chain_end\', \'name\': \'reverse_and_double\', \'run_id\': \'7485eedb-1854-429c-a2f8-03d01452daef\', \'tags\': [], \'metadata\': {}, \'data\': {\'output\': \'43214321\'}}Help us out by providing feedback on this documentation page:PreviousInterfaceNextHow toUsing StreamLLMs and Chat ModelsChainsWorking with Input StreamsNon-streaming componentsUsing Stream EventsEvent ReferenceChat ModelChainFiltering EventsNon-streaming componentsPropagating CallbacksCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nMultiple chains | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookMultiple chainsOn this pageMultiple chainsRunnables can easily be used to string together multiple Chains%pip install --upgrade --quiet  langchain langchain-openaifrom operator import itemgetterfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_openai import ChatOpenAIprompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")prompt2 = ChatPromptTemplate.from_template(    "what country is the city {city} in? respond in {language}")model = ChatOpenAI()chain1 = prompt1 | model | StrOutputParser()chain2 = (    {"city": chain1, "language": itemgetter("language")}    | prompt2    | model    | StrOutputParser())chain2.invoke({"person": "obama", "language": "spanish"})\'El país donde se encuentra la ciudad de Honolulu, donde nació Barack Obama, el 44º Presidente de los Estados Unidos, es Estados Unidos. Honolulu se encuentra en la isla de Oahu, en el estado de Hawái.\'from langchain_core.runnables import RunnablePassthroughprompt1 = ChatPromptTemplate.from_template(    "generate a {attribute} color. Return the name of the color and nothing else:")prompt2 = ChatPromptTemplate.from_template(    "what is a fruit of color: {color}. Return the name of the fruit and nothing else:")prompt3 = ChatPromptTemplate.from_template(    "what is a country with a flag that has the color: {color}. Return the name of the country and nothing else:")prompt4 = ChatPromptTemplate.from_template(    "What is the color of {fruit} and the flag of {country}?")model_parser = model | StrOutputParser()color_generator = (    {"attribute": RunnablePassthrough()} | prompt1 | {"color": model_parser})color_to_fruit = prompt2 | model_parsercolor_to_country = prompt3 | model_parserquestion_generator = (    color_generator | {"fruit": color_to_fruit, "country": color_to_country} | prompt4)question_generator.invoke("warm")ChatPromptValue(messages=[HumanMessage(content=\'What is the color of strawberry and the flag of China?\', additional_kwargs={}, example=False)])prompt = question_generator.invoke("warm")model.invoke(prompt)AIMessage(content=\'The color of an apple is typically red or green. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.\', additional_kwargs={}, example=False)Branching and Merging\u200bYou may want the output of one component to be processed by 2 or more\nother components.\nRunnableParallels\nlet you split or fork the chain so multiple components can process the\ninput in parallel. Later, other components can join or merge the results\nto synthesize a final response. 
This type of chain creates a computation\ngraph that looks like the following:     Input      / \\     /   \\ Branch1 Branch2     \\   /      \\ /      Combineplanner = (    ChatPromptTemplate.from_template("Generate an argument about: {input}")    | ChatOpenAI()    | StrOutputParser()    | {"base_response": RunnablePassthrough()})arguments_for = (    ChatPromptTemplate.from_template(        "List the pros or positive aspects of {base_response}"    )    | ChatOpenAI()    | StrOutputParser())arguments_against = (    ChatPromptTemplate.from_template(        "List the cons or negative aspects of {base_response}"    )    | ChatOpenAI()    | StrOutputParser())final_responder = (    ChatPromptTemplate.from_messages(        [            ("ai", "{original_response}"),            ("human", "Pros:\\n{results_1}\\n\\nCons:\\n{results_2}"),            ("system", "Generate a final response given the critique"),        ]    )    | ChatOpenAI()    | StrOutputParser())chain = (    planner    | {        "results_1": arguments_for,        "results_2": arguments_against,        "original_response": itemgetter("base_response"),    }    | final_responder)chain.invoke({"input": "scrum"})\'While Scrum has its potential cons and challenges, many organizations have successfully embraced and implemented this project management framework to great effect. The cons mentioned above can be mitigated or overcome with proper training, support, and a commitment to continuous improvement. It is also important to note that not all cons may be applicable to every organization or project.\\n\\nFor example, while Scrum may be complex initially, with proper training and guidance, teams can quickly grasp the concepts and practices. The lack of predictability can be mitigated by implementing techniques such as velocity tracking and release planning. The limited documentation can be addressed by maintaining a balance between lightweight documentation and clear communication among team members. The dependency on team collaboration can be improved through effective communication channels and regular team-building activities.\\n\\nScrum can be scaled and adapted to larger projects by using frameworks like Scrum of Scrums or LeSS (Large Scale Scrum). Concerns about speed versus quality can be addressed by incorporating quality assurance practices, such as continuous integration and automated testing, into the Scrum process. Scope creep can be managed by having a well-defined and prioritized product backlog, and a strong product owner can be developed through training and mentorship.\\n\\nResistance to change can be overcome by providing proper education and communication to stakeholders and involving them in the decision-making process. Ultimately, the cons of Scrum can be seen as opportunities for growth and improvement, and with the right mindset and support, they can be effectively managed.\\n\\nIn conclusion, while Scrum may have its challenges and potential cons, the benefits and advantages it offers in terms of collaboration, flexibility, adaptability, transparency, and customer satisfaction make it a widely adopted and successful project management framework. With proper implementation and continuous improvement, organizations can leverage Scrum to drive innovation, efficiency, and project success.\'Help us out by providing feedback on this documentation page:PreviousRAGNextQuerying a SQL DBBranching and MergingCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nRAG | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookRAGOn this pageRAGLet’s look at adding in a retrieval step to a prompt and LLM, which adds\nup to a “retrieval-augmented generation” chain%pip install --upgrade --quiet  langchain langchain-openai faiss-cpu tiktokenfrom operator import itemgetterfrom langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnableLambda, RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts(    ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()chain = (    {"context": retriever, "question": RunnablePassthrough()}    | prompt    | model    | StrOutputParser())chain.invoke("where did harrison work?")\'Harrison worked at Kensho.\'template = """Answer the question based only on the following context:{context}Question: {question}Answer in the following language: {language}"""prompt = ChatPromptTemplate.from_template(template)chain = (    {        "context": itemgetter("question") | retriever,        "question": itemgetter("question"),        "language": itemgetter("language"),    }    | prompt    | model    | StrOutputParser())chain.invoke({"question": "where did harrison work", "language": "italian"})\'Harrison ha lavorato a Kensho.\'Conversational Retrieval Chain\u200bWe can easily add in conversation history. 
This primarily means adding\nin chat_message_historyfrom langchain_core.messages import AIMessage, HumanMessage, get_buffer_stringfrom langchain_core.prompts import format_documentfrom langchain_core.runnables import RunnableParallelfrom langchain.prompts.prompt import PromptTemplate_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.Chat History:{chat_history}Follow Up Input: {question}Standalone question:"""CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)template = """Answer the question based only on the following context:{context}Question: {question}"""ANSWER_PROMPT = ChatPromptTemplate.from_template(template)DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")def _combine_documents(    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\\n\\n"):    doc_strings = [format_document(doc, document_prompt) for doc in docs]    return document_separator.join(doc_strings)_inputs = RunnableParallel(    standalone_question=RunnablePassthrough.assign(        chat_history=lambda x: get_buffer_string(x["chat_history"])    )    | CONDENSE_QUESTION_PROMPT    | ChatOpenAI(temperature=0)    | StrOutputParser(),)_context = {    "context": itemgetter("standalone_question") | retriever | _combine_documents,    "question": lambda x: x["standalone_question"],}conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()conversational_qa_chain.invoke(    {        "question": "where did harrison work?",        "chat_history": [],    })AIMessage(content=\'Harrison was employed at Kensho.\')conversational_qa_chain.invoke(    {        "question": "where did he work?",        "chat_history": [            HumanMessage(content="Who wrote this notebook?"),            AIMessage(content="Harrison"),        ],    })AIMessage(content=\'Harrison worked at Kensho.\')With Memory and returning source documents\u200bThis shows how to use memory with the above. For memory, we need to\nmanage that outside at the memory. 
For returning the retrieved\ndocuments, we just need to pass them through all the way.from operator import itemgetterfrom langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory(    return_messages=True, output_key="answer", input_key="question")# First we add a step to load memory# This adds a "memory" key to the input objectloaded_memory = RunnablePassthrough.assign(    chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("history"),)# Now we calculate the standalone questionstandalone_question = {    "standalone_question": {        "question": lambda x: x["question"],        "chat_history": lambda x: get_buffer_string(x["chat_history"]),    }    | CONDENSE_QUESTION_PROMPT    | ChatOpenAI(temperature=0)    | StrOutputParser(),}# Now we retrieve the documentsretrieved_documents = {    "docs": itemgetter("standalone_question") | retriever,    "question": lambda x: x["standalone_question"],}# Now we construct the inputs for the final promptfinal_inputs = {    "context": lambda x: _combine_documents(x["docs"]),    "question": itemgetter("question"),}# And finally, we do the part that returns the answersanswer = {    "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),    "docs": itemgetter("docs"),}# And now we put it all together!final_chain = loaded_memory | standalone_question | retrieved_documents | answerinputs = {"question": "where did harrison work?"}result = final_chain.invoke(inputs)result{\'answer\': AIMessage(content=\'Harrison was employed at Kensho.\'), \'docs\': [Document(page_content=\'harrison worked at kensho\')]}# Note that the memory does not save automatically# This will be improved in the future# For now you need to save it yourselfmemory.save_context(inputs, {"answer": result["answer"].content})memory.load_memory_variables({}){\'history\': [HumanMessage(content=\'where did harrison work?\'),  AIMessage(content=\'Harrison was employed at Kensho.\')]}inputs = {"question": "but where did he really work?"}result = final_chain.invoke(inputs)result{\'answer\': AIMessage(content=\'Harrison actually worked at Kensho.\'), \'docs\': [Document(page_content=\'harrison worked at kensho\')]}Help us out by providing feedback on this documentation page:PreviousPrompt + LLMNextMultiple chainsConversational Retrieval ChainWith Memory and returning source documentsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nRunnableParallel: Manipulating data | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toRunnableParallel: Manipulating dataOn this pageManipulating inputs & outputRunnableParallel can be useful for manipulating the output of one\nRunnable to match the input format of the next Runnable in a sequence.Here the input to prompt is expected to be a map with keys “context” and\n“question”. The user input is just the question. So we need to get the\ncontext using our retriever and passthrough the user input under the\n“question” key.%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts(    ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}"""prompt = ChatPromptTemplate.from_template(template)model = ChatOpenAI()retrieval_chain = (    {"context": retriever, "question": RunnablePassthrough()}    | prompt    | model    | StrOutputParser())retrieval_chain.invoke("where did harrison work?")\'Harrison worked at Kensho.\'tipNote that when composing a RunnableParallel with another Runnable we\ndon’t even need to wrap our dictionary in the RunnableParallel class\n—\xa0the type conversion is handled for us. In the context of a chain,\nthese are equivalent:{"context": retriever, "question": RunnablePassthrough()}RunnableParallel({"context": retriever, "question": RunnablePassthrough()})RunnableParallel(context=retriever, question=RunnablePassthrough())Using itemgetter as shorthand\u200bNote that you can use Python’s itemgetter as shorthand to extract data\nfrom the map when combining with RunnableParallel. 
You can find more\ninformation about itemgetter in the Python\nDocumentation.In the example below, we use itemgetter to extract specific keys from\nthe map:from operator import itemgetterfrom langchain_community.vectorstores import FAISSfrom langchain_core.output_parsers import StrOutputParserfrom langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnablePassthroughfrom langchain_openai import ChatOpenAI, OpenAIEmbeddingsvectorstore = FAISS.from_texts(    ["harrison worked at kensho"], embedding=OpenAIEmbeddings())retriever = vectorstore.as_retriever()template = """Answer the question based only on the following context:{context}Question: {question}Answer in the following language: {language}"""prompt = ChatPromptTemplate.from_template(template)chain = (    {        "context": itemgetter("question") | retriever,        "question": itemgetter("question"),        "language": itemgetter("language"),    }    | prompt    | model    | StrOutputParser())chain.invoke({"question": "where did harrison work", "language": "italian"})\'Harrison ha lavorato a Kensho.\'Parallelize steps\u200bRunnableParallel (aka. RunnableMap) makes it easy to execute multiple\nRunnables in parallel, and to return the output of these Runnables as a\nmap.from langchain_core.prompts import ChatPromptTemplatefrom langchain_core.runnables import RunnableParallelfrom langchain_openai import ChatOpenAImodel = ChatOpenAI()joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | modelpoem_chain = (    ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model)map_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)map_chain.invoke({"topic": "bear"}){\'joke\': AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!"), \'poem\': AIMessage(content="In the wild\'s embrace, bear roams free,\\nStrength and grace, a majestic decree.")}Parallelism\u200bRunnableParallel are also useful for running independent processes in\nparallel, since each Runnable in the map is executed in parallel. For\nexample, we can see our earlier joke_chain, poem_chain and\nmap_chain all have about the same runtime, even though map_chain\nexecutes both of the other two.%%timeitjoke_chain.invoke({"topic": "bear"})958 ms ± 402 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)%%timeitpoem_chain.invoke({"topic": "bear"})1.22 s ± 508 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)%%timeitmap_chain.invoke({"topic": "bear"})1.15 s ± 119 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)Help us out by providing feedback on this documentation page:PreviousHow toNextRunnablePassthrough: Passing data throughUsing itemgetter as shorthandParallelize stepsParallelismCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nAdd fallbacks | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toAdd fallbacksOn this pageAdd fallbacksThere are many possible points of failure in an LLM application, whether\nthat be issues with LLM API’s, poor model outputs, issues with other\nintegrations, etc. Fallbacks help you gracefully handle and isolate\nthese issues.Crucially, fallbacks can be applied not only on the LLM level but on the\nwhole runnable level.Handling LLM API Errors\u200bThis is maybe the most common use case for fallbacks. A request to an\nLLM API can fail for a variety of reasons - the API could be down, you\ncould have hit rate limits, any number of things. Therefore, using\nfallbacks can help protect against these types of things.IMPORTANT: By default, a lot of the LLM wrappers catch errors and retry.\nYou will most likely want to turn those off when working with fallbacks.\nOtherwise the first wrapper will keep on retrying and not failing.%pip install --upgrade --quiet  langchain langchain-openaifrom langchain_community.chat_models import ChatAnthropicfrom langchain_openai import ChatOpenAIFirst, let’s mock out what happens if we hit a RateLimitError from\nOpenAIfrom unittest.mock import patchimport httpxfrom openai import RateLimitErrorrequest = httpx.Request("GET", "/")response = httpx.Response(200, request=request)error = RateLimitError("rate limit", response=response, body="")# Note that we set max_retries = 0 to avoid retrying on RateLimits, etcopenai_llm = ChatOpenAI(max_retries=0)anthropic_llm = ChatAnthropic()llm = openai_llm.with_fallbacks([anthropic_llm])# Let\'s use just the OpenAI LLm first, to show that we run into an errorwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(openai_llm.invoke("Why did the chicken cross the road?"))    except RateLimitError:        print("Hit error")Hit error# Now let\'s try with fallbacks to Anthropicwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(llm.invoke("Why did the chicken cross the road?"))    except RateLimitError:        print("Hit error")content=\' I don\\\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\\n\\n- To get to the other side!\\n\\n- It was too chicken to just stand there. \\n\\n- It wanted a change of scenery.\\n\\n- It wanted to show the possum it could be done.\\n\\n- It was on its way to a poultry farmers\\\' convention.\\n\\nThe joke plays on the double meaning of "the other side" - literally crossing the road to the other side, or the "other side" meaning the afterlife. 
So it\\\'s an anti-joke, with a silly or unexpected pun as the answer.\' additional_kwargs={} example=FalseWe can use our “LLM with Fallbacks” as we would a normal LLM.from langchain_core.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_messages(    [        (            "system",            "You\'re a nice assistant who always includes a compliment in your response",        ),        ("human", "Why did the {animal} cross the road"),    ])chain = prompt | llmwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(chain.invoke({"animal": "kangaroo"}))    except RateLimitError:        print("Hit error")content=" I don\'t actually know why the kangaroo crossed the road, but I\'m happy to take a guess! Maybe the kangaroo was trying to get to the other side to find some tasty grass to eat. Or maybe it was trying to get away from a predator or other danger. Kangaroos do need to cross roads and other open areas sometimes as part of their normal activities. Whatever the reason, I\'m sure the kangaroo looked both ways before hopping across!" additional_kwargs={} example=FalseSpecifying errors to handle\u200bWe can also specify the errors to handle if we want to be more specific\nabout when the fallback is invoked:llm = openai_llm.with_fallbacks(    [anthropic_llm], exceptions_to_handle=(KeyboardInterrupt,))chain = prompt | llmwith patch("openai.resources.chat.completions.Completions.create", side_effect=error):    try:        print(chain.invoke({"animal": "kangaroo"}))    except RateLimitError:        print("Hit error")Hit errorFallbacks for Sequences\u200bWe can also create fallbacks for sequences, that are sequences\nthemselves. Here we do that with two different models: ChatOpenAI and\nthen normal OpenAI (which does not use a chat model). Because OpenAI is\nNOT a chat model, you likely want a different prompt.# First let\'s create a chain with a ChatModel# We add in a string output parser here so the outputs between the two are the same typefrom langchain_core.output_parsers import StrOutputParserchat_prompt = ChatPromptTemplate.from_messages(    [        (            "system",            "You\'re a nice assistant who always includes a compliment in your response",        ),        ("human", "Why did the {animal} cross the road"),    ])# Here we\'re going to use a bad model name to easily create a chain that will errorchat_model = ChatOpenAI(model_name="gpt-fake")bad_chain = chat_prompt | chat_model | StrOutputParser()# Now lets create a chain with the normal OpenAI modelfrom langchain.prompts import PromptTemplatefrom langchain_openai import OpenAIprompt_template = """Instructions: You should always include a compliment in your response.Question: Why did the {animal} cross the road?"""prompt = PromptTemplate.from_template(prompt_template)llm = OpenAI()good_chain = prompt | llm# We can now create a final chain which combines the twochain = bad_chain.with_fallbacks([good_chain])chain.invoke({"animal": "turtle"})\'\\n\\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.\'Help us out by providing feedback on this documentation page:PreviousCreate a runnable with the `@chain` decoratorNextStream custom generator functionsHandling LLM API ErrorsSpecifying errors to handleFallbacks for SequencesCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nConfigure chain internals at runtime | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toConfigure chain internals at runtimeOn this pageConfigure chain internals at runtimeOftentimes you may want to experiment with, or even expose to the end\nuser, multiple different ways of doing things. In order to make this\nexperience as easy as possible, we have defined two methods.First, a configurable_fields method. This lets you configure\nparticular fields of a runnable.Second, a configurable_alternatives method. With this method, you can\nlist out alternatives for any particular runnable that can be set during\nruntime.Configuration Fields\u200bWith LLMs\u200bWith LLMs we can configure things like temperature%pip install --upgrade --quiet  langchain langchain-openaifrom langchain.prompts import PromptTemplatefrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAImodel = ChatOpenAI(temperature=0).configurable_fields(    temperature=ConfigurableField(        id="llm_temperature",        name="LLM Temperature",        description="The temperature of the LLM",    ))model.invoke("pick a random number")AIMessage(content=\'7\')model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")AIMessage(content=\'34\')We can also do this when its used as part of a chainprompt = PromptTemplate.from_template("Pick a random number above {x}")chain = prompt | modelchain.invoke({"x": 0})AIMessage(content=\'57\')chain.with_config(configurable={"llm_temperature": 0.9}).invoke({"x": 0})AIMessage(content=\'6\')With HubRunnables\u200bThis is useful to allow for switching of promptsfrom langchain.runnables.hub import HubRunnableprompt = HubRunnable("rlm/rag-prompt").configurable_fields(    owner_repo_commit=ConfigurableField(        id="hub_commit",        name="Hub Commit",        description="The Hub commit to pull from",    ))prompt.invoke({"question": "foo", "context": "bar"})ChatPromptValue(messages=[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don\'t know the answer, just say that you don\'t know. Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:")])prompt.with_config(configurable={"hub_commit": "rlm/rag-prompt-llama"}).invoke(    {"question": "foo", "context": "bar"})ChatPromptValue(messages=[HumanMessage(content="[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don\'t know the answer, just say that you don\'t know. 
Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]")])Configurable Alternatives\u200bWith LLMs\u200bLet’s take a look at doing this with LLMsfrom langchain.prompts import PromptTemplatefrom langchain_community.chat_models import ChatAnthropicfrom langchain_core.runnables import ConfigurableFieldfrom langchain_openai import ChatOpenAIllm = ChatAnthropic(temperature=0).configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="llm"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="anthropic",    # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`    openai=ChatOpenAI(),    # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")`    gpt4=ChatOpenAI(model="gpt-4"),    # You can add more configuration options here)prompt = PromptTemplate.from_template("Tell me a joke about {topic}")chain = prompt | llm# By default it will call Anthropicchain.invoke({"topic": "bears"})AIMessage(content=" Here\'s a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!")# We can use `.with_config(configurable={"llm": "openai"})` to specify an llm to usechain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})AIMessage(content="Sure, here\'s a bear joke for you:\\n\\nWhy don\'t bears wear shoes?\\n\\nBecause they already have bear feet!")# If we use the `default_key` then it uses the defaultchain.with_config(configurable={"llm": "anthropic"}).invoke({"topic": "bears"})AIMessage(content=" Here\'s a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!")With Prompts\u200bWe can do a similar thing, but alternate between promptsllm = ChatAnthropic(temperature=0)prompt = PromptTemplate.from_template(    "Tell me a joke about {topic}").configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="prompt"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="joke",    # This adds a new option, with name `poem`    poem=PromptTemplate.from_template("Write a short poem about {topic}"),    # You can add more configuration options here)chain = prompt | llm# By default it will write a jokechain.invoke({"topic": "bears"})AIMessage(content=" Here\'s a silly joke about bears:\\n\\nWhat do you call a bear with no teeth?\\nA gummy bear!")# We can configure it write a poemchain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"})AIMessage(content=\' Here is a short poem about bears:\\n\\nThe bears awaken from their sleep\\nAnd lumber out into the deep\\nForests filled with trees so tall\\nForaging for food before nightfall \\nTheir furry coats and claws so sharp\\nSniffing for berries and fish to nab\\nLumbering about without a care\\nThe mighty grizzly and black bear\\nProud creatures, wild and free\\nRuling their domain majestically\\nWandering the woods they call their own\\nBefore returning to their dens alone\')With Prompts and LLMs\u200bWe can also have multiple things configurable! 
Here’s an example doing\nthat with both prompts and LLMs.llm = ChatAnthropic(temperature=0).configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="llm"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="anthropic",    # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`    openai=ChatOpenAI(),    # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model="gpt-4")`    gpt4=ChatOpenAI(model="gpt-4"),    # You can add more configuration options here)prompt = PromptTemplate.from_template(    "Tell me a joke about {topic}").configurable_alternatives(    # This gives this field an id    # When configuring the end runnable, we can then use this id to configure this field    ConfigurableField(id="prompt"),    # This sets a default_key.    # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used    default_key="joke",    # This adds a new option, with name `poem`    poem=PromptTemplate.from_template("Write a short poem about {topic}"),    # You can add more configuration options here)chain = prompt | llm# We can configure it write a poem with OpenAIchain.with_config(configurable={"prompt": "poem", "llm": "openai"}).invoke(    {"topic": "bears"})AIMessage(content="In the forest, where tall trees sway,\\nA creature roams, both fierce and gray.\\nWith mighty paws and piercing eyes,\\nThe bear, a symbol of strength, defies.\\n\\nThrough snow-kissed mountains, it does roam,\\nA guardian of its woodland home.\\nWith fur so thick, a shield of might,\\nIt braves the coldest winter night.\\n\\nA gentle giant, yet wild and free,\\nThe bear commands respect, you see.\\nWith every step, it leaves a trace,\\nOf untamed power and ancient grace.\\n\\nFrom honeyed feast to salmon\'s leap,\\nIt takes its place, in nature\'s keep.\\nA symbol of untamed delight,\\nThe bear, a wonder, day and night.\\n\\nSo let us honor this noble beast,\\nIn forests where its soul finds peace.\\nFor in its presence, we come to know,\\nThe untamed spirit that in us also flows.")# We can always just configure only one if we wantchain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})AIMessage(content="Sure, here\'s a bear joke for you:\\n\\nWhy don\'t bears wear shoes?\\n\\nBecause they have bear feet!")Saving configurations\u200bWe can also easily save configured chains as their own objectsopenai_joke = chain.with_config(configurable={"llm": "openai"})openai_joke.invoke({"topic": "bears"})AIMessage(content="Why don\'t bears wear shoes?\\n\\nBecause they have bear feet!")Help us out by providing feedback on this documentation page:PreviousBind runtime argsNextCreate a runnable with the `@chain` decoratorConfiguration FieldsWith LLMsWith HubRunnablesConfigurable AlternativesWith LLMsWith PromptsWith Prompts and LLMsSaving configurationsCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nStream custom generator functions | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toRunnableParallel: Manipulating dataRunnablePassthrough: Passing data throughRunnableLambda: Run Custom FunctionsRunnableBranch: Dynamically route logic based on inputBind runtime argsConfigure chain internals at runtimeCreate a runnable with the `@chain` decoratorAdd fallbacksStream custom generator functionsInspect your runnablesAdd message history (memory)CookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageHow toStream custom generator functionsOn this pageStream custom generator functionsYou can use generator functions (ie. functions that use the yield\nkeyword, and behave like iterators) in a LCEL pipeline.The signature of these generators should be\nIterator[Input] -> Iterator[Output]. Or for async generators:\nAsyncIterator[Input] -> AsyncIterator[Output].These are useful for: - implementing a custom output parser - modifying\nthe output of a previous step, while preserving streaming capabilitiesLet’s implement a custom output parser for comma-separated lists.Sync version\u200b%pip install --upgrade --quiet  langchain langchain-openaifrom typing import Iterator, Listfrom langchain.prompts.chat import ChatPromptTemplatefrom langchain_core.output_parsers import StrOutputParserfrom langchain_openai import ChatOpenAIprompt = ChatPromptTemplate.from_template(    "Write a comma-separated list of 5 animals similar to: {animal}")model = ChatOpenAI(temperature=0.0)str_chain = prompt | model | StrOutputParser()for chunk in str_chain.stream({"animal": "bear"}):    print(chunk, end="", flush=True)lion, tiger, wolf, gorilla, pandastr_chain.invoke({"animal": "bear"})\'lion, tiger, wolf, gorilla, panda\'# This is a custom parser that splits an iterator of llm tokens# into a list of strings separated by commasdef split_into_list(input: Iterator[str]) -> Iterator[List[str]]:    # hold partial input until we get a comma    buffer = ""    for chunk in input:        # add current chunk to buffer        buffer += chunk        # while there are commas in the buffer        while "," in buffer:            # split buffer on comma            comma_index = buffer.index(",")            # yield everything before the comma            yield [buffer[:comma_index].strip()]            # save the rest for the next iteration            buffer = buffer[comma_index + 1 :]    # yield the last chunk    yield [buffer.strip()]list_chain = str_chain | split_into_listfor chunk in list_chain.stream({"animal": "bear"}):    print(chunk, flush=True)[\'lion\'][\'tiger\'][\'wolf\'][\'gorilla\'][\'panda\']list_chain.invoke({"animal": "bear"})[\'lion\', \'tiger\', \'wolf\', \'gorilla\', \'panda\']Async version\u200bfrom typing import AsyncIteratorasync def asplit_into_list(    input: AsyncIterator[str],) -> AsyncIterator[List[str]]:  # async def    buffer = ""    async for (        chunk    ) in input:  # `input` is a `async_generator` object, so use `async for`        buffer += chunk        while "," in buffer:            comma_index = buffer.index(",")            yield [buffer[:comma_index].strip()]            buffer 
= buffer[comma_index + 1 :]    yield [buffer.strip()]list_chain = str_chain | asplit_into_listasync for chunk in list_chain.astream({"animal": "bear"}):    print(chunk, flush=True)[\'lion\'][\'tiger\'][\'wolf\'][\'gorilla\'][\'panda\']await list_chain.ainvoke({"animal": "bear"})[\'lion\', \'tiger\', \'wolf\', \'gorilla\', \'panda\']Help us out by providing feedback on this documentation page:PreviousAdd fallbacksNextInspect your runnablesSync versionAsync versionCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 "\n\n\n\n\nCookbook | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsCode writingRouting by semantic similarityAdding memoryAdding moderationManaging prompt sizeUsing toolsLangChain Expression Language (LCEL)ModulesModel I/ORetrievalAgentsChainsMoreLangServeLangSmithLangGraphLangChain Expression LanguageCookbookCookbookExample code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different Runnable (the core LCEL interface) components to achieve various tasks. If you're just getting acquainted with LCEL, the Prompt + LLM page is a good place to start.?️ Prompt + LLMThe most common and valuable composition is taking:?️ RAGLet’s look at adding in a retrieval step to a prompt and LLM, which adds?️ Multiple chainsRunnables can easily be used to string together multiple Chains?️ Querying a SQL DBWe can replicate our SQLDatabaseChain with Runnables.?️ AgentsYou can pass a Runnable into an agent. Make sure you have langchainhub?️ Code writingExample of how to use LCEL to write Python code.?️ Routing by semantic similarityWith LCEL you can easily add [custom routing?️ Adding memoryThis shows how to add memory to an arbitrary chain. Right now, you can?️ Adding moderationThis shows how to add in moderation (or other safeguards) around your?️ Managing prompt sizeAgents dynamically call tools. The results of those tool calls are added?️ Using toolsYou can use any Tools with Runnables easily.Help us out by providing feedback on this documentation page:PreviousAdd message history (memory)NextPrompt + LLMCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n",
 '\n\n\n\n\nQuickstart | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookLangChain Expression Language (LCEL)ModulesModel I/OModel I/OQuickstartConceptsPromptsChat ModelsLLMsOutput ParsersQuickstartCustom Output ParsersTypesRetrievalAgentsChainsMoreLangServeLangSmithLangGraphModulesModel I/OOutput ParsersQuickstartOn this pageQuickstartLanguage models output text. But many times you may want to get more\nstructured information than just text back. This is where output parsers\ncome in.Output parsers are classes that help structure language model responses.\nThere are two main methods an output parser must implement:“Get format instructions”: A method which returns a string\ncontaining instructions for how the output of a language model\nshould be formatted.“Parse”: A method which takes in a string (assumed to be the\nresponse from a language model) and parses it into some structure.And then one optional one:“Parse with prompt”: A method which takes in a string (assumed to be\nthe response from a language model) and a prompt (assumed to be the\nprompt that generated such a response) and parses it into some\nstructure. The prompt is largely provided in the event the\nOutputParser wants to retry or fix the output in some way, and needs\ninformation from the prompt to do so.Get started\u200bBelow we go over the main type of output parser, the\nPydanticOutputParser.from langchain.output_parsers import PydanticOutputParserfrom langchain.prompts import PromptTemplatefrom langchain_core.pydantic_v1 import BaseModel, Field, validatorfrom langchain_openai import OpenAImodel = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0.0)# Define your desired data structure.class Joke(BaseModel):    setup: str = Field(description="question to set up a joke")    punchline: str = Field(description="answer to resolve the joke")    # You can add custom validation logic easily with Pydantic.    @validator("setup")    def question_ends_with_question_mark(cls, field):        if field[-1] != "?":            raise ValueError("Badly formed question!")        return field# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)prompt = PromptTemplate(    template="Answer the user query.\\n{format_instructions}\\n{query}\\n",    input_variables=["query"],    partial_variables={"format_instructions": parser.get_format_instructions()},)# And a query intended to prompt a language model to populate the data structure.prompt_and_model = prompt | modeloutput = prompt_and_model.invoke({"query": "Tell me a joke."})parser.invoke(output)Joke(setup=\'Why did the chicken cross the road?\', punchline=\'To get to the other side!\')LCEL\u200bOutput parsers implement the Runnable\ninterface, the basic building\nblock of the LangChain Expression Language\n(LCEL). 
This means they support invoke,\nainvoke, stream, astream, batch, abatch, astream_log calls.Output parsers accept a string or BaseMessage as input and can return\nan arbitrary type.parser.invoke(output)Joke(setup=\'Why did the chicken cross the road?\', punchline=\'To get to the other side!\')Instead of manually invoking the parser, we also could’ve just added it\nto our Runnable sequence:chain = prompt | model | parserchain.invoke({"query": "Tell me a joke."})Joke(setup=\'Why did the chicken cross the road?\', punchline=\'To get to the other side!\')While all parsers support the streaming interface, only certain parsers\ncan stream through partially parsed objects, since this is highly\ndependent on the output type. Parsers which cannot construct partial\nobjects will simply yield the fully parsed output.The SimpleJsonOutputParser for example can stream through partial\noutputs:from langchain.output_parsers.json import SimpleJsonOutputParserjson_prompt = PromptTemplate.from_template(    "Return a JSON object with an `answer` key that answers the following question: {question}")json_parser = SimpleJsonOutputParser()json_chain = json_prompt | model | json_parserlist(json_chain.stream({"question": "Who invented the microscope?"}))[{}, {\'answer\': \'\'}, {\'answer\': \'Ant\'}, {\'answer\': \'Anton\'}, {\'answer\': \'Antonie\'}, {\'answer\': \'Antonie van\'}, {\'answer\': \'Antonie van Lee\'}, {\'answer\': \'Antonie van Leeu\'}, {\'answer\': \'Antonie van Leeuwen\'}, {\'answer\': \'Antonie van Leeuwenho\'}, {\'answer\': \'Antonie van Leeuwenhoek\'}]While the PydanticOutputParser cannot:list(chain.stream({"query": "Tell me a joke."}))[Joke(setup=\'Why did the chicken cross the road?\', punchline=\'To get to the other side!\')]Help us out by providing feedback on this documentation page:PreviousOutput ParsersNextCustom Output ParsersGet startedLCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 '\n\n\n\n\nSelf-querying | ?️? Langchain\n\n\n\n\n\n\n\nSkip to main contentDocsUse casesIntegrationsGuidesAPIMorePeopleVersioningChangelogContributingTemplatesCookbooksTutorialsYouTube?️?LangSmithLangSmith DocsLangServe GitHubTemplates GitHubTemplates HubLangChain HubJS/TS Docs?SearchGet startedIntroductionInstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookLangChain Expression Language (LCEL)ModulesModel I/ORetrievalDocument loadersText SplittersRetrievalText embedding modelsVector storesRetrieversVector store-backed retrieverMultiQueryRetrieverContextual compressionEnsemble RetrieverLong-Context ReorderMultiVector RetrieverParent Document RetrieverSelf-queryingTime-weighted vector store retrieverIndexingAgentsChainsMoreLangServeLangSmithLangGraphModulesRetrievalRetrieversSelf-queryingOn this pageSelf-queryingHead to Integrations for\ndocumentation on vector stores with built-in support for self-querying.A self-querying retriever is one that, as the name suggests, has the\nability to query itself. Specifically, given any natural language query,\nthe retriever uses a query-constructing LLM chain to write a structured\nquery and then applies that structured query to its underlying\nVectorStore. This allows the retriever to not only use the user-input\nquery for semantic similarity comparison with the contents of stored\ndocuments but to also extract filters from the user query on the\nmetadata of stored documents and to execute those filters.Get started\u200bFor demonstration purposes we’ll use a Chroma vector store. We’ve\ncreated a small demo set of documents that contain summaries of movies.Note: The self-query retriever requires you to have lark package\ninstalled.%pip install --upgrade --quiet  lark chromadbfrom langchain_community.vectorstores import Chromafrom langchain_core.documents import Documentfrom langchain_openai import OpenAIEmbeddingsdocs = [    Document(        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},    ),    Document(        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},    ),    Document(        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},    ),    Document(        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},    ),    Document(        page_content="Toys come alive and have a blast doing so",        metadata={"year": 1995, "genre": "animated"},    ),    Document(        page_content="Three men walk into the Zone, three men walk out of the Zone",        metadata={            "year": 1979,            "director": "Andrei Tarkovsky",            "genre": "thriller",            "rating": 9.9,        },    ),]vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())Creating our self-querying retriever\u200bNow we can instantiate our retriever. 
To do this we’ll need to provide\nsome information upfront about the metadata fields that our documents\nsupport and a short description of the document contents.from langchain.chains.query_constructor.base import AttributeInfofrom langchain.retrievers.self_query.base import SelfQueryRetrieverfrom langchain_openai import ChatOpenAImetadata_field_info = [    AttributeInfo(        name="genre",        description="The genre of the movie. One of [\'science fiction\', \'comedy\', \'drama\', \'thriller\', \'romance\', \'action\', \'animated\']",        type="string",    ),    AttributeInfo(        name="year",        description="The year the movie was released",        type="integer",    ),    AttributeInfo(        name="director",        description="The name of the movie director",        type="string",    ),    AttributeInfo(        name="rating", description="A 1-10 rating for the movie", type="float"    ),]document_content_description = "Brief summary of a movie"llm = ChatOpenAI(temperature=0)retriever = SelfQueryRetriever.from_llm(    llm,    vectorstore,    document_content_description,    metadata_field_info,)Testing it out\u200bAnd now we can actually try using our retriever!# This example only specifies a filterretriever.invoke("I want to watch a movie rated higher than 8.5")[Document(page_content=\'Three men walk into the Zone, three men walk out of the Zone\', metadata={\'director\': \'Andrei Tarkovsky\', \'genre\': \'thriller\', \'rating\': 9.9, \'year\': 1979}), Document(page_content=\'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\', metadata={\'director\': \'Satoshi Kon\', \'rating\': 8.6, \'year\': 2006})]# This example specifies a query and a filterretriever.invoke("Has Greta Gerwig directed any movies about women")[Document(page_content=\'A bunch of normal-sized women are supremely wholesome and some men pine after them\', metadata={\'director\': \'Greta Gerwig\', \'rating\': 8.3, \'year\': 2019})]# This example specifies a composite filterretriever.invoke("What\'s a highly rated (above 8.5) science fiction film?")[Document(page_content=\'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\', metadata={\'director\': \'Satoshi Kon\', \'rating\': 8.6, \'year\': 2006}), Document(page_content=\'Three men walk into the Zone, three men walk out of the Zone\', metadata={\'director\': \'Andrei Tarkovsky\', \'genre\': \'thriller\', \'rating\': 9.9, \'year\': 1979})]# This example specifies a query and composite filterretriever.invoke(    "What\'s a movie after 1990 but before 2005 that\'s all about toys, and preferably is animated")[Document(page_content=\'Toys come alive and have a blast doing so\', metadata={\'genre\': \'animated\', \'year\': 1995})]Filter k\u200bWe can also use the self query retriever to specify k: the number of\ndocuments to fetch.We can do this by passing enable_limit=True to the constructor.retriever = SelfQueryRetriever.from_llm(    llm,    vectorstore,    document_content_description,    metadata_field_info,    enable_limit=True,)# This example only specifies a relevant queryretriever.invoke("What are two movies about dinosaurs")[Document(page_content=\'A bunch of scientists bring back dinosaurs and mayhem breaks loose\', metadata={\'genre\': \'science fiction\', \'rating\': 7.7, \'year\': 1993}), Document(page_content=\'Toys come alive and have a blast doing so\', metadata={\'genre\': \'animated\', \'year\': 
1995})]Constructing from scratch with LCEL\u200bTo see what’s going on under the hood, and to have more custom control,\nwe can reconstruct our retriever from scratch.First, we need to create a query-construction chain. This chain will\ntake a user query and generated a StructuredQuery object which\ncaptures the filters specified by the user. We provide some helper\nfunctions for creating a prompt and output parser. These have a number\nof tunable params that we’ll ignore here for simplicity.from langchain.chains.query_constructor.base import (    StructuredQueryOutputParser,    get_query_constructor_prompt,)prompt = get_query_constructor_prompt(    document_content_description,    metadata_field_info,)output_parser = StructuredQueryOutputParser.from_components()query_constructor = prompt | llm | output_parserLet’s look at our prompt:print(prompt.format(query="dummy question"))Your goal is to structure the user\'s query to match the request schema provided below.<< Structured Request Schema >>When responding use a markdown code snippet with a JSON object formatted in the following schema:```json{    "query": string \\ text string to compare to document contents    "filter": string \\ logical condition statement for filtering documents}```The query string should contain only text that is expected to match the contents of documents. Any conditions in the filter should not be mentioned in the query as well.A logical condition statement is composed of one or more comparison and logical operation statements.A comparison statement takes the form: `comp(attr, val)`:- `comp` (eq | ne | gt | gte | lt | lte | contain | like | in | nin): comparator- `attr` (string):  name of attribute to apply the comparison to- `val` (string): is the comparison valueA logical operation statement takes the form `op(statement1, statement2, ...)`:- `op` (and | or | not): logical operator- `statement1`, `statement2`, ... (comparison statements or logical operation statements): one or more statements to apply the operation toMake sure that you only use the comparators and logical operators listed above and no others.Make sure that filters only refer to attributes that exist in the data source.Make sure that filters only use the attributed names with its function names if there are functions applied on them.Make sure that filters only use format `YYYY-MM-DD` when handling date data typed values.Make sure that filters take into account the descriptions of attributes and only make comparisons that are feasible given the type of data being stored.Make sure that filters are only used as needed. If there are no filters that should be applied return "NO_FILTER" for the filter value.<< Example 1. >>Data Source:```json{    "content": "Lyrics of a song",    "attributes": {        "artist": {            "type": "string",            "description": "Name of the song artist"        },        "length": {            "type": "integer",            "description": "Length of the song in seconds"        },        "genre": {            "type": "string",            "description": "The song genre, one of "pop", "rock" or "rap""        }    }}```User Query:What are songs by Taylor Swift or Katy Perry about teenage romance under 3 minutes long in the dance pop genreStructured Request:```json{    "query": "teenager love",    "filter": "and(or(eq(\\"artist\\", \\"Taylor Swift\\"), eq(\\"artist\\", \\"Katy Perry\\")), lt(\\"length\\", 180), eq(\\"genre\\", \\"pop\\"))"}```<< Example 2. 
>>Data Source:```json{    "content": "Lyrics of a song",    "attributes": {        "artist": {            "type": "string",            "description": "Name of the song artist"        },        "length": {            "type": "integer",            "description": "Length of the song in seconds"        },        "genre": {            "type": "string",            "description": "The song genre, one of "pop", "rock" or "rap""        }    }}```User Query:What are songs that were not published on SpotifyStructured Request:```json{    "query": "",    "filter": "NO_FILTER"}```<< Example 3. >>Data Source:```json{    "content": "Brief summary of a movie",    "attributes": {    "genre": {        "description": "The genre of the movie. One of [\'science fiction\', \'comedy\', \'drama\', \'thriller\', \'romance\', \'action\', \'animated\']",        "type": "string"    },    "year": {        "description": "The year the movie was released",        "type": "integer"    },    "director": {        "description": "The name of the movie director",        "type": "string"    },    "rating": {        "description": "A 1-10 rating for the movie",        "type": "float"    }}}```User Query:dummy questionStructured Request:And what our full chain produces:query_constructor.invoke(    {        "query": "What are some sci-fi movies from the 90\'s directed by Luc Besson about taxi drivers"    })StructuredQuery(query=\'taxi driver\', filter=Operation(operator=<Operator.AND: \'and\'>, arguments=[Comparison(comparator=<Comparator.EQ: \'eq\'>, attribute=\'genre\', value=\'science fiction\'), Operation(operator=<Operator.AND: \'and\'>, arguments=[Comparison(comparator=<Comparator.GTE: \'gte\'>, attribute=\'year\', value=1990), Comparison(comparator=<Comparator.LT: \'lt\'>, attribute=\'year\', value=2000)]), Comparison(comparator=<Comparator.EQ: \'eq\'>, attribute=\'director\', value=\'Luc Besson\')]), limit=None)The query constructor is the key element of the self-query retriever. To\nmake a great retrieval system you’ll need to make sure your query\nconstructor works well. Often this requires adjusting the prompt, the\nexamples in the prompt, the attribute descriptions, etc. For an example\nthat walks through refining a query constructor on some hotel inventory\ndata, check out this\ncookbook.The next key element is the structured query translator. This is the\nobject responsible for translating the generic StructuredQuery object\ninto a metadata filter in the syntax of the vector store you’re using.\nLangChain comes with a number of built-in translators. To see them all\nhead to the Integrations\nsection.from langchain.retrievers.self_query.chroma import ChromaTranslatorretriever = SelfQueryRetriever(    query_constructor=query_constructor,    vectorstore=vectorstore,    structured_query_translator=ChromaTranslator(),)retriever.invoke(    "What\'s a movie after 1990 but before 2005 that\'s all about toys, and preferably is animated")[Document(page_content=\'Toys come alive and have a blast doing so\', metadata={\'genre\': \'animated\', \'year\': 1995})]Help us out by providing feedback on this documentation page:PreviousParent Document RetrieverNextTime-weighted vector store retrieverGet startedCreating our self-querying retrieverTesting it outFilter kConstructing from scratch with LCELCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogYouTubeCopyright © 2024 LangChain, Inc.\n\n\n\n',
 " < < < < < < < < < < < < < < <_ < < < < ( < < < < < < < < < < < < < < < < < < < < < < < < < < < <8 < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < n < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < ( for2 <2 < < < < < < < < < < < < < ( ( <8 < (2 < ( < < < < < < < < <2 < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < <2 < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < < < < < < < < < < < < < < < < < < < < < < < < < < ( (\n <8 <2 < M < fr < f < < < < < < <2 < < ( < < < < < ( M < < < <8 < < ' ' to as < < < < < < < < < < < < < < < < < < < f < < M < ( (2 < < ( ( ( ' at < < < < < < < < < < < < < < < N < < ( ( < ( < < < < ( < <8 <4 < ( ( f < < < < < <2 <2 < < < < < < < ( f < < < < < < < < ( < < < < < < < < < n < n < < < < < m < ( < M < ( ( <2 < ( < < < (2 < < N <2 < ' in_ ( ( ( ( (2 <_ < n < '2 < (_ ' a < ( ( ( (_ (8 < ( ( ( ( ' with < (2 < ( <2 <2 <2 < < < < < ( < < < < < < < < < < < ( < <2 < <2 < # ( by the < < < < ( < ( ( < ( < ( < < < < < < < ' ( ( ( < (2 < < #2 < < < < ( < < M < ( ( < ( < (4 < < < < < < < < < < < (8 < < < < < ( < { <2 < < < <2 < (1 < ( < < < < < < < < ( ( ( ( ( ' ' more < (\n <2 < M < ( < < ( < < < < '’ < the < ( ' ' with ( ' ' ' the < < < < < < <8 < < < ( < < < < < < < < ( < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < ( < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < M < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < M < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ( < < < < < < context on < < < < < < < < <",
 '\n<|user|>\nCan you provide more examples of output parsers and how they are used? I want to understand how they can be useful in different scenarios.',
 "' ' '' ' '' '' ' '' ''' ' ' ' ' ' ' ' ' ' ''' ' '' '' '' ' ' ' ' ' '' ' '' ' '' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' '' ' ' '' '' ' ' ' ' ' '' ' ' '' '' ' ' '' ' ' ' ' ' ' ' ' '' '' '' '' ' '' '' ' ' ' ' ' '' ' '' ' '' ' '' '' ' '' '' ''' ' '' ' ' ' ' ' ''' ''' '''''' ' ' ' '' ' ' ' '' ' ' ' ' '' ' ' ' ' ' ' ' '' ''' ' ' ' ' '' ' ' ' '' ' ' ' '' '' '' ''' '' ' ' ' ' ' ' ' ' ' ' '' ' ' ' '' '' '' ' ''' '' ' '' ' '' ' '' ' ' ' ' ' ' ' ' ' ' ' ' M '' ' '' ' ' ' '' ' '' ' ' Type '' ' ''' ' M ' ' Type ''' ' ' ' '' ' ' ' ' ' M ' ' M ' ' ' M '' ' ' ' ' ' ' ' '' '' ' ' ' ' ' '' ' '' '' '' ' '' ' ' ' '' ' ' ' '' ' ' ' ' '' ' ' ' ' '' ' ' ''' '' ' ' '' ' ' ' ' Type ''' ' ' ' ' '' ' ' ' ' ' ' ' '' ' ' ' '' '' ' ' ' ' ' ' ' M ' ' ' ' ' M < ' ' ' ' ' ' ' ' ' ' ' '' < ' M ' '' <' < ' ' ' ' ' ' '' <' '' '' ' ' ' Type '' < < ' ' ' ' ' '' ' '' < ' '' ' Type' < Type ' ' ' '' < '' ' Type ' ' ' ' ' ' ' '' ' ' ' ' ' '' '' ' ' M < ' ' L < Type type ' ' ' '' '' ' ' type ' '' Type ' ' ' ' ' ' ' M < < ' ' ' ' ' ' ' ' ' '' ' ' ' ' ' M Type ' ' ' ' M ' '' '' ' '' ' Type ' ' ' ' M < Type ' ' ' M <' type Type' Type ' M ' ' ' '' ' Type ' Type ' ' '[ '' ' Type ' M ' M '\n < ' M < ' ' ' Type M ' Type ' M_ ' ' ' type ' ' type M Type Type' type type' ' type ' M '' M ' '' ' M < type type ' ' M < type ' ' ' ' type Type ' ' type < < < < < Type M < Type Type ' Type M < type types' M < < < < < < < < < < < < Type Type Type Type Type ' M < < < < < < < < < < < < < < < < < < < < < < < < < < < Type Type' Type Type < < < ' M < ' M < < < < < < < < < < < < < < < < < Type M < type Type Type Type M < type M < M < < < < M < ' M < < Type M < Type M Type M M Type' Type M Type ' M Type [ M type Type' < M Type ' Type ' M Type Type\n < ' M ' M M M Type M < M < M M M M Types types type M Type M Type type M'[ [[[ Type [ [ [ M M < M Type M L [ M Type M Type M M Type M Type M Type M Type M < ' M Type M Type M M M Type M < M Type '' M type ' M Type M Type M Type[ M Type' Type Type M M M M Type M type M M Type Type M Type M Type M Type type M M Type M type M Type M Type M M type M M M ' M\n < M Type M M type M Type type M type Type Type M M M type M M M Type Type M M type M M M type Type M Type' type M M\n\n Type M Type M M M M Type M Type M Type M < ' M type M type M < M",
 '. Based on the following configuration alternative is used by default for this configuration.How toConfigure chain internals at runtimeConfigure internals at runtimeConfigure internals at runtimeConfigure altm.invoke("llm"").invoke("bears")AIMessage(content=\'7\').with_config(configurable={"llm": "openai"})openai_joke = ChatOpenAI(temperature=0).configurable_fields(    temperature=0)chain.invoke({"topic": "bears"})chain.invoke({"topic": "bears"})AIMessage(content=\'7\')AIMessage(content=\'7\')chain.invoke({"topic": "bears"})AIMessage(content="What don\'t bears wear shoes!")AIMessage(content=\'7\')AIMessage(content="7" and LLMs with Promts and LLMsSaving configurations at runtimeConfigure internals at runtimeConfigure internals at runtimeConfigure altm.configurable_alternatives = Confurable_alternatives.Configurable_fields(    owner_repo_commit and LLMs with PrommsConfigurableFields(    owner_repo_joke = "openai"}.configurable_fields (5024 LegMap")chain.invoke({"topic": "bears"})chain.invoke({"topic": "bears"})AIMessage(content=\'7\')AIMessage(content=\'7\' and LLMs with Promts and LLMsSaving configurations at runtimeConfigure altm.invoke("bears"})AIMessage(content=\'7\' and LLMs with Promts and LLMs with Promts and LLMsSaving Configurable_fields(    owner_repo_joe_joe"configurable_fields = Confurable_fields(    owner_repo_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_Joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_Joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_joe_Joe_joe_Joe_joe_Joe_joe_Joe_joe_Joe_joe_Joe_joe_joe_Joe_joe_joe_joe_joe_joe_joe_Joe_Joe_joe_joe_Joe_joe_joe_Joe_Joe_Joe_joe_Joe_joe_joe_joe_Joe_joe_joe_Joe_joe_Joe_Joe_joe_joe_Joe_Joe_',
 '\n<|user|>\nCan you provide an example of how to use the RunnableLambda function to implement a custom function within a pipeline? The documentation seems unclear on how exactly to do it.',
 "\n<|assistant|>\nThe RunnableWithMessageHistory class in LangChain's Expression Language allows us to add message history (memory) to certain types of chains. It wraps another Runnable and manages the chat message history for it. Specifically, it can be used for any Runnable that takes as input one of a sequence of BaseMessage, a dict with a key that takes a sequence of BaseMessage, or a dict with a key that takes the latest message(s) as a string or sequence of BaseMessage and a separate key that takes historical messages. The RunnableWithMessageHistory class takes two arguments: the Runnable to wrap and a callable that returns an instance of BaseChatMessageHistory, which can be specified by passing a configuration to the RunnableWithMessageHistory at runtime. By default, the configuration parameter is expected to be a single string session_id, but it can be customized by passing in a list of ConfigurableFieldSpec objects to the history_factory_config parameter. The get_session_history callable can return an instance of either an in-memory ChatMessageHistory or a more persistent storage using Redis (or another provider). The RunnableWithMessageHistory class lets us manage the chat message history for certain types of chains, making it useful for situations where it can be hard to understand what the inputs are to various parts of the chain, especially with something like message history injection.",
 '--InstallationQuickstartSecurityLangChain Expression LanguageGet startedWhy use LCELInterfaceStreamingHow toCookbookPrompt + LLMRAGMultiple chainsQuerying a SQL DBAgentsRunnables are reloading for memory (LCEL)Conversions and I can reload my_memory. This memory, get load memory, CONDINGLEsLogServe GitHubLogsHub GitHubLogsHub GitHubLogsHub GitHubLogsHub GitHubLogsHub GitHubLogsHub GitHubLogsHub GitHubLogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub LogsHub GitHub LogsHub GitHub LogsHub GitHub LogsHub LogsHub GitHub LogsHub GitHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub GitHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub Logs Hub GitHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub Logs Hub GitHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub LogsHub Log',
 '2 < The\n <2 < < The  both can not\n\n <  The which in5 < that by2 < in \n this we\n a1 < just in in ( just just4 < just in just just in  that pref <  at a ( just as8 < as ( a2 < as from the3 < in andsss a ssssals <  < s1 <1 < ( ( and  W 2 " just not ass <s2 < as < ( The in by and *s11 < just in a the which the and ( in W and  (1 W2 < in ( the W < 1 < 3 < and de in as from the and  the27 <  the12 "  in which  the 8 < in  at2 6 in in and in ( more the the ( with in the by from and thes12 W (622 The in— with ,  in  — the the  at this to pre A in can  the3 < the  a\n This  at in in The ( The\n < as A < ( 25 < 24 The with ( also\xad9 1 (—  <\n2 in  Dal < as < the2 < by in A The the The the44 The  A < or ( Pre  (  by we2 < the C1 <  A The2 by to, a by the the to as 1 < a < following we < this3 < as is and from the, with  1 A ( ( ( with as < which The (1 ( just\n < the1 < in 11 The (—, the thed the In in   the ( which the74 < 1 The\n2 The de in in5 B with the can, in as in   the from the  Thess the in,  , group with " to  in in the  the de the ( in— as, in as ( as ( as the first through to and all  which as to through through    in as  2 as the the1 <  in \n The in as, here for, ( as  2 in as as21 < at as in a B a A ( in is2 which and The B ( more, a, the ( as which ( and and as2  not in as ( and by2 ( a ( are the the by, ( (3 ( which a ( as and this in at, this and (31 *1 2, while as that most  in  in the The from0 <4 < as while a and,2 B B in in that 62  In by2 by3 <, for on ( the (1 < The the1 < following, ( <  <,3 <1 The in the The by (1   and ( in ( this years, < in1 < (2 The\n <1 The and from is, the221 The as more just    for just in can a2s The2\n124 < a Senate2 < with in by\n  The this by can by by a The in ( from which in  as 15 (5 ( just not a ( a (0 ( can in ( - 3 as " A " the ( more in by and\n < ( (1133 Al in in20 U3 The in W A4 < < A <\n < in in in  the1 <2 < for in6 <2 < < < < <s2 <1 < (3 <7 <26 (2 <al <2 < < < < < < < < <3 Q <\n <1 < < < < A < The < <3 < < < <2 < as < <2 < < < < < <1 <4 < < < < < <2 < < < < < < < < < < < < < < < < < < < < W < < W < < < < < < < W < W < the < < < < < < < < < < W < W < In The W < by W The in as in the W Wsssss this Wss W A3 The as2 (21 < after the2 W2 <  the1s W W W this thiss A < most A < in which ( the as52 C The in most and and this2 <2 < just on the ( from most a < Alsssss assssss W The by  The the < in in ( A < in and following as a The ( ( Ws ( in in',
 '\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBased on the given documentation, can you explain how to add memory to an arbitrary chain using LangChain Expression Language?',
 "\n<|assistant|>\nYes, based on the documentation provided, here's how you can add memory to an arbitrary chain using LangChain Expression Language:\n\n1. Create a Runnable object that takes a sequence of BaseMessages as input and returns an output. This Runnable can be any function you want to add memory to.\n\n2. Wrap your Runnable object inside a RunnableWithMessageHistory object. This object manages the message history for your Runnable and makes it possible to access previous messages during runtime.\n\n3. When wrapping your Runnable, you can specify a callable that returns an instance of BaseChatMessageHistory. This callable can be used to create a memory store that persists messages between runs. By default, this callable is expected to be a single string session_id, but you can customize it by passing a list of ConfigurableFieldSpec objects to the history_factory_config parameter.\n\n4. To access previous messages during runtime, you can call the get_session_history() method on your RunnableWithMessageHistory object. This method returns an instance of BaseChatMessageHistory, which you can use to retrieve previous messages and inject them into your Runnable as needed.\n\n5. By using RunnableWithMessageHistory, you can make your chains more context-aware and improve their performance by reusing previously computed results instead of starting from scratch every time. Additionally, it can make it easier to understand what the inputs are to various parts of the chain, especially with something like message history injection."]


Load the texts into the vectorstore: build the index


# Now, use all_texts to build the vectorstore with Chroma
vectorstore = Chroma.from_texts(texts=all_texts, embedding=embd)
retriever = vectorstore.as_retriever()
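
Before wiring the retriever into a chain, a quick sanity check can be useful. This snippet is my own addition (the query string is just an illustrative example); it calls the retriever directly and previews the top results.


# Retrieve the most similar chunks/summaries for a sample query and preview them
docs = retriever.invoke("What is LangChain Expression Language?")
for doc in docs[:3]:
    print(doc.page_content[:200])
    print("---")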


Build the query engine


from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
# Prompt
prompt = hub.pull("rlm/rag-prompt")

# Post-processing
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
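
If you also want to see which texts supported each answer, one possible variant is the standard LCEL pattern for returning sources. This is a sketch of my own, not part of the original article; it reuses the retriever, prompt, model, and format_docs defined above.


from langchain_core.runnables import RunnableParallel

# Generation step: expects a dict that already contains "documents" and "question"
rag_chain_from_docs = (
    {
        "context": lambda x: format_docs(x["documents"]),
        "question": lambda x: x["question"],
    }
    | prompt
    | model
    | StrOutputParser()
)

# Fan the question out to the retriever and a passthrough, then attach the answer
rag_chain_with_sources = RunnableParallel(
    {"documents": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)

# result = rag_chain_with_sources.invoke("What is LCEL?")
# result["answer"] holds the reply, result["documents"] the retrieved texts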


LangChain Hub RAG prompt


print(prompt)
####################### Response #####################
ChatPromptTemplate(input_variables=['context', 'question'], metadata={'lc_hub_owner': 'rlm', 'lc_hub_repo': 'rag-prompt', 'lc_hub_commit_hash': '50442af133e61576e74536c6556cefe1fac147cad032f4377b60c436e6cdcb6e'}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: {question} \nContext: {context} \nAnswer:"))])


print(prompt.messages[0].prompt.template)
####################### Response ############################
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question} 
Context: {context} 
Answer:


Ask a query


# Question
response =rag_chain.invoke("What is LCEL?")
print(str(response))
############# Response ######################################
 LangChain Expression Language (LCEL) is a declarative way to easily compose chains together in LangChain. It was designed from day 1 to support putting prototypes in production with no code changes, from the simplest "prompt + LLM" chain to complex chains with hundreds of steps. Some reasons why one might want to use LCEL include streaming support (allowing for the best possible time-to-first-token), async support (enabling use in both synchronous and asynchronous APIs), optimized parallel execution (automatically executing parallel steps with the smallest possible latency), retries and fallbacks (a great way to make chains more reliable at scale), access to intermediate results (useful for letting end-users know something is happening or debugging), input and output schemas (providing Pydantic and JSONSchema schemas inferred from chain structure for validation), seamless LangSmith tracing integration (maximum observability and debuggability), and seamless LangServe deployment integration (easy chain deployment).


Ask another query


# Question
response =rag_chain.invoke("How to define a RAG chain? Give me a specific code example.")
################### Response ########################
To define a RAG chain in LangChain Expression Language (LCEL), you can follow these steps:
1. Create a function decorated with `@chain`. This function becomes a runnable.
2. Inside this function, create chains by composing Runnable components using the `|` operator. For example, `prompt | model | parser`.
3. Save the chain as a variable and return it.
4. Call this function using `invoke()`.
Here's an example:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
prompt1 = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
prompt2 = ChatPromptTemplate.from_template("What is the subject of this joke: {joke}")
@chain
def custom_chain(text):
    prompt_val1 = prompt1.invoke({"topic": text})
    output1 = ChatOpenAI().invoke(prompt_val1)
    parsed_output1 = StrOutputParser().invoke(output1)
    chain2 = prompt2 | ChatOpenAI() | StrOutputParser()
    return chain2.invoke({"joke": parsed_output1})
custom_chain("bears")
``` 
In this example, `custom_chain()` is a runnable function that takes `text` as input and returns a chain. The chain starts with `prompt1`, which is passed `text` as the value for `topic`. The output of `prompt1` is passed through `ChatOpenAI()` and `StrOutputParser()` to extract the joke. Then, `prompt2` is executed with `parsed_output1` as the value for `joke`. The output of `prompt2` is also passed through `ChatOpenAI()` and `StrOutputParser()`. The resulting output is returned from `custom_chain()`.
You can also use `@chain` with other Runnable components, such as `Retriever`, `Memory`, `Fallbacks`, `Tools`, etc., to create more complex chains for various tasks. 
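
To make that last point concrete, here is a small sketch of my own combining the @chain decorator with the retriever and prompt built earlier (not code produced by the model or taken from the article):


from langchain_core.runnables import chain

@chain
def qa_with_sources(question: str):
    # Retrieve supporting documents, answer with the RAG prompt, and return both
    docs = retriever.invoke(question)
    answer = (prompt | model | StrOutputParser()).invoke(
        {"context": format_docs(docs), "question": question}
    )
    return {"answer": answer, "sources": [d.page_content[:100] for d in docs]}

qa_with_sources.invoke("What is LCEL?")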


Conclusion

Here, we implemented RAG over long-context documents using RAPTOR, an advanced retrieval technique.

Source: https://medium.com/the-ai-forum/implementing-advanced-rag-in-langchain-using-raptor-258a51c503c6