LangChain Chains
Chains allow you to combine multiple components into complex workflows. They're the core mechanism for creating sophisticated AI applications in LangChain.
⛓️ What are Chains?
Chains are sequences of calls to components (models, prompts, tools) that work together to accomplish tasks. They enable you to:
- Combine multiple LLM calls
- Add pre/post-processing steps
- Create conditional logic
- Build multi-step workflows (a minimal sketch follows below)
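As a quick taste, here is a minimal sketch of these ideas using the LCEL style covered later in this guide (the prompt wording and the post-processing lambda are illustrative, not part of any fixed API):

from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnableLambda

llm = ChatOpenAI(model="gpt-3.5-turbo")

# One LLM call followed by a plain-Python post-processing step
summarize = PromptTemplate.from_template("Summarize in one line: {text}") | llm | StrOutputParser()
pipeline = summarize | RunnableLambda(lambda s: s.strip().upper())

print(pipeline.invoke({"text": "Chains link prompts, models, and tools into workflows."}))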
🔗 Basic Chains
LLMChain
The simplest chain, combining a prompt template with an LLM.
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
# Initialize LLM
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
# Create prompt template
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a catchy tagline for a {topic} startup."
)
# Create chain
chain = LLMChain(llm=llm, prompt=prompt)
# Run the chain
result = chain.run("AI-powered gardening")
print(result)
# Output: "Grow Smarter, Not Harder: AI for Your Garden"Sequential Chain
Run multiple chains in sequence, where output of one becomes input for the next.
from langchain.chains import SequentialChain
# Chain 1: Generate company name
name_prompt = PromptTemplate(
    input_variables=["industry"],
    template="Generate a creative name for a {industry} company:"
)
name_chain = LLMChain(llm=llm, prompt=name_prompt, output_key="company_name")
# Chain 2: Create tagline
tagline_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Create a tagline for {company_name}:"
)
tagline_chain = LLMChain(llm=llm, prompt=tagline_prompt, output_key="tagline")
# Chain 3: Write description
desc_prompt = PromptTemplate(
    input_variables=["company_name", "tagline"],
    template="Write a short description for {company_name} with tagline '{tagline}':"
)
desc_chain = LLMChain(llm=llm, prompt=desc_prompt, output_key="description")
# Combine into sequential chain
overall_chain = SequentialChain(
    chains=[name_chain, tagline_chain, desc_chain],
    input_variables=["industry"],
    output_variables=["company_name", "tagline", "description"],
    verbose=True
)
# Run the complete workflow
result = overall_chain({"industry": "sustainable fashion"})
print(f"Company: {result['company_name']}")
print(f"Tagline: {result['tagline']}")
print(f"Description: {result['description']}")🎯 Advanced Chains
Router Chain
Route inputs to different chains based on conditions.
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate
# Define different prompt templates
physics_template = """You are a physics expert. Answer this question:
{input}
"""
math_template = """You are a mathematics expert. Solve this problem:
{input}
"""
history_template = """You are a history expert. Explain this historical topic:
{input}
"""
# Create prompt infos for routing
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
        "description": "Good for solving mathematical problems",
        "prompt_template": math_template
    },
    {
        "name": "history",
        "description": "Good for explaining historical events",
        "prompt_template": history_template
    }
]
# Create router chain
llm = ChatOpenAI(temperature=0)
router_template = """Given a user question, route it to the most appropriate expert.
{destinations}
Question: {input}
"""
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=RouterOutputParser(),
partial_variables={"destinations": "\n".join(
f"{p['name']}: {p['description']}" for p in prompt_infos
)}
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
# Create destination chains
destination_chains = {}
for p_info in prompt_infos:
    prompt = PromptTemplate(template=p_info["prompt_template"], input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[p_info["name"]] = chain
# Create multi-prompt chain
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=LLMChain(llm=llm, prompt=PromptTemplate.from_template("{input}"))
)
# Use the router chain
result = chain.run("What is Newton's second law of motion?")
# Routes to the physics expert

Transform Chain
Apply transformations to input or output data.
from langchain.chains import TransformChain
# Define transformation function
def cleanup_function(inputs: dict) -> dict:
    text = inputs["text"]
    # Clean up the text: normalize whitespace and casing
    cleaned = text.strip().lower().replace("\n", " ")
    return {"cleaned_text": cleaned}
# Create transform chain
cleanup_chain = TransformChain(
    input_variables=["text"],
    output_variables=["cleaned_text"],
    transform=cleanup_function
)
# Use with LLM chain
template = "Summarize this text: {cleaned_text}"
prompt = PromptTemplate(template=template, input_variables=["cleaned_text"])
llm_chain = LLMChain(llm=llm, prompt=prompt)
# Combine chains
from langchain.chains import SimpleSequentialChain
combined_chain = SimpleSequentialChain(
    chains=[cleanup_chain, llm_chain],
    verbose=True
)
# Run the combined chain
messy_text = """
THIS IS A VERY MESSY TEXT WITH
EXTRA SPACES AND NEW LINES!!!
"""
result = combined_chain.run(messy_text)

📝 LCEL Chains (LangChain Expression Language)
Modern Chain Syntax
The newer, more flexible way to create chains using the pipe operator |.
from langchain.schema import StrOutputParser
# Create prompt template
prompt = PromptTemplate.from_template(
    "Tell me a joke about {topic} in the style of {style}"
)
# Create chain using LCEL
chain = prompt | llm | StrOutputParser()
# Run the chain
result = chain.invoke({
    "topic": "programming",
    "style": "a pirate"
})
print(result)
# Output: "Why did the programmer quit his job? He didn't get arrays! Arrr!"Complex LCEL Chain
Combine multiple operations in a pipeline.
from langchain.schema.runnable import RunnablePassthrough
# Define operations
generate_idea = PromptTemplate.from_template(
    "Generate a creative business idea about {topic}"
) | llm | StrOutputParser()

expand_idea = PromptTemplate.from_template(
    "Expand this idea with details: {idea}\n\nInclude:\n1. Target market\n2. Revenue model\n3. Key features"
) | llm | StrOutputParser()

create_name = PromptTemplate.from_template(
    "Create a catchy name for this business: {expanded_idea}"
) | llm | StrOutputParser()
# Combine operations
# Each step adds a new key to the running dict while passing earlier keys through
business_chain = (
    RunnablePassthrough.assign(idea=generate_idea)
    | RunnablePassthrough.assign(expanded_idea=expand_idea)
    | RunnablePassthrough.assign(name=create_name)
)
# Run the complex chain
result = business_chain.invoke({"topic": "sustainable pet care"})
print(f"Business Idea: {result['idea']}")
print(f"Expanded: {result['expanded_idea']}")
print(f"Name: {result['name']}")🛠️ Custom Chains
Creating Your Own Chain
Build specialized chains for your specific use cases.
from langchain.chains.base import Chain
from langchain.schema.language_model import BaseLanguageModel
from typing import Any, Dict, List

class SentimentChain(Chain):
    """Custom chain that analyzes sentiment and provides recommendations."""

    llm: BaseLanguageModel

    @property
    def input_keys(self) -> List[str]:
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        return ["sentiment", "recommendation", "score"]

    def _call(self, inputs: Dict[str, Any], run_manager=None) -> Dict[str, str]:
        text = inputs["text"]
        # Analyze sentiment
        sentiment = self.llm.predict(
            f"Analyze the sentiment of this text (positive/negative/neutral): {text}"
        )
        # Get a sentiment score
        score = self.llm.predict(
            f"Rate the sentiment score from -1 (very negative) to 1 (very positive): {text}"
        )
        # Provide a recommendation
        recommendation = self.llm.predict(
            f"Based on the sentiment analysis, what should be the next step? Text: {text}"
        )
        return {
            "sentiment": sentiment,
            "recommendation": recommendation,
            "score": score
        }
# Use the custom chain (call it with a dict; run() only supports single-output chains)
custom_chain = SentimentChain(llm=llm)
result = custom_chain({"text": "I love this product! It's amazing!"})
print(f"Sentiment: {result['sentiment']}")
print(f"Score: {result['score']}")
print(f"Recommendation: {result['recommendation']}")

🎯 Real-World Examples
Document Q&A Chain
Build a question-answering system for documents.
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
# Load and process documents
from langchain_community.document_loaders import TextLoader
loader = TextLoader("company_docs.txt")
documents = loader.load()
# Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(documents, embeddings)
# Create retrieval chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True
)
# Ask questions
query = "What is our company's refund policy?"
result = qa_chain({"query": query})
print(f"Answer: {result['result']}")
print(f"Sources: {[doc.metadata for doc in result['source_documents']]}")Conversational Chain
Build a chatbot with personality and memory.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
# Create memory
memory = ConversationBufferMemory()
# Create conversational chain
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)
# Have a conversation
response1 = conversation.predict(input="Hi! I'm interested in AI.")
response2 = conversation.predict(input="Can you recommend some resources?")
response3 = conversation.predict(input="What was my first message?")
print(f"Bot remembers: {memory.buffer}")🔧 Chain Configuration
Customizing Chain Behavior
# Set parameters for chains
chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,        # Show detailed execution
    output_key="result"  # Custom output key
)
# Add callbacks for monitoring
from langchain.callbacks import get_openai_callback
with get_openai_callback() as cb:
    result = chain.run("Explain quantum computing")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Total Cost: ${cb.total_cost}")

🎯 Best Practices
1. Choose the Right Chain Type
- LLMChain for simple prompt-LLM combinations
- SequentialChain for multi-step workflows
- RouterChain for conditional logic (an LCEL alternative is sketched after this list)
- LCEL chains for modern, composable workflows
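If you prefer staying in LCEL for conditional logic, a branch runnable is one option. A minimal sketch, where the digit-based predicate and the two prompts are illustrative assumptions rather than a recommended routing heuristic:

from langchain.schema.runnable import RunnableBranch

math_chain = PromptTemplate.from_template("Solve this problem: {input}") | llm | StrOutputParser()
general_chain = PromptTemplate.from_template("{input}") | llm | StrOutputParser()

# Try each (condition, runnable) pair in order; the bare runnable at the end is the default
branch = RunnableBranch(
    (lambda x: any(ch.isdigit() for ch in x["input"]), math_chain),
    general_chain,
)
result = branch.invoke({"input": "What is 17 * 24?"})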
2. Optimize Performance
# Use batching for multiple inputs
batch_inputs = [{"topic": f"topic_{i}"} for i in range(5)]
results = chain.batch(batch_inputs)
# Use streaming for long-running chains
for chunk in chain.stream({"topic": "space"}):
    print(chunk, end="", flush=True)

3. Error Handling
import json
from langchain.schema import BaseOutputParser

class SafeParser(BaseOutputParser):
    def parse(self, text: str) -> dict:
        try:
            # Parsing that can genuinely fail (here: JSON decoding)
            return {"result": json.loads(text)}
        except Exception:
            return {"error": "Parse failed", "original": text}

# Use with error handling (the prompt's variables must match the keys you invoke with)
prompt = PromptTemplate.from_template("{input}")
chain = prompt | llm | SafeParser()
result = chain.invoke({"input": "test"})

4. Memory Management
- Use appropriate memory types for your use case
- Clear memory when necessary to avoid context overflow (see the sketch after this list)
- Consider memory costs for long conversations
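A minimal sketch of bounding conversation memory with a windowed buffer (the window size k=3 is an arbitrary illustration):

from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k exchanges so the prompt cannot grow without bound
memory = ConversationBufferWindowMemory(k=3)
conversation = ConversationChain(llm=llm, memory=memory)

conversation.predict(input="Hi!")
memory.clear()  # Reset the buffer when a new session starts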
Chains are the core building blocks for complex AI workflows. Next, explore how to add state and context with Memory components.