This content originally appeared on DEV Community and was authored by Utkarsh Rastogi
Hey there! Welcome to my journey of learning LangChain with AWS Bedrock. I'm documenting everything as I go, so you can learn alongside me. Today was my first day diving into this fascinating world of AI models, and honestly, it felt like having a conversation with the future.
Quick Setup Note: I'm using AWS SageMaker Studio notebooks for this entire series - it comes with all AWS permissions pre-configured and makes the learning process super smooth. Just create a notebook and you're ready to go!
What is LangChain and Why Use It?
LangChain is a Python framework that makes working with Large Language Models (LLMs) incredibly simple. Instead of writing complex API calls and handling raw JSON responses, LangChain provides a clean, intuitive interface.
Why LangChain?
- Simplicity: One line of code instead of 20+ lines of API handling
- Consistency: Same interface for different AI models (Claude, GPT, Titan, etc.)
- Power: Built-in features like memory, chains, and prompt templates
- Flexibility: Easy to switch between models or combine multiple AI calls
Think of LangChain as a bridge between your Python code and powerful AI models. Instead of dealing with complex API calls and JSON responses, LangChain makes it feel like you're just chatting with a really smart friend who happens to live in the cloud.
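To appreciate what LangChain hides, here's roughly what the raw path looks like without it - building the JSON request body by hand. This is a sketch: the payload shape follows the Anthropic Messages API on Bedrock, and `build_claude_body` is a hypothetical helper of mine, not part of any library.

```python
import json

# Hypothetical helper: hand-build the request body that ChatBedrock
# assembles for you behind the scenes (Anthropic Messages API shape).
def build_claude_body(prompt, max_tokens=256, temperature=0.7):
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_claude_body("Write a short poem about AWS")
# With raw boto3 you would then call something like:
# response = bedrock_client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body)
# ...and parse response["body"].read() yourself. LangChain collapses
# all of this into a single llm.invoke("...").
```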
Setting Up Our Playground
First things first - let's get our tools ready. It's like preparing chai before a good conversation:
```python
!pip install boto3==1.39.13 botocore==1.39.13 langchain==0.3.27 langchain-aws==0.2.31
```

```python
import boto3
from langchain_aws import ChatBedrock

# Initialize the Bedrock runtime client
bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)
```
This is our foundation. The `bedrock_client` is like getting a VIP pass to AWS's AI models. Simple, right?
Meeting Claude - The Thoughtful AI
Claude is like that friend who always gives thoughtful, well-structured answers. Let's set him up:
```python
# Create a LangChain ChatBedrock instance for Claude
llm = ChatBedrock(
    client=bedrock_client,
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"max_tokens": 256, "temperature": 0.7}
)

response = llm.invoke("Write a short poem about AWS with a warm, desi Indian feel")
print("Claude Response:\n", response.content)
```
The magic happens in that `invoke()` call. It's like asking a question and getting back a thoughtful response. The `temperature: 0.7` makes Claude a bit creative - not too robotic, not too wild.
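Temperature isn't Bedrock-specific - it's how sampling works in most LLMs. Here's a toy sketch (made-up logits, plain Python, no AWS needed) of why lower temperature means more focused output:

```python
import math

# Toy illustration of temperature: it rescales the model's scores before
# they become probabilities. The logits below are made up for the demo.
def softmax_with_temperature(logits, temperature):
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                        # subtract max for stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                      # pretend token scores
focused = softmax_with_temperature(logits, 0.2)
creative = softmax_with_temperature(logits, 1.5)
# Low temperature: the top token hogs almost all the probability.
# High temperature: probability spreads out, so sampling gets more varied.
```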
Meeting Titan - The Quick Responder
Now, let's try Amazon's own Titan model. But here's where I learned something important the hard way:
```python
# Try the Titan model (shorter completions)
titan_llm = ChatBedrock(
    client=bedrock_client,
    model_id="amazon.titan-text-lite-v1",
    model_kwargs={"maxTokenCount": 128, "temperature": 0.5}
)

prompt = """You are a creative Indian poet with a friendly desi vibe. Write a short poem (4 lines max) about AWS cloud services.
Use simple human feelings and desi cultural touches (like chai, monsoon, Bollywood style). Keep the tone warm, positive, and
free of any bad or offensive words."""

response = titan_llm.invoke(prompt)
print("Titan Response:\n", response.content)
```
The Gotchas I Discovered
1. Model Names Matter
I initially used `amazon.titan-text-lite-v1`, but for chat interactions, `amazon.titan-text-express-v1` works better. It's like calling someone by the right name - details matter!
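If you're ever unsure which model IDs your account can use, the Bedrock control-plane API can list them. Below, `titan_text_models` is a hypothetical filter of my own over that response - the sample data is made up, and the real list would come from `boto3.client("bedrock").list_foundation_models()`:

```python
# Hypothetical helper: pull the Titan text model ids out of a
# list_foundation_models() response. The sample data below is made up;
# in practice you'd fetch it with:
#   summaries = boto3.client("bedrock").list_foundation_models()["modelSummaries"]
def titan_text_models(summaries):
    return [s["modelId"] for s in summaries
            if s["modelId"].startswith("amazon.titan-text")]

sample = [
    {"modelId": "amazon.titan-text-lite-v1"},
    {"modelId": "amazon.titan-text-express-v1"},
    {"modelId": "anthropic.claude-3-sonnet-20240229-v1:0"},
]
# titan_text_models(sample) keeps only the two Titan text ids.
```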
2. Parameter Confusion: maxTokenCount vs max_tokens
This one got me! Different models expect different parameter names:
- Claude models: use `max_tokens`
- Some Titan models: might expect `maxTokenCount` in certain contexts
- LangChain standard: generally uses `max_tokens`
Think of it like this - it's the same concept (limiting response length), but different models speak slightly different dialects. Always check the documentation!
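One way to stop tripping over these dialects is a tiny wrapper that picks the parameter name from the model ID. To be clear, this is a hypothetical helper I wrote for myself, not a LangChain feature:

```python
# Hypothetical helper (not part of LangChain): translate a generic token
# limit into the kwarg name each model family tends to expect.
def token_limit_kwargs(model_id, limit):
    if model_id.startswith("amazon.titan"):
        return {"maxTokenCount": limit}   # Titan-style name
    return {"max_tokens": limit}          # Claude / LangChain default

# Usage sketch with ChatBedrock:
# titan_llm = ChatBedrock(
#     client=bedrock_client,
#     model_id="amazon.titan-text-lite-v1",
#     model_kwargs={**token_limit_kwargs("amazon.titan-text-lite-v1", 128),
#                   "temperature": 0.5})
```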
3. Using the Right Model Instance
I made a silly mistake - I created `titan_llm` but then used `llm` for the Titan response. It's like preparing two different teas but serving the wrong one to your guest!
What I Learned Today
- LangChain simplifies everything - No more wrestling with raw API responses
- Each model has a personality - Claude is thoughtful, Titan is quick
- Parameter names vary - Always double-check the docs
- Temperature controls creativity - Lower = more focused, Higher = more creative
- Model IDs are specific - Use the right one for your use case
Wrapping Up
Day 1 was all about getting comfortable with the basics. Like learning to ride a bike, the first day is about balance and not falling off. Each day we'll be discovering new concepts through hands-on experimentation!
The beauty of LangChain is that it makes powerful AI feel approachable. You don't need a PhD in machine learning - just curiosity and willingness to experiment.
Happy coding! If you found this helpful, leave a comment and follow this whole series as we explore more LangChain magic together.
About Me
Hi! I'm Utkarsh, a Cloud Specialist & AWS Community Builder who loves turning complex AWS topics into fun chai-time stories ☕
This is part of my "LangChain with AWS Bedrock: A Developer's Journey" series. Follow along as I document everything I learn, including the mistakes and the victories.

Utkarsh Rastogi | Sciencx (2025-08-24T06:17:03+00:00) Day 1: LangChain Basics – My First Chat with Claude and Titan. Retrieved from https://www.scien.cx/2025/08/24/day-1-langchain-basics-my-first-chat-with-claude-and-titan/