AI Agent Workflow Development with Python & TypeScript

Wahyu Ikbal

Building AI Agent Workflows with Python & TypeScript

Recently, AI agents have been on the rise among developers and engineers. For those of you who don’t know, an AI agent is an application of an LLM in which the model is equipped with tools, memory, and the ability to make decisions. Think of it as an assistant that can perform tasks, not just return text.
The concept of AI agents is very powerful, especially when multiple agents work together in a system. The way agents interact and coordinate is called a workflow. This article will not focus on the theory of AI agents in depth, but on the engineering and practical side, so make sure you have a basic understanding of RAG (Retrieval-Augmented Generation) and LLMs before proceeding.

Building an Effective Agent Workflow

In this section, we will look at common patterns used in agentic systems across many applications, and practice them using the Vercel AI SDK and LangGraph. I will also briefly introduce both frameworks; if you are already familiar with them, you can skip ahead to the code.

Why LangGraph?

LangGraph is a library built on top of LangChain, designed for LLM-based applications, which makes it a solid choice for complex agent systems. It makes building graph-based workflows straightforward and efficient, simplifying tasks like orchestrating LLM calls and managing structured outputs, and it integrates seamlessly with Pydantic for data validation and type enforcement, so the inputs and outputs of each workflow step stay well-structured and reliable.
At the heart of LangGraph is the concept of a stateful graph, illustrated by the short sketch after this list:
State: Stores and updates context throughout the workflow, enabling dynamic decision-making.
Nodes: Represent computation steps, handling tasks like processing inputs, decision-making, or external interactions.
Edges: Define the flow between nodes, supporting conditional logic for flexible, multi-step workflows.
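
To make these three ideas concrete, here is a minimal, model-free sketch; the names (GreetState, make_greeting, shout) are purely illustrative. A tiny state flows through two nodes connected by one edge.

from typing import TypedDict
from langgraph.graph import StateGraph

# State: the data carried through the workflow
class GreetState(TypedDict):
    name: str
    greeting: str

# Nodes: plain functions that read and update the state
def make_greeting(state: GreetState) -> GreetState:
    return {"name": state["name"], "greeting": f"Hello, {state['name']}!"}

def shout(state: GreetState) -> GreetState:
    return {"name": state["name"], "greeting": state["greeting"].upper()}

# Edges: the order in which nodes run
graph = StateGraph(GreetState)
graph.add_node("greet", make_greeting)
graph.add_node("shout", shout)
graph.add_edge("greet", "shout")
graph.set_entry_point("greet")
graph.set_finish_point("shout")

app = graph.compile()
print(app.invoke({"name": "Ada", "greeting": ""}))  # {'name': 'Ada', 'greeting': 'HELLO, ADA!'}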

Why Vercel AI SDK?

Vercel AI SDK is a TypeScript library designed for building AI-powered applications, especially within frontend frameworks like Next.js, Svelte, and Vue. It streamlines AI development with a standardized approach to prompts, structured data, and tool integration.
AI SDK Core provides functions for text generation, structured data generation, and tool usage. These functions take a standardized approach to prompts and settings, making it easy to work with different models.
- generateText / generateObject: One-shot text or Zod-validated structured output; these are the functions used in the examples below.
- streamText: Streams text and tool calls, perfect for chatbots and content streaming.
- streamObject: Streams structured objects matching a Zod schema, making it easy to generate UI components from JSON-like data.
Vercel AI SDK also includes AI SDK UI, which simplifies building LLM-based interfaces. That’s why it’s my go-to library for developing AI applications on Next.js! 🔥

Workflow Patterns

Workflow is a generic term for orchestrated and repeatable patterns of activity, enabled by the systematic organization of resources into processes that transform materials, provide services, or process information.
The concept of a workflow comes up in everyday life as well: a degree program follows a structured curriculum, and most workplaces follow defined processes.
If you want a more detailed explanation, you can read Anthropic’s research post, “Building Effective Agents.” It is a standard reference for many AI developers and makes good additional reading. The patterns it covers are:
Prompt Chaining (Sequential Processing)
Routing
Parallelization
Orchestration
Evaluation/Feedback Loops
The full code for every pattern is on my GitHub; the source-code repository is also linked at the end of this article.

Setting Up the Workflow

Before we start, we need to set up the framework we will be using. You can choose either TypeScript or Python. Both examples use Groq as the model provider, so don’t forget to install the libraries and set your GROQ_API_KEY first.
Setting up TypeScript with Vercel AI SDK
import 'dotenv/config';

import { generateText, generateObject } from 'ai'; // text generation and structured (JSON) output
import { z } from 'zod'; // schema declaration
import { createGroq } from '@ai-sdk/groq'; // Groq provider

const groq = createGroq({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY ?? '',
});
Python setup with LangGraph
import os
import json
from typing import List, Literal, Optional, TypedDict

from dotenv import load_dotenv
from groq import Groq
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage
from langchain_groq import ChatGroq
from langgraph.graph import END, START, Graph, StateGraph
from pydantic import BaseModel, Field

# Load environment variables
load_dotenv()

# Initialize the Groq client used by the workflow nodes below
groq = Groq(api_key=os.getenv("GROQ_API_KEY"))

# Helper for nodes that expect a LangChain chat model (used in the parallelization example)
def create_groq_model() -> ChatGroq:
    return ChatGroq(model="llama-3.3-70b-versatile", api_key=os.getenv("GROQ_API_KEY"))
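
A quick sanity check that the client and API key work (the prompt is just an example):

resp = groq.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)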

Prompt Chaining

This is the simplest workflow. Prompt chaining breaks a single task into multiple steps, where the output of one prompt becomes the input of the next. The pattern is ideal for tasks with a clear, fixed order of steps.
For example, when creating copywriting for a brand, agent A first crafts the CTA, agent B then assesses the emotional appeal, and agent C checks the fit with the brand tone. Each step is sequential, and the output from one step feeds the next.
async function generateMarketingCopy(input: string) {
  const model = groq('llama-3.3-70b-versatile');

  console.log("Generating initial marketing copy...");
  const { text: copy } = await generateText({
    model,
    prompt: `Write persuasive marketing copy for: ${input}. Focus on benefits and emotional appeal.`,
  });

  console.log("Evaluating marketing copy quality...");
  const { object: qualityMetrics } = await generateObject({
    model,
    schema: z.object({
      hasCallToAction: z.boolean(),
      emotionalAppeal: z.number().min(1).max(10),
      clarity: z.number().min(1).max(10),
    }),
    prompt: `Evaluate this marketing copy for:
    1. Presence of call to action (true/false)
    2. Emotional appeal (1-10)
    3. Clarity (1-10)

    Copy to evaluate: ${copy}`,
  });
Once the copywriting agent is in place, the next step is to verify that the output quality is adequate and to reduce hallucinations. For that, you can add further agents that evaluate the previous agent’s output and then improve on it.
  // If the quality check fails, regenerate with more specific instructions
  if (
    !qualityMetrics.hasCallToAction ||
    qualityMetrics.emotionalAppeal < 7 ||
    qualityMetrics.clarity < 7
  ) {
    console.log("Improving marketing copy based on quality evaluation...");
    const { text: improvedCopy } = await generateText({
      model,
      prompt: `Rewrite this marketing copy with:
      ${!qualityMetrics.hasCallToAction ? '- A clear call to action' : ''}
      ${qualityMetrics.emotionalAppeal < 7 ? '- Stronger emotional appeal' : ''}
      ${qualityMetrics.clarity < 7 ? '- Improved clarity and directness' : ''}

      Original copy: ${copy}`,
    });

    console.log("\nFinal Improved Marketing Copy:\n", improvedCopy);
    console.log("\nQuality Metrics:", qualityMetrics);
    return { copy: improvedCopy, qualityMetrics };
  }

  console.log("\nMarketing Copy (passed quality check):\n", copy);
  return { copy, qualityMetrics };
}

Workflow with LangGraph (Python)

# Define our state schema for tracking the workflow progress
class MarketingState(TypedDict):
    input: str
    initial_copy: str
    quality_metrics: Optional[dict]
    final_copy: str
    messages: list[BaseMessage]

# Define quality metrics schema
class QualityMetrics(BaseModel):
    hasCallToAction: bool
    emotionalAppeal: int
    clarity: int

def generate_initial_copy(state: MarketingState) -> MarketingState:
    """Generates the initial marketing copy based on the input."""
    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{
            "role": "user",
            "content": f"Write persuasive marketing copy for: {state['input']}. Focus on benefits and emotional appeal."
        }]
    )

    state["initial_copy"] = response.choices[0].message.content
    state["final_copy"] = state["initial_copy"]  # Initialize final copy
    return state

def evaluate_copy(state: MarketingState) -> MarketingState:
    """Evaluates the marketing copy against quality metrics."""
    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{
            "role": "user",
            "content": f"""Evaluate this marketing copy for:
            1. Presence of call to action (true/false)
            2. Emotional appeal (1-10)
            3. Clarity (1-10)

            Copy to evaluate: {state['final_copy']}"""
        }]
    )

    # Parse the metrics from the response
    metrics = QualityMetrics(
        hasCallToAction=True,  # You would parse these values from the actual response
        emotionalAppeal=8,
        clarity=8
    )

    state["quality_metrics"] = metrics.dict()
    return state

def improve_copy(state: MarketingState) -> MarketingState:
    """Improves the marketing copy if it doesn't meet quality standards."""
    metrics = state["quality_metrics"]

    # Check if improvements are needed
    if (not metrics['hasCallToAction'] or
            metrics['emotionalAppeal'] < 7 or
            metrics['clarity'] < 7):

        improvement_prompt = f"""Rewrite this marketing copy with:
        {'' if metrics['hasCallToAction'] else '- A clear call to action'}
        {'' if metrics['emotionalAppeal'] >= 7 else '- Stronger emotional appeal'}
        {'' if metrics['clarity'] >= 7 else '- Improved clarity and directness'}

        Original copy: {state['final_copy']}"""

        response = groq.chat.completions.create(
            model="llama-3.3-70b-versatile",
            messages=[{"role": "user", "content": improvement_prompt}]
        )

        state["final_copy"] = response.choices[0].message.content

    return state

def create_marketing_workflow() -> Graph:
    """Creates the marketing copy generation and improvement workflow."""
    workflow = StateGraph(MarketingState)

    # Add nodes
    workflow.add_node("generate", generate_initial_copy)
    workflow.add_node("evaluate", evaluate_copy)
    workflow.add_node("improve", improve_copy)

    # Add edges
    workflow.add_edge("generate", "evaluate")
    workflow.add_edge("evaluate", "improve")

    # Set entry and exit points
    workflow.set_entry_point("generate")
    workflow.set_finish_point("improve")

    return workflow.compile()

def generate_marketing_copy(input_text: str) -> dict:
    """Main function to handle the marketing copy generation process."""
    workflow = create_marketing_workflow()

    result = workflow.invoke({
        "input": input_text,
        "initial_copy": "",
        "quality_metrics": None,
        "final_copy": "",
        "messages": []
    })

    return {
        "initial_copy": result["initial_copy"],
        "final_copy": result["final_copy"],
        "quality_metrics": result["quality_metrics"]
    }
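
A minimal usage sketch for the workflow above (the product description is just an example):

if __name__ == "__main__":
    result = generate_marketing_copy("A reusable stainless-steel water bottle")
    print("Final copy:\n", result["final_copy"])
    print("Quality metrics:", result["quality_metrics"])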

Routing

This pattern lets the model determine the correct processing path based on context and previous results. Routing works well when there are several distinct categories of input that need special handling, so each case gets the most appropriate treatment: the result of the first LLM call determines which path the next step takes, adapting the process to the input’s characteristics.
We first create a routing agent that classifies each input by type and complexity, along with brief reasoning for the classification.
async function handleCustomerQuery(query: string) {
  const model = groq('llama-3.3-70b-versatile');

  // Step 1: Classify the query type
  const { object: classification } = await generateObject({
    model,
    schema: z.object({
      reasoning: z.string(),
      type: z.enum(['general', 'refund', 'technical']),
      complexity: z.enum(['simple', 'complex']),
    }),
    prompt: `Classify this customer query:
    ${query}

    Determine:
    1. Query type (general, refund, or technical)
    2. Complexity (simple or complex)
    3. Brief reasoning for classification`,
  });
The classification result is then routed to one of several specialist agents. Simple, but very useful for customer service, where requests can be directed to the right tools, such as scheduling or escalating to a human, so the response is personalized to the user’s needs.
  // Route based on classification
  // Determine the model and system prompt based on query type and complexity
  const { text: response } = await generateText({
    model:
      classification.complexity === 'simple'
        ? groq('llama3-8b-8192')
        : groq('llama-3.1-8b-instant'),
    system: {
      general:
        'You are an expert customer service agent handling general inquiries.',
      refund:
        'You are a customer service agent specializing in refund requests. Follow company policies and gather necessary information.',
      technical:
        'You are a technical support specialist with in-depth knowledge of the product. Focus on clear, step-by-step troubleshooting.',
    }[classification.type],
    prompt: query,
  });

  return { classification, response };
}

Workflow with LangGraph (Python)

# Define our state schema
class AgentState(TypedDict):
    query: str
    classification: dict
    response: str
    messages: list[BaseMessage]

# Define classification schema
class QueryClassification(BaseModel):
    reasoning: str
    type: Literal["general", "refund", "technical"]
    complexity: Literal["simple", "complex"]

# Step 1: Classify the customer query
def classify_query(state: AgentState) -> AgentState:
    """Classifies the incoming customer query by type and complexity."""
    prompt = f"""Classify this customer query:
    {state['query']}

    Determine:
    1. Query type (general, refund, or technical)
    2. Complexity (simple or complex)
    3. Brief reasoning for classification"""

    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": prompt}]
    )

    # Parse the classification from the response
    classification = QueryClassification(
        reasoning=response.choices[0].message.content,
        type="technical",      # You would parse this from the response
        complexity="simple"    # You would parse this from the response
    )

    state["classification"] = classification.dict()
    return state

# Step 2: Generate response based on classification
def generate_response(state: AgentState) -> AgentState:
    """Generates a response based on the query classification."""
    # Select model based on complexity
    model = "llama3-8b-8192" if state["classification"]["complexity"] == "simple" else "llama-3.1-8b-instant"

    # Select system prompt based on query type
    system_prompts = {
        "general": "You are an expert customer service agent handling general inquiries.",
        "refund": "You are a customer service agent specializing in refund requests. Follow company policies and gather necessary information.",
        "technical": "You are a technical support specialist with in-depth knowledge of the product. Focus on clear, step-by-step troubleshooting."
    }

    response = groq.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompts[state["classification"]["type"]]},
            {"role": "user", "content": state["query"]}
        ]
    )

    state["response"] = response.choices[0].message.content
    return state

# Create the workflow graph
def create_customer_service_workflow() -> Graph:
    """Creates the customer service workflow graph."""
    # Initialize the graph
    workflow = StateGraph(AgentState)

    # Add nodes
    workflow.add_node("classify", classify_query)
    workflow.add_node("respond", generate_response)

    # Add edges
    workflow.add_edge("classify", "respond")

    # Set entry and exit points
    workflow.set_entry_point("classify")
    workflow.set_finish_point("respond")

    return workflow.compile()

# Main execution function
def handle_customer_query(query: str) -> dict:
    """Handles a customer query through the workflow."""
    # Initialize workflow
    workflow = create_customer_service_workflow()

    # Execute workflow
    result = workflow.invoke({
        "query": query,
        "classification": {},
        "response": "",
        "messages": []
    })

    return {
        "classification": result["classification"],
        "response": result["response"]
    }
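
Calling the routing workflow might look like this (the query is illustrative):

if __name__ == "__main__":
    result = handle_customer_query("My app crashes every time I open the settings page.")
    print("Classification:", result["classification"])
    print("Response:\n", result["response"])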

Parallelization

Sometimes several LLM calls can run at the same time and have their outputs combined for a more accurate result. This workflow is effective when a long task can be broken into parts that run concurrently, which also makes it faster. For complex tasks with multiple considerations, LLMs generally perform better when each consideration gets its own focused call.
This workflow has two variations:
Sectioning: Breaking a task into several subtasks that run at the same time.
Voting: Running the same task multiple times to get diverse outputs that can be compared or aggregated.
Just as an IT team splits application development across people who focus on performance, quality, and security, the code-review prompt below is broken into several aspects, so the result not only works but is also performant and secure.
First, we create specialist reviewers that run simultaneously on roughly the same input.
// Run parallel reviews
const [securityReview, performanceReview, maintainabilityReview] =
  await Promise.all([
    generateObject({
      model,
      system:
        'You are an expert in code security. Focus on identifying security vulnerabilities, injection risks, and authentication issues.',
      schema: z.object({
        vulnerabilities: z.array(z.string()),
        riskLevel: z.enum(['low', 'medium', 'high']),
        suggestions: z.array(z.string()),
      }),
      prompt: `Review this code: ${code}`,
    }),
    generateObject({
      model,
      system:
        'You are an expert in code performance. Focus on identifying performance bottlenecks, memory leaks, and optimization opportunities.',
      schema: z.object({
        issues: z.array(z.string()),
        impact: z.enum(['low', 'medium', 'high']),
        optimizations: z.array(z.string()),
      }),
      prompt: `Review this code: ${code}`,
    }),
    generateObject({
      model,
      system:
        'You are an expert in code quality. Focus on code structure, readability, and adherence to best practices.',
      schema: z.object({
        concerns: z.array(z.string()),
        qualityScore: z.number().min(1).max(10),
        recommendations: z.array(z.string()),
      }),
      prompt: `Review this code: ${code}`,
    }),
  ]);
Next, we merge the three review objects and hand them to an aggregator agent that reviews and summarizes them and, if necessary, adds correction tasks to minimize errors.
// Aggregate results using another model call
const reviews = { security: securityReview.object, performance: performanceReview.object, maintainability: maintainabilityReview.object };
const { text: summary } = await generateText({
  model,
  system: 'You are a technical lead summarizing multiple code reviews.',
  prompt: `Synthesize these code review results into a concise summary with key actions:
  ${JSON.stringify(reviews, null, 2)}`,
});

Workflow with LangGraph (Python)

# Define response schemas using Pydantic
class SecurityReview(BaseModel):
    vulnerabilities: List[str] = Field(default=[])
    risk_level: str = Field(default="low")
    suggestions: List[str] = Field(default=[])

class PerformanceReview(BaseModel):
    issues: List[str] = Field(default=[])
    impact: str = Field(default="low")
    optimizations: List[str] = Field(default=[])

class MaintainabilityReview(BaseModel):
    concerns: List[str] = Field(default=[])
    quality_score: int = Field(default=10)
    recommendations: List[str] = Field(default=[])

# Define a function to perform a review
def review_code(model, system_prompt: str, code: str, response_schema):
    """Conducts a specialized review based on the provided system prompt and schema."""
    messages = [
        SystemMessage(content=system_prompt),
        HumanMessage(content=f"Review this code:\n{code}")
    ]
    response = model.invoke(messages)
    # Assumes the model replies with JSON matching the schema
    return response_schema.parse_raw(response.content)

# Define the workflow state
class ReviewState(BaseModel):
    code: str
    security: Optional[SecurityReview] = None
    performance: Optional[PerformanceReview] = None
    maintainability: Optional[MaintainabilityReview] = None
    summary: str = ""

graph = StateGraph(ReviewState)

# Each parallel node returns only the key it updates, so the branches can merge cleanly
def security_review_node(state: ReviewState):
    model = create_groq_model()
    return {"security": review_code(model, "You are an expert in code security...", state.code, SecurityReview)}

def performance_review_node(state: ReviewState):
    model = create_groq_model()
    return {"performance": review_code(model, "You are an expert in code performance...", state.code, PerformanceReview)}

def maintainability_review_node(state: ReviewState):
    model = create_groq_model()
    return {"maintainability": review_code(model, "You are an expert in code quality...", state.code, MaintainabilityReview)}

def summarize_reviews(state: ReviewState):
    model = create_groq_model()
    messages = [
        SystemMessage(content="You are a technical lead summarizing multiple code reviews."),
        HumanMessage(content=f"Synthesize these code review results:\n{json.dumps(state.dict(), indent=2)}")
    ]
    response = model.invoke(messages)
    return {"summary": response.content}

# Add nodes to the graph
graph.add_node("security_review", security_review_node)
graph.add_node("performance_review", performance_review_node)
graph.add_node("maintainability_review", maintainability_review_node)
graph.add_node("summary", summarize_reviews)

# Fan out from START so the three reviews run in parallel
graph.add_edge(START, "security_review")
graph.add_edge(START, "performance_review")
graph.add_edge(START, "maintainability_review")

# All three reviews feed into the summary node
graph.add_edge("security_review", "summary")
graph.add_edge("performance_review", "summary")
graph.add_edge("maintainability_review", "summary")

graph.set_finish_point("summary")

workflow = graph.compile()
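
A usage sketch for the compiled parallel-review graph; with a Pydantic state schema, invoke still accepts a plain dict and returns the final state values:

if __name__ == "__main__":
    result = workflow.invoke({"code": "def add(a, b):\n    return a + b"})
    print(result["summary"])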

Orchestration

In the orchestrator-workers workflow, a first LLM (the orchestrator) breaks the task down, delegates the subtasks, and combines the results into a single output. The orchestrator coordinates the specialized workers, each of which is optimized for a specific subtask.
Think of it as a team with several roles: Business Analyst, Content Strategist, Social Media Manager, and Marketing Analyst. Based on the project manager’s instructions, the work is sized, categorized, and delegated.
async function implementTask(taskRequest: string) {
  const { object: taskPlan } = await generateObject({
    model: groq('llama-3.3-70b-versatile'),
    schema: z.object({
      tasks: z.array(
        z.object({
          purpose: z.string(),
          taskName: z.string(),
          changeType: z.enum(['create', 'modify', 'delete']),
        })
      ),
      estimatedEffort: z.enum(['low', 'medium', 'high']),
    }),
    system: 'You are a Project Manager responsible for designing an efficient task execution strategy.',
    prompt: `Create a work plan for the following task:
    ${taskRequest}`,
  });
After the project manager has produced the plan, the next step reuses the previous parallelization pattern to delegate each task to the right worker.
  const taskChanges = await Promise.all(
    taskPlan.tasks.map(async (task) => {
      // Determine job roles based on task type
      const workerSystemPrompt = {
        create:
          {
            'Audience research': 'You are a Business Analyst. You are responsible for conducting in-depth research on the target audience.',
            'Content creation': 'You are a Content Strategist. You design engaging content strategies tailored to the audience.',
            'Account management': 'You are a Social Media Manager. You manage and optimize social media accounts.',
            'Performance analysis': 'You are a Marketing Analyst. You analyze data and measure the success of marketing strategies.',
          }[task.taskName] || 'You are an expert professional in this field.',
        modify:
          {
            'Account management': 'You are a Social Media Manager. You improve account management strategies to be more effective.',
          }[task.taskName] || 'You are a specialist enhancing task efficiency.',
        delete: 'You are an Operations Manager. You identify unnecessary tasks and remove them efficiently.',
      }[task.changeType];

      const { object: change } = await generateObject({
        model: groq('llama-3.3-70b-versatile'),
        schema: z.object({
          explanation: z.string(),
          actionItems: z.array(z.string()),
        }),
        system: workerSystemPrompt,
        prompt: `Implement changes for the following task:
        - ${task.taskName}

        Purpose of change: ${task.purpose}

        Explain the reason for the change and provide a list of necessary action items.`,
      });

      return {
        task,
        implementation: change,
      };
    })
  );

  return { plan: taskPlan, changes: taskChanges };
}

Workflow with LangGraph (Python)

class Task(BaseModel):
    purpose: str
    task_name: str
    change_type: Literal["create", "modify", "delete"]

class TaskPlan(BaseModel):
    tasks: List[Task]
    estimated_effort: Literal["low", "medium", "high"]

def generate_task_plan(task_request: str) -> TaskPlan:
    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {"role": "system", "content": "You are a Project Manager responsible for designing an efficient task execution strategy. Respond in JSON with a 'tasks' list (each task has 'purpose', 'task_name', 'change_type') and an 'estimated_effort' field."},
            {"role": "user", "content": f"Create a work plan for the following task: {task_request}"}
        ],
        response_format={"type": "json_object"}
    )
    return TaskPlan(**json.loads(response.choices[0].message.content))

def implement_task_change(task: Task) -> str:
    worker_prompts = {
        "create": {
            "Audience research": "You are a Business Analyst responsible for audience research.",
            "Content creation": "You are a Content Strategist designing content strategies.",
            "Account management": "You are a Social Media Manager optimizing accounts.",
            "Performance analysis": "You are a Marketing Analyst measuring strategy success."
        },
        "modify": {
            "Account management": "You are a Social Media Manager improving strategies."
        },
        "delete": "You are an Operations Manager identifying unnecessary tasks."
    }

    # "delete" maps straight to a prompt; "create"/"modify" map to a dict keyed by task name
    role_map = worker_prompts.get(task.change_type, {})
    system_prompt = role_map.get(task.task_name, "You are an expert in this field.") if isinstance(role_map, dict) else role_map

    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Implement changes for the task: {task.task_name}\nPurpose: {task.purpose}\nExplain why and list action items."}
        ]
    )
    return response.choices[0].message.content
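
A short driver that runs the orchestrator and then each worker (the campaign brief is illustrative):

if __name__ == "__main__":
    plan = generate_task_plan("Launch an Instagram campaign for a local coffee shop")
    print("Estimated effort:", plan.estimated_effort)
    for task in plan.tasks:
        print(f"\n[{task.change_type}] {task.task_name}")
        print(implement_task_change(task))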

Evaluation/Feedback Loops

It’s not easy to ask ChatGPT to write a thesis in one go, because it’s hard for a model to critique its own output. To improve the quality of the results, we need feedback from someone else. We can work through the parts one at a time, for example the background, then the content, and finally the conclusion, and we need an evaluator to review each result and provide feedback.
async function writeArticleWithFeedback(topic: string) {
  const MAX_ITERATIONS = 3;
  let iterations = 0;

  // Initial article generation
  const { text: article } = await generateText({
    model: groq('llama-3.1-8b-instant'),
    system: 'You are a writer. Your task is to write a concise article in only 6 sentences! You might get additional feedback from your supervisor!',
    prompt: `Write a 6-sentence article on the topic: ${topic}`,
  });

  let currentArticle = article;

  while (iterations < MAX_ITERATIONS) {
    // Evaluate current article
    const { object: evaluation } = await generateObject({
      model: groq('llama-3.3-70b-versatile'), // use a larger model to evaluate
      schema: z.object({
        qualityScore: z.number().min(1).max(10),
        clearAndConcise: z.boolean(),
        engaging: z.boolean(),
        informative: z.boolean(),
        specificIssues: z.array(z.string()),
        improvementSuggestions: z.array(z.string()),
      }),
      system: "You are a writing supervisor! Your agency specializes in concise articles! Your task is to evaluate the given article and provide feedback for improvements! Repeat until the article meets your requirements!",
      prompt: `Evaluate this article:

      Article: ${currentArticle}

      Consider:
      1. Overall quality
      2. Clarity and conciseness
      3. Engagement level
      4. Informative value`,
    });

    // Stop once the evaluator is satisfied
    if (
      evaluation.qualityScore >= 8 &&
      evaluation.clearAndConcise &&
      evaluation.engaging &&
      evaluation.informative
    ) {
      break;
    }

    // Generate improved article based on feedback
    const { text: improvedArticle } = await generateText({
      model: groq('llama-3.3-70b-versatile'), // use a larger model to rewrite
      system: 'You are an expert article writer.',
      prompt: `Improve this article based on the following feedback:
      ${evaluation.specificIssues.join('\n')}
      ${evaluation.improvementSuggestions.join('\n')}

      Current Article: ${currentArticle}`,
    });

    currentArticle = improvedArticle;
    iterations++;
  }

  return {
    finalArticle: currentArticle,
    iterationsRequired: iterations,
  };
}

Workflow with LangGraph (Python)

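A minimal LangGraph sketch of the same evaluator-optimizer loop, reusing the groq client from the setup section. As in the earlier Python examples, the evaluator’s score is a placeholder you would parse from the model’s actual response, and the names (ArticleState, should_continue) are illustrative.

# State for the write -> evaluate -> improve loop
class ArticleState(TypedDict):
    topic: str
    article: str
    quality_score: int
    iterations: int

MAX_ITERATIONS = 3

def write_article(state: ArticleState) -> ArticleState:
    """Generates (or regenerates) a short article on the topic."""
    response = groq.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": f"Write a 6-sentence article on the topic: {state['topic']}"}]
    )
    state["article"] = response.choices[0].message.content
    return state

def evaluate_article(state: ArticleState) -> ArticleState:
    """Scores the current draft with a larger model."""
    response = groq.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": f"Rate this article from 1 to 10 and explain why:\n{state['article']}"}]
    )
    state["quality_score"] = 8  # You would parse the real score from response.choices[0].message.content
    state["iterations"] += 1
    return state

def should_continue(state: ArticleState) -> str:
    """Loops back to the writer until the draft is good enough or the iteration cap is hit."""
    if state["quality_score"] >= 8 or state["iterations"] >= MAX_ITERATIONS:
        return "end"
    return "write"

workflow = StateGraph(ArticleState)
workflow.add_node("write", write_article)
workflow.add_node("evaluate", evaluate_article)
workflow.add_edge("write", "evaluate")
workflow.add_conditional_edges("evaluate", should_continue, {"write": "write", "end": END})
workflow.set_entry_point("write")
feedback_loop = workflow.compile()

result = feedback_loop.invoke({"topic": "Why small teams ship faster", "article": "", "quality_score": 0, "iterations": 0})
print(result["article"], "\nIterations:", result["iterations"])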

When to use TypeScript and Python

The Vercel AI SDK and LangGraph both have fairly complete, well-organized documentation. Besides them, other frameworks worth considering include:
- Agentic, an AI agent standard library that works with any LLM and the TS AI SDK;
- Agno, an open-source Python framework for building agentic systems;
- Rivet, a drag-and-drop GUI LLM workflow builder; and
- CrewAI, a Python framework for orchestrating collaborative AI agents.
Using a framework lets us reuse standard building blocks such as tools, memory, and components, and connect them by passing a few parameters.
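
For example, here is a sketch of the “tools” pattern using LangChain’s tool decorator bound to a Groq chat model; the get_weather tool is a made-up example.

from langchain_core.tools import tool
from langchain_groq import ChatGroq

@tool
def get_weather(city: str) -> str:
    """Return a (fake) weather report for the given city."""
    return f"It is sunny in {city}."

model = ChatGroq(model="llama-3.3-70b-versatile")  # reads GROQ_API_KEY from the environment
model_with_tools = model.bind_tools([get_weather])

reply = model_with_tools.invoke("What's the weather in Mataram?")
print(reply.tool_calls)  # the model's structured request to call get_weather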
While Python has traditionally been the dominant language for AI and ML development thanks to its extensive libraries and frameworks, TypeScript, a superset of JavaScript, has emerged as a strong contender for building scalable, maintainable, and efficient AI applications.
The choice between TypeScript and Python depends on your needs. If you need to ship a full-stack application quickly, TypeScript is sufficient and avoids adding another stack. If you want a separate service or a more complex agent, Python is the better fit because of its broader AI/ML library and framework support, while TypeScript shines when the focus is integration with a React-based stack, especially for full-stack developers.
For both Python and TypeScript, you can check out the source code, along with examples for other frameworks, in the linked repository.
Posted Jun 14, 2025
