In today’s rapidly evolving digital environment, e-commerce platforms must maintain flexibility, scalability, and intelligence to keep businesses competitive. SaySo, the principal developer of the Ashley X Descend auction platform for Ashley Homestore, has turned to Medusa—a headless, open-source e-commerce platform—to meet these demands. By coupling Medusa’s dynamic workflows with AI expertise from Lambda Curry, enterprise retailers like Ashley Homestore are experiencing more efficient operations and cutting-edge innovation.
Why Medusa?
Medusa 2 is the latest iteration of the platform, designed for performance and developer experience. It supports microservices, modular tooling, and an API-first approach, making it straightforward to integrate with third-party services—including Large Language Models (LLMs) and other AI-driven tools. SaySo leverages Medusa’s flexibility to build powerful e-commerce features into Descend.
Key Capabilities
- Headless Architecture: Decouple your front-end from your back-end for granular customization of user experiences.
- Modular Setup: Use or extend only the features you need, integrating seamlessly with external APIs, libraries, or databases (see the configuration sketch after this list).
- Robust Workflows: Simplify complex processes like bulk imports, real-time order management, and data transformations.
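For instance, Medusa's modular setup means new capabilities plug in through configuration rather than forks or patches. A minimal sketch, assuming a standard Medusa 2 project (the custom module path is a hypothetical placeholder, not a real package):

// medusa-config.ts (illustrative)
import { loadEnv, defineConfig } from "@medusajs/framework/utils";

loadEnv(process.env.NODE_ENV || "development", process.cwd());

module.exports = defineConfig({
  projectConfig: {
    databaseUrl: process.env.DATABASE_URL,
  },
  modules: [
    // Hypothetical custom module; replace with your own implementation
    { resolve: "./src/modules/ai-categorization" },
  ],
});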
To see some of these capabilities in action, check out my recent demo from SXSW, where I showcase how AI-powered workflows integrate with Medusa to enhance e-commerce automation. Watch the video below!
SaySo’s Collaboration with Lambda Curry
SaySo sought to enhance the Descend platform with sophisticated AI features to effectively handle Ashley Homestore’s massive inventory and unique requirements. To accomplish this, they partnered with Lambda Curry, a technology agency specializing in AI solutions on top of Medusa. Working together, they have:
- Automated Product Categorization: Leveraged ML models to sort and tag products efficiently.
- Improved Search and Discovery: Implemented NLP-driven enhancements for more intuitive product lookups.
- Optimized Workflows: Utilized tools like Langfuse to monitor and debug real-time processes, minimizing downtime.
View the storefront here: https://auction.ashleyhomestore.ca/
LangGraph’s (New) Functional API for AI Workflows
LangGraph now offers a Functional API (beta) that streamlines AI-driven workflows without requiring you to manage nodes and edges explicitly. It’s especially handy for quick prototyping, adding human-in-the-loop steps, or simply persisting state between runs.
Example Using the Functional API
Below is an illustration of how you can define a task to categorize products using gpt-4o-mini while enforcing a Zod schema for structured output, then wrap it in an entrypoint for easy invocation.
// src/langgraph/ai-categories.ts
import { entrypoint, task, MemorySaver } from "@langchain/langgraph";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// 1. Define a Zod schema for the structured response
const CategorySchema = z.object({
  category: z.string(),
  confidence: z.number().min(0).max(1).optional(),
});

// 2. Instantiate the ChatOpenAI model with structured output.
//    Here we specify model = "gpt-4o-mini" and attach the Zod schema.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.7, // Optional configuration
}).withStructuredOutput(CategorySchema);

// 3. A helper function that calls our ChatOpenAI model
async function categorizeWithLLM(
  productTitle: string
): Promise<z.infer<typeof CategorySchema>> {
  // We craft a system/user message pair to guide the categorization
  const messages = [
    {
      role: "system",
      content: "You are a product categorization assistant. Return valid JSON.",
    },
    {
      role: "user",
      content: `Please categorize this product: "${productTitle}".\nReturn a JSON object with { category: string, confidence: number }`,
    },
  ];
  // The model parses and validates the result against CategorySchema
  const response = await model.invoke(messages);
  // Response is a typed object matching CategorySchema
  return response;
}

// 4. Define a task that uses the above helper
const categorizeProduct = task({ name: "categorizeProduct" }, async (title: string) => {
  const result = await categorizeWithLLM(title);
  // If the model fails or returns nonsense, fall back to safe defaults
  const category = result.category || "Uncategorized";
  const confidence = result.confidence ?? 0.0;
  return { category, confidence };
});

// 5. Create an entrypoint as the main workflow function.
//    The MemorySaver checkpointer is optional and enables state persistence.
const checkpointer = new MemorySaver();

export const categorizeEntrypoint = entrypoint(
  { name: "categorizeEntrypoint", checkpointer },
  async (title: string) => {
    // Execute the categorizeProduct task
    const result = await categorizeProduct(title);
    // Return the final result
    return {
      message: `Result of categorization for '${title}': ${result.category} (confidence: ${result.confidence})`,
    };
  }
);
Invoking the Entrypoint
You could expose this entrypoint in a Medusa API route, for instance:
// src/api/ai-categories/route.ts
import type { MedusaRequest, MedusaResponse } from "@medusajs/framework/http";
import { categorizeEntrypoint } from "../../langgraph/ai-categories";

export async function GET(req: MedusaRequest, res: MedusaResponse) {
  // In real usage, you'd parse the product title out of the query or body
  const title = (req.query.title as string) ?? "Generic Product";

  // Optionally set a thread_id so the same workflow can be resumed later
  const config = {
    configurable: {
      thread_id: "ai_categorize_thread",
    },
  };

  // Now call our entrypoint
  const output = await categorizeEntrypoint.invoke(title, config);
  return res.json({ data: output });
}
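With the Medusa server running on its default port, a quick client call exercises the route end to end. A minimal sketch (the product title is just an example):

// smoke-test.ts (illustrative)
const res = await fetch(
  "http://localhost:9000/ai-categories?title=" +
    encodeURIComponent("Mid-Century Oak Nightstand")
);
const { data } = await res.json();
console.log(data.message); // e.g. "Result of categorization for '…': …"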
Key Terminology
- CategorySchema: Zod schema for typed, validated AI responses.
- withStructuredOutput(...): Ensures the ChatOpenAI model returns JSON that matches the schema.
- task(...): Wraps a single asynchronous unit of work that can be retried, checkpointed, or even interrupted.
- entrypoint(...): Defines a top-level workflow function where you can combine multiple tasks.
- gpt-4o-mini: A smaller, lower-cost OpenAI model used here in place of a full-size GPT-4-class model; swap in a larger model if you need higher accuracy.
Building an AI Workflow in Medusa
Besides LangGraph’s functional approach, Medusa 2 provides a built-in workflow engine for orchestrating complex tasks across multiple services. You can blend the two: let Medusa handle top-level commerce workflows and defer LLM-driven logic to dedicated LangGraph tasks.
High-Level Steps
- Medusa Workflow: Initiates product updates (e.g. new products from Ashley Homestore).
- LangGraph Task: Categorizes product data using an LLM.
- Update in Medusa: Persists results in the Medusa product record.
This synergy allows you to harness the best of both worlds: the reliability and domain knowledge from Medusa for commerce operations and the streamlined AI pipeline from LangGraph.
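Here is a minimal sketch of that flow using Medusa 2’s workflows SDK. The step, workflow, and metadata key names are illustrative assumptions, not the actual Descend implementation:

// src/workflows/categorize-product.ts (illustrative)
import {
  createStep,
  createWorkflow,
  StepResponse,
  WorkflowResponse,
} from "@medusajs/framework/workflows-sdk";
import { Modules } from "@medusajs/framework/utils";
import { categorizeEntrypoint } from "../langgraph/ai-categories";

type Input = { productId: string; title: string };

// A single step defers the LLM call to LangGraph, then persists the result
const categorizeAndTagStep = createStep(
  "categorize-and-tag",
  async (input: Input, { container }) => {
    const output = await categorizeEntrypoint.invoke(input.title, {
      configurable: { thread_id: `categorize_${input.productId}` },
    });
    // Store the categorization on the Medusa product record as metadata
    const productModule = container.resolve(Modules.PRODUCT);
    await productModule.updateProducts(input.productId, {
      metadata: { ai_categorization: output.message },
    });
    return new StepResponse(output);
  }
);

export const categorizeProductWorkflow = createWorkflow(
  "categorize-product",
  (input: Input) => {
    const result = categorizeAndTagStep(input);
    return new WorkflowResponse(result);
  }
);

From an API route or subscriber, the workflow then runs via await categorizeProductWorkflow(req.scope).run({ input: { productId, title } }).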
Debugging & Monitoring with Langfuse
Tools like Langfuse provide dashboards to trace and debug AI output:
- Identify bottlenecks or unclear model instructions.
- Quickly refine prompts.
- Enhance reliability in high-volume scenarios such as Ashley Homestore’s large product catalog.
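As a concrete example, Langfuse ships a LangChain-compatible callback handler, and because LangGraph builds on LangChain’s callback system, the same handler traces the workflow above. A minimal sketch, assuming the langfuse-langchain package and your own Langfuse keys in environment variables:

// src/langgraph/observability.ts (illustrative)
import { CallbackHandler } from "langfuse-langchain";
import { categorizeEntrypoint } from "./ai-categories";

const langfuseHandler = new CallbackHandler({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  baseUrl: process.env.LANGFUSE_BASE_URL, // e.g. https://cloud.langfuse.com
});

// Attaching the handler traces every LLM call inside the workflow,
// so prompts, latencies, and outputs show up in the Langfuse dashboard.
export async function categorizeWithTracing(title: string) {
  return categorizeEntrypoint.invoke(title, {
    callbacks: [langfuseHandler],
    configurable: { thread_id: "ai_categorize_thread" },
  });
}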
From the vantage point of SaySo, building the Descend platform on Medusa’s flexible, modular foundation has delivered transformative results for Ashley Homestore. The injection of AI-powered features from Lambda Curry—ranging from advanced NLP to workflow optimization—enables large retailers to efficiently manage and grow their online catalogs. By introducing automated categorization, advanced search, and real-time monitoring, Descend remains a forward-thinking marketplace solution.
Whether you’re a growing brand or an established retailer, combining Medusa with AI superpowers offered by teams like SaySo and Lambda Curry is a proven path to future-proof your e-commerce strategy. Ready to explore how your business can benefit? Reach out today and let us help you harness the true potential of modern commerce technology.