AI Developer Challenge: Creative Content Generation Pipeline

Ariya Birjandi

🚀 The AI Developer Challenge

Make Something Insanely Great

Welcome. This isn't just a coding task. This is a mission. A calling for the bold and curious: those who dare to think differently. If you're ready to build something magical, something powerful, something insanely great, read on.

🌟 The Vision

Imagine this: a user types a simple idea:
"Make me a glowing dragon standing on a cliff at sunset."
And your app...
Understands the request using a local LLM.
Generates stunning visuals from text.
Transforms that image into an interactive 3D model.
Remembers it. Forever.
You're not building an app. You're building a creative partner.

🎯 The Mission

Create an intelligent, end-to-end pipeline powered by Openfabric and a locally hosted LLM:

Step 1: Understand the User

Use a local LLM like DeepSeek or Llama to:
Interpret prompts
Expand them creatively
Drive meaningful, artistic input into the generation process
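The steps above can be sketched as a small prompt-expansion layer. This is a minimal illustration, not part of the starter code: the instruction template, the function names, and the injectable `generate` callable are all assumptions, chosen so any local backend (llama-cpp-python, a local DeepSeek checkpoint, etc.) can be plugged in.

```python
# Sketch of Step 1: expanding a terse user idea with a local LLM.
# `generate` is any callable str -> str backed by a local model;
# the template wording below is illustrative only.

EXPAND_TEMPLATE = (
    "You are a creative art director. Expand this idea into a rich, "
    "detailed visual description suitable for an image generator:\n{prompt}"
)

def expand_prompt(generate, user_prompt: str) -> str:
    """Turn a short user idea into a vivid text-to-image prompt."""
    expanded = generate(EXPAND_TEMPLATE.format(prompt=user_prompt)).strip()
    # Fall back to the raw prompt if the model returns nothing useful
    return expanded or user_prompt

# Example with a stand-in "model" (swap in a real local LLM call):
fake_llm = lambda p: "A glowing dragon, molten scales, cliff edge, golden sunset light"
print(expand_prompt(fake_llm, "a glowing dragon on a cliff at sunset"))
```

Keeping the model behind a plain callable makes the pipeline testable without loading weights.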

Step 2: Bring Ideas to Life

Chain two Openfabric apps together:
Text to Image App ID: f0997a01-d6d3-a5fe-53d8-561300318557
Image to 3D App ID: 69543f29-4d41-4afc-7f29-3d51591f11eb
Use their manifest and schema dynamically to structure requests.
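The chaining in Step 2 can be outlined as one function. The call/response shapes here follow the hint in main.py (`stub.call(app_id, payload, user)` returning a dict with a `'result'` key); the `'input_image'` key for the second app is an assumption, so check the app's manifest and schema for the real field names.

```python
# Sketch of Step 2: chaining the two Openfabric apps.
# App IDs are the ones given in the brief; payload key names for the
# second call are assumptions to be replaced by the real schema.

TEXT_TO_IMAGE = 'f0997a01-d6d3-a5fe-53d8-561300318557'
IMAGE_TO_3D = '69543f29-4d41-4afc-7f29-3d51591f11eb'

def run_pipeline(stub, prompt: str, user: str = 'super-user') -> bytes:
    """Prompt -> image -> 3D model, returning the 3D payload as bytes."""
    # 1) Text to image
    image_result = stub.call(TEXT_TO_IMAGE, {'prompt': prompt}, user)
    image_bytes = image_result.get('result')

    # 2) Image to 3D ('input_image' key is a hypothetical field name)
    model_result = stub.call(IMAGE_TO_3D, {'input_image': image_bytes}, user)
    return model_result.get('result')
```

Taking the `stub` as a parameter keeps the pipeline unit-testable with a fake stub.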

Step 3: Remember Everything

Build memory like it matters.
🧠 Short-Term: Session context during a single interaction
💾 Long-Term: Persistence across sessions using SQLite, Redis, or flat files
Let the AI recall things like:
"Generate a new robot like the one I created last Thursday, but this time with wings."
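As one possible shape for the long-term store, here is a stdlib-only SQLite sketch. The schema and the simple keyword `LIKE` lookup are assumptions; similarity search with FAISS/ChromaDB is the bonus-point upgrade.

```python
import sqlite3
import time

# Sketch of Step 3 long-term memory: SQLite table of past creations.
# Table layout and search strategy are illustrative assumptions.

def open_memory(path="memory.db"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS creations (
                    id INTEGER PRIMARY KEY,
                    prompt TEXT NOT NULL,
                    expanded TEXT,
                    asset_path TEXT,
                    created_at REAL)""")
    return db

def remember(db, prompt, expanded, asset_path):
    """Persist one prompt -> asset record with a timestamp."""
    db.execute(
        "INSERT INTO creations (prompt, expanded, asset_path, created_at) "
        "VALUES (?, ?, ?, ?)",
        (prompt, expanded, asset_path, time.time()))
    db.commit()

def recall(db, keyword):
    """Find past creations mentioning a keyword, newest first."""
    return db.execute(
        "SELECT prompt, asset_path FROM creations "
        "WHERE prompt LIKE ? ORDER BY created_at DESC",
        (f"%{keyword}%",)).fetchall()
```

With this in place, "like the one I created last Thursday" becomes a `recall(db, "robot")` lookup plus a date filter on `created_at`.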

🛠 The Pipeline

User Prompt
↓
Local LLM (DeepSeek or LLaMA)
↓
Text-to-Image App (Openfabric)
↓
Image Output
↓
Image-to-3D App (Openfabric)
↓
3D Model Output
Simple. Elegant. Powerful.

📦 Deliverables

What we expect:
✅ Fully working Python project
✅ README.md with clear instructions
✅ Prompt → Image → 3D working example
✅ Logs or screenshots
✅ Memory functionality (clearly explained)

🧠 What We're Really Testing

Your grasp of the Openfabric SDK (Stub, Remote, schema, manifest)
Your creativity in prompt-to-image generation
Your engineering intuition with LLMs
Your ability to manage context and memory
Your attention to quality: code, comments, and clarity

🚀 Bonus Points

🎨 Visual GUI with Streamlit or Gradio
🔍 FAISS/ChromaDB for memory similarity
🗂 Local browser to explore generated 3D assets
🎤 Voice-to-text interaction
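To make the similarity-recall idea concrete, here is a toy bag-of-words cosine similarity over stored prompts. Everything here is a stand-in: a real submission would replace it with FAISS or ChromaDB over proper embeddings.

```python
import math
from collections import Counter

# Toy illustration of similarity recall: bag-of-words cosine similarity.
# A stand-in for FAISS/ChromaDB with real embeddings.

def vectorize(text):
    """Tokenize into a sparse term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, past_prompts):
    """Return the stored prompt most similar to the query."""
    qv = vectorize(query)
    return max(past_prompts, key=lambda p: cosine(qv, vectorize(p)))
```

Swapping `vectorize` for an embedding model and `most_similar` for an index lookup is the only change needed to upgrade this to FAISS or ChromaDB.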

✨ Example Experience

Prompt:
"Design a cyberpunk city skyline at night."
→ LLM expands it into vivid, textured visual descriptions
→ Text-to-Image App renders a cityscape
→ Image-to-3D App converts it into a depth-aware 3D model
→ The system remembers the request for remixing later
That's not automation. That's imagination at scale.

💡 Where to start

You'll find the project structure already set up; the entry point is in the main.py file.
############################################################
# Execution callback function
############################################################
def execute(model: AppModel) -> None:
    """
    Main execution entry point for handling a model pass.

    Args:
        model (AppModel): The model object containing request and response structures.
    """

    # Retrieve input
    request: InputClass = model.request

    # Retrieve user config
    user_config: ConfigClass = configurations.get('super-user', None)
    logging.info(f"{configurations}")

    # Initialize the Stub with app IDs
    app_ids = user_config.app_ids if user_config else []
    stub = Stub(app_ids)

    # ------------------------------
    # TODO : add your magic here
    # ------------------------------

    # Prepare response
    response: OutputClass = model.response
    response.message = f"Echo: {request.prompt}"
Given the schema, the stub implementation, and all the other details, you should be able to figure out how eventing works. As an extra hint (if needed), here is an example of calling an app, getting the value, and saving it as an image:

    # Call the Text to Image app
    object = stub.call('c25dcd829d134ea98f5ae4dd311d13bc.node3.openfabric.network', {'prompt': 'Hello World!'}, 'super-user')
    image = object.get('result')

    # Save to file
    with open('output.png', 'wb') as f:
        f.write(image)

How to start

The application can be executed in two different ways:
locally, by running start.sh
or in a Docker container, using the Dockerfile
If all is fine, you should be able to access the application at http://localhost:8888/swagger-ui/#/App/post_execution and see the following screen:

Ground Rules

Step up with any arsenal (read: libraries or packages) you believe in, but remember:
👎 External services like ChatGPT are off-limits. Stand on your own.
👎 Plagiarism is for the weak. Forge your own path.
👎 A broken app equals failure. Non-negotiable.

This Is It

We're not just evaluating a project; we're judging your potential to revolutionize our landscape. A half-baked app won't cut it.
We're zeroing in on:
πŸ‘ Exceptional documentation.
πŸ‘ Code that speaks volumes.
πŸ‘ Inventiveness that dazzles.
πŸ‘ A problem-solving beast.
πŸ‘ Unwavering adherence to the brief

Posted Jun 24, 2025

Created an AI-driven pipeline for text-to-3D model generation using Openfabric and local LLMs.