Just spent the last few days exploring Cloudflare Workers AI and decided to build something practical with it.
Built a simple voice-to-todo app with Expo on the frontend. Basically, you speak into your phone, the audio hits a Workers endpoint that runs Whisper for transcription, and the transcript gets piped through an LLM to figure out what you're actually asking and return structured tasks. All the heavy lifting happens on Workers, which honestly handles it pretty smoothly.
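For anyone curious what that pipeline looks like, here's a minimal sketch of a Worker doing both steps with the Workers AI binding. This is not the actual code from the repos below; the model names (`@cf/openai/whisper`, `@cf/meta/llama-3-8b-instruct`), the exact JSON task shape, and the `extractTasks` helper are all my assumptions for illustration.

```typescript
// Hypothetical sketch: transcribe audio, then ask an LLM for structured tasks.
// Assumes an `AI` binding configured in wrangler.toml.
interface Env {
  AI: { run(model: string, input: unknown): Promise<any> };
}

// Pull a JSON array of tasks out of the LLM's reply, tolerating any
// surrounding prose the model adds around the JSON.
export function extractTasks(llmText: string): Array<{ title: string; due?: string }> {
  const match = llmText.match(/\[[\s\S]*\]/);
  if (!match) return [];
  try {
    return JSON.parse(match[0]);
  } catch {
    return [];
  }
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Raw audio bytes posted from the Expo app.
    const audio = new Uint8Array(await request.arrayBuffer());

    // Step 1: speech-to-text with Whisper on Workers AI.
    const { text } = await env.AI.run("@cf/openai/whisper", {
      audio: [...audio],
    });

    // Step 2: have an LLM turn the transcript into structured todo items.
    const { response } = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
      messages: [
        {
          role: "system",
          content:
            "Extract todo items from the user's speech. " +
            'Reply with only a JSON array of {"title", "due"} objects.',
        },
        { role: "user", content: text },
      ],
    });

    return Response.json({ transcript: text, tasks: extractTasks(response) });
  },
};
```

The regex-then-parse step in `extractTasks` is just a cheap guard against the model wrapping its JSON in chatty prose; a stricter setup could use a JSON-mode or schema-constrained response instead.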
Nothing fancy, just a fun way to see how LLMs and speech processing can work together at the edge.
https://github.com/muhammadazhariqbal/schedule-ai-backend
https://github.com/muhammadazhariqbal/sayluno