Alexia Valenzuela's Work

Fixing & building AI apps with clean UX + reliable outputs


Framer Website Build

Overview
This project focused on translating a fully designed website into a live, responsive Framer build within a tight deadline. The design, structure, and copy were already finalized; the goal was precise implementation, strong performance, and clean delivery without compromising visual quality.

My Role
I handled the full execution layer inside Framer, including:
- Translating static designs into responsive layouts
- Structuring reusable components
- Implementing interactions and transitions
- Ensuring consistency in spacing, typography, and hierarchy
- Final QA and publishing

This was a purely build-focused role, prioritizing accuracy, speed, and polish.

Execution Approach
To ensure a smooth, fast delivery, I followed a structured approach:
- Design audit: reviewed layout, spacing systems, and component patterns before building
- Component system setup: built reusable sections to maintain consistency and speed up implementation
- Responsive build: designed across breakpoints early, not as an afterthought
- Interaction layer: added subtle motion and transitions to elevate the experience without overcomplicating it
- Final QA and polish: checked alignment, spacing, responsiveness, and performance before publishing

Outcome
- Delivered a fully functional, published Framer site within the deadline
- Maintained high visual accuracy to the original design
- Ensured clean responsiveness across devices
- Created a polished, production-ready experience

The result was a smooth transition from design to live product with no execution gaps.

Key Strength
I focus on the execution stage, where most projects lose quality. Instead of treating development as a technical step, I approach it as a continuation of design, where spacing, timing, and responsiveness are just as important as layout. That's what allows me to deliver builds that feel finished, not just functional.
DoseCTRL — Clinical UX System (Biotech Aesthetic Layer)

One-liner
A biotech-inspired interface system that blends clinical clarity with premium, sensory-driven design.

Problem
Health tracking tools often feel either overly clinical or overly generic, failing to build trust or emotional engagement. Users need an interface that feels both medically credible and personally engaging.

Solution
DoseCTRL introduces a visual system inspired by molecular structures, liquid forms, and soft clinical environments. The UI balances precision and calm, reinforcing trust while maintaining usability.

Experience System
- Visual language derived from glass, liquid, and molecular structures
- Soft gradients and translucency to create depth without distraction
- Minimal UI layers to reduce cognitive load
- Interaction flow designed for speed and repetition

Key Interaction Moments
- Fast log entry with minimal fields and immediate feedback
- Timeline updates instantly after each action
- Subtle visual hierarchy guiding attention without overwhelm
- Reminder prompts that re-engage users without friction

Visual / Aesthetic Direction
- Neutral base (off-white, beige) with soft biotech blue accents
- Glassmorphism elements to create a premium clinical feel
- Generous spacing and clean typography for clarity
- Subtle depth and shadow for a tactile interface

Technical Considerations
- Mobile-first responsive design (Next.js + Tailwind)
- Component-based architecture for scalability
- Lightweight rendering for fast interactions
- Structured data model supporting future expansion

Outcome / Impact
The interface elevates the perception of peptide tracking from a utility tool to a trusted system. It creates a foundation where users feel both guided and in control, increasing long-term engagement and retention.
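The "structured data model" and "fast log entry with minimal fields" points above could be sketched as a small TypeScript module. All names here (`DoseEntry`, `createEntry`, the field set) are illustrative assumptions, not the actual DoseCTRL schema:

```typescript
// Hypothetical sketch of a structured data model for fast dose logging.
// Field names and validation rules are assumptions for illustration only.

interface DoseEntry {
  id: string;
  substance: string;   // what was logged
  amountMg: number;    // dose amount in milligrams
  loggedAt: number;    // Unix timestamp (ms)
  note?: string;       // optional free-form context, never required
}

// Minimal-fields entry creation: only substance and amount are required,
// mirroring "fast log entry with minimal fields and immediate feedback".
function createEntry(
  substance: string,
  amountMg: number,
  now: () => number = Date.now,
): DoseEntry {
  if (!substance.trim() || amountMg <= 0) {
    throw new Error("substance and a positive amount are required");
  }
  const ts = now();
  return {
    id: `${substance.trim()}-${ts}`,
    substance: substance.trim(),
    amountMg,
    loggedAt: ts,
  };
}

// "Timeline updates instantly after each action": newest entries first.
// The immutable update keeps React re-rendering predictable.
function prependToTimeline(timeline: DoseEntry[], entry: DoseEntry): DoseEntry[] {
  return [entry, ...timeline];
}
```

Keeping entry creation pure (time injected via `now`) also makes the logging layer trivially testable, which supports the "structured data model supporting future expansion" goal.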
Trivium Cohort — A Multi-Role Learning Ecosystem

Overview
Trivium Cohort is a connected educational platform designed to unify students, parents, and teachers in a shared digital environment. Instead of treating each role as separate, the system creates a cohort-based experience where progress, communication, and engagement are continuously visible and interactive. The platform transforms traditional education workflows into a dynamic, role-specific experience, blending structure, gamification, and emotional engagement.

⸻

🧠 Core Concept
A three-sided system where each user experiences the same data differently through tailored UI/UX.

⸻

👩‍🏫 Teacher Experience (Tools + Control Layer)
Purpose: Provide clarity, oversight, and control.
UI/UX Direction:
• Clean, dashboard-driven interface
• Data visualization (progress tracking, completion rates)
• Task and curriculum management tools
• Cohort-level insights
Key Features:
• Assign tasks and lesson plans
• View student progress and trends
• Generate worksheets and structured content
• Monitor engagement across the cohort
👉 Feels like: Command center + analytics dashboard

⸻

🧒 Student Experience (Immersive + Game-Like)
Purpose: Make learning feel interactive, motivating, and alive.
UI/UX Direction:
• 3D-inspired or spatial interface
• "World-like" navigation (classroom, tasks, progress zones)
• Gamified progression system
Key Features:
• Interactive task completion ("quests")
• Visual progress (levels, points, achievements)
• Direct connection to teacher and parent
• Two-player interactive experiences with a parent
👉 Feels like: A playable learning world

⸻

👨‍👩‍👧 Parent Experience (Emotional + Engaged)
Purpose: Turn passive observation into active participation.
UI/UX Direction:
• Soft, immersive digital environment
• Less "gamey," more guided and rewarding
• Focus on emotional reinforcement and visibility
Key Features:
• Real-time activity feed (child progress, achievements)
• Interactive engagement tools (encouragement, participation)
• Shared experiences with the child (light gamification)
• Clear visibility into growth and development
👉 Feels like: A nurturing, interactive support space

⸻

🔁 Core System Loop
1. Teacher assigns a task
2. Student completes the task (gamified interaction)
3. Progress is updated and visualized
4. Parent receives an update and engages
5. Student receives feedback and motivation
→ The loop reinforces consistency, connection, and accountability

⸻

💡 Differentiation
Unlike traditional education tools, Trivium Cohort:
• Designs three completely different UX layers for the same system
• Integrates family engagement directly into the product loop
• Uses gamification selectively (fun for students, meaningful for parents, structured for teachers)
• Treats learning as a shared experience, not an isolated task

⸻

🎯 Outcome
A platform where:
• Students feel motivated
• Parents feel involved
• Teachers feel in control
All within a single, cohesive system.
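The core idea of one data layer rendered three ways can be illustrated with a small TypeScript sketch: a single task-completion record projected into role-specific view models. All names here are hypothetical, not taken from the Trivium codebase:

```typescript
// Sketch of "same data, three tailored views": one completion event,
// three role-specific projections. Names are illustrative assumptions.

interface TaskCompletion {
  studentName: string;
  taskTitle: string;
  score: number;        // 0..100
  pointsEarned: number; // gamified reward for the student layer
}

// Teacher layer: analytics-oriented, dashboard framing.
function teacherView(c: TaskCompletion): string {
  return `${c.studentName} completed "${c.taskTitle}" (${c.score}%)`;
}

// Student layer: gamified framing of the same event.
function studentView(c: TaskCompletion): string {
  return `Quest complete! +${c.pointsEarned} points`;
}

// Parent layer: emotional reinforcement rather than raw metrics.
function parentView(c: TaskCompletion): string {
  return `${c.studentName} just finished "${c.taskTitle}". Send some encouragement!`;
}
```

Keeping the record itself role-neutral and pushing all role flavor into the projection functions is what lets the loop (assign → complete → visualize → engage → feedback) run off one source of truth.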
Case Study 2: Searching for Sound Engineers
Music-Driven Web Experience — Emotion-Based UX System

One-liner
A web experience where the currently playing track controls the visual environment, creating a dynamic interface that responds to sound as a first-class design input.

Problem
Music platforms treat audio and UI as separate layers: you hear a song while looking at static metadata and album art. No system lets sound itself become a generative design material, leaving significant emotional bandwidth on the table.

Solution
The system treats audio as a live data source. As music plays, extracted properties (tempo, frequency range, energy level, mood classification) drive UI state in real time. Color, motion, layout density, and interaction behavior all respond to what is playing. The experience is not visualized audio; it is an interface that feels like the music.

Experience System
- Audio → State: The audio engine extracts properties per track (BPM, energy, key, mood). These map to a set of defined UI states: not a 1:1 linear translation, but a curated system of thematic environments.
- State → Visual: Each UI state has a corresponding visual language: color temperature, typography behavior, background motion, and element density. A high-energy track produces a compressed, kinetic interface; a slow, low-frequency track expands the layout and reduces motion.
- Visual → Interaction: User interactions (hover, scroll, click) are modulated by the current audio state. Hover effects are faster in high-BPM states, slower and more diffuse in ambient states. The interface is never static; behavior is continuous and reactive.

Key Interaction Moments
- Track Transition: When a track changes, the UI transitions through a bridging state rather than cutting abruptly. Color and motion shift over 1–2 seconds, matching the audio's fade behavior.
- Energy Peak: At detected energy peaks (chorus, drop, climax), a brief full-screen pulse or layout shift signals the moment without interrupting the experience.
- User Hover / Explore: Hovering over track metadata or navigation elements reveals information through motion rather than a static tooltip. Reveal behavior scales with the current audio energy.
- Silence / Pause: When audio stops, the UI enters a resting state: reduced contrast, minimal motion, a slow breathing animation. The interface communicates absence without going dead.

Visual / Aesthetic Direction
The visual language is defined by restraint with controlled moments of intensity. The base palette is near-neutral (dark ground with desaturated tones), allowing color to carry full meaning when an audio state triggers it. Typography shifts between a geometric grotesque (high-energy states) and a high-contrast serif (ambient states), reinforcing mood through form. Motion is physics-based: easing curves match the emotional texture of each audio state rather than defaulting to uniform ease-in-out.

Technical Considerations
- Audio Analysis: Web Audio API for real-time frequency and amplitude data; track metadata (BPM, energy, key) sourced from a music intelligence API (Spotify Audio Features or equivalent) for non-real-time properties.
- State Management: A finite set of UI states (~5–7) maps to audio property ranges. State transitions are debounced to prevent thrashing on rapid audio changes.
- Performance: CSS custom properties (--energy, --tempo, --mood-hue) are updated by a single JavaScript loop, keeping layout-triggering reflows out of the animation path. All background motion runs on the compositor via transform and opacity only.
- Responsiveness: The system degrades gracefully on mobile: motion is reduced, layout simplifications are applied, and battery-sensitive devices receive a low-motion mode via prefers-reduced-motion.
- Accessibility: Audio-reactive motion respects the OS-level reduced-motion preference. Color contrast is validated against WCAG 2.1 AA in all UI states, including high-saturation peak moments.

Outcome / Impact
The experience creates a demonstrably higher sense of immersion and presence than static music interfaces: users spend more time in active listening states and engage more with track discovery. The system design is reusable: the audio-to-UI-state mapping layer can be extended to any visual theme without rearchitecting the core engine. As a portfolio piece, it demonstrates mastery of interaction design, real-time system behavior, and the use of sensory input as a first-class UX variable.
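The audio-to-UI-state mapping with debounced transitions could be sketched as follows. This is a minimal TypeScript sketch under stated assumptions: the state names, thresholds, and function names (`pickState`, `makeDebouncedState`) are illustrative, not the production mapping:

```typescript
// Sketch of the audio → UI-state layer: a curated set of states rather
// than a 1:1 translation, with debounced transitions to avoid thrashing.
// Thresholds and names are assumptions for illustration only.

interface AudioFeatures {
  bpm: number;     // tempo from the analysis layer
  energy: number;  // 0..1 energy estimate
}

type UiState = "ambient" | "neutral" | "kinetic";

// Map raw feature ranges onto a small set of thematic environments.
function pickState(f: AudioFeatures): UiState {
  if (f.energy > 0.7 && f.bpm > 120) return "kinetic";
  if (f.energy < 0.3) return "ambient";
  return "neutral";
}

// Hold each state for a minimum duration so rapid fluctuations in the
// analysis data cannot flip the interface back and forth.
function makeDebouncedState(holdMs: number, now: () => number) {
  let current: UiState = "neutral";
  let lastChange = -Infinity;
  return (next: UiState): UiState => {
    if (next !== current && now() - lastChange >= holdMs) {
      current = next;
      lastChange = now();
    }
    return current;
  };
}

// In the browser, a single requestAnimationFrame loop would then push the
// live values into CSS custom properties, keeping reflow off the hot path:
//   document.documentElement.style.setProperty("--energy", String(f.energy));
//   document.documentElement.style.setProperty("--mood-hue", `${hue}deg`);
```

Because the mapping and debounce are pure functions of the feature stream and a clock, the same layer can be reused under any visual theme, which is the reusability claim made in the outcome above.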
Case Study 1: Anchor
AI Coaching System — Thought Ecosystem

One-liner
Anchor is a behavioral intelligence platform that transforms raw thought input into structured self-awareness, connecting clients and coaches through a shared data layer.

Problem
People seeking coaching or self-improvement lack a consistent, structured way to capture thoughts in the moment, making it impossible to identify patterns over time. Coaches operate on incomplete, self-reported data and have no real-time visibility into a client's mental state between sessions.

Solution
Anchor is built as a closed-loop system: clients log thoughts and behavioral signals continuously, the AI layer processes that input into categorized patterns and insights, and coaches receive a structured dashboard that surfaces what matters before a session begins. The system removes the friction between raw experience and actionable insight.

Key Components
- Thought Capture Interface: Low-friction, mobile-first input for logging thoughts, moods, and behavioral signals in real time
- AI Pattern Engine: Classifies entries by theme, sentiment, and recurrence; surfaces behavioral loops and cognitive patterns over time
- Client Insight Feed: Visualizes logged data as a timeline, giving clients a mirror of their own mental landscape
- Coach Dashboard: Aggregated view of client activity, flagged patterns, and session prep prompts; reduces reliance on recall-based conversations
- Session Bridge: Pre-session summary generated by the AI layer, connecting ongoing data to the live coaching moment
- Feedback Loop Triggers: System nudges clients to log when behavioral patterns indicate a period of disengagement or elevated stress

Core User Flows

Client: Thought Entry
1. Client opens the app and taps to log a thought, mood, or behavioral note
2. Entry is timestamped and optionally tagged (work, relationships, body, etc.)
3. AI layer processes the entry, links it to existing patterns, and updates the insight feed
4. Client receives a lightweight reflection prompt if a pattern threshold is met

Client: Insight Review
1. Client navigates to their timeline or pattern view
2. System surfaces recurring themes, frequency trends, and emotional arcs
3. Client can annotate or expand on flagged entries
4. Insights are visible to their assigned coach in the dashboard

Coach: Session Preparation
1. Coach opens the dashboard and reviews client activity since the last session
2. AI-generated summary highlights key patterns, new themes, and notable entries
3. Coach annotates or bookmarks specific entries for discussion
4. Session opens with shared context: no cold start, no missed signals

AI Layer
The AI layer is the connective tissue between raw data and meaningful insight. It performs three functions: classification (categorizing entries by theme and emotional tone), pattern detection (identifying recurring behavioral loops across time), and synthesis (generating pre-session summaries and client-facing reflections). The system is designed to enhance human judgment, the coach's and the client's, not replace it. AI outputs are always framed as hypotheses, not diagnoses.

Design Decisions
- Low-friction capture is non-negotiable. If logging a thought takes more than two taps, the system loses the most valuable data: the unfiltered moment. The capture interface is intentionally minimal and persistent.
- Coaches see patterns, not just posts. The dashboard is not a feed of raw entries; it is a synthesized view designed to reduce cognitive load and surface signal over noise before a conversation begins.
- Insight is earned, not pushed. The client-facing reflection layer is triggered by pattern thresholds, not a fixed schedule. This preserves trust and avoids notification fatigue.
- The system is designed around the relationship. Every data point exists to improve a coaching conversation, not to gamify self-tracking or optimize engagement metrics.

Outcome / Impact
A coach managing 10–15 clients can enter each session with full behavioral context rather than spending the first 10 minutes on a status update. Clients who log consistently develop a structured self-awareness that compounds over time, reducing the gap between sessions and increasing session quality. The system's feedback-loop model creates measurable engagement: clients who receive pattern-based nudges show higher re-engagement rates than those on fixed reminder schedules.
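The "insight is earned, not pushed" decision, where reflection prompts fire on pattern thresholds rather than a fixed schedule, could be sketched like this. The threshold values, window length, and names (`shouldPrompt`, `ThoughtEntry`) are assumptions for illustration, not Anchor's production logic:

```typescript
// Sketch of a pattern-threshold trigger: prompt only when a theme recurs
// enough within a recent window. Thresholds are illustrative assumptions.

interface ThoughtEntry {
  theme: string;    // e.g. "work", "relationships", "body"
  loggedAt: number; // Unix timestamp (ms)
}

const RECURRENCE_THRESHOLD = 3;            // entries on one theme...
const WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // ...within a 7-day window

// A reflection prompt fires only when the recurrence threshold is met,
// so clients are nudged by their own patterns, not by a timer.
function shouldPrompt(
  entries: ThoughtEntry[],
  theme: string,
  now: number,
): boolean {
  const recent = entries.filter(
    (e) => e.theme === theme && now - e.loggedAt <= WINDOW_MS,
  );
  return recent.length >= RECURRENCE_THRESHOLD;
}
```

Gating prompts on recurrence rather than a schedule is what keeps the trigger count proportional to actual behavioral signal, which is the stated defense against notification fatigue.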