This content originally appeared on DEV Community and was authored by akan evans
This is a submission for the Google AI Studio Multimodal Challenge
What I Built
ClueFrame is an AI-powered, crime-solving applet. Players examine AI-created visual evidence, collect and link clues on an interactive detective board, question leads, and identify suspects while racing against a timer to crack the case.
It solves the “stale mystery” problem by producing endless, believable cases and turns passive puzzles into collaborative, narrative-driven investigations. The applet trains observation, inference, and teamwork in a low-risk, replayable format.
Built for tabletop fans, educators, teams building soft skills, and creators who need rapid scenario prototyping, it makes mystery-solving social and creative. Players don't just consume a story; they co-author the outcome through evidence and deduction.
How I Used Google AI Studio
I used a two-model multimodal pipeline: gemini-2.5-flash for all text and structured data (scenario generation, clue metadata, refined image prompts) and imagen-4.0-generate-001 solely for photoreal visual evidence.
Gemini produces coherent scenarios, searchable metadata, and on-demand prompt refinements; Imagen renders those prompts into high-fidelity images. The UI links each image to its structured context so images are queryable, annotatable, and filterable on the detective board.
Benefits: photoreal imagery boosts immersion, structured text makes evidence machine-actionable, and rapid text→image refinement lets investigators iterate hypotheses and explore leads in real time.
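The text→image handoff described above can be sketched roughly as follows. The model names come from the post; everything else (the prompt wording, the JSON schema, and the commented client calls using the google-genai SDK) is a hypothetical illustration, not the author's actual code:

```python
import json

# Model names as stated in the post.
TEXT_MODEL = "gemini-2.5-flash"
IMAGE_MODEL = "imagen-4.0-generate-001"

def build_clue_request(theme: str) -> str:
    """Ask the text model for structured clue metadata plus a refined
    image prompt. The JSON schema here is an assumed example."""
    return (
        f"Generate one piece of visual evidence for a {theme} mystery. "
        "Return JSON with keys: title, description, tags, image_prompt."
    )

def parse_clue(raw_json: str) -> dict:
    """Validate the structured metadata before linking it to an image,
    so every rendered image stays queryable on the detective board."""
    clue = json.loads(raw_json)
    missing = {"title", "description", "tags", "image_prompt"} - clue.keys()
    if missing:
        raise ValueError(f"clue metadata missing fields: {sorted(missing)}")
    return clue

# In a live app the two model calls might look like this (requires an
# API key; hypothetical sketch of the google-genai SDK usage):
#   client = genai.Client()
#   resp = client.models.generate_content(
#       model=TEXT_MODEL, contents=build_clue_request("museum heist"))
#   clue = parse_clue(resp.text)
#   image = client.models.generate_images(
#       model=IMAGE_MODEL, prompt=clue["image_prompt"])
```

Keeping the metadata step separate from rendering is what lets each Imagen output stay attached to machine-actionable context rather than being a bare picture.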
The code was developed and run in Google AI Studio.

akan evans | Sciencx (2025-09-12T11:51:50+00:00) ClueFrame. Retrieved from https://www.scien.cx/2025/09/12/clueframe-2/