Predict the finish of the season; highest score wins. The host controls phase management and results entry.
The site you're looking at (the pixel avatars, the live table, the Firebase sync, the scoring engine, the leaderboard) was built end-to-end by having a conversation with Claude. No templates, no boilerplate repo, no Stack Overflow. Just prompts and iteration.
The interesting thing isn't that AI can write code. It's that the quality of what you get is directly shaped by the quality of what you ask for. Below are the six prompting moves that actually moved this project forward, with the real prompts used.
Bad: "build me a football app". Good: describe the real user need, the interaction, and the context.
Why it worked: It contains the who (a group of friends and colleagues), the what (rank the top 5 plus the bottom 3), the when (the final 5 fixtures), and the shape (a drag interaction). Claude doesn't have to guess; it can get on with it.
When you want multiple things, make them a list. It's easier for the AI to track every item and for you to check what was missed.
Why it worked: Nine completely different asks landed in one message. Because they were numbered, Claude could hit every single one and reply with a matching checklist. Without the numbering, at least three would have been missed.
The fastest way to describe an aesthetic is to point at one. Cultural references, brand names, and specific products do more work than adjectives.
Why it worked: "Habbo Hotel" is a one-word aesthetic brief. It conveys: chunky pixels, friendly proportions, saturated colours, 3/4 view, half-body shot. Three adjectives couldn't do that. Pair it with specific constraints ("top half", "leading scorer", "jersey colour") and you get original work in the right direction.
AI will confidently ship something that's internally inconsistent. The fix is to call it out specifically: don't just say "this feels wrong".
Why it worked: The correction is concrete ("rank position #1" vs "in top 5"). It identifies the category error instead of just saying "looks off". Claude had to re-examine the feature, remove it, and switch to an informational-only display. A whole game-logic fix from one question.
AI is great at execution, average at taste. The best ideas in this site came from one-line creative suggestions.
Why it worked: Introduces a social mechanic the AI wouldn't have dreamt up on its own. The prompt includes the question format AND the scoring implication, so Claude can wire it up directly (every player votes, the majority wins points, and a reading-the-room mini-game falls out).
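Because the prompt stated the scoring implication, the mechanic is nearly mechanical to implement. A minimal sketch, assuming a yes/no question, a fixed point award, and that ties score nobody (all assumptions; `scoreMajorityVote` is an invented name):

```javascript
// Hypothetical majority-vote scorer. `votes` maps player name to
// 'yes' or 'no'; players who sided with the majority receive points.
// The point value and the nobody-scores-on-a-tie rule are assumptions.
function scoreMajorityVote(votes, pointsForMajority = 2) {
  const tally = { yes: 0, no: 0 };
  for (const answer of Object.values(votes)) tally[answer]++;
  if (tally.yes === tally.no) return {};     // tie: nobody scores
  const majority = tally.yes > tally.no ? 'yes' : 'no';
  const awards = {};
  for (const [player, answer] of Object.entries(votes)) {
    if (answer === majority) awards[player] = pointsForMajority;
  }
  return awards;
}
```

One line of creative direction, a dozen lines of logic: the hard part was the idea, not the code.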
Left alone, AI tends to over-build. UI accretes, copy gets longer, "helpful" labels pile up. Explicitly ask for cleanup with a clear rule.
Why it worked: Gives the AI a specific rule it can apply ("must add value to gameplay or oversight") and delegates the taste decisions. The storage-mode pill, the "where your picks are saved" explainer, the redundant scoring-info subtitle: all gone in one pass.
Pick a small, real problem you have β a tracker, a calculator, a little tool your team keeps doing by hand. Open claude.ai and start with a goal, not a spec. Iterate in small chunks. Push back when something's off. Ask for cleanup at the end. You don't need to be a developer to make this work β you need to be a clear thinker about what you actually want.
Fun stats about this site: ~20 conversational turns, 15 files, ~140KB total, 0 bundlers, 0 frameworks, 1 embarrassing deployment loop, 0 stack traces understood by the person driving.