This article covers:
- The cost of losing judges
- Design for reviewing, not just submitting
- Align form fields with scoring criteria
- Give judges structured responses
- Set word limits that respect everyone’s time
- Control file submissions carefully
- Design for mobile judging
- Test with real judges before launch
- Provide logical order and clear section breaks
- Maintain consistency across categories
- Get feedback from judges after every cycle
- When to customise vs use templates
- Platform capabilities that should be standard
Judges are the heart of your awards programme. Lose them, and you lose credibility, momentum, and future cycles.
Most programme managers design entry forms with only entrants in mind. That's a dangerous habit: it leads to frustrated judges, and when senior industry leaders quit mid-process because your form made their job harder, that's hard to recover from.
The Cost of Losing Judges
When a judge drops out mid-process, you lose their scores, network, credibility, and willingness to recommend your programme.
On average, 15-20% of volunteer judges don't complete their assigned entries. For paid judges, that figure drops significantly. Each dropout means redistributing entries (adding workload for the rest of the panel), delaying announcements, or accepting incomplete scoring.
Your process and entry form are where retention begins.
Replacing judges takes 3-5 weeks
Outreach, programme explanation, and platform onboarding all take time. Judge retention compounds year over year: programmes keeping 80%+ of their panel save 40-60 hours annually on recruitment and build consistency in standards.
Design for Reviewing, Not Just Submitting
Most entry forms are built from the organiser’s perspective: what information do we need, what categories, how to structure questions for PR?
A good entry form is one judges can assess quickly, consistently, and without mental fatigue. If judges struggle or quit, your programme fails regardless of question quality.
When choosing your platform and tools, preview exactly what judges see before launch. Many platforms bury this function or show organisers a different interface than judges experience.
Align Form Fields with Scoring Criteria
If your scoring rubric asks judges to rate “innovation,” “feasibility,” and “social impact,” your entry form needs three corresponding sections. Not five sections, not one massive essay field. Three sections, clearly labelled, mirroring your scoring language.
When fields and criteria don’t match, judges waste energy translating responses into your framework. They scroll, piece together answers, and scoring becomes inconsistent.
The ideal is a one-to-one match: five scoring criteria, five form sections. The wording doesn't need to be identical, since mirroring the rubric verbatim might confuse entrants, but separating the information in a meaningful way lets judges find what they need more quickly.
Structure your form to map to your scorecard
You can have more questions than criteria if that supports quicker decisions. For example, pair a short-answer question for quick triage with a longer one below it, giving judges more nuance if they want to dig deeper.
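To make the mapping concrete, here is a minimal sketch in Python of how you might record criteria and sections together and check the alignment before launch. The criterion names, questions, and function are hypothetical examples, not any platform's actual schema:

# A minimal sketch: declare scoring criteria and form sections together,
# then verify every criterion has a matching section before launch.
# All names here are hypothetical examples, not any platform's API.

SCORING_CRITERIA = ["innovation", "feasibility", "social_impact"]

FORM_SECTIONS = {
    "innovation": "What is new about your approach?",
    "feasibility": "How will you deliver this with the resources you have?",
    "social_impact": "Who benefits, and how do you measure it?",
}

def check_alignment(criteria, sections):
    """Return any criteria that have no corresponding form section."""
    return [c for c in criteria if c not in sections]

missing = check_alignment(SCORING_CRITERIA, FORM_SECTIONS)
if missing:
    print(f"Unscorable criteria with no form section: {missing}")
else:
    print("Every scoring criterion maps to a form section.")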
Give Judges Structured Responses
Long-form essay responses make judging harder. One entrant writes three paragraphs, another writes seven, a third uses bullet points. Judges hunt for information rather than assess quality.
Use descriptive instructions that create predictable patterns. Instead of “Describe your impact,” give entrants a guiding framework:
- Who benefits from this work?
- What outcomes have you measured?
- How does this compare to similar initiatives?
Entrants still write freely, but they know what to cover, and judges find what they need more quickly.
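As an illustration, a guided question like the one above could be represented as simple data so every entrant sees the same prompts. This is a hedged sketch with made-up field names, not a real platform's API:

# Illustrative only: one long-form question expressed as a prompt plus
# guiding sub-prompts, so every entrant covers the same ground.
impact_question = {
    "label": "Describe your impact",
    "guidance": [
        "Who benefits from this work?",
        "What outcomes have you measured?",
        "How does this compare to similar initiatives?",
    ],
    "word_limit": 300,  # assumed limit; see the word-limit guidance below
}

def render_guidance(question):
    """Build the help text shown under the question field."""
    bullets = "\n".join(f"- {prompt}" for prompt in question["guidance"])
    return f"{question['label']} (max {question['word_limit']} words)\n{bullets}"

print(render_guidance(impact_question))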
Set Word Limits That Respect Everyone’s Time
Word limits protect judge attention and maintain assessment quality.
Without limits, some entrants submit 200 words, others 2,000. Judges either spend wildly different time per entry (destroying consistency) or start skimming (penalising thorough entrants).
Research on cognitive load suggests 150-300 words per question provides substance without overwhelming judges. For complex criteria, 400-500 words works. Beyond 600 words per field, judges will likely start skimming after the first dozen entries.
Calculate total reading load
Six questions at 300 words each = 1,800 words per entry. At 250 words per minute (the average reading speed), that's roughly 7 minutes of reading before any scoring. Multiply by 15-30 entries per judge and you're asking for around 2 to 4 hours of reading time alone.
Modern platforms enforce limits automatically with character counters. Set clear limits, enforce them technically, and design questions entrants can address within constraints.
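If you want to sanity-check your own form, the reading-load arithmetic above takes only a few lines. The figures below are assumptions drawn from this article; swap in your own question count, word limits, and panel size:

# Back-of-envelope reading load per judge, using the assumptions above.
QUESTIONS = 6
WORDS_PER_QUESTION = 300      # your enforced word limit
READING_SPEED_WPM = 250       # average adult reading speed
ENTRIES_PER_JUDGE = 25        # assumed assignment within this article's 15-30 range

words_per_entry = QUESTIONS * WORDS_PER_QUESTION          # 1,800 words
minutes_per_entry = words_per_entry / READING_SPEED_WPM   # ~7.2 minutes
total_hours = minutes_per_entry * ENTRIES_PER_JUDGE / 60  # ~3 hours

print(f"{words_per_entry} words per entry, "
      f"{minutes_per_entry:.1f} min reading per entry, "
      f"{total_hours:.1f} hours of reading per judge before scoring")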
Control File Submissions Carefully
Supporting documents provide valuable context. Unlimited supporting documents create chaos.
Programmes letting entrants upload 15+ files force judges to either cherry-pick (inconsistent assessment), spend hours per entry (dropout), or reduce scoring rigour (lower quality).
Limit files to 3-5 maximum. Be specific: “upload one PDF containing your project summary (max 2 pages)” and “upload up to three images showing your work.”
This helps judges know what to expect and where to find it. Vague “supporting documents” fields waste time opening files to figure out content.
Keep file sizes to 10-15MB for mobile judges reviewing between meetings. Modern platforms handle PDFs, images (JPG, PNG), and documents (DOCX, XLSX) in the browser by default.
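As an example of enforcing these limits rather than merely stating them, here is a small sketch of a pre-upload check. The thresholds come from the guidance above; the function and file list are hypothetical:

# Sketch of a pre-upload check using the limits suggested above.
MAX_FILES = 5
MAX_FILE_MB = 15
ALLOWED_EXTENSIONS = {".pdf", ".jpg", ".png", ".docx", ".xlsx"}

def validate_uploads(files):
    """files: list of (filename, size_in_mb) tuples. Returns a list of problems."""
    problems = []
    if len(files) > MAX_FILES:
        problems.append(f"Too many files: {len(files)} (max {MAX_FILES}).")
    for name, size_mb in files:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in ALLOWED_EXTENSIONS:
            problems.append(f"{name}: unsupported file type.")
        if size_mb > MAX_FILE_MB:
            problems.append(f"{name}: {size_mb}MB exceeds the {MAX_FILE_MB}MB limit.")
    return problems

print(validate_uploads([("summary.pdf", 2), ("site-photos.zip", 40)]))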
Design for Mobile Judging
Busy industry leaders don’t have three uninterrupted hours at a desk. They review entries on trains, in waiting rooms, between meetings, evenings on the sofa.
Your entry form needs to work on mobile from launch because judges encounter that interface on 6-inch screens. Long horizontal fields requiring sideways scrolling, tiny illegible text, images that don’t scale – these cause judges to squint, guess, or quit.
Test on a phone before launch
Not “does it work” but “would I spend an hour reviewing 10 entries on this?” If not, adjust.
Modern platforms build mobile-responsive interfaces by default. Older systems retrofitted mobile as an afterthought – the difference is immediately apparent in clunky navigation and broken layouts.
Fit your process around your judges' busy lives and they're far more likely to speak highly of you.
Test with Real Judges Before Launch
Your entry form will have problems. Discover them before launch (costs an afternoon) or after submissions open (costs credibility and entries).
If you can, recruit 1-2 people who’ll judge your programme. Ask them to complete a test entry and review it. Watch for questions, confusion, and pauses.
Testing reveals:
- Instructions that seem clear to you but confuse judges
- Questions requiring information entrants don’t have
- Scoring criteria misaligned with form fields
- File upload behaviours you didn’t expect
- Word limits too restrictive or permissive
- Mobile interface problems invisible on desktop
Test twice: once with skeleton structure, again with full instructions. This catches structural problems early and refinement issues before launch.
Platform testing is essential. If your platform makes testing awkward, or requires payment up front before you can submit test entries or assign test judges, that's a bad sign. You should be able to take a test drive before you commit.
Provide Logical Order and Clear Section Breaks
Judges process entries sequentially. If your form jumps between topics (project description, then budget, back to methodology, forward to outcomes), judges lose context and make mistakes.
Structure your form in logical narrative flow:
- Project overview (who, what, when, where)
- Methodology or approach
- Outcomes and impact
- Supporting evidence
- Future plans
Creative programmes might follow different logic (concept, execution, context, significance), but keep one clear path through material.
Match criteria order to their sections
When a judge scores an entry, the scorecard asks for scores in a set order. Keeping that order identical to the form's section order lets judges avoid scrolling up and down the page.
If you can, use descriptive section headers. Not “Section 1” but “Project Methodology and Approach.” Judges should skim headers and understand entry structure immediately.
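If your platform lets you list both orders, a quick check like this sketch (with purely illustrative section names) confirms the scorecard mirrors the form before launch:

# Illustrative check that scorecard order mirrors form-section order.
form_section_order = ["overview", "methodology", "outcomes", "evidence", "future_plans"]
scorecard_order = ["overview", "methodology", "outcomes", "evidence", "future_plans"]

# Compare only the criteria that appear on the scorecard, in form order.
expected = [s for s in form_section_order if s in scorecard_order]
if scorecard_order == expected:
    print("Scorecard follows the form's section order.")
else:
    print(f"Order mismatch: form suggests {expected}, scorecard has {scorecard_order}")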
Maintain Consistency Across Categories
Running multiple award categories? Don’t create completely custom forms for each. Judges assessing across categories shouldn’t relearn form structure each time.
Create a core structure that works across all categories, then customise specific questions where necessary. The basic flow (overview, methodology, impact, evidence) is likely to apply to all of them; the specific criteria within each section might vary.
Consistency helps judges work efficiently, and one onboarding covers all categories. For radically different category types (written grants versus visual art), separate forms make sense, but keep the structure consistent.
Get Feedback from Judges After Every Cycle
Improve your entry form by asking judges directly through structured post-programme surveys.
Questions worth asking:
- Which form sections made assessment easiest? Hardest?
- Were word limits appropriate?
- Did entrants provide information needed to score effectively?
- What changes would reduce judging time without compromising quality?
- Would you judge again?
That last question is your metric, and should probably come first in your questionnaire. Judges who hesitate are telling you something broke.
Common feedback themes may include:
- too much reading (tighten word limits)
- inconsistent entries (unclear instructions)
- difficulty finding information (poor structure)
- technical issues (platform problems).
Act on feedback. Document changes and communicate them when recruiting next cycle: “We’ve simplified form structure and reduced reading time by 25%.”
When to Customise vs Use Templates
Many platforms offer pre-built form templates. Use them as starting points, not final versions.
Use templates when you're running standard award types with common criteria, launching quickly, or new to awards management.
But don't depend on them wholesale unless you want to discover an issue halfway through running your programme. Challenge every field and customise extensively for your own needs.
Start with a template, adjust it based on your needs and judge feedback, and iterate every cycle. Programmes that continuously refine their forms improve retention and entry quality year over year.
Platform Capabilities That Should Be Standard
These form-related capabilities should be baseline, not premium add-ons:
- Auto-save functionality (entrants shouldn’t lose work)
- Unlimited file uploads within reasonable size limits (no per-entry pricing)
- Mobile-responsive design for entrants and judges
- Preview showing exactly what judges and candidates see
- Inline form validation with helpful error messages
- Simple form building (launch basic form in under 60 minutes)
- Word count limits with visual indicators
- A free trial that doesn't require booking a call or entering credit card details
If a platform lacks these, you may wish to compare awards management software options.
Finally
Entry forms are infrastructure. When they work, nobody notices. When they fail, they create unnecessary friction.
Time invested in judge-centric entry forms compounds over your programme’s lifetime. Judges return. Your panel maintains consistent standards.
Good form design respects the time and attention of everyone involved. Test with real judges before launch, iterate based on feedback, and choose platforms that treat form design as foundational work.
Whether you use Zealous or another platform, ask yourself: would you want to judge 20-30 entries using this form? If not, keep refining.
We can help!
Zealous makes running your programmes easier
But we’re not alone in the space – here are 8 others you may wish to consider (even if we would prefer you choose us!).
Guy Armitage is the founder of Zealous and author of “Everyone is Creative”. He is on a mission to amplify the world's creative potential.
Frequently Asked Questions
How long should each entry form field be for optimal judge experience?
For most assessment criteria, 150-300 words per field provides enough detail without overwhelming judges. More complex questions requiring detailed evidence can extend to 400-500 words, but beyond 600 words per field, judges start skimming rather than reading carefully.
Calculate total reading load: if you have six questions at 300 words each, that's 1,800 words per entry (roughly 7 minutes of reading alone). Multiply by entries per judge (typically 15-30) and you're asking for roughly 2-4 hours of reading time, not counting actual evaluation work. Set strict word limits and enforce them technically through your platform's character counter.
Should I use the same entry form across all award categories?
Maintain a consistent core structure across categories whilst customising specific questions where necessary. Judges often assess multiple categories, and forcing them to learn completely different form structures for each multiplies cognitive load unnecessarily.
Use the same basic flow: project overview, methodology, impact, evidence – across categories, then vary the specific criteria within each section. This consistency reduces judge training time by 40-60% and helps judges maintain assessment standards across categories. Only use radically different forms when category types are fundamentally different (for example, written grants versus visual art competitions).
What file upload limits should I set for judges reviewing on mobile devices?
Limit supporting files to 3-5 maximum per entry, with individual file sizes capped at 10-15MB. Judges increasingly review entries on mobile devices during commutes or between meetings, and 50MB files won’t load smoothly on phones.
Be specific about what each file should contain rather than offering vague “supporting documents” fields. This helps judges know what to expect and where to find information. Modern platforms should handle PDFs, standard images (JPG, PNG), and documents (DOCX, XLSX) by default without requiring organisers to configure settings for each file type. If a platform restricts file formats or charges extra for additional types, that signals outdated infrastructure.
How do I test my entry form before launching publicly?
Recruit 1-2 trusted individuals who will judge your actual programme and ask them to complete a test entry, then review it as if assessing real submissions. Pay attention to their questions, confusion, and any points where they pause.
Testing reveals issues like ambiguous instructions, questions requiring information entrants don’t have, scoring criteria misaligned with form fields, and mobile interface problems invisible on desktop. Test twice: once with skeleton form structure, again with full instructions and logic. This catches structural problems early and refinement issues before launch. Your platform should make testing the judge view easy – if you need workarounds to preview what judges will see, that’s a significant friction point.
What are red flags when evaluating awards management platforms?
Watch for platforms that charge extra for basic features like auto-save, make mobile judging awkward or limited, lack simple form preview functionality showing exactly what judges see, require technical expertise to build forms, or use per-entry pricing that penalises programme growth.
These signal fundamental design problems, not minor inconveniences. Modern platforms should let you launch a basic form in under 60 minutes, handle unlimited submissions within reasonable file size limits, provide mobile-responsive interfaces by default, and include inline form validation with helpful error messages. If capabilities you’d expect as standard are listed as premium features, that reveals where the platform’s priorities lie and they probably don’t align with making your job easier.
How do I get useful feedback from judges to improve my form?
Send structured post-programme surveys asking specific questions: Which form sections made assessment easiest? Were word limits appropriate? Did entrants provide information needed to score effectively? What changes would reduce judging time without compromising quality? Would you judge again?
That last question is your metric for success. Document common feedback themes like excessive reading per entry (tighten word limits), inconsistent entry quality (unclear instructions), difficulty finding information (poor structure), or technical issues (platform problems). Act on feedback and communicate changes when recruiting next year’s judges: “Based on feedback, we’ve simplified the form and reduced reading time by 25%.”
Judges who see their suggestions implemented become advocates who tell colleagues your programme is well-run, improving your recruitment pipeline significantly.
When during the planning process should I finalise the entry form?
Design your entry form during your planning phase, ideally 4-6 months before submissions open. This gives you time to align form fields with scoring criteria precisely, test with judges, gather feedback, and iterate before launch. Starting too late (weeks before opening) forces rushed decisions that create problems during judging.
Your form structure should inform other planning decisions like judge recruitment (what expertise do judges need based on your criteria?) and marketing approach (what benefits can you highlight based on your questions?). Early form design also helps identify whether your chosen platform can actually deliver what you need—discovering technical limitations after you’ve promoted your programme is an expensive mistake to make.