How to Choose (the Right) Judges for Awards Programmes

Build a judging panel that shows up. Practical guide: recruitment timing, diverse panels, real costs, handling dropouts. 6-9 month timeline included.

You’ve secured your dream judge. They’ve agreed, you’ve announced it publicly, submissions are pouring in (partly because candidates are excited at the prospect of their idols seeing their work). Then, three weeks before judging starts, the email arrives: “So sorry, something’s come up.”

This nightmare happens more often than you’d think. But it is (mostly) preventable.

Choosing judges isn’t (just) about credentials and industry clout. It’s about finding people who’ll actually follow through, understand what you’re trying to achieve, and treat the process with the respect your candidates deserve. Get it wrong and you’re scrambling at the worst possible moment. Get it right and your judges become your strongest advocates.

Start recruitment six months early

Most organisations start recruiting judges far too late. You’re not booking a plumber for next Tuesday (well, you might be, but plumbing is beyond the scope of this article). You’re asking busy professionals to commit significant time months in advance.

High-profile judges book up quickly. The curator you want? Already committed to three other competitions. That industry leader? Judging two awards programmes and speaking at four conferences. By the time you reach out two months before your deadline, the best people are already overcommitted (that’s why they are the best people).

Start your judge recruitment 6-9 months before your submission deadline. This gives you time for the back-and-forth, the polite declines, the “let me check my schedule” responses that take three weeks.


If you need five judges and your response rate runs at 30% (which is good), you’re contacting at least 17 people… That takes time.
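
If you like to sanity-check that sort of planning with a quick script, here’s a minimal sketch of the arithmetic (the 30% response rate is just the illustrative figure above):

```python
import math

def contacts_needed(judges_required: int, response_rate: float) -> int:
    """How many people you need to approach to end up with enough confirmed judges."""
    return math.ceil(judges_required / response_rate)

# Five judges at a (good) 30% positive-response rate
print(contacts_needed(5, 0.30))  # -> 17
```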

Here’s the other reason early recruitment matters: announcing your judges boosts submissions. When potential candidates see respected names on your panel, it validates your programme. But you can only announce judges who’ve actually confirmed, which means you need those confirmations two months before the deadline to maximise the marketing benefit (4 months if you want to be loved by your marketing department!).

Calculate backwards from when you want to announce judges publicly, add time for recruitment conversations, factor in holiday periods and busy seasons for your industry. Suddenly six months doesn’t seem excessive.

Recruitment timeline breakdown

  • 6-9 months before deadline: start recruitment.
  • 3-4 months before: announce confirmed judges publicly.
  • 2 months before: final confirmations and backup plans.

Early recruitment reduces dropout from 15-20% to just 5-10%.
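
If it helps to make that backwards calculation concrete, here’s a rough sketch (the deadline date and the month-to-week conversions are purely illustrative; shift them to allow for your industry’s holiday periods and busy seasons):

```python
from datetime import date, timedelta

def recruitment_milestones(submission_deadline: date) -> dict[str, date]:
    """Work backwards from the submission deadline to rough recruitment milestones."""
    return {
        "start judge recruitment": submission_deadline - timedelta(weeks=26),      # ~6 months out
        "announce confirmed judges": submission_deadline - timedelta(weeks=13),    # ~3 months out
        "final confirmations / backups": submission_deadline - timedelta(weeks=9), # ~2 months out
    }

# Example: submissions close 30 September 2026
for milestone, when in recruitment_milestones(date(2026, 9, 30)).items():
    print(f"{milestone}: {when:%d %b %Y}")
```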

What makes someone a good judge?

Credentials matter, obviously. But they’re not everything.

A good judge shows up. They respond to emails. They meet deadlines without constant reminders. They understand that real people invested time and often money in their submissions, and those people deserve thoughtful consideration.

You want someone with relevant expertise, yes. But also someone with availability. The distinction matters. That internationally renowned expert might be perfect on paper, but if they’re juggling six other major commitments, your programme gets squeezed. Better to recruit someone slightly less famous who’ll give your entries proper attention (to be honest, having both is better – but if you had to choose one, go for the latter).



Watch for warning signs during recruitment. “I’m incredibly busy but I’ll make it work” often means they won’t make it work. “Let me think about it” without a follow-up timeline usually means no. People who ask detailed questions about time commitment and process are often better bets than those who agree immediately without asking anything (what exactly are they agreeing to? will they follow through when they find out?).

Red flags during recruitment

“I’m incredibly busy but I’ll make it work” usually means they won’t. “Let me think about it” without a timeline means no. Those who agree immediately without asking questions about time commitment don’t understand what they’re signing up for.

Communication style matters too. You’ll need to correspond with these people throughout the process. Someone who takes three weeks to respond to a simple scheduling question will likely struggle with judging deadlines. Pay attention to early interactions.

And here’s something you won’t hear often (and it’s critical): judges need to match your values. If your programme prioritises innovation, don’t recruit judges known for conservative taste. If accessibility matters to your organisation, choose judges who’ve demonstrated inclusive thinking. Their judgements will reflect their values whether they realise it or not. Inviting Jeremy Clarkson to judge your national tofu award may not be a good idea.

Build your judge criteria first

Don’t just create a wishlist of dream judges and hope for the best. Build actual criteria.

Start with must-haves. What expertise is non-negotiable? What level of experience do entries require? How many hours can you realistically ask judges to commit? (More on calculating this later, but spoiler: it’s probably more than you think and technology is your friend in reducing this!)

Then identify your nice-to-haves. Geographic diversity? Mix of established and emerging voices? Specific industry sectors represented? Previous judging experience?


If what you have to offer is ‘supporting your ecosystem’ – then the very least you can offer them is a perfect process that respects their time

The distinction matters because you’ll compromise on nice-to-haves when your first choices decline. You won’t compromise on must-haves.

Consider your entry volume. Under 100 entries? Three to five judges can assess everything thoroughly without burning out. Between 100-500 entries? You’re looking at 5-10 judges, possibly working in panels. Above 500 entries, you need either a qualifying round to narrow the field or a much larger judging pool divided by category.

Don’t forget to factor in attrition. If recruitment happens less than two months before judging starts, expect 15-20% dropout. Plan accordingly. That means recruiting more judges than you technically need, or at least having backup options ready.
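
As a rough sketch of how the volume guidance and the attrition buffer combine (the thresholds mirror the ranges above and are starting points, not rules; above 500 entries you really want a qualifying round or category panels):

```python
import math

def judges_to_recruit(entries: int, expected_dropout: float = 0.15) -> int:
    """Rough panel size for the entry volume, padded for the expected dropout rate."""
    if entries < 100:
        base = 5
    elif entries <= 500:
        base = 10
    else:
        # Placeholder only: large programmes need a qualifying round or per-category panels.
        base = 15
    return math.ceil(base * (1 + expected_dropout))

print(judges_to_recruit(250))  # -> 12, i.e. recruit a couple more than you strictly need
```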

Budget affects everything too. Can you pay judges (in our experience, mostly not)? Even modest honorariums (£500-£2,000 depending on time commitment and pricing strategy) make securing commitments easier. If you’re relying entirely on goodwill, you need to offer something else of value: significant recognition, networking opportunities, genuine influence over outcomes that matter. However, if what you have to offer is “supporting your ecosystem”, then the very least you can offer them is a perfect process (reducing the ask to only an hour).

Where judges hang out

Professional associations and industry bodies are the obvious starting point. They often maintain directories of experts willing to volunteer time. The networking is built-in, which helps with recruitment conversations.

Your past winners make interesting judges, assuming they’ve developed enough experience since winning. They understand your programme from the inside, which can be valuable. But watch out for unconscious bias towards work that resembles their own winning entries. And make sure enough time has passed that they’re not judging against their peers (#awkward).

Academic institutions house experts in most fields. University faculty often have flexibility in their schedules and value service work. PhD candidates in relevant fields bring current research knowledge and fresh perspectives, though they may lack the industry recognition that attracts submissions.

Previous judges from other programmes are worth approaching, particularly if those programmes align with yours. They already understand judging processes and commitments. Just make sure you’re not directly competing for the same candidate pool (there might be opportunities there for you to partner, but if you’re rivals you might want to skip that).



Social media and professional networks reveal who’s actively engaged in your field. Someone regularly posting thoughtful commentary about your industry probably has opinions worth hearing (be sure to verify their actual expertise beyond online presence. Online engagement ≠ qualification).

Don’t overlook practitioners working outside major cities or internationally. Geographic diversity brings valuable perspectives, and video conferencing makes location largely irrelevant for most judging work these days.

Panels that improve outcomes

Diverse judging panels aren’t just good optics (and we’ve seen plenty of the “optics” panels).

Diverse panels catch excellent work that homogeneous panels miss.

When everyone on your panel shares similar backgrounds, training, and career paths, they share blind spots too. Work that challenges conventional thinking gets overlooked. Approaches unfamiliar to the panel get dismissed as “not quite right” rather than recognised as innovative.



Diversity means more than demographic representation (of course that is vital too).

Consider diversity across:

  • Career stage (emerging professionals see different strengths than established leaders)
  • Professional backgrounds (practitioners, academics, critics, administrators bring different lenses)
  • Geographic location (regional perspectives matter)
  • Industry sectors (cross-sector judges catch interdisciplinary innovation)
  • And, very importantly, race, gender, age, and other protected characteristics

Start building diversity from the outset, not when finalising your panel. If your longlist of 20 potential judges all reflect light in the same way or all have XY chromosomes, the problem isn’t your final selection – it’s your sourcing. Consider rebuilding that list.

Reach out intentionally to professional associations serving underrepresented communities, regional industry groups outside major cities, practitioners with excellent work but less public visibility. Don’t just add one person from an underrepresented group to an otherwise homogeneous panel and call it diverse.

Sure, on occasion your panel may end up looking a certain way (if you’re not paying, it will skew towards people who can afford to give their time for free).

Create an environment where all judges feel genuinely valued for their expertise, not tokenised. That means clear communication, respect for everyone’s time equally, and meaningful involvement in outcomes. If diverse judges feel like they’re there for appearances rather than genuine contribution, they won’t return.

The real costs of judge management

Judge management costs more than most organisations budget for. Not just honorariums (if you’re paying them), but coordination time that adds up quickly.

If each judge needs 30 minutes of onboarding, asks 5-10 questions during judging that take 15 minutes each to respond to properly, and requires 20 minutes of follow-up after judging concludes, you’re looking at 2-3 hours of coordination per judge. Five judges means 10-15 hours. Ten judges means 20-30 hours. There is some overlap, and you can reduce this significantly with the right processes (and that’s separate from any actual judging you’re doing yourself).

Technical support adds more. Judges encounter login issues, forgotten passwords, scoring interface questions, media that won’t load properly. Someone needs to handle that – and fast, because judges work on tight schedules and get impatient quickly (understandable when you’re giving your time for free!).

Quick calculation

A programme with 200 entries, 5 judges spending 3 minutes per entry, and coordination taking 2.5 hours per judge means roughly 50 hours of judge time and 12.5 hours of coordination time – 62.5 hours in total. At £25/hour, that’s over £1,500 in implicit costs even if nobody’s being paid.
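
To run the same sum for your own programme, here’s a minimal sketch (it assumes every judge scores every entry, and the £25/hour rate is just a notional figure for valuing everyone’s time):

```python
def implicit_cost(entries: int, judges: int, minutes_per_entry: float,
                  coordination_hours_per_judge: float, hourly_rate: float = 25.0) -> float:
    """Judging time plus coordination time, priced at a notional hourly rate."""
    judging_hours = entries * judges * minutes_per_entry / 60
    coordination_hours = judges * coordination_hours_per_judge
    return (judging_hours + coordination_hours) * hourly_rate

# 200 entries, 5 judges, 3 minutes per entry, 2.5 hours of coordination per judge
print(implicit_cost(200, 5, 3, 2.5))  # -> 1562.5
```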

Conflict resolution takes time too. Judges disagree. Judges score wildly differently. Judges miss deadlines. Judges drop out. Each situation requires careful handling to maintain relationships and programme integrity.

Some programmes pay judges nothing beyond expense reimbursement, others offer £500-£2,000 honorariums depending on time commitment. Corporate programmes sometimes pay significantly more. Celebrity judges might expect eye-watering appearance fees (if you have those budgets, lucky you!).

Don’t forget travel and accommodation if you’re running in-person judging or bringing judges to your awards ceremony. Video calls reduce these costs substantially, but some programmes value face-to-face deliberation.

Some awards management systems can support you in minimising the costs above – but be mindful that some of them charge per judge! Factor this into your budget alongside any judge payments.

Why technology affects judge participation

Your judging platform directly impacts judge willingness to participate and return.

Clunky systems frustrate judges immediately. If they spend 15 minutes just figuring out how to view entries, that’s 15 minutes not spent evaluating work. If they need to download files individually rather than viewing them in-browser, that’s friction that makes scoring entries feel like unnecessary labour.

Judges work everywhere

Busy professionals judge on trains, during lunch breaks, and late at night. If your platform only works on desktop, you’ve eliminated half their available time. Mobile-first design isn’t optional anymore – it’s essential for judge participation.

Mobile accessibility increasingly matters. Judges are busy people. They judge on trains, during lunch breaks, late at night when family’s asleep. If your platform only works well on desktop, you’ve just eliminated half the potential judging time for many people (and you really don’t want to ask them to download an app…).

But that’s not all – you really want:

Clear scoring interfaces

Clear scoring interfaces reduce decision fatigue. If judges spend mental energy figuring out how to record their assessment rather than making the assessment itself, quality suffers. Simple, intuitive scoring beats elaborate rubrics that confuse more than clarify.

In-browser rich media support

Rich media support is essential for many programmes. Images should display at proper resolution without requiring downloads to view. Video should play smoothly in-browser. Audio should work reliably. PDF documents should be readable without external viewers. When media doesn’t work properly, judges can’t judge properly.

Progress tracking

Help judges manage their own workflow. They should see at a glance how many entries they’ve scored and how many remain, and be able to review all their scores so they can revisit their choices. Without this visibility, judges lose track and miss entries.

Chat / Commenting

Collaboration features matter for programmes using panel deliberation. Judges need to leave comments for co-judges, flag entries for group discussion. If your platform doesn’t support this, you’re managing deliberation through email threads, which becomes chaotic quickly.

Export capabilities

This matters for some judges who want their own records. CSV downloads of scores, PDF compilations of comments – not essential features but appreciated by judges who work on paper (back to diversity – an older generation may favour it).

Technical problems always surface at deadlines. Your platform needs reliable support. Judges shouldn’t wait three days for answers when they’re trying to meet your scoring deadline.

Clunky judging systems don’t just frustrate current judges. They ensure those judges never return. And they complain to peers who might have judged for you in future. Your technical choices have reputation implications beyond the immediate programme. A judge may even refuse to participate if you’re using a platform they had a bad experience with on another award.

Handling dropouts and difficult situations

Despite your best planning, judges drop out. They get sick. Family emergencies happen. Work commitments change. That big project they thought would finish before judging starts runs late.

Build redundancy into your recruitment. Having one backup judge confirmed gives you options when someone drops out. Recruiting 10-15% more judges than you technically need provides a buffer.

Always have a backup

Recruit 10-15% more judges than you technically need. Having one confirmed backup judge means you have options when the inevitable dropout happens. Prevention costs nothing; scrambling at the last minute may cost you your sanity (and lead to a few white hairs).

When dropouts happen close to deadlines, assess whether you can redistribute workload among remaining judges without compromising quality. If each judge was handling 30 entries, asking them to handle 35 might work. Asking them to handle 50 probably won’t.

Communicate transparently with participants if judging gets delayed, but don’t name the judge who dropped out. That’s unprofessional and sets fire to a bridge unnecessarily.

Conflicts of interest surface regularly. Judges realise they know an entrant. Judges discover they competed for the same grant an entrant won. Judges recognise work from a colleague they dislike. Your process needs a clear, easy recusal mechanism.

Some judges score consistently higher or lower than others. That’s normal – people have different standards. Your process should accommodate this through normalisation or discussion, not by pressuring judges to match arbitrary scoring patterns.

Judges miss deadlines sometimes despite reminders. A friendly nudge usually works (some platforms can do that for you automatically). If it doesn’t, you need to know quickly so you can activate backup plans. Automated deadline tracking helps catch this early.

Occasionally judges provide inadequate feedback. If you’ve promised entrants constructive comments and a judge writes “good” on every entry, that’s a problem. Address it privately and directly, emphasising the value proper feedback provides candidates. In the worst cases, provide judges with templates for answers.

Very rarely, judges behave inappropriately – attempting to contact entrants directly, sharing confidential submission information, making discriminatory comments. Have a clear policy for addressing serious issues and don’t hesitate to remove judges when necessary.

Creating advocates

First-year judges who have good experiences often return. But you can’t just assume they will.

Thank judges properly when judging concludes. Ideally not a generic mass email. Give specific acknowledgment of their contribution. What insights did they bring? How did their perspective improve outcomes? Make them feel their work mattered.

Share results and impact. Judges want to know what happened after they scored entries. Who won? How are winners being supported? What impact is your programme having?

If you feel your judges have the time – ask for feedback on the judging process. What worked well? What frustrated them? What would make them more likely to judge again? If they give you feedback, be sure to implement or respond to their suggestions. This shows you value their input.

If multiple judges mentioned that instructions were unclear, revise them. If the scoring system confused people, simplify it. Showing continuous improvement makes judges feel heard.

Maintain relationships between competitions. Occasional updates about your organisation, early previews of next year’s programme, invitations to relevant events – stay connected (don’t stalk…stalking is always bad).



Recognise judges publicly. Their names on your website year-round (not just during judging), mentions in impact reports, acknowledgment in communications with winners. Visibility has value (so do backlinks!).

Make returning easier than starting fresh. “Would you like to judge again next year?” asked at the right moment gets better response than cold recruitment emails 12 months later.

The value of judge continuity is significant. Returning judges understand your process, maintain consistent standards across years, and provide valuable perspective on how entries evolve. First-year judges bring fresh eyes. You want both, but you definitely want some continuity.

Remember – judges are investing in you

Even if you’re paying judges, you’re almost certainly not paying them what their time is actually worth. They’re doing this because they care about the field, want to support emerging talent, or believe in your organisation’s mission.

Respect their time. Meet your own deadlines for providing information. Don’t change requirements halfway through. Make the technical side as smooth as possible. Respond quickly when they have questions.

Trust their judgement (that’s why you invited them). If someone scores entries differently than you’d expect, that’s probably valuable perspective, not a problem to fix. Let judges judge.

Handle disagreements among judges professionally. Different perspectives produce different assessments – this is normal and healthy. Your process should accommodate genuine differences of opinion without forcing artificial consensus. The greatest entries will be the ones with the fiercest conversations.

Protect judges from harassment. If entrants contest results, you handle it. Judges shouldn’t be defending their decisions to upset participants. That’s your job as organiser.



Remember that judging quality work is harder than it looks. Assessing 50 excellent entries and selecting winners requires genuine intellectual and emotional labour. Respect this.

Your judges shape how your programme is perceived. Respected judges lend credibility. Returning judges signal that your organisation treats people well. Diverse judges demonstrate commitment to inclusive excellence. Choose thoughtfully, treat them well, and invest in relationships that last beyond a single competition.

You guessed it…

Zealous makes your judging easier

But we’re not alone in the space – here are 8 others you may wish to consider (even if we would prefer you choose us!).


Let us know you want us to write more content like this with a love!

Guy Armitage is the founder of Zealous and author of “Everyone is Creative”. He is on a mission to amplify the world’s creative potential.

Frequently Asked Questions

How many judges do I need for my awards programme?

It depends on your entry volume and judging format. For programmes with under 100 entries, three to five judges typically works well, allowing for thorough evaluation without overwhelming anyone.

Programmes with 100-500 entries might need 5-10 judges, often working in panels. Above 500 entries, consider a multi-round process with an initial qualifying round to reduce judge workload. Factor in a 15-20% dropout rate if recruitment happens within two months of judging, and recruit accordingly. You might also want backup judges confirmed just in case.

When should I start recruiting judges for my competition?

Start 6-9 months before your submission deadline, especially if you’re targeting high-profile judges who book up quickly. Starting early gives you time to recover if your first choices decline, conduct proper outreach (most people need 2-3 contacts before responding), and handle the inevitable scheduling negotiations.

If you’re planning to publicly announce your judging panel to boost submissions, you’ll need names confirmed at least 3-4 months before deadline. Early recruitment also significantly reduces dropout rates—from 15-20% when recruiting late to just 5-10% when recruiting six months ahead.

Should I pay my judges or keep them as volunteers?

There’s no universal answer, but transparency is essential. High-profile industry judges often expect honorariums of £500-£2,000 depending on time commitment, with corporate programmes typically budgeting higher. Emerging professionals might value recognition and networking over payment.

At minimum, cover reasonable expenses for travel or materials. Whatever you decide, state it clearly in your initial outreach—judges appreciate knowing expectations upfront. If budget constraints mean you can’t pay what judges deserve, be honest and work harder on providing other forms of value like visibility, networking, or genuine influence over outcomes.

How do I build a diverse judging panel without it feeling like tokenism?

Start by auditing your longlist for representation gaps across race, gender, age, geography, professional background, and career stage—not just your final selections. If your list of 20 potential judges lacks diversity, rebuild it rather than adding one or two people at the end.

Actively reach out to professional associations serving underrepresented communities, regional industry groups outside major cities, and excellent practitioners without huge public profiles yet. Most importantly, create an environment where all judges feel genuinely valued for their expertise through clear communication, respect for their time, and meaningful involvement in outcomes. Diversity improves judging quality by bringing varied perspectives that catch excellent work homogeneous panels miss.

What should I do if a judge drops out close to the deadline?

First, assess whether you can redistribute their workload among remaining judges without compromising quality. If each judge was handling 30 entries, asking them to handle 35 might work. Asking them to handle 50 probably won’t.

If redistribution isn’t feasible, activate your backup judge immediately—this is why recruiting one extra option is smart planning. Communicate transparently with participants if judging will be delayed, but don’t name the judge who dropped out. For future competitions, get written confirmations with specific date commitments and check in monthly as the date approaches to catch potential dropouts earlier.

How can I make judging less time-consuming for busy professionals?

Choose a platform that works seamlessly on mobile devices, displays rich media directly in the browser without downloads, and has genuinely intuitive scoring interfaces. Be realistic about time commitments upfront—calculate hours based on entry numbers and criteria complexity, then add 20% because it always takes longer than expected.

Consider multi-round judging where initial qualifying narrows the field before detailed evaluation begins. Automated reminders help judges stay on track without requiring constant manual follow-up from you. Most importantly, provide clear instructions before judging starts and make technical support easily accessible when issues arise.

Should judging be blind or should judges know who submitted entries?

This depends on your programme goals and industry norms. Blind judging (where judges don’t see submitter names or organisations) reduces unconscious bias and works well for purely merit-based awards. However, some programmes deliberately assess entries in context—you might judge a small organisation’s work differently than a major institution’s given different resources.

If you choose non-blind judging, implement clear conflict of interest policies and easy recusal processes. Many programmes use hybrid approaches: blind initial rounds to surface the strongest work, then open final judging where context matters for determining winners. Consider what serves your specific programme goals best rather than following a universal rule.

Free Guide

Double your entries in 2025

  • 5 mistakes to avoid when organizing awards, competitions…
  • 10 tips for marketing your program
  • Easy strategies to engage with judges
  • Pricing strategies for your competitions
  • Reduce friction with candidates