ONE EVENT, TWO WORLDS: BRIDGING THE INDUSTRY-ACADEMY GAP TO DELIVER BETTER EVENTS

The event industry does not lack expertise; it lacks a system that turns thousands of successfully delivered events, research, and thought leadership into accumulated professional knowledge that helps us continually improve how events are planned, delivered, and evaluated.

Dr Mike Duignan (University of Paris 1, France)

EXECUTIVE SUMMARY

  • Events succeed operationally but often fail to generate shared learning across the industry

  • Academic research and professional practice address the same problems but in formats that rarely align

  • The divide is structural, created by incentives, timelines, and communication methods rather than capability

  • Hybrid professionals positioned along a spectrum between research and practice enable knowledge translation

  • Treating every event as both a delivery and a learning activity allows the industry to function as a true profession

INTRODUCTION

Sometimes, it feels as though the “academic” world and the “practical” world exist in parallel spaces. That is often true, but not always, and we need to do more to reconcile these worlds to develop better events, to help them achieve their objectives, and to make them more sustainable.

Imagine an event that runs perfectly on the day: gates open on time, the audience flows smoothly, suppliers deliver, no serious incidents occur, and stakeholders leave satisfied. By any operational measure, it is a success. Yet six months later, the same organisers face the same transport congestion, the same community complaints, and the same funding uncertainty. Nothing was learned beyond that particular weekend.

Now imagine the opposite scenario. A detailed evaluation report is produced: sophisticated data analysis, carefully modelled social impact indicators, and policy-relevant conclusions about visitor behaviour and legacy outcomes. The document is methodologically robust and widely cited. Yet the organisers who hosted the event never read it, and the next edition is planned exactly as before.

Both situations are common.

The first demonstrates delivery without accumulated knowledge. The second demonstrates knowledge without operational influence. Neither, on its own, improves the industry.

The event and festival field often behaves as though these two outcomes belong to different worlds. Practitioners primarily focus on making events happen under pressure and uncertainty. Researchers primarily focus on understanding what events mean and what they produce over time. Each group may perceive the other as operating in an entirely different environment: distant, even slightly incomprehensible.

This separation is not caused by hostility or lack of respect. It is produced by structure. The people designing evaluation frameworks and the people installing barriers at 6 a.m. on event day are solving related problems but operating inside different professional systems.

The result is a peculiar situation: an industry built on collaboration that can sometimes struggle to collaborate across its own “two worlds” - the industrial world and the academic one.

Events succeed every weekend, yet the same problems return because the lessons never travel.

THE PROBLEM AND/OR OPPORTUNITY

The central issue is not necessarily that research lacks relevance or that practice lacks sophistication. The problem is that knowledge within the event and festival industry is produced and used in different places, for different purposes, and in different formats. Academic work aims to create transferable understanding: explanations that apply beyond a single event. Professional practice aims to deliver a specific event safely, legally, and successfully within immediate constraints.

Because these objectives differ, the outputs often don’t align. Research often answers long-term questions, while organisers need short-term decisions. Practitioners accumulate valuable experiential insight, yet it usually remains undocumented and is therefore inaccessible to the wider professional community.

As a result, lessons learned at one event frequently remain local rather than collective.

This misalignment creates a missed opportunity. Events are repeated annually, across cities and countries, making them ideal environments for systematic learning. If operational experience and structured analysis were more closely connected, the sector could gradually build shared professional knowledge: improving safety, strengthening funding cases, supporting better policy decisions, and advocating for the industry as a whole.

Indeed, one of the reasons CEF was launched is that our industry is siloed, not just by knowledge differences but by sector, subject, and geography too - but that’s a matter for another article.

The urgency of this issue has increased as events have become more complex and more scrutinised. Organisers now operate within heightened expectations around safety, sustainability, accessibility, and demonstrable public value. Public authorities and funders increasingly require evidence of impact, while communities expect tangible benefits if they are to accept the temporary disruption. At the same time, operational environments have grown more uncertain: extreme weather, regulatory change, rising costs, and reputational risk all affect decision-making.

In such conditions, experience alone becomes insufficient, yet analysis alone is impractical. An organiser preparing a licensing application may be asked to demonstrate crowd management planning, environmental mitigation, and community engagement outcomes.

WHY DOES THIS MATTER NOW?

These demands require both operational judgement and credible evidence, and both could be strengthened further by engaging with the “academic world”. Without structured knowledge, organisers rely on repetition and intuition; without operational grounding, research struggles to influence real decisions.

The industry is therefore reaching a point where the separation between understanding events and delivering them creates practical limitations. Professionalisation increasingly depends not just on doing more events, but on learning systematically from them.

Discussions about the relationship between academia and industry are often framed as a cultural tension or disagreement: theory versus practice, abstraction versus reality. The usual solution proposed is greater “engagement”: more guest lectures, more reports, or occasional consultation. While useful, these approaches treat the issue as a communication problem between individuals.

This article instead interprets the “gap” as structural rather than interpersonal. Both groups are already competent within their own systems. Researchers are not detached from reality (certainly not the many I know personally); they are responding to institutional incentives that reward methodological rigour and generalisable knowledge.

HOW DOES THIS ADD TO OR COUNTER WHAT WE ALREADY KNOW?

Indeed, what universities and educational institutions choose to measure as key performance indicators remains a persistent concern, and understanding why existing metrics often overlook meaningful industry engagement would merit several doctoral studies in its own right.

Practitioners are not resistant to learning; they are responding to environments where immediate decisions carry operational and legal consequences.

Seen this way, the gap persists not because either side misunderstands the other, but because the sector lacks mechanisms that translate knowledge between two professional formats. The challenge is therefore not persuasion but integration: designing processes where operational activity and analytical learning occur simultaneously rather than sequentially.

KEY ARGUMENTS

  • At the centre of the divide is a misunderstanding about what counts as expertise. We often treat academic knowledge and practitioner knowledge as competing forms of authority. In reality, they address different aspects of the same problem.

    Academic knowledge is explicit. When written well, it is structured and transferable. It can explain patterns: why incidents occur, how audiences behave collectively, and what long-term outcomes events produce. Because it is codified, it can travel beyond a single location. A crowd management model developed in one country can improve safety planning in another.

    Practitioner knowledge is more tacit. It is built through repeated exposure to real conditions. It is not usually written down in an open-access public forum, but it is operationally powerful. Someone I knew summed up the reluctance to share it this way: “why should we give away our Crown Jewels - our IP - beyond the organisation?”

    This is not an uncommon view.

    Experienced event managers recognise warning signs before formal indicators appear: a queue that “feels wrong,” a crowd that is becoming compressed, or a site layout that will cause congestion once darkness falls.

    A short example illustrates the difference: Two festivals have similar attendance numbers and identical entrance widths on paper. A planner using only documentation might judge them equivalent. A veteran operations manager, however, may notice that one entrance faces a transport drop-off point while the other requires a gradual walk-in approach. The second produces staggered arrivals; the first produces surge arrivals. The operational risk is therefore entirely different. Later modelling may confirm this, but the practitioner recognised it immediately through experience.

    Neither form of knowledge is sufficient alone. Experience can identify a problem but may not generalise beyond that event. Research can explain a pattern but may not detect it quickly enough during live operations.

    When separated, the sector alternates between two weaknesses: reactive decision-making and unused insight. When combined, each strengthens the other. Practice identifies what needs explaining; research explains what needs preventing.

    The issue, therefore, is not whose expertise is superior. It is that the events industry has not consistently connected two complementary knowledge systems that address the same operational realities from different directions.

  • The persistence of the divide is less about attitudes and more about incentives. Each group is rewarded for success, but success is defined differently.

    For practitioners, success is more immediate and visible. The event opens, runs safely, satisfies audiences, and closes without major incident. Contracts are renewed, licences approved, and reputation maintained. The evaluation of performance is tied to delivery.

    For academics, success is cumulative and documented. A study must be rigorous, reviewed, and published. Its credibility depends on methodological care and time for analysis. The evaluation of performance is tied to contribution to knowledge.

    These reward systems unintentionally discourage collaboration. A researcher who rushes findings to meet an event timeline risks producing unreliable conclusions. An organiser who pauses planning to collect detailed data risks missing operational deadlines. Both are acting rationally within their own professional structures.

    Consider a typical post-event evaluation. An organiser may produce a report within weeks because funders require it. The report contains attendance figures, economic estimates, and stakeholder feedback. It satisfies accountability requirements but rarely influences broader practice. Meanwhile, a researcher studying the same event may publish findings two years later. The analysis may reveal behavioural patterns, social outcomes, or safety insights, yet by then the event has already repeated multiple editions without those lessons.

    The consequence is a learning gap. Events recur annually, but the knowledge cycle moves more slowly than the operational cycle. The industry therefore accumulates experience but not always understanding.

    This misalignment explains why many well-run events still confront recurring issues: transport pressures, community resistance, crowd congestion, or volunteer retention. The sector is not failing to act; it is failing to synchronise learning with delivery.

    Effective professionalisation requires aligning incentives so that knowledge production and operational timelines overlap. Without this alignment, even high-quality research and highly competent practice remain parallel activities rather than a shared system of improvement.

  • Even when research and practice address the same issue, they can fail to connect because they describe reality differently. The barrier is linguistic rather than intellectual.

    Academic communication relies on conceptual clarity. Researchers use categories, models, and defined terminology to ensure precision and comparability. This allows findings to be applied across contexts. However, this format rarely matches the decision environments faced by organisers, who operate through briefings, site plans, and operational instructions.

    A safety researcher might analyse “risk perception and behavioural response to density conditions.” A safety officer needs a clear instruction: close Gate B when crowd flow exceeds a certain rate. Both refer to the same phenomenon, but only one is immediately usable on event day.

    The problem becomes visible during planning meetings. Research may show that visitors experience discomfort before dangerous crowd density occurs. Yet if that insight is expressed only as a conceptual discussion, it does not translate into an operational trigger. When translated into a practical rule, for example activating diversion routes once queues extend beyond a visible marker, the same knowledge becomes actionable; a brief sketch of such a rule appears at the end of this argument.

    A short example illustrates this: Studies of queue psychology consistently show that uncertainty causes frustration more than waiting time itself. An organiser unaware of this may focus solely on increasing entrance speed. An organiser informed by the research instead installs visible signage indicating expected wait times and the reason for delay. The queue length remains unchanged, but complaints drop significantly. The knowledge was not new; its format was.

    This demonstrates that the industry’s difficulty is not lack of evidence. It is lack of translation between knowledge formats. Research speaks more in explanations. Practice speaks more in instructions. When the two are connected, the same insight can shape both understanding and operations.

    The professional task, therefore, is not merely producing more studies or more operational manuals. It is designing mechanisms that convert explanation into decision-making tools and convert operational experience into structured knowledge.
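
    To make the idea concrete, here is a minimal sketch in Python, purely illustrative rather than drawn from any real event plan: a research insight (queue growth and rising flow precede dangerous density) expressed as a pre-agreed trigger rule a control room could act on. The thresholds, gate names, and function are all hypothetical.

        # Purely illustrative: a research insight expressed as a pre-agreed
        # operational trigger rule. Thresholds and gate names are hypothetical.
        FLOW_THRESHOLD = 250   # people per minute through one gate (assumed)
        QUEUE_MARKER_M = 40    # queue length in metres at a visible marker (assumed)

        def entrance_actions(flow_per_min: float, queue_length_m: float) -> list[str]:
            """Translate measured conditions into pre-agreed operational steps."""
            actions = []
            if queue_length_m > QUEUE_MARKER_M:
                actions.append("Activate diversion route; update wait-time signage")
            if flow_per_min > FLOW_THRESHOLD:
                actions.append("Close Gate B; redirect arrivals to Gate C")
            return actions or ["No action: conditions within planned limits"]

        # A surge-arrival pattern triggers both responses
        print(entrance_actions(flow_per_min=310, queue_length_m=55))

    The point is not the code but the shape of the artefact: a threshold agreed during planning, observable on the day, and attached to a named action.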

  • Where collaboration does work, it rarely happens by accident. It typically depends on people who are comfortable operating across both environments. These are often described as “hybrid” professionals, but this should not be interpreted as a separate category of worker. It is better understood as a spectrum: some individuals are more academically oriented, others more operationally oriented, and many occupy positions between the two.

    In practice, the boundary is already porous. Numerous researchers began their careers working on events and retain a strongly practical outlook. Equally, many experienced organisers have undertaken postgraduate study, teach part-time, or regularly engage with evidence and evaluation. These individuals are not exceptions; they are translators. Their value lies in recognising what each side needs from the other and adjusting knowledge accordingly.

    When such people are present within teams, knowledge becomes usable. A production manager with analytical training begins documenting operational decisions over time rather than relying primarily on past experience. A researcher embedded within an organising team reframes research questions to address real planning dilemmas rather than purely academic interests. The same information now serves both operational improvement and broader understanding.

    Consider event impact evaluation. Organisers are frequently required to demonstrate social or environmental outcomes, yet traditional frameworks can appear complex and time-consuming. Someone familiar with both environments can convert them into manageable indicators: volunteer retention rates, repeat attendance from local residents, or measurable reductions in waste streams. The organiser gains credible evidence for funders, while comparable data becomes available for wider learning.

    A similar effect appears in risk management. Operational teams often develop effective safety practices through experience, but these remain localised. When a team member with analytical awareness records why a routing decision reduced congestion, or how a communication protocol prevented confusion, the insight becomes transferable. What was once a site-specific solution becomes knowledge others can use.

    These individuals function less as intermediaries and more as infrastructure. They allow ideas, methods, and experience to circulate across organisations. Without them, research remains external and practice remains isolated. With them, events can better accumulate shared learning across locations and years. The sector’s development therefore depends not on choosing between academic and practitioner identities, but on ensuring that organisations contain people positioned at multiple points along this spectrum.

CONCLUSIONS

The event and festival industry often sees delivery and understanding as separate achievements — but they are two sides of the same coin. One group primarily makes events happen; the other primarily explains what they mean. Both are necessary, yet when they operate independently the sector repeats a familiar pattern: competent events alongside limited collective learning.

The consequence is subtle but important. Each year thousands of events are planned, delivered, and evaluated, yet the same operational challenges recur across places and generations of organisers. Safety issues reappear, community relationships reset, and funding justifications are repeatedly rebuilt. This does not indicate poor practice. It indicates that knowledge is not accumulating in a systematic way.

The gap between research and practice is therefore not a disagreement about expertise. It is a structural gap in how knowledge moves. Experience remains local, and analysis remains external. Professional fields mature when lessons from one context improve performance in another. Medicine, engineering, and public health advance because practice feeds evidence and evidence feeds practice. Events have not consistently developed that feedback loop.

The important shift is to recognise that delivery and learning are part of the same activity. An event is not only a cultural or commercial product; it is also a site of knowledge production. Each decision about layout, communication, or engagement produces information that can improve future events if captured and interpreted.

The industry’s future depends less on delivering ever more events and more on increasing what is learned from each one. The goal is not to eliminate the differences between academic and practitioner roles, but to connect them so that implementation and explanation continually reinforce each other.

Events succeed every year, yet the same problems return — not from poor practice, but from knowledge that never accumulates.

PRACTICAL ACTIONS

Bridging the gap does not require large structural reform. It requires deliberate practices that allow operational activity and learning to occur together. The aim is to ensure that every event produces both a successful experience and usable knowledge.

1. Define shared questions before planning begins
Before detailed operational planning, organisers and analysts should identify a small number of decisions the event must get right — for example, transport flows, neighbourhood relations, or volunteer retention. Research activity should then focus directly on those decisions rather than on general interest topics.

2. Embed evaluation into operations rather than adding it afterward
Evaluation is often treated as a reporting obligation completed once the event has finished. A more effective approach is to integrate data collection into routine processes: ticketing data, steward observations, waste monitoring, and stakeholder briefings. When information is captured during delivery, it becomes both accurate and operationally relevant; the sketch after this list shows one lightweight way to record it.

3. Assign a knowledge responsibility within the team
Teams commonly allocate roles for safety, production, and finance, but rarely for learning. Appointing a staff member responsible for documenting decisions, recording unexpected issues, and coordinating with researchers ensures that operational insight is not lost once the site closes. This does not require a new department — only recognition that learning is part of delivery.

4. Produce two outputs from the same evidence
After an event, the same data should generate two forms of output: a concise operational briefing for organisers and stakeholders, and a structured analysis suitable for wider dissemination. The first improves the next edition; the second allows others to benefit. This dual-output approach prevents evaluation from becoming either purely academic or purely administrative.

5. Encourage mobility between education and industry
Organisations benefit from staff who teach occasionally, supervise student projects, or participate in applied research. Educational programmes benefit from instructors who remain involved in live events. These exchanges do not need to be permanent. Short placements, guest teaching, or collaborative projects are sufficient to align questions and expectations.

6. Align funding requirements with learning outcomes
Public authorities and funders can accelerate change by requesting evidence not only of attendance or economic impact but of operational learning. For instance, applications might require organisers to explain what was learned from previous editions and how planning has changed as a result. This shifts evaluation from justification toward improvement.

7. Establish repeatable metrics across editions
Consistency matters more than complexity. Tracking a small set of indicators, such as crowd flow rates at key points, local resident participation, or volunteer return rates, allows events to compare performance over time. Reliable comparison turns individual experiences into accumulated knowledge, as the sketch below illustrates.
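
A minimal sketch in Python, purely illustrative, of how actions 2 and 7 can share a single artefact: observations recorded during delivery in one simple, repeatable format, then compared across editions. The indicator names and figures are hypothetical.

    # Purely illustrative: one lightweight record format captured during
    # delivery (action 2) doubles as the input for cross-edition
    # comparison (action 7). Indicators and figures are hypothetical.
    from collections import defaultdict

    observations = [
        # (edition, indicator, value) - logged during the event, not after
        (2023, "volunteer_return_rate", 0.41),
        (2023, "peak_flow_gate_a_per_min", 280),
        (2024, "volunteer_return_rate", 0.52),
        (2024, "peak_flow_gate_a_per_min", 240),
    ]

    by_indicator = defaultdict(dict)
    for edition, indicator, value in observations:
        by_indicator[indicator][edition] = value

    # Compare the first and latest editions for each indicator
    for indicator, editions in sorted(by_indicator.items()):
        first, latest = min(editions), max(editions)
        change = editions[latest] - editions[first]
        print(f"{indicator}: {editions[first]} -> {editions[latest]} ({change:+.2f})")

Nothing here requires a data team; the discipline lies in using the same indicator names at every edition so that the comparison stays meaningful.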

Taken together, these actions create a simple principle: every event should improve the next one, and ideally improve events beyond its own organisation. When learning is treated as an operational responsibility rather than an optional extra, research and practice begin to function as parts of the same professional system.

IMPLEMENTATION CHALLENGES

Implementing these practices is not without difficulty. Event teams already operate under financial pressure, limited staffing, and demanding timelines. Allocating time to documentation or collaboration can appear unrealistic when immediate delivery risks dominate attention. Smaller events in particular may lack capacity to collect or analyse information systematically.

There are also institutional barriers. Academic research still depends on ethical approval processes, publication timelines, and methodological standards that cannot always align with fast planning cycles. Similarly, organisers may be reluctant to share operational data because of commercial sensitivity, reputational risk, or contractual obligations.

A further challenge lies in expectations. Not every event can produce comprehensive evidence, and not every research project can produce immediate operational guidance. The objective is not perfect integration but gradual improvement. Even modest steps — recording decisions, repeating simple measures across editions, or involving a researcher in a single planning stage — can begin building a learning culture.

The goal is therefore pragmatic: establishing consistent pathways for knowledge exchange while recognising the real constraints under which both researchers and organisers operate.

AUTHOR(S)

Dr Mike Duignan, University of Paris 1, France

Mike is Founder and Chair of the Centre for Events & Festivals (CEF), where his work focuses on bringing together leading thinkers from both academia and industry to develop research, guidance, and practical resources that support the delivery of high-quality events and festivals. He is also the CEO of Legacies, the official consultancy arm of CEF, which translates research and professional expertise into applied advice for organisations and policymakers.

Alongside this, he is Professeur des universités at University of Paris 1 (France). He serves as Editor-in-Chief of Event Management Journal, a leading academic journal dedicated to the study and analysis of events and festivals, founded in 1993 and based in New York, and is the editor of Routledge’s book series How Events Transform Society.

Over the past 15 years his career has spanned both higher education and the private sector. He has held roles as Director of Research, Intelligence and Education at Trivandi; Head of Department and Reader at the University of Surrey (UK), where he also directed the UK Olympic Studies Centre; and Associate Professor at the University of Central Florida (USA). For the past five years he has also been part of the Centre for Science and Policy’s international network of scholars at the University of Cambridge (UK), providing policy advice to governments and NGOs.

Disclaimer
The views and insights expressed in this article are those of the author(s) and reflect their research and professional expertise. They do not represent the views of the Centre for Events & Festivals CIC or its partners.