If you've included an online component in your stakeholder engagement or public consultation process, you will have made decisions about the best way to structure it. There are various schools of thought and preferences, and individual software solutions will often promote one way (their way!) over another.
Our software Darzin is used to draw together feedback from various channels, including online engagement, so we tend to be an independent and interested observer of these processes rather than a promoter of one view or the other. We've observed different ways clients handle these questions, and have seen some successes and some failures. (How we define success and failure is a whole other topic to explore!)
I thought it would be interesting to explore this topic with a broader audience to understand what experiences others have had in relation to the three considerations listed below. Are there other design factors that determine the success or otherwise of your online engagement? Can you relate to these three challenges, and which option, A or B, do you tend to favour?
I. Anonymous vs named responses
A. In favour of anonymous
- I can freely comment on issues that I would otherwise not be able to publicly engage in (due to my work or personal relationships).
- More people engage in the discussions.
- I don’t need to censor my comments.
B. In favour of named responses
- I have to censor my thoughts – I cannot be unreasonable, extreme, or offensive to other commentators. (Anonymity lets me hide and take more extreme positions than I really hold, because I can and because it can be fun.)
- Having my name against my comment means that I feel more of an obligation to read and understand other people’s comments, and present a coherent, reasonable argument that I am happy to put my name to.
- More 'real people', fewer online trolls hijacking the discussion.
II. How much control do we retain over Discussion Topics?
A. In favour of a tightly controlled set of discussion topics
- The discussion stays on topic, and we get the information we really need.
- We collect information that we can act on. If the discussions veer too far off topic, people may mistakenly get the idea that we can address those other issues as well as, or instead of, the issues we set out to discuss.
- It is easier to follow a well thought out, clearly defined set of discussion topics (than one that meanders all over the place).
B. In favour of allowing people to create their own topics
- If people can create their own topics, they can discuss the issues that are important to them and you get a better understanding of the breadth and depth of the topic.
- People are more engaged if the discussion topics closely match their interests. A badly worded question, or a question that is too narrow in its focus, frustrates me and either stops me from commenting altogether or makes me more aggressive in my response.
III. What do we do with the Feedback data?
A. In favour of a general broad-level assessment
- It’s more important to understand the reach of the online engagement – how many people it has reached, and how many have actively participated.
- A broad-level assessment allows me to quickly draw some broad themes and conclusions that I can use in my decision processes.
- The feedback is simple enough for me to draw broad conclusions without needing to go through a detailed analysis process.
B. In favour of detailed/qualitative analysis of all feedback
- I like to incorporate feedback from online channels in with my other consultation feedback for analysis and reporting, to present a more comprehensive picture of the feedback.
- I need to be able to demonstrate transparency in how the feedback has informed decisions.
- Detailed feedback warrants detailed analysis and reporting. Just looking at numbers of impressions and clicks is not enough.
What has your experience been with designing the online component of your stakeholder engagement or public consultation process in relation to these three challenges?