In the late 2000s, writer and futurist Clay Shirky published a handful of books describing what we now know as “Web 2.0.” Instead of simply a resource for searching or reading, the internet had become dynamic, user-generated, and collaborative. In his seminal book Cognitive Surplus, Shirky observed that this collaborative layer fundamentally changed the economics of generating and sharing knowledge: humanity built Wikipedia far more quickly than Encyclopaedia Britannica, and produced a far more useful resource. At the same time, early in the Obama Administration, a team championed an idea termed “ExpertNet,” a simple concept that the government can and should draw on the best ideas available. The internet now enabled more rapid information sharing; instead of hiring a single team of consultants, the government could challenge the best minds to develop innovative solutions. Challenge.gov may be the most visible collaborative e-democracy initiative in the federal government.
This essay examines how this notion of civic participation rises to the definition of collaboration, and how sustainable and effective digital tools such as Challenge.gov have been and can be. Put simply: why was ExpertNet shut down while Challenge.gov persists, and what does that say about the opportunity for online expert engagement at the federal level? By comparing ExpertNet and Challenge.gov through both legal and theoretical lenses, the analysis reveals that the internet enables a successful competition-based, rather than consensus-based, model for collaborative government: a model proving valuable for innovation but potentially damaging to social capital.
Technique 1: ExpertNet & the Problem of the Federal Advisory Committee Act
From 2008 to 2010, Professor Beth Noveck was part of the team tasked with creating the administration’s “open government initiative” (Noveck, 2009). This involved various technology strategies, including open data and greater investment in science, as well as a commitment to use technology to better engage experts from throughout the country in policymaking. Building on Shirky’s idea of a “public commons,” Noveck’s team imagined an “ExpertNet”: a “directory of directories” for public officials to find and connect with academic (or subject-matter) experts outside the typical confines of a consulting agreement. The administration explained its agenda: “We believe that everyone has expertise, experience and enthusiasm which, if shared in manageable ways, will help us make smarter decisions together… We want to make sure that everyone who is interested and has something relevant and useful to share has an opportunity to participate.”
Indeed, the White House followed collaborative principles in designing the platform, launching a public wiki that anyone could edit to shape its design and architecture. (Unfortunately, an archive of the wiki is unavailable, so reports are second-hand.) A parallel could be drawn to LinkedIn or even Reddit, but for and by the public sector. ExpertNet was designed to become the online community where academics and subject-matter experts could freely engage with policymakers on problems.
Legal & Consensus Issues
The ExpertNet initiative found short-lived homes within the Department of Defense and the Food and Drug Administration. Professor Noveck explains that these departments leveraged this thinking, the “directory of directories” approach, to connect their own leaders and experts within the organization, with some success in driving cross-team collaboration (Noveck, 2015, p. 228). These projects, however, like the White House’s broader ExpertNet vision, were hampered by federal legislation. The Federal Advisory Committee Act (FACA), according to Noveck and others, handicaps the ability of federal officials to engage directly with outside groups of experts. FACA was enacted in 1972 to establish guidelines and ethical standards for groups advising federal operations.
In a way, FACA’s process requirements may align with collaborative democrats’ recommendations for Joint Fact Finding (JFF). That said, even JFF advocates would concede that the process has limited scope, can be expensive, and is time-intensive. That runs counter to the Cognitive Surplus thinking open government advocates were driving toward. Relying on ExpertNet alone could also threaten democratic legitimacy if it is not representative of the people, which is in fact part of the point of the FACA rules.
A response would be that the scope of an ExpertNet engagement would not be as significant as a full JFF, and indeed the smaller-scale engagement opens up access to more participants, where consensus may not even need to be a goal. Conflict might be preferable (Peterson et al., 2005). Indeed, derisive dialogue on Reddit comes to mind, raising questions about standards of discourse and levels of comfort (Leach, 2006). Further, one could argue that a consensus-based approach may not make sense for ExpertNet, or for the JFF aspects of collaboration, since the scientific method may be preferable. These questions about the nature of dialogue (consensus or conflict) leave the potential outcomes of this technique unclear.
Online Deliberative Theory
Even though the fully fleshed-out ExpertNet did not come to life, it is still worthwhile to consider how it might have fit into best practices for online deliberation. Korthagen and van Keulen (2020) conducted an extensive study of 22 online deliberation processes and identified seven ideal criteria for successful online deliberation: 1) a combination of online and offline engagement; 2) a connection to a formal decision-making process; 3) clear communication; 4) feedback loops; 5) discussion and voting; 6) sustainability; and 7) broad mobilization.
The connection to a formal decision-making process seems like the most striking shortcoming in this approach. (Many of the others may not apply.) First, open government advocates such as Professor Landemore, who helped with the deliberative democracy project in France, suggest these rules may dampen the inclination of government officials or experts to engage with a platform like ExpertNet. In effect, FACA imposes an intensive administrative burden on the agency to host convenings and work with experts toward a “consensus,” though that definition is unclear and problematic. As Noveck points out, “But the ambiguity about when FACA should and should not apply has increased restraint among agency lawyers. They fear litigation in the event consultation is deemed improper and in violation of FACA’s rules on how a group is convened or how it operates.” In this way the legal burden can also become a cultural hesitation, leading to the demise of the ExpertNet agenda.
Second, looking at participatory budgeting projects internationally, monetary buy-in seemed essential to encourage participants to vote. When working with experts, the issue seems even more pressing: experts may already have consultancies; they may have school affiliations that get in the way; or they may simply not want to work for free. Incentives and game mechanics are critical in creating meaning for participants (Gastil & Broghammer, 2020). More recently, subsequent projects have built in reward mechanisms including public attention, online recognition, and sponsorships.
Ultimately, ExpertNet ran afoul of legal, cultural, and theoretical concerns. There are no outcomes to speak of, aside from the comparison it provides for future examples. One such stark contrast is Challenge.gov, which presents a less radical, more costly, but more sustainable approach to expert collaboration in the digital age.
Technique 2: Challenge.gov
Challenge.gov followed intentions similar to ExpertNet’s in calling for “open innovation within government.” Historically, there has been a push to flip traditional corporate procurement cycles from top-down mandates to crowd-sourced competitions (Mergel, 2018), and Challenge.gov brought this thinking into the White House during the Obama administration. “Challenges—or contests—are novel methods to engage external stakeholders in the problem-solving, solution design, and policy implementation processes” (Mergel & Desouza, 2013). For instance, an agency such as NASA may post a request for experts to design next-generation networks, and scientists from anywhere compete, all while the public can view and comment. Since its inception, the platform has hosted over 100 challenges with over 700 federal sponsoring teams.
Legal & Consensus Issues
Challenge.gov was housed in the General Services Administration (GSA). Public-sector buy-in is critical to collaborative projects generally, and GSA is operational and cross-cutting for every federal agency; given its central role in contracting, it was well positioned to shoulder the operational brunt of the program. This gave the program more substantial staying power. Legally, Challenge.gov took a more iterative approach to engaging with outside experts: instead of envisioning a new communication layer between agencies and advisors, it made small changes to existing procurement rules and drove adoption through a top-down methodology emboldened by the White House. That said, confounding expectations, political changes in the White House did not affect agency participation in the challenges (Hameduddin et al., 2020). “Consensus” is a legally loaded term for online deliberation, and one that Challenge.gov sidestepped.
Online Deliberation Theory
Theoretically, Challenge.gov seems to meet the ideals of online deliberation, though data on offline engagement is lacking. Of these ideals, the most interesting are the connection to a formal institution, sustainability, and broad mobilization: most other e-governance platforms have launched and faded away many times by now (Aichholzer & Rose, 2020). Challenge.gov hosted over 70 challenges from 2011 through 2014, and in 2020 the site lists over 80 archived projects from that year alone. Given the legal positioning, its institutional connection and sustainability are clear. That said, not unlike participatory budgeting projects, the effort required of the convening public agency can be significant (Aichholzer & Rose, 2020). According to a study of challenges from 2011 through 2015, it took federal agencies an average of two years to go through the entire challenge process. This suggests that even with a proven online toolkit and executive buy-in, online participation may be difficult to run often.
Accordingly, the issue of broad mobilization (and thus representativeness) should be examined. Although these techniques are expert-focused, deliberative democratic theory demands consideration of representativeness and inclusion in online participation (Leach, 2006). It might be argued that an online system like this is simply another way for the “usual suspects” to play their role in policy or procurement; indeed, this could be an area of future research. What is notable, however, is that this open competition approach is still fundamentally democratic, although not in the typical ballot-box sense. Another form of decision-making for deliberative democrats could be “open assemblies,” where individuals self-select to attend certain convenings, discuss, and then vote (through various means). The fact that participants are self-selected but the process is transparent provides democratic legitimacy, according to Landemore (2021). Previous Challenge.gov announcements signal a public voting and commenting function, which seems dormant at this writing. Yet by strengthening that capacity and adding more transparency to the process, Challenge.gov could become a reliable (albeit small) online participation platform for better policymaking.
Both techniques suggest that tapping into the cognitive surplus requires more than a wiki or a webform. ExpertNet failed to launch, but it illustrated how fundamentally federal policy works against less formal collaboration. Challenge.gov was positioned well within the federal ecosystem, but it still required bureaucratic maneuvering to succeed, and it imposed its own bureaucracy that can impede innovation. What is most notable is that these examples entirely overlook the advice consistent across the literature: online and in-person collaboration must go together (Korthagen & van Keulen, 2020). The promise of something like ExpertNet lay in more informal, social connections, a key ingredient in successful collaborations (Ansell & Gash, 2008). Even with the instrumental outcomes of these online participation tools, and others like them, one must wonder whether we can achieve their intrinsic goals (Leach, 2006), lest we miss an opportunity to make the whole greater than the sum of its parts.
- Aichholzer, G. & Rose, G. (2020) “Experience with Digital Tools in Different Types of e-Participation.” European E-Democracy in Practice. Springer, pp. 93-140.
- Ansell, C., & Gash, A. (2008). Collaborative governance in theory and practice. Journal of Public Administration Research and Theory, 18(4), 543-571.
- Chopra, A. (2010). ExpertNet Wiki: An Update. White House Blog Archives. Retrieved from https://obamawhitehouse.archives.gov/blog/2010/12/29/expertnet-wiki-update
- Dukes, E., Firehock, K., Birkhoff, J. (2011). Community-based collaboration: Bridging socio-ecological research and practice. United Kingdom: University of Virginia Press.
- Gastil, J. & Broghammer, M. (2020). “Linking Theories of Motivation, Game Mechanics, and Public Deliberation to Design an Online System for Participatory Budgeting.” Political Studies.
- Hameduddin, T., Fernandez, S., & Demircioglu, M. A. (2020). Conditions for open innovation in public organizations: evidence from Challenge.gov. Asia Pacific Journal of Public Administration, 42(2), 111–131. https://doi.org/10.1080/23276665.2020.1754867
- Korthagen, I. & van Keulen, I. (2020) “Assessing Tools for E-Democracy: Comparative Analysis of the Case Studies.” European E-Democracy in Practice. Springer, pp. 295-327.
- Landemore, H. (2021). Open Democracy: Reinventing Popular Rule for the 21st Century. Kindle Edition.
- Leach, W. (2006b) “Theories about Consensus-Based Conservation.” Conservation Biology 20(2): 573–575.
- McDermott, R. (2013). U.S. implements some Open Government Partnership commitments, ExpertNet falls by the wayside. FierceGovernment.
- Mergel, I. (2018). Open innovation in the public sector: drivers and barriers for the adoption of Challenge.gov. Public Management Review, 20(5), 726–745. https://doi.org/10.1080/14719037.2017.1320044
- Mergel, I., & Desouza, K. C. (2013). Implementing Open Innovation in the Public Sector: The Case of Challenge.gov. Public Administration Review, 73(6), 882–890. https://doi.org/10.1111/puar.12141
- Noveck, B. S. (2009). Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful. Brookings Institution Press.
- Noveck, B. S. (2015). Smart Citizens, Smarter State: The technologies of expertise and the future of governing. Harvard University Press.
- Peterson, N., Peterson, M., & Peterson, T. (2005). “Conservation and the Myth of Consensus.” Conservation Biology, 19(3): 576–578.
- Rossini, P. & Stromer-Galley, J. (2020) “Citizen Deliberation Online.” The Oxford Handbook of Electoral Persuasion.
- Shirky, C. (2010). Cognitive surplus: Creativity and generosity in a connected age. Penguin Press.
- White House reveals plans for ExpertNet wiki. (2010). InformationWeek Online. Retrieved from ProQuest.