Arms Control through Societal Verification: Invaluable or Ineffective?

In “Societal Verification: Leveraging the Information Revolution for Arms Control Verification,” authors Hinderstein and Hartigan propose a rather exciting idea: that arms control verification, like telecommunications or online shopping, could be transformed by the advent of the “Information Age.” Certainly it’s not an entirely novel concept; H&H mention the example of Internet users assisting in the analysis of vast amounts of satellite imagery for various purposes. Nor is their proposal ill-timed: the authors cite the transition to arsenals with fewer individual warheads, as well as the need for multilateral verification, as factors that will drive greater demand for verification.

Yet upon closer inspection, such an approach may not be as effective as it appears. The authors lay out a number of potential uses for societal verification, consisting primarily of “defining patterns,” “looking for shifts,” “identifying outliers,” “filling in blind spots,” and “detecting signals.” Of these, the uses dealing with outliers and signals appear the most readily suited to societal verification; asking a large group of people to be on the lookout for a specific item or activity (as in the DARPA red balloon challenge) could be highly effective. However, establishing patterns, especially around a heavily guarded facility such as an enrichment plant, could be considerably more difficult. If our societal “informants” are to be employed in the very casual way that this approach necessitates (otherwise, we’re simply hiring less-skilled inspectors), they’re unlikely to be willing to spend the time or effort required to map out specific goings-on or movements over a long period of time. Detecting changes in these patterns would present similar problems, and would additionally require that the patterns themselves be supplied to the informants, introducing information-leakage and confidentiality concerns.

The authors’ own list of challenges that such programs would face offers yet more discouragement. Validation is a particularly worrying issue; putting one’s trust in a single report of an inconsistency or treaty violation is a dicey proposition indeed when nuclear issues hang in the balance, and while overlapping reports can partially mitigate this, the potential for “disinformation campaigns,” as the authors term them, seems overwhelmingly high. Indeed, the monitored country need only hire a few of its citizens to relay cross-corroborating false reports to ruin such a system. Interference is also a concern: the authors mention that in some countries Internet access could be temporarily restricted, thwarting continual verification efforts, while in others (such as China) governments may closely track users’ online activities and punish those who break the law. This concern in particular receives far too little consideration. If reporting on nuclear activity could be considered treason by a particularly authoritarian regime (and really, is relaying information on your own country’s military activity to a rival’s government in exchange for compensation not tantamount to espionage?), who is going to risk imprisonment or even death for what would inevitably be a very small reward? Additionally, some of the countries whose possession or acquisition of nuclear weapons most concerns the Western world are so restrictive that very few of their citizens have the sort of open Internet access this method requires (North Korea is a good example).
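To make the validation worry concrete, consider a minimal sketch (hypothetical names and thresholds throughout) of the sort of corroboration rule a verification agency might apply, escalating a reported anomaly only once several apparently independent sources agree:

```python
from collections import defaultdict

MIN_SOURCES = 3  # hypothetical threshold for escalation

def escalate(reports, affiliations):
    """reports: list of (reporter_id, site_id) pairs.
    affiliations: dict mapping reporter_id -> group label (employer,
    hometown, etc.), used as a crude independence check."""
    by_site = defaultdict(set)
    for reporter, site in reports:
        by_site[site].add(reporter)
    flagged = []
    for site, reporters in by_site.items():
        # Count independent *groups*, not raw reporters: a state that
        # hires a few of its own citizens to file matching false
        # reports would otherwise pass this check trivially.
        groups = {affiliations.get(r, r) for r in reporters}
        if len(groups) >= MIN_SOURCES:
            flagged.append(site)
    return flagged
```

Even this check fails the moment the colluders’ shared affiliation is invisible to the system, which is exactly the cross-corroborating false-report problem described above.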

The above concerns should not be interpreted as a total dismissal of societal verification. Simple crowd-sourced analysis of satellite imagery has the potential to be of great value for arms verification, as does the “outlier”-spotting method (provided the aforementioned interference concerns are overcome). Certainly, given the increased demand for verification and the vast resources traditional verification approaches require to meet it, we cannot afford to overlook any potential solution. I believe that societal verification has significant potential, but we must not overlook its weaknesses.

My questions to you:

  1. Do you believe that societal verification can overcome its many challenges and become a trusted verification method?
  2. Are there novel approaches to arms control that societal verification offers that were not discussed in this paper?
  3. Would you be willing to participate in a societal verification program in your own country? Another country?
  4. Do you believe that the recent spate of online privacy concerns endangers societal verification?

Elliot

23 thoughts on “Arms Control through Societal Verification: Invaluable or Ineffective?”

  1. I agree that societal verification could be a useful complement to more traditional verification techniques, but this article brings up many of the important challenges. I think volume and validation are two of the most important problems we need to be wary of: it would be hard both to sift through all of the extra data this would bring in (volume) and then to verify the quality of that information (validation). Societal verification would be used to “enhance the overall picture of a particular country at a snapshot in time.” However, it is important to note that the countries for which this would be most useful are the ones with the least access to cell phones and electricity (e.g., North Korea) or under the tightest government control (e.g., China). I’m not sure how viable this technique would be in those countries. I also worry that some governments might become more suspicious of their citizens and therefore conceal their programs even further. If we can find a way to streamline all of this data for easier, more convenient processing, then I think it is a technique we should pursue. Just some thoughts!

  2. I think that the points you both raise (Elliot and egelb) about the challenges societal verification faces as a helpful form of arms verification are indeed correct. The countries where such widespread verification techniques would be most useful are the ones governed by oppressive regimes that prevent their people from using technology of this sort. Thus, I believe a more apt discussion of this technology would focus on applying societal verification toward goals that are more realistic. For example, one thing that Prof. Glaser has mentioned in lecture is that cell phones can be used as radiation detectors in cases like the Fukushima reactor explosion. If research focused on making this technology usable on a mass scale for things like radiation detection, it could play a very significant role in ensuring that people live in safe conditions, helping inspectors locate chemical contamination and the like. The challenges posed to societal verification for nuclear arms verification, however, make it nearly impossible to use in that realm.

  3. I think that one of the main problems with using social media for verification purposes is the time and resources that will be necessary to verify the verification. How will we really be able to separate those who have sincere motives from the inevitable people who just want to cause trouble? Things like Facebook and Twitter, as H&H mention, are such open sources that the information would hardly be contained—once it is out there, anyone can access it. Another avenue mentioned is using smartphone apps as verification sensors. However, as Elliot mentioned, there is a very real chance that one small, misinformed detail could encourage a decision that would prove disastrous in the end.
    Another issue is the potential for panic that could be caused by asking citizens to be on the lookout for certain things. On the one hand, people would have to be informed about what it was they were looking for (the “language” barrier). On the other hand, what is to say that this information won’t cause people to “cry wolf” and become extremely paranoid? This could be more troublesome than helpful, and again would require ample resources to make sure that the information is accurate.

  4. I agree with the current discussion about the limitations of societal verification, especially validation and interference. To create a system that incentivizes participation, discourages false reporting, is at least mostly immune to external interference, can be validated with minimal effort, and produces usable results may not be possible, especially for issues as important as arms control. What is interesting to me is the question of who would control such a system and what would be necessary to implement it. There must be some central organization, like MIT during the Red Balloon Challenge. Government is the first thing that comes to mind, as the major established organization of most countries. Common examples, like those in the article referring to instances in China and Egypt where the government tried to stop the flow of information, tend to illustrate government interference. But how could governments try to improve societal verification? To what extent could government efforts help with challenges such as participation and validation? Would certain types of government be favored to organize societal verification?

  5. I think everyone has brought up really important points so far about the implementation issues that societal verification currently faces, especially in terms of an oppressive government’s ability to restrict access to technology and social media. I would like to expand on the latter point by exploring the possibilities of public participation in such programs even without government constraints. Even if a country is not restrictive in its policies, it is not a given that the public would participate in such activities. It is very possible that a country’s citizens could support its nuclear program and might actually be in favor of breakout or of maintaining secret stockpiles of weapons. If enough individuals favor such policies, this could cause a lack of reporting, leading to a failure of the societal verification process. In addition, even if only a small number of citizens are supportive of the government’s nuclear activities, this minority could be enough to disrupt attempts at verification. As Elliot has pointed out, Hinderstein and Hartigan discuss the problems that the release of false or misleading information can cause for validation (7). These “disinformation campaigns” do not necessarily have to be government-endorsed, but could instead be unofficial in nature, with private citizens taking it upon themselves to undermine the endeavors of those who are trying to report the truth. Therefore, even a lack of restrictions is no guarantee of accurate reporting, if reporting takes place at all.

  6. Given the numerous implementation issues that have been mentioned already, I do not believe that societal verification that involves people actively looking for indicators or patterns of treaty and agreement violations (data gathering) will ever become a trusted verification model. It is one thing to identify specific red balloons, as in the DARPA project; it is an entirely different thing to have the public look for ambiguous markers that may or may not represent an infraction. The sustainability of such verification projects would also be questionable given the amount of effort they require. Personally speaking, I would be willing to participate in such a societal verification program, but my interest and commitment would decrease over time.

    On the other hand, I think there is much potential for societal verification in terms of data analysis rather than data gathering. If governments can gather raw data from satellites and other means, the computing power of the public can be put to work on it. This requires minimal effort from participants, since it simply uses a computer’s idle time to run calculations. The approach is already used for numerous science projects, such as SETI@home (you might discover signs of extraterrestrial life on your computer) and World Community Grid. Since computers are doing the analysis, this eliminates the chance of cross-corroborating false reports. Furthermore, this type of analysis system is beneficial because the data is parceled out into tiny pieces, so one cannot reconstruct potentially sensitive data merely by being involved in the project.
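    As a rough sketch of this parceling idea (not the actual BOINC or SETI@home interface; all names and sizes are hypothetical), imagery could be cut into small shuffled tiles and issued redundantly, so that no volunteer holds a meaningful swath of data and faulty or dishonest clients are caught by agreement checks:

    ```python
    import random

    def make_work_units(image, tile=256):
        """Cut a 2-D pixel array (a list of rows) into small tiles so no
        single volunteer ever sees a meaningful swath of imagery."""
        h, w = len(image), len(image[0])
        units = []
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                patch = [row[x:x + tile] for row in image[y:y + tile]]
                units.append({"id": f"{y}_{x}", "pixels": patch})
        random.shuffle(units)  # strip spatial ordering before handing out
        return units

    def assign(units, volunteers, redundancy=2):
        """Issue each unit to several volunteers; agreement between their
        results is the usual guard against bad clients.
        Assumes len(volunteers) >= redundancy."""
        return [(v, u["id"]) for u in units
                for v in random.sample(volunteers, redundancy)]
    ```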

  7. Since I agree with much of what has been said regarding the challenges of verification via crowdsourcing, it would be beating a dead horse to repeat it. I do think, however, that it is important to point out that verification is not the only security-related field that could benefit from social networks and the widespread availability of information technology. Incentivizing people to verify arms control policies may be difficult and fraught with uncertainty or even danger, but the same system would probably work with greater facility in preventing domestic terrorism. An easy method of submitting information (an app, website, email address, or number to text) would be a powerful tool for combining many eyes and ears to discern suspicious activity.

    If, for example, repeated patterns of individuals acting oddly were reported at times and in locations close together, the relevant agency could investigate a potential crime or attack that would not have been foreseen if the individual sightings had been isolated incidents. I have often seen the signs at airports and train/bus stations that say “If you see something, say something,” and wondered if there is any good way of coordinating those reports. By placing crowdsourced reports in a centralized database, perhaps by category, that kind of coordination may become more plausible.
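    A minimal sketch of how such a centralized database might correlate otherwise-isolated reports (thresholds purely illustrative; a real system would use a proper geospatial index rather than this naive O(n²) pass):

    ```python
    from math import radians, sin, cos, asin, sqrt

    def km_between(a, b):
        """Great-circle distance between two (lat, lon) points, in km."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(h))

    def correlate(reports, max_km=2.0, max_hours=6.0, min_reports=3):
        """Group sighting reports that fall close together in space and
        time; clusters above the threshold go to a human analyst.
        Each report is (timestamp_in_hours, (lat, lon))."""
        clusters = set()
        for t1, p1 in reports:
            nearby = tuple(sorted(
                i for i, (t2, p2) in enumerate(reports)
                if abs(t1 - t2) <= max_hours and km_between(p1, p2) <= max_km))
            if len(nearby) >= min_reports:
                clusters.add(nearby)
        return [list(c) for c in clusters]
    ```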

  8. In response to question 3, I would not feel comfortable taking part in a societal verification program, as any sort of reporting could (rightly) count as espionage, especially since, as many other commenters have mentioned, the places where societal verification is most desperately needed have the most authoritarian and oppressive governments. While societal verification is an interesting new tool, it is not a revolutionary or trustworthy verification method, and it needs a lot of work to become even a plausible idea, for reasons previously discussed in other posts. I also feel that the online privacy controversy would, rightly or wrongly, dissuade people from reporting, especially to a centralized authority, whether for fear that the reporting could be turned against them or out of simple distrust of government-sponsored surveillance. The ideal use would be passive, such as the radiation detectors in cell phones the professor mentioned in class. Yet this raises its own privacy concern: if the UN or US were found to have been, say, forcing cell phone manufacturers to make phones surreptitiously start video recording and upload the footage to a centralized database upon detecting significant amounts of radiation, there would surely be outrage. Essentially, the main difficulty with the process is that so many aspects of the program are as yet unquantified, and thus no meaningful conclusion can be drawn from them.

  9. It seems like the thrust of this post is to suggest that although crowdsourcing has some potential to reduce the risk of nuclear proliferation, there are significant challenges to making use of this technology. Though all of the challenges named by the author are certainly present, I think that with crowdsourcing we are ultimately limited only by our imaginations. True, it is difficult to imagine a scenario in which a country would allow anyone close enough to a nuclear facility for an anti-proliferation-equipped smartphone to take meaningful measurements, but that limitation doesn’t necessarily render smartphones or crowdsourcing impotent.

    One innovation mentioned in the societal verification reading that has a lot of potential is the addition of a Geiger counter to smartphones. At the very least, Geiger counters would enable authorities to detect spikes in radiation that could occur before the detonation of a dirty bomb or a true nuclear device. Theoretically, phones with Geiger counters could also be used to track the vector and magnitude of shipments of fissile material. This is only one application, and I think there are many more to be found.
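    As a sketch of the spike-detection idea (hypothetical data and thresholds), readings aggregated from many phones in an area could be compared against a rolling baseline:

    ```python
    from statistics import mean, stdev

    def flag_spikes(readings, window=48, threshold=4.0):
        """readings: time-ordered, area-averaged dose rates (hypothetical
        units) aggregated from many phones. Flag points that sit more
        than 'threshold' standard deviations above the rolling baseline.
        A real network would also have to handle calibration drift and
        spoofed sensors."""
        alerts = []
        for i in range(window, len(readings)):
            base = readings[i - window:i]
            mu, sigma = mean(base), stdev(base)
            if sigma > 0 and (readings[i] - mu) / sigma > threshold:
                alerts.append((i, readings[i]))
        return alerts
    ```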

  10. I agree that using crowdsourcing to prevent domestic terrorism is more practical than incentivizing people to verify weapons. The bigger problem is a human issue, not a technological one. We may have better, faster, more powerful monitoring and analysis technology, but the technical capability has always exceeded its use. An example of crowdsourcing closer to home is the honor code: it is our duty to report those we see cheating, but technological change has led to more take-home exams and Internet submissions, making it harder to catch a fellow student cheating. I think governments would find new ways to hide their plans. I do not believe that societal verification can overcome its challenges to become a trusted verification method.

  11. I don’t believe that societal verification will ever be a stand-alone verification method (to be fair, single-source intelligence is avoided by any agency or individual that has been properly trained), but it could provide an interesting method for collecting open-source intelligence. As Zach Ogle mentioned below, the radiation detectors (which remind me of the sonar phones from “The Dark Knight”) suggest that the uses are limited only by our imagination. Many of the commentators have hit upon the challenges of asking individuals to spy on their nation and of counter-intelligence operations designed to disguise and deceive, but given the tendency of individuals with smartphones to film or otherwise record events, and the pre-existing ability of our intelligence agencies to access data, I could certainly see societal verification being a project worth pursuing. While ordinary citizens are unlikely to notice a missile silo being built in the middle of the woods, they could certainly be valuable in denser areas or in situations such as the military buildup along the Ukrainian border or a military parade where the latest technology is showcased. Again, this would not replace other sources of intelligence, but it could supplement those sources while they are being mobilized or re-tasked toward the situation.

  12. In regard to the efficacy of societal verification technologies and techniques, it is clear that though the idea of using new media to expedite and develop the verification process is unique and harbors enormous potential, numerous obstacles stand in the way of actually employing societal verification for arms control. First, it is difficult both to incentivize and to train everyday citizens and social media users to effectively identify, track, and report shifts or outliers at nuclear weapon facilities and in individual nuclear warheads. This is particularly problematic in authoritarian states, where not only does the incentive to report and be on the lookout for signals or situations of concern not exist, but the very governments we are trying to verify more accurately are the ones instituting penalties and threatening to punish citizens who publicize or report sensitive and confidential information. Furthermore, from a simple day-to-day standpoint, citizens generally lack the will and knowledge necessary to report particular arms control violations. For verification agencies to sort through the inaccurate reporting and tracking of all citizens using media to help detect potential or ongoing violations of multilateral arms treaties is a daunting, almost impossible task.

    Societal verification should never be the only mechanism we pursue to ensure states’ compliance with arms treaties; however, it is useful to think about what role crowdsourcing and society can play in conjunction with traditional enforcement mechanisms such as on-the-ground supervisors and inspections. This is not to say that societal verification is not worth pursuing, but rather that while the State Department has ambitious initiatives to equip arms control and safeguards inspectors (and potentially, in the future, the everyday citizen) with smartphones and tablets linked to one central application, we must also remember that this will most likely decrease the quality of the information we receive.

  13. There are clearly many pressing challenges presented by the employment of nontraditional stakeholders as a means of verification, most of which have been discussed in some capacity below. One challenge, however, that was not mentioned by Hinderstein and Hartigan and that I find worth discussing (it was briefly raised by mremick below) is the issue of civilian paranoia. It was one of the first things that occurred to me as I imagined a world where normal citizens are asked to be aware of, or searching for, anything that looks like a policy violation. (This is similar to a hypothesized trend in the realm of disease control, wherein large-scale vaccination as a purely preventative measure, even when bioterrorism does not seem very likely, could lead to public panic.) There is a level of comfort that “normal” citizens derive from the sense that, despite the known potential for nuclear or other forms of terrorism and warfare, government agencies have it under control. But as soon as a federal institution approaches citizens and asks them to report suspicious behavior, that comfort is lost, and the sense of responsibility at the individual level is dramatically increased, potentially leading to many false positives, one of the clearly outlined downfalls of societal verification.

  14. I agree with much of what has been stated below regarding the potential issues of using societal verification as a means to verify international weapons treaties. Although the potential efficacy of this type of program was demonstrated by the DARPA red balloon experiment, one major confounding factor in that experiment was the financial incentive offered to the participants. Without this motivation, it is unlikely that the participants would have taken on the project, as the time investment would not have been worthwhile without a reward. The same issue would arise if a societal verification program were rolled out on a large scale. Given the work that would likely be necessary for such a program to be effective, some financial compensation would likely be needed to motivate citizens to participate. Without this incentive, the program would run into a major collective action problem. In theory, every one of us would benefit from the results of this program (i.e., increased security and safety) regardless of the amount of effort we put in, since safety from the type of security threat this program seeks to prevent is a public good shared more or less equally by all citizens. Therefore, there is a strong incentive to free-ride, that is, to simply sit back and reap the rewards while others do all the work. Because this same free-riding incentive exists for everyone, it is likely that very little work would actually be done to support the program, and individuals would generally choose to enjoy the public good without contributing to it. Given this problem, it would be necessary to provide financial incentives for private citizens to participate. At that point, however, based on the costs of the program, it would likely be more efficient to simply hire trained inspectors to do the work that would otherwise have been done by society, as their ability to specialize in this type of work will lead to better verification results.
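    The free-rider logic can be made concrete with a toy public-goods calculation (numbers purely illustrative):

    ```python
    def payoff(contributes, others_contributing, n=100,
               benefit_per_contribution=2.0, cost=1.0):
        """Toy public-goods payoff: each contribution creates
        'benefit_per_contribution' units of shared security value, split
        equally among all n citizens, but costs the contributor 'cost'."""
        total = others_contributing + (1 if contributes else 0)
        share = total * benefit_per_contribution / n
        return share - (cost if contributes else 0.0)

    # Whatever the others do, abstaining pays more whenever each
    # contributor's own share of the benefit (2/100) is below the cost (1):
    print(payoff(True, 50))   # 51 * 2/100 - 1 = 0.02
    print(payoff(False, 50))  # 50 * 2/100     = 1.00
    ```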

  15. I think you hit the nail on the head when you mentioned the concern that reporting on nuclear activity could be considered treason by a particularly authoritarian regime. Reporting your own country’s military information would almost certainly be considered espionage regardless of the country, but in authoritarian regimes this would realistically pose grave danger for the people involved. I share the general concern about effectiveness in countries such as North Korea, where people won’t even have access to such sensitive information (if any information at all), but before I even get there, I worry about the potential danger these programs could put citizens of these countries in. While I understand the goals this surveillance aims for, the risks associated with it may not be worth the benefits, especially given the limitations you hinted at in question 1.

    Furthermore, there seems to be a real issue of privacy even in countries with relatively more freedom, if people’s online privacy is threatened. With as much concern as has surrounded the recent NSA revelations and other forms of government surveillance, I question how eager people would be to take it upon themselves to “spy” for the government within the United States, especially if they felt it could come back to bite them at some later date. Overall, the theoretical surveillance seems to have far too many potential negative side effects, and its effectiveness has yet to be proven in any quantifiable way. I think the danger of abuse by authoritarian regimes should be an immediate red flag and progress should be made with caution, but I also suspect that the “recent spate” of online privacy concerns you mentioned will simply make it an unpopular proposition.

  16. I agree with all of what @juan_garavito outlined below, particularly the sentiment that the possibility of reporting being deemed treason is one of the main strikes against this method. I would argue that even in countries with less outwardly authoritarian governments, this danger still looms. Even in a country like the United States, we are quickly learning the extent to which certain types of conversations are monitored by organizations like the NSA. For citizens of any nation, sharing information about their country’s military behavior over any sort of communication medium involves a degree of risk. I suspect that few citizens would be willing to assume that burden and risk potential penalization at a later date.

    I also continue to be concerned about the potential for false information to spread through crowdsourcing media. If a non-state terrorist group were to learn that a government was in fact gathering information through crowdsourcing, it would not be impossible for them to infiltrate this medium and plant misleading information. Anonymity might give some citizens the courage to risk “treason” and report on their governments’ activities, but it would also empower non-state terrorist groups, giving them a new venue through which to infiltrate government security networks and cause harm.

  17. I believe the sentiments about the unreliability and weaknesses of crowdsourced information for arms verification point to a significant setback for any purely crowdsourced means of verification. However, I do think there is significant potential in a hybrid approach that uses the increased availability of information in tandem with less-trained, “less-skilled inspectors” to provide a more effective method of nuclear verification. As you mentioned, restrictions on information flow by governments, the potential to label informants as treasonous, and the ability to mount counter-information campaigns pose significant and not easily solved problems for pure crowdsourcing like that of the red balloon challenge. But, going back to the potential we have seen in Skybox imaging, there may still be a way to leverage vast amounts of data and somewhat-trained individuals to supplement the typical inspection process. If, for example, we could train people to identify shifts, patterns, and outliers at facilities, we could use less-skilled individuals to sort through vast amounts of information in support of current efforts. Likewise, in areas covered by treaties, key connections could be made to relay the activities of nuclear development and armament facilities, as well as activity at sites like uranium mines. This could mean monitoring things like energy-usage statistics, water flow, pizza orders, etc., to see whether facilities were experiencing increased activity.

    That said, most of these methods differ strongly from pure societal verification, in which ordinary citizens would pass on information. I think, though, that given the gravity of something like nuclear armament, a country that wanted to could easily exploit the flaws in a purely social method, making it both unreliable and potentially risky. As we have seen with drone strikes in Afghanistan, people may report an illegitimate threat for personal gain rather than provide truthful information. Thus, as a pure method, I am not sure societal verification would be effective, but there is significant potential in leveraging large amounts of data with somewhat-trained individuals to streamline the normal verification process.

  18. Going off jameson’s point, the idea of utilizing the public to process and investigate raw data is intriguing. We have most recently seen this process in the search for Malaysia Airlines flight 370: for example, the firm DigitalGlobe made available some 9,000 square miles’ worth of images for public review. (http://time.com/28332/missing-jet-crowd-sourcing-project/) Of course, the recency of the event, coupled with intense media coverage, probably played a role in incentivizing interested individuals to contribute. Furthermore, perhaps this crowdsourcing project has not been as successful as hoped because debris may no longer be floating on the ocean surface.

    However, nuclear missiles and other military buildups remain on land and can be tougher to camouflage. If individuals are able to access such information (images, maps, etc.) via a neutral third party (e.g., a UN regulatory/inspecting agency), then the crowdsourcing model could be used to verify existing commitments and identify possible trouble areas. Along the lines of such a model, a system of “up-voting” the most promising leads could be used to focus investigation for trained inspectors, who should be doing the final analysis of data in any case. Incentivizing individuals to participate may be more difficult; furthermore, it would be best to have many participants rather than a few, since a small number of participating individuals may not lead to good results (especially if a select few are pursuing an agenda).
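    A rough sketch of such an up-voting queue (hypothetical scheme), with each voter’s influence capped so that a small bloc pursuing an agenda cannot flood the list handed to inspectors:

    ```python
    from collections import Counter, defaultdict

    def rank_leads(votes, per_voter_cap=5):
        """votes: list of (voter_id, lead_id) up-votes, in arrival order.
        Returns candidate leads ranked for inspector review, counting at
        most 'per_voter_cap' votes from any single voter."""
        spent = Counter()          # votes already counted per voter
        score = defaultdict(int)   # capped score per lead
        for voter, lead in votes:
            if spent[voter] < per_voter_cap:
                spent[voter] += 1
                score[lead] += 1
        return sorted(score, key=score.get, reverse=True)
    ```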

  19. Crowd-sourcing has proven itself to work in a variety of cases. Wikipedia is an example of when it works: multiple participants, each with their own expertise, come together and contribute their knowledge. Wikipedia has also proven to be relatively self-maintaining: blatantly wrong or “troll” information is removed quickly by other editors, and more subtle or obscure errors, misconceptions, and debatable claims are slowly weeded out as well. Likewise, in journalism I have seen many cases where a casual reader impressively debunks an article’s entire argument or evidence, gets voted up, and eventually causes the whole article to be taken down for factual error.

    I’m not sure arms control and verification is one of those cases. As mentioned, crowd-sourcing relies heavily on the incentive and interest of its participants. In the case of Wikipedia, the majority consists of editors truly dedicated to spreading their knowledge, so there is always a community on hand to update pages and weed out errors. It is a different matter to distribute “work” to the general population: even when said “work” is in their interest (national safety), few people may show interest (witness the many “beware of suspicious luggage” signs ignored by weary pedestrians). Meanwhile, crowd-sourcing can instead attract unwanted groups of people, such as terrorists who create a hack that sends a high number of alerts in a particular area as a false lead.

  20. I think the Wikipedia example that Tianyuan brings up is interesting and very important. Even though you don’t have to “sign in” or show some sort of identification to be an editor of a Wikipedia article, the system more or less works; in other words, anonymity isn’t really a big problem that hinders Wikipedia from functioning as well as it does. Although there is a group of “official editors” who try to monitor the content, the sense I get is that Wikipedia is more about a community of (average) people who are interested in and dedicated to creating a free and comprehensive knowledge base. As Tianyuan points out, this form of crowdsourcing is not suitable for arms control and verification. Anonymity, or the inability to verify a contributor’s true identity, is the problem that first jumps out at me: unless there is a mechanism for checking a person’s true ID, the content that person provides cannot be trusted. Implementing such a mechanism isn’t difficult, but it would also defeat the whole purpose of harnessing the power of crowdsourcing. Like many of the posts below, I see a lot of potential in using societal verification for arms control; before it is put to use, however, several critical challenges need to be addressed.
